Installing and Configuring Logstash to Ingest Fortinet Syslogs

Ali Younes
10 Jun 2022 · 24:54

Summary

TL;DR: This tutorial video guides viewers through installing and configuring Logstash to send syslogs from a FortiGate firewall to Elasticsearch. Building on a previous video, the presenter demonstrates setting up the Logstash input, filter, and output plugins, including parsing and enriching the data. The video also covers troubleshooting steps, such as opening the firewall port and fixing certificate permissions, to ensure a smooth data flow into Elasticsearch for indexing and visualization in Kibana.

Takeaways

  • 😀 The video is a tutorial on installing and configuring Logstash to send syslogs from a firewall to Elasticsearch.
  • 🔧 It is part two of a series; the previous video covered installing Elasticsearch and Kibana and configuring them to work together.
  • 💻 The presenter uses Oracle Linux and installs OpenJDK as a prerequisite for Logstash, which requires Java 11 or 17 (the command shown, `yum install java-1.8.0-openjdk`, names an older package).
  • 📝 Logstash is introduced as an open-source data collection engine that can enrich data, for example with GeoIP locations.
  • 📑 The presenter sets up Logstash from a `yum` repository, importing Elastic's public signing key.
  • 🔌 The video demonstrates configuring the firewall to send syslogs to Logstash, specifying the Logstash server's IP address and port 5144.
  • 📑 The Logstash configuration file is detailed, explaining the input, filter, and output plugins needed to process the syslog data.
  • 📚 The `grok` filter is highlighted for parsing syslog messages into structured data.
  • 🛠 The `mutate` filter is used to clean up the log data by removing unnecessary fields such as the original syslog priority value.
  • 🗓️ The `kv` filter is introduced to parse key-value pairs from the log messages, and a `date` filter converts log timestamps into a format Elasticsearch can index on.
  • 🔒 The video concludes by securing transmission to Elasticsearch with SSL, including copying the CA certificate and setting appropriate file permissions.
  • 📊 Finally, the presenter shows how to view the ingested data in Kibana and mentions plans to build dashboards for visualization.

Q & A

  • What is the purpose of the video?

    -The purpose of the video is to demonstrate the installation and configuration of Logstash to send syslogs from a firewall to Elasticsearch, which is a continuation of a previous video where Elasticsearch and Kibana were installed and configured.

  • What are the prerequisites for installing Logstash as mentioned in the video?

    -The prerequisite for installing Logstash is Java, specifically JDK 11 or JDK 17; the documentation the presenter follows references JDK 17, though the command run in the video installs an older OpenJDK package.
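
For reference, here is a hedged sketch of that step on an RPM-based system such as Oracle Linux. The `java-17-openjdk` package name is an assumption (the command actually shown in the video references `java-1.8.0-openjdk`):

```sh
# Install OpenJDK 17 to satisfy Logstash's Java 11/17 requirement
# (package name assumed; adjust to what your distribution provides)
sudo yum install -y java-17-openjdk

# Confirm the version Logstash will find
java -version
```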

  • How does the video demonstrate the installation of Logstash?

    -The video demonstrates the installation of Logstash by using the yum repository, downloading the public signing key, creating a repository file, and then executing the 'yum install logstash' command.
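
A sketch of those steps, mirroring Elastic's published yum instructions (the `8.x` repository paths are an assumption about the Logstash version used in the video):

```sh
# 1. Import Elastic's public signing key
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

# 2. Create the repository file
sudo tee /etc/yum.repos.d/logstash.repo <<'EOF'
[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

# 3. Install Logstash
sudo yum install -y logstash
```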

  • What is the default port used for syslog in the video?

    -The video uses port 5144. The standard syslog port is 514, but ports below 1024 are privileged and would require running Logstash with elevated permissions, so the presenter listens on 5144 instead.

  • How does the video handle the firewall configuration for sending syslogs?

    -The video shows how to log in to the firewall, open its CLI, enable the syslog server, set the syslog server's IP address to that of the Logstash server, and set the port to 5144.
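
On a FortiGate this is done from the CLI. A minimal sketch, assuming the standard `config log syslogd setting` command tree (the IP address is a placeholder, and `syslogd` is the first of the four available syslog server slots):

```
config log syslogd setting
    set status enable
    set server "192.0.2.10"
    set port 5144
end
```

The video leaves the log format at its default; CSV, CEF, and RFC 5424 formats are also available.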

  • What are the three main components of Logstash configuration files?

    -The three main components of Logstash configuration files are the input plugin, filter plugin, and output plugin.
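
In a pipeline file the three sections sit side by side. A minimal skeleton (the plugin names in the comments are examples, not requirements):

```
input {
  # where the data comes from (e.g. udp, file, beats)
}

filter {
  # transformations applied to each event (e.g. grok, mutate, kv, date)
}

output {
  # where events are sent (e.g. elasticsearch, stdout)
}
```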

  • What is the purpose of the 'grok' filter plugin used in the video?

    -The 'grok' filter plugin is used to parse arbitrary text and structure it, allowing the user to define patterns for extracting data from logs.
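
Based on the pattern described in the video (a syslog RFC 5424 priority such as `<190>` followed by the rest of the line), the filter would look something like this sketch. `SYSLOG5424PRI` and `GREEDYDATA` are standard grok patterns:

```
filter {
  grok {
    # Strip the leading syslog priority and capture the remainder
    match => { "message" => "%{SYSLOG5424PRI}%{GREEDYDATA:message}" }
    # Replace the original message field with the captured remainder
    overwrite => ["message"]
  }
}
```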

  • How does the video address the issue of viewing the data before applying filters?

    -The video suggests sending the output to the console (stdout) initially to view the data before applying any filters for further processing.
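
A sketch of that temporary output section:

```
output {
  # Print each event to the console for inspection
  stdout { codec => rubydebug }
}
```

With the config saved, the video runs Logstash in the foreground (the paths here assume the default RPM install locations); the presenter later adds `-r` so the pipeline reloads automatically whenever the config file changes:

```sh
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/fortigate.conf
```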

  • What is the significance of setting the correct permissions for the certificate in the video?

    -Setting the correct permissions for the certificate is crucial for the Logstash service to run without issues, as it ensures that the service can access and use the certificate for secure communication with Elasticsearch.

  • How does the video demonstrate troubleshooting steps for Logstash?

    -The video demonstrates troubleshooting by showing the process of giving the correct permissions to the certificate file, changing the file owner to 'logstash', and then successfully restarting the Logstash service.
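
Sketched from the description above (the certificate path under `/etc/logstash/` is an assumption; adjust it to wherever the CA certificate was copied):

```sh
# Let the logstash service user read the CA certificate
sudo chmod 660 /etc/logstash/http_ca.crt
sudo chown root:logstash /etc/logstash/http_ca.crt

# Restart Logstash and confirm it is running
sudo systemctl restart logstash
sudo systemctl status logstash
```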

  • What is the final step shown in the video for confirming that data is being sent to Elasticsearch?

    -The final step shown in the video is logging into Kibana, navigating to the 'Discover' section, and confirming that the data from the firewall is being displayed in real-time.

Outlines

00:00

🔧 Setting Up Logstash for Firewall Syslogs

The video begins with an introduction to the task of installing Logstash and configuring it to receive syslogs from a firewall and send them to Elasticsearch. This is a continuation of a previous tutorial where Elasticsearch and Kibana were set up. The speaker provides a brief overview of their setup and mentions the need for Java 11 or 17 for Logstash. They demonstrate the installation process using the YUM repository and public signing key, and proceed to configure the firewall to send syslogs to the Logstash server on port 5144. The video also covers the basic components of a Logstash configuration file, which includes input, filter, and output plugins.

05:02

📑 Configuring Logstash Input and Testing with Stdout

In this section, the speaker delves into configuring the Logstash input plugin, specifically using the UDP input plugin to receive syslog messages from the firewall. They explain the importance of setting the correct host and port, and then proceed to test the setup by directing the output to the standard output (stdout) for initial parsing. The speaker also discusses the necessity of opening the specified port in the firewall settings and demonstrates how to start Logstash with the specified configuration file. The video shows the raw syslog messages being received, setting the stage for further filtering and processing.
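
A sketch of that input section, using the host and port settings of the UDP input plugin (the host IP is a placeholder for the address Logstash listens on), followed by the firewalld commands for opening the port:

```
input {
  udp {
    host => "192.0.2.20"   # IP the Logstash server listens on (placeholder)
    port => 5144           # chosen to avoid privileged ports below 1024
  }
}
```

```sh
# Open the UDP port in firewalld and verify
sudo firewall-cmd --add-port=5144/udp --permanent
sudo firewall-cmd --reload
sudo firewall-cmd --list-all
```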

10:03

🌐 Filtering and Parsing Firewall Logs with Logstash

The speaker continues by adding filters to the Logstash configuration. They start with the grok filter to parse the syslog message into a structured format, using a specific pattern to extract the syslog priority and message. The video illustrates how to use the grok debugger and apply the filter to the incoming logs. Subsequently, the mutate filter is introduced to clean up the log data by removing unnecessary fields. The speaker also explains how to use the kv filter to split the log message into individual fields and the subsequent steps to refine the log data further, including removing the original message and adding a new 'log date' field for better timestamp management.
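
Sketched from the description above. FortiGate logs are space-separated `key=value` pairs, which is what the kv filter splits on by default; the `date` and `time` field names come from the log itself, and `log_date` is the helper field name chosen in the video:

```
filter {
  # Split the remaining message into individual key=value fields
  kv { }

  mutate {
    # Drop the raw message now that its fields are parsed out
    remove_field => ["message"]
    # Combine the log's date and time fields for the date filter
    add_field => { "log_date" => "%{date} %{time}" }
  }
}
```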

15:03

⏰ Timestamp Conversion and Finalizing Logstash Filters

This part of the video focuses on converting the 'log date' field into a proper timestamp that Elasticsearch can use. The speaker describes using the date filter plugin to parse the date and time into a structured format. They also discuss removing the now-redundant 'log date', 'date', and 'time' fields once the timestamp has been created. Additionally, the speaker converts the 'received bytes' and 'sent bytes' fields to integers so that Elasticsearch can perform mathematical operations on them. The section concludes with a cleaner log structure ready for output to Elasticsearch; a sketch of these steps follows.
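
The date pattern below assumes FortiGate's `yyyy-MM-dd` and `HH:mm:ss` values; the timezone is a placeholder, and `rcvdbyte` and `sentbyte` are assumed to be the byte-count field names in the default FortiGate log format:

```
filter {
  date {
    # Parse "2022-06-10 12:34:56" into @timestamp
    match    => ["log_date", "yyyy-MM-dd HH:mm:ss"]
    timezone => "America/New_York"   # placeholder; set your own zone
    target   => "@timestamp"
  }

  mutate {
    # Remove the helper fields once @timestamp exists
    remove_field => ["log_date", "date", "time"]
    # Store byte counters as integers so Elasticsearch can do math on them
    convert => { "rcvdbyte" => "integer" "sentbyte" => "integer" }
  }
}
```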

20:06

🔗 Sending Parsed Logs to Elasticsearch and Verifying with Kibana

The final segment of the video script covers configuring the Logstash output plugin to send the parsed log data to Elasticsearch. The speaker details the necessary Elasticsearch output plugin settings, including specifying the hosts, index name, user authentication, and enabling SSL with a CA certificate. They demonstrate how to copy the certificate from the Elasticsearch node and set the appropriate file permissions for Logstash to access it. After making these configurations, the speaker shows how to start Logstash as a service and verify that the logs are being ingested into Elasticsearch using Kibana. They also troubleshoot a permissions issue with the certificate and demonstrate the final, successful log data stream into Elasticsearch, which can now be visualized in Kibana.
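
A hedged sketch of that output section (the host, index name, credentials, and certificate path are placeholders; `ssl` and `cacert` are the option names in the 8.x elasticsearch output plugin):

```
output {
  elasticsearch {
    hosts    => ["https://192.0.2.30:9200"]   # Elasticsearch node (placeholder)
    index    => "firewall-%{+YYYY.MM.dd}"     # daily index, as in the video
    user     => "elastic"                     # placeholder credentials
    password => "changeme"
    ssl      => true
    cacert   => "/etc/logstash/http_ca.crt"   # CA cert copied from the node
  }
}
```

The video copies `http_ca.crt` over from the Elasticsearch node with `scp` before enabling this output.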


Keywords

💡 Logstash

Logstash is an open-source data collection engine that is part of the Elastic Stack. In the video, it is used to ingest and process syslogs from a firewall, which are then sent to Elasticsearch for indexing and analysis. The script describes the process of installing Logstash, configuring it to receive data, and setting up filters to parse and transform the data.

💡 Elasticsearch

Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. In the context of the video, it is the destination where Logstash sends the processed data for indexing. The video mentions configuring Elasticsearch to work with Logstash and Kibana, which are part of the same stack.

💡 Kibana

Kibana is an open-source data visualization plugin for Elasticsearch. It is used for creating visualizations and dashboards to understand and interact with the data stored in Elasticsearch. The script refers to Kibana when discussing how to visualize the data that has been ingested from the firewall via Logstash.

💡 Syslog

Syslog is a standard for message logging, used widely on Unix and Unix-like operating systems. In the video, the term 'syslogs' refers to the logs generated by the presenter's firewall, which are being sent to Logstash for further processing and eventual storage in Elasticsearch.

💡 FortiGate

FortiGate is Fortinet's line of network security appliances providing firewall, antivirus, and other security services. The script covers setting up syslog on a FortiGate device, indicating that the firewall logs from this device are the source of the data being ingested into Logstash.

💡 Java

Java is a high-level, class-based, object-oriented programming language that is widely used in enterprise environments. The video script specifies that Java 11 or Java 17 is required to run Logstash, and the presenter installs OpenJDK to meet this requirement.

💡 YUM Repository

YUM (Yellowdog Updater, Modified) is a package management tool for Linux. In the script, the presenter uses the YUM repository to download and install Logstash on their Oracle Linux server, demonstrating a common method for software installation on Linux systems.

💡 Configuration File

A configuration file in the context of the video refers to the file where the settings for Logstash are defined, such as input sources, filters, and output destinations. The script describes editing the Logstash configuration file to specify how data should be received and processed.

💡 Grok Filter

The Grok filter in Logstash is used for parsing unstructured data into structured data. The script mentions using the Grok filter to parse the syslog messages, allowing for better structure and easier querying in Elasticsearch.

💡 Mutate Filter

The Mutate filter in Logstash is used for transforming data, such as renaming, converting data types, removing, or adding fields. In the video, the presenter uses the Mutate filter to clean up the log data by removing unnecessary fields and creating a new timestamp field.

💡 Date Filter

The Date filter in Logstash is used to parse and convert date strings into a standard format that can be used by Elasticsearch. The script describes using the Date filter to create a proper timestamp from the log date and time fields for indexing in Elasticsearch.

💡 Elasticsearch Output Plugin

The Elasticsearch output plugin in Logstash is used to send the processed data to an Elasticsearch cluster. The script details configuring this plugin with the necessary settings such as hosts, index name, and security features like SSL and authentication.

Highlights

Introduction to installing Logstash for sending firewall syslogs to Elasticsearch.

Part two of a previous video series on setting up Elasticsearch and Kibana.

Requirement of Java 11 or Java 17 for Logstash installation.

Installation of Java 1.8.0 OpenJDK on Oracle Linux.

Downloading and installing Logstash using the YUM repository.

Configuring the FortiGate syslog server to send logs to Logstash.

Explanation of Logstash components: input, filter, and output plugins.

Setting up Logstash to receive UDP messages from FortiGate on port 5144.

Initial testing of Logstash configuration with stdout output.

Using the grok filter to parse syslog messages.

Utilizing mutate filter to clean and remove unnecessary fields from logs.

Introduction of the kv filter for parsing key-value pairs in log messages.

Creating a timestamp field from log date and time for Elasticsearch indexing.

Converting received and sent bytes to integer for data analysis.

Configuring the Elasticsearch output plugin for near real-time search and analytics.

Securing Elasticsearch connection with SSL and CA certificate.

Verification of data ingestion in Kibana's Discover feature.

Troubleshooting permissions for the CA certificate for Logstash service.

Final confirmation of successful data streaming from FortiGate to Elasticsearch.

Transcripts

00:00

Hello everyone, welcome to my channel. In this video I'm going to be installing Logstash and configuring it so that I can send the syslogs of my firewall to Elasticsearch. This is part two of another video, where I installed Elasticsearch and Kibana and configured them; I will post the link to that video in the description. Now we need to send the data to Elasticsearch from my firewall. So first I'll show you my setup here. Last time I installed Elasticsearch and Kibana and got them to talk to each other, and now we need to send FortiGate syslogs to Logstash, do some filtering there, and then send them on to Elasticsearch to be indexed.

00:53

So right here on their website you can look at the Logstash introduction. It's basically an open-source data collection engine, and I'm going to use Logstash so that I'm able to do things like enriching the data with, for example, GeoIP locations. I'm going to be doing that later on.

01:17

If we go to Getting Started with Logstash, it tells us we need Java; you need Java 11 or Java 17, using JDK 17. So I want OpenJDK; the install instructions are here at the top. I have Oracle Linux, so I run the command yum install java-1.8.0-openjdk. This is my server here; I already ran this command and installed Java, so it should not do anything right now. Okay, nothing to do.

01:55

I'm going to go back to the Logstash documentation, to Installing Logstash. I'm going to be using the yum repository, so I'm downloading and installing this public signing key; just copy-paste this command. I will now create a file for this repository, called logstash.repo. Oh, I have this from before, so basically I just copy-pasted this information here, and all I have to do is say yum install logstash... and it will start installing right now.

02:53

I'm going to go to my firewall, log in, and see how to set up the syslog. Open the command prompt from here: config log syslogd. With FortiGates you can set up four syslog servers; I'm just going to use the first one, and I have those settings ready: status enable; server, the IP address of the syslog server, which in my case is going to be the Logstash server; and the port, we are sending to 5144. If we look at the full configuration here, you can see all the other default values. For example, the format is still the default; you can use other formats, CSV or CEF or RFC 5424, but I'm just going to keep it on the default format.

04:00

Okay, let's see where it is at... and it is finished. We're going to jump right away to the configuration file now. So nano, let's see, /etc/logstash/conf.d; I have a firewall.conf from before, so this time I'm going to call it fortigate.conf. Okay, and with Logstash we have three main components in the configuration files: the input plugin, the filter plugin, and the output plugin. Input is where the data is coming from; filter is where you add filters to basically customize how the data looks; and then output sends it to wherever you want, in our case to Elasticsearch.

05:02

So I'm going to start with the input, and I'm going to open the documentation here. We can see the input plugins; there are many of them. File... I've worked with this one... and I'm going to go to the UDP input plugin: "read messages as events over the network via UDP". From the settings here I'm going to use the host and the port. So it's a udp plugin (the spacing is very important here), then host, and this is where you put the IP of Logstash, basically the IP that Logstash listens on, and then port. I'm just going to do 5144. 514 is the default syslog port, but if you're going to use any port under 1024, the well-known ports, I think you're going to have to give Logstash admin privileges or something like that, so I'm going to use 5144. And this is basically it for the input plugin.

06:28

Now, the filter. Before we do any filters I'm just going to look at the data first, and for that we need to add something to the output. Right now we're not going to send to Elasticsearch; we want to parse the data first. So to start testing I'm going to send the output to the shell, basically to this console right here: stdout. And that's all I have to do. Something very important: because we are listening on this port, 5144, we have to enable it. I'm going to say yes here, save fortigate.conf, and I opened this port already: firewall-cmd --list-all... and I opened this port, so this is important here as well. And to run Logstash we're going to go here: logstash, with -f to specify the file, /etc/logstash/conf.d/fortigate.conf. Okay, it takes a few seconds.

07:57

Now we're seeing all the data coming from the firewall, not parsed, not anything. See this message here? This is what we're going to be working with. We're going to grab this message and we're going to start filtering.

08:19

Now we're going to work on the filters. The first filter I'm going to use is the grok filter. So you go to the filter plugins... where's the grok filter... grok filter, a quick description here: "parse arbitrary text and structure it". And you can open grok debuggers online. I did this before, so I basically copy-pasted this message, whatever you have from here, for example this whole message; this is what we need to filter, or what we need to find the pattern of. So it's basically in this format: percentage, open bracket, and we can see here this number, this is the syslog 5424 priority, and then percentage GREEDYDATA, and we put that in a field called message. So now we've parsed it a little bit, and now we will work with this message.

09:36

Before I run it, I'm going to run this command with -r, to reload the configurations automatically. I'm going to duplicate this session, bring it here, and open the configuration file on this side, go to fortigate.conf, and now I'll be adding the filters. I'll save the file and it will reload on the other side automatically.

10:15

So the first one, we said, was the grok filter. I'm going to copy-paste from the document I have here on the lab (okay, so the double space is important). So, grok: just follow the documentation for how the format of this filter works. Okay: grok, match, and then message, and then you put the pattern in. So message, then this is the pattern, and then we're going to overwrite the message, this message right here. I'm going to save and see how that looks. So now it's reloading automatically. We won't be able to see much here, but it's better than before; we still have this event original and the message... I have to catch it. Okay, we're just going to continue building on the filters.

11:28

The next one I'm going to do is mutate. I don't want to see all of these: the original event, the log, the version of the event, and the timestamp; that timestamp is the time the event got ingested into Logstash, not the time of the event itself, which is what we actually need, here, from the event itself. So I'm going to remove those fields; we don't need them. Ctrl+X and save. Let's see how it looks now. A little cleaner: now we see only the message, and it does not have that syslog priority value here anymore.

12:12

Now we're going to grab this message... well, we're not going to grab it; we're just going to add another filter that will parse each and every field by itself, and for that we're going to use a filter plugin called kv. We're doing a field split on the space: between every field there is a field and then its value, then a space, then another field and its value, and then a space. This is what will help us parse those fields. So Ctrl+X, save, yes. It will reload. And now, look at this: we are seeing the fields parsed, but we're still seeing the main message here, so we're going to remove it, because we parsed it into individual fields.

13:17

We're going to do another mutate filter and remove that message, the big chunky message, and then we're going to add a filter... or rather add a field, sorry. We're going to call this field log_date, and we're going to put in it the value of date, space, time, because you see those date and time fields; if I find them here in the first line of this message: date, space, time. In Elasticsearch you need that @timestamp; you need something like this, but generated from the field values themselves, to be able to ingest them and index them in Elasticsearch. So we're going to do that first; this is a step ahead. I'm going to save that, reload, and now we have those clean fields without that big message.

14:35

And now the last filter plugin is the date one. We added this field in the previous step, we called it log_date, and it has the values of date and time. Then we're matching: we're looking at this log_date and we say, convert it to this format: year, month, day, and then hour, minute, seconds. We can set the timezone to convert the time to our timezone, and then the target is @timestamp. We need this field to ingest our data into Elasticsearch. So let's see what happens now.

15:33

I want to catch it from the top to be able to see, so I'm going to stop it for a bit and look at that new @timestamp field. This @timestamp field is made from this field, time, and this field, date. Now we don't need date, we don't need time, and definitely not this log_date that we created just to create this @timestamp. So one more filter at the end (I'm going to keep this running) that just says: remove log_date, date, and time.

16:20

We're also going to convert received bytes and sent bytes to the integer type, because then we will be able to do mathematical calculations on them; if it's a string, we can't add a string to a string. Now sent bytes and received bytes are integers, and we can see @timestamp, and we don't see time and date anymore. This is much cleaner than before, and we are able to send those to Elasticsearch.

16:55

To do that, we're going to go to the output plugin now, and we're going to say elasticsearch. I'll show you the output plugin and the documentation. So, Elasticsearch: we're sending the data to Elasticsearch. This output plugin "provides near real-time search and analytics for all types of data". There are many, many options and settings here, but what I'm going to use is the following (okay, I don't want to copy anything wrong). So: hosts, which is the Elasticsearch host, with https, because we're going to use the CA cert. The index: we're going to save them in an index called firewall; you can keep it as just firewall, but I added today's date. Then user and password (it's a very secure password, I know). Then we're enabling SSL, and I'm saying this is the CA cert. This closes the brackets for elasticsearch. And now, should we comment out the stdout? For now we'll keep it, so that we can make sure the data is still running. We're going to save... yes.

18:36

But it will not send right away, because we need to copy the cert from the Elasticsearch node, or from the CA. I'm just going to say yes. So it's still outputting to the console here, but not going to Elasticsearch, because we need to copy that cert. And this is my Elasticsearch node: scp, let's see, the Elasticsearch certs, http_ca... now we're sending it to 192.168... and... let's see... oh, I did this before. history... scp, I'm just going to copy this... no, maybe the one before... this scp... okay, "host verification failed"... oh, because I have to... I need to remove this key from here somehow. Now copy again. Are you sure? Yes. Password. Now the cert is added, and this reloaded.

20:37

It's still going. Let us log in to Kibana, with my very secure password, and go here: Management, Stack Management, Index Management, and see if we have a firewall index. And we sure do. Those are from yesterday and the day before, and this is the new index. Let's reload: 46 documents, and the size is increasing here.

21:18

To look at the data we will go to Data Views... oh, I already added the data view. So you can create a data view and put a name here; that is what will match the index. Then we can go to Discover, and we can see the data coming in from the firewall. Let me see when the last time was before I broke it... oh, that was yesterday. I'll go back to the last 15 minutes. Let's stream the data live, every 10 seconds, let's say, and apply. Now we can see our data coming in from the firewall. You can look at each document; in Elasticsearch they call it a document, which is the same as an event, the same as a log. We're looking at this document: it is a ping... media player, I don't know... destination IP... So now you can look at your data, visualize the data, build dashboards. I'll do another video with some examples of building a dashboard.

22:32

And this is basically how you configure Logstash to send the data. I'm going to stop this right here... oh, actually, I stopped it, and now the data will stop streaming in. I need to do one more thing here: I don't want to stream to the shell, so I'm going to comment this out, and I'm going to run Logstash as a service. Start logstash... hopefully this does not break; it shouldn't break... status... the logstash service is running.

23:24

Now, after I started the logstash service, it stopped logging in Kibana. I fixed it, but it had stopped logging, and the reason was that I needed to give permissions on the certificate. I'll show you what permissions I gave it. So this is the certificate I copied over, and I gave it 660: chmod 660 on this file. I also gave the logstash user permissions: chown, like this, root:logstash, on the same file. Okay. After that I started the service and it worked again.

24:26

So there's a lot of troubleshooting while doing this, but hopefully this video was informative and it helped you configure Logstash. You can use it to ingest data from basically anything. And this is my data running and ingesting, coming into Elasticsearch. Thank you for watching, and I will see you in the next one.


Related Tags
Logstash, Elasticsearch, Syslogs, Firewall, Configuration, Data Ingestion, Kibana, Network Security, Data Filtering, IT Tutorial, System Monitoring