Creating a Proxmox cluster with 3 old laptops

ElectronicsWizardry
13 Apr 2022 · 20:20

Summary

TL;DR: This video details the process of creating a 'laptop cluster' using older laptops with Sandy Bridge and Ivy Bridge architectures, running Proxmox for virtualization. The experiment explores cluster stability, performance, and the feasibility of using such setups for virtual machines. It discusses the advantages of using laptops for low power consumption and ease of access, as well as the limitations in upgradability and storage. The script also covers the challenges faced with high availability and shared storage using Ceph, concluding that while the project is educational, it may not be practical for long-term use due to configuration complexities and hardware limitations.

Takeaways

  • 💻 The video discusses creating a 'laptop cluster' using old laptops as nodes, leveraging their low cost and availability.
  • 🔌 The laptops have been upgraded with additional RAM and gigabit Ethernet to meet the minimum requirements for running a hypervisor like Proxmox.
  • 📈 The creator aims to test the stability, performance, and overall experience of using these old laptops in a cluster configuration.
  • 🔄 The video highlights the challenges of setting up a cluster with only two nodes because of Proxmox's quorum voting, which makes a third system necessary for a more robust setup.
  • 👍 Advantages of using laptops for a cluster include their small size, low power consumption, and built-in peripherals, which reduce the need for additional hardware.
  • 👎 Disadvantages include limited upgradability, especially for memory and storage, compared to desktops or servers.
  • 🛠️ The creator installs Proxmox VE 7.1.2, a free hypervisor known for its low hardware requirements and ease of setup, on all the laptops.
  • 🔄 The video demonstrates the process of creating a cluster in Proxmox, including setting up nodes and handling migration between them.
  • 🔒 The creator discusses the importance of CPU compatibility when migrating VMs between different CPU architectures and the need to adjust settings accordingly.
  • 🔄 The script covers the setup of High Availability (HA) in Proxmox, which allows VMs to be automatically restarted on other nodes in case of a node failure.
  • 💾 The video also explores the use of replication and shared storage solutions like Ceph for data redundancy and improved VM migration capabilities.
  • 🚧 The creator concludes that while the laptop cluster is a good learning experience, it may not be suitable for long-term use due to configuration complexities and performance limitations.

Q & A

  • What is the purpose of creating the 'laptop cluster' as described in the script?

    -The purpose is to utilize older laptops, which are essentially free for the user, to create a cluster using Proxmox and evaluate the experience, stability, and performance of such a setup.

  • Why are the 10-year-old laptops considered suitable for this project?

    -These laptops are suitable because they have quad-core processors, which provide a reasonable amount of performance, and they have between 8 and 12 gigabytes of RAM, which is about the bare minimum for running Proxmox or another hypervisor.

  • What network improvements were made to the laptops for the cluster setup?

    -A gigabit Ethernet adapter was added to the laptop that only had a 100-megabit integrated network card to ensure reasonable network speeds for the cluster.

  • Why is having a third system in the cluster important for the voting mechanism in Proxmox?

    -A third system is important because Proxmox requires a certain number of votes for decisions in a cluster. With two systems, if one fails, the cluster cannot reach the required number of votes, but with three, there is always a majority to agree on actions.

  • What is the role of the older laptop with a Core 2 Duo system in the cluster?

    -The older laptop with a Core 2 Duo system serves as the third vote in the Proxmox cluster, ensuring that there is always a quorum for decision-making even if one of the faster systems goes offline.

  • What are some advantages of using laptops for a cluster as mentioned in the script?

    -Advantages include easy availability, small size, low power consumption, built-in keyboard, video monitor, and mouse, as well as a built-in battery that can act as a UPS.

  • What are the disadvantages of using laptops compared to small form factor desktops or servers?

    -Disadvantages include limited upgradability, such as a maximum of two hard drive bays in most cases, and limited memory upgradeability. Some laptops may also have lower-end network cards that can limit performance.

  • What software was installed on the laptops for the cluster, and why was it chosen?

    -A plain install of Proxmox VE 7.1.2 was used because it is a free hypervisor the user is comfortable with, and it generally works well on low-power hardware thanks to its low hardware requirements.

  • How does the script describe the process of creating a cluster in Proxmox?

    -The process involves creating a cluster on the first node and then using the join information to add other systems to the cluster. Proxmox makes it easy to manage the cluster as if it were a single system.

  • What issues did the user encounter when trying to migrate VMs between different nodes with different CPU architectures?

    -The user encountered errors when trying to migrate VMs because the CPU architectures (Sandy Bridge and Ivy Bridge) support different features. The user had to set the CPU type to 'kvm64' to resolve the issue.

  • What is the user's conclusion about using old laptops for a long-term cluster setup?

    -The user concluded that it might not be practical for a long-term setup due to the complexity and issues encountered, such as difficulties with configuration and shared storage using Ceph on such old systems.

Outlines

00:00

💻 Creating a Laptop Cluster with Proxmox

The video script details the process of creating a 'laptop cluster' using several older laptops, approximately 10 years old, with Sandy Bridge and Ivy Bridge architectures. The purpose is to harness their quad-core processors and 8-12 GB of RAM to run Proxmox, a hypervisor, and evaluate the stability and performance of such a setup. The creator addresses the challenge of setting up a cluster with only two nodes and the solution of adding a third, older laptop to serve as the 'third vote' for cluster decisions. Advantages of using laptops include their low power consumption, small size, and built-in components. However, the creator also notes the limited upgradability and memory upgradeability compared to desktops or servers.

05:02

🔄 Setting Up and Managing the Cluster

The script continues with the setup process of the cluster in Proxmox, including the initial configuration and the addition of nodes. It explains how the cluster allows management as if it were a single system and demonstrates the creation of virtual machines (VMs) across different nodes. The creator also discusses the limitations of using local storage and the inability to use resources from other systems by default. The paragraph concludes with an attempt to migrate a running VM between nodes, which results in an error due to CPU feature differences, highlighting the need to configure the VM to match the lowest common CPU features across all nodes.

10:04

🔄 Exploring High Availability and Data Replication

This section delves into the concept of high availability (HA) in Proxmox, which ensures VMs continue running on other systems if one fails. The creator discusses the limitations of using local storage for HA and introduces the idea of replication, which involves copying VM data to another system at regular intervals. The script provides an example of setting up replication for a Windows Server 2019 VM and explains the use of shared storage as an alternative to local storage, mentioning the setup of a basic Ceph storage system using additional SSDs in the laptops.

15:06

📊 Assessing Performance and Usability of Ceph Storage

The script discusses the performance of the Ceph storage system set up on the old laptops, noting the slow write speeds and overall degraded performance. The creator uses benchmarking tools to illustrate the performance issues and shares insights on the potential causes, such as the need for network communication between systems and the limitations of using SSDs in a distributed storage setup. The paragraph also touches on the usability of Ceph for light use cases but advises against it for more demanding applications due to its performance limitations on the old hardware.

20:06

🚫 Challenges and Lessons Learned from the Laptop Cluster

In the final paragraph, the creator reflects on the challenges encountered while setting up the laptop cluster and the lessons learned. They express concerns about the complexity of managing a cluster on old hardware and the difficulties faced with Ceph storage, especially after simulating a node failure. The script describes issues with VM booting and data integrity after attempting to migrate VMs and test high availability. The creator concludes that while the experience was valuable for learning about clusters and failure handling, it might not be practical for long-term use or in new setups due to the complications and performance issues encountered.

📚 Conclusion and Call for Audience Experience

The video concludes with a call to action, inviting viewers to share their experiences with setting up clusters on old systems and how they have fared. The creator summarizes the project and its outcomes, highlighting the educational value of experimenting with clusters on physical hardware and the insights gained about the practical limitations of using old laptops for such purposes.


Keywords

💡 Laptop Cluster

A 'Laptop Cluster' refers to a group of interconnected laptops working together to perform tasks that are typically handled by more powerful servers. In the context of the video, the creator is attempting to use old laptops to form a cluster using Proxmox, a hypervisor, to see how well they perform and cooperate as a unit. The idea is to repurpose older technology for new uses, demonstrating resourcefulness and the potential for older hardware to still be viable in certain applications.

💡 Proxmox

Proxmox (Proxmox VE) is an open-source virtualization platform built on Linux that uses KVM to run virtual machines. It is used in the video to create a virtualization environment across the old laptops, allowing them to function as a cluster. The script discusses the setup and management of this environment, highlighting its ease of use and the challenges faced when dealing with a cluster of older hardware.

💡 Sandy Bridge and Ivy Bridge

Sandy Bridge and Ivy Bridge are microarchitectures for Intel's processors, which were prevalent about a decade ago. In the video, the creator mentions that the old laptops have these architectures, indicating their age and the performance level they can offer. These terms are important as they set the expectation for the hardware capabilities of the laptops being used in the cluster.

💡 Quad Cores

A 'Quad Core' refers to a processor with four independent processing units, or cores, capable of handling multiple tasks simultaneously. The video mentions that the old laptops have quad-core processors, which is significant because it suggests they have a 'reasonable amount of performance' for their age, making them suitable candidates for the cluster project.

💡 Gigabit Ethernet

Gigabit Ethernet is a transmission technology based on the Ethernet frame format and protocol used in local area networks (LANs), which provides a data rate of 1 billion bits (one gigabit) per second. In the script, the creator adds a Gigabit Ethernet adapter to one of the laptops to improve network speeds, which is crucial for the cluster's performance and communication between nodes.

💡 High Availability (HA)

High Availability (HA) in the context of the video refers to the system's ability to remain operational and functional despite the failure of some of its components. The creator discusses setting up HA in Proxmox to ensure that if one laptop in the cluster fails, the virtual machines can continue to run on another system, demonstrating a key feature of cluster configurations.

💡 Replication

Replication in the video refers to the process of creating and maintaining copies of virtual machines on different systems within the cluster. This is a form of redundancy that ensures data availability and business continuity. The script describes setting up replication for a Windows Server 2019 VM, which would allow it to be restored on another system in the event of a failure.

💡 Ceph

Ceph is a distributed storage system that provides a unified platform for block storage, object storage, and file storage. In the video, the creator attempts to set up a Ceph cluster using additional SSDs in the laptops for shared storage across the nodes. However, the performance and stability issues encountered highlight the complexity of implementing distributed storage solutions on older hardware.

💡 ZFS Send/Receive

ZFS Send/Receive is a feature of the Z File System (ZFS) that allows for the efficient replication of data between ZFS storage pools. In the context of the video, it is used to facilitate the replication process in Proxmox, ensuring that VM data is copied from one system to another within the cluster, which is vital for the high availability setup.

💡 VM Migration

VM Migration is the process of moving a virtual machine from one physical host to another with minimal downtime. The video script discusses the ease of migration within the cluster, especially when using shared storage like Ceph, and the importance of this feature for high availability and load balancing in a cluster environment.

💡 e-waste

e-waste refers to electronic devices that are no longer needed or are obsolete and are discarded. The video script mentions obtaining old laptops from e-waste bins, emphasizing the environmental aspect and the potential for repurposing such devices in creative projects like setting up a laptop cluster.

Highlights

Creating a laptop cluster using older laptops to explore performance and stability with Proxmox.

Utilizing Sandy Bridge and Ivy Bridge architecture laptops with quad-cores for reasonable performance.

Ensuring a minimum of 8GB RAM for running Proxmox or another hypervisor.

Adding a gigabit Ethernet adapter to improve network speeds for the cluster.

Challenges of setting up a cluster with only two nodes due to voting and agreement requirements.

Using an older laptop or a Raspberry Pi as a third node to facilitate voting in the cluster.

Advantages of using laptops for a cluster: availability, low power consumption, and built-in peripherals.

Disadvantages include limited upgradability and memory upgradeability compared to desktops or servers.

Using Proxmox VE 7.1.2 for its low hardware requirements and suitability for low-power hardware.

Initial setup of the cluster includes hostname and IP address assignments for each node.

Creating and managing a cluster in Proxmox allows treating multiple systems as one entity.

Migrating VMs between nodes requires compatible CPU types and may involve copying disk images.

Issues with CPU feature compatibility between different generations of Intel processors.

Using replication in Proxmox to enable high availability for VMs in case of node failure.

Exploring the use of Ceph for shared storage in the cluster, despite performance limitations.

Performance benchmarks for Ceph storage showing moderate read speeds but slow writes.

Migrating VMs with shared storage is faster due to not needing to copy disk images.

Testing high availability and experiencing issues with VM booting after node failure.

Learning from the experiment that small clusters with old hardware can be challenging for distributed storage like Ceph.

Recommendation to use simpler setups for home servers and caution against complex configurations on old systems.

Overall conclusion that while the laptop cluster experiment had educational value, it may not be practical for long-term use.

Transcripts

00:00

Today I'm going to be creating what I want to call the laptop cluster. I have a few of these older laptops that are about 10 years old now, and I'm just going to try putting them in a cluster using Proxmox and see how it goes: see how the experience works, see how it is stability-wise, see what type of performance I can get out of it, and just see how the overall experience is. The main reason I'm trying this is because laptops like this are essentially free for me. These are roughly 10-year-old laptops running Sandy Bridge and Ivy Bridge architectures. They luckily have quad cores, so they have a reasonable amount of performance, and I believe these systems have between 8 and 12 gigabytes of RAM, which is what I'd say is the bare minimum for running Proxmox or another hypervisor. I've also added a gigabit NIC to the one that only has a 100-megabit integrated NIC, and this one has a built-in gigabit NIC, so there are reasonable network speeds going on right now.

00:49

Now, one issue is that I only have two of these reasonably fast laptops to use right here, and Proxmox, like most cluster software, gets kind of unhappy with two nodes, because you have a problem: if one node dies, then you only have one node left in the cluster. The way Proxmox voting works, it has to reach a certain number of votes for anything to happen in the cluster, and with two devices that means both devices would have to agree. If one node is offline, that can't happen, because the offline node can't agree to anything. So you can either set it so that one node is essentially the master and always decides what happens, or you can make it so that fewer votes are needed to agree, but both of those worsen the high availability. You can do that in a home lab setting if you need to, but the best solution is actually to just get a third system, which I have down here. This laptop is even older, running a Core 2 Duo with about 2 gigs of RAM, but it's more than enough just to be that third vote in Proxmox; I don't plan on running any VMs or containers on it. You can also use something like a Raspberry Pi for this use case; you just need enough horsepower for it to basically be the third vote. Then if either of the faster systems goes offline, or just the third system, there are always two votes left and the two remaining systems can keep running as a full cluster.
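
For reference, both the vote count and the two-node fallback mentioned above can be handled from any node's shell. This is a minimal sketch of those commands; lowering the expected vote count weakens the cluster's safety guarantees, so it is only meant as a temporary measure:

    # Show cluster membership and quorum information (expected votes, total votes, quorate yes/no).
    pvecm status

    # Two-node fallback: tell the surviving node that a single vote is enough so it can keep acting.
    pvecm expected 1

In the video the preferred fix is the third laptop, which keeps three votes in the cluster and avoids this override entirely.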

02:05

Now, one thing you might ask is why you would want to use laptops. Well, the first reason is they're pretty easy to get your hands on; at least for me, I seem to get these for free from e-waste bins and other places like that, and I can find them used from other people relatively easily. The next thing is that they're pretty small and low power, and you can tuck them away easily. These are opened up and big right now, but I can close the lids, stack them in a small pile, and tuck them in a corner pretty easily, and they use fairly minimal space. Also, because they're designed to run off battery power, they use quite little power compared to some other systems, so all three of these systems together should be less than 100 watts, and probably 30 to 40 watts at idle would be my guess, as laptops are typically quite good at very low idle power consumption. The other advantage is that they essentially have a built-in keyboard, video monitor, and mouse, so you don't have to add anything if you want to administer a system, and they also have a built-in battery which essentially works as a UPS: I can yank the power cord and the system will just keep going, so I don't need an additional UPS.

03:08

But the disadvantages compared to a small form factor desktop or a full-on server: much less upgradability. A nice laptop might have two hard drive bays, but that's pretty rare, especially with newer and smaller models, and you're not going to get more than that like you can with a desktop or server. Memory upgradeability is also relatively limited; this one is fairly high-end and supports 32 gigabytes of memory, but a lot of systems from this era don't, and it can often be quite expensive to find the largest DIMMs you need to reach the 32 or 16 gigs of RAM the laptop can support. Also, like this one here, a good number of low-end laptops use 100-megabit network cards instead of gigabit, which really limits things like NAS use, streaming, and any type of shared storage on the network. I'd strongly suggest you get a USB 3 to gigabit NIC, because I find it works better than 100 meg, though neither is a great solution.
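
If you want to confirm which NIC a laptop is actually using and what speed it negotiated, a quick check from the Proxmox shell looks roughly like this (the interface name enp0s25 is a placeholder and will differ per laptop):

    # List all interfaces and their link state.
    ip -br link show

    # Show the negotiated speed of the built-in or USB NIC.
    ethtool enp0s25 | grep -i speed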

04:03

As for software, I've just put a plain install of Proxmox VE 7.1.2 on all of these, which is the newest version of Proxmox at this time. I chose Proxmox as a free hypervisor that I'm pretty comfortable with, and it generally works quite well on fairly low-power hardware like this, as it has fairly low hardware requirements itself. I've done some of the initial setup, like going through the installer, updating the systems, and doing my basic setup for all Proxmox systems; I talked a bit more about that in my Proxmox setup guide a few videos ago if you want to see more details.

04:34

So now I've got the three laptops pulled up on my computer screen here. I have them open in the browser and in a terminal over SSH, and they look like normal blank Proxmox nodes; nothing has been set up, and I've given them all their own hostnames and IP addresses. So let's take a look at creating that cluster. I'm going to go to my first node here and just create a cluster, and let's call it 'lappy'. It starts creating the cluster right now, and now it's created. I can now see the join information here, which I want to copy and paste into my other systems so that they can become part of the cluster. Proxmox is in general pretty easy when it comes to setting up clusters.
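
The same cluster creation can be done from the shell instead of the web GUI; a minimal sketch using the cluster name from the video (the IP address of the first node is a placeholder):

    # On the first node: create the cluster.
    pvecm create lappy

    # On each additional node: join it, pointing at the first node's IP address.
    pvecm add 192.168.1.10

    # Verify membership and vote counts from any node.
    pvecm status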

05:15

So now my cluster has been created. Looking in the web GUI, the biggest change is that on the left I now see my two other laptops, and I can see and manage all of their settings just as if I were on their own web pages. The easiest thing the cluster gives you is managing everything as if it were one system. I can also look at the command line with pvecm status, and it shows me the three systems I'm running here: it shows that the votes are all happy, how many votes each system has, and that everything is working correctly in this cluster. So now let's take a look at creating a VM. I've uploaded an ISO of openSUSE here, and now I can see the multiple nodes I can put it on. I can call it something like opensuse1, and then I can just create the VM like I normally would on a single node; there's nothing really special about creating this VM here.

06:07

Now, the one thing is that when I'm creating the VM, I can still only see storage and other devices on the local system. Just because I have the cluster and I can see all the systems doesn't mean I can use the storage or any of the resources from the other systems by default; the cluster essentially just lets me manage the other systems. So now I've installed a few VMs on my cluster, and let's play around with running them on different systems. I have an Ubuntu VM that's currently shut down, and I have a Windows 10 VM that's doing what Windows 10 loves to do and updating. I'm going to take the shut-down Ubuntu VM, and now I can actually migrate it to other systems. I can pick a different node, so let's do laptop 2, this one, my Ivy Bridge quad core. It says it has a relatively large disk, so it might take a while. When migrating between systems, it copies the VM configuration, so all the details of how the VM is made, and then it also copies the disk image; that's what's going to take a while. If I look at my screen here, it's about a gig out of 32 gigs done, copying that 32-gigabyte VM image to the other system. And this isn't limited to shut-down VMs; I can also do this on a running system. It might take a while, and it warns me that it will also migrate my running VM. But when doing this I actually got a bit of an error, saying that it can't move the VM due to missing CPU instructions.
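
The same migrations can be started from the shell; this is a minimal sketch, assuming the Ubuntu VM has ID 101 and the target node is named laptop2 (both placeholders):

    # Offline migration: copies the VM configuration and its local disk image to the target node.
    qm migrate 101 laptop2

    # Live migration of a running VM on local storage: RAM and local disks are streamed across.
    qm migrate 101 laptop2 --online --with-local-disks

With shared storage, like the Ceph pool set up later in the video, the disk copy step disappears and only the RAM state has to move.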

07:31

One of the things in Proxmox that's basically ignored, at least by me, if you're running a single node, is the type of CPU the VM is given. If you go to the processor settings and hit edit, by default it will use kvm64, and I normally recommend just using 'host' on a single system; 'host' just copies whatever CPU your system has and shows that to the VM. But if you're migrating between multiple nodes like I'm doing here, and especially since these are different generations, with this one being Sandy Bridge and this one being Ivy Bridge, they support different features. So normally I'm going to have to set it to the lowest common denominator, which should be Sandy Bridge here. Let's take a look at that: this one says it's set to Sandy Bridge, but it doesn't seem to be working, even though both of these should be at least Sandy Bridge. So I'm going to have to look at why it doesn't like Sandy Bridge and what exactly it's complaining about. It looks like, with a bit of googling, the IBRS variant involves some of the Spectre mitigations that will be force-required if you use the IBRS CPU type; if you don't use it, it won't force-require them. But it's still interesting that it doesn't want to migrate the VM with the Sandy Bridge type, since I believe both CPUs should be at least Sandy Bridge. Once this finishes, I'm going to try turning it down to just kvm64 and see if that will migrate.

08:46

Taking a look at my shut-down Ubuntu VM, the migration has finished; it looks like it copied all 32 gigs of the virtual machine disk successfully, so it's on the new system, and on the left I can see it on laptop 2. Let's just try firing it up there right now. It looks like it started and then threw that same error, that it can't start using that feature with the Sandy Bridge type. So I took a bit of a deeper look to see if I could figure out what's going on. Google didn't have an obvious answer other than just using whatever the host provides in KVM, which does work. Looking at the CPU features, I ran a little online diff, and it looks like there are a few features, like SMX, AES, and LAHF_LM, that differ: the older system has them and this newer one doesn't, and I'm guessing that's the issue. I'm not sure why they aren't supported; to my knowledge Intel didn't remove those between generations, but it might also just be due to the BIOS configuration or however Dell set the system up, as CPU features often get disabled by the BIOS, the system manufacturer, or other things. It does look like in this use case setting the CPU type to kvm64 works fine, so I'm just going to do that for right now. Firing up the console, I can see that my Ubuntu VM is working correctly on the other system, just as I'd expect. Now that I have a Proxmox cluster set up and migration working between my laptops, let's try to do a bit more clustering and make these systems work together a little bit more.
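
The CPU type change described above can also be made from the shell; a minimal sketch, assuming the affected VM has ID 101 (a placeholder):

    # Expose the generic kvm64 CPU model so the VM only sees features available on every node.
    qm set 101 --cpu kvm64

    # Confirm the setting in the VM configuration.
    qm config 101 | grep ^cpu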

10:14

One way to do that in Proxmox is HA, or high availability, and the idea behind it is that if one of these laptops were to die, my VMs keep running on another system. Now, it's not perfect, because the VM gets shut down when one of the systems dies and then another node fires it back up a little later, but it's still pretty good and doesn't really require any special software; your services just have to come up when the VM restarts. The problem with this in my current configuration is that since I'm using local storage on both of these laptops, if one system were to go down, all of its VM data and VM disks would be inaccessible to the other systems to boot from.

10:51

There are a few ways to get around that. The easiest way in Proxmox is replication, so I'm going to look at an example system I have replication set up on. Replication essentially just copies a VM's data from one system to another on a schedule; by default it runs every 15 minutes, so the worst-case scenario is that you're left with an image that's 15 minutes old if the node fails. It uses ZFS send and receive under the hood to make this work. Looking at my system, I have VM 103, my Windows Server 2019 system, set up with that replication. If I look under Datacenter and then Replication, I can see that it's on laptop 1 but is being replicated to laptop 2 every 15 minutes. If I look at the data I'm storing, I can see my VM 103 disk on the host it's running on, which makes sense, and I also have it on the other host that doesn't have the VM. That way, if this one dies, the other host already has the data, maybe a little bit old, and can just restart the VM there. So that's pretty simple, works pretty well, and if a system were to fail, things would just keep working that way.
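
The replication job shown in the GUI can also be created with the pvesr tool; a minimal sketch, assuming VM 103 sits on ZFS-backed storage and the target node is named laptop2 (the node name is a placeholder):

    # Create a local replication job for VM 103 to node laptop2, running every 15 minutes.
    pvesr create-local-job 103-0 laptop2 --schedule '*/15'

    # List the configured jobs and check their last run.
    pvesr list
    pvesr status

Replication only works with ZFS-backed storage on both nodes, which matches the ZFS send/receive mechanism mentioned above.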

12:01

Now, the other way to do it is to have shared storage, and the idea behind this is that you have some sort of storage system that all the computers in the cluster, like all these laptops, can access at the same time and everyone can see, unlike local storage where only that one system can see it. There are a few ways to do that, like having a separate NAS or SAN that everyone can access, but I already have three laptops here, so I don't want to set up more than that. What I want to do is make it so that each of these laptops has an additional drive in it and uses that drive for shared storage. In Proxmox, Ceph is the best way to set that up. I am not an expert in Ceph and really don't know that much, but Proxmox makes it pretty plug and play, and I probably made one of the worst setups you can create: both of these laptops have an extra SSD in them, and I just told Proxmox to create an OSD on each of those drives.
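
The plug-and-play Ceph setup in the Proxmox GUI maps onto a handful of pveceph commands; a rough sketch, assuming the spare SSD shows up as /dev/sdb, the cluster network is 192.168.1.0/24, and the pool is called vm-pool (all placeholders):

    # On each participating node: install the Ceph packages.
    pveceph install

    # Once, on the first node: initialize the Ceph configuration for the cluster network.
    pveceph init --network 192.168.1.0/24

    # On each participating node: create a monitor and turn the spare SSD into an OSD (this wipes the disk).
    pveceph mon create
    pveceph osd create /dev/sdb

    # Create a pool for VM disks; it can then be added as RBD storage for the whole cluster.
    pveceph pool create vm-pool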

12:47

Taking a look at my cluster right now, if I look at Ceph on my systems I can see that I have two nodes in my Ceph setup and both of the OSDs are working correctly, but it says it's heavily degraded. My guess is that this is because it wants to keep three copies of all my data, but I only have two systems that can store it, so it only has two copies. I believe this is editable somewhere, but I can't find it right now, so it's going to stay in this mode. It does mean that if one of these systems were to go down, it will keep running.
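
The setting being hinted at is the pool's replica count, which can be changed from any node's shell. A minimal sketch, assuming the pool is named vm-pool (a placeholder); running with two copies and min_size 1 trades safety for availability:

    # Show the current replication settings for the pool.
    ceph osd pool get vm-pool size
    ceph osd pool get vm-pool min_size

    # Keep two copies instead of three, and allow I/O with only one copy available.
    ceph osd pool set vm-pool size 2
    ceph osd pool set vm-pool min_size 1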

13:19

One thing I was curious about when setting up the Ceph configuration was how the performance would be, because compared to a traditional single internal drive, Ceph has a lot more going on: there are two systems it has to talk to, it has to go over the network, there's just a lot more happening. So I installed a VM on Ceph and ran the most basic disk benchmark, CrystalDiskMark, on Windows: around 150 megabytes per second read, 44 write, and the random writes are really bad. My guess is that's because it has to confirm the write with the second system and go over the network every time there's a write, and that's pretty slow. I really don't know exactly how to speed that up; faster networking would do it, and you might be able to have one local drive confirm the write and flush it out later with some sort of caching solution. I'm not sure, but I believe a commercial, correctly set up Ceph deployment can manage it.

14:08

Now, the question is whether this is enough performance, and if you're just playing around, it kind of is. I'm on Windows 10 and I'll just fire up Edge, for example; it takes a little longer than I'd expect, but it's somewhere between a hard drive and an SSD in most uses. I'd say this is usable for light use, but I probably wouldn't want to push it too hard. Another thing I did to look at Ceph performance was run the GNOME Disks benchmark tool. Taking a look at it here, I can see the speeds popping up really high, likely due to RAM cache, and then dropping as the RAM cache presumably stops helping. Latency is 0.2 milliseconds on average, but I can see some spikes into the couple-of-milliseconds range, and the write speeds are pretty crummy, just as CrystalDiskMark showed. It's usable, but this was one of the longest Windows installs I've done when this VM was installing, so I'd say barely usable, and this is two SSDs on two old laptops. If you set it up correctly, Ceph can be very fast, but this is not a correct Ceph setup.
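
Besides in-guest benchmarks like CrystalDiskMark, Ceph ships a raw benchmark that takes the VM layer out of the picture; a minimal sketch, again assuming the pool is named vm-pool:

    # Write benchmark: 30 seconds of 4 MB objects, keeping the objects for the read tests.
    rados bench -p vm-pool 30 write --no-cleanup

    # Sequential and random read benchmarks against the objects written above.
    rados bench -p vm-pool 30 seq
    rados bench -p vm-pool 30 rand

    # Remove the benchmark objects afterwards.
    rados -p vm-pool cleanup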

15:08

I'd probably stay away from using Ceph like this on these old systems. While it works, it's pretty crummy, and you're really pushing the limit of what you can do with Ceph; I'd stick to replication if you need an HA solution. Now, the one big upside with Ceph, or any other solution where all the systems can access the data, is that migration works a lot better. So let's take a look at migration here: I'm going to migrate my Windows 10 VM over to the other laptop, and because it's on shared storage, the migration doesn't have to touch the storage at all. All it has to do is copy the RAM from this system to that system and the VM just moves. In this case it's super fast to migrate, only a couple of seconds, well under a minute, whereas it would take a lot longer if the storage weren't already available on both sides. Now it's almost done copying the VM state: 16 milliseconds of downtime, and I can take a look at my virtual machine and it's now running on a different host and works just fine; the VM doesn't even notice that it got moved between physical machines.

16:06

Now let's test that high availability and see how well it works. I'm going to set up some high availability in Proxmox. I set it up for VM 103, the one that's being replicated to the other system, and I'm also going to set it up for the one on Ceph storage, which is my Windows 10 VM. I'm setting them up with a group I called 'good ones', which contains the laptops that can actually run the VMs; if you don't do that, it sometimes puts a VM on my third laptop, which can't run the VMs, and you run into a lot of weird issues. So I'm going to add this VM to it, and then if this laptop were to die, which it's about to, because I'm just going to go kill it, the cluster should automatically move all the VMs to the other system when it detects the failure. Proxmox says this can take up to two minutes, but we'll see how long it actually takes.
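
The equivalent shell commands for this HA setup look roughly like the following; the resource ID vm:103 matches the replicated VM from earlier, while the node names are placeholders and the group is written as good-ones because config IDs generally cannot contain spaces:

    # Create an HA group restricted to the two laptops that can actually run the VMs.
    ha-manager groupadd good-ones --nodes "laptop1,laptop2"

    # Put the VM under HA management, pinned to that group.
    ha-manager add vm:103 --group good-ones --state started

    # Watch what the HA stack does during the failover test.
    ha-manager status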

16:53

So it has detected that this laptop is dead, and now I can see that these two VMs, or actually three of them, are running on the other system. They had to reboot because they went offline; the failure forced the VMs off, and you can't do this in a way that keeps the RAM state. But if I take a look at the console right now, it's still a functional VM.

17:14

My luck on camera seems to be pretty bad. I did quite a bit of testing for this video and did many power-off cycles, and everything would migrate successfully from one system to the other, but every time I try to show it on camera it doesn't work, and I think I just found a way to brick two VMs. My Windows 10 VM that was stored on Ceph is now just not happy: it boots to the Windows 10 logo with the spinning dots, then dies and restarts. And my Windows Server 2019 system wasn't happy when it moved over to the replicated disk; it says it can't boot from the volume and won't boot ever again. So I just broke two VMs by trying to migrate them. There might be a way to look at snapshots and get older versions of the data, but I don't know enough about Ceph to try to get this working correctly; it just doesn't seem happy with what I've done here.

18:08

Taking a closer look at Ceph on here, Ceph was not happy to lose one of its nodes. I'm guessing Ceph is kind of like Proxmox in that it really wants three or more nodes, so when it's running with two nodes and one of them dies, it just kind of doesn't know what to do and fails; that's what it looks like, anyway. It's looking a little better now because it detects that both systems are up, but it wasn't happy with two systems; you probably want the third system actually participating in Ceph as well. I'm not going to get into Ceph much more here. I've had replication work fairly reliably around Proxmox before, just not right now, it seems.

18:45

So overall, is this something I'd want to set up for the long term or in a new setup? My answer is probably not, for me personally. These laptops turned out to be a lot more of a bear in Proxmox than I expected; I thought this would be a pretty simple cluster to run, but I think I just found ways to keep breaking things I didn't expect. I'd probably want to do a full wipe on all these systems if I were going to use them more, because I think I now have something weird in my configuration that it really just doesn't like. The other thing I've learned is: don't do shared storage like Ceph on such a small setup. It just really isn't happy, and you've got to make sure these distributed systems are set up with a good configuration. I've tried multiple times to run distributed storage on old systems and it never works out well; you kind of just can't do it at a small home level like this with older systems. You need to actually throw correct hardware at distributed storage to get it working properly.

19:39

But if you just want to play around with clusters, this is a pretty good way to do it. These laptops are very cheap now, and you can get a feel for setting up clusters on real metal instead of having a cluster running in a VM, and I think there's a lot to learn about setting up clusters, handling failures, and things like that which you couldn't do with just a single node. But if you just want a simple home server or something, I'd do a single node, or possibly set up two in Proxmox, but don't do anything fancy; just migrate VMs every once in a while if you need to. Thanks for watching this video on setting up a laptop cluster out of these old systems. Let me know if you're running a cluster on old systems like this and how it's working for you.


Related Tags
Laptop Cluster, Proxmox, Home Lab, Sandy Bridge, Ivy Bridge, Virtualization, CPU Architecture, Gigabit NIC, Cluster Stability, VM Migration, Ceph Storage