Lidar Accuracy Comparison - 13 Different Sensors!

The 3rd Dimension
25 Oct 2023 (12:23)

Summary

TLDR: This video compares 16 lidar sensor datasets from leading manufacturers in the reality-capture industry to find the best sensor. The test site is a roughly 8-acre park in Central Florida, and each dataset's horizontal and vertical accuracy is compared using independent check shots, along with vegetation penetration and data density. The results show that terrestrial scans provide the most accurate results, and that how the data is captured and processed matters more than the price of the sensor itself.

Takeaways

  • 🏆 Compares 16 different lidar sensor datasets to find the standout performers in the reality-capture industry.
  • 📈 The study used a roughly 8-acre park in Central Florida as the test site, with a perimeter road and interior vegetated areas.
  • 📊 Horizontal and vertical accuracy of each dataset was compared using independent check shots.
  • 🌿 Vegetation penetration and data density were evaluated across the datasets.
  • 🔍 A high-precision total station was used to take check shots on 12 features that could be extracted from the point clouds.
  • 📝 Check-shot coordinates were compared in an Excel spreadsheet to determine horizontal error.
  • 📊 Vertical error was measured by rasterizing each point cloud and comparing average elevations against the check shots.
  • ⚙️ Some dataset issues were found, e.g. the Leica RTC360 dataset's coordinate system did not match the control file.
  • 🌳 Terrestrial scanners provided the densest datasets, while mobile and UAV scanners performed worse on horizontal accuracy.
  • 🖼️ The photogrammetry dataset could not provide accurate points under overhead obstructions, while the terrestrial datasets provided the most accurate results.
  • 💡 The experiment shows that how the data is captured and processed matters more than how much the sensor costs.
  • 👨‍💼 Experience and correct field and office procedures are more critical than an expensive sensor for getting the results you expect.
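The horizontal-error method in the takeaways above (total-station check shot taken as the accepted-as-true value, point-cloud-extracted coordinate as the measured value, compared in a spreadsheet) can be sketched in a few lines of Python. The coordinates below are made up for illustration, not from the video's data:

```python
import math

# Hypothetical check-shot (accepted-as-true) and point-cloud-extracted
# coordinates, keyed by point ID, as (Easting, Northing) in metres.
check_shots = {"CS01": (500.000, 1000.000), "CS02": (512.340, 1008.210)}
extracted   = {"CS01": (500.011, 1000.007), "CS02": (512.328, 1008.224)}

def horizontal_errors(true_pts, measured_pts):
    """Planar distance between each true/measured coordinate pair."""
    return {pid: math.hypot(measured_pts[pid][0] - e, measured_pts[pid][1] - n)
            for pid, (e, n) in true_pts.items()}

errs = horizontal_errors(check_shots, extracted)
mean_err_mm = 1000 * sum(errs.values()) / len(errs)  # average error in mm
```

The per-category averages quoted in the video (e.g. 13 mm horizontal for terrestrial) are this kind of mean taken over all check shots.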

Q & A

  • How many different lidar sensor datasets does the video compare?

    -The video compares 16 different lidar sensor datasets.

  • What area do these datasets cover?

    -All of them scan the same roughly 8-acre park in Central Florida.

  • How were the control points established?

    -The survey control was shot in and adjusted as a compass-rule closed traverse, and the elevations were corrected with a closed level loop.

  • How did the author verify the horizontal and vertical accuracy of the point clouds?

    -Using a high-precision total station setup, the author shot in 12 features that could be extracted from the point clouds, then brought those check shots into CloudCompare alongside each provided point cloud to determine horizontal and vertical accuracy.

  • Which sensor type provided the densest dataset?

    -The terrestrial scanners provided the densest datasets.

  • What issues should be noted with the datasets' RGB values?

    -The terrestrial and UAV scans did not provide accurate RGB values in areas their cameras could not see. The mobile scans lost density quickly past the edge of the road, and the UAV scans erroneously colorized points that penetrated the tree canopy green.

  • Which sensor type performed best on horizontal accuracy?

    -The terrestrial scanners performed best, with an average horizontal error of 13 mm.

  • What problems did the author encounter while processing the data?

    -Several: the Leica RTC360 dataset was not in the same coordinate system as the control file, the Hesai sensor's dataset did not capture much of the surroundings, and the NavVis VLX3 dataset had unexpectedly high horizontal error, most likely due to operator error.

  • What is the most important conclusion the author draws?

    -How the data is captured and processed matters more than the price of the sensor. Even with a top-tier sensor, you may not get the expected results without proper training and experience.

  • What does the author recommend prioritizing in lidar scanning work?

    -Field and office procedures, which matter more than what is spent on the sensor.

  • Why did the author not publish exact accuracy figures for each sensor?

    -Because there is so much variation in how the data was captured; if different crews captured and processed the data, the results would very likely differ.

Outlines

00:00

🔍 Lidar Sensor Dataset Comparison for the Reality-Capture Industry

This video compares 16 different lidar sensor datasets from leading manufacturers in the industry and attempts to crown a champion. All datasets scanned the same site and used the same control file. A series of independent check shots is used to compare the horizontal and vertical accuracy of each dataset, along with vegetation penetration and data density. This is the first publicly available lidar accuracy comparison of this magnitude, done by an unbiased user: the reviewer did not collect or process any of the scan data and has no reason to favor any one dataset over another.

05:01

🌳 Study Site and Data Collection Process

The study site is a roughly 8-acre park in Central Florida with a perimeter road and an interior vegetated area containing a playground and a few structures. The survey control was shot in with a compass-rule-adjusted closed traverse, and the elevations were tightened with a closed level loop. The control points were provided to several local companies, which flew, drove, walked, or leapfrogged the site with their own lidar sensors; these independent companies were solely responsible for collecting and processing their data. After reviewing the data, the author went on site with a high-precision total station setup and the provided control file and shot in 12 features, each observed with a minimum of three sets of observations. The instrument and equipment were field calibrated beforehand, and the observations were post-processed in a least-squares adjustment, with standard deviations of the observations in the range of 2 to 3 mm.
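The 2–3 mm figure above is the standard deviation of repeated observations coming out of a least-squares adjustment. As a minimal illustration of the statistic itself (the observation values below are invented, not the video's data), the sample standard deviation of three repeated northing observations of one feature would be:

```python
import statistics

# Invented repeated northing observations (metres) of one feature,
# three sets as the video describes.
obs = [1000.0012, 1000.0041, 1000.0025]

sd_mm = statistics.stdev(obs) * 1000  # sample standard deviation, in mm
```

A full adjustment combines many such observation sets and the network geometry, but the reported per-observation spread is this kind of quantity.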

10:03

📊 Dataset Comparison and Analysis

Several issues came up during the comparison. The Leica RTC360 dataset's coordinate system was inconsistent with the control file; even after scaling, the dataset still did not fit and had to be abandoned. The mobile dataset did not penetrate well off the perimeter road, so the horizontal positions of the check shots could not be identified, though elevations could still be compared. Vegetation penetration and data density varied significantly between datasets: the terrestrial scanners produced the densest datasets, with the Faro scan having the most points per square meter, and how the data was captured also significantly affected point density. The terrestrial scans left areas their cameras could not see uncolorized, while the UAV scans erroneously colorized canopy-penetrating points green. The terrestrial scanners delivered the most accurate results, with an average horizontal error of 13 mm and vertical error of 4 mm.
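The vertical-error procedure the video describes (rasterize the point cloud to 0.1 ft cells by averaging elevations, then average the four rasterized points nearest each check shot) can be sketched as follows. This is an illustrative reimplementation with a tiny made-up cloud, not the author's actual tooling:

```python
from collections import defaultdict

CELL = 0.1  # grid cell size in feet, per the video

def rasterize(points, cell=CELL):
    """Average the elevation of all points in each cell x cell box,
    returning one (x_center, y_center, mean_z) point per occupied cell."""
    bins = defaultdict(list)
    for x, y, z in points:
        bins[(int(x // cell), int(y // cell))].append(z)
    return [((i + 0.5) * cell, (j + 0.5) * cell, sum(zs) / len(zs))
            for (i, j), zs in bins.items()]

def vertical_error(check_shot, raster_pts, k=4):
    """Mean elevation of the k raster points nearest the check shot,
    minus the check shot's accepted-as-true elevation."""
    cx, cy, cz = check_shot
    nearest = sorted(raster_pts,
                     key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)[:k]
    return sum(p[2] for p in nearest) / len(nearest) - cz

# Tiny invented cloud: one point in each of the four cells around a check shot.
cloud = [(0.05, 0.05, 10.01), (0.05, 0.15, 10.02),
         (0.15, 0.05, 10.00), (0.15, 0.15, 10.01)]
err = vertical_error((0.10, 0.10, 10.00), rasterize(cloud))
```

Averaging four neighbors, as the author explains, both suppresses outliers and still reveals which clouds have more vertical spread.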

Keywords

💡Lidar

Lidar (LiDAR) is a remote sensing technology that fires laser pulses at a target and measures the return time of the reflected light to compute distance, yielding the target's precise position. In the video, lidar is used to capture real-world data for comparing the accuracy of datasets from different manufacturers' sensors.

💡Photogrammetry

Photogrammetry is a method of deriving the shape, size, and position of objects from photographs. It is commonly used in construction, archaeology, and geographic information systems. In the video, a photogrammetry dataset is compared against the lidar datasets to evaluate its accuracy.

💡Horizontal and vertical accuracy

Horizontal and vertical accuracy describe how close a measurement is to the true value: horizontal accuracy concerns correctness in the plane, vertical accuracy concerns correctness in elevation. In the video, the author compares the horizontal and vertical accuracy of the datasets using independent check shots.

💡Vegetation penetration

Vegetation penetration is the ability of lidar or other remote sensing to pass through vegetation cover and measure the ground or other hidden objects. The author compares vegetation penetration across datasets to evaluate their performance in densely vegetated areas.

💡Data density

Data density is the number of points in a given area; it affects the level of detail and quality of a 3D model. In the video, the author compares data density across sensor datasets to evaluate their suitability for creating detailed 3D models.
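Points per square meter, the density figure the video uses to compare datasets, is just a count of points binned into 1 m grid squares. A minimal sketch, with invented coordinates:

```python
from collections import Counter

def points_per_sq_meter(points, cell=1.0):
    """Mean point count over the occupied cell x cell (metre) squares."""
    counts = Counter((int(x // cell), int(y // cell)) for x, y, *_ in points)
    return sum(counts.values()) / len(counts)

# Three made-up points: two fall in one square-metre cell, one in another.
pts = [(0.2, 0.3, 1.0), (0.8, 0.1, 1.1), (1.5, 0.5, 1.2)]
density = points_per_sq_meter(pts)  # mean points per occupied square metre
```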

💡RGB values

RGB values are the intensities of the red, green, and blue color channels, which together represent the color of each pixel in a color image. In the video, RGB values are used to assess how accurately each dataset reproduces color.

💡Control points

Control points are known reference positions in geospatial data, used to calibrate and improve the accuracy of measurements. The author uses control points to calibrate and verify the accuracy of the different sensor datasets.

💡Error analysis

Error analysis is the process of assessing the magnitude and sources of error in measurements. The author analyzes each dataset's error by comparing its measurements against the accepted-as-true values at the control points.

💡Sensor

A sensor is a detection device that converts measured information into an electrical signal or another required output for transmission, processing, storage, display, recording, and control. The author compares the performance of lidar and photogrammetry sensors from different manufacturers.

💡Mobile scanning

Mobile scanning is scanning performed from a moving vehicle or other platform, allowing large amounts of data to be collected quickly. The author evaluates the accuracy of the datasets collected with mobile scanning units.

💡UAV

A UAV (unmanned aerial vehicle) is an aircraft flown without a pilot on board, controlled remotely or autonomously. In the video, UAVs carry sensors for aerial scanning, collecting data on the ground and structures.

Highlights

Compared 16 different lidar sensor datasets from leading manufacturers in the reality-capture industry.

This is the first publicly available lidar accuracy comparison of this magnitude, done by an unbiased user.

The study site is a roughly 8-acre park in Central Florida.

12 features were shot in on site using a high-precision total station setup and the provided control file.

Horizontal and vertical accuracy of the datasets were compared using CloudCompare and an Excel spreadsheet.

Vegetation penetration and data density were examined.

The Leica RTC360 dataset had coordinate system problems and was ultimately abandoned.

The mobile dataset from the Hesai sensor did not capture much of the surroundings.

The NavVis VLX3 dataset had unexpectedly high horizontal error, likely due to operator error.

The terrestrial scanners provided the densest datasets.

How the data was captured significantly affected point density.

The terrestrial scans did not provide accurate RGB values in areas their cameras could not see.

The mobile scans lost density quickly past the edge of the road.

The photogrammetry dataset could not provide accurate points under overhead obstructions.

The terrestrial scanners delivered the most accurate results: average horizontal error 13 mm, vertical error 4 mm.

The SLAM scanners ranked second: average horizontal error 19 mm, vertical error 8 mm.

The UAV scanners ranked third: average horizontal error 22 mm, vertical error 11 mm.

The mobile units ranked fourth: average horizontal error 41 mm, vertical error 8 mm.

The DJI Phantom 4 dataset proved quite accurate in open areas.

How the data is captured and processed matters more than the price of the sensor.

Experience and correct workflows are more critical than expensive equipment.

Transcripts

00:00

In this video we are going to compare 16 different lidar sensor data sets from some of the leading manufacturers in the reality capture industry and try to crown a champion. [Applause] I recently got my hands on 16 different lidar and one photogrammetry data sets that all scanned the exact same site and were provided this exact same control file. I am going to compare the horizontal and vertical accuracy of each using a series of independent check shots. We will also compare vegetation penetration and density of data. There has never been a lidar accuracy comparison of this magnitude made publicly available and, most importantly, done by an unbiased user. I didn't collect or process any of the scan data, nor do I have any reason to favor any one data set over another. Let's get into it.

00:55

The study site that was used is a park in Central Florida that is about 8 acres in size. It has a perimeter road running around an interior vegetated area that has a playground and a few structures. The survey control was shot in with a compass-rule-adjusted closed traverse, and the elevations were tightened up with a closed level loop. These control points were provided to a plethora of local companies that brought their best lidar sensors and either flew, drove, walked, or leapfrogged the site. These independent companies were solely responsible for the collection and processing of their data. Once I reviewed the data, I went out to the site with a high-precision total station setup and the provided control file and shot in 12 features that could be extracted from the point cloud. Every observation I took was shot in with a minimum of three sets of observations. The instrument and equipment were field calibrated beforehand, and the observations were post-processed in a least-squares adjustment. The standard deviations of my observations from the least-squares adjustment were in the range of 2 to 3 mm. These values were confirmed in the error propagation program I wrote last year, Survey Buddy; check out my YouTube channel for more information and a free copy of that.

02:12

I then brought these check shots into CloudCompare along with each provided point cloud. To determine the horizontal error, I extracted the coordinates of the feature that was shot in as a check shot and compared the two in an Excel spreadsheet. The total station check shot was assumed to be the accepted-as-true value, and the point-cloud-extracted coordinate was the measured value. In my opinion, this was the best possible method of measuring the after-the-fact horizontal error in the data set that the end user can expect when trying to extract horizontal features. To make sure there wasn't a shift in the RGB values over the points due to a camera-to-lidar-sensor alignment issue, I checked to ensure the intensity values aligned with the RGB values. The errors measured with this method can be considered absolute errors. To measure the vertical error, I rasterized the point cloud, taking an average elevation using a grid cell size of 0.1 ft, and compared those points to the check shots. Rasterizing the point cloud in this situation basically took an average elevation of all the points in a 0.1 x 0.1 ft box and returned a new point. I then selected the four nearest rasterized points surrounding the check shot and used those as my measured point cloud elevations. I chose to do this because when the delivered point clouds were going to be used to create a surface, it is likely a similar form of averaging the elevations of the points would have been used, and I wanted to reduce the chance of outliers in the point cloud causing results to appear worse than they were. I wanted to use four elevation points so I could still see which clouds had more vertical spread compared to others. The final vertical error was calculated off an average of the four selected points.

04:02

There were a few issues I noticed while going through the data and creating the error spreadsheet. The data I received for the Leica RTC360 wasn't provided in the same coordinate system as the control file; it seemed to be a metric version of it, although when I scaled the point cloud up to US survey feet it still didn't fit correctly. There was something wrong with the data set, and I eventually had to abandon it. The mobile data set from the Hesai sensor didn't capture much of the surroundings, and since it was mobile data it did not penetrate well enough off the perimeter road to pick out horizontal locations of my check shots, but I was able to compare elevations. Looking at the data, I would assume they had the sensor pointed inward to capture the interior of the site. This data had significant vertical error and quite a bit more vertical spread than any other data set. The NavVis VLX3 data set had an unexpectedly high level of horizontal error, in the range of 0.2 ft. After speaking with one of the surveyors that was on site during the data capture process, it sounds like it was most likely caused by user error from the operator of the instrument, who was relatively new to using that unit. This data set was not used in the final accuracy comparison.

05:12

The terrestrial scanners provided the densest data sets; the Faro scan had more points per square meter than any other data set. Whether or not this density is necessary depends on your application, and anyone that's tried to work with point clouds in the billions of points knows it can be quite difficult for most computers or software packages to handle a data set this large. That being said, you can always strip away points in the office, but you cannot add new ones in. How the data was captured also affected density of points significantly: the Faro data set appeared to have more scan setups compared to the Riegl VZ-600i, for example, and as a result there were areas of the Riegl scan with much lower density. The land-based scans didn't provide accurate RGB values in areas in which their camera couldn't see, and the same is true of the UAV scans. This was expected but should be noted: for example, the land-based scans didn't colorize overhead features like rooftops and trees, and the UAV scans erroneously colorized points that penetrated tree canopies with green values that their cameras picked up from the leaves of the trees. The mobile scans lost density quickly after the edge of the road; this was especially true of the VMX-1, which had a single sensor. The VMX-2HA created a much denser data set and colorized points further off of the road due to its dual sensors and multiple cameras. The cheaper UAV sensors that collected fewer points per second produced less dense data sets, affecting horizontal accuracy for feature extraction. The photogrammetry data set was unable to provide any accurate points under overhead obstructions and had quite a bit of vertical fuzz in the point cloud.

06:55

The terrestrial data sets provided the most accurate results, with an average error of 13 mm horizontally and 4 mm vertically. This was not a huge surprise for a couple of reasons: terrestrial scanning has the most stable platform for its sensor, as it is sitting motionless on a tripod, and having a denser data set allows one to extract horizontal features more precisely. If your data set only has a point every 20 mm, it will be impossible to define a feature more precisely than that. The three units used in this comparison all had near-identical horizontal errors, although the Z+F point cloud was significantly noisier vertically than the Riegl or Faro; since I don't know exactly how each data set was processed, it is possible those two point clouds had additional post-processing to condense the vertical component of their points. The SLAM units produced the second most accurate results, with an average horizontal error of 19 mm and 8 mm vertically. The VLX2 had a very visually appealing data set that was very dense and quite accurate considering how quickly the data was captured. The Emesent data did not have RGB values and was not dense enough to pick out the interior check shots based on intensity alone. The UAV points produced the third most accurate results, with an average horizontal error of 22 mm and 11 mm vertically. The density of all the UAV sensors was too poor to pick out horizontal coordinates of the interior check shots due to overhead obstructions. The mobile units produced the fourth most accurate results of the lidar data sets, with an average horizontal error of 41 mm and 8 mm vertically. I was a bit surprised by the poor performance of the horizontal accuracy of the Riegl mobile units; I was later informed that these data sets did not use the control for horizontal shift, only vertical, and relied solely on GNSS observations to horizontally locate their data sets. Lastly, the tried and true DJI Phantom 4 Pro: this data set proved to be quite accurate out in the open, with a horizontal error of 10 mm and 11 mm vertically, but again, since this was a nadir-only flight, nothing was mapped under any overhead obstructions, and if we had check shots in any kind of vegetation they would have shown significant error.

09:18

My intent when I set out to do this accuracy comparison was to determine which of these 17 data sets was the most accurate, which sensor was the best, which could penetrate vegetation better and create a denser cloud compared to the others. As I dug deeper into the data sets, a few things became apparent. Terrestrial scanning was more accurate and provided a better point cloud than SLAM, and SLAM beat out UAV in the same categories. Mobile was a bit of a wash accuracy-wise, considering it wouldn't be a totally fair comparison if control points weren't used horizontally. The Riegl scanners did produce a relatively better product than the Hesai scanner, and the premium Riegl scanner produced a better data set in all respects compared to the other two. Photogrammetry is a perfectly acceptable solution for any sites that don't have areas with vegetation or overhead obstructions that need to be mapped. I chose not to state the accuracy of each sensor because there is so much variation in how the data was captured; if this test was repeated with different crew members capturing and processing the data, I am quite certain we would see different results. That being said, I'm fairly confident that the standings would not change in terms of accuracy. From what I've seen here, I think terrestrial would beat SLAM, and SLAM would beat mobile and UAV. As for who would take third place, I'm not as sure; I'm willing to bet there are enough variables that it would be hard to say one would always be more accurate than the other. For example, I believe mobile would be able to more accurately define a feature on the side of a building next to the road, but I believe UAV would be capable of more accurately defining a horizontal feature 30 yards off the edge of the road, nearing the limits of a mobile data set.

11:01

Hands down, the most important conclusion I can draw from this experiment is that how the data is captured and processed makes a larger difference than how much you spend on a particular sensor. Yes, some sensors will provide more points per second and have a better IMU, but if you are not tying that into control properly, then was spending that extra $50,000 on a more expensive sensor worthwhile? If you spend $100,000 on the newest SLAM scanner and strap it on someone that doesn't have the proper training and experience, there's a good chance you won't get the results you had hoped for. If you're trying to rush a job by skipping scan setups and thinking that your top-tier terrestrial scanner will allow you to cut corners, then you may be in for a surprise when you get back to the office and start processing your data. If I had to place a wager on a relatively inexperienced team with the best setup money can buy, compared to a crew that knew what they were doing with a setup that cost a quarter as much but had the knowledge of how to properly bring survey control into a project, I'm betting on experience every day of the week. If nothing else, this comparison highlighted that field and office procedures are more important than how much you spend on a sensor.


Related Tags
lidar comparison, reality capture, accuracy review, vegetation penetration, dataset analysis, unbiased review, Central Florida, park, high-precision surveying, point cloud data, industry standards, sensor performance