Lidar Accuracy Comparison - 13 Different Sensors!
Summary
TLDR This video compares 16 different lidar sensor datasets from leading manufacturers in the reality capture industry to see which sensor comes out on top. The test site is a roughly 8-acre park in Central Florida, and each dataset's horizontal and vertical accuracy is compared using independent check shots, along with vegetation penetration and data density. The results show that terrestrial scanning delivered the most accurate results, and that how the data is captured and processed matters more than the price of the sensor.
Takeaways
- 🏆 Compared 16 different lidar sensor datasets to find the standouts in the reality capture industry.
- 📈 The study used a roughly 8-acre park in Central Florida as the test site, with a perimeter road and an interior vegetated area.
- 📊 Horizontal and vertical accuracy of each dataset was compared using independent check shots.
- 🌿 Vegetation penetration and data density were evaluated for each dataset.
- 🔍 Twelve features that could be extracted from the point clouds were shot in as check shots with a high-precision total station.
- 📝 Check-shot coordinates were compared in an Excel spreadsheet to determine horizontal error.
- 📊 Vertical error was measured by rasterizing the point clouds and comparing average elevations against the check shots (a sketch of both checks follows this list).
- ⚙️ Some dataset issues were found, for example the Leica RTC360 dataset's coordinate system did not match the control file.
- 🌳 Terrestrial scanners provided the densest datasets, while mobile and UAV scanners performed worse on horizontal coordinate accuracy.
- 🖼️ The photogrammetry dataset could not provide accurate points under overhead obstructions, while the terrestrial datasets provided the most accurate results.
- 💡 The experiment showed that how the data is captured and processed matters more than how much the sensor costs.
- 👨💼 Experience and proper field and office procedures are more critical than an expensive sensor for getting the desired results.
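The two checks summarized above can be reproduced with a short script. The sketch below is a minimal illustration under stated assumptions, not the author's CloudCompare/Excel workflow: it assumes the check shots and the point cloud are already in the same coordinate system and units, and the arrays `check_shots`, `extracted_xy`, and `cloud` are invented placeholders.

```python
import numpy as np

# Hypothetical inputs (invented, not the video's data):
# check_shots: N x 3 total-station check shots (E, N, Z), accepted as true
# extracted_xy: N x 2 coordinates of the same features digitized from one point cloud
# cloud: M x 3 lidar points from that dataset, same coordinate system and units (ft)
check_shots = np.array([[500012.031, 1650020.552, 98.204],
                        [500045.118, 1650033.907, 97.881]])
extracted_xy = np.array([[500012.074, 1650020.513],
                         [500045.160, 1650033.951]])
cloud = np.random.default_rng(0).uniform(
    low=[500000.0, 1650000.0, 97.0], high=[500100.0, 1650100.0, 99.0], size=(20000, 3))

# Horizontal error: 2D distance between the accepted (total station) coordinate
# and the coordinate extracted from the point cloud.
horiz_err = np.hypot(extracted_xy[:, 0] - check_shots[:, 0],
                     extracted_xy[:, 1] - check_shots[:, 1])

# Vertical error: rasterize the cloud on a 0.1 ft grid (mean elevation per cell),
# then average the four rasterized points nearest each check shot.
CELL = 0.1  # ft
ij = np.floor(cloud[:, :2] / CELL).astype(np.int64)
cells = {}
for key, z in zip(map(tuple, ij), cloud[:, 2]):
    s, n = cells.get(key, (0.0, 0))
    cells[key] = (s + z, n + 1)
centers = np.array([((i + 0.5) * CELL, (j + 0.5) * CELL) for i, j in cells])
mean_z = np.array([s / n for s, n in cells.values()])

vert_err = []
for e, n, z_true in check_shots:
    d = np.hypot(centers[:, 0] - e, centers[:, 1] - n)
    nearest4 = np.argsort(d)[:4]  # four nearest rasterized points
    vert_err.append(abs(mean_z[nearest4].mean() - z_true))

print("mean horizontal error (ft):", horiz_err.mean())
print("mean vertical error (ft):", np.mean(vert_err))
```

The four-cell average mirrors the reasoning in the video: averaging dampens outliers the way surface generation would, while still exposing clouds with more vertical spread.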
Q & A
How many different lidar sensor datasets does the video compare?
-The video compares 16 different lidar sensor datasets.
What area do these datasets cover?
-The datasets are scans of a roughly 8-acre park in Central Florida.
How were the control points established?
-The control points were shot in with a total station and adjusted as a compass-rule closed traverse, with elevations corrected through a closed level loop.
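The compass (Bowditch) rule mentioned in this answer distributes the traverse misclosure across the legs in proportion to their lengths. The snippet below is a generic, hypothetical illustration of that adjustment with invented leg values; it is not the actual control computation from the video.

```python
import math

# Hypothetical closed-traverse legs as (azimuth_deg, distance_ft); invented numbers.
legs = [(10.0, 300.00), (100.0, 250.00), (190.0, 299.95), (280.0, 250.08)]

# Latitude (northing change) and departure (easting change) of each leg.
lats = [d * math.cos(math.radians(az)) for az, d in legs]
deps = [d * math.sin(math.radians(az)) for az, d in legs]

# A closed traverse should sum to zero; whatever remains is the misclosure.
mis_lat, mis_dep = sum(lats), sum(deps)
perimeter = sum(d for _, d in legs)

# Compass rule: each leg absorbs a share of the misclosure proportional
# to its length relative to the total traverse length.
adj_lats = [lat - mis_lat * d / perimeter for lat, (_, d) in zip(lats, legs)]
adj_deps = [dep - mis_dep * d / perimeter for dep, (_, d) in zip(deps, legs)]

print("misclosure (ft):", round(math.hypot(mis_lat, mis_dep), 3))
print("closure after adjustment:", round(sum(adj_lats), 6), round(sum(adj_deps), 6))
```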
How did the author verify the horizontal and vertical accuracy of the point cloud data?
-The author set up a high-precision total station, shot in 12 features that could be extracted from the point clouds, and brought these check shots into CloudCompare alongside each provided point cloud to determine horizontal and vertical accuracy.
Which sensor type provided the densest datasets in the comparison?
-The terrestrial scanners provided the densest datasets.
What issues should be noted regarding the datasets' RGB values?
-The terrestrial and UAV scans did not provide accurate RGB values in areas their cameras could not see. The mobile scans lost density quickly past the edge of the road, and the UAV scans erroneously colorized points that penetrated the tree canopy with green values.
Which sensor type performed best in horizontal accuracy?
-The terrestrial scanners performed best in horizontal accuracy, with an average error of 13 mm.
What problems did the author encounter while working through the data?
-Several issues came up: the Leica RTC360 dataset's coordinate system did not match the control file, the mobile dataset from the Hesai sensor did not capture much of the surroundings, and the NavVis VLX3 dataset had unexpectedly high horizontal error, most likely due to operator error.
What is the most important conclusion the author draws?
-The most important conclusion is that how the data is captured and processed matters more than the price of the sensor. Even a top-tier sensor may not deliver the expected results without proper training and experience.
What does the author recommend prioritizing when performing lidar scans?
-The author recommends prioritizing field and office procedures, which matter more than how much is invested in the sensor.
Why didn't the author publish exact accuracy figures for each sensor?
-Because there is so much variation in how the data was captured; if different crews had captured and processed the data, the results would very likely be different.
Outlines
🔍 Comparing lidar sensor datasets in the reality capture industry
This video compares 16 different lidar sensor datasets from some of the industry's leading manufacturers and attempts to crown a champion. All of the datasets scanned the exact same site and were provided the exact same control file. Horizontal and vertical accuracy of each is compared using a series of independent check shots, along with vegetation penetration and data density. A lidar accuracy comparison of this magnitude has never been made publicly available before, and it was done by an unbiased user who did not collect or process any of the scan data and has no reason to favor one dataset over another.
🌳 Study site and data acquisition
The study site is a roughly 8-acre park in Central Florida with a perimeter road and an interior area containing vegetation, a playground, and a few structures. The survey control was shot in with a compass-rule-adjusted closed traverse, and the elevations were tightened up with a closed level loop. The control points were provided to several local companies, which flew, drove, walked, or leapfrogged the site with their own lidar sensors; these independent companies were solely responsible for collecting and processing their data. After reviewing the data, the author went out to the site with a high-precision total station setup and the provided control file and shot in 12 features, each observed with a minimum of three sets of observations. The equipment was field-calibrated beforehand, and the observations were post-processed in a least-squares adjustment, with standard deviations in the range of 2 to 3 mm.
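As a rough illustration of where standard deviations on the order of 2 to 3 mm come from, the sketch below averages three hypothetical sets of observations of one check-shot feature and reports the per-axis spread. It is a simplified stand-in for the least-squares adjustment the author actually ran, and the coordinate values are invented.

```python
import numpy as np

# Hypothetical: three sets of observations of one check-shot feature,
# already reduced to coordinates in meters (invented values).
obs = np.array([
    [152400.0031, 502920.5512, 29.9214],
    [152400.0012, 502920.5535, 29.9236],
    [152400.0045, 502920.5498, 29.9221],
])

mean_coord = obs.mean(axis=0)             # accepted coordinate for the feature
std_mm = obs.std(axis=0, ddof=1) * 1000   # sample std dev per axis, in mm

print("mean E/N/Z:", mean_coord)
print("std dev (mm):", std_mm.round(1))   # spreads on the order of a few millimetres
```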
📊 Dataset comparison and analysis
Several issues surfaced during the comparison. The Leica RTC360 dataset was not in the same coordinate system as the control file, and even after scaling it still did not fit, so that dataset had to be abandoned. The mobile datasets did not penetrate well off the perimeter road, so horizontal positions of the interior check shots could not be identified. Vegetation penetration and data density varied significantly between datasets: the terrestrial scanners provided the densest data, with the Faro scan having the most points per square meter, and how the data was captured significantly affected point density. The terrestrial scanners did not colorize areas their cameras could not see, while the UAV scanners erroneously colorized points that penetrated the tree canopy with green values. The terrestrial scanners provided the most accurate results, with an average horizontal error of 13 mm and vertical error of 4 mm.
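The points-per-square-meter comparison mentioned above can be approximated by counting points in a 1 m grid over the cloud footprint. This is a generic sketch using randomly generated placeholder data, not the tool or data used in the video.

```python
import numpy as np

# cloud_xy: N x 2 planimetric lidar coordinates in meters (hypothetical data)
cloud_xy = np.random.default_rng(1).uniform(0, 100, size=(200000, 2))

CELL = 1.0  # 1 m grid cells, so the count per cell is points per square meter
ij = np.floor(cloud_xy / CELL).astype(np.int64)
_, counts = np.unique(ij, axis=0, return_counts=True)

print("occupied cells:", counts.size)
print("mean points per m^2:", counts.mean().round(1))
print("min / max points per m^2:", counts.min(), counts.max())
```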
Keywords
💡Lidar
💡Photogrammetry
💡Horizontal and vertical accuracy
💡Vegetation penetration
💡Data density
💡RGB values
💡Control points
💡Error analysis
💡Sensors
💡Mobile scanning
💡UAV
Highlights
Compared 16 different lidar sensor datasets from leading manufacturers in the reality capture industry.
This is the first publicly available lidar accuracy comparison of this magnitude, done by an unbiased user.
The study site is a roughly 8-acre park in Central Florida.
Twelve features were shot in on site with a high-precision total station setup and the provided control file.
Horizontal and vertical accuracy of the datasets were compared using CloudCompare and an Excel spreadsheet.
Vegetation penetration and data density were examined.
The Leica RTC360 dataset had a coordinate system problem and was ultimately abandoned.
The mobile dataset from the Hesai sensor did not capture much of the surroundings.
The NavVis VLX3 dataset had unexpectedly high horizontal error, likely due to operator error.
The terrestrial scanners provided the densest datasets.
How the data was captured significantly affected point density.
The terrestrial scanners did not provide accurate RGB values in areas their cameras could not see.
The mobile scans lost density quickly past the edge of the road.
The photogrammetry dataset could not provide accurate points under overhead obstructions.
The terrestrial scanners provided the most accurate results, with an average horizontal error of 13 mm and vertical error of 4 mm.
The SLAM scanners were second most accurate, with an average horizontal error of 19 mm and vertical error of 8 mm.
The UAV scanners were third most accurate, with an average horizontal error of 22 mm and vertical error of 11 mm.
The mobile units were fourth most accurate, with an average horizontal error of 41 mm and vertical error of 8 mm.
The DJI Phantom 4 dataset proved quite accurate in open areas.
How the data is captured and processed matters more than the price of the sensor.
Experience and proper workflows matter more than expensive equipment.
Transcripts
in this video we are going to compare 16
different lidar sensor data sets from some
of the leading manufacturers in the
reality capture industry and try to
crown a
[Applause]
champion I recently got my hands on 16
different lidar and one photogrammetry
data sets that all scan the exact same
site and were provided this exact same
control file I am going to compare the
horizontal and vertical accuracy of each
using a series of independent check
shots we will also compare vegetation
penetration and density of data there has
never been a lidar accuracy comparison of
this magnitude made publicly available
and most importantly done by an unbiased
user I didn't collect or process any of
the scan data nor do I have any reason
to favor any one data set over another
let's get into
it the study site that was used is a
park in Central Florida that is about 8
acres in size it has a perimeter Road
running around an interior vegetated
area that has a playground and a few
structures the survey control was shot
in with a compass rule adjusted closed
traverse and the elevations were
tightened up with a closed level loop
these control points were provided to a
plethora of local companies that brought
their best lidar sensors and either flew
drove walked or leapfrogged the site
these independent companies were solely
responsible for the collection and
processing of their data once I reviewed
the data I went out to the site with a
high Precision Total Station setup and
the provided control file and shot in 12
features that could be extracted from
the point Cloud every observation I took
was shot in with a minimum of three sets
of observations the instrument and
Equipment were field calibrated
beforehand and the observations were
postprocessed in a least squares
adjustment the standard deviations of my
observations from the least squares
adjustment were in the range of 2 to 3
mm these values were confirmed in the
error propagation program I wrote last
year Survey Buddy check out my YouTube
channel for more information and a free
copy of that I then brought these check
shots into CloudCompare along with each
provided point cloud to determine the
horizontal error I extracted the
coordinates of the feature that was shot
in as a check shot and compared the two
in an Excel spreadsheet the total
station check shot was assumed to be the
accepted as true value and the point
cloud extracted coordinate was the
measured value in my opinion this was the
best possible method of measuring the
horizontal error in the data set after
the fact that the end user can expect
when they're trying to extract
horizontal features to make sure there
wasn't a shift in the RGB values over
the points due to a camera to lidar
sensor alignment issue I checked to
ensure the intensity values aligned with
the RGB values the errors measured with
this method can be considered as
absolute errors to measure the vertical
error I rasterized the point Cloud
taking an average elevation using a
grid cell size of 0.1 ft and compared
those points to the check shots
rasterizing the point cloud in this
situation basically took an average
elevation of all the points in a 0.1x
0.1 ft box and returned a new point I
then selected the four nearest rasterized
points surrounding the check shot and
used those as my measured point cloud
elevations I chose to do this because
when the delivered Point clouds were
going to be used to create a surface it
is likely a similar form of averaging
the elevations of the points would have
been used and I wanted to reduce the
chance of outliers in the point Cloud
causing results to appear worse than
they were I wanted to use four elevation
points so I could still see which clouds
had more vertical spread compared to
others the final vertical error was
calculated off an average of the four
selected
points there were a few issues I noticed
while going through the data while
creating the error spreadsheet the data I
received for the Leica RTC360 wasn't
provided in the same coordinate system
as the control file it seemed to be a
metric version of it although when I
scaled the point cloud up to US survey
feet it still didn't fit correctly there
was something wrong with the data set
and I eventually had to abandon it the
mobile data set from the Hesai sensor
didn't capture much of the surroundings
and since it was mobile data it did not
penetrate well enough off the Perimeter
Road to pick out horizontal locations of
my check shots but I was able to compare
elevations looking at the data I would
assume they had the sensor pointed
inward to capture the interior of the
site this data had significant vertical
error and quite a bit more vertical
spread than any other data set the NavVis
VLX3 data set had an unexpectedly high
level of horizontal error in the range of
0.2 ft after speaking with one of the
surveyors that was on-site during the
data capture process it sounds like it
was most likely caused by user error
from the operator of the instrument that
was relatively new to using that unit
this data set was not used in the final
accuracy comparison the terrestrial
scanners provided the densest data sets
the Faro scan had more points per
square meter compared to any other data
set whether or not this density is
necessary depends on your application
and anyone that's tried to work with
Point clouds in the billions of points
knows it can be quite difficult for most
computers or software packages to handle
data set this large that being said you
can always strip away points in the
office but you cannot add new ones in
how the data was captured also affected
density of points significantly the Faro
data set appeared to have more scan
setups compared to the RIEGL VZ-600i for
example as a result there were areas of
the RIEGL scan with much lower density
the land-based scans didn't provide
accurate RGB values in areas in which
their camera couldn't see and the same
is true of the UAV scans this was
expected but should be noted for example
the land-based scans didn't colorize
overhead features like rooftops and
trees and the UAV scans erroneously
colorized points that penetrated tree
canopies with green values that their
cameras picked up from the leaves of the
trees the mobile scans lost density
quickly after the edge of the road this
was especially true of the VMY-1 that had
a single sensor the VMX-2HA created
a much denser data set and colorized
points further off of the road due to
its dual sensors and multiple cameras
the cheaper UAV sensors that collected
fewer points per second produced less
dense data sets affecting horizontal
accuracy for feature extraction the
photogrammetry data set was unable to
provide any accurate points under
overhead obstructions and had quite a
bit of vertical fuzz in the point
Cloud the terrestrial data sets provided
the most accurate results with an
average error of 13 mm
horizontally and 4 mm vertically this
was not a huge surprise for a couple of
reasons terrestrial scanning has the
most stable platform for their sensor as
it is sitting motionless on a tripod
having a denser data set allows for
one to extract horizontal features more
precisely if your data set only has a
point every 20 mm it will be impossible
to define a feature more precisely than
that the three units used in this
comparison all had near identical
errors associated with them horizontally
although the Z+F point cloud was
significantly noisier vertically than
the RIEGL or Faro and since I don't
know exactly how each data set was
processed it is possible those two point
clouds had additional post-processing to
condense the vertical component of their
points the slam units produced the
second most accurate results with an
average horizontal error of 19 mm and 8
mm vertically the vx2 had a very
visually appealing data set that was
very dense and quite accurate
considering how quickly the data was
captured the emisd data did not have RGB
values and was not dense enough to pick
out the interior check shots based on
intensity alone the UAV points produced
the third most accurate results with an
average horizontal error of 22 mm and 11
mm vertically the density of all the UAV
sensors was too poor to pick out
horizontal coordinates of the Interior
check shots due to overhead obstructions
the mobile units produced the fourth
most accurate results of the lidar data
sets with an average horizontal error of
41 mm and 8 mm vertically I was a bit
surprised by the poor performance of the
horizontal accuracy of the Regal mobile
units I was later informed that these
data sets did not use the control for
horizontal shift only vertical and
relied solely on gnss observations to
horizontally locate their data sets
lastly the tried and true DJI Phantom 4
Pro this data set proved to be quite
accurate out in the open with a
horizontal error of 10 mm and 11 mm
vertically but again since this was a
nadir only flight nothing was mapped under
any overhead obstructions and if we had
check shots in any kind of vegetation
they would have shown significant
error my intent when I set out to do this
accuracy comparison was to determine
which of these 17 data sets was the most
accurate which sensor was the best which
could penetrate vegetation better and
create a more dense cloud compared to
the others as I dug deeper into the data
sets a few things became apparent
terrestrial scanning was more accurate
and provided a better Point Cloud than
slam and slam beat out UAV in the same
categories mobile was a bit of a wash
accuracy wise considering it wouldn't be
a totally fair comparison if control
points weren't used horizontally the
RIEGL scanners did produce a relatively
better product than the Hesai scanner
and the premium RIEGL scanner produced a
better data set in all respects compared
to the other two photogrammetry is a
perfectly acceptable solution for any
sites that don't have areas with
vegetation or overhead obstructions that
need to be mapped I chose not to State
the accuracy of each sensor because
there is so much variation in how the
data was captured if this test was
repeated with different crew members
capturing and processing the data I'm
quite certain we would see different
results that being said I'm fairly
confident that the standings would not
change in terms of accuracy from what
I've seen here I think terrestrial would
beat slam and slam would beat mobile and
UAV as for who would take third place
I'm not as sure I'm willing to bet there
are enough variables that it would be
hard to say one would always be more
accurate than the other for example I
believe mobile would be able to more
accurately Define a feature on the side
of a building next to the road but I
believe UAV would be capable of more
accurately defining a horizontal feature
30 yards off the edge of the road nearing
the limits of a mobile data set hands
down the most important conclusion I can
draw from this experiment was that how
the data is captured and processed makes
a larger difference than how much you
spend on a particular sensor yes some
sensors will provide more points per
second and have a better IMU but if you
are not tying that into control properly
then was spending that extra $50,000 on
a more expensive sensor worthwhile if
you spend 100,000 on the newest slam
scanner and strap it on someone that
doesn't have the proper training and
experience there's a good chance you
won't get the results you had hoped for
if you're trying to rush a job by
skipping scan setups and thinking that
your top tier terrestrial scanner will
allow you to cut Corners then you may be
in for a surprise when you get back to
the office and start processing your
data if I had to place a wager on a
relatively inexperienced team with the
best setup money can buy compared to
a crew that knew what they were doing
with a setup that cost 1/4 as much but
that had the knowledge of how to properly
bring survey control into a project
I'm betting on experience every day of
the week if nothing else this comparison
highlighted that field and office
procedures are more important than how
much you spend on a sensor