ODA Summit 2021 - Part 3: Beyond Data Interoperability
Summary
TLDR: This video discusses the use of 3D scanning in the architecture, engineering, and construction (AEC) industry and how automation can improve data-processing efficiency. The experts cover the Scan-to-BIM process, the use of laser scanning and drones on projects, and challenges such as data conversion and model creation. It also introduces ODA's visualization engine, demonstrates its role in engineering design, and looks ahead at technology trends through case studies and future plans.
Takeaways
- 🌐 More and more companies are using ODA for visualization and web application development, and are looking for innovative technology to solve complex industry problems.
- 🏗️ The use of Scan to BIM in the AEC industry is increasing, especially on retrofit and renovation projects.
- 🏢 The demand for creating models on projects keeps growing, particularly for Scan to BIM (Building Information Modeling) applications.
- 📈 Laser scanning and drone technology can capture point cloud data for large sites, providing a more granular view for project execution.
- 🔄 Converting point cloud data into models is a challenge, especially under tight schedules and with limited resources.
- 🤖 Automation plays an important role in 3D scanning; in the future, algorithms may help filter scan data and identify the relevant elements.
- 🛠️ The industry is looking for a "magic button" that efficiently converts point cloud data into usable models.
- 📱 Advances in laser scanning are making the data easier to consume, for example using tablets and smartphones for interior design.
- 🎥 ODA's visualization engine allows adding professional 2D or 3D graphics to engineering applications on any platform.
- 🔄 By integrating ODA Visualize, IMSI Design achieved performance gains and reduced development costs in its flagship computer-aided design application.
- 🚀 Future plans for ODA Visualize include reflection plane support, fast object transforms, and advanced features such as subpixel morphological anti-aliasing.
Q & A
Why are more and more companies choosing ODA for visualization and web application development?
-ODA's strong visualization capabilities and flexibility have made it increasingly popular with companies tackling complex industry problems, including challenges beyond data interoperability.
What is the role of Scan to BIM in the AEC industry?
-Scan to BIM is mainly used to capture accurate data on existing buildings, so that during retrofits or renovations teams can better understand the existing structures and processes and effectively integrate new designs and workflows.
What is the biggest challenge Walsh faces on large upgrade projects?
-The biggest challenge is integrating new piping, filtration systems, and process flows into an existing facility while ensuring these new elements connect smoothly with current processes and procedures.
What is the main purpose of laser scanning in design and construction?
-The main purpose is to capture the actual as-built condition of an existing building, providing accurate data to support subsequent design and construction.
Why does automation play an important role in 3D scanning?
-Automation helps filter unnecessary information out of scan data, improves data-processing efficiency, and helps convert scan data into usable models quickly.
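One way such filtering can be automated, as a minimal sketch of the general idea rather than any specific vendor's tool, is to fit dominant planes in the point cloud with RANSAC and label them as floors, ceilings, or walls from the plane normal (all function names here are illustrative):

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=0.02, rng=None):
    """Fit a dominant plane to an (N, 3) point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best_mask = best_normal = best_d = None
    for _ in range(n_iters):
        # pick 3 random points and build the plane through them
        p = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p[1] - p[0], p[2] - p[0])
        length = np.linalg.norm(normal)
        if length < 1e-9:        # collinear sample, try again
            continue
        normal = normal / length
        d = -normal @ p[0]
        # inliers are points within `threshold` of the plane
        mask = np.abs(points @ normal + d) < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_normal, best_d = mask, normal, d
    return best_normal, best_d, best_mask

def classify_plane(normal, vertical_tol_deg=10.0):
    """Label a plane by its normal: a near-vertical normal means floor/ceiling."""
    tilt = np.degrees(np.arccos(min(1.0, abs(float(normal[2])))))
    return "floor/ceiling" if tilt < vertical_tol_deg else "wall"
```

Real Scan-to-BIM pipelines layer clustering, semantic rules, and QC on top of primitives like this, which is where the time savings discussed above actually live.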
What does ODA's visualization engine, ODA Visualize, provide?
-ODA Visualize adds professional-grade 2D and 3D graphics to any engineering application on any platform, including high-quality visualization of point clouds, meshes, and BIM models.
Why is ODA's visualization technology an important advance for the AEC industry?
-It provides an optimized way to process and present complex model data while supporting cross-platform use, helping improve design and construction efficiency.
What challenges did IMSI Design face when integrating ODA Visualize?
-The challenges included updating the rendering engine to improve performance, reducing development costs, improving the customer experience, and making rendering scale across different hardware platforms.
Which ODA Visualize features make it especially useful for design review?
-High-quality visualization, fast selection and highlighting, and support for multiple design formats make it especially useful for design review, helping users validate many aspects of a project and communicate and collaborate effectively.
How does the Open Cloud platform handle complex files and data?
-The Open Cloud platform provides a reference API, the Visualize.js library, a user management API, a role-based API, file operation APIs, and a custom jobs API. Together these tools handle complex files and data, support extracting geometry or property data from files, and let users run custom jobs to extract additional information.
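Conversion and extraction jobs on a cloud platform like this are typically asynchronous: the client uploads a file, starts a job, then polls its status until the result is ready. Below is a minimal, library-agnostic sketch of that polling loop; the status names are assumptions for illustration, not the actual Open Cloud API:

```python
import time

def wait_for_job(get_status, poll_interval=0.01, timeout=5.0):
    """Poll an asynchronous job until it finishes.

    get_status: zero-argument callable returning one of
    'inprogress', 'done', or 'failed' (hypothetical states;
    a real job API may use different names).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "done":
            return True
        if status == "failed":
            raise RuntimeError("conversion job failed")
        time.sleep(poll_interval)     # back off before asking again
    raise TimeoutError("job did not finish in time")
```

In practice `get_status` would wrap an authenticated HTTP request to the platform's job endpoint; injecting it as a callable keeps the loop testable.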
Outlines
🌐 ODA's diverse applications and the Scan to BIM discussion
This segment notes that more and more companies are using ODA for visualization and web application development, and are turning to ODA for innovative solutions to complex industry problems. It highlights the role of Scan to BIM (Building Information Modeling) in the AEC (architecture, engineering, construction) industry and introduces a panel discussion devoted to this technology. The full version of the discussion is available on the YouTube channel and is worth watching for its deeper insights.
📊 Industry challenges of 3D scanning and model conversion
This part examines practical applications of 3D scanning on projects, especially retrofit and renovation work. It covers the progression from traditional laser scanning to drone-based scanning of large sites and how these technologies capture accurate field data. The experts share the challenges of working with point cloud data and converting it to 3D models, including the sheer volume of data and the complexity of processing it, and how automation and improved algorithms can address these problems.
🔍 Challenges and opportunities in promoting 3D scanning across the industry
This segment discusses the obstacles to wider adoption of 3D scanning in the AEC industry, particularly the high cost and high technical barrier to entry. The experts discuss how innovation and new tools and platforms could simplify data capture and processing, point out the potential of 3D scanning for personal and small-scale projects, and stress the importance of collaboration and knowledge sharing within the industry.
🛠 A practical case study of ODA Visualize
This segment introduces the use of ODA Visualize in engineering design and CAD software development, showing through the IMSI Design case study how ODA Visualize was integrated into their TurboCAD product. It discusses how the technology improved rendering performance, enhanced the user experience, and supports a range of hardware platforms, and shares key development considerations, including collaboration with the ODA team, source code access, and using two library versions side by side.
🌟 Animation and measurement in ODA Visualize
This part focuses on the ODA Visualize animation API and measurement features, showing how models can be animated and precisely measured within a visualized scene. It explains how these features help design applications present and analyze model data more intuitively and flexibly, and covers advanced customization and performance optimization techniques such as font caching and reducing GPU calls.
🚀 Future directions for ODA Visualize
This segment covers future plans for ODA Visualize, including reflection plane support, improved object transform handling, and subpixel anti-aliasing. It also previews the new VSFX file format, designed to reduce file size and support efficient data streaming, and mentions plans, driven by strong market demand, to develop a plugin system for Open IFC Viewer to better serve AEC users.
📈 ODA in cloud infrastructure and WebAssembly applications
This segment presents practical cases of ODA technology in cloud infrastructure and WebAssembly development. It first shows, through the Luna engineering document management system (EDMS), how ODA technology is used to manage, review, and collaborate on engineering files. It then details the Open Cloud solution, emphasizing its ability to handle complex CAD and BIM files, optimize data storage, and run custom jobs. Finally, a presentation from Google's V8 team explores debugging and profiling WebAssembly applications in the Chrome browser.
Keywords
💡Visualization
💡Laser scanning
💡Point cloud data
💡Building Information Modeling (BIM)
💡Design review
💡WebAssembly
💡Model conversion
💡Automation
💡Open BIM standards
💡Cloud computing
💡Performance optimization
Highlights
More and more companies today are using ODA for visualization and web application development, and they are looking for solutions to other complex industry problems.
This part of the summit is dedicated to early technologies that go beyond data interoperability.
The use of Scan to BIM is increasing; essentially every project now involves some form of scanning.
Across the building life cycle, people want to understand the motivation for scanning in the front end of the design and construction process.
Laser scanning work has historically focused on capturing the actual condition of buildings, for example on large airport projects.
Because a building's historical as-built documentation cannot be trusted, people want scanning to confirm the building's actual condition.
Drone use for 3D scanning is increasing, but there is still no tool that makes 3D scanning practical for small spaces and small projects.
The role of automation in 3D scanning is evolving, especially in processing scan data and converting it to models.
The industry's current challenge is efficiently converting point cloud data into models, which holds many people back.
ODA is trying to provide core technologies that fill these gaps, rather than waiting for each individual vendor and industry sector to solve them alone.
Visualization plays a key role in engineering design, whether the data is a point cloud, a mesh, or a BIM model.
ODA's visualization engine allows adding professional 2D or 3D graphics to any engineering application on any platform.
By integrating ODA Visualize, IMSI Design improved rendering performance and quality in its TurboCAD family of products.
The ODA Visualize integration delivers the best user experience across different graphics systems.
ODA Visualize supports multiple rendering devices, meaning a client application can run tile printing or PDF export using the graphics system cache together with an on-screen rendering device for optimal performance.
ODA Visualize now supports transparency in the GDI device, which can be used for both printing and rendering.
Future plans for ODA Visualize include adding reflection plane support, fast object transforms for animation, and subpixel morphological anti-aliasing.
Transcripts
[Music]
more and more companies today
are using oda for visualization they
are using oda for web application
development and they are looking to oda
for solutions to other complex industry
problems this last part of our summit is
dedicated to early technologies that go
beyond data interoperability we'll start
this section from a panel discussion
about the role of scan to bim
in the aec industry in the scope of our
summit you can see a part of this
discussion with brief insights but i'd
like to assure you that it's worth
watching a full version available on our
youtube channel
[Music]
you know looking at your practices in
general
how often is your company seeing
projects where there is a need to create
models from these scans and the point
cloud data and there is definitely an
increase in
demand for the use of this it's to the
point that basically every project now
in the last couple years has had some
form of scanning whether it's
to um
scan to bim
the last project was a substation in a
basement uh
with complex floor and ceiling and wall
condition where we see the greatest
value is in those retrofits those
renovations um
walsh has three different verticals it's
our building group civil group and our
water group and
water time and time again when you're
doing massive upgrades how are you
retrofitting this existing facility with
new pipings new filtration systems new
processes so
really having a granular view on how
this is going to integrate into the
current process and procedures is very
difficult and you know can go out there
and take a tape measure and measure it
but you know that really doesn't provide
um
great insight uh when you're throwing
these models together um as well as how
you're gonna get this information or
this equipment into these spaces
yeah for us it's it's very similar
we've
deployed point cloud technology for the
last decade
either via laser scanning directly
or increasingly with with drones to
capture the larger type sites and
i tend to look at it from the building
life cycle perspective so i always like
to start with why
why do people want to do the scan
in the front end of the design
construction process you have the as
built condition
so a lot of the
front end laser scanning work that we've
done in the past has has been around
capturing the as-built condition for
example we did mem international airport
which had an extremely large footprint
you know as an airport of course and it
was built
where nobody really trusts the as-built
documents because they go back 10 20
30 generations you know they're like
you know
you got you want the 1975 version or you
want the 1982 or do you want the 1994
and they recommend you look at all of
them because they all have different
pieces of information on them but none
of them have the truth
so
because you can't trust the as built
documents that we're inheriting from
the history of that building
and people want the confidence to truly
know what's out there we've been
scanning to document the as built
condition something for us that that is
important is the scale that we have some
projects that you know we have towers
that we want to retrofit we're
definitely going to do that but there is
also a niche in in a a practice area
that we call workplace
and and there is so many smaller spaces
and uh and we see the drones as gene
mentioned
we see the point clouds we see all type
of 3d scans but there is no um a tool
that help
that small niche when you have to 3d
scan something you don't want to send
your team there
but you need to capture the space so
there is a lot of opportunities in there
in the in the field we have seen
increasingly um i would say probably
a hundred percent of the projects that
we have that are existing conditions
they're doing 3d scanning maybe you can
uh help also describe what is your
process i mean are you
are you going out and doing the laser
scanning yourself with your own units
are you hiring somebody third party to
do it are you then getting the point
cloud data and and translating that
yourself or is that also a third party
service what you know what kind of
things are you relying on depending on
the complexity of the project
the availability of our scanners you
know that's that's always a
difficult thing to approach we have
multiple scanners within our
organization but
you know we have a retrofit project
comes up you know that's that's just
like a full-time fte that's sitting
there and that's being leveraged on that
project
one of the banes of our existence is
when we hear we need to perform laser
scanning on that project and then the
first question that comes to mind are we
creating the model oh no the architect's
creating the model no problem we'll go
scan that that's we're completely fine
with scanning that project because
that's the easy part scanning and
registering those points no problem uh
the complexity of really developing that
model that's time consuming and uh
unfortunately these days in the industry
it's you're closing up one project
starting up another project
simultaneously and bidding a project in
between the two of those so
it's very difficult to kind of stop do
all those iterations and go through it
from the architect's standpoint um we've
seen a couple different variations we've
never done our own scanning we're
usually contracting out it's either with
the general contractor
or it is coming from an outsource
firm of some sort
and then from that point once we've done
the scan
we have both modeled in-house and we've
also outsourced the model or the general
contractor has provided the model from
their scan a big point it's it's
something we've toyed with as far as the
the model conversion
it's something that we used to
handle all of that in-house
the software years ago has been
developing where there's
automatic feature extraction
but what we found is
particularly with piping type systems
the software will create those elements
but then you spend just as much time qc
and making sure everything fits properly
that
we weren't really seeing the the
efficiencies to make that process go and
that's that's kind of why we're here
talking about this the
industry's been chasing this magic
button where point cloud
allows us to capture more data than
we've ever had on the construction
site and then with drones as well
but the ability to efficiently convert
that to a model is
is something that really
holds a lot of people back
so what will be the role of automation
in 3d scanning because we all face the
same challenge when we go and we scan
we get a lot of information that
we probably don't need so for example um
i get a 3d scan i come with a bunch of
piping that has they're not really
relevant for the
let's say the scope of work that we're
doing i wonder if in the future some
kind of algorithm is going to start
understanding
what is piping what is a wall you know
it should be something that
start filtering those things we we have
seen some automation with you understand
what the what type of geometry is there
and say okay this could be a wall and it
and it throws a wall there
um sounds like cost and the time
associated with a making the scans and b
making those conversions is a very big
deal
uh you know obviously not every
not every type of stakeholder you know
if you're an architect and you're
focusing on design
that may not be feasible within your
firm or you know sounds like on the
contracting side you figure out ways to
make it work because
that is a part of your process that is a
fundamental part of what you do and
being able to do the scanning but maybe
conversion isn't always necessary or if
you could get conversion on top of that
as a
as an easy gain then it would be much
easier to justify
you know so so oda is is
you know in this this realm of trying to
provide sort of these core technologies
that fill these gaps um but what do you
think about that approach i mean does it
does that seem to make sense to you
rather than sort of waiting for
you know each of the individual vendors
and sectors to just sort of figure it
out
well yeah i think there's a need so it's
interesting because um one part of this
is how do you consume the scan data
right
one of the issues that we've always had
of course you know when you talk about
laser scanning an airport you can
imagine how difficult this is to consume
in every sense hardware
just transferring the information making
accessible making it available bringing
it into other systems so we obviously
need more optimized ways of being able
to make that data available in a wide
range of platforms in a wide range of
environments everything
from uh you know tablet computer all the
way up to your smartphone you're seeing
movement on the laser scanning side of
things you know it's everyone's got the
newest latest and greatest cell phone or
ipad and they now all of a sudden have
laser scans on them nobody knew what the
heck they were going to do with it you
know it was for the ar so you can have a
little dancing rhino on your desk but
then people in our industry started
being like wait a second i can now
work with an interior designer i could
scan my apartment send it off to
interior designer and get a beautiful
decoration of my apartment well
if we could do that in our personal
level why why aren't we trying to
holistically approach this in our
industry to make it easier if i can go
out there with an ipad all my subs or all
my foremen have ipads out there let's start
scanning this let's start tying this
into different third-party solutions uh
procure not procurement uh production
tracking things like that so
i think the sky's the limit when it
comes to developing solutions for this
workflow
because it's underutilized and the
barrier to entry is so high that not
enough people are playing here yet so
it's kind of a
majorly under underutilized uh principle
in our industry and i i think it's it's
worthwhile and it's worth the investment
to start understanding what it could do
for you and your projects to your point
i i think that's definitely somewhere
where the industry as a whole would
definitely benefit there are a lot of
different groups out there that have
different tools and
i'm sure like most of you we we have to
be pretty agnostic we have to use a lot
of different tools there's a lot of
different conversions
to get what you need
each has their strengths have their
weaknesses
and we're constantly going through that
process on
on the scanning and software side just
to see
okay where's the industry at who's who's
bringing new
new technologies new potential to it but
the ability to
bring all those collective ideas
together in a simple
efficient solution i i think is
an amazing challenge but definitely
something that that would benefit the
whole industry i will follow up with
what you're saying james
and i see the same thing so one would be
the part of visualization that gives
access to clients and everybody in the
team to really quickly understand the
space or even take a simple measurement
you know
now to be able to access that data it is
pretty heavy and it is hard to to
anybody in the team to
just open and take some quick
measurements
and the other thing would be translation
so how you take all that point point
cloud and you make it accessible to
whatever platform you're going to use it
you know we cannot restrict our
our users or whoever is is working with
this data to only work in one specific
brand
software so we cannot say well you if
you get this one you use this tool to
translate the data and now you need to
use autodesk revit you know or you need to
use um archicad so it needs to be
something called like ifcs you know that
is is it can work across any platform
and you can open this this file in
sketchup if you want a rhino and you can
get the same precision
[Music]
visualization plays a key role in
engineering design
whether your data is a point cloud a mesh
or a bim model
now it is time to take a closer look at
oda's professional visualization engine
[Music]
oda visualize allows you to add
professional 2d or 3d
graphics to any engineering application
on any platform
let's let's take a closer look at what
this technology can do for you and i
will start from a success story provided
by one of our oldest founding members imsi
design
last year they have integrated oda
visualize into their flagship computer-aided
design application
and in the context of these samples
i would like to draw attention to the
creative approach that allows emulating a
feature which is only in our future
plans here i mean reflection planes
the next demos illustrate how visual styles
can create an impressive visual
representation even for a simple model
changing a few visual style options
like edge model or edge crease angle can
significantly alter the rendering and
add different effects to the final model
representation now imsi design
will tell you more about the experience
with oda visualize
[Music]
hello i'm tim olsen vice president
development at imsi design i've had the
opportunity to be involved with cad
development since the early 80s close to
40 years now
and throughout that time visualizing cad
data such as text dimensions line styles
wireframe and 3d facet data have been a
constant challenge
it's been a challenge because of the
changes we've seen in the underlying
software and hardware interfaces
and driven by our customers ever
demanding need to support larger and
larger files
today i'm going to share with you our
experience of integrating oda visualize
with our turbocad family of products
imsi design has been developing desktop
and mobile cad products since 1983
we have distributed over 16 million
products during that time targeting
consumer as well as professional cad and
aec users on windows mac android and ios
platforms
our products include turbocad designcad
floorplan and our home design
architectural series our turbocad for
windows product is our first application
to integrate visualize
this product has extensive 2d drafting
annotation surface and solid modeling
geometric constraints
parametric design and an extensive suite
of interoperability solutions turbocad
is used by our customers on a wide
variety of platforms
ranging from integrated gpus
to high-end dedicated boards such as the
nvidia geforce
we need our rendering implementation to
scale with the platform providing all of
our customers with the best possible
experience
our goal with visualize was to update
our rendering engine to improve
performance reduce our development costs
by consolidating rendering engines and
to improve our customers experience
our first product to integrate visualize
with is turbocad
turbocad previously supported four
graphic engines including gdi opengl
lightworks and red sdk
our render manager wraps these engines
under one layer our approach was to add
visualize over a 12-month development
period as a fifth rendering engine
through two development phases
phase one we were to duplicate the
current functionality into the turbocad
2021 release
and phase two exposed new features in
the turbocad 2022 release turbocad 2021
with visualize was released last spring
with positive feedback from our
community of users
we observed two areas of notable
improvements in performance and quality
regarding performance we saw up to 10x
an order of magnitude performance
improvement for wireframe and 3d facet type
data on high-end graphics boards
and 2-3x performance on lower end boards we
also saw some unexpected quality
improvements regarding anti-aliasing and
ambient occlusion
regarding anti-aliasing we are able to
expose methods and parameters to
fine-tune those jaggies out of both
wireframe and 3d facet type data
and with respect to screen
space ambient occlusion we are able to
expose some simple lighting setups for
users to quickly get more realistic
renderings
next up are four things we believe
helped us integrate visualize into our
turbocad product
first the oda visualize team was very
supportive in answering our questions
providing suggestions and listening to
our suggestions
i encourage anyone getting involved with
oda components to review the example
code use the online help and get
involved with the user forum second
as a founding member we have access to
source code
source code provided us with the ability
to step through the debugger inside the
example code as well as through our code
and provided us with additional insight
into the workings of visualize
third the ability to use two different
versions of oda libraries together in
one application this enabled us to
update visualize in a monthly manner
without affecting the base oda library
which is modified source requiring more
time to update for us
and lastly our lead engineer vlad veslov
was able to leverage his prior
experience with rendering engines
towards visualize
we found implementing visualize
consistent with other rendering engine
concepts with turbocad 2021 we
duplicated our existing rendering engine
to include oda visualize with turbocad
2022 we extend our rendering engine with
new features based on visual styles
new visual styles such as x-ray
conceptual and gray along with
customizable parameters will allow more
ways for our customers to view complex
data
in closing our original goal was to be
as fast or better than our competitors
with our rendering engine while
improving the overall customer
experience
with the help of oda visualize we
believe we have accomplished that goal
going forward it is our intention to
leverage visualize into more of our
products including our mac-based products
i look forward to our development team
focusing more on creating unique product
features
where oda components such as visualize
are a key component of that strategy
thank you
now we are moving forward with our
presentation there are many interesting
things ahead of us
[Music]
the animation api allows objects within
a visualized scene to be animated
entity animation allows translation
rotation and scaling to be applied to
the objects in the model
objects in oda visualize are stored in a
hierarchical structure and the animation
api takes into account this hierarchy
for example we can specify one animation
for an entity and another one for its
sub-entity and the resulting sub-entity
animation will be a combination of both
its own and its parent animations
view animation introduces a new camera
object that contains properties similar
to the view
using the camera object you can create
things like walkthrough animations for
complex building models
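Under the hood, a walkthrough animation reduces to interpolating camera parameters between keyframes. The sketch below illustrates that idea in plain Python; it is independent of the actual Visualize camera API, and all names are ours:

```python
def interpolate_camera(keyframes, t):
    """Linearly interpolate camera position and target at time t.

    keyframes: sorted list of (time, position, target) tuples, where
    position and target are (x, y, z) tuples. Times outside the
    keyframe range clamp to the first or last keyframe.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1], keyframes[0][2]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1], keyframes[-1][2]
    for (t0, p0, q0), (t1, p1, q1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)          # blend factor in [0, 1]
            lerp = lambda u, v: tuple(x + a * (y - x) for x, y in zip(u, v))
            return lerp(p0, p1), lerp(q0, q1)
```

A production engine would typically add easing curves and orientation interpolation on top, but the keyframe blending step stays the same.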
now let's talk about measurements
the ability to select and measure parts
of a model is a popular feature for many
types of design applications
for convenient measurement visualize
supports snap options like nearest point
center point start or end points and so on
also we have added a measurement tool to our
sample applications which can be used as
a base for the measurement support to
your client applications
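The snap options mentioned here all follow the same pattern: the pick point is matched against candidate snap points and the closest one wins. A tiny illustrative sketch of that (the names are ours, not the Visualize API):

```python
import math

def snap(pick, candidates):
    """Return the (mode, point) of the snap candidate closest to the pick.

    pick: an (x, y, z) tuple; candidates: dict mapping a snap-mode
    name to its (x, y, z) point, e.g. {'endpoint': ..., 'center': ...}.
    """
    # math.dist computes Euclidean distance between two points
    mode = min(candidates, key=lambda m: math.dist(pick, candidates[m]))
    return mode, candidates[mode]
```

Real snapping also applies a pixel-radius aperture and per-mode priorities, but the nearest-candidate search is the core of it.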
[Music]
let's talk about highlighting
customization and scene graph highlighting
customization provides a way to control
edge and face style color masks for
highlighted objects transparency
displaying of the lines or contours and
drawing order of the highlighted
geometry can be also controlled
in addition we can use different
highlighting styles in a single scene
different parts of the model can be
highlighted using different styles to
meet the needs of even the most complex
design applications similarly this
technique can be used to highlight
different kinds of objects with the best-fitting
highlighting style the next block of our
presentation will be devoted to the
visualize scene graph
it uses background processing of
geometry data to minimize number of gpu
calls and optimize rendering performance
we are actively developing and enhancing
this feature visualize uses a truetype
font cache and block references cache
to optimize vectorization performance
and reduce memory usage for complex
models
in the past this reduced memory usage
has come at the cost of slower rendering
performance this year we have improved
our scene graph to work seamlessly with
our font and block caching giving
reduced memory usage without any serious
loss in the rendering performance
another critical scene graph bottleneck
is overall memory usage we have reduced
scene graph memory usage by eliminating
intermediate geometry data which was
previously kept to facilitate selection
and highlighting processes
this table shows the benefits of this
work for different models
[Music]
during the last year we have significantly
improved the screen space ambient
occlusion ssao it is a rendering
technique which allows to add an effect
of global illumination to the scene
first of all now we are using enhanced
blur to avoid shadow noise
in addition the ssao can now be adjusted
for the different zoom levels
either manually or in automatic mode
when visualizing ifc in many cases the
full ifc model is divided into a set of
separate files electrical mechanical
plumbing
open ifc viewer is now able to load all
these files into a single scene
all these modules can be manipulated in
object explorer pane
we can control visibility perform
selection and many other operations like
for example collision detection between
objects from different models
for example we can test that analytical
equipment
doesn't collide with plumbing
also we have added the new functionality
to the collision detection palette
to allow the transparency of the
collided objects to be customized this
year we have continued to work on apple
metal renderer for ios and mac platforms
since the metal device involves a similar
code base to the opengl es 2 device
we have worked on a deeper unification
of the data providing for these two
devices
also we have added new functionality to
the metal device
like support of the stencil
attachment support of the indexed geometry
cutting planes
another important feature is the frame
buffer support
one of the benefits from the frame
buffer support is the correct order
independent transparency even in
multi-target mode
our future plans with the metal device
include the transferring of the
opengl es 2 geometry shaders to
metal as well as implementation of the
advanced features like screen space
ambient occlusion
fast approximate anti-aliasing and more
[Music]
oda visualize now supports multiple
rendering devices for a single database
this means that client application for
example are able to run tile printing or
pdf exporting with optimal performance
using graphic system cache together with
on-screen rendering device previously
this was not possible another small
enhancement is ability to draw
background texture on faces in hidden
line mode
this enhancement can be used to create
an effect when background is visible
through geometry in contrast with
classic hidden line where a constant
background color is used to fill
geometry the next item involves raster
images previously
we could select a raster image only by
clicking on its border highlighting is
also applied only to raster image
borders
and now oda visualize provides an api
to enable selection and highlighting of
the full raster image contents a popular
question on oda technical support forum
is how do i
render a raster image with transparent
background
and now client applications are able to
request rendering of
32 bits per pixel raster images with
transparent geometry for any oda
vectorization model
[Music]
now we will mention a few final
visualization features
and the first one is a transparency
support in our gdi device which can be
used both for printing and for rendering
the next two options are related to the
import part of the oda visualize the
first one allows generating a so-called
3d view during import of revit files if
such view is absent
the second one is the ability to import
data from frozen layers in dwg files
the last topic that i want to mention is
the viewcube implementation
viewcube is a clickable object that allows
you to change the current view
it is a quite popular tool in modern
computer-aided design applications
[Music]
now it is time to discuss our future
plans
first of all we plan to add
support of reflection planes
the next topic is a fast object
transform
this is a very important feature for
entity animation
for the cases when many objects are
involved in animation
like for example an explode tool
also we plan to add the
implementation of the subpixel
morphological anti-aliasing smaa
it should solve the problem with fast
approximate anti-aliasing when one pixel
lines appear to blur
we are now working and we will continue
to work on the new file format for the
visualize vsfx
this new format will reduce the size of
the file up to 2.5 times
and will support efficient streaming
during last year
we have seen a strong interest in our
application open ifc viewer
and as a part of this interest we have
received many requests for the creation
of a plugin system for this application
and we plan to work in this direction
before we move on to next topics let's
see an interesting member case
demonstrating a complex use of different
oda tools in the cloud
including visualize and publish
[Music]
hi there i'm sean the product lead at
lunar and onset design coming to you
today from melbourne australia and
thanks for tuning in in this talk i'll
run through a basic introduction of the
lunar engineering document management
system or edms for short and dive into
some of the key oda technologies that
make our product possible
onset design is a small company based in
australia and we specialize in
everything related to engineering
document management
we've been an oda partner for a number
of years and we assist companies
implementing edms to manage their cad
files
one of the key challenges faced by our
customers has been receiving and
reviewing large sets of drawings from
their contractors and we've historically
used the oda libraries as a part of a
portal product to implement validations
on receipt of drawings
this allows us to automatically check
title block metadata quality among other
things and speeds up the qa process
allowing as built to be received and
processed more efficiently
luna is a cloud-based edms hosted on
microsoft azure that grew out of this
original portal product
it's designed to allow smaller companies
to get their edms up and running fast
without the need for a complex design
and config process
the key features of lunar document
upload revision review and collaboration
we use a key set of oda technologies to
make these features possible
[Music]
to show how the oda tech fits into our
application architecture i'll introduce
you to something that we call the lunar
document processing pipeline or the
journey that documents go through after
being uploaded to luna firstly as
documents are uploaded to luna they're
automatically published using the
publish sdk to thumbnail and pdf
thumbnails are used for previewing
documents in our list view whilst pdfs
are used for document review and markup
following this
we push the documents through the
drawings.net sdk
to extract the various document metadata
properties
and store the title block and xref data
next to the document in azure blob
storage as json
this can be used for validation metadata
review and more later on in the drawings
workflow
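The metadata step described here can be sketched as a small pipeline stage: extracted title-block and xref data is serialized as JSON and stored under a blob name derived from the document key. In the sketch below the store is an in-memory dict standing in for the real Azure Blob Storage container, and all names are illustrative:

```python
import json

def store_metadata(blob_store, doc_key, title_block, xrefs):
    """Serialize extracted drawing metadata as JSON next to the document.

    blob_store: any dict-like mapping of blob name to bytes, a
    stand-in here for an Azure Blob Storage container client.
    """
    payload = {"titleBlock": title_block, "xrefs": xrefs}
    # store the JSON alongside the document, keyed off the document path
    blob_store[f"{doc_key}.metadata.json"] = json.dumps(
        payload, indent=2, sort_keys=True
    ).encode("utf-8")
    return payload
```

Keeping the metadata as plain JSON next to the source file is what makes the later validation and review steps cheap: they can read it without re-parsing the drawing.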
when models are uploaded we also push
them through the open cloud file
converter automatically generating vsf
files which can be used to view the
ifc models in the browser as our
customers are starting to work with more
model formats we've seen an increased
demand for the users to be able to view
and navigate models in luna without the
need for an installed component after
looking around at the various model
viewing options available we finally
landed on open cloud
using opencloud we've been able to
enable customers to view ifc models as
well as native dwg and dgn files
natively in the browser without the need
for first translating them to pdf the
next step will be to use the bimrv
revit sdk to enable this functionality
for revit models as well
[Music]
Whilst there are alternative methods to achieve some of these features, for example 3D model viewing, they typically come with trade-offs: requiring a model to be uploaded and stored in a separate cloud repository, which can be problematic when it comes to data sovereignty, or per-model or per-document rendering fees, which can drive up the total cost of ownership for customers over time. By using the ODA libraries we can maintain control of where our data is hosted and keep our prices comparatively low by avoiding rendering fees. So for us it's been a huge step forward. And what's next? Well, going forward we aim to further use the IFC and Revit SDKs to unlock some of the smart data captured in models, making it available to all you Luna users. That's all for me today. Thanks for tuning in, and enjoy the rest of your day.
New features added to our dev kits, and especially to ODA Visualize, give us the ability to provide a solution for another important task: a thorough design review of AEC models to validate multiple project aspects. This solution is a design review engine that evolved from a set of specific ODA technologies: Visualize, Publish, Open Cloud, Common Data Access and support for various BIM formats. The major new capabilities we added this year to support a complete design review workflow are support for federated BIM models in Visualize and Open Cloud, and two-way conversion between the Visualize toolkit and the Navisworks database. For now, the ODA design review engine is a solution with an advanced set of functionality: natively import multiple design formats (IFC, Revit files, Navisworks files, DWG files with architecture, civil and map data); investigate an aggregated model with high-quality visualization, fast selection and highlighting; get access to all model attributes; validate the model with advanced 3D clash detection; use 3D markup tools to communicate project needs; collaborate based on openBIM standards such as BCF and, prospectively, OpenCDE; publish the entire model, or part of it, to PDF; and exchange model data with Navisworks files. These design review engine features are available to extend your existing application or to create full-featured design review tools.
When developing the Open Cloud solution, we try to find a balance between stability, system security and speed of adding new functionality. To do this, we deploy our solution to a demo infrastructure to test new functionality and improve stability. Over the past year, several thousand users registered in our application, which gave us the opportunity to test our backend on real data sets. At the same time, we were able to test our new functionality in real conditions. Open Cloud supports a wide range of file formats: for CAD, it's DWG, DGN, DXF and some others; for BIM, it's IFC, Revit and Navisworks. We also support some common 3D file formats.
Some files use additional file resources; these can be font files, textures, materials and even external references. To work with such complex files, we provide a reference API: you can upload all your files to the backend and link them together through the reference API. This mechanism is flexible enough that you can link a file multiple times. It can be used, for example, to create your own font library. For rendering files in the browser, we provide the Visualize.js library. This is a cross-compiled version of the Visualize project in WebAssembly format. It provides a rich API to work with geometry data, to manipulate it, or to create new geometry. Besides this, we provide an API for different tools, like a measurement tool, a slicing plane tool and some others. We are constantly improving the functionality of our Visualize.js library: among the latest improvements, we sped up the performance of loading geometry data, and we changed the way this data is loaded, so that geometry closest to the camera is now loaded first.
The Open Cloud server provides an API for working with users. An administrator can create a new user and specify their settings and configuration: for example, you can create a user and specify their name, their position, the number of projects the user can create, and the amount of storage the user can use. For convenience, these users can be combined into groups. Quite often on a real project it is necessary to grant the same project access to a set of users; to do this, we provide a role-based API. You can create a custom role, change per-user permissions, and assign this role to your users or your groups. Open Cloud provides clash detection functionality: you can use it to detect intersections between objects in the scene, and it provides an API to configure the intersection detector. With the help of the Visualize.js library, you can highlight the intersected objects and show them. Another feature of the server is file assemblies: this type of endpoint allows you to merge several files into a single common structure, and then you can work with this structure as if it were a single file.
Open Cloud provides a simple set of file operations: you can extract geometry data from a file, or properties data. But sometimes it is necessary to execute custom jobs to extract additional information; for this, the Open Cloud server provides an API to run custom jobs. For example, you can create a custom job to extract XDATA from a DWG file and send it to a separate storage with a notification. By default, the Open Cloud server uses the file system to store user files. This is a reliable and secure solution, but it requires additional administration and configuration. If this option doesn't fit you, we provide access to additional third-party storage systems: you can connect the server to Amazon S3 storage or Azure Blob Storage. The server is powerful enough to withstand the load of a large number of users.
During stress testing, we used a server instance with a single core and two gigabytes of memory. In this configuration the server was able to handle 18,000 requests per minute; with one and a half requests per second per active user, that means the server can withstand a load of 200 users. For the convenience of working with the Open Cloud system, we provide an Open Cloud CLI tool. You can use it to easily install and configure the solution in your environment; for example, you need only two commands to run our solution in Docker.
Welcome to our session on WebAssembly debugging. My name is Emmanuel Ziegler, and I work in the V8 development team at Google. V8 is our JavaScript and WebAssembly engine, used in the Chrome browser, Microsoft Edge and Node.js. If you are new to WebAssembly or need a little refresher on it, don't worry: we will quickly present the knowledge required to understand this talk. The focus will be on how to debug and profile a WebAssembly application right in your Chrome browser. This can be useful if you have a bug or performance issue that is hard to reproduce in native code. Of course, you may also just use Chrome out of convenience, to avoid having to reproduce the error in another environment.
In last year's talk I gave an introduction to WebAssembly, or wasm for short, and how you can use it to bring your C/C++ code to the web. I recommend that talk if you want to know how to get started with wasm development. For now, all you need to know is that wasm is a low-level language that allows you to safely and performantly ship binary code on the web. Instead of a machine executable, wasm provides an intermediate representation that will be compiled just in time in your browser. Wasm is available in every major browser, but I will focus today on Google Chrome for convenience. To create a wasm module from C/C++ code, we will use the Emscripten toolchain; wasm also supports other languages such as Rust or AssemblyScript, with more to come.
We will start with a simple example that simulates the Collatz conjecture, which claims that if you repeatedly triple the value and add 1 for odd numbers, and halve the value for even numbers, you will eventually arrive at a loop consisting of the sequence 4, 2, 1 over and over again, provided that you started with a positive integer. If you're confused about what this does, don't worry: the details are not important for our investigations. We will be running this with different random numbers, over and over, 500 million times. The problem with this implementation is that it does not work, because it gets stuck in an endless loop.
We now want to debug the algorithm to find the mistake. We can add debugging symbols and switch off optimizations using the -g switch, just like with gcc or clang. For convenience, I am creating an HTML file together with the wasm module and its JavaScript code. Using a special extension, which is still in beta testing, Chrome can read DWARF debugging information embedded in the module, and we can navigate the C code directly. You can find the extension under the link on the slide. For the extension to work, we also need to enable an experimental feature in the Chrome Developer Tools settings. You can do that by opening the developer tools, clicking the little gear icon in the upper right corner, and then enabling the WebAssembly debugging (DWARF) support in the Experiments section. I have now opened the debugger in DevTools and set a breakpoint in the main function. After reloading, the program pauses at the main function as expected.
You can see this from the "paused in debugger" message at the top of the page, as well as in DevTools ("paused on breakpoint"), and the breakpoint that I'm currently paused on is highlighted in the list and in the source code view. The scope on the right-hand side contains the local variables. The local variable is still undefined at this point, but once I start stepping through the function, it gets initialized with 0 and then with its actual value from the time function. Once I reach the scope of the for loop, the run counter also becomes visible. run is initialized with zero, and I can see this not only in the scope but also when I hover over the variable.
I will now put a breakpoint here, so we can pause every time a new call to run example is made. I can either deactivate the other breakpoint or remove it completely, as we don't need it anymore.
We are now going to step into the function, but of course the first function to be called is rand, and we therefore end up in there.
As you can see, this function does not have debug information, and we are therefore presented with raw wasm code, which you can also see from the stack trace showing a little warning icon. For the C main function I generated debugging information, while for the automatically generated main wrapper there is no debug information either. Below that we find the JavaScript frames; you can see that the scope now shows information on the local variables in JavaScript.
If I go back into rand, we can see that I can step through this low-level WebAssembly just like the rest of the code. I can look at the local variables, and as I keep stepping through it, the information slowly gets populated, including the low-level stack. If you are interested in this low-level behavior, you can just step through your functions and see how it works.
We are currently not interested in this, so I'm going to step out of this function and step into run example. We can watch the value of i change as I'm stepping through it.
We now want to figure out why our algorithm does not complete, so I continue until we reach the point where it fails. We can see that the breakpoint is not hit again, even though we should have completed one example in a fraction of a second. This means that we are still in the first call of this function. Let's pause and see where we are. We are still in run example, as expected, and if I go one level up we can confirm that this is indeed still run 0 and we really didn't hit the breakpoint. Let's go back into the top scope and see what's happening there. Let's take a good look at the number i and remember it for later. As I'm stepping through the function, the number keeps changing, as expected.
But then, at this point, I end up with the same number that we saw originally, and this means that this loop is going to continue forever: i will always toggle between these two numbers and never reach the exit condition, as i never becomes 1.
If you remember the Collatz conjecture, you'll notice that something is not right here. We have an odd number here, and we should be multiplying it by three and adding one, but we end up in the branch that should divide by 2. And as you can see from the shift operator, it's actually pointing in the wrong direction here.
So we have two errors to fix: swap the if and else bodies, and fix the shift operator. This should then be an accurate implementation of the Collatz conjecture, and we should exit the loop properly. Let us quickly fix the issue with the wrong condition by exchanging the if and else bodies.
If we compare the performance to that of the native version compiled with clang, we see that we are substantially slower. This only happens sometimes: if we simply reload the page, the performance is on par with the native version. Sometimes wasm is still a little slower, but it should come close to native performance. So why was our first run slower?
In last year's talk I explained that V8 uses a two-tier compilation strategy. This allows faster startup times while ensuring high peak performance. In the beginning, though, it can happen that the top tier has not yet completed its compilation when the program is executed, and because there is no on-stack replacement, execution can stay in the baseline tier for the whole run. This is usually not an issue, because if the program keeps returning to a function higher up the stack and then calls the inner function again, the new version will be called. In benchmarks this might not happen, though, because of their simplicity, and the forced inlining of the run example function certainly does not help here.
There can even be implicit inlining, if the compiler notices that the function is not used outside the code. What works well in native code can therefore be a problem with wasm, so you might consider forbidding inlining of benchmarking functions that are called repeatedly. But why did it work the second time? That's because Chrome caches compilation results of the top tier, so they don't need to be recompiled again and again. On repeated runs you will therefore likely start with the top tier right away. I now prevent inlining of the function, and we can see that the performance is much more predictable. Or so it seems, at least.
If I load the program with DevTools open, debugging will actually force V8 to use the baseline tier, leading to bad performance. This is especially confusing if you decide to output your benchmark results to the console and therefore keep the console open. It's better to run the benchmark first and then open the console to check the output; V8 will then run at full performance, and you still have access to all the messages.
If you want to run a proper function profile of our code, we should recompile it with the --profiling command-line flag, so that function names, which are normally stripped from the wasm binary, are included. This also enables some other useful changes to the module; if you do not want those, and you prefer just having the function names put in the module, you may use the --profiling-funcs flag instead.
We can record function-level profiles of WebAssembly using the Performance tab in DevTools. You can press the recording button to start recording immediately, or reload and then start recording. I will reload, so my program gets re-executed. Once we start recording, the module will be tiered up, so we can record the regular performance and not that of the baseline. A few seconds of data suffice for our needs, and when the profile is done, we can see that the majority of time is spent in script execution.
Looking at the timeline, we see that we are mostly switching between two functions: the original main from the C file and the run example function. The gaps where run example stops executing are the time spent in main or the rand function. We can also take a look at the bottom-up call graph, which features the functions in which most time is spent at the top. As expected, the majority of the time is spent in run example with the inner loop, followed by the original main with the outer loop and occasional calls to rand. The other parts contribute only a negligible amount. We can also check from which functions a given function was called, so we understand the call pattern. And we can look at the top-down graph, where we can easily see the path the program takes and on which branch it spends the most time. All of this works exactly like it does with JavaScript. For more complex programs, this provides useful insights into optimization targets that really help speed up your application.
Thank you for watching this little speed run through WebAssembly debugging and profiling. If you work with WebAssembly, please try these features and let us know what you are missing most. Have a nice day, and happy coding!