Features Archives - AEC Magazine
https://aecmag.com/features/

Autodesk Forma: a deep dive into the data lake
https://aecmag.com/collaboration/autodesk-forma-a-deep-dive-into-the-data-lake/
Mon, 05 Jun 2023
Autodesk’s AEC cloud platform has an initial focus on conceptual design, but it will become much more than this. We explore how it handles data

Autodesk boldly told the industry that there would be no second-generation Revit. Instead, it would develop a cloud-based platform for all project data, called Forma. This month the first instalment arrived, with a focus on conceptual design. But Forma will become so much more than this. Martyn Day gets to the heart of the platform and how it handles data

In early May, Autodesk announced the first instalment of its next-generation cloud-based platform, Forma, with an initial focus on conceptual design (see box out at end). The branding and original announcement of Forma was delivered at Autodesk University 2022 in New Orleans. While the name was new, the concept was not.

Back in 2016, the then vice president of product development at Autodesk, Amal Hanspal, announced Project Quantum, the development of a cloud-based replacement for Autodesk’s desktop AEC tools.

In the seven years that followed, progress has been slow, but development has continued, going through several project names (including Plasma) and project teams. While the initial instalment of Forma could be considered to be just a conceptual design tool — essentially a reworking and merging of the acquired Spacemaker toolset with Autodesk FormIt — this launch should not be seen as anything other than one of the most significant milestones in the company’s AEC software history.

Since the early 2000s, Autodesk (and, I guess, much of the software development world) has believed that the cloud, with a Software-as-a-Service (SaaS) business model, would be the future of computing. Desktop-based applications require local computing power, create files and caches, and generate duplicates. All of this requires management and dooms collaboration to be highly linear.

While the initial instalment of Forma could be considered to be just a conceptual design tool — essentially a reworking and merging of the acquired Spacemaker toolset with Autodesk FormIt — this launch should not be seen as anything other than one of the most significant milestones in the company’s AEC software history

The benefits of having customers’ data and the applications sat on a single server include seamless data sharing and access to huge amounts of compute power. Flimsy files are replaced by extensible and robust databases, allowing simultaneous delivery of data amongst project teams.

With the potential for customers to reduce capital expenditure on hardware and to simplify software deployment and support, it’s a utopia based on modern computer science and is eminently feasible. However, moving an entire install base of millions from a desktop infrastructure, with trusted brands, to one that is cloud-based does not come without risks. This is the software equivalent of changing a tyre at 90 miles an hour.

The arrival of Forma, as a product, means that Autodesk has officially started that process and, over time, both new functionality and existing desktop capabilities will be added to the platform. Overall, this could take five to ten years to complete.

The gap

Autodesk’s experiments with developing cloud-based design applications have mainly been in the field of mechanical CAD, with Autodesk Fusion in 2009. While the company has invested heavily in cloud-based document management, with services such as Autodesk Construction Cloud, this doesn’t compare to the complexity of creating an actual geometry modelling solution. To create Fusion, Autodesk spent a lot of time and money to come up with a competitive product to Dassault Systèmes’ Solidworks, which is desktop-based. The thinking was that by getting ahead on the inevitable platform change to cloud, Autodesk would have a contender to capture the market. This has happened many times before, with UNIX to DOS and DOS to Windows.

Autodesk has finally created the platform change event that it hoped for, without user demand, but it has come at a time when Revit is going to be challenged like never before

Despite these efforts, Fusion has failed to further Autodesk’s penetration of its competitors’ install base. At the same time, the founder of Solidworks, Jon Hirschtick, developed a cloud-based competitor, called Onshape, which was ultimately sold to PTC. While this proved that the industry still thought that cloud would, at some point, be a major platform change, it was clear that customers were neither ready for a cloud-based future nor willing to leave the current market-leading application.

Years later, Solidworks’ competitors are still sat there, waiting for the dam to burst. Stickiness, loyalty and long-honed skills could mean they will be waiting a long time.

This reluctance to move is even more likely to be found in the more fragmented and workflow-constrained AEC sector. On one hand, a ten-year timescale to develop and deliver this would seem quite acceptable. The problem is that Autodesk’s desktop products have a history of low development velocity and an increasingly vocal and frustrated user base.

Back in 2012, I remember having conversations with C-level Autodeskers, comparing the cloud development in Autodesk’s manufacturing division and wondering when the same technologies would be available for a ‘next-generation’ Revit. With this inherent vision that the cloud would be the next ‘platform’, new generations of desktop tools, like Revit, seemed like a waste of resources. Furthermore, Autodesk was hardly under any pressure from competitors to go down this route.

However, I suspect that not many developers at the time would have conceived that it would take so long for Autodesk to create the underlying technologies for a cloud-based design tool. The idea that Revit, in its current desktop-based state, could survive another five to ten years before being completely rewritten for the cloud is, to me, inconceivable.

Angry customers have made their voices clearly heard (e.g. the Autodesk Open Letter and the Nordic Open Letter Group). So, while Forma is being prepared, the Revit development team will need to mount a serious rearguard action to keep customers happy in their continued investment in Autodesk’s ageing BIM tool. And money spent on shoring up the old code and adding new capabilities is money that’s not being spent on the next generation.

This isn’t just a case of providing enhancements for subscription money paid. For the first time in many decades, competitive software companies have started developing cloud-based BIM tools to go head-to-head with Revit (see Arcol, Snaptrude, Qonic and many others that have been covered in AEC Magazine over the past 12 months).

Autodesk has finally created the platform change event that it hoped for, without user demand, but it has come at a time when Revit is going to be challenged like never before.

The bridgehead

While Forma may sound like a distant destination, and the initial offering may seem inconsequential in today’s workflows, it is the bridgehead on the distant shore. The quickest way to build a bridge is to work from both sides towards the middle and that seems to be exactly what Autodesk is planning to do.

Forma is based on a unified database, which is capable of storing all the data from an AEC project in a ‘data lake’. All your favourite Autodesk file formats — DWG, RVT, DWF, DXF etc. — get translated to be held in this new database (schema), along with those from third parties, such as Rhino.

The software architecture of this new extensible unified database forms the backbone of Autodesk’s future cloud offering and therefore took a considerable amount of time to define.

Currently, Autodesk’s desktop applications don’t use this format, so on-the-fly translation is necessary. However, development teams are working to seamlessly hook up the desktop applications to Forma. With respect to Revit, it’s not unthinkable that, over time, the database of the desktop application will be removed and replaced with a direct feed to Forma, with Revit becoming a very ‘thick client’. Eventually, the functionality of Revit will be absorbed into thin-client applets, based on job role, which will mainly be delivered through browser-based interfaces. I fully expect Revit will get re-wired to smooth continuity before it eventually gets replaced.

Next-generation database

One of the most significant changes that the introduction of Forma brings is invisible. The new unified database, which will underpin all federated data, lies at the heart of Autodesk’s cloud efforts. Moving away from a world run by files to a single unified database provides a wide array of benefits, not only for collaboration but also to individual users. To understand the structure and capabilities of Forma’s new database, I spoke with Shelly Mujtaba, Autodesk’s VP of product data.

Data granularity is one of the key pillars of Autodesk’s data strategy, as Mujtaba explained, “We have got to get out of files. We can now get more ‘element level’, granular data, accessible through cloud APIs. This is in production; it’s actually in the hands of customers in the manufacturing space. If you go to the APS portal (Autodesk Platform Services, formerly Forge), you’ll see the Fusion Data API. That is the first manifestation of this granular data. We can get component level data at a granular level, in real time, as it’s getting edited in the product. We have built similar capabilities across all industries.

“The AEC data model is something that we are testing now with about fifteen private beta customers. So, it is well underway — Revit data in the cloud — but it’s going to be a journey, it is going to take us a while, as we get more and richer data.”
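To make the idea of granular, API-level access more concrete, here is a minimal sketch of what querying element-level data over a GraphQL endpoint might look like. This is an illustration only: the endpoint URL, the query fields and the token handling are my own assumptions, not the documented Autodesk Platform Services schema.

```python
# Illustrative sketch only: element-level data requested over a GraphQL endpoint.
# The URL, field names and token handling are assumptions, not the real APS schema.
import requests

GRAPHQL_URL = "https://developer.api.autodesk.com/aec/graphql"  # assumed endpoint
ACCESS_TOKEN = "<OAuth token>"

QUERY = """
query WallElements($projectId: ID!) {
  project(projectId: $projectId) {          # hypothetical field names
    elements(filter: { category: "Walls" }) {
      results { id name properties { name value } }
    }
  }
}
"""

def fetch_wall_elements(project_id: str) -> dict:
    """POST the query and return the decoded JSON response."""
    response = requests.post(
        GRAPHQL_URL,
        json={"query": QUERY, "variables": {"projectId": project_id}},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(fetch_wall_elements("<project-id>"))
```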

To build this data layer, Mujtaba explained that Autodesk is working methodically around the workflows which customers use, as opposed to ‘boiling the whole ocean’. The first workflow to be addressed was conceptual design, based on Spacemaker. To do this, Autodesk worked with customers to identify the data needs and learn the data loops that came with iterative design. This was also one of the reasons that Rhino and TestFit are among the first applications that Autodesk is focussing on via plug-ins.

Interoperability is another key pillar, as Mujtaba explained, “So making sure that data can move seamlessly across product boundaries, organisational boundaries, and you’ll see an example of that also with the data sheet.”

At this stage, I brought up Project Plasma, which followed on from Project Quantum. Mujtaba connected the dots, “Data exchange is essentially the graduation of Project Plasma. I also led Project Plasma, so have some history with this,” he explained. “When you see all these connectors coming out, that is exactly what we are trying to do, to allow movement of data outside of files. And it’s already enabling a tonne of customers to do things they were not able to do before, like exchange data between Inventor and Revit at a granular level, even Inventor and Rhino, even Rhino and Microsoft Power Automate. These are not [all] Autodesk products, but [it’s possible] because there’s a hub and spoke model for data exchange.

“Now, Power Automate can listen in on changes in Rhino and react to it and you can generate dashboards. Most of the connectors are bi-directional. Looking at the Rhino connector you can send data to Revit, and you can get data back from Revit. Inventor is now the same (it used to be one directional, where it was Revit to Inventor only) so you can now take Inventor data and push it into Revit.”



Offline first

In the world of databases there have been huge strides to increase performance, even with huge datasets. One only has to look to the world of games and technologies like Unreal Engine.

One of the terms we are likely to hear a lot more in the future is ECS (Entity Component System). This is used to describe the granular level of a database’s structure, where data is defined by component, in a system, as opposed to just being a ‘blob in a hierarchical data table’.
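For readers unfamiliar with the pattern, here is a minimal, generic ECS sketch in Python. It shows the general idea only, not Forma’s implementation: entities are plain IDs, with components attached and queried independently, which is what makes federating data from different parties straightforward.

```python
# Generic entity-component-system (ECS) sketch, for illustration only.
# This is the general pattern, not Forma's internal data structures.
from dataclasses import dataclass
from typing import Any, Dict, Type

@dataclass
class Geometry:
    mesh_ref: str        # reference to tessellated geometry stored elsewhere

@dataclass
class CarbonData:
    kg_co2e: float       # e.g. data contributed by a third-party analysis service

class World:
    """Entities are plain IDs; components are attached and queried independently."""
    def __init__(self) -> None:
        self._components: Dict[Type, Dict[int, Any]] = {}
        self._next_id = 0

    def create_entity(self) -> int:
        self._next_id += 1
        return self._next_id

    def attach(self, entity: int, component: Any) -> None:
        self._components.setdefault(type(component), {})[entity] = component

    def query(self, component_type: Type) -> Dict[int, Any]:
        return self._components.get(component_type, {})

world = World()
wall = world.create_entity()
world.attach(wall, Geometry(mesh_ref="mesh/0451"))
world.attach(wall, CarbonData(kg_co2e=132.5))   # federated data from another party

# A 'system' iterates over every entity that carries the component it cares about
total = sum(c.kg_co2e for c in world.query(CarbonData).values())
print(f"Embodied carbon across {len(world.query(CarbonData))} elements: {total} kgCO2e")
```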

I asked Mujtaba if this kind of games technology was being used in Forma. He replied, “ECS is definitely one of the foundational concepts we are using in this space. That’s how we construct these models. It allows us extensibility; it allows us flexibility and loose coupling between different systems. But it also allows us to federate things. This means we could have data coming from different parties and be able to aggregate and composite through the ECS system.

“But there’s many other things that we have to also consider. For example, one of the most important paradigms for Autodesk Fusion has been offline first — making sure that while you are disconnected from the network, you can still work successfully. We’re using a pattern called command query responsibility separation. Essentially, what we’re doing is we’re writing to the local database and then synchronising with the cloud in real time.”

This addresses one of my key concerns that, as Revit gets absorbed into Forma over the next few years, users would have to constantly be online to do their work. It’s really important that team members can go off and not be connected to the Internet and still be able to work.

Mujtaba reassuringly added, “We are designing everything as offline first, while, at some point of time, more and more of the services will be online. It really depends on what you’re trying to do. If you’re trying to do some basic design, then you should be able to just do it offline. But, over time, there might be services such as simulation, or analytics, which require an online component.

“We are making sure that our primary workflows for our customers are supported in an offline mode. And we are spending a lot of time trying to make sure this happens seamlessly. Now, that is not to say it’s still file-based; it’s not file based. Every element that you change gets written into a local data store and then that local data source synchronises with the cloud-based data model in an asynchronous manner.

“So, as you work, it’s not blocking you. You continue to work as if you’re working on a local system. It also makes it a lot simpler and faster, because you’re not sending entire files back and forth, you’re just sending ‘deltas’ back and forth. What’s happening on the server side is all that data is now made visible through a GraphQL API layer.

“GraphQL is a great way to federate different types of data sources, because we have different database technologies for different industries, which almost always have different workflows. Manufacturing data is really relationship heavy, while AEC data has a lot of key value pairs (associated values and a group of key identifiers). So, while the underlying databases might be different, on top of that we have a GraphQL federation layer, which means that, from a customer perspective, it all looks like the same data store. You get a consistent interface.”
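The pattern Mujtaba describes (write locally, read locally, and sync only deltas to the cloud in the background) can be sketched in a few lines. The following is a simplified illustration under my own assumptions, not Autodesk’s code; the table layout, payloads and sync call are all placeholders.

```python
# Sketch of an offline-first, delta-syncing local store (illustrative only).
import json
import sqlite3
import time

class LocalElementStore:
    def __init__(self, path: str = "local_model.db") -> None:
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS elements (id TEXT PRIMARY KEY, data TEXT)")
        self.db.execute("CREATE TABLE IF NOT EXISTS outbox (seq INTEGER PRIMARY KEY AUTOINCREMENT, delta TEXT)")

    def write(self, element_id: str, data: dict) -> None:
        """Command side: apply the edit locally and queue a delta for the cloud."""
        self.db.execute("REPLACE INTO elements VALUES (?, ?)", (element_id, json.dumps(data)))
        delta = {"id": element_id, "data": data, "ts": time.time()}
        self.db.execute("INSERT INTO outbox (delta) VALUES (?)", (json.dumps(delta),))
        self.db.commit()

    def read(self, element_id: str) -> dict:
        """Query side: reads are always served from the local store, online or not."""
        row = self.db.execute("SELECT data FROM elements WHERE id = ?", (element_id,)).fetchone()
        return json.loads(row[0]) if row else {}

    def sync(self, push_to_cloud) -> None:
        """Drain the outbox asynchronously; only deltas travel, never whole files."""
        pending = self.db.execute("SELECT seq, delta FROM outbox ORDER BY seq").fetchall()
        for seq, delta in pending:
            push_to_cloud(json.loads(delta))   # stand-in for a network call, e.g. a GraphQL mutation
            self.db.execute("DELETE FROM outbox WHERE seq = ?", (seq,))
        self.db.commit()

store = LocalElementStore()
store.write("wall-42", {"height_mm": 3200, "type": "partition"})
store.sync(push_to_cloud=lambda d: print("synced delta:", d))
```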

Common and extensible schemas

One of the key benefits of moving to a unified database is that data from all sorts of applications can sit side by side, with level 200 / 300 AEC BIM model data alongside Inventor fabrication-level models. I asked Mujtaba to provide some context to how the new unified database schema works.

“It’s a set of common schemas, which we call common data currencies — and that spans across all our three industries,” he explained. “So, it could be something basic, like basic units and parameters, but also definitions or basic definitions of geometry. Now, these schemas are semantically versioned. And they are extensible. So that allows other industry verticals to be able to extend those schemas and define specific things that they need in their particular context. So, whenever this data is being exchanged, or moved across different technologies under the covers, or between different silos, you know they’re using the language of the Common Data currencies.

Forma in the long-term is bold, brave and, in its current incarnation, the down payment for everything that will follow afterwards for Autodesk. However, it has been a long time coming

“On top of that data model, besides ECS, we also have a rich relationship framework, creating relationships between different data types and different levels of detail. We spend a lot of time making sure that relationship, and that way of creating a graph-like structure, is common across all industries. So, if you’re going through the API, if you’re looking at Revit data and how you get the relationships, it will be the same as how you get Fusion relationships.”

To put it in common parlance, it’s a bit like PDF. There is a generic PDF definition, which has a common schema for a whole array of stuff. But then there are PDF/A, PDF/E and PDF/X, which are more specific to a type of output. Autodesk has defined the nucleus, the schema for everything that has some common shared elements. But then there are some things that you just don’t need if you’re in an architecture environment, so there will be flavours of the unified database that have extensions. But convergence is happening, and increasingly AEC is meeting manufacturing in the offsite factories around the world.

Mujtaba explained that, because of convergence, Autodesk is spending a lot of time on governance of this data model, trying to push as much as possible into the common layer.

“We’re being very selective when we need to do divergence,” he said. “Our strategy is, if there is divergence, let it be intentional not accidental. Making sure that we make a determined decision that this is the time to diverge but we may come back at some point in time. If those schemas look like they’re becoming common across industries, then we may push it down into the common layer.

“Another piece of this, which is super important, is that these schemas are not just limited to Autodesk. So obviously, we understand that we will have a data model, and we will have schemas, but we are opening these schemas to the customers so they can put custom properties into the data models. That has been a big demand from customers. ‘So, you can read the Revit data, but can I add properties? For example, can I add carbon analysis data, or cost data back into this model?’

“Because customers would just like to manage it within this data model, custom schemas are also becoming available to customers too.”
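As an illustration of what a customer-defined extension to a common, versioned element schema might look like, here is a hypothetical payload. The schema names, version strings and visibility flag are my own placeholders, not Autodesk’s published data model.

```python
# Hypothetical example of extending a common element schema with customer data.
# Schema identifiers, versions and the visibility flag are illustrative only.
import json

element = {
    "schema": "autodesk.common:Element-1.2.0",            # assumed, semantically versioned
    "id": "wall-42",
    "parameters": {"height": {"value": 3.2, "unit": "m"}},
    "extensions": [
        {
            "schema": "acme.sustainability:Carbon-0.3.0",  # customer-defined extension
            "visibility": "project",                       # only named parties can read it
            "properties": {"embodied_kg_co2e": 132.5},
        }
    ],
}
print(json.dumps(element, indent=2))
```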

Back in the days of AutoCAD R13, Autodesk kind of broke DWG by enabling ARX to create new CAD objects that other AutoCAD users could not open or see. The fix for this was the introduction of proxy objects and ‘object enablers’, which had to be downloaded so ARX objects could be manipulated. Against this backdrop, the idea that users can augment Forma’s schema was a concern. Mujtaba explained, “You can decide the default level of visibility, so that only you can see it, but you might say this other party can see it, or you can say this is universally visible, so anyone can see it.

“To be honest, we are learning, so we have a private beta of data extensibility running within the Fusion space with a bunch of customers, because this is a big demand from our customers and partners. So, we are learning. The visibility aspect came up and, in fact, our original visibility goal was only ‘you can see this data’. And then we learned from the customers that no, they would selectively want to make this data visible to certain parties or make it universally visible.

“We are continuing to do very fast iterations. Now, instead of going dark for several years and trying to boil the ocean, we are saying, ‘here’s a workflow we’d like to solve. Let’s bring in a bunch of customers and let’s solve this problem. And then move to the next workflow’. So that’s why you see things coming out at a very rapid pace now, like the data exchange connectors and the data APIs.”

It’s clear that the data layer is still to some degree in development, and I don’t blame Autodesk for breaking it down into workflows and connecting the tools that are commonly used. This is a vast development area and an incredibly serious undertaking for Autodesk’s development team. It seems that the strategy has been clearly identified and the data tools and technologies decided upon. In future, I am sure files will still play a major role in some interoperability workflows, but with APIs and an industry interest in open data access, files may eventually go the way of the dodo.

Conclusions

Forma in the long-term is bold, brave and, in its current incarnation, the down payment for everything that will follow afterwards for Autodesk. However, it has been a long time coming. In 2016, the same year that Quantum was announced at Autodesk University, Autodesk also unveiled Forge, a cloud-based API which would be used to break down existing products into discrete services which could be run on a cloud-based infrastructure.

Forge was designed to help Autodesk put its own applications in the cloud, as well as to help its army of third-party developers build an ecosystem. Forge was recently rebranded as Autodesk Platform Services (APS). I don’t know of any other software generation change where so much time and effort was put into ensuring the development tools and APIs were given priority. The reason for this is possibly that, in the past, these tools were provided by Microsoft Foundation Classes, while for the cloud Autodesk is having to engineer everything from the ground up.

While this article may seem a little unusual, as one might expect a review of the new conceptual design tools, for me the most important thing about Forma is the way it’s going to change how we work: the products we use, where and how data is stored, how data is shared and accessed, as well as the new capabilities it enables, such as AI and machine learning. Looking at Forma only through the user-facing functionality that has been exposed so far does not do it justice. This is the start of a long journey.

Autodesk Forma is available through the Architecture, Engineering & Construction (AEC) Collection. A standalone subscription is $180 monthly, $1,445 annually, or $4,335 for three years. Autodesk also offers free 30-day trials and free education licences. While Forma is available globally, Autodesk will initially focus sales, marketing and product efforts in the US and European markets where Autodesk sees Forma’s automation and analytical capabilities most aligned with local customer needs.


From Autodesk Spacemaker to Autodesk Forma

Autodesk Forma
Revit and Forma integration

Over the last four years there have been many software developers aiming to better meet the needs of conceptual design, site planning and early feasibility studies in the AEC process. These include TestFit, Digital Blue Foam, Modelur, Giraffe and Spacemaker to name but a few. The main take up of these tools was in the property developer community, for examining returns on prospective designs.

Autodesk Forma
Rapid operational energy analysis

In November 2020, just before Autodesk University, Autodesk paid $240 million for Norwegian firm Spacemaker and announced it as Autodesk’s new cloud-based tool for architectural designers. Autodesk had been impressed with both the product and the development team and rumours started coming out that the team would be the one to develop the cloud-based destination for the company’s AEC future direction.

This May, Autodesk launched Forma, which is essentially a more architecturally biased variant of the original application and, at the same time, announced it was retiring the Spacemaker brand name. This created some confusion that Forma was just the new name for Spacemaker. However, Spacemaker’s capabilities are just the first instalment of Autodesk’s cloud-based Forma platform, the conceptual capabilities, which will be followed by Revit.

Autodesk Forma
Microclimate analysis

Forma is a concept modeller which resides in the cloud but has tight desktop Revit integration. Designs can be modelled and analysed in Forma and sent to Revit for detailing. At any stage, models can be sent back to Forma for further analysis. This two-way connection lends itself to proper iterative design.

At the time of acquisition, Spacemaker’s geometry capabilities were rather limited. The Forma 3D conceptual design tool has been enhanced with the merging of FormIt’s geometry engine (so it now supports arcs, circles and splines), together with modelling based on more specific inputs, like number of storeys etc. It’s also possible to directly model and sculpt shapes. For those concerned about FormIt, the product will remain in development and available in the ‘Collection’.

Autodesk Forma
Daylight potential analysis

One of the big advantages hyped by Autodesk is Forma’s capability to enable modelling in the context of a city, bringing in open city data to place the design in the current skyline. Some countries are served better than others with this data. Friends in Australia and Germany have not been impressed with the current amount of data available, but I am sure this is being worked on.

Looking beyond the modelling capabilities, Forma supports real-time environmental analyses across key density and environmental qualities, such as wind, daylight, sunlight and microclimate, giving results which don’t require deep technical expertise to interpret. It can also be used for site analysis and zoning compliance checking.

At the time of launch, Autodesk demonstrated plug-ins from its third-party development community: one from TestFit, demonstrating a subset of its car parking design capability, and the other from ShapeDiver for Rhino.

Talking to the developers of these products, I learned they were actively invited to port some capabilities to Forma. Given the competitive nature of these products, this is a significant move. Rhino is the most common tool for AEC conceptual design, and TestFit was actually banned from taking a booth at Autodesk University 2022 (we suspected because TestFit was deemed to be a threat to Spacemaker). In the spirit of openness, which Autodesk is choosing to promote, it’s good to see that wrong righted. By targeting Rhino support, Autodesk is clearly aware of how important pure geometry is to a significant band of mature customers.

Hypar: text-to-BIM and beyond
https://aecmag.com/ai/hypar-text-to-bim-and-beyond/
Tue, 21 Mar 2023
Could Hypar help users sidestep much of the detail modelling / drawing production associated with BIM 1.0?

Could a small software firm help users to sidestep much of the detail modelling and drawing production associated with BIM 1.0? Martyn Day reports on Hypar, a company looking to do exactly that

Imagine a computer aided design system that can model a building from a written definition. The user simply types their request into a chat box: ‘A two-storey retail building, and 14 storeys of residential, arranged in an L shape.’ In seconds, a model appears, conforming to the user’s exact requirements, complete with surrounding site conditions and including structural elements, facades and condominium layouts.

To refine the results, the user just adds to the descriptive text. Different floors, for example, might be defined by different text commands.

This is where BIM 2.0 is headed. In many ways, it’s already there.



Founded in 2018, Hypar.io is the brainchild of two seasoned industry veterans: Anthony Hauck and Ian Keough. Hauck was the head of Revit product development and spearheaded Autodesk’s generative design development. Keough is fondly known as the ‘father of Dynamo’, a visual programming interface for Revit.

The pair are widely respected in the industry for their work on developing some of the most popular programmatic design tools used by architects today. When it comes to design tool development, they have been there, done that and bought the T-shirt.

At Autodesk, Hauck and Keough took part in the development of Project Fractal, a tool compatible with Dynamo for generating multiple design solutions according to a customer’s own design logic. It was used in space and building layout design, optimising room placement and structure and generating options that meet core specifications.

This led to the eventual emergence of Hypar — a self-contained, web-based cloud platform and API, which executes code (in Python and C#) to swiftly generate hundreds or even thousands of designs based on design logic. Users can preview resulting 3D models on both desktop and mobile, along with analytics data.


Hypar.io
Hypar is designed to swiftly generate hundreds or even thousands of designs based on design logic

Hypar’s special feature is that it provides a common user interface that isn’t reliant on heavy data sets, in the way that many of today’s BIM tools are, while also supporting open IFC (Industry Foundation Classes) files.

Additionally, it is open to communities of tool developers who may wish to share or sell algorithms, or benefit from Hypar’s suite of ever-evolving tools and growing number of open-source resources. This makes it a popular choice for generative tools development. These generative tools can then be used by non-programming designers to experiment with different design variations with a single click. Output from Hypar can also be connected to other processes through commonly used formats and scripts.
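To illustrate the generative pattern described above, here is a generic sketch: design logic encoded once, then evaluated across many parameter combinations. It is not Hypar’s API; the function, its parameters and the crude scoring metric are made up for illustration.

```python
# Generic parameter-sweep sketch of generative design logic (not Hypar's API).
from itertools import product

def design_option(storeys: int, bay_width_m: float, floor_to_floor_m: float) -> dict:
    """Toy 'design logic': derive simple metrics from three input parameters."""
    gross_area = storeys * bay_width_m * 30.0          # assume a fixed 30 m plan depth
    height = storeys * floor_to_floor_m
    return {"storeys": storeys, "bay_width_m": bay_width_m,
            "height_m": height, "gross_area_m2": gross_area}

# Sweep the parameter space; widen the ranges and this quickly becomes thousands of options
options = [design_option(s, w, f)
           for s, w, f in product(range(4, 20), (6.0, 7.5, 9.0), (3.3, 3.6))]

best = max(options, key=lambda o: o["gross_area_m2"] / o["height_m"])
print(f"{len(options)} options generated; best area/height ratio: {best}")
```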

The core Hypar team now numbers twelve people and the company has raised a couple of rounds of seed funding, totalling over $2.5 million, from AEC specialist VC firm Building Ventures and Obayashi Construction of Japan.

On a mission

“Our mission from the beginning, and still to this day, is to deliver the world’s building expertise to realise better buildings,” explains Keough. However, he adds, there have been twists and turns along the way.

“When we started Hypar, I thought that, because I came out of the computational design community with Grasshopper and Dynamo, we would sell to computational designers. It turned out that, for a lot of different reasons that I’ve been bloviating about on Twitter and elsewhere, we had built ourselves into a cul-de-sac,” he says.

“We had our own visual programming languages; we were kind of off on our own spur of computational history, which the AEC world had created for itself. Anthony and I identified that as a dead end.”


Hypar.io
A starting point for Text-to-BIM

The challenge with selling into architecture firms, he continues, is that they are already highly saturated with Grasshopper and Revit. Any newcomer has to fight against this deeply entrenched competition.

“What we recognised is that from one project to the next, programmers couldn’t take the logic that they had created, representing some sort of automation for the purpose of the project, and turn that into a repeatable system that could be used from one project to the next, to the next,” Keough continues. The dream with Dynamo Package Manager, he says, was to make a routine and share this back.

“The problem was, all of those systems — Rhino, Grasshopper, Dynamo — were built on top of hero software, but the underlying premise of BIM was flawed.”

Hauck and Keough’s thinking is that while you can automate a certain amount on top of a BIM model, you’re essentially still automating on top of a flawed premise. “What ended up happening was Dynamo was used very effectively for automating the renaming of all the sheets in drawing sets, instead of automating the creation of a specific, novel type of structural system,” he says.

“Even in the cases where people did do that, you couldn’t then take that structural system generator that you had created, and easily couple it with somebody else’s mechanical system generator, or somebody else’s facade system generator. These ‘janky’ workflows had everybody writing everything out to Excel, then reading it from another script.”

Systemic thinking

Analysing the problem alongside customers from the building product manufacturing world, Hauck and Keough decided to “climb the ladder of abstraction”, he says, going beyond the BIM workflow where customers manually build things with beams and columns, and instead thinking systemically.

“The product manufacturing partners made this very clear — the value of systemic thinking — because they have highly configurable systems, for which they have experts sitting, poring over PDFs, measuring things, looking up specifications in their product books, to achieve fully designed systems, to make a sale,” he says.

Every building product manufacturer who came to Hauck and Keough had this same problem. From 2019 to 2022, the pair underwent a shift in thinking, from seeing buildings as projects to instead seeing buildings as products and as assemblies of systems that work together.

Says Keough: “Why do clashes exist? Clashes exist because everyone sits in silos and they operate without interacting with each other. They don’t see what each other has been doing for a couple of weeks, until they merge the Revit models together. Then they find that these components run into other geometry. We need to make systems that can interoperate with each other and understand each other.”

Text-to-BIM

So what’s happening when a request is entered in the chat box? Behind the scenes, Hypar is magically mapping the natural language input (for example, ‘2-storey parking garage’) onto Hypar parametric functions. The AI isn’t generating the building itself; it interprets the natural language input, and the parametric functions do the rest.

Not only does it create the architectural model, but flipping to a structure view reveals the steelwork skeleton that has also been modelled. Carrying on modelling, we might add a glass façade or a 10-storey residential L-shaped tower. It’s possible to section and view an interior model, complete with residence layouts. All this takes seconds.
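As a toy sketch of the idea, parsing a prompt into parameters and handing them to parametric generator functions, consider the following. This is emphatically not Hypar’s code; the regex parsing and the function names are illustrative assumptions.

```python
# Toy text-to-BIM sketch: map a prompt onto parameters for parametric functions.
# Not Hypar's implementation; parsing approach and names are purely illustrative.
import re

def parse_prompt(prompt: str) -> dict:
    """Pull storey counts, uses and a plan shape out of a simple prompt."""
    params = {"blocks": [], "plan_shape": None}
    for count, use in re.findall(r"(\d+)[- ]store(?:y|ys|ies) (?:of )?(\w+)", prompt.lower()):
        params["blocks"].append({"storeys": int(count), "use": use})
    shape = re.search(r"\b([lu])[- ]shape", prompt.lower())
    if shape:
        params["plan_shape"] = shape.group(1).upper()
    return params

def generate_massing(params: dict) -> list:
    """Stand-in for parametric functions that would emit real building elements."""
    elements, level = [], 0
    for block in params["blocks"]:
        for _ in range(block["storeys"]):
            elements.append({"type": "Level", "elevation_m": level * 3.5, "use": block["use"]})
            level += 1
    return elements

prompt = "2-storey retail building and 14 storeys of residential, arranged in an L shape"
params = parse_prompt(prompt)
print(params)                                      # parsed blocks plus plan shape 'L'
print(len(generate_massing(params)), "levels generated")
```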

To change a fully detailed building option in Revit from an L-shaped building to a U-shaped building might typically take three weeks of work. In Hypar, it takes a couple of seconds — not bad! On top of this, all sessions can be shared as collaborative workflows.

Contrary to popular belief, Hypar is not based on Grasshopper or Dynamo. Instead, it is built from the ground up, from the geometry kernel to its generative logic, to do everything itself. The reason is that, as a cloud application, it runs on microservices and could not be created using historical desktop code. The results are fairly rectilinear, as the software does not support NURBS yet, although there is a tight integration with Grasshopper, where Grasshopper scripts can be turned into Hypar functions.

All of Hypar’s BIM elements are defined using the JSON schema. This is very similar to how Speckle (https://speckle.systems) defines entities that stream across the Speckle Server. Ultimately, it has the ability to export to IFC, but IFC is not native to the platform itself.

Regarding Revit support, Keough explains, “We have really rich support for Revit, in both import and export. We knew we had to do that from Day One, because that’s where people are and that’s where people are going to be for a while, at least in the production of documents.”

That said, Hypar is working with a lot of customers who use Revit for now — not because it’s the perfect system, but because it’s what they have at their disposal, he says. “Oftentimes, they’re building functions on Hypar to export a particular type of panel drawing, things that are really hard, if not impossible, to do in Revit without some sort of accessory automation, like Dynamo.”


Hypar.io
Design in the context of surrounding site conditions

Over time, as buildings increasingly move to prefabrication and modular construction, and the primacy of drawings is reduced, he says, there will be a corresponding increase in the need for custom types of drawings and outputs to robotic fabrication. “And we’re vectoring to have Hypar on a trajectory to meet those needs,” he says.

One of the biggest issues with BIM 1.0 was that it failed to cross the chasm from scaled drawings to 1:1 fabrication models. Hypar is designed to represent data at multiple scales. As Keough puts it: “We have customers working at every level of detail, from design feasibility all the way through to construction. We have customers who are actually making building construction drawings and specific types of fabrication drawings out of Hypar because, just like the systems that generate the actual building stuff, structural systems, the systems for generating those visualisations are just another sort of extension on top of Hypar.”

Recently, at the Advancing Prefabrication conference, Hypar announced the integration work it has carried out with DPR Construction in the US, in order to automate the layout of drywall. “Imagine that, as an architect draws a wall, it generates all the layout drawings, and all of the pre-cut fabrication drawings for off-site fabrication of the individual drywall pieces, as well as the stockpiling information about exactly how those pieces of pre-cut drywall are going to be stockpiled on site,” says Keough.

Imagine a CAD system that can model a building from a written definition. The user simply types their request into a chat box and in seconds, a model appears

But with increased detail comes performance issues — so how does Hypar cope with large amounts of data? He responds that the size and complexity of workflows that customers are trusting to Hypar are beginning to test the system. “This is a natural progression, when you build any kind of software,” he says. “When Anthony was building Revit, back in the day, and when I was building Dynamo, there’s always a point at which your customers start using this thing you created for larger and larger and larger models.”

This means that one of the things he and Hauck need to consider very carefully is performance. “If we are inviting all these people to play in this environment, and we can represent all these things with this level of detail, then the compute and your interaction as a user within that compute must operate at the level of performance that historically you’re used to. That’s going to be a challenge, because we’re going to be representing a lot more on this platform.”

Future of building

Our conversation then moves on to the future of building — and how buildings are becoming products. As Keough sees it, a building comprises materials and components from a bunch of different manufacturers, just like a laptop. All of these systems must fit tightly and snugly into a clearly defined chassis. From fans to motherboards, the interfaces between them are agreed upon and codified, in order to allow that to happen.

“That’s how buildings are going to start being delivered — and there is not a software out there on the market right now that thinks of buildings in that way,” says Keough. “We all still think of buildings as a big muddy hole in the ground that we fill with sticks and bricks. It’s going to be software like Hypar and others out there which are starting to evolve to think of buildings systemically, getting the system logic and interfaces with other systems in the code. There is a future in which clash coordination is not the way that we coordinate any longer, because it’s not required, because all the systems in a building understand where each other are and really understand how to adapt around each other.”

The demise of drawings

I tell Keough that I am hearing from more AEC firms that they want to move away from producing drawings and get better automated output. It’s something he hears a lot, too: “Increasingly our customers are asking for drawingless workflows.” His colleague Anthony has a “wonderfully concise and pithy” way of describing this problem, he adds. “He says that, for many of the people, Revit is a tax. They do the Revit thing, because they’re in some way required to do it. They have to deliver it.”

Take, for example, virtual design and construction, or VDC, the act of turning a design model into a constructible object. It typically encompasses millions of hours of labour, producing increasingly detailed versions of models, so that firms can coordinate tasks such as putting studs in a wall. But as DPR Construction has demonstrated, a very large contractor can use Hypar to automate the layout of all of its drywall.


Hypar.io
Hypar can generate structural models as well as architectural model

“Millions of square feet of drywall, the most mundane kind of thing in the world — but DPR is turning this into a highly optimised, efficient process, which starts with CNC cutting every single piece of drywall, then robotic layout on a slab,” he says. “If you are CNC cutting all the drywall and you have an automation engine that will do all of that for you based on the architect’s model, it will automate everything when the architect model changes as well. Why do you need drawings?”

Drawing-less building delivery is already possible today, says Keough. Some firms have already embraced it. “And I think if you swim even further back upstream, you actually start asking the questions that we’re asking. If you go to model-based delivery, but you don’t take a huge amount of work out of the process, if all you do is go to model-based delivery, then why are people still manually building those models?”

A faster horse?

Every new company emerges because someone is banging their head against a wall in frustration with an existing process, Keough believes. Hypar is no different, he says. “It came out of Anthony and I banging our collective heads against the wall and deciding that the industry needed something better. ‘There will be drawings’ is the first assumption, not ‘There will be a building’.”

As Hypar stretches from concept to fabrication, it has the potential to replace BIM 1.0 workflows with highly tailored solutions, but more importantly, reduce workloads from weeks to seconds

But with start-ups such as Snaptrude emerging, with the goal of taking on the BIM market leaders, what does Keough think about software that sticks to BIM 1.0 workflows, but aims to speed things up? For him, it’s just a “faster horse”, rather than anything truly revolutionary, he says.

“I think there should be some faster horses. There’s a natural sort of regenerative cycle that needs to happen in software where people build the next version of, say, SketchUp. Even though SketchUp is amazing, maybe somebody can innovate on that just a little bit. But I think the next generation of tools in this industry is about the automation of the generation of building systems, and the encoding of expertise that’s carried around in the heads of architects and engineers right now as software.”

In a world where software tools fully detail most building models and automatically produce drawings, one has to wonder what happens to all the BIM seats out there. Will firms only need a few seats to do the work provided by thousands of seats in the past? There is already a slow shift to different charging models, like per project, or based on the value of projects. If less software is needed, surely the price will go up?

“At the point at which that happens, the cost of software will, I think, necessarily need to increase, yes, because there will be fewer seats sold,” agrees Keough. “But it will also often open all kinds of opportunities for software to be sold by value. And by that, I mean right now, for instance, we’re working with a lot of building product manufacturers. We automate the generative logic around which a building product manufacturer’s system is designed.”

For these customers, the value proposition is that they sell more of their product. That means that the value proposition for Hypar must also be more of their product sold, too. “We make more money, because they use the software as part of that sale. I try and figure out a number that works that you’ll pay us, and it’s the highest number that I can get you to pay us every year. Yes, we’ll see all kinds of interesting effects in the marketplace of software becoming generative and some of those will be the actual business models of the software.”

Conclusion

Over the last year, AEC Magazine has focused on highlighting small new innovators working to disrupt established BIM workflows and drive big jumps in productivity. The common thread in nearly all next-generation tools is that developers are looking to automate 3D geometry creation and provide some level of detailing and drawing production. A number are even looking to a time when the industry dispenses with drawings altogether. These efforts typically have a sweet spot — the ‘bread and butter’, rectilinear, everyday buildings such as offices, residential, retail, hospitals, schools and so on.

From what I have seen so far, Hypar is aiming to potentially save weeks, if not months, of time, with far-reaching consequences for the industry, in terms of software usage (how many seats of BIM tools are required), how the industry bills its clients, and how much software companies bill us.

Hypar is the culmination of all the years of industry experience that Keough and Hauck have collectively amassed, with its blend of expressive programming and ease of use. As it stretches from concept to fabrication, it has the potential to replace BIM 1.0 workflows with highly tailored solutions, but more importantly, reduce workloads from weeks to seconds.

Individual licences of Hypar are just $79 per month, per user, with discounts available for customers buying groups of five licences or entering into Enterprise deals.


Keough on IFC

Hypar’s Ian Keough (@ikeough) is always good for an interesting tweet on software design and BIM. He also has occasional comments to make about IFC and the inherent problems it brings to the table. We asked Keough about his views on IFC.

Hypar.io
Ian Keough

“Going way back to the beginning of Hypar, we always thought that the sort of open standards and protocols that we have in our industry are severely lacking,” he tells us.

“The standards that we develop should be accessible to anybody with an even basic knowledge of computation. It shouldn’t require Autodesk’s resources or Trimble’s resources or anyone else’s resources to be able to use the open data standards that we have in our industry.”

He’s a software programmer with 10 to 15 years of experience now, he continues, “and I still find IFC almost impossible to use for any day-to-day sort of thing. A lot of what we’ve done, like the open-source library that sits at the heart of Hypar, just makes that stuff easier and more accessible.”


Main image: Text-to-BIM: Hypar can model buildings from a written definition



Cloud workstations for CAD, BIM and visualisation
https://aecmag.com/workstations/cloud-workstations-for-cad-bim-and-visualisation/
Thu, 01 Jun 2023
How the major public cloud providers (AWS, GCP and Microsoft Azure) stack up

Using Frame, the Desktop-as-a-Service (DaaS) solution, we test 23 GPU-accelerated ‘instances’ from Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, in terms of raw performance and end user experience

If you’ve ever looked at public cloud workstations and been confused, you’re not alone. Between Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, there are hundreds of different instance types to choose from. They also have obscure names like g4dn.xlarge or NC16asT4v3, which look like you need a code to decipher.

Things get even more confusing when you dial down into the specs. Whereas desktop workstations for sale tend to feature the latest and greatest, cloud workstations offer a variety of modern and legacy CPU and GPU architectures that span several years. Some of the GCP instances, for example, offer Intel ‘Skylake’ CPUs that date back to 2016!


Gaining a better understanding of cloud workstations through their specs is only the first hurdle. The big question for design, engineering, and architecture firms is how each virtual machine (VM) performs with CAD, Building Information Modelling (BIM), or design visualisation software. There is very little information in the public domain, and certainly none that compares the performance and price of multiple VMs from multiple providers using real-world applications and datasets, and also captures the end user experience.

So, with the help of Ruben Spruijt from Frame, the hybrid and multi-cloud Desktop-as-a-Service (DaaS) solution, and independent IT consultant, Dr. Bernhard Tritsch, getting answers to these questions is exactly what we set out to achieve in this in-depth AEC Magazine article.


There are two main aspects to testing cloud workstation VMs.

  1. The workstation system performance.
  2. The real end user experience.

The ‘system performance’ is what one might expect if your monitor, keyboard, and mouse were plugged directly into the cloud workstation. It tests the workstation as a unit – and the contribution of CPU, GPU and memory to performance.

For this we use many of the same real-world application benchmarks we use to test desktop and mobile workstations in the magazine. These cover BIM (Autodesk Revit), CAD (Autodesk Inventor), real-time visualisation (Autodesk VRED Professional, Unreal Engine and Enscape), and CPU and GPU rendering (KeyShot and V-Ray).

But with cloud workstations, ‘system performance’ is only one part of the story. The DaaS remote display protocol and its streaming capabilities at different resolutions and network conditions – in other words, what happens between the cloud workstation in the datacentre and the client device – also play a critical role in the end user experience. This includes latency, which is largely governed by the distance between the public cloud datacentre and the end user, as well as bandwidth, utilisation, packet loss, and jitter.

For end user experience testing we used EUC Score, a dedicated tool developed by Dr. Bernhard Tritsch that captures, measures, and quantifies perceived end-user experience in virtual applications and desktop environments, including Frame. More on this later.


The cloud workstations

We tested a total of 23 different public cloud workstation instances from AWS, GCP, and Microsoft Azure.

Workstation testing with real-world applications is very time intensive, so we hand-picked VMs that cover most bases in terms of CPU, memory, and GPU resources.

VMs from Microsoft Azure feature Microsoft Windows 10 22H2, while AWS and GCP use Microsoft Windows Server 2019. Both operating systems support most 3D applications, although Windows 10 has slightly better compatibility.

For consistency, all instances were orchestrated and accessed through the Frame DaaS platform using Frame Remoting Protocol 8 (FRP8) to connect the end user’s browser to VMs in any of the three public clouds.

The testing was conducted at 30 Frames Per Second (FPS) in both FHD (1,920 x 1,080) and 4K (3,840 x 2,160) resolutions. Networking scenarios tested included high bandwidth (100 Mbps) with low latency (~10 ms Round Trip Time (RTT)) and low bandwidth (4, 8, and 16 Mbps) with higher latency (50-100 ms RTT), using controlled network emulation.
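As a rough, back-of-the-envelope illustration of what these scenarios mean for the remote display stream, the budget per encoded frame is simply bandwidth divided by frame rate. The figures below are arithmetic on the stated test conditions, not measured results.

```python
# Back-of-the-envelope per-frame bandwidth budget at the tested frame rate.
FPS = 30
for mbps in (4, 8, 16, 100):
    kbits_per_frame = (mbps * 1000) / FPS
    print(f"{mbps:>3} Mbps at {FPS} FPS -> ~{kbits_per_frame:,.0f} kbit (~{kbits_per_frame / 8:,.0f} KB) per frame")
```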


CPU (Central Processing Unit)

Most of the VMs feature AMD EPYC CPUs as these tend to offer better performance per core and more cores than Intel Xeon CPUs, so the public cloud providers can get more users on each of their servers to help bring down costs.

Different generations of EPYC processors are available. 3rd Gen AMD EPYC ‘Milan’ processors, for example, not only run at higher frequencies than 2nd Gen AMD EPYC ‘Rome’ processors but deliver more instructions per clock (IPC). N.B. IPC is a measure of the number of instructions a CPU can execute in a single clock cycle while the clock speed of a CPU (frequency, measured in GHz) is the number of clock cycles it can complete in one second. At time of testing, none of the cloud providers offered the new 4th Gen AMD EPYC ‘Genoa’ or ‘Sapphire Rapids’ Intel Xeon processors.
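Since single-threaded throughput scales roughly with IPC multiplied by clock frequency, a newer core at a similar clock speed can still be considerably faster. The sketch below just multiplies the two; the IPC uplift and clock figures are placeholders, not benchmark data for any specific EPYC generation.

```python
# Relative single-thread estimate: throughput ~ IPC x clock frequency.
# The numbers below are placeholders for illustration, not measured results.
def relative_single_thread(ipc: float, ghz: float) -> float:
    return ipc * ghz

older = relative_single_thread(ipc=1.00, ghz=3.3)   # baseline core
newer = relative_single_thread(ipc=1.19, ghz=3.5)   # assumed ~19% IPC gain, slightly higher clock
print(f"Estimated single-thread uplift: {newer / older - 1:.0%}")
```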

Here it is important to explain a little bit about how CPUs are virtualised in cloud workstations. A vCPU is a virtual CPU created and assigned to a VM and is different to a physical core or thread. A vCPU is an abstracted CPU core delivered by the virtualisation layer of the hypervisor on the cloud infrastructure as a service (IaaS) platform. It means physical CPU resources can be overcommitted, which allows the cloud workstation provider to assign more vCPUs than there are physical cores or threads. As a result, if everyone sharing resources from the same CPU decided to invoke a highly multi-threaded process such as ray trace rendering all at the same time, they might not get the maximum theoretical performance out of their VM.

It should also be noted that a processor can go into ‘turbo boost’ mode, which allows it to run above its base clock speed to increase performance, typically when thermal conditions allow. However, with cloud workstations, this information isn’t exposed, so the end user does not know when or if this is happening.

One should not directly compare the number of vCPUs assigned to a VM to the number of physical cores in a desktop workstation. For example, an eight-core processor in a desktop workstation not only comprises eight physical cores and eight virtual (hyper-threaded) cores for a total of 16 threads, but the user of that desktop workstation has dedicated access to that entire CPU and all its resources.


GPU (Graphics Processing Unit)

In terms of graphics, most of the public cloud instance types offer Nvidia GPUs. There are three Nvidia GPU architectures represented in this article – the oldest of which is ‘Maxwell’ (Nvidia M60), which dates back to 2015, followed by ‘Turing’ (Nvidia T4), and ‘Ampere’ (Nvidia A10). Only the Nvidia T4 and Nvidia A10 have hardware ray tracing built in, which makes them fully compatible with visualisation tools that support this physics-based rendering technique, such as KeyShot, V-Ray, Enscape, and Unreal Engine.

At time of testing, none of the major public cloud providers offered Nvidia GPUs based on the new ‘Ada Lovelace’ architecture. However, GCP has since announced new ‘G2’ VMs with the ‘Ada Lovelace’ Nvidia L4 Tensor Core GPU. Most VMs offer dedicated access to one or more GPUs, although Microsoft Azure has some VMs where the Nvidia A10 is virtualised, and users get a slice of the larger physical GPU, both in terms of processing and frame buffer memory.

AMD GPUs are also represented. Microsoft Azure has some instances where users get a slice of an AMD Radeon Instinct MI25 GPU. AWS offers dedicated access to the newer AMD Radeon Pro V520. Both AMD GPUs are relatively low-powered and do not have hardware ray tracing built in, so should only really be considered for CAD and BIM workflows.


Storage

Storage performance can vary greatly between VMs and cloud providers. In general, CAD/BIM isn’t that sensitive to read/write performance, and neither are our benchmarks, although data and back-end services in general need to be close to the VM for best application performance.

In Azure, the standard SSDs are significantly slower than the premium SSDs, so they could have an impact on workflows that are I/O intensive, such as simulation (CFD), point cloud processing or video editing. GCP offers particularly fast storage with the Zonal SSD PD, which, according to Frame, is up to three times faster than the Azure Premium SSD solution. Frame also explains that AWS Elastic Block Storage (EBS) delivers ‘very solid performance’ and a good performance/price ratio using EBS GP3.


Cloud workstation regions

All three cloud providers have many regions (datacentres) around the world and most instance types are available in most regions. However, some of the newest instance types, such as the Microsoft Azure VMs with new AMD EPYC ‘Milan’ CPUs, currently have limited regional availability.

For testing, we chose regions in Europe. While the location of the region should have little bearing on our cloud workstation ‘system performance’ testing, which was largely carried out by AEC Magazine on instances in the UK (AWS) and The Netherlands (Azure/GCP), it could have a small impact on end user experience testing, which was all done by Ruben Spruijt from Frame from a single location in The Netherlands.

In general, one should always try to run virtual desktops and applications in the datacentre closest to the end user, to keep network latency and packet loss low. However, firms also need to consider data management. For CAD and BIM-centric workflows in particular, it is important that all data is stored in the same datacentre as the cloud workstations, or that deltas are synced between a few select datacentres using global file system technologies from companies like Panzura or Nasuni.


Pricing

For our testing and analysis purposes, we used ‘on-demand’ hourly pricing for the selected VMs, averaging list prices across all regions.

A Windows Client/Server OS licence is included in the rate, but storage costs are not. It should be noted that prices in the table below are just a guideline. Some companies may get preferential pricing from a single vendor or large discounts through multi-year contracts.
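As a sanity check on what an hourly rate means in practice, the sketch below converts an on-demand price into a rough monthly figure for a typical usage pattern. The hourly rate reuses the $0.82 figure quoted later in this article; the usage hours are assumptions, and storage, data egress and support extras are deliberately ignored.

```python
# Back-of-envelope monthly cost from an on-demand hourly rate.
# The usage pattern below is an illustrative assumption, not a quoted price plan,
# and storage / egress / support costs are ignored.

hourly_rate = 0.82        # $/hour, e.g. the Azure NV6adsA10_v5 figure quoted later in this article
hours_per_day = 9         # assumed working day with the VM powered on
days_per_month = 21       # assumed working days per month

monthly_on_demand = hourly_rate * hours_per_day * days_per_month
always_on = hourly_rate * 24 * 30

print(f"Powered on only during working hours: ${monthly_on_demand:,.0f}/month")
print(f"Left running 24/7: ${always_on:,.0f}/month")
```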


Performance testing

Our testing revolved around three key workflows commonly used by architects and designers: CAD / BIM, real-time visualisation, and ray trace rendering.


CAD/BIM

While the users and workflows for CAD and Building Information Modelling (BIM) are different, both types of software behave in similar ways.

Most CAD and BIM applications are largely single threaded, so processor frequency and IPC should be prioritised over the number of cores (although some select operations are multi-threaded, such as rendering and simulation). All tests were carried out at FHD and 4K resolution.

Revit

Autodesk Revit 2021: Revit is the number one ‘BIM authoring tool’ used by architects. For testing, we used the RFO v3 2021 benchmark, which measures three largely single-threaded CPU processes – update (updating a model from a previous version), model creation (simulating modelling workflows), export (exporting raster and vector files), plus render (CPU rendering), which is extremely multithreaded. There’s also a graphics test. All RFO benchmarks are measured in seconds, so smaller is better.


Inventor Invmark


Autodesk Inventor 2023: Inventor is one of the leading mechanical CAD (MCAD) applications. For testing, we used the InvMark for Inventor benchmark by Cadac Group and TFI, which comprises several sub-tests that are either single-threaded, only use a few threads concurrently, or use lots of threads but only in short bursts. Rendering is the only test that can make use of all CPU cores. The benchmark also summarises performance by collating all single-threaded tests into a single result and all multi-threaded tests into another. All benchmarks are given a score, where bigger is better.
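Note that the two benchmarks report results in opposite directions: RFO in seconds, where smaller is better, and InvMark as a score, where bigger is better. If you want to rank VMs across both, one common approach (not something either benchmark provides) is to normalise each result against a reference machine and combine the ratios with a geometric mean. A minimal sketch, with invented numbers:

```python
# Hypothetical helper for comparing VMs across benchmarks with mixed conventions.
# Neither RFO nor InvMark ships this; it is just one common approach:
# normalise against a reference machine, then take the geometric mean.

from math import prod

def normalise(value, reference, lower_is_better):
    """Return a ratio where >1.0 means faster than the reference machine."""
    return reference / value if lower_is_better else value / reference

def composite(ratios):
    """Geometric mean of per-test ratios."""
    return prod(ratios) ** (1 / len(ratios))

# Illustrative numbers only (not measured results from this article).
vm_results = [
    normalise(value=310, reference=350, lower_is_better=True),    # RFO-style test, seconds
    normalise(value=95, reference=120, lower_is_better=True),     # RFO-style export, seconds
    normalise(value=1350, reference=1000, lower_is_better=False), # InvMark-style score
]
print(f"Composite vs reference machine: {composite(vm_results):.2f}x")
```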


Ray-trace rendering

The tools for physically-based rendering, a process that simulates how light behaves in the real world to deliver photorealistic output, have changed a lot in recent years. The compute-intensive process was traditionally carried out by CPUs, but more and more tools now use GPUs instead. GPUs tend to be faster, and modern GPUs feature dedicated processors for ray tracing and AI (for ‘denoising’) to accelerate renders even more. CPUs still have the edge in terms of being able to handle larger datasets, and some CPU renderers also offer better quality output. For ray trace rendering, it’s all about the time it takes to render. Higher resolution renders use more memory. For GPU rendering, 8 GB should be an absolute minimum, with 16 GB or more needed for larger datasets.

Chaos Group V-Ray: V-Ray is one of the most popular physically-based rendering tools, especially in architectural visualisation. We put the VMs through their paces using the V-Ray 5 benchmark, testing both V-Ray GPU (Nvidia RTX) and V-Ray CPU. V-Ray GPU is not compatible with AMD GPUs. Bigger scores are better.


V-Ray 5.2


Luxion KeyShot: this CPU rendering stalwart, popular with product designers, is a relative newcomer to the world of GPU rendering. But it’s one of the slickest implementations we’ve seen, allowing users to switch between CPU and GPU rendering at the click of a button. Like V-Ray, its GPU mode is currently only compatible with Nvidia GPUs and benefits from hardware ray tracing. For testing, we used the KeyShot 11 CPU and GPU benchmark, part of the free KeyShot Viewer. Bigger scores are better.


Real-time visualisation

The role of real-time visualisation in design-centric workflows continues to grow, especially among architects where tools like Enscape, Twinmotion and Lumion are used alongside Revit, Archicad, SketchUp and others. The GPU requirements for real time visualisation are much higher than they are for CAD/BIM.

Performance is typically measured in frames per second (FPS), where anything above 20 FPS is considered OK. Anything less and it can be hard to position models quickly and accurately on screen.

There’s a big benefit to working at higher resolutions. 4K reveals much more detail, but places much bigger demands on the GPU – not just in terms of graphics processing, but GPU memory as well. 8 GB should be an absolute minimum with 16 GB or more needed for larger datasets, especially at 4K resolution.

Real time visualisation relies on graphics APIs for rasterisation, a rendering method for 3D software that takes vector data and turns it into pixels (a raster image).

Some of the more modern APIs like Vulkan and DirectX 12 include real-time ray tracing. This isn’t necessarily at the same quality level as dedicated ray trace renderers like V-Ray and KeyShot, but it’s much faster. For our testing we used three relatively heavy datasets, but don’t take our FPS scores as gospel. Other datasets will be less or more demanding.

Enscape 3.1: Enscape is a real-time visualisation and VR tool for architects that uses the Vulkan graphics API and delivers very high-quality graphics in the viewport. It supports ray tracing on modern Nvidia and AMD GPUs. For our tests we focused on rasterisation only, measuring real-time performance in terms of FPS using the Enscape 3.1 sample project.


Enscape


Autodesk VRED Professional 2023: VRED is an automotive-focused 3D visualisation and virtual prototyping tool. It uses OpenGL and delivers very high-quality visuals in the viewport. It offers several levels of real-time anti-aliasing (AA), which is important for automotive styling, as it smooths the edges of body panels. However, AA calculations use a lot of GPU resources, both in terms of processing and memory. We tested our automotive model with AA set to ‘off’, ‘medium’, and ‘ultra-high’, recording FPS.

Unreal Engine 4.26: Over the past few years Unreal Engine has established itself as a very prominent tool for design viz, especially in architecture and automotive. It was one of the first applications to use GPU-accelerated real-time ray tracing, which it does through Microsoft DirectX Raytracing (DXR).

For benchmarking we used the Automotive Configurator from Epic Games, which features an Audi A5 convertible. The scene was tested with DXR enabled and disabled (DirectX 12 rasterisation).


Unreal Engine


Benchmark findings

For CAD and BIM

Processor frequency (GHz) is very important for performance in CAD and BIM software. However, as mentioned earlier, you can’t directly compare different processor types by frequency alone.

For example, in Revit 2021 and Inventor 2023 the 2.45 GHz AMD EPYC 7V12 – Rome (Azure NV8as_v4) performs better than the 2.6 GHz Intel Xeon E5-2690v3 – Haswell (Azure NV6_v3 & Azure NV12_v3) because it has a more modern CPU architecture and can execute more instructions per clock (IPC).

The 3.2 GHz AMD EPYC 74F3 – Milan processor offers the best of both worlds – high frequency and high IPC, thanks to AMD’s Zen 3 architecture. It makes the Azure NVadsA10 v5-series (Azure NV6adsA10_v5 / Azure NV12adsA10_v5 / Azure NV36adsA10_v5) the fastest cloud workstations for CPU-centric CAD/BIM workflows, topping our table in all the single-threaded or lightly-threaded Revit and Inventor tests.

Taking a closer look at the results from the Azure NVadsA10 v5-series, the entry-level NV6adsA10_v5 VM lagged a little behind the other two in some Revit and Inventor tests. This is not just down to having fewer vCPUs – 6 versus 12 (Azure NV12adsA10_v5) and 36 (Azure NV36adsA10_v5). It was also slower in some single-threaded operations. We imagine there may be a little bit of competition between the CAD software, Windows, and the graphics card driver (remember, 6 vCPUs is not the same as 6 physical CPU cores, so there may not be enough vCPUs to run everything at the same time). There could also possibly be some contention from other VMs on the same server.


System performance benchmark scores for CAD / BIM workflows

Despite this, the 6 vCPU Azure NV6adsA10_v5 instance with 55 GB of memory still looks like a good choice for some CAD and BIM workflows, especially considering its $0.82 per hour price tag.

We use the word ‘some’ here, as unfortunately it can be held back by its GPU. The Nvidia A10 4Q virtual GPU only has 4 GB of VRAM, which is less than most of the other VMs on test. This appears to limit the size of models or resolutions one can work with.

For example, while the Revit RFO v3 2021 benchmark ran fine at FHD resolution, it crashed at 4K, reporting a ‘video driver error’. We presume this crash was caused by the GPU running out of memory, as the benchmark ran fine on Azure NV12adsA10_v5, with the 8 GB Nvidia A10-8Q virtual GPU. Here, it used up to 7 GB at peak. This might seem like a lot of GPU memory for a CAD/BIM application, and it certainly is. Even Revit’s Basic and Advanced sample projects both use 3.5 GB at 4K resolution in Revit 2021. But this high GPU memory usage looks to have been addressed in more recent versions of the software. In Revit 2023, for example, the Basic sample project only uses 1.3 GB and the Advanced sample project only uses 1.2 GB.

Interestingly, this same ‘video driver error’ does not occur when running the Revit RFO v3 2021 benchmark on a desktop workstation with a 4 GB Nvidia T1000 GPU, or with Azure NV8as v4, which also has a 4 GB vGPU (1/4 of an AMD Radeon Instinct MI25). As a result, we guess it might be a specific issue with the Nvidia virtual GPU driver and how that handles shared memory for “overflow” frame buffer data when dedicated graphics memory runs out.

AWS G4ad.2xlarge looks to be another good option for CAD/BIM workflows, standing out for its price/performance. The VM’s AMD Radeon Pro V520 GPU delivers good performance at FHD resolution but slows down a little at 4K, more so in Revit than in Inventor. It includes 8 GB of GPU memory, which should be plenty to load up the most demanding CAD/BIM datasets. However, with only 32 GB of system memory, those working with the largest Revit models may need more.

As CAD/BIM is largely single threaded, there is an argument for using a 4 vCPU VM for entry-level workflows. AWS G4ad.xlarge, for example, is very cost effective at $0.58 per hour and comes with a dedicated AMD Radeon Pro V520 GPU. However, with only 16 GB of RAM it will only handle smaller models and with only 4 vCPUs expect even more competition between the CAD software, Windows and graphics card driver.

It’s important to note that throwing more graphics power at CAD or BIM software won’t necessarily increase 3D performance. This can be especially true at FHD resolution when 3D performance is often bottlenecked by the frequency of the CPU. For example, AWS G4ad.2xlarge and AWS G5.2xl both feature the same AMD EPYC 7R32 – Rome processor and have 8 vCPU. However, AWS G4ad.2xlarge features AMD Radeon Pro V520 graphics while AWS G5.2xl has the much more powerful Nvidia A10G.

At FHD resolution, the Nvidia A10G is dramatically faster than the AMD Radeon Pro V520 in viz software (more than 3 times faster in Autodesk VRED Professional, for example) but there is very little difference between the two in Revit. However, at 4K resolution in Revit, the Nvidia A10G pulls away as more demands are being placed on the GPU, thus exposing some of the potential limitations of the AMD Radeon Pro V520.

Finally, Azure NC8asT4_v3 or AWS G4dn.2xlarge could be interesting options for workflows that involve using Revit alongside visualisation applications like Enscape, Lumion and Twinmotion. We found the Nvidia T4 GPU delivered good performance in those apps at FHD resolution, but not at 4K where things slow down. However, as they both have slower CPUs, general application performance will not be as good as it is with AWS G4ad.2xlarge or Azure NV6adsA10_v5.


Visualisation with GPUs

For real-time viz, a high-performance GPU is essential, and while many workflows don’t need lots of vCPUs, those serious about design visualisation often need plenty of both CPU and GPU resources.

It’s easy to rule out certain VMs for real-time visualisation. Some simply don’t have sufficient graphics power to deliver anywhere near the desired 20 FPS in our tests. Others may have enough performance for FHD resolution or for workflows where real-time ray tracing is not required.

For entry-level workflows at FHD resolution, consider the Azure NV12adsA10_v5. Its Nvidia A10 8Q GPU has 8 GB of frame buffer memory which should still be enough for small to medium sized datasets displayed at FHD resolution. The Azure NV6_v3 and Azure NV12_v3 (both Nvidia M60) should also perform OK in similar workflows, but these VMs will soon be end of life. None of these VMs are suitable for GPU ray tracing.

For resolutions approaching 4K, consider VMs with the 16 GB Nvidia T4 (Azure NC4asT4_v3, Azure NC8asT4_v3, Azure NC16asT4_v3, AWS G4dn.xlarge, AWS G4dn.2xlarge, AWS G4dn.4xlarge). All of these VMs can also be considered for entry-level GPU ray tracing.

For top-end performance at 4K resolution, consider VMs with the 24 GB Nvidia A10, including the AWS G5.xlarge, AWS G5.2xlarge, AWS G5.4xlarge, AWS G5.8xlarge and Azure NV36adsA10_v5. Interestingly, while all of these VMs offer similar performance in V-Ray and KeyShot, the AWS instances are notably faster in real-time workflows. We don’t know why this is.

The AWS G4dn.12xlarge is also worth a mention, as it is the only VM we tested that features multiple GPUs (4 x Nvidia T4). While this helps deliver more performance in GPU renderers (KeyShot and V-Ray GPU), it has no benefit for real-time viz, with VRED Professional, Unreal and Enscape only able to use one of the four GPUs.

Finally, it’s certainly worth checking out GCP’s new G2 VMs with ‘Ada Lovelace’ Nvidia L4 GPUs, which entered general availability on May 9 2023. While the Nvidia L4 is nowhere near as powerful as the Nvidia L40, it should still perform well in a range of GPU visualisation workflows, and with 24 GB of GPU memory it can handle large datasets. Frame will be testing this instance in the coming weeks.

While benchmarking helps us understand the relative performance of different VMs, it doesn’t consider what happens between the datacentre and the end user

As mentioned earlier, 3D performance for real time viz is heavily dependent on the size of your datasets. Those that work with smaller, less complex product / mechanical design assemblies or smaller / less realistic building models may find they do just fine with lower spec VMs. Conversely if you intend to visualise a city scale development or highly detailed aerospace assembly then it’s unlikely that any of the cloud workstation VMs will have enough power to cope. And this is one reason why some AEC firms that have invested in cloud workstations for CAD/BIM and BIM-centric viz workflows prefer to keep high-end desktops for their most demanding design viz users.


System performance benchmark scores for Design Viz workflows

Visualisation with CPUs

The stand-out performers for CPU rendering are quite clear, with the Azure NV36adsA10_v5 with 36 vCPUs and AWS G4dn.12xlarge with 48 vCPUs delivering by far the best results in V-Ray, KeyShot and the renderers built into Revit and Inventor.

Interestingly, even though the AMD EPYC 74F3 – Milan processor in the Azure NV36adsA10_v5 has 12 fewer vCPUs than the Intel Xeon 8259 – Cascade Lake in the AWS G4dn.12xlarge, it delivers better performance in some CPU rendering benchmarks due to its superior IPC. However, the Azure VM also comes with a colossal 440 GB of system memory, so you may be paying for resources you simply won’t use.

Of course, these high-end VMs are very expensive. Those with fewer vCPUs can also do a job but you’ll need to wait longer for renders. Alternatively, work at lower resolutions to prep a scene and offload production renders and animations to a cloud render farm.


Desktop workstation comparisons

It’s impossible to talk about cloud workstations without drawing comparisons with desktop workstations, so we’ve included results from a selection of machines we’ve reviewed over the last six months. Some of the results are quite revealing, though not that surprising (to us at least).

In short, desktop workstations can significantly outperform cloud workstations in all different workflows. This is for a few reasons.

  1. Desktop workstations tend to have the latest CPU and GPU technologies. 13th Gen Intel Core processors, for example, have a much higher IPC than anything available in the cloud, and Nvidia’s new ‘Ada Lovelace’ GPUs are only now starting to make an appearance in the cloud. However, the cloud-hosted Ada GPUs are single-slot cards and not as powerful as the dual-slot desktop Nvidia RTX 6000 Ada Generation.
  2. Users of desktop workstations have access to a dedicated CPU, whereas users of cloud workstations are allocated part of a CPU, and those CPUs tend to have more cores, so they run at lower frequencies.
  3. Desktop workstation CPUs have much higher ‘Turbo’ potential than cloud workstation CPUs. This can make a particularly big difference in single threaded CAD applications, where the fastest desktop processors can hit frequencies of well over 5.0 GHz.

Desktop workstations can significantly outperform cloud workstations in all different workflows, but to compare them on performance alone would be missing the point entirely

Of course, to compare cloud workstations to desktop workstations on performance alone would be missing the point entirely. Cloud workstations offer AEC firms many benefits. These include global availability, simplified and accelerated onboarding/offboarding, the ability to scale resources up and down on demand, centralised desktop and IT management, built-in security with no data on the end-user PC, lower CapEx costs, data resiliency, data centralisation, easier disaster recovery (DR) and the built-in ability to work from anywhere, to name but a few. But this is the subject for a whole new article.


End user experience testing

While benchmarking helps us understand the relative performance of different VMs, it doesn’t consider what happens between the datacentre and the end user. Network conditions, such as bandwidth, latency and packet loss can have a massive impact on user experience, as can the remoting protocol, which adapts to network conditions to maintain a good user experience.

EUC Score is a dedicated tool developed by Dr. Bernhard Tritsch that captures, measures and quantifies perceived end-user experience in remote desktops and applications. By capturing the real user experience in a high-quality video on the client device of a 3D application in use, it shows what the end user is really experiencing and puts it in the context of a whole variety of telemetry data. This could be memory, GPU or CPU utilisation, remoting protocol statistics or network insights such as bandwidth, network latency or the amount of compression being applied to the video stream. The big benefit of the EUC Score Sync Player is that it brings telemetry data and the captured real user experience video together in a single environment.


Figure 1: The EUC Score Sync Player interface

When armed with this information, IT architects and directors can get a much better understanding of the impact of different VMs / network conditions on end user experience, and size everything accordingly. In addition, if a user complains about their experience, it can help identify what’s wrong. After all, there’s no point in giving someone a more powerful VM, if it’s the network that’s causing the problem or the remoting protocol can’t deliver the best user experience.

For EUC testing, we selected a handful of different VMs from our list of 23. We tested our 3D apps at FHD and 4K resolution using a special hardware device that simulates different network conditions.

The results are best absorbed by watching the captured videos and telemetry data, which can all be seen on the Frame website.

The EUC Score Sync Player can display eight different types of telemetry data at the same time, which is why there are several different views of the data. The generic ‘Frame’ recordings are a good starting point, but you can also dig down into more detail in ‘CPU’ and ‘GPU’.

When watching the recordings, here are some things to look out for. Round-trip latency is important: when this is high (anything over 100 ms), it can take a while for the VM to respond to mouse and keyboard input and for the stream to come back. Any delay can make the system feel laggy and make it hard to position 3D models quickly and accurately on screen. And if you keep overshooting, it can have a massive impact on modelling productivity.
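To put the 100 ms figure in context, the rough sum below adds up the main contributors to ‘click-to-photon’ delay in a remoted 3D session. Only the network round trip comes from the text above; the encode, decode, render and display figures are assumptions for illustration and depend on the protocol, codec and hardware in use.

```python
# Rough click-to-photon latency budget for a remoted 3D viewport.
# Encode/decode/render/display numbers are illustrative assumptions only.

network_rtt_ms = 100      # round-trip latency discussed above
encode_ms = 8             # assumed server-side video encode time per frame
decode_ms = 6             # assumed client-side decode time
render_ms = 17            # one frame at roughly 60 FPS on the VM
display_ms = 8            # assumed client display / compositor delay

total = network_rtt_ms + encode_ms + decode_ms + render_ms + display_ms
print(f"Approximate input-to-screen delay: {total} ms")
# Anything much beyond ~100 ms end-to-end starts to feel laggy when orbiting a 3D model.
```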

In low-bandwidth conditions (anything below 8 Mbps), and especially with higher latency, the video stream might need to be heavily compressed. As this compression is ‘lossy’ and not ‘lossless’, it can cause visual compression artefacts, which is not ideal for precise CAD work. That said, the Frame Remoting Protocol 8 (FRP8) Quality of Service engine does a great job and resolves to full high quality once you stop moving the 3D model around. Compression might be more apparent at 4K resolution than at FHD, as there are four times as many pixels, meaning much more data to send.

Frame automatically adapts to network conditions to maintain interactivity. EUC Score not only gives you a visual reference to this compression by recording the user experience, but it quantifies the amount of compression being applied

Frame, like most graphics-optimised remoting protocols, will automatically adapt to network conditions to maintain interactivity. EUC Score not only gives you a visual reference to this compression by recording the user experience, but it also quantifies the amount of compression being applied by FRP8 to the video stream through a metric called Quantisation Priority (QP). The lower the number, the fewer visual compression artefacts you will see. The lowest value is 12, which appears visually lossless to the end user; the highest is 50, which is extremely blurry.


Figure 2: Enscape scene showing heavy compression on a network with 2 Mbps bandwidth and 200 ms RTT latency (visual artefacts particularly noticeable on the concrete slab and on edges)

Figure 3: The same scene with no compression

Visual compression should not be confused with Revit’s ‘simplify display during view navigation’ feature that suspends certain details and graphics effects to maintain 3D performance. In the EUC Score player you can see this in action with textures and shadows temporarily disappearing when the model is moving. In other CAD tools this is known as Level of Detail (LoD).

The recordings can also give some valuable insight into how much each application uses the GPU. Enscape and Unreal Engine, for example, utilise 100% of GPU resources so you can be certain that a more powerful GPU would boost 3D performance (in Unreal Engine, EUC Score records this with a special Nvidia GPU usage counter).

Meanwhile, GPU utilisation in Revit and Inventor is lower, so if your graphics performance is poor or you want to turn off LoD you may be better off with a CPU with a higher frequency or better IPC than a more powerful GPU.

To help find your way around the EUC Score interface, see Figure 1 above. In Figures 2 and 3, we show the impact of network bandwidth on visual compression artefacts. This could happen at a firm that does not have sufficient bandwidth to support tens, hundreds, or thousands of cloud workstation users, or at home, when the kids come home from school and all start streaming Netflix.


EUC Score test results

View the captured videos and telemetry data recordings on the Frame website.


Conclusion

If you’re an AEC firm looking into public cloud workstations for CAD, BIM or design visualisation, we hope this article has given you a good starting point for your own internal testing, something we’d always strongly recommend.

There is no one-size-fits-all for cloud workstations, and some of the instances we’ve tested make no sense for certain workflows, especially at 4K resolution. This isn’t just about applications. While it’s important to understand the demands of different tools, dataset complexity and size can also have a massive impact on performance, especially with 3D graphics at 4K resolution. What’s good for one firm might not be good for another.

Also be aware that some of the public cloud VMs are much older than others. If you consider that firms typically upgrade their desktop workstations every 3 to 5 years, a few are positively ancient. The great news about the cloud is that you can change VMs whenever you like. New machines come online, and prices change, as do your applications and workflows. But unlike desktops, you’re not stuck with a purchasing decision.

While AEC firms will always be under pressure to drive down costs, performance is essential. A slow workstation can have a massive negative impact on productivity and morale, even worse if it crashes. Make sure you test, test and test again, using data from your own real-world projects.

For more details, insights or advice, feel free to contact Ruben Spruijt or Bernhard Tritsch.


What is Frame? The Desktop-as-a-Service (DaaS) solution

Frame is a browser-first, hybrid and multi-cloud, Desktop-as-a-Service (DaaS) solution.

Frame utilises its own proprietary remoting protocol, based on WebRTC/H.264, which is well-suited to handling graphics-intensive workloads such as 3D CAD.

With Frame, firms can deliver their Windows ‘office productivity’, videoconferencing, and high-performance 3D graphics applications to users on any device with just a web browser – no client or plug-in required.

The Frame protocol delivers audio and video streams from the VM, and keyboard / mouse events from the end user’s device. It supports up to 4K resolution, up to 60 Frames Per Second (FPS), and up to four monitors, as well as peripherals including the 3Dconnexion SpaceMouse, which is popular with CAD users.

Frame provides firms with flexibility as the platform supports deployments natively in AWS, Microsoft Azure, and GCP as well as on-premise on Nutanix hyperconverged infrastructure (HCI).

Over 100 public cloud regions and 70 instance types are supported today, including a wide range of GPU-accelerated instances (Nvidia and AMD).

Everything is handled through a single management console and, in true cloud fashion, it’s elastic, so firms can automatically provision and de-provision capacity on-demand.


This article is part of AEC Magazine’s Workstation Special report


Reality modelling helps streamline Hinkley Point C construction

Topcon ClearEdge3D Verity software used to verify the complex marine and tunnelling work for UK nuclear power plant

The UK currently generates around 15 to 20 per cent of its electricity from nuclear energy. To support the UK Government’s commitment to reach Net Zero emissions by 2050, Hinkley Point C in Somerset is under construction. This is the first new nuclear power station to be built in the UK in over 20 years and will provide low-carbon electricity for around 6 million homes.

The electricity generated by its two EPR reactors will offset 9 million tonnes of carbon dioxide emissions a year, or 600 million tonnes over its 60-year lifespan.

As part of the project, Balfour Beatty is responsible for delivering the complex marine and tunnelling works, and constructing the structures for the critical infrastructure needed to supply cooling water to the power station. This project involves the construction of three tunnels under the Bristol Channel, with offshore concrete heads allowing sea water to pass into the tunnels.

Working within tight tolerances

Through the delivery stages, from design to offshore execution, precision was vital, with very tight construction tolerances required. Once complete, the system will be connected to the seabed via vertical shafts, capped with intake and outfall heads.

The intake structures are 44 metres long, which is roughly the length of four double-decker buses, and around eight metres high, weighing more than 5,000 tonnes each.

The structures were constructed at a purpose-built facility at Balfour Beatty’s site in Avonmouth, Bristol. Large steel alignment frames were then installed on top of the heads to enable future lifting and piling operations.

Lifting lugs were cast into the reinforced concrete heads and these were then matched against bespoke handling frames. The accuracy of the fit was critical due to the 5mm tolerance available for alignment, ultimately allowing for the installation of the lifting pins and the subsequent safe offshore lifts.

Tom Bush, Digital Project Delivery Coordinator at Balfour Beatty, explained: “It’s no surprise that using cranes to rotate and position the large fabricated structures on to the concrete heads is an incredibly challenging task, and we didn’t have any room for error.

“While we were constructing the concrete heads, fabricators were building the alignment frames. With such a small tolerance on either side of the lifting lugs, we needed to ensure the data and measurements we were giving were accurate – with Topcon’s ClearEdge3D Verity software, we were able to do that.”


Verification of works

Topcon’s ClearEdge3D Verity software can be used to compare point cloud data with design and fabrication models for verification of work. Balfour Beatty used the software to compare real-time data being supplied by the survey team on-site against initial drawings, to ensure the lifting lugs were aligned with the tolerance available. Inaccuracies were discovered during the first comparisons, and so changes were fed back to the fabricators and rectified early on.

Topcon’s software was also used to run several scenarios and create a digitally accurate approach that saved Balfour Beatty time and money, as well as strengthening health and safety precautions. The software translated data collected on-site into a digital model, providing accurate demonstrations of the rotations and twists of the installed lifting lugs on each of the heads, with immediate access to the latest data and comprehensive digital display models helping to streamline the process.

“Being able to accurately verify the position of each individual lifting lug on each of the concrete heads through Verity allowed us to provide detailed as-built information, within a short period of time and remove the risk of expensive or time-consuming errors taking place when it came to fabricating and fitting the alignment frames. This was key to enable the project to keep on programme,” added Bush.


Meanwhile, learn about the use of IFC at Hinkley Point C in this AEC Magazine article by Tim Davies, digital engineering manager, BYLOR Joint Venture (JV) – Hinkley Point C

Blue Ocean AEC – next generation BIM
This ‘stealth’ BIM software developer aims to automate 60% to 70% of the layout of a building

Next-generation BIM developers are developing tools that automate the production of detail design models and downstream drawing production. While there are many ways to get there, the end-goal looks to be the same: models and drawings produced in minutes, not months

Momentum in the BIM 2.0 development community continues to grow. This month, industry veteran Martin (Marty) Rozmanith contacted AEC Magazine to give us a sneak peek of the next-generation BIM technology that he’s working on. This is highly unusual, since many folks prefer to stay in stealth mode. But perhaps it’s a sign of the times: the race for the soul of the BIM market is now truly underway.

Resumes can be dull, but Rozmanith has a solid background and has spent his career at the forefront of AEC tool development. He was technical product manager at Autodesk for Architectural Desktop between 1996 and 1999. He then left Autodesk to join a start-up called Revit Technology as director of product management.

With Autodesk’s 2002 acquisition of Revit, he was sucked back into the mothership, where he spent another three years, before leaving for a series of roles in manufacturing-in-construction startup companies focused on housing affordability and sustainability.

Next, Rozmanith joined Dassault Systèmes in 2012, at a time when the company was trying to build an AEC team. There, he put in a ten-year stint as sales director and then strategy director of Construction/Cities.

In other words, having been in the business for so long, Marty Rozmanith knows the industry back to front — and he believes that a small, motivated software team can bring new value and new applications to market faster.

In fact, his frustration with the glacial pace at which big software firms tend to operate has led him to join a smaller company as its chief technology officer (CTO). That company is Atlanta, Georgia-based Green Building Holdings, which began life as a decarbonisation consultancy firm some fifteen years ago. Today, its core business is LEED, BREAM and energy modelling services. It is headed up by Charlie Cichetti, one of the 300 LEED fellows within the 300,000-strong community of LEED certified professionals in the US.

Green Building Holdings is a group of companies, which also includes consultancy Sustainable Investment Group (SIG) and Blue Ocean Sustainability, a green building software lab focused on measuring carbon in construction. The latter is led by Kristina Bach, the first reviewer that the US Green Building Council hired to review actual projects. Within the wider group, there is also a training company called GBES (Green Building Education Services), which has trained some 150,000 of the aforementioned 300,000 US-based LEED professionals.

“One of the reasons I joined this team was they had started a competency in software and they had consultants and actual client projects that needed software that didn’t exist yet,” Rozmanith explains. “The other reason is that one of the co-investors in the holding company Green Building Holdings, I’ve known for thirty years now. Arol Wolford was one of the board members of Revit. He set up Construction Market Data back in the day and sold it to Reed Elsevier. He’s now pretty-well known in the capital space for AEC in North America.”

Blue Ocean AEC

After spending much of his career working at large corporates, what was the attraction of creating a start-up at this stage? I ask Rozmanith.

“I was talking with Charlie for a year, and he wanted to expand the mission of Blue Ocean and do something interesting. We are looking to address design, construction and eventually operation, but my focus this year is really working on the design side and starting a new brand, Blue Ocean AEC. We’re different because we create software by architects for architects,” he replies.

“From my experience, big software companies do consultative selling. They go in front of the customer, ask a bunch of questions, and try to find the hot burning problem that they can use to extract money from the customer there and then,” he continues.

And that’s always kind of fine. It’ll drive revenue once you have a mature product. But when you don’t have a mature product, it will just kill your customers, because nobody will want to work with you if that’s your thinking. Big firms are really focused on trying to drive revenue in the short term, because obviously, once they’re publicly traded, that’s what you’ve got to do. I think I’d sooner be at a private company, where we can kind of do the right thing.”

His plan is to replicate the go-to-market strategy used by Revit right at the start. In other words, get the first dozen customers and send them into bat for you, in different parts of the country. These initial customers act as ‘lighthouses’ that attract other firms and act as an advisory board to the company. “We can have an open conversation with them about what problems they are trying to solve and what are their priorities — because architects are willing to do that,” he says.



Talking automation

After talking about Swapp, Hypar and Augmenta, and the potential impact of AI, I ask what kind of design tools Rozmanith is looking to create. His response: the automation of drawings and design.

“I’m thinking about doing this procedurally first, because AI, although interesting, is somewhat unpredictable,” he explains. “In a creative sense, it’s kind of fun, but one of the things that comes to mind is something Patrick Mays [former CIO of NBBJ] always said: ‘An architect is a special kind of lawyer who makes contracts with drawings in addition to words.’ At the end of the day, when you must be paid to make a contract like that, you kind of want automation that’s predictable. I agree that AI is going to have a big impact on the market, but it’s not the only way to automate.”

Blue Ocean AEC looks to combine data and recipes from old Revit projects with new conceptual tools, in order to meet client briefs and then quickly and automatically create new detailed Revit models

He recently attended the Design Futures Council 2023 event in La Jolla, California and there was a lot of talk about change among the attendees: that architects need to get away from pricing in hours, that they need to change the business model, use automation, learn to de-risk and so on. But all the people who were real change agents back in his early Revit days are the establishment now, he observes. “I have much better conversations at smaller firms than I do at big firms, because a lot of what’s coming next is going to hit architects in design and it’s not going to be optional to deal with it.”

Software development today is a “buy, build or partner” decision, according to Rozmanith. He’s choosing the partnership approach. “We’re a lab-to-market software house that isn’t really trying to push a box of something. We want you to be able to automate what you do, but we have some special tribal knowledge about that fifteen years’ worth of Revit data on your servers. And we can leverage that to automate what basically takes up 85% to 90% of your fee.”

Back in 2021, AEC Magazine covered Rozmanith’s presentation about a Dassault Systèmes project for Bouygues. Here, a new process reduced the design time by more than 50%.

“This required a lot of pre-planning to rethink how the building can be done using modules. And you figure out how those modules are assembled, which requires an entire team that doesn’t work on projects, like an elevator manufacturer would. And then you have some simpler tool that does general arrangement,” he observes.

“There’s an abstraction of the complicated, with blocks moving around in the general arrangement. Once you’ve resolved the general arrangement, you use that relationship to create the LOD 400 building model, based on what’s in the detailed modules. And that’s how they resolved getting a fabrication level model to go and execute the project.”

It’s an interesting idea, he says, “because not everyone needs to have to learn all the complicated stuff in Catia.” Obviously, you would still need a Catia team, but a wider group would benefit from using it.

But he sees a couple of problems with this approach. “If you’re the contractor, you could understand this and how this could make things faster. If you’re an architect, basically, none of the stuff you have right now helps you. Basically, you’ve got to throw everything away, rethink how buildings are done, along means and methods lines, which in the US, architects are prohibited from touching.”

In effect, only a contractor could do this, he observes, which is why Dassault Systèmes started with France-based Bouygues. Either way, architects get left out of the process, “unless they get hired here under the general arrangement.”

The second issue, as he sees it, is that you have to figure out modules ahead of time. “You can’t really use this for an engineer-to-order construction process, which is 95% of what’s going on right now. You can only use it for configure-to-order. And you really have to rethink how buildings work, like Katerra would do to make them configure-to-order. There’s a lot of constraint and everyone’s had a lot of upfront work.”

In the meantime, developers have been getting funding for tools that handle the general arrangement aspects, such as Spacemaker.

Perkins and Will came up with Massformer. LendLease had Podium.

“There’s a theme here,” he laughs, “but I think they didn’t kind of get the big idea. Because you can’t do the contract from these tools. They don’t produce any data that you can use. If you create a layout in Spacemaker, go get a bunch of cubes in Revit, then you have to figure out how the building’s going to work. What I’m working on is similar in concept to what I originally described, but the idea is you use the simple layouts to generate the LOD 350 models, so you can do automated construction drawings.”



Skema.site

As the front end for design, Rozmanith and team at Blue Ocean AEC have developed a cloud-based tool called Skema. If you take the example of a parking garage, the creation of a 2D layout would be driven by where its parking spaces and aisles lie. There’s a simple app that helps the user figure out where they want the ramp, in which direction it should go, where spots for disabled drivers should be situated, and so on.

“And then it generates a 3D building based off of that, all at LOD 350,” he says. “We have a link to Cove.tool to do analysis. It has the same kind of Element Classification you would find in IFC or Revit. You can assign costs, if you are a contractor, for reports. But the main thing is that we manage that LOD 350 model in the systematic generation at its back end, so it has one-to-one correspondence with Revit.”

That means it can be pushed into a Revit model so that it’s well structured. “It would act just as if you had created it in Revit. You can change it, you can do whatever you want with it. And that’s the starting point for then doing automated construction documents,” he says.

“We’re showing this just as a concept in the early phases, but as we create the detailed data, we will have other analysis engines that get increasingly accurate as we get higher LOD. The main part of the system is figuring out how space should fit into a volume. Whether you sketch that volume, or you create that volume with some other Grasshopper-driven tool or something like that, we don’t really care.”

The original team that wrote this code (www.kreo.net) was located in Belarus with headquarters in London. When the Ukrainian war broke out, the Belarus team relocated to Poland. Currently, the team members of Blue Ocean AEC’s R&D partnership are in London, Poland, and the U.S. They are working on abstracting the historical layout data from an architect’s past Revit files, over 15 years’ worth, and fitting those into the actual volumes created via an adjacency matrix in Skema, while altering them parametrically to fit whatever shape future projects may take.

This seems not dissimilar to Finch3D, but maintains LOD 350. Rozmanith explains further: “If you are designing a school, we’ll look at three or four schools from your past school designs and capture the arrangements from Revit. We basically farm the data out of the old Revit projects to create these layouts that are abstracted into 2D, then use Skema to go and take those arrangements and fit it into whatever site is being worked on.”
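The ‘adjacency matrix’ mentioned above can be pictured as a simple table of which space types should sit next to each other. The sketch below is a generic illustration of the concept only; it is not Skema’s data model or file format, and the space names are invented.

```python
# Generic illustration of a room adjacency matrix of the kind used to drive automated
# layout. Conceptual only - this does not reflect Skema's actual implementation.

spaces = ["classroom", "corridor", "wc", "plant"]

# 1 = these space types should be adjacent, 0 = no requirement.
adjacency = {
    ("classroom", "corridor"): 1,
    ("wc", "corridor"): 1,
    ("plant", "corridor"): 0,
    ("classroom", "wc"): 0,
}

def required_adjacencies(space):
    """List the space types that must sit next to the given space."""
    out = []
    for (a, b), required in adjacency.items():
        if required and space in (a, b):
            out.append(b if a == space else a)
    return out

for s in spaces:
    print(f"{s}: must be adjacent to {required_adjacencies(s) or 'nothing'}")
```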

The idea behind Blue Ocean AEC’s Skema is that it will speed up time and deliver multiple alternatives for the user to explore. When they narrow it down, they can go and look at the detailed buildings that are generated in literally minutes, reconstituting a new model out of previous Revit data. Models will be a combination of layouts that they’ve done, and some blank space blocked out for elements that need to exist, but haven’t been laid out yet.

“We’re not using AI for this part. This is all procedural, it’s very predictable,” says Rozmanith. “Our goal is to automate 60% to 70% of the layout of the building, so that we get rid of the boring stuff, leaving the lobby and the signature spaces and the atrium and all that stuff, which you as an architect really want to work on.”

Conclusion

There is undoubtedly an opportunity to improve productivity in AEC and there are now many developers looking to automate design and drawing production. But from talking to them all, there seem to be many ways to skin this particular cat.

Blue Ocean AEC may be some way away from bringing its product to market but it looks to combine data and recipes from old Revit projects with new conceptual tools, in order to meet client briefs and then quickly and automatically create new detailed Revit models. The recipes for these stay in Skema.site, where they can be edited and regenerated in Revit on demand.

The company’s green expertise will also make some appearance in the process, to further refine designs to meet ever-stringent sustainability goals. Rozmanith doesn’t look to replace Revit, but instead use it as a documentation engine. And by keeping all that data at LOD 350, this will save a lot of work.

We hope to have Rozmanith and his development team at Blue Ocean AEC join us in London on 20 June at NXT BLD and NXT DEV.


The trouble with offsite

One of the drivers for next-generation BIM is the need to connect design to fabrication. But today’s BIM systems were never intended to do that. And while developing a tool that can handle ’loosey-goosey’ geometry is one thing, refining it to fabrication level is quite another. Plus, there seem to be big issues in getting offsite construction firms up and running and financially stable.

Image courtesy of KATERRA

Rozmanith has seen this issue from all sides. “The AEC market is harder than the MCAD market, and if there’s one thing I’ve seen more than once, it’s that finance people will underestimate the scope of the problem,” he observes.

“They think it looks simple and then they will burn a lot of money and go out of business. We all watched Katerra burn through a billion dollars. It seems that firms entering this market think that having the manufacturing facility is the first step, when it is the last step. You figure out everything before you go and build the manufacturing facility,” he says.

“I’ve talked with big firms that spent millions on building manufacturing facilities and six months later were saying ‘It’s not running very well, we need help.’ Then we came in, and they couldn’t even tell us what their processes were, because they hadn’t figured them out yet!

Unfortunately, he observes, construction companies get a one and a half times multiple of revenue as a valuation on the stock market, while a tech division can get a thirty times multiple. “Offsite is seen as a way to say they have a tech division, because of that. It’s a really bad idea for construction companies to make software, because it’s not their core competence. A thirty times multiple of zero revenue is not really a very good outcome!”


Find this article plus many more in the March / April 2023 Edition of AEC Magazine

Kreod and Catia: new model architect

BIM has brought benefits, but it did not bring about the revival of the traditional ‘Master Architect‘ role for the profession. With attention to detail, understanding of fabrication and a high-power mechanical CAD (MCAD) system at his disposal, Kreod’s Chun Qing Li feels like he’s getting there

We are often told that history repeats itself. As a species, we seem doomed to keep reinventing the wheel. For the uninitiated, a recent discovery may look like the Next New Thing. For us older folks, those of us who have been around the sun a few too many times, the distinction between what’s ‘new’ and what’s ‘old’ tends to blur. And that, I guess, is why somebody invented marketing!

Architectural design was once a predominantly 2D exercise, accompanied by some physical models. When 2D CAD came along, we merely replicated the process on a personal computer. Then we had BIM, which puts 3D modelling first, to produce drawings and then PDFs. Ultimately, the deliverable has not changed — just our way of getting there.

But today’s BIM systems rarely collect enough detail to pass along to the next stage in the workflow: manufacturing and fabrication.

In other words, a chasm persists between BIM data and manufacturing data. In other forms of manufacturing — think of cars, aeroplanes, consumer goods and so on — a designer’s 3D model is accurate and modelled at 1:1 scale. And it is connected to a digital process in which a number of other systems are involved, in order to generate assemblies (right down to the nuts and bolts), a bill of materials (BoM) and a wide variety of product manufacturing information. Business systems such as enterprise resource planning (ERP) software are integrated too, to produce costings and assess availability of parts.

There may still be 2D drawings on the factory floor, but the production of these is often wholly automated by core CAD systems. And these days, it’s not uncommon to see models get accessed at the point of manufacturing, too.

This begs the question: How might the AEC industry establish a ‘digital thread’ that extends further into the end-to-end process, delivers productivity benefits along the way, and helps firms manage the risks inherent to design and construction?

To date, a small number of architectural and construction firms have either augmented or ditched their BIM tools and adopted modelling software that is more commonly used in manufacturing in order to model at 1:1 scale, with high component detail. In this way, such firms are able to use design information beyond the design phase, to link it to fabrication, and to make it more accessible to downstream processes. In the process, they are in some ways becoming ‘master architects’.

Enabling design-to-construction

One such firm is Kreod, a London-based architectural and transdisciplinary practice, established in 2012 by Chun Qing Li. The firm has built a reputation for the quality of its residential and commercial designs, and within the CAD community, it is particularly acknowledged for its approach to harnessing digital engineering and manufacturing.

Li has, almost single-handedly, promoted the benefits of using high-end manufacturing software in AEC, replacing BIM with 3DExperience Catia from Dassault Systèmes (DS).

Through using the base software and developing in-house applications such as Kreod Integrated DfMA Intelligent Automation workflow (typically shortened to KIDIA), the company prides itself on the accuracy of its models, its bills of materials and, ultimately, the financial success of its sustainable projects. By modelling everything at a fabrication level of detail, Kreod has made great strides in minimising risk.

Kreod models every component in detail, right down to fabrication level, in order to truly understand not just design intent but also constructability, cost and all the connection details

It is also tackling the issue of procurement, at a time when rapid raw material price inflation is contributing to escalating build costs and, in some cases, forcing projects to be put on hold. One of the innovations that the Kreod team has developed is a web service for onboarding clients. This details the materials required for their project, along with live associated costs. In this way, clients stay informed about budget issues and can even make decisions around when to buy certain materials for a job. For the first time, clients can actually tap into the supply chain themselves, as opposed to having to rely on traditional contracts and hoping it all works out.

At the same time, by shunning established BIM tools, Li has seemingly opted to perform the equivalent of climbing the north face of the Eiger. It’s only after talking with him that you get a real sense of what has driven him to bypass the predefined walls, doors and windows of ‘Lego CAD’, and make geometry and manufacturing his key drivers in selecting a design system.


Kreod models every component in detail, right down to fabrication level, in order to truly understand not just design intent but also constructability, cost and all the connection details.

With each project, if the company is using a new supplier, or a new process, project leaders will visit the fabricator and invest time in understanding the fabrication process and limitations that they must consider during the design stages. The level of communication the firm builds up with its trusted suppliers means that design communication is dependable and can be model-based.

A fair amount of experimentation

On our visits to other leading architects, we certainly see a fair amount of experimentation with manufacturing CAD systems. While director of innovation at Aecom, Dale Sinclair was increasingly using MCAD tool Inventor over BIM tool Revit, to model modular projects destined to be manufactured off-site. Sinclair looks to have carried that vision with him to WSP, where he also heads up innovation. Revit was Aecom’s weapon of choice, but the level of detail at which the firm needed to model for fabrication would have led to huge models, impacting Revit performance and potentially making the system unusable for this purpose. By contrast, MCAD tools are optimised to run with models comprising tens of thousands of parts. Very high-end systems can handle even more.

Kreod may have eschewed the comforts of ‘Lego BIM’, opting instead for the certainty of extreme detail – but in the process, it has brought upon itself a great deal of additional modelling work

This approach may not be for the faint-hearted — but it is for the engineering-minded. It also suits those AEC professionals who want to use digital tools to be involved in the whole end-to-end, design-to-build process. The upside for clients, meanwhile, is that firms that model in detail and know a great deal about fabrication costs upfront are better placed to offer a full-service, single-point-of-contact approach, managing everything from design to delivery.

The industry is very slowly waking up to the connection between choice of design tool and project outcomes. At present, that awareness tends to be limited, but it is exactly where the industry needs to focus. And, as discussed, fabrication considerations need to be taken into account early in the design process.

This forward-thinking mindset is not dependent on the size of a firm. It can be seen both at Aecom (50,000 employees) and Kreod (fewer than 10 employees). AEC Magazine contributor and Foldstruct CEO Tal Friedman refers to it as ‘Fabrication Information Modelling’ or ‘FIM’, where designs are created with built-in knowledge of eventual production methodology. Li prefers the tried-and-tested DfMA (design for manufacture and assembly) label but, ultimately, we are talking about the same thing.

The big problem is that there is no commercial turnkey system to provide this. Every manufacturer has varying capabilities. In 2017, Bouygues paid Dassault Systèmes to custom-build a system to automate the stripping of models from Revit into their component parts in order to build an optimised, manufacturable model, with drawings, full costings and lean project management using Dassault Systèmes’ 3DExperience environment. Bouygues is looking to expand this system to include sustainability analysis and optimisations, too.

Maybe this is the way the industry will go, with traditional, federated workflows for some sectors, using off-the-shelf BIM tools, while others adopt bespoke systems to fully automate assembly models and drive fabrication.

Looking at what we have today, along with where we need to go tomorrow, if AEC is to embrace a completely digital end-to-end process, it seems unlikely that today’s 2D-focused BIM tools can evolve to keep pace.

Kreod may have eschewed the comforts of ‘Lego BIM’, opting instead for the certainty of extreme detail – but in the process, it has brought upon itself a great deal of additional modelling work. That said, the more work Kreod does here, the bigger its own library of parts, so there will be payoffs.

It will be fascinating to see what software and services Kreod decides to bring to market in order to assist the industry. From our conversation with Li (see boxout below), it’s clear he likes to have a lot on his plate. In return, he’s giving the industry plenty of food for thought.


What is 3DExperience Catia?

3DExperience Catia is a long name for a big CAD system. Catia is the flagship modelling ecosystem from French developer, Dassault Systèmes. As CAD systems go, the current version, V6, is the Ferrari of the manufacturing CAD world. Indeed, it’s used by Ferrari and its F1 team, as well as Porsche, BMW, and Toyota. In aerospace, both Boeing and Airbus are customers. Catia covers individual part models, assemblies (of parts), very high-end surface modelling, Finite Element Analysis, Structural Analysis, generative design, sheet metal folding, rendering and so on. The design tool is connected to other DS brands, Enovia (collaboration), Delmia (supply chain planning) and Simulia (simulation), amongst others.

The Cleveland Clinic Lou Ruvo Center for Brain Health in downtown Las Vegas, Nevada, by Frank Gehry. Image by Bobby Dagan
Marseille: a Bouygues construction site for an international school complex by architect Rudy Ricciotti

The 3DExperience part of the name relates specifically to its ability to operate beyond the desktop and to work in the cloud, connecting to other parts of organisations with web-based model and business process management tools. This is commonly known as PLM (Product Lifecycle Management) in manufacturing circles.

Catia is not commonly seen in AEC firms. It’s viewed as an exotic choice. However, some exceptional practices are famously associated with its use. These include Frank Gehry, ZHA (Zaha Hadid Architects) and more recently, Lendlease. We’ve also heard rumours that Laing O’Rourke might be experimenting with it as part of the company’s ongoing research into modern methods of construction.

Before Catia, Gehry had trouble getting his buildings built, because contractors would add a big percentage to the estimates, arguing that 2D drawings left too much to the imagination. When he switched to sending Catia models, all bids came in within 1% of each other.

Kreod is thus following in famous footsteps. In doing so, it has opted to build its own layers of functionality to enable the rapid detailed modelling of architectural and construction elements, down to every nut and bolt.

Chun Qing Li is looking to bring all the knowledge his team has amassed in design-to-fabrication modelling to market as an on-demand, online service. Watch this space.


Kreod Olympic pavilion

The Kreod Olympic pavilion (London 2012) was designed to showcase modern design and innovative construction techniques. The unique form of the structure was made possible through the use of cutting-edge CAD technology, enabling it to be manufactured in modular form with minimal waste material. Prefabricated sections were quickly and accurately assembled, ensuring that construction could be completed within a tight timescale before the games began.

Innovative manufacturing processes such as laser cutting, vacuum casting and CNC machining enabled the creation of the intricate details that are a feature of the finished structure. During the build process, precision CNC tools cut individual components from a range of materials, including aluminium, stainless steel and composites. These were then hand welded together to create frames for a lightweight shell covered with wood panelling. The high accuracy of the modelling and the CNC-cut parts meant that each part fitted perfectly, despite the wide range of variances involved.


Chun Qing Li describes his approach to creating the Kreod Pavilion as follows: “Our vision was to provide an exciting showcase for some of London’s most dynamic designers working in 3D-printed structures; making full use of contemporary technologies, while still meeting all relevant safety regulations.”

Kreod’s Olympic Pavilion demonstrates the potential of combining modern design and manufacturing technologies with digital building processes, offering a practical example of how custom structures can be created faster.


Q&A with Chun Qing Li of Kreod

Kreod’s Chun Qing Li (pictured right)

AEC Magazine sat down with Chun Qing Li, founder & CVO of Kreod, to learn more about his non-conformist approach to BIM workflows and discuss his views on how and why the industry needs to change.

AEC Magazine: While there are a number of mature BIM applications out there, you have chosen to go with a CAD system more popular in high-end manufacturing. That’s meant developing your own layers of functionality — so what on earth made you do that?

Chun Qing Li (CQL): I feel that, as architects and designers, we have been kind of hijacked by the software companies. Because we are creative people, we have different mindsets to engineers, and the software companies develop tools created by software engineers who don’t necessarily understand how we work. We always have to bend ourselves and learn their logic. In my experience, it’s counterproductive. The tools we have as off-the-shelf solutions for AEC just don’t make any sense to me. For example, with BIM, we spend a whole load of time modelling a beautiful, 3D data-driven design, but ultimately end up delivering 2D PDFs. It just defeats the whole purpose.


AEC Magazine: But how much of that is down to the technology failing to map to dumb contractual constraints and deliverables?

CQL: Everything’s driven by the principal contractors. While architects may be using Graphisoft [Archicad], Revit and other interesting packages, in construction, they are still beholden to 2D drawings. That’s the thing, and the contractors mainly use 2D packages and specifications. I feel a sense of urgency that we need to change this, but obviously my company is very small, and I don’t have a marketing budget to educate the industry! When I started developing the software, I tried to convince developers and contractors that there was a better way of doing things, but they said that they didn’t want to be guinea pigs. So, I started my own construction company. Now, we are building things and we are our own guinea pigs!

We started with our own architecture firm, and from our in-house development work, we launched a multi-tech start-up to share our solutions, the first of which was an intelligent automation tool. This will eventually enable us to bring to market a complete platform that will integrate design with procurement and supply chains. This system will be able to get prices immediately, instead of relying on a QS (Quantity Surveyor) benchmark estimate. When we launch our platform, clients will be able to access our system and pick, choose and buy products from the catalogue.

As to contracts, based on all our in-house process development, we have introduced what we call ‘open book contracts’, so that when we talk to a client, we are incredibly open with materials costs, down to the brick, as well as all the preliminary costs and even our profit margin.


AEC Magazine: You sound very frustrated looking at the AEC workflows that have been adopted and codified?

CQL: The RIBA Sequential Work Methods have a linear format which necessitates sequential focus on phases, one at a time. Each stage must be completed before the next is initiated, leading to a drawn-out development process with intensive design alterations, delaying projects and having financial implications for all involved. Linear workflows, by their very nature, build in risk, not eliminate it. From a commercial perspective, I understand that the method makes it easier to break down payment phases, but I think you can simplify the whole workflow.

In manufacturing, they have refined the process. They produce 3D models that are manufacturing-ready. They do assembly sequencing, as in how you put things together, while we as an industry, we produce hundreds or thousands of drawings, bundle them up and throw them over the fence!

Our specific workflow, which we call KIDIA (Kreod Integrated DfMA Intelligent Automation) is specifically designed to eliminate the need for repetitive work. It enables an early and accurate calculation of cost by automatically creating the necessary manufacturing/fabrication code and bill of materials (BOM). KIDIA not only expedites RIBA Stages 2, 3 and 4, but can also potentially save up to 90% of the time spent doing it, all while eliminating or reducing risk, which can be seen in our contractor quotes.


 

AEC Magazine: Some may say that it’s extreme, opting for an MCAD system versus a BIM platform that was custom made for architects?

CQL: I think that the whole process has to be integrated on a single platform. Otherwise, you end up using all sorts of applications: Rhino, Grasshopper, SketchUp, AutoCAD, Excel spreadsheets and Revit. And the best the industry could do to join them up was IFC, which is a lowest common denominator format.

I guess, like most of us, tools-wise I have been on a journey. I started off using MicroStation and Generative Components (GC), then moved on to Rhino and Grasshopper. Then I started to get into geometry rationalisation and teamed up with a professor of mathematics from ETH Zurich who did it all in C++, no CAD system required!

Experience is essential. On the Olympic Pavilion project, I was the lead designer, the client, the contractor and the project manager! I had to do everything, apart from structural engineering. After rationalising the geometry, I built what I called at the time a ‘building manufacturing model’, though I found out later it’s called DfMA. I found a second-hand robotic arm and started experimenting with cutting and assembling wooden frames for the pavilion. For the metal frame, I collaborated with a steel fabricator and took a highly collaborative approach, which was great. I learned so much.


AEC Magazine: How on earth did you fund the Pavilion as a personal project?

CQL: I pitched everyone! I had a full-time job. I think I spent £2,000 on stamps, writing to a lot of people and companies. I just sold the idea and made it possible!


AEC Magazine: How did you get involved with Dassault Systèmes and the 3DExperience Catia platform?

CQL: Frank Gehry was the pioneer. He developed Gehry Technologies, with his own Catia tools for architectural designs. For us, this is a financial decision, a commercial decision, to go and find the best platform which can go all the way to fabrication and then develop on top of it. The 3DExperience platform is very powerful and can handle lots of complex information. It’s why it’s so popular in aerospace and automotive.

Initially, when I first contacted DS around 2010, they said they didn’t really work with one-man bands or students. They worked with multi-billion dollar engineering and aerospace firms and that was the end of the conversation! Later, things bounced back, especially as they created an on-boarding programme for start-ups and started to see potential for their applications in AEC.


AEC Magazine: So how do you think this is going to play out? AI is starting to appear at the edges and some systems will take polylines and deliver fully detailed 3D models, with drawings for fabrication. Are we going to see more design/build firms? Will we need fewer architects?

CQL: I use the term ‘Digital Master Builder’ from the traditional meaning, dating back to the mid-sixteenth century, where architectural designers were closely involved in the whole construction process.

We have self-restricted the role of architects to just delivering design intent. Back in the day, architects used to lead the process, but now we lack broad industry competence and there is a general lack of interest in understanding the construction side of the business. There’s a shift in the way we work, where the principal contractor is now playing a major role in the entire process.

Through our plans for development, I want to give more power to the architects. They will be the designers that will understand the costs. That’s how we sell our schemes. We need to widen our spectrum, not just be the producers of couture drawings with beautiful line weights. Buckminster Fuller famously asked Sir Norman Foster how much his building weighed. We need to be more like Buckminster Fuller, who was obsessed with the relationship between weight, energy and performance — of “doing the most with the least”. And, of course, customers want to know the cost. We need better tools to do this.


AEC Magazine: You have started a lot of different firms with different aims. What can you tell us about them?

CQL: There’s a reason we started these companies, for architecture, software development, and especially construction, because in each, I need to build, to demonstrate and deliver. From our experience, we can convince more customers to use our technology or consultancy. That’s the notion. Ultimately, we have to do something to better serve architects and developers and the key is to integrate the whole process. To provide more transparency, with design costs understood, which means there will be less need for damage control at critical points in any build.

The post Kreod and Catia: new model architect appeared first on AEC Magazine.

Simplebim: working with structured data
https://aecmag.com/bim/simplebim-working-with-structured-data/ (Wed, 01 Feb 2023)
Open BIM and confronting the challenges of working with structured BIM data

We caught up with Maria Lennox, who heads up Datacubist’s BIM services, to dig a bit deeper into Simplebim and talk about the challenges of working with structured BIM data on big projects

Maria Lennox has an impressive CV. Prior to joining Datacubist in February 2022 to head up the company’s BIM services, she spent several years as an architect and several more in large construction companies. Her first move was to NCC Sweden, where she was given the time to work out how construction companies could make the most out of BIM. Lennox then became BIM director at SRV in Finland.

At NCC, Lennox first assumed that there wasn’t a problem with BIM data – teams make models, other firms use the models – but the reality was a lot more complex. With her small team of BIM experts, she developed all kinds of use cases for cost estimators, design managers and site personnel, together with guidelines for designers to meet the demands of construction companies.

Maria Lennox

After a while, the team realised that it was a hopeless effort, providing guidelines and support to engineers and architects and asking them to add information for the contractor’s use cases. As Lennox explains, “You can’t expect people to add information they are not responsible for. They are not interested in data that a site engineer needs to make a schedule. Even if you ask them to provide the concrete information for the model state, and they do it once, it’s never updated again.

“It’s exactly the same thing with everything else, like classifications or making it possible to pick up an object for purchasing, or cost estimation, it’s always different kinds of classes or needs. You just can’t rely on it.”

In 2016, Lennox started using Simplebim on projects and realised that her team was able to take control of the data inside of the models, so instead of asking people or designers to fix their models or add the extra information, the decision was the reverse – to ask for as little as possible and minimise the expectations!

Instead of trying to maintain 15 to 25 common data fields, the team managed with two or three. Based on that concept, her team added everything else for their own use cases. By creating and handling their own data, they could keep control of the model information in the IFCs. Lennox refers to this process as having established internal ‘BIM data factories’ to automate the maintenance of IFC information layers. In a typical month at SRV, Lennox explained, there would be between 500 and 900 incoming IFC models from all disciplines, handled automatically by the system, creating models that made sense to the construction teams.

In today’s BIM world, even with BIM standards and quality checking with Solibri, backed up by contractual obligations and national standards, this is still a woeful process. The easy answer would be to just accept that a perfect world cannot be achieved. However, there are tools that can make sense of the madness, as Lennox explains, “It’s a huge advantage, because, after a little work, you’re able to actually control the data yourself.”

Data wrangling

Simplebim offers both manual and automatic methods of wrangling data that is spread out over lots of different kinds of property sets, or of filtering out duplicated information. You simply create a new standard property set in Simplebim and standardise within that fresh layer, obviously checking for missing critical data. Then let the software pick up quantities.

The first step is to standardise the information properties, enrich the values and select groups of objects to define as assemblies for use downstream. Obviously, if you have 900 models a month coming in, doing this by hand isn’t possible, even with a team, so Simplebim has a scripting capability which automates these steps.
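
To give a flavour of the kind of mapping such a script performs, the sketch below uses the open-source ifcopenshell library (not Simplebim’s own scripting, which is configured inside the product) to read walls from an IFC file and pull a fire rating from whichever source property set happens to carry it, so downstream users see one agreed field. The property-set names and file name are examples only.

# Sketch only: harvest a 'FireRating' value from whichever property set a
# designer happened to use, so downstream users see one standard field.
# Requires the open-source ifcopenshell package; pset names are examples.

import ifcopenshell
import ifcopenshell.util.element as element_util

# Candidate (source pset, property) pairs seen across incoming models
FIRE_RATING_SOURCES = [
    ("Pset_WallCommon", "FireRating"),
    ("ArchiCAD_Properties", "Fire rating"),
    ("Custom_FireData", "FR_Class"),
]

def standardised_fire_rating(wall):
    psets = element_util.get_psets(wall)          # {pset name: {prop: value}}
    for pset_name, prop_name in FIRE_RATING_SOURCES:
        value = psets.get(pset_name, {}).get(prop_name)
        if value not in (None, ""):
            return str(value)
    return None                                   # flag as missing critical data

model = ifcopenshell.open("architect_model.ifc")
for wall in model.by_type("IfcWall"):
    rating = standardised_fire_rating(wall)
    print(wall.GlobalId, rating or "MISSING FireRating")

Simplebim performs this kind of mapping through its own configuration rather than hand-written code, but the logic is the same: harvest values from known source locations into one standard layer and flag whatever is missing.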

On Dropbox or OneDrive, all you need is two folders: an ‘In’ folder and an ‘Out’ folder. By dragging and dropping IFCs into the ‘In’ folder, Simplebim will run its automation, enriching the IFC and saving the updated file in the ‘Out’ folder. This could be the architectural, structural, electrical, plumbing or MEP models.
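
A minimal sketch of that drop-folder pattern is shown below. The enrich() function is a hypothetical placeholder for whatever enrichment step your toolchain provides (in Datacubist’s case, a Simplebim automation); it is not Simplebim’s actual API.

# Sketch of a drop-folder workflow: IFCs arriving in 'In' are enriched and the
# result is written to 'Out'. The enrich() body is a hypothetical placeholder.

import shutil
import time
from pathlib import Path

IN_DIR = Path("In")
OUT_DIR = Path("Out")

def enrich(src: Path, dst: Path) -> None:
    """Placeholder for the real enrichment step (e.g. a Simplebim automation)."""
    shutil.copy2(src, dst)   # here the file is simply copied through unchanged

def watch(poll_seconds: int = 30) -> None:
    IN_DIR.mkdir(exist_ok=True)
    OUT_DIR.mkdir(exist_ok=True)
    while True:
        for ifc in IN_DIR.glob("*.ifc"):
            enrich(ifc, OUT_DIR / ifc.name)
            ifc.unlink()     # remove the processed file from the 'In' folder
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()

A production setup would add logging, error handling and a report of any models that fail the enrichment step.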

I believe that in construction companies, there is a lot of potential waste inside of BIM models because people don’t know how to use it, it’s too difficult

Models can then be loaded into Simplebim for further refinement, such as making sections, creating new groups and building even more sophisticated ‘chain groups’.

In an ideal world, Simplebim advises you always to create the same groups in your models, because then they become much easier to use. You can, for example, always add a ‘pre-cast elements’ group, which makes life much easier for the person scheduling the pre-cast elements.

“From my perspective, I believe that in construction companies, there is a lot of potential waste inside of BIM models because people don’t know how to use it, it’s too difficult,” explains Lennox. “The data itself is very hard and it changes every single time. When you move to another project, you get the files from somebody else, knowing the data fields may change. You shouldn’t need to be an actual expert, but should be able to be the site engineer. We just need to be able to use the model quick and dirty, fast, and move on instead of being a BIM expert. And that’s something we are trying to make possible with this [Simplebim] to make it as easy as possible for every single person.”

Be like water

It’s at this point that I am oddly thinking about Bruce Lee and his philosophy about fluidity and being adaptable to change – “Empty your mind, be formless. Shapeless, like water. If you put water into a cup, it becomes the cup. You put water into a bottle, it becomes the bottle. You put it in a teapot, it becomes the teapot. Now, water can flow, or it can crash. Be water, my friend.” It’s much better with Lee’s delivery.

However, it’s important to realise that a lot of the problems caused in BIM workflows, especially in IFC, are to do with errant labelling and missing data. By accepting the nature of the humans who created the models being hosepiped in your direction, Simplebim can turn the roughest torrent of non-aligned IFCs into calm water by automating naming, tagging, and grouping of the models, and building that logical layer.

Using the Simplebim Hub and scripting is an important capability for handling large numbers of models with a high transaction level and getting dependable standard models out the other end. It centralises the data wrangling and the automation takes out the hard work.

BIM-based projects are coming with ever more overwhelming delivery documentation and model requirements. Some are over 300 pages thick and written by people who have had more to do with the theory of BIM than the practicality of what’s actually of use. Having ridiculously high deliverables is simply going to cost the project more money. The problem is that these are coming from customers advised by academics. The answer is to adopt an open standard deliverable and be like water.


This is the second of a two-part article on Simplebim.

Read part 1 here


Simplebim at Hinkley Point C


The construction of Hinkley Point C Nuclear Power Plant in Somerset, England is very much in vogue, with the country caught short in the production of its own electricity during the recent energy crisis. It is one of the most important infrastructure projects in the UK at the moment.

The new nuclear power plant will deliver 3,200 MWe, enough to power six million homes. While announced in 2010, construction didn’t start until 2017 and suffered further delays because of Covid-19; the latest estimate is that it will cost £26 billion and be completed by 2027.

After extensive testing, Bylor (a joint venture between Bouygues Travaux Publics (TP) and Laing O’Rourke) chose Simplebim to manage the build properties of the estimated 30,000 concrete pours over the 50 million reinforcement bars in the project.

Despite having thousands of IFC files, Simplebim automates the cleaning up, mapping and structuring of the data. This lets the Bylor team accurately forecast quantities of materials. And by mapping data to activities in Primavera P6, they can associate material requirements with time. This has allowed the team to more effectively plan resources and manage supply chain and logistical issues. This is important as it also maps to the way that Bylor gets paid, which is by the number of walls, slabs, soffits etc. that it produces. Being able to quantify and track each job feeds into billing, as well as quality management, which on this kind of facility is exceptionally important.
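
The underlying idea of tying model quantities to schedule activities can be sketched very simply. The code below is not Bylor’s system; the element data, activity IDs and dates are invented, but it shows how concrete volumes roll up per Primavera activity so that material demand can be read against time.

# Sketch: roll up element quantities against schedule activities so material
# demand can be planned over time. All data here is invented for illustration.

from collections import defaultdict
from datetime import date

# Elements exported from the model, already tagged with a P6 activity ID
elements = [
    {"id": "W-001", "type": "Wall", "activity": "A1010", "concrete_m3": 42.5},
    {"id": "W-002", "type": "Wall", "activity": "A1010", "concrete_m3": 38.0},
    {"id": "S-101", "type": "Slab", "activity": "A1020", "concrete_m3": 155.0},
]

# Activity names and dates as they would come from the Primavera P6 schedule
activities = {
    "A1010": {"name": "Pour walls, zone 1", "start": date(2023, 3, 6)},
    "A1020": {"name": "Pour slab, level 1", "start": date(2023, 3, 20)},
}

demand = defaultdict(float)
for el in elements:
    demand[el["activity"]] += el["concrete_m3"]

for act_id in sorted(demand, key=lambda a: activities[a]["start"]):
    act = activities[act_id]
    print(f"{act['start']}  {act['name']:<20} {demand[act_id]:7.1f} m3 concrete")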

By holding all the engineering data in IFC, the project data is efficiently collated and in a format which can be repurposed to match the daily activity on site. Most BIM tools model objects and entities, which might match the end-product but may be the result of many concrete pours or discrete construction processes. Simplebim provides Bylor with the tools to break the model down to a granular level to define the data for how it will be constructed. In addition, if any changes need to be made due to delays or problems during construction, these can easily be highlighted and updated. All activities within Simplebim are recorded, so there’s a comprehensive history log that can help analyse past performance and enable teams to learn from past experience.

“This is not necessarily about doing something new. It’s about making it more efficient, more scalable, less resource intensive, and ultimately more flexible,” explains Terry Parkinson, senior digital engineer, Bylor. “Simplebim allows us to adjust and tweak the information to support our processes. It gives us a level of control that we have never had previously.”

The post Simplebim: working with structured data appeared first on AEC Magazine.

Parametric design keeps London Kings Cross project moving
https://aecmag.com/computational-design/parametric-design-keeps-london-kings-cross-project-moving/ (Wed, 18 Jan 2023)
Arup went all-in on parametric design and BIM for a mixed-use complex in London’s King’s Cross

Global engineering firm Arup went all-in on parametric design and BIM for a mixed-use complex in London’s King’s Cross neighbourhood. With the Tekla-Grasshopper Live Link, structural work could continue at pace while design-milestone approvals were handled in parallel

The space to the north of London’s King’s Cross railway station has been undergoing an urban transformation for more than 15 years. Residential apartments, offices, a shopping complex and more have been built in and around the area’s historical buildings, drawing Londoners to a part of the city that people used to avoid.

Global engineering firm Arup has been involved in much of the work, including designing the structural elements for one of the final development projects: King’s Cross R8. The project consists of two 13-storey buildings joined by a podium, combining affordable social housing with rental space for small business owners.

A key challenge on the project was the location, with the development running immediately adjacent to three brick tunnels constructed in the 1700s. All the trains going in and out of King’s Cross station run through these tunnels, which are sensitive to movement. As a result, every design milestone for anything constructed within a certain proximity to the station had to first go through an approval process with London’s rail-network operator.

Such approvals can understandably take time and potentially knock a project off schedule. But thanks to the use of parametric design and BIM – specifically through the Tekla-Grasshopper Live Link – the team from Arup was able to keep the work moving.

Parametric design, or data-driven design, is guided by a set of interconnected parameters and rules, defined and input by the engineer, with these parameters then generating or controlling the design output in a parametric BIM modelling tool. Through the use of tools such as Grasshopper, engineers can benefit from the ability to quickly run various design iterations to optimise the structural design, as well as aiding the creation of geometrically complex forms. What’s more, with all parameters and data interconnected, the change management process is also automated and simplified.

Arup senior structural engineer and project lead, Gordon Clannachan, explained: “We had to produce a number of drawings for the network-rail approval process. Although these needed to be done at an earlier stage than we would typically do on projects, they allowed the client to fast-track the approvals process prior to the main contractor starting on site. Using Tekla to automate the BIM model was essential for this work. As the design scheme evolved, we were able to respond very quickly thanks to Tekla’s automation-enabling tools.”

The team put parametric design at the heart of all the project’s workflows, pushing or pulling data and geometry to and from Tekla Structures to improve the efficiency of everyday tasks. The engineers also created a script that automated the calculation of the loads bearing down on the concrete columns and walls. This helped to further optimise the design and reduce the amount of concrete in the building’s foundations.
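
As a rough illustration of what such a load take-down script does (this is not Arup’s script, which ran inside Grasshopper against the Tekla model, and every figure below is invented), the sketch accumulates the factored floor load carried by a single column from roof to foundation.

# Sketch of a column load take-down: accumulate the factored floor load
# carried by one column from roof to foundation. Figures are illustrative only.

FLOORS = 13                 # storeys above the column base
TRIBUTARY_AREA = 36.0       # m2 of slab supported by the column per floor
DEAD_LOAD = 7.5             # kN/m2 (slab self-weight, finishes, services)
LIVE_LOAD = 2.5             # kN/m2 (residential imposed load)
GAMMA_G, GAMMA_Q = 1.35, 1.5    # partial factors on dead and live load

def column_loads():
    """Cumulative factored axial load (kN) just below each floor, top down."""
    per_floor = TRIBUTARY_AREA * (GAMMA_G * DEAD_LOAD + GAMMA_Q * LIVE_LOAD)
    loads, running = [], 0.0
    for _ in range(FLOORS):
        running += per_floor
        loads.append(running)
    return loads

if __name__ == "__main__":
    for level, n_ed in enumerate(column_loads(), start=1):
        print(f"{level} floor(s) above: N_Ed = {n_ed:8.0f} kN")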

Arup also used the Tekla-Grasshopper integration to develop their own scripts for calculating the embodied carbon footprint of all structural elements. The Tekla Organizer tool was then used to set up templates to export the embodied carbon of every element by material, and for different embodied-carbon stages. These calculations were reported against targets that have been set for 2030 and beyond.

“We have a responsibility to take ownership of the embodied carbon in the structures we design and to use our influence to reduce the carbon impact of our projects,” said Gordon. “If you really want to influence carbon-related decisions, then you need to automate these calculations. The live-link integration between Tekla and Grasshopper is great for this too. We built the carbon factors into the Grasshopper script and parametrically linked the data.”
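
The underlying arithmetic is straightforward once carbon factors are attached to materials: quantity times density times a factor per material, summed per element. The sketch below is a generic illustration of the cradle-to-gate (A1-A3) portion; the factors, densities and quantities are placeholders, not Arup’s values or Tekla Organizer output.

# Sketch: embodied carbon (A1-A3) per structural element as volume x density x
# carbon factor. Factors below are illustrative placeholders, not design data.

CARBON_FACTORS = {            # kgCO2e per kg of material (illustrative)
    "C32/40 concrete": 0.12,
    "Reinforcement": 1.99,
    "Structural steel": 2.45,
}
DENSITIES = {                 # kg per m3
    "C32/40 concrete": 2400,
    "Reinforcement": 7850,
    "Structural steel": 7850,
}

elements = [                  # quantities as they might be read from the model
    {"name": "Core wall", "material": "C32/40 concrete", "volume_m3": 310.0},
    {"name": "Core wall rebar", "material": "Reinforcement", "volume_m3": 4.7},
    {"name": "Transfer beam", "material": "Structural steel", "volume_m3": 2.1},
]

total = 0.0
for el in elements:
    mass = el["volume_m3"] * DENSITIES[el["material"]]
    co2e = mass * CARBON_FACTORS[el["material"]]
    total += co2e
    print(f"{el['name']:<16} {co2e / 1000:8.1f} tCO2e")
print(f"{'Total':<16} {total / 1000:8.1f} tCO2e")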

“I always try to look for ways to do each project better than the one before, rather than just defaulting to repeating the same methods. Pushing automation into our workflows makes us more efficient in how we deliver projects and respond to changes. The structural team believed in what we were doing and put a lot of hard work into developing these tools, which we can now use on the next project too.”

The post Parametric design keeps London Kings Cross project moving appeared first on AEC Magazine.

Bentley Systems iTwin ‘phase two’
https://aecmag.com/digital-twin/bentley-systems-itwin-phase-two/ (Wed, 30 Nov 2022)
Bentley continues its drive to lead the digital twin infrastructure market with the release of new iTwin products

In mid-November, the first post-Covid, in-person gathering on all things Bentley Systems took place at YII in London, where the company continued its drive to lead the digital twin infrastructure market with the release of new iTwin products

It is almost four decades since Bentley Systems was founded — or 38 years, to be precise. It’s almost as long since the 1986 launch of MicroStation for DOS, and yet MicroStation continues to be the base platform for the infrastructure software company’s wide range of vertical products. These cover areas as diverse as plant, rail, roads, bridges, dams, telecom, electricity, and water systems.

Having achieved its long-awaited Initial Public Offering (IPO) in 2020, the company recently passed $1 billion in annual revenue, and while its products remain emphatically desktop-based on Microsoft Windows, the Microsoft Azure cloud is now an option, too, when that option makes sense for customers.

Over the past five years, Bentley Systems has been on a mission to persuade infrastructure owners and operators to invest in digital twins of their assets using its iTwin technology, on the promise of better lifecycle management for those assets, as well as to extend their use of asset information, whether it is designed in CAD or modelled from the field using reality-capture technology.

That message seems to be getting across, especially with customers in Asia, with entrants for the company’s Year in Infrastructure (YII) 2022 awards using the iTwin technology significantly outnumbering those from other regions. As CEO Greg Bentley pointed out, 50% of the world’s infrastructure projects are currently happening in Asia.

These Asian companies are using digital twins to address three critical areas, according to Greg Bentley. The first is ‘digital context’, the ability to use and deploy 3D and 4D technologies to capture the physical reality of an asset being twinned. The second is ‘digital chronology’, which focuses on maintaining a database or record throughout the lifecycle of an asset to create what Bentley refers to as an ‘evergreen’ digital twin. The third area is ‘engineering technology’, the ability to model and simulate a facility over the lifetime of a project in order to optimise its efficiency.

iTwin Capture, for capturing, analysing, and sharing reality data. Image courtesy of Bentley Systems

But this focus on digital twins is by no means confined to Asia. According to a survey of infrastructure CEOs conducted at the event by AEC Advisors, when asked about their main concerns over the next three years, it was clear that respondents are worrying less about 2D drawings and more about 3D models. Almost one in five said their companies were evaluating and deploying digital twins.

This interest comes at a time when many customers are having to do more with less, as chief operating officer Nicolas Cumins explained. Infrastructure investment is on the rise, to support economic recovery, ensure energy security and tackle climate change, he said. But such projects must play out against a backdrop of extreme labour and skills shortages, which are causing considerable work backlogs and, in some cases, forcing companies to turn down projects.

One way that Bentley can help, said Cumins, is to give firms the digital tools they need to mine data held in many different file formats and in hard-to-access data silos. Hidden away, that data goes stale and ‘dark’, he said.

CIOs estimate that around 70% of the data produced by their organisation is never used, and in the infrastructure sector, the picture may be even worse — as high as 95%, according to Bentley Systems. If only 5% of an infrastructure organisation’s data is being analysed to derive actionable insights from it, then decision-makers are likely making underinformed decisions on critical infrastructure on a regular basis.

Bentley’s goal, then, is to make this ‘dark data’ more accessible, more usable and available to more people.

Technology evolution

From a Bentley perspective, the significant interest in digital twins demands a fundamental shift from files to databases and thus a fundamental change in software architecture and its ability to support very large digital twin models. Both Greg Bentley and his brother and chief technology officer Keith Bentley touched on this, while stressing that files continue to be useful, logical and here to stay.

At the same time, Keith Bentley explained to attendees how the company’s view of digital twins has changed and matured since the 2018 launch of iTwin. Back then, he said, “I thought that the prospect of digital twins would add more value than all the progress that had been made since the beginning of Bentley Systems, back at the beginning of the PC. I thought that the opportunity around digital twins was a once-in-a-lifetime opportunity and I was extremely excited.” He is still excited, he said.

From the start, he added, openness was a key theme for the product, which is why iTwin was made open source early on. In that way, he said, customers “can use the iTwin platform without paying a licence to Bentley Systems at all, downloading the entire source code to the iTwin platform.” That’s still true today, he says, and now the company has plans to create a lot more open-source projects around iTwin.

Producing a model to generate dumb drawings may well be a means to an end today, but Bentley figures that there are many more benefits to be had in talking to and repurposing a virtual model

In ‘Phase Two’ of the company’s iTwin initiative, the company will work harder to help users understand how iTwin can assist them in their day-to-day work and integrate iTwin services into the desktop products they use. That won’t mean replatforming, reimplementing or reprogramming of existing products, he stressed: “They are what they are.”

In other words, customers can move to iTwin-enabled versions of the core products, without losing any of the capabilities to which they are accustomed. But they will improve in three key ways, he promised: first, in the input to those products; second, in their operation; and third, in their output.

Building on reference files, MicroStation will be able to add a reference iTwin — not only files on a desktop or server, but also in the cloud – and the iTwin itself can connect to many other sources. Bentley Systems is also working on incorporating interactive tools like Slack, Microsoft Teams and Zoom with MicroStation, for collaborative design sessions. Finally, digital twins will be updated directly by design applications, not via some convoluted check in/check out document management system.

“Phase Two is about empowering our existing user base to become digital twin natives,” said Bentley. In the long term, he promised, “nobody will be thinking about their role in the project. They will be thinking about how they contribute to everything that everybody else is doing at the same time.” The next release of MicroStation, due in 2023, will be powered by many new iTwin features, he said.

New iTwin Products

As mentioned, the iTwin platform was officially launched in 2018. While this built on the open iModel database work that Bentley had done in previous years, iTwin provided a host of services on which to build digital twins. There was one main omission: applications to ease the creation and management of twins. Four years later, at YII 2022, we saw the first three iTwin applications launched: iTwin Experience, iTwin Capture and iTwin IoT, which are all available immediately.

iTwin Experience is a new cloud product for owners and operators which gives them ‘a single pane of glass overlay’ to their engineering, operational and enterprise data. It enables users to visualise, to query and to analyse critical infrastructure data at any level of granularity, at any scale, all geospatially referenced. It will also be used by engineering firms as a platform to offer their own digital twin solutions to their owner/operator clients by adding their proprietary algorithms.

iTwin Capture, which has evolved from ContextCapture, will be used to create 3D models of existing infrastructure assets derived from photographs or point clouds with engineering-level precision. These will be used in engineering workflows, leveraging artificial intelligence and machine learning to automatically identify, locate, and classify reality data.

Adobe is already using iTwin Capture as part of its Substance 3D solutions, to help designers leverage their own pictures to create photorealistic 3D environments. When it comes to infrastructure, the majority of digital twins will be derived by capturing existing assets as a reality model, not a BIM model. This makes iTwin Capture probably the most important on-ramp for creating infrastructure digital twins.

iTwin IoT does what it says on the tin, enabling the seamless incorporation of Internet of Things (IoT) data from sensors and monitoring devices. IoT devices are being used increasingly in both construction and operations for real time safety and condition monitoring, to measure and visualise any change in the environment of an infrastructure asset, to look for changes to the structural condition and to know when to perform repairs.

Dustin Parkman, VP of Mobility, gave an example here of a workflow that focuses on an elevated road section. In a single environment, he brought in a GIS map and a modelled asset created via a reality capture of the actual structure. BIM information was then added to this, and GPR (Ground Penetrating Radar) data overlaid to identify pitting and to assess hydroplaning risks on the bridge as captured. Subsurface data was also added and IoT sensor feeds were placed for real-time performance analysis. AI was then run over the photogrammetry of the section, to assess visible cracks, rust and critical deterioration to the structure.

New tech, new insights

Other executives speaking at the event also focused on the integration of different types of data to inform more insight led decision-making.

For example, according to Julien Moutte, vice president of technology, Bentley Systems is creating a sensor data service to manage connectivity to smart devices and analyse the data they collect, brought in from Azure IoT Hub or Amazon IoT Edge.

iTwin IoT for acquiring and analysing sensor data. Image courtesy of Bentley Systems.

The company’s experience with integrating enterprise systems, such as enterprise resource planning or asset management systems, is enabling it to create an enterprise data federation service, to expose a simple yet secure API to connect customers’ enterprise data to their digital twins.

Bentley also has a new mesh export service, which will enable digital twins to appear in the Metaverse and in game engines. It will use the best format depending on output, whether that is USD for Nvidia Omniverse, DataSmith for Epic Games’ Unreal Engine, glTF for Unity, and many more.

The problem with digital twin data, Moutte pointed out, is that it is usually very large and any changes require a complete reload into these game engines. The solution to this challenge comes in the form of 3DFT (3D Fast Transmission). The technology uses streaming, meaning users no longer download entire models, only what is visible in the viewport.
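
Conceptually, view-dependent streaming boils down to testing which chunks of a tiled model fall inside the current view volume and requesting only those. The sketch below illustrates that general idea with axis-aligned bounding boxes; it is not how 3DFT itself is implemented, and the tile data is invented.

# Generic sketch of view-dependent streaming: only tiles whose bounding boxes
# overlap the current view volume are requested. Not Bentley's 3DFT internals.

from dataclasses import dataclass

@dataclass
class Box:
    min_xyz: tuple
    max_xyz: tuple

    def overlaps(self, other):
        return all(self.min_xyz[i] <= other.max_xyz[i] and
                   self.max_xyz[i] >= other.min_xyz[i] for i in range(3))

# Model split into streamable tiles, each identified by an ID and a bounding box
tiles = {
    "tile_00": Box((0, 0, 0), (50, 50, 30)),
    "tile_01": Box((50, 0, 0), (100, 50, 30)),
    "tile_02": Box((0, 50, 0), (50, 100, 30)),
}

def visible_tiles(view_volume):
    """Return the IDs of tiles the client should fetch for this view."""
    return [tid for tid, box in tiles.items() if box.overlaps(view_volume)]

if __name__ == "__main__":
    camera_view = Box((40, 10, 0), (90, 40, 20))   # simplified view volume
    print("Fetch:", visible_tiles(camera_view))    # -> ['tile_00', 'tile_01']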

That’s a wrap

At Bentley Systems, digital twins are the new BIM. Now that’s a loaded sentence if ever there were one, so I will explain further. Producing a model to generate dumb drawings may well be a means to an end today, but Bentley figures that there are many more benefits to be had in talking to and repurposing a virtual model. More benefits, in fact, than today’s somewhat limited workflows support.

The vast majority of BIM models, after all, are created to produce drawing sets and then not reused. Bentley wants to drive that digital thread on throughout the lifecycle, and digital twins is the next destination after adopting a modelling-based view.

This is a technology approach where Bentley Systems enjoys an impressive industry head-start. Autodesk’s Tandem, for example, only came out last year with an embryonic feature set, limited to dealing with BIM twins from Revit.

With that in mind, the three iTwin applications that have been released by Bentley look set to simplify and lower the barrier to entry for more resistant customers, enabling them to experiment with the iTwin platform.

Bentley’s vision makes a lot of sense of its 2015 acquisition of Acute3D, which led to the ContextCapture capabilities that allow customers to ‘grab’ data relating to existing real-world assets, ready for its inclusion in digital twins. After all, the vast majority of assets don’t currently have a corresponding BIM model at the ready to drive their digital twin, so most of them will still need to be captured by drones using photogrammetry and some laser scans.

But what about Bentley customers with no plans at all to get into ‘twinning’? Here, there is good news, because the new iTwin features due to arrive in Bentley’s desktop software portfolio will still benefit existing workflows, extending their connectivity and collaboration capabilities. And, if such customers do feel the need to explore twinning further down the line, every application will be ‘twin ready’ in the coming years.


Main image: iTwin Experience for visualising and navigating digital twins. Image courtesy of Bentley Systems.

The post Bentley Systems iTwin ‘phase two’ appeared first on AEC Magazine.

Artificial Intelligence (AI): the coming tsunami
https://aecmag.com/ai/ai-the-coming-tsunami-architecture/ (Mon, 24 Oct 2022)
We explore the potential impact of AI on architectural design

While we see design software marginally improve year on year, there has been growing unrest at the pace/scale of improvements. Questions have been raised about how well BIM workflows map to how the industry actually works. Martyn Day looks at the potential impact of artificial intelligence on architecture

As a society, living in a technological age, we have become incredibly used to rapid change. Sometimes it feels like the one constant we can rely on is that everything will change. For millennia humankind lived in caves, scrawling drawings on the walls. The Stone Age was 2.5 million years long, then came the Bronze Age and, with it, urbanisation, which lasted 1,500 years. The first Industrial Revolution lasted just 80 years (1760 – 1840). Before we reached our current, digital age, the Wright Brothers perfected powered flight and just 66 years later, our species had escaped Earth’s gravity, traversed the vacuum of space and landed on the moon. We are making advances in ever shorter timeframes and have industrialised innovation through the development of ever-smarter tools.

The next revolution is already here but, as the saying goes, it will not be evenly distributed. At the moment, many aspects of our working lives are still going through digital transformation. Everything is becoming data and the more that becomes centralised, the more insights it enables, offering a greater opportunity for knowledge processing.

Artificial Intelligence and Machine Learning have gone from science fiction to science fact and are rapidly being used by increasing numbers of industries to improve productivity, capture knowledge and create expert systems. Businesses will need to transform as quickly as these technologies are deployed, as they will bring structural and business-model changes at rates which we have not yet truly anticipated.

In the last few months, I’ve seen demonstrations of design technology currently in development that will, at the very least, automate labour intensive detail tasks and perhaps greatly lessen the need for architects on certain projects.

First warning

During the lock down in 2020, I watched with interest an Instagram post by designer and artist, Sebastian Errazuriz. It soon became a series and more of a debate. He said, “I think it’s important that architects are warned as soon as possible that 90% of their jobs are at risk.”

His argument condensed down to the fact that architecture takes years to learn and requires years of practice. Machine learning-based systems can build experience at such an accelerated rate that humans cannot possibly compete.

As we already have millions of houses and enormous quantities of data, including blueprints, why do we need a new house when we can have an AI trained on that data and then a blend of all the best designs? “Now try to imagine what 1,000 times this tech and 10 years will do to the industry,” concluded Errazuriz.

 

The interesting thing is, at that point in time there was very little technology offering anything like that. Perhaps Errazuriz had seen Google’s Sidewalk Labs which was experimenting with generative design to create and optimise neighbourhood design. At the time I thought it was a good marketing ploy for himself, although the comments turned into a pile-on.

Current AI reality

We are still some way off from fulfilling anything like the true potential of AI in generative design, a view shared by Michael Bergin of Higharc, who used to head up a machine learning research group at Autodesk. “The full impact of a generative model that uses a deep learning system, what we call an Inference Model, is not ready for primetime yet but it’s incredibly interesting,” he says.

But there have already been several fascinating applications of AI/ML in AEC. Autodesk, for example, has delivered some niche uses of the technology. Autodesk Construction IQ is aimed at project risk management in commercial, healthcare, institutional, and residential markets. It examines drawings and identifies possible high-risk issues ahead. AutoCAD has a ‘My Insights’ feature, which examines how customers use their AutoCAD commands and what they do. The AI then offers tailored advice to help improve productivity or suggests better ways to use the tools.

Like all hype cycles, the impact of machine intelligence on jobs will be overestimated in the short term and underestimated in the long-term

There are also a range of adaptive and ‘solver’ tools available such as Testfit, Autodesk Spacemaker and Finch 3D, which all solve multiple competing variables to help arrive at solutions that are optimised. While not strictly AI/ML, their results feel like magic and actually help designers make better informed decisions and reduce the pain of complexity.

Bricsys has also been investing in AI. Bricscad BIM doesn’t use the Lego CAD paradigm of modelling with walls, doors, windows etc. Instead, the user models with solids and then, using the BIMify command, runs AI over the geometry, which classifies it into IFC components: windows, floors, walls and so on.
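
Bricsys has not published how BIMify works under the hood, but the flavour of geometry-driven classification can be sketched with simple heuristics: a solid’s bounding-box proportions already say a lot about whether it is a wall, a slab or a column. The rules and thresholds below are invented for illustration and are not Bricscad’s algorithm.

# Toy heuristic classifier for solids, keyed on bounding-box proportions.
# Illustrates the idea of geometry-driven classification only; this is not
# how Bricscad's BIMify command actually works.

def classify(dx: float, dy: float, dz: float) -> str:
    """Guess an IFC-style category from a solid's bounding-box dimensions (m)."""
    thinnest, _, longest = sorted((dx, dy, dz))
    if dz == thinnest and longest / thinnest > 10:
        return "IfcSlab"            # thin and horizontal, large in plan
    if dz == longest and max(dx, dy) / dz < 0.2:
        return "IfcColumn"          # tall and slender
    if thinnest in (dx, dy) and longest / thinnest > 5 and dz > 2.0:
        return "IfcWall"            # thin, long and storey-height
    return "IfcBuildingElementProxy"  # fall back to a generic element

if __name__ == "__main__":
    print(classify(6.0, 0.2, 2.7))   # -> IfcWall
    print(classify(8.0, 8.0, 0.25))  # -> IfcSlab
    print(classify(0.4, 0.4, 3.0))   # -> IfcColumn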

AI applications so far have either predominantly been at the conceptual side or have tried to ‘learn’ from the knowledge of past projects.

Recent advances

Over the last two years, in conversations with AEC firms that were fed up with the limitations of their BIM tools and were looking for significant productivity improvements, many seemed to converge on a desire to completely automate the 2D drawing process.

While drawings are a legal requirement, heavily model-based firms are calculating that they could save millions by having AI take over that task, leaving them more time to spend on design. Around the time of our NXT BLD conference in June 2022, I started to see early alpha code of software which was looking to apply AI/ML to design. And, in subsequent conversations with some design IT directors at leading architectural firms, there was an appreciation that for many standard, repeatable building types – schools, hospitals, offices, and houses – automated systems will soon heavily impact bread and butter projects.

One firm was already running projects in the Middle East with an in-house system which only required one architect, whose task was to define and control the design’s ‘DNA’, with the rest of the team being engineers, focussed on streamlining fabrication. I’ve also seen a demonstration of a system that requires mere polyline input to derive fabrication drawings for modular buildings, missing out detail design completely. There’s also Augmenta, which is looking to automate the routing of electrical, plumbing, MEP and structural detail modelling.

Another gift from lockdown was construction giant Bouygues Construction working with Dassault Systèmes to develop an expert system based on the 3DExperience platform (Catia for us old schoolers).

Drop in a Revit model and the system outputs a fully costed, documented virtual construction model for fabrication – all based on the rules, processes and machines, which Bouygues has defined in its workflow, all managed through its Dassault Systèmes’ Product Lifecycle Management (PLM) backbone.

While the system is based on configuration and constraints and is light on AI/ML, there is a drive to build bespoke expert systems that harness a company’s well-defined internal processes. As with Higharc, the rise of platforms that solve niche market segments is also more likely to be the shape of next generation tools.

Pictures that infinitely paint

Ten years ago, machine learning systems were only just getting the hang of identifying the subject of a photograph. Is this a bear or is this a dog? Today’s systems can write entire paragraphs describing a scene using computer vision. This advance is probably just as well, as there are already automated taxis with no human drivers in San Francisco driving around picking up passengers – Cruise and Waymo.

The rise of DALL-E, Midjourney, DeepRender and Stable Diffusion have flooded social media with all sorts of amazing images. In this issue you can see the work of many readers who have been experimenting with these tools, to great effect. Trained on billions of photographs and now allowing users to add their own, from week to week this technology seems to be rapidly advancing to a point where the output becomes useful at the conceptual phase of design.
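
For readers who want to experiment hands-on, the open-source route is only a few lines of Python with the Hugging Face diffusers library. The model name and prompt below are just one possible choice, and a CUDA-capable GPU is assumed.

# Minimal text-to-image sketch using the open-source diffusers library.
# Model choice and prompt are examples; a CUDA GPU is assumed for speed.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = ("concept facade for a timber-framed community pavilion, "
          "parametric lattice, golden hour, photorealistic")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("pavilion_concept.png")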

AI-generated image of a building facade produced by Hassan Ragab in midjourney, automatically converted to a 3D mesh using Kaedim, an AI that turns 2D images to 3D models

That’s a view shared by computational designer / digital artist, Hassan Ragab, one of the most accomplished users of the technology. “There will be a point in the near future where these tools could be directly employed into the design process,” he says. “For now many architects and designers are using it as sketching / inspirational tools, but for me, I am just trying to explore what these tools mean to our creative process; by trying to push my imagination to its limits and visualising what is on my mind using these powerful tools (and also to observe how these tools are changing how my mind works).”

Second warning

In August 2022, Sebastian Errazuriz was on Instagram again, this time identifying that illustrators will, unfortunately, be the first artists to be replaced by AI. Illustrations are commissioned based on text descriptions, which is how these AI systems work.

“The only difference between a human and the AI is that it takes a human about five hours to make a decent illustration that’s going to be published. It takes the computer five seconds,” said Errazuriz.

He went on to recommend jumping in as fast as humanly possible to understand how the tools work and for illustrators to use their abilities to augment these designs. Experience will now help artists learn how to better describe an image to the machine.

 

 

I recently spent a weekend with friends who own a visualisation and media company. One of the partners confided to me that he had thought that, being a creative, he would never have to compete against artificial intelligence. In the last two months his company has had to invest hours of time learning to make use of and understand how these new tools can be harnessed for their business. They even have clients requesting AI-generated presenters, which read out written text in their videos, to save money. It would seem Errazuriz is on the money.

AI to BIM?

Having seen the incredibly consistent midjourney building designs by Hassan Ragab and followed the community, it was interesting when a UK company called Kaedim popped up, which appeared to be developing a service to convert 2D images to 3D mesh models. I contacted the CEO, Konstantina Psoma, to see if we could try out the service.

Kaedim was designed to offer the games industry a SaaS platform for quickly converting 2D assets into 3D meshes for games content. We sent over one of Hassan’s complex models and got back an OBJ file containing a single meshed object. The interpretation was interesting, but there was obviously no detail on any of the other sides of the building. Psoma had warned me that Kaedim hadn’t been trained on architectural assets, but was up for giving it a go.

Kaedim
Photo of early modernist architecture, automatically converted to a 3D mesh using Kaedim, an AI that turns 2D images to 3D models

Given the complexity of the Midjourney output, I next put through a photograph of some early modernist architecture, which was very rectilinear; this gave much better results. I then tried running the mesh through Bricscad BIM to see if the BIMify command could turn it into a BIM model.

While I was hoping this would deliver the world’s first AI concept design to BIM model, incompatibilities between the tools meant it fell a little short. Kaedim creates a single sealed mesh, whereas Bricscad BIM expects multiple meshes in its models. However, it came temptingly close, especially with simplified geometry.
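To understand where a conversion like this falls down, it helps to inspect the mesh before handing it to a BIM tool. The sketch below is a minimal diagnostic, assuming the open source trimesh library and an illustrative filename for Kaedim’s OBJ output; it is not part of either vendor’s workflow, and properly segmenting a single sealed mesh into BIM elements would need far more than this.

```python
# Minimal mesh diagnostic, assuming the open source trimesh library.
# The filename is illustrative; this is not part of Kaedim's or Bricscad's workflow.
import trimesh

mesh = trimesh.load("kaedim_output.obj", force="mesh")
print(f"watertight: {mesh.is_watertight}, faces: {len(mesh.faces)}")

# Split into connected components. A truly single, sealed mesh returns one
# part, which is exactly the mismatch with a BIM tool expecting one mesh
# per building element.
parts = mesh.split(only_watertight=False)
print(f"found {len(parts)} separate bodies")

# Export whatever separate bodies do exist for downstream import.
for i, part in enumerate(parts):
    part.export(f"part_{i:02d}.obj")
```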

At some point, these AI systems are most certainly going to start producing 3D models from a description, or will become capable of rendering all façades, enabling some degree of 3D. Instead of feeding them flat 2D imagery, imagine an AI trained on every award-winning architectural 3D model, or on all the changes to architectural vocabulary throughout history, from Imhotep (2,700 BCE) to Zaha Hadid Architects (2016). Or an AI engineering system which generates a fabricable engineering design of a hospital at 1:1, but allows the architect to design the façade panels, possibly inspired by another AI tool?

Conclusion

AI/ML, configurators and solvers are coming, and coming fast. Over the next five years it will be fascinating to see how this all unfolds. To stay ahead of the game, the best survival advice is to familiarise yourself with these new systems when you get the chance.

Established BIM developers are working out which elements of their existing tools AI/ML can be applied to. These range from the boring but essential, such as stair design, to form optimisation based on multiple analysis criteria.

This piecemeal approach to improvement will please existing users but won’t radically change the process. It will be left to others, with nothing to lose, to come up with more powerful design systems that offer faster concept-to-design throughput. The focus might not be on architectural design but on construction, because of the value benefit that could be applied there.

Augmenta, for example, is looking to automate all the phases of detailed design. If this were to be driven into fabrication as well, the whole process might also go from 3D model to G-code.

As with all hype cycles, the impact of machine intelligence on jobs will be overestimated in the short term and underestimated in the long term. From what I can see, efforts are being made to automate detailed design, together with drawing production.

Both of these tasks are highly demanding and require sizeable teams to carry out mundane work and coordinate design changes. Automation could ultimately bring about reductions in headcount at firms. The dream of having more time to design may hold some truth, but architects would need to change their business models, as billing by the hour and a change-driven fee structure are not going to survive the impact of automation in detailed design.

The other thing that comes to mind is that all this time-compression technology, with its ability to turn a process that has traditionally taken years into perhaps weeks, doesn’t really allow for human nature and the reality of clients changing their minds.

I remember hearing of one successful collaborative BIM project that coordinated its project teams on an office building design and got early sign-off from the client, at which point the steel was ordered. Much later, the client changed their mind on the design, but it was too late, as the steel had been cut. AI might help deliver zero clashes and vastly reduced waste, but we can’t forget about the state of flux that is core to the human condition.


AI in architecture: by Clifton Harness, CEO of TestFit

It was scheme “F0” fully printed and delivered to higher-ups for review. This baby was the sixth major site plan design, but the tenth minor iteration that slightly improved the developer’s financial outcome. Finance said it was a winner.

On the walk back to my desk at 11:14pm, I counted the units, again. 253. Good. I counted the parking stalls. It was ready for review. The next morning, I arrived to review “F0” and caught my 30-years-an-architect boss hard at work counting the stalls and units. This is when it really hit me: software has barely scratched the surface of building design. I think that this thought, in this moment, was the TestFit founding moment.

I was deeply struck by the very real absurdity that, industry-wide, hundreds of thousands of hours are spent checking the math on parking stalls. Imagine if we could fix that? Or more meaningful things, like improving the hit rate for housing projects? Or employing artificial intelligence to help humans comply more ably with ever more complex zoning and compliance codes?

Now to the meat of how I see AI playing out in architecture:

AI in architecture will result in better architecture, as long as there is actually a human architect running that AI. This will put the modern architect at a crossroads: do they embrace technologies that can make them super architects or do they reject them and watch the engineering and development industries embrace them? Either way, we will get better buildings, and the choice is the architect’s now.

If user-editable configurators like TestFit’s technology are employed, the project team has detailed control to achieve the design vision. This approach enables software engineers to use meaningful procedures to develop forms and to understand why they break. The major strength (or weakness) of procedures is that they are all human-informed.

In the past few years, we have seen very impressive machine-learning algorithms start to tackle things like noise, daylighting, energy use, or microclimate analysis. These are promising, but ultimately computers were the ones doing those analyses anyway. The definition of form to meet project requirements continues to be the fundamental task at the heart of the design process.

Mixed-AI workflows are also quite promising. An example is using a simple procedure to generate massing, and then asking a neural network for its best guess on column sizing for that mass.
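As a purely illustrative sketch of that idea (not TestFit’s implementation; the massing rule, the features and the saved model below are all hypothetical assumptions), a procedural generator might hand its output to a previously trained model like this:

```python
# An illustrative sketch of a mixed workflow: a deterministic procedure
# generates the massing, then a previously trained model guesses column
# sizing for that mass. This is not TestFit's implementation; the massing
# rule, the features and the saved model file are all hypothetical.
from dataclasses import dataclass
import joblib  # assumes a scikit-learn style regressor saved to disk

@dataclass
class Massing:
    footprint_x: float      # metres
    footprint_y: float      # metres
    storeys: int
    storey_height: float = 3.5

    @property
    def gross_area(self) -> float:
        return self.footprint_x * self.footprint_y * self.storeys

def generate_massing(site_x: float, site_y: float, target_area: float) -> Massing:
    """Procedural rule: occupy 60% of the site and stack storeys
    until the target gross area is reached."""
    fx, fy = site_x * 0.6, site_y * 0.6
    storeys = max(1, round(target_area / (fx * fy)))
    return Massing(fx, fy, storeys)

mass = generate_massing(site_x=60.0, site_y=40.0, target_area=12_000.0)

# Hypothetical model trained on past projects: massing features in,
# suggested column dimension (mm) out.
column_model = joblib.load("column_sizing_model.joblib")
features = [[mass.footprint_x, mass.footprint_y, mass.storeys, mass.gross_area]]
column_mm = column_model.predict(features)[0]
print(f"{mass.storeys} storeys, suggested column size ~ {column_mm:.0f} mm")
```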

Another thing I am absolutely convinced of: all these avenues of AI penetrating the architecture industry will still go through architecture firms. I’ve worked personally with hundreds of real estate developers, and nearly all of them would prefer to work with architects that have a long track record of success.

The real fear for the architecture industry, I think, is when start-up development or start-up architecture shops begin to leverage this technology and develop asymmetrical advantages over real estate investment trusts (REITs) or the Genslers of the world. AEC has always been soft on process, and AI is the process holy hand grenade.
