AEC Magazine – Technology for the product lifecycle

BLK2FLY gets autonomous indoor scanning

New capability allows the autonomous flying laser scanner to scan indoor spaces safely and accurately

The Leica BLK2FLY, the autonomous flying laser scanner from Leica Geosystems, part of Hexagon, now has the ability to scan indoor spaces.

The new feature, which can be delivered through a free firmware update for existing customers, is designed to provide expanded coverage for complex scanning projects, including digital twins.

The BLK2FLY can now scan in areas without GNSS availability, opening reality capture opportunities in new settings and with new applications, including hazardous indoor areas like nuclear power plants.

Increased performance of the autonomous navigation system is said to heighten the sensor’s spatial awareness, allowing for obstacle avoidance in more confined spaces.

This new capability relies upon advancements to Hexagon’s visual SLAM system, providing real-time spherical imaging that improves the BLK2FLY’s operating range to a radius of 1.5 metres.

“The BLK2FLY, with its advanced autonomous UAV-based scanning, redefined reality capture workflows for multiple industries, especially digital construction, architecture, historic preservation and utilities,” says Pascal Strupler, business director autonomous reality capture at Hexagon’s Geosystems division.

“Digital realities are a crucial component of those workflows, and now that the BLK2FLY is able to scan indoors, users can create complete, comprehensive digital twins of buildings and structures.”

The BLK2FLY complements Hexagon’s terrestrial and autonomous sensor portfolio. According to Hexagon, when combined, they create complete coverage for any scanning project.

Users can also take advantage of Reality Cloud Studio, powered by HxDR, Hexagon’s cloud application that enables data to be uploaded from the field using a tablet or smartphone, then automatically registered, meshed and turned into 3D models.

HP Z4 Rack G5 remote workstation launches

1U rack workstation designed specifically to maximise density in server rooms and datacentres

HP has introduced the HP Z4 Rack G5, a 1U rack workstation designed specifically for server rooms and datacentres, providing 1:1 remote access for workstation users.

The HP Z4 Rack G5 shares many of the same characteristics as the HP Z4 G5 desktop workstation – up to 24 cores with the Intel Xeon W-2400 CPU and up to the Nvidia RTX 6000 Ada Generation GPU – but maxes out at 256 GB of DDR5 memory instead of 512 GB. It supports one dual slot GPU, including the new Nvidia RTX 5000 Ada or RTX 4500 Ada, or two single slot GPUs, including the new Nvidia RTX 4000 Ada.

With a 1U chassis, the HP Z4 Rack G5 earns its datacentre credentials by being significantly slimmer than its 4U desktop counterpart, delivering four times better density in a standard rack (a 42U rack can hold 42 of the 1U systems, versus ten 4U towers).

It has dual power supplies: the two 675W PSUs can be combined, or one can be kept in reserve for redundancy in the event of a failure. In redundancy mode, however, the CPU / GPU options will be limited.

The HP Z4 Rack G5 can be configured with the HP Anyware Remote System Controller, a remote out-of-band management solution designed to give IT managers the ability to monitor and manage workstation fleets through a single interface.

IT managers can remote in, power workstations on and off remotely, perform bare metal imaging (with multiple operating systems), manage inventory, and get hardware alerts and diagnostics info.
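HP doesn’t detail the protocol behind the Anyware Remote System Controller in this announcement, but out-of-band power control of rack systems is commonly exposed through the DMTF Redfish REST API. Below is a minimal sketch of what that looks like in practice; the controller address, credentials and system ID are placeholders, and this is a generic illustration rather than a description of HP’s actual interface.

```python
# Generic sketch of out-of-band power control via the DMTF Redfish REST API.
# Illustrative only: the controller address, credentials and system ID are
# placeholders, and this is not a description of HP's actual interface.
import requests

BMC = "https://10.0.0.42"     # hypothetical management controller address
AUTH = ("admin", "password")  # placeholder credentials
SYSTEM = f"{BMC}/redfish/v1/Systems/1"

# Read the current power state of the workstation
resp = requests.get(SYSTEM, auth=AUTH, verify=False)
print("Power state:", resp.json()["PowerState"])

# Request a graceful restart (other ResetTypes include "On" and "ForceOff")
requests.post(
    f"{SYSTEM}/Actions/ComputerSystem.Reset",
    json={"ResetType": "GracefulRestart"},
    auth=AUTH,
    verify=False,  # BMCs often use self-signed certificates
)
```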

The workstation can also be used with HP Anyware (formerly Teradici CAS) remote access and collaboration software so teams can access the power of the Z4 Rack from any device.


The Future AEC Software Specification

At our inaugural NXT DEV conference, one of the keynote talks, given by Aaron Perry of the Open Letter Group, announced a free-to-all Future AEC Software Specification backed by global firms such as BIG, HOK, Herzog & de Meuron, KPF, Woods Bagot and ZHA. What should next generation tools be able to do? Martyn Day gives the back story

In London on 21 June, at AEC Magazine’s NXT DEV event, Aaron Perry, head of digital design at AHMM, gave a masterful presentation on the ‘Future AEC Software Specification’, a document that looks destined to have major influence on the future of AEC software.

Perry spoke on behalf of a peer group of leading architectural practices, which have been discussing the future of design tools and how to drive development across the entire design software community.

The specification originated with the UK Open Letter Group (OLG), who wrote that infamous letter to Autodesk CEO, Andrew Anagnost in 2020, but it is a much wider global effort.

It is currently being shared amongst the Nordic OLG firms (authors of a second open letter), and the highly influential AIA Large Firm Roundtable (LFRT) for approval and revisions.

The specification addresses the whole software development community, including those that invest in technology, and provides a level of customer insight that, quite frankly, is not available anywhere else. It asks, what is the future of our industry from a digital design point of view? It is a truly unique offering.

In his talk, Perry first points out that the specification is not a definition for a single monolithic product. The industry needs an ecosystem of best-in-class tools that can work together and are applicable to large firms all the way down to single users. There are ten pillars to the specification. We’ll look at some of the key ones below.




Data framework

Fundamental to the specification is the data framework. When trying to deliver complicated projects, firms use many software tools. Every time data is exported between the various siloed tools, data is lost. As a result, most firms accept the status quo and try not to move their data between various file formats, choosing to use suites of average tools.

But many firms still try to use the best tool for each problem, paying for it through translators, complexity, plug-ins and specialist software. This is the biggest drain on time and energy in the industry. A solution would be something like Universal Scene Description (USD), which is open source and has solved most of the data transmission problems in the CGI world. It’s a common standard that transmits geometry, lighting and materials to pretty much any media and entertainment application, whoever developed it.

The AEC industry faces a similar problem. With USD the common data structure sits outside the host products. This also means different teams, with different specialisms can open the USD file and work on their part of the scene. This creates a level playing field for software developers.
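To make that concrete, here is a minimal sketch using USD’s open source Python API (the pxr module). The stage file and prim paths are invented for illustration; the point is that any USD-aware tool can open the same scene and author its own part of it.

```python
# Minimal sketch of USD's shared-scene model, using Pixar's pxr Python API.
# The stage file and prim paths are invented for illustration.
from pxr import Usd, UsdGeom

# Any USD-aware application can open the same stage...
stage = Usd.Stage.Open("project.usda")

# ...and inspect just the parts it cares about
for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        print(prim.GetPath())  # geometry sits alongside materials and lights

# Edits can be directed at a separate layer, so different teams can
# author their own contributions without overwriting each other
stage.SetEditTarget(Usd.EditTarget(stage.GetSessionLayer()))
UsdGeom.Mesh.Define(stage, "/Building/Structure/Beam_001")
stage.GetSessionLayer().Export("structural_edits.usda")
```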

Read the Mission Statement of the Future AEC Software Specification

According to Perry, USD has some challenges. It’s a file format as opposed to a database; there are variants, as it’s not a perfect specification; and it is not the right format for BIM data.

Perry explained how different participants in projects need to access different levels of detail in the data – specific data sets for structural engineers, cost consultants, energy simulation experts and sub-contractors, who don’t need the whole model.

“The concept is of the data framework being outside of these file formats, enabling different people to access, author and modify geometry and parameters concurrently. This is by entity or component, not requiring the opening of the full model to review a single string of data,” he said.

“At this entity level, collaborators can concurrently commit changes and co-author packages of information with a full audit trail. Moving away from file formats and having a centralised data framework enables local data to be committed to a centralised data framework and allows entire supply chains to access different aspects of the whole project without the loss of geometry.”
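The talk doesn’t describe any concrete implementation, but the mechanics Perry outlines (entity-level commits, concurrent authoring, an audit trail) are easy to sketch. The class below is purely illustrative, invented to make the idea tangible; it is not any vendor’s API.

```python
# Purely illustrative sketch of entity-level commits against a shared data
# framework, invented to make the idea tangible; not any vendor's API.
from datetime import datetime, timezone

class DataFramework:
    def __init__(self):
        self.entities = {}  # entity id -> latest merged properties
        self.audit = []     # append-only audit trail of every commit

    def commit(self, author: str, entity_id: str, changes: dict):
        """Commit changes to a single entity, not the whole model."""
        self.entities.setdefault(entity_id, {}).update(changes)
        self.audit.append((datetime.now(timezone.utc), author, entity_id, changes))

    def read(self, entity_id: str) -> dict:
        """Collaborators fetch one entity without opening a full model."""
        return self.entities.get(entity_id, {})

hub = DataFramework()
hub.commit("structural_engineer", "beam_001", {"section": "UB 406x178x74"})
hub.commit("cost_consultant", "beam_001", {"unit_cost_gbp": 132.50})
print(hub.read("beam_001"))           # merged view of both commits
print(len(hub.audit), "audited commits")
```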

As an industry, we are currently locked into proprietary file formats. In the next generation, Perry clearly makes a case for shared and distributed ownership without the vendor lock-ins that the industry has suffered from.


Context and scale were other aspects of data that Perry commented on – with the need for detail, such as an individual sprinkler, in relation to the whole building. Game engines like Unreal with Nanite technology can model dust particles and sand, while scaling up geometry to the size of countries. Models are being rebuilt and remade for this level of detail. We need modern performant software from concept to construction levels of detail, he said.



Responsible design

Projects are increasingly becoming centred on retrofit. With sustainability at the core, the focus is on reinventing the buildings we already have. Time is spent analysing the fabric of existing buildings, but there are no off-the-shelf tools available to help with this. Perry explained that we need software that understands construction and embodied carbon – every major design firm has written its own. We also need more tools for operational energy, water use, climate design, occupant health and community – we are not seeing any tools with this on their agenda, he said.

MMC and DfMA

Modern Methods of Construction (MMC), DfMA and off-site construction are on the rise, and firms are looking for more certainty in construction. There are design constraints for shipping volumetric buildings to site, yet Perry explained that 95% of software currently available will not allow designers to create such design constraints when doing conceptual massing studies.

Most of the massing studies software – and many such tools have come to market – is not fit for purpose. In fact, design tools do not allow architects to go down to the fabrication level of detail required. There is no modular intelligence.

Modelling capabilities

Moving from drawing board to CAD was pretty straightforward. Moving to object-based modelling was more challenging. Perry calls for more flexible tools, smarter tools that have construction intelligence, “We have either flexible tools without structure, or structured tools with no flexibility. Neither of which understand how buildings come together”.

Automation and intelligence

The industry is highly repetitive, yet across all firms, despite there being experience and knowledge in abundance, that knowledge gets applied from first principles each time, without learning anything from previous projects. Can we have tools that capture that knowledge?

The core size is manually calculated for every single building, over and over again, as the building envelope changes. When talking about repetitive tasks, like creating fire plans for multi-floored buildings, Perry asks, “Where’s Clippy? We have no automation from past projects.”

Deliverables

The industry is spending far longer creating drawings than designing. “Why are we still generating drawings on A1 and A3 sized templates when these are never printed by anyone?” asked Perry.

“The drawing production process is about the release, exchange and acceptance between multiple parties, and it’s sticking around for the time being because it’s our sole method, as an industry, to transact between different parties – but what is its lifespan during production?” Using a centralised data framework with automation might become an alternative. Drawing production might become more incidental in the future.

Access data and licensing

As subscription has changed the way customers pay for tools, there have been far too many experiments with business models. Floating licences have been expunged in favour of individual named user licences. Firms end up restricting access to those products. “Commercial limits are commercial limits,” said Perry. Token based systems expect the customer to gamble on future usage needs.

“We are seeing a trend of some vendors charging a percentage of construction value. The reality is that by doing that you are pushing the decision-making process of what design software to use to the person paying the bills and setting the total values for projects, so we’re effectively walking into a situation where we’re saying ‘the commercial decision for purchasing software now sits with the pension investment fund that owns the budget’.” Needless to say, Perry is not a fan.

In his hour-long presentation, Perry outlined just a few of the topics dealt with in the Future AEC Software Specification, which can be seen in full here.

To watch Perry’s excellent presentation and the panel discussion that followed, click here

Conclusion

We are at an interesting point in AEC software development. Mature customers are ahead of the software developers, wanting more and demanding higher productivity benefits. The leading software developer, Autodesk, has just started changing its tyres at 90mph, working out how to migrate Revit data to its new unified database, while figuring out what design tools Autodesk Forma will offer.

At the same time, new start-up firms are listening to mature customers and pondering their mass market play. The Future AEC Software Specification is the generic guide to all developers in the industry, to think about the kinds of tools that large firms would like to see, beyond what is currently available.

From watching Perry’s crystal clear talk at NXT DEV, and dropping into the conversations on the various panel sessions, it’s clear that there is plenty of opportunity for the industry to work together to develop the kind of capabilities which many would love to have right now.

AEC Magazine hopes to put on a yearly event where we continue to directly connect innovative practices with software developers and VCs. It’s reassuring to hear from the OLG that large construction firms have been inspired by Perry’s talk and the release of the software specification and are already looking to join the initiative and contribute to the specification to make it broader and pan-discipline applicable.


Mission statement
Future AEC Software Specification

“Our aim is to set out an open-source specification for future design tools that facilitate good design and construction, by enabling creative practice and supporting the production of construction-ready data.

“This specification envisages an ecosystem of tools, for use by large firms and single practitioners alike, with a choice of modular applications each with specific functions, overlaid on a unifying “data framework” to enable efficient collaboration with design, construction and supply chain partners.

“As architects, engineers and contractors, we do not profess to have all the expertise needed to enable this ecosystem, but we do know what we need to do our work, and so we are looking to engage with the software community to deliver the best solution.”


A Q & A with Aaron Perry (AHMM) and Andy Watts (Grimshaw)

We caught up with Aaron Perry (AHMM) and Andy Watts (Grimshaw) after NXT DEV to find out what kind of response they had to the Future AEC Software Specification.


AEC Magazine: What was the immediate reaction at NXT DEV after you came off stage? What did people say to you?

Aaron Perry: When I came off stage, I couldn’t move two metres without another group of the audience wishing to speak with me. I was surprised how many people really ‘got’ the spec’s intentions. To some extent, I expected many to ask, ‘What about IFC?’ and most people got that we aren’t talking about file formats but web-enabled data exchanges.

Andy Watts: It was also surprising the range of interest we got – from engineers, developers, contractors, even the likes of KPMG. The way Aaron pitched the presentation made sure there was resonation with everyone, regardless of discipline.

Aaron Perry: Online, it’s been overwhelmingly positive, I expected trolls to express alternative opinions for the sake of it, but the feedback so far has been very positive and complimentary of the initiative. I’ve spoken with vendors and designers, and it’s been encouraging.

Andy Watts: After a few weeks to digest, we’ve started to see some more mature engagement as well, readers wanting to know how this will work.


AEC Magazine: You have HOK and BIG signed up. What’s happening with the AIA LFRT and the Nordic Open Letter Group? Are they in the process of approving it?

Aaron Perry: The website has been updated with supporters, and there are many that we haven’t even managed to get around to adding. We received confirmation from the Architectural Associations of Finland, Norway, Iceland and Denmark on the morning of NXT DEV. Since this, we’ve also had Central European associations and firms express interest.

Andy Watts: I’ve met with HOK and Woods Bagot whilst in the US the past couple of weeks. There has been conversation happening informally amongst the US firms – a healthy debate was how that was framed to me. But since then, a number of US firms have also started reaching out – nbbj, Perkins & Will, etc. Greg Schleusner (HOK) and Shane Burger (Woods Bagot) are also laying the groundwork for a presentation to the LFRT by Aaron and me later in the year.

Beyond that, we’ve started to see Australia get on board in the last few weeks – Cox, Architectus and Aurecon have all started reaching out. And it has recently had some publicity at some Australian events.


AEC Magazine: Construction firms were also interested. Will there be signatories from the construction space?

Aaron Perry: One of the most surprising conversations I had straight after the presentation was from a Tier 1 Main Contractor who said something along the lines of ‘we’re very in bed with Autodesk. We’re actually working with them to build out custom tools for us, but you raise a really valid point that we should challenge vendors too, can we speak more…’ So yes, this is not limited to just designers.

ICE and others have asked us to report to their strategy groups to help them understand where they can contribute.


AEC Magazine: Now that you have delivered the specification what comes next?

Aaron Perry: We’re working on it. We have a solid plan of what steps to take, but I’d be talking out of turn to share that without having received a broader sign-off.

Andy Watts: Agreed. Generally our approach was to put this out in the public to see if it resonated, and then plan from there. We’ve got the resonation – now we’re working on the plan.


AEC Magazine: How can firms contribute to the specification?

Aaron Perry: The spec is live and has started to receive feedback/input. We’re pointing people to that and asking them to add their thoughts and adaptations.

Andy Watts: Also, we’ve seen a lot of engagement through word of mouth, so we’d encourage people to share.


How we arrived at the Future AEC Software Specification

Over the past five years, the majority of investment in the AEC industry, including venture capital, acquisition and development, has focused on construction. This happened at a time when architects and users of the number one design modelling tool, Autodesk Revit, were getting frustrated at the lack of meaningful updates, but seeing increased software subscription costs.

Meanwhile, Autodesk was in full acquisition mode, buying up cloud-based applications to fill out its Construction Cloud service. While there had been long-term rumours of a new generation of solutions from Autodesk based on the cloud, originally called Quantum, there was no set date or anything to show.


In his work on data lakes, Greg Schleusner of HOK has highlighted the inefficiencies of data silos

This frustration led to the formulation and delivery of an open letter to the CEO of Autodesk, Andrew Anagnost, from 25 UK and Australian firms, highlighting a wide range of complaints, from lack of updates to the stuffing of ‘Collections’ with products they don’t ever install, let alone use.

The net result was a series of meetings with Autodesk management and product development staff, exploring what kinds of features these firms would like to see in Revit.

In 2022, Anagnost made it clear that there was not going to be a second generation of Revit as we know it – a desktop application. In an interview with Architosh, he said, “If you want a faster horse, you might not want to work with us because we will not make a faster horse. But we can be that partner and tool provider that supports professionals into the new era of architecture.”

That new era was a future version of Autodesk Forma, the cloud-based platform which soft-launched this year with an initial focus on conceptual design.

Around this time, the Open Letter Group (OLG) were engaged with the Revit development team, providing wish lists for Revit. Meanwhile, a group of signature architects had also approached Autodesk with similar complaints, just without a public letter, and went through the same engagement process, requesting new features.

The Revit team had its own part-published roadmap and there were attempts to see where they could align. But herein lies a problem: when asking for feature wish lists, the many will always drown out the few.

Autodesk has sold somewhere between 1.2 and 2 million seats of Revit and the majority of users have fairly rudimentary needs. Meanwhile, many of the firms that were complaining were the mature BIM users, a minority that are really pushing the boundaries and are frustrated at being held back with Revit’s lack of development velocity.


This led to a second open letter, the Nordic Letter, which is now signed by 324 firms including BIG, together with the Danish, Norwegian and Icelandic architectural associations, representing some of the firms who had approached Autodesk privately.

The OLG had little faith that Autodesk would build into Revit the advanced features its members need. Any boundary pushing technology or new development was more likely to appear as a cloud service or as part of Autodesk Forma – and this transition could take years.

The OLG decided to work on a specification, a shopping list of the kinds of technologies they would like to see in a next generation product.

By this time, late 2022, the OLG had been approached by nearly every large software company with an interest in either entering the BIM market or tailoring their current offerings to meet the BIM needs of the members – Hexagon, Nemetschek, Dassault Systèmes, Trimble, Graphisoft, BricsCAD, Qonic and others.

The decision was made to collate the specification and just ‘put it out there’ for the benefit of all / any developers looking to create next generation design tools. And there are many, including Snaptrude, Qonic, Arcol, Blue Ocean and Swapp, with more waiting in the wings.

As we move towards a second generation of design tools, one of the key challenges for these software developers is to what extent they should respond to the needs of the firms that have complained. Autodesk is not alone in feeling it needs to cater to the masses.

As one developer told us, “We’re in the volume market and developing for the signature architects and large firms does not make commercial sense for us.”

The problem is, artificial intelligence and machine learning are coming very quickly to this market, and people that use BIM to make simple rectangles and unexciting geometry are going to be the first to face automation.

In an AI world, those who create complex geometry and more unique designs will be the ones that will still require more software products. AI probably means more Rhino.

Databases over files

Greg Schleusner of HOK has been on a similar tack, looking at next generation tools, specifically concentrating on the change from proprietary to open data formats and from files to data lakes.

The industry wants data sovereignty and independence. It does not want to be trapped in proprietary file formats again. We’ve already been DWGd and RVTd; we need to move beyond the silos and it’s time for open databases, offering freedom to share at a granular level, without the need to write ‘through’ a vendor’s application to gain access to your own or others’ project data held in proprietary repositories. Schleusner has managed to bring together a consortium of 39 firms to pursue the creation of this open data lake.

To really understand the scope of this, we recommend watching his past three presentations from NXT BLD.

2021 – Creating Balance

2022 – Road to Nowhere

2023 – Assemble

AEC July / August 2023 Edition out now

What leading AEC firms want from next generation tools, NXT BLD / DEV on-demand, plus lots more


We kick off summer with an important edition of AEC Magazine, where we hear from leading AEC firms about what they want from next generation tools. It’s available to view now, free, along with all our back issues.

Subscribe to the digital edition free + all the latest AEC technology news in your inbox, or take out a print subscription for $49 per year (free to UK AEC professionals).


What’s inside the July / August edition?

The future AEC software specification
At NXT DEV, Aaron Perry of AHMM, backed by many global firms, presented a wish list of what leading edge BIM practices want from next generation tools.

NXT BLD / NXT DEV on-demand
NXT BLD was followed this year by NXT DEV, a conference dedicated to the future of AEC software. All 40 presentations from both events can now be viewed on-demand.

Will AI design your next building? 
Will AI take architects’ jobs too, or will it make them more fulfilling instead? asks Akos Pfemeter, VP, technology partnerships at Graphisoft.

Revizto reflections 
We caught up with Revizto’s CEO to discuss the company’s origins, development path and latest release.

Autodesk Tandem: dashboards
Autodesk has added new dashboarding capabilities to Tandem for obtaining all sorts of metrics from digital twins.

Review: Nvidia RTX 4000 SFF Ada Generation 
Nvidia’s new compact workstation GPU delivers in all types of AEC workflows, but there’s a premium to pay.

Review: AMD Radeon Pro W7900 & W7800
AMD’s new workstation GPUs can’t outpace the Nvidia RTX 6000 Ada, but they can compete on price / performance.

AMD Ryzen 7000 X3D for CAD, viz and simulation
AMD’s new Ryzen CPUs with 3D V-Cache deliver a performance boost in 3D games, but what do they offer AEC professionals?

Nvidia RTX 4000, 4500 & 5000 Ada Generation GPUs launch

New viz-focused workstation GPUs start at $1,250 to expand mid-range to high-end options

Nvidia has expanded its pro graphics line-up, adding three Nvidia RTX Ada Generation workstation GPUs to target visualisation, simulation, XR, AI and CAD workloads.

The Nvidia RTX 4000 Ada (20 GB), Nvidia RTX 4500 Ada (24 GB) and Nvidia RTX 5000 Ada (32 GB) fill the middle ground between the Nvidia RTX 4000 SFF Ada (20 GB) (read our review) and Nvidia RTX 6000 Ada (48 GB) (read our review) which launched earlier this year.

The Nvidia RTX 4000 Ada and Nvidia RTX 4000 SFF Ada have identical core specs – the same number of CUDA, Tensor and RT cores and 20 GB of GDDR6 memory. However, they have different form factors and max power consumption.

The Nvidia RTX 4000 Ada is a full height, single slot GPU designed for standard workstation towers. It draws up to 130W via a 6-pin power connector.

Meanwhile, the SFF version is a low profile, dual slot GPU specifically designed for Small Form Factor and ultra-compact workstations. It draws up to 70W, directly from the PCIe slot.

The Nvidia RTX 4000 Ada is available in September with an ESP of $1,250.

Moving up the range, the Nvidia RTX 4500 Ada (24 GB) is billed as the most balanced GPU for the majority of workloads. The dual slot card has a max power consumption of 210W and is available in October with an ESP of $2,250.

The Nvidia RTX 5000 Ada has a max power consumption of 250W and needs a 16-pin CEM5 PCIe connector. According to Bob Pette, VP professional visualization at Nvidia, it is for those that need performance that is closer to Nvidia’s 6000 class GPUs, but don’t necessarily need 48 GB of frame buffer memory. Pette adds that the dual slot card delivers a huge performance leap over the previous generation Ampere RTX A5000 and A5500.

Compared to the RTX A5000, Nvidia says the RTX 5000 Ada has 1.6 times the graphics performance, twice the rendering performance in commercial renderers and three times the rendering performance in the Omniverse RTX renderer, which takes advantage of Nvidia DLSS 3.

DLSS 3, short for Deep Learning Super Sampling, boosts performance by using the Tensor cores in Ada Generation GPUs to generate entirely new frames without having to process the graphics pipeline (read our Nvidia RTX 6000 Ada Generation review for more on this).

Meanwhile, for the datacentre, Nvidia has announced the Nvidia L40S Ada GPU, which is effectively a passively cooled version of the Nvidia RTX 6000 Ada but clocked a little higher.

Up to eight Nvidia L40S GPUs can be accommodated in an Nvidia OVX reference server. Compared to the Nvidia A40, Nvidia says the Nvidia L40S Ada is up to 2.4x faster in commercial renderers and up to 4x faster in the Omniverse renderer.

AMD Radeon Pro W7600 and W7500 workstation GPUs launch

AMD targets volume mid-range segment with new RDNA 3 pro graphics cards

AMD has launched the Radeon Pro W7600 and Radeon Pro W7500, a duo of ‘mid-range’ desktop pro workstation GPUs built on its RDNA 3 architecture.

The new graphics cards are designed to target ‘medium’ workloads for 3D CAD, visualisation, video editing, and digital content creation. They follow on from the ‘ultra-high-end’ AMD Radeon Pro W7800 and W7900 which launched earlier this year (read our review).

The Radeon Pro W7600 and W7500 are both full height, single slot GPUs, so are designed to fit in standard desktop tower workstations and not small form factors (SFFs) / ultra-compacts.


Read what AEC Magazine thinks

Both GPUs come with 8 GB of GDDR6 memory and four DisplayPort 2.1 connectors, the latest version of the digital display standard. According to AMD, this means the cards are future-proofed for next generation displays in terms of refresh rate, pixel resolution and colour bit-depth.

The Radeon Pro W7500 offers 12.2 TFLOPs of peak single precision performance and has a total board power of 70W, so it can operate on PCIe slot power alone. It costs $429.

The Radeon Pro W7600 offers 19.9 TFLOPs of peak single precision performance and has a total board power of 130W, so it needs a 6-pin power connector. It costs $599.

Both GPUs comprise multiple unified RDNA 3 compute units (28 on the W7500 and 32 on the W7600). Each compute unit has 64 dual issue stream processors, two AI accelerators and a second gen ray tracing (RT) accelerator. According to AMD, RDNA 3 offers up to 50% more raytracing performance per compute unit than the previous generation.
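Those peak single precision figures follow directly from the compute unit arithmetic: compute units × 64 stream processors × 2 (dual issue) × 2 FLOPs per fused multiply-add × clock speed. As a quick sanity check (the boost clocks below are inferred from the quoted TFLOPs, not taken from AMD’s announcement):

```python
# Back-of-envelope check of the quoted peak FP32 figures.
# Peak FLOPs per clock = CUs x 64 stream processors x 2 (dual issue)
#                        x 2 (a fused multiply-add counts as 2 FLOPs).
# The boost clocks below are inferred from the quoted TFLOPs figures,
# not official AMD specifications.
def peak_tflops(compute_units: int, boost_ghz: float) -> float:
    return compute_units * 64 * 2 * 2 * boost_ghz / 1000.0

print(f"W7500: {peak_tflops(28, 1.70):.1f} TFLOPs")  # ~12.2
print(f"W7600: {peak_tflops(32, 2.43):.1f} TFLOPs")  # ~19.9
```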

There is growing software compatibility for AMD RT accelerators. In addition to DirectX Raytracing (DXR) and Vulkan ray tracing, for which there is direct support, AMD’s open source toolset HIP is helping software developers automatically translate their existing Nvidia CUDA code bases.

In terms of the competition, AMD compares the AMD Radeon Pro W7600 to the similarly priced Nvidia RTX A2000 (12 GB) and the AMD Radeon Pro W7500 to the Nvidia T1000 (8 GB). The company claims better performance in the CAD applications Solidworks and PTC Creo and in the AEC-focused real time viz tool Twinmotion. Both Nvidia GPUs are available at similar price points, and both are compatible with standard towers as well as SFF / compact workstations.


What AEC Magazine thinks

The roll out of AMD’s new generation RDNA 3-based pro GPUs comes straight out of the workstation graphics playbook. Start at the high-end and then move down the range.

With the new Radeon Pro W7500 and W7600 AMD is looking to target a specific part of the workstation market – the mid-range $350 to $950 segment, which it describes as the largest piece of the pie.

In terms of their performance profiles and price, the new GPUs seem well positioned. However, it feels like AMD could be limiting their potential reach by not giving at least one of them a low-profile form factor. Small Form Factor and ultra-compact workstations, such as the HP Z2 Mini and Lenovo ThinkStation P3 Ultra, represent an increasingly big slice of the mainstream workstation market, not just on desktops but in racks for remote graphics deployments.

AMD could be choosing to focus more on getting these pro GPUs out in the market via specialist system builders, such as Armari and BOXX, who tend to only sell tower workstations.

Alternatively, perhaps AMD feels there is less need for a low-profile form factor pro GPU going forward. As Jimmy Holbert, director of Radeon Creator & workstation strategy at AMD, pointed out in the press briefing, AMD’s APUs (CPUs with integrated GPUs) are starting to have an impact in the entry level pro GPU market. With rumours of much beefier models coming next year, could AMD plan to extend the reach of its APUs into the mid-range?

In addition, by configuring both of the new GPUs with 8 GB of memory, it looks like AMD could be limiting where these cards can be used. While 8 GB is currently sufficient for most CAD and BIM workflows, many real time visualisation or GPU rendering tools can easily use more memory, especially at higher resolutions. And that’s without considering multi-tasking workflows, where an architect might model in Revit and render in the background with Lumion or Twinmotion.

Look out for a full review soon.

SketchUp 3D Warehouse gets visual search function

AI-powered image search helps architects and designers find 3D models ‘easier and faster’

Trimble SketchUp’s 3D Warehouse, the free 3D model library, has made it easier to find models through a new visually driven search function called Image Search.

Users can now take a photo of an object or drag and drop an existing image into the 3D Warehouse’s search bar, and AI will sift through millions of pre-built models to find matches. The new feature is designed to help architects and designers more easily specify new products for their designs.
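Trimble doesn’t say how Image Search works under the hood, but image-driven search of this kind is typically built on embedding similarity: encode the query photo and every model thumbnail into vectors, then rank by cosine similarity. A generic sketch follows; the random vectors stand in for a real image encoder, and nothing here reflects SketchUp’s actual pipeline.

```python
# Generic sketch of embedding-based image search; how visual search is
# commonly built, not a description of Trimble's actual pipeline.
import numpy as np

def rank_by_cosine(query: np.ndarray, library: np.ndarray) -> np.ndarray:
    """Return library row indices sorted from best to worst match."""
    q = query / np.linalg.norm(query)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    return np.argsort(lib @ q)[::-1]

# In practice these vectors would come from an image-embedding model
# (e.g. a CLIP-style encoder); random data stands in for it here.
rng = np.random.default_rng(0)
library_vectors = rng.normal(size=(10_000, 512)).astype(np.float32)  # thumbnails
query_vector = rng.normal(size=512).astype(np.float32)               # user photo

top10 = rank_by_cosine(query_vector, library_vectors)[:10]
print("Best matching model IDs:", top10)
```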

“3D Warehouse used to be entirely based on keyword search, where you had to type in exactly what you were looking for in order to generate the right match,” said Steve Guzman, product manager for 3D Warehouse.

“3D Warehouse Image Search eliminates that requirement, allowing users to overcome language barriers and incorrect search queries by matching images with models.

“Now, designers can more easily source 3D models or find alternatives for their clients who are often looking for very specific objects to incorporate into their designs.”

As well as user-generated models, Image Search results include real-world objects from building product manufacturers and parametrically configurable objects.

On top of Image Search, users can now easily search, filter, and download materials and texture swatches by simply typing the name of the material into the search bar and clicking the Materials tab.

Autodesk Tandem: dashboards

Autodesk recently showcased Tandem’s new dashboarding capabilities for obtaining all sorts of metrics from digital twins. This is the latest in a long line of updates which are coming thick and fast. Martyn Day reports

In 2015, Autodesk launched its cloud-based development API, Forge. Forge was a set of tools and services for building cloud-based software applications, with integrations to all the main Autodesk products.

For some unknown reason, Autodesk recently renamed Forge to the less impressive Autodesk Platform Services (APS), but the capabilities have remained unchanged.

The key aim is to expedite the development of new applications from these core Autodesk building blocks, such as DWG, viewers, file exchangers etc. If there was a poster child as to the benefits of ‘APS’, it is the digital twin solution Autodesk Tandem, launched in 2021, which is still being rapidly developed in front of our very eyes.

Autodesk Tandem

The initial release was very bare bones. It didn’t support IFC and was pretty much a place to ‘lightweight’ Revit BIM models and start renaming and filtering sets of building components for more complex capabilities yet to come.

Since then, Autodesk has continually added sizable chunks of workflow-centric capabilities every couple of months. Given so many models are in RVT format, Tandem could be the tool to bring digital twins to the masses.

Just looking at the last six months, it’s clear the Tandem team is on a mission. In January, Autodesk introduced the facility monitoring beta program, which enabled Tandem to display near real-time IoT data and to spot anomalies ahead of equipment failures. This is the most exciting technology for me, as it connects the digital twin database with the physical reality of operations.

In April, the new ‘Systems’ feature was released, which uses system tracing to identify and filter routed systems like MEP, which might be imported as just a bunch of individual components with no ‘intelligence’ associated. It also included new facility monitoring visualisations and heatmapping capabilities.

Now, this month comes dashboards, which you might think is very much at the test and consumption end of the digital twin process. However, Autodesk’s first dashboard offering is very specifically aimed at using dashboards to check the completeness of the twin data.

In the demo, dashboards worked off the facility template, which is a place where components get classified and tagged for tracking in Tandem. These can be assets, systems or subsystems, by discipline, and used for phases such as handover or commissioning.

Tim Kelly, senior product manager at Autodesk explained, “We want to be able to provide an experience where we ensure the delivery of both complete and accurate as built data.

“Dashboards are our way to allow customers to dig into specific datasets and review that comprehensively. I know that when the term dashboard is used, oftentimes people refer to a Power BI or Tableau experience where you’re curating all of the data. But we have worked to pre-build some framework around this experience.”



Dashboards are filtered views that can easily be custom-made to create specific packages of information relevant to a given period of time or delivery phase, specific disciplines, specific packages, or different stages.

Autodesk calls each of these defined elements a ‘card’ and each has chart display options like pie, donut, or starburst. These are stored in a library, so can be inserted into the display to create views with collections of dashboard cards.

The functionality so far developed on dashboards has come from conversations with Autodesk customers who are currently having to take data out into business intelligence applications or specific dashboarding software to create external workflows.

The first function of note is a filter bar which sits across the screen to provide a view of all of the comprehensive filtering across all parameters (sources, levels, Revit categories etc.). This panel reflects everything that is in the dynamic model viewer. Users have the ability to see connected information as components are selected, allowing interaction with the different metrics that appear on screen.


Table views act as a summary of parameter completeness. Parameters are broken down by classification and are applied to different components dynamically. The lower half of the screen displays the dashboard ‘card library’, which is where users build out and track ‘completeness’ of the twin data. This can be applied across different classification levels or on specific parameters which customers want to track. Data pops up when hovering the mouse over the display cards. Some let users drill down, offering additional information on a classification level or Revit type / level etc. and the display automatically updates.

While doing that, the viewer always reflects those selections made. It’s a very interactive experience. Tandem is still a highly visual tool for dynamic display of specific datasets.

Autodesk’s focus on bringing dashboarding into its product, with this specific style of experience, is aimed at providing interactive access to data but is specifically focused on the data collection during twin building. The next phase will be to expand this experience, perhaps with more preconfigured dashboards – facility monitoring, sustainability tracking and data validation are all viable options.

From the demo, the process flow worked thus: load a Tandem model, then use the filters to select the categories you are interested in checking. The display updates with the elements that fit the filter rules, and the dashboard ‘cards’ dynamically change based on what’s displayed / selected. These cards cover ‘percentage of name status completion’, an asset’s tagged status, and model number.

Each card shows what percentage of the elements have been classified as required. If you pick all the water heaters, you see total percentage unnamed etc. You can use the filters to isolate these objects and all the tables update with feedback on the selection. Here dashboards are a tool to help navigate through and identify the outstanding components that need classification. On big jobs I can see how beneficial these tools would be.
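Autodesk hasn’t published a schema for these cards, but the behaviour described (a metric, a chart style and a filter scope) maps naturally onto a small declarative definition. The sketch below is purely hypothetical; every field name is invented for illustration and none of it is Tandem’s actual API.

```python
# Purely hypothetical sketch of a dashboard 'card' as declarative config.
# Every field name is invented for illustration; this is not Tandem's API.
from dataclasses import dataclass, field

@dataclass
class DashboardCard:
    title: str
    metric: str        # e.g. a completeness percentage tracked by the card
    chart: str         # "pie", "donut" or "starburst"
    filters: dict = field(default_factory=dict)  # scope of the card

cards = [
    DashboardCard(
        title="Name status completion",
        metric="percent_named",
        chart="donut",
        filters={"category": "Water Heaters"},
    ),
    DashboardCard(
        title="Tagged assets",
        metric="percent_tagged",
        chart="pie",
        filters={"discipline": "mechanical", "level": "Level 03"},
    ),
]
```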

Conclusion

To give credit where it’s due, from the complete list of AEC software tools in development at Autodesk, I think Tandem is probably the one with the highest velocity. That might be because it is the newest and has the most to add. But I am impressed with the development team’s monthly reach-out to engage with customers (and non-customers) to either discuss digital twin issues or demonstrate upcoming or recently introduced functionality.

I think this is a template for all software development teams but, of course, this is way harder for managers of mature products with tens of thousands to millions of users.

A lot of the earlier functions were centred on the core data parsing and visualisation that is required in the creation of digital twin datasets. While dashboarding might be thought of as more of a way to display real assets status, Autodesk is again approaching this as a tool set to further identify and isolate models to easily rectify omitted data and complete the digital twin.

Making dashboards requires a lot of filtering and check box ticking, together with an inherent knowledge of the twin’s integrated systems and components. It’s the first Tandem demonstration which really made me think just how much data preparation work, filtering and tagging the digital twin process requires. And all this needs to be done before you get anything valuable out.

I couldn’t help but feel that a ChatGPT interface would go a long way to simplifying the derivation of dashboards and it’s something that Autodesk is currently looking into but there are legal issues.

As its capabilities continue to expand, Tandem’s challenge will be to keep the interface simple, hiding the complexity of the process, especially when many of those ultimately digesting the output of the system will not be digital twin experts.


Issues with digital twins

Those who understand and fully support BIM and VDC methodology are typically very supportive of digital twins – if the data has been created in the design process and refined through to construction, especially where COBie data has also been built up for all the serviceable elements of a building. However, there is a lot more to transitioning that data into a useful environment.

The three primary challenges hindering the uptake of digital twin technology are 1) the lack of clarity on why it is better vs traditional 2D Facilities Management (FM) tools, 2) the complexity in creating detailed source data and 3) the cost associated with implementation / maintenance.

While you may think having a ‘fresh’ detailed BIM model would make the process relatively seamless and painless, BIM design and construction data is not the type of metadata that digital twin databases need.

The act of creating a digital twin requires meticulous and detailed mapping of physical assets and their data to virtual components, demanding expertise. It requires significant time, and the maintenance of this needs to be ongoing for the life of the building / asset.

It’s something that needs significant planning, resourcing, and budgeting for. The kind of costs associated with current digital twin technology mean that smaller companies, or projects, in particular, may find it challenging to allocate budgets to create and maintain digital twins of their assets.

There is also the problem of the lack of standardised frameworks and BIM’s inherent interoperability issues, which multiply as layers of data may come from other industries (such as oil and gas) that use diverse software and hardware systems. This presents a challenge for creating and integrating them seamlessly into one database.

Autodesk here does have the advantage of Revit supporting architecture, structural and MEP. The UK did have a five-year project running with the Centre for Digital Built Britain (CDBB) at Cambridge University, but this completed its funding at the end of 2022 and has since closed its doors. However, the Gemini Papers explaining the benefits of connected twin technology are still available as a useful resource.

Security is a potential issue because linking digital twins to IoT sensors means sending data from physical assets to the digital twin data model over the public internet. Organisations remain wary of exposing critical data to this threat.

Furthermore, there is a widespread lack of awareness and understanding surrounding digital twins. Many decision-makers and industry leaders remain unfamiliar with the technology’s potential, perceiving it as an experimental venture rather than a tangible solution to real-world challenges. This lack of comprehension results in reluctance to invest in uncharted territory, slowing down adoption rates.


Why and how we are developing for digital twins?

Robert Bray, Vice President & General Manager, Autodesk Tandem

In the opening address of the recent Tandem update, Robert Bray, vice president & general manager of Autodesk Tandem, gave a great talk concerning the workflow of making digital twins, offering some insight into how Autodesk is approaching the software development.

“As we think about a digital twin, there are really two things we think about. The first is that a twin is a digital replica of that built asset. And the important part of the replica is context – understanding of the equipment, the assets, the spaces, the systems in that facility, and the interconnections between them.

“The other important aspect of the digital twin is, of course, the bi-directional connection between the physical and digital – connections to all of those operational systems that give the twin the operational and behavioural awareness necessary to simulate, predict and inform decisions based on real-world conditions.

“From a larger industry perspective, we really think about this, as how do we transform the asset lifecycle? That means thinking beyond what happens in design and construction, and structuring data that’s useful to an owner, beyond handover – a period in which it can be connected to existing solutions and used to build up a wealth of asset knowledge.

Why do we want to do this? Three things:

The first is to understand whether the facility performs as it was planned and designed. And if not, what can we do to tune that facility to achieve its objectives? Those may be sustainability criteria, or outcomes in terms of throughput, facility occupancy, whatever it might be.

The second is thinking about how we detect issues and then improve that facility based on data insights. Facility management as a practice needs more data to do the job more effectively, and that means not just more data, but data that informs those decisions, rather than data for data’s sake. We need to take that data from all those operational systems and turn it into information that leads to actionable insight.

And finally, the reason we do this is to leverage that knowledge to build better in the future. If we are ever going to build solutions that can do generative design of future facilities, we need to understand how facilities operate in the real world to really impact future design decisions.

When we started this, we really bought into a maturity model around digital twins, and the purpose for this is really to start to provide a prescriptive framework for how to approach digital twins. It starts with the descriptive part, which is really the ‘as built’: what are all the assets, spaces and systems in the facility, all the metadata about them and the connections. The informative twin starts to add that operational behavioural awareness, connecting the physical and digital systems in that facility. This might be maintenance management systems or maintenance history, IoT sensors, building management systems, other automation systems – that type of thing – to bring it to life and provide actionable insight.

Predictive twins then start to add ‘what if’ scenarios, looking at the maintenance history of a component and predicting when it should be replaced, based on the past history and failures of that component.

Comprehensive twins, starting to think about ‘what if’ simulation around what if I reconfigured this space, what’s the impact on occupant behaviour? If I upgrade the system, what’s the impact on carbon emissions and energy consumption? Autonomous twins, being the holy grail of the self-healing and self-tuning facility.

As we think about this, what’s important is that we define our data standards in those early stages. Well-defined Tandem models lead to downstream capabilities. If we don’t have those normalised data standards, it’s very hard to build machine logic that can lead to predictive, comprehensive, and autonomous site capabilities in the future. So really drilling in on these normalised data standards is very important.

As we think about Tandem digital twins, we think of Tandem in terms of two sets of workflows. Building workflows, which is really about how we harness all of the design and construction data, or the ‘as built’ data of that facility, to create that digital twin, complete with connections to those operational systems and data.

The second workflow is operations: how do we take the insight we’re gleaning from that operational data and use it to inform decisions? These two things work in concert with each other. A facility is an ever-evolving thing; they’re not static, they change [every] day. We need to recognise that twins change as well, in terms of adding new workflows and capabilities. Hence, the two are interconnected.

When we think about twin building, we really think about it as a three-step workflow: defining the data requirements and the outcomes through transparent collaboration. This is really about getting those data standards well-articulated and captured for not just a facility but a portfolio of facilities; being able to contribute all of that data from submittals, from other types of documentation, maybe from as-built documentation for the facility. And the piece we’ve talked about a lot, but haven’t shown before, is the idea of how we ensure complete, accurate ‘as built’ data. This is what our dashboard capability is built around, really verifying completeness and accuracy.

As we talk to customers, one of the big things we hear is 'we don't trust the data we have'. We need to provide capabilities that ensure the completeness and accuracy of digital twin data, so that it matches and reflects the 'as built' facility.

Our twin building capabilities are built on a lot of what we already have at Autodesk: our Autodesk Construction Cloud platform, our Autodesk Docs platform, design products like Revit, and standard formats like IFC. We have AutoCAD and Navisworks support in beta, and we are working towards Autodesk Build integration – we know that's important for capturing information about things as they're installed and commissioned on the job site. And of course, we support that universal tool of construction, Microsoft Excel, which is often used to capture data through the construction process.
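Since Excel is called out as the universal capture tool, a minimal sketch of pulling an asset register into a normalised form might look like the following. The workbook, sheet and column names are assumptions; real registers vary from project to project:

```python
import pandas as pd

# Hypothetical workbook and column names
df = pd.read_excel("asset_register.xlsx", sheet_name="Assets")

# Normalise the handful of columns the twin needs and drop rows
# that have no asset ID at all
assets = (
    df.rename(columns={"Asset Tag": "asset_id", "Installed": "install_date"})
      .dropna(subset=["asset_id"])
      .assign(install_date=lambda d: pd.to_datetime(d["install_date"]))
)
print(assets.head())
```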

As we move downstream into operations, we think we have come up with something a little bit different. The twin building experience is very much an AEC-style experience, but for downstream operations we need to provide facility managers and operators with an experience that is more tailored to their needs – the ability to monitor their facility through a dashboard.

We need to give them the ability to drill in and investigate anomalies – more of a curated experience than a free-flowing 3D experience – and then surface the information that helps them take proactive action based on the insights they're gaining.

AMD Ryzen 7000 X3D for CAD, viz and simulation https://aecmag.com/workstations/amd-ryzen-7000-x3d-for-cad-viz-and-simulation/ Tue, 25 Jul 2023 04:57:10 +0000 AMD's 3D V-Cache delivers a real performance boost in 3D games, but what does it offer AEC professionals?

AMD’s new Ryzen processors with 3D V-Cache deliver a real performance boost in 3D games, but what do they offer AEC professionals? Greg Corke explores

AMD’s 3D V-Cache technology, built into select AMD Ryzen 7000 Series desktop processors, has been a huge success in the gaming sector over the past couple of years.

3D V-Cache allows for more L3 cache to be placed on the CPU by stacking it vertically instead of horizontally. The more cache a CPU has, the greater the chance of fetching the data it needs from cache instead of from slower system memory (RAM).
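The effect can be quantified with the textbook average memory access time (AMAT) model; the latencies below are illustrative round numbers, not measured Ryzen figures:

\[
\mathrm{AMAT} = t_{\mathrm{hit}} + m \times t_{\mathrm{penalty}}
\]

With an L3 hit time of 10 ns and a 70 ns penalty for going out to RAM, halving the miss rate \(m\) from 20% to 10% cuts the average access time from \(10 + 0.2 \times 70 = 24\) ns to \(10 + 0.1 \times 70 = 17\) ns. That near-30% saving is the headroom that lets a cache-heavy chip absorb a clock-speed deficit in memory-bound workloads.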

In some CPU-limited games, 3D V-Cache can lead to significantly better 3D performance. AMD has so much confidence in the technology it has even branded its 3D V-Cache Ryzen models ‘gaming processors’.

But are there any benefits for architects, engineers, or product designers, beyond those looking to get the edge in after-work sessions of Red Dead Redemption 2?

Before we get into answering that question, let's have a look at AMD's 3D V-Cache Ryzen offerings.

AMD Ryzen 7000 X3D Series

AMD offers two different types of processors in its AMD Ryzen 7000 Series — those with 3D V-Cache (denoted by the X3D suffix) and those without (denoted by an X suffix or no suffix). The X3D models are slightly more expensive, but not by much.

There are three chips with 3D V-Cache — the 16-core Ryzen 9 7950X3D, 12-core Ryzen 9 7900X3D and 8-core Ryzen 7 7800X3D. All have 64 MB more L3 cache than their standard desktop counterparts but run at lower frequencies because they are harder to cool.

The 3D V-Cache processors differ largely by their number of cores, but this only tells part of the story. The Ryzen 9 7950X3D and Ryzen 9 7900X3D are each made up of two different core complex die (CCD) ‘chiplets’ — one with 3D V-Cache and one without. In the Ryzen 9 7950X3D both chiplets have eight cores; in the Ryzen 9 7900X3D both have six.

The 3D V-Cache chiplets run at a lower clock speed than the non-3D V-Cache chiplets. This architectural framework is important to understand, as it can impact performance if software doesn’t run on the best cores for the job (more on this later).

The Ryzen 7 7800X3D is different in that it only has a single 8-core 3D V-Cache CCD chiplet, so all cores run at a lower clock speed.



3D V-Cache on test

To explore which pro design workflows might benefit from 3D V-Cache, specialist UK manufacturer Armari lent us one of its high-performance Magnetar workstations, equipped with a single AMD Ryzen 9 7950X3D processor, 64 GB of DDR5 memory and an AMD Radeon Pro W7900 high-end workstation GPU. The full spec and mini review can be seen at the bottom of this article.

For testing we disabled each CCD in turn, using the AMD Ryzen Master software. This method isn't perfect: for a real-world comparison we should arguably use the same machine, first fitted with an AMD Ryzen 9 7950X3D processor and then with its non-3D V-Cache counterpart, the AMD Ryzen 9 7950X. However, isolating each CCD in turn brings real clarity to the potential benefits of AMD's 3D V-Cache technology.

In order for 3D V-Cache to deliver a performance boost, any benefit of having fast access to a larger pool of frequently used data must outweigh the drop in frequency. And this drop can be quite big.

In Cinebench, for example, clock speeds in the single-threaded test were around 0.3 GHz to 0.4 GHz lower (reaching 5.35 GHz to 5.43 GHz on the standard CCD and 5.05 GHz on the 3D V-Cache CCD). There was a similar difference in the multi-threaded test (5.15 GHz on the standard CCD and 4.83 GHz on the 3D V-Cache CCD).

It was no surprise that the 3D V-Cache CCD was significantly slower in the SPECapc for Solidworks 2022 benchmark. Like most CAD software, Solidworks is largely single threaded, very dependent on processor frequency, and is not that sensitive to the speed at which data can be fed into the CPU. The same can be said for the majority of tests within the Invmark for Inventor benchmark, although the 3D V-Cache CCD did show a small lead in Dynamic Simulation.

The 3D V-Cache CCD showed no benefit in the ray trace rendering benchmarks, V-Ray and Cinebench, nor in the point cloud processing software, Leica Cyclone Register 360.

But there were some workflows where the 3D V-Cache CCD showed a significant advantage. When recompiling shaders in Unreal Engine it finished 14% faster. It also enjoyed a 16% lead in the Computational Fluid Dynamics (CFD) benchmark, WPCcfd (SPECworkstation 3.1), which simulates combustion and turbulence.

These results weren’t totally unexpected, as in our recent Intel Xeon ‘Sapphire Rapids’ vs AMD Ryzen Threadripper Pro article, both tests were shown to be sensitive to memory bandwidth.

There was no benefit in Rodinia, a CFD benchmark that represents compressible flow. However, as our ‘Sapphire Rapids’ memory bandwidth tests showed, it is not as memory intensive as other CFD benchmarks. The same is probably true of the Calculix Finite Element Analysis (FEA) benchmark.

We also tested the impact of 3D V-Cache on 3D graphics performance. In Solidworks 2022, the 3D V-Cache CCD delivered notably lower benchmark scores. This was not particularly surprising, however, as 3D graphics performance in most CAD applications is heavily influenced by CPU frequency, and the 3D V-Cache CCD runs about 0.3 GHz slower.

In Enscape and Unreal Engine, graphics-intensive applications that are largely bottlenecked by the GPU, the gap was much smaller, with the standard CCD edging out the 3D V-Cache CCD at 4K resolution. Dialling down to FHD, which lightens the load on the GPU and can therefore elevate the role of the CPU, gave the 3D V-Cache CCD a slight lead in Enscape. But we're talking about very fine margins here. Of course, other applications / datasets may yield different results.



Tuning the CPU

By default, all professional applications will prioritise the non-3D V-Cache cores. This is good for single threaded or lightly threaded CAD software, as our tests show that frequency is far more important for performance in these types of applications than having more cache.

Also, as the only workflows we found to benefit from 3D V-Cache are highly multi-threaded and make use of all of the CPU cores, assigning the right workflows to the right cores is not such a concern.

If you do find workflows that benefit from 3D V-Cache and run on fewer cores (and, of course if you are a gamer), then there are ways to manage which applications use the 3D V-Cache cores and which do not.

The easiest way to control this is through the Windows Game Bar using the ‘remember this is a game’ setting. Alternatively, use Process Lasso, a process automation / optimisation tool that allows processes to be permanently or temporarily assigned to specific cores.

Some applications are easier to configure than others, however. When apps spawn separate executables for different compute-intensive processes, each executable needs to be identified and configured independently. Solidworks, where CAD, visualisation and simulation run as separate executables, is a case in point.
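Beyond Game Bar and Process Lasso, affinity can also be scripted. Below is a minimal sketch using the cross-platform psutil library. It assumes that, with SMT enabled, the 3D V-Cache CCD maps to logical CPUs 0-15 and the frequency-optimised CCD to 16-31 – verify the mapping on your own machine first – and the solver executable name is hypothetical:

```python
import psutil

# Assumed logical-CPU layout for a Ryzen 9 7950X3D with SMT enabled:
# CCD0 (3D V-Cache) -> logical CPUs 0-15, CCD1 (higher clocks) -> 16-31
VCACHE_CPUS = list(range(0, 16))
FREQUENCY_CPUS = list(range(16, 32))

def pin_to_cpus(process_name: str, cpus: list[int]) -> None:
    """Pin every running process with the given executable name
    to the chosen set of logical CPUs."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(cpus)  # restrict scheduling to these CPUs
            print(f"pinned PID {proc.pid} to CPUs {cpus}")

# Hypothetical example: keep a cache-hungry CFD solver on the V-Cache CCD
pin_to_cpus("cfd_solver.exe", VCACHE_CPUS)
```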

Power efficiency

Compared to standard AMD Ryzen 7000 Series processors and (in particular) 13th Gen Intel Core processors, AMD’s 3D V-Cache Ryzen processors are incredibly power efficient.

The top-end AMD Ryzen 9 7950X3D has a Thermal Design Power (TDP) of 120W and a peak power of 162W. This is significantly lower than the non-3D V-Cache AMD Ryzen 9 7950X (TDP of 170W and peak power of 230W) and Intel Core i9-13900K (125W TDP and a max turbo power of 253W).

But specs only tell part of the story. In real world multi-threaded tests, our Armari workstation with AMD Ryzen 9 7950X3D draws noticeably less power than the other mainstream processors. This was observed at the plug socket, when measuring power draw of the overall systems — considering CPU, motherboard, memory, storage, and fans.

For example, in Cinebench R23, rendering with 16 cores and 32 threads, the AMD Ryzen 9 7950X3D workstation draws 251W. This is a full 90W less than an AMD Ryzen 9 7950X-based Scan workstation (341W) and almost half that of an Intel Core i9-13900K Scan workstation (451W). To put this in perspective, comparing Ryzen 9 7950X3D to Ryzen 9 7950X, you use 26% less power for only 6% less performance.
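As a quick sanity check, the power side of that trade-off follows directly from the wall-socket measurements:

\[
\frac{341\,\mathrm{W} - 251\,\mathrm{W}}{341\,\mathrm{W}} = \frac{90}{341} \approx 26\%
\]

with the 6% figure coming from the corresponding difference in Cinebench R23 multi-core scores.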

Power consumption in the single threaded Cinebench test is much more equal. The AMD Ryzen 9 7950X-based Scan workstation, Intel Core i9-13900K Scan workstation, and AMD Ryzen 9 7950X3D Armari workstation drew 127W, 122W, and 129W respectively.

Conclusion

If you simply look at benchmark scores for mainstream CAD and viz workflows, it’s easy to dismiss the AMD Ryzen 7000 X3D Series out of hand.

And while most architects, designers and engineers will not see a performance gain from 3D V-Cache, there are some specific professional workflows where it shows real promise.

The most likely beneficiaries are those that use simulation tools, including CFD and FEA, in workflows where the processor is often left waiting for data. On paper, the AMD Ryzen 7950X3D looks most capable — the other models have fewer cores and the Ryzen 7 7800X3D misses out on single threaded performance by not having a higher GHz standard CCD.

Of course, there are far better processors for hardcore multi-threaded simulation. The AMD Threadripper Pro 5000 Series and Intel Xeon W-2400 and W-3400 Series offer more cores, more memory bandwidth, and more memory capacity. But these processors can be very expensive.

For those on a budget, the Ryzen 7950X3D could offer a cost-effective way to reduce solve times in simulation software compared to standard desktop processors. The word ‘could’ is important here, as all CFD and FEA solvers behave differently. Even distinct datasets within the same application can use workstation resources in specific ways, so in-house testing is essential.

Finally, power efficiency deserves one last mention. The AMD Ryzen 7950X3D uses significantly less power than most modern desktop processors. And 6% less rendering performance for 26% less power will seem like a good trade-off for some.


Armari Magnetar M16R7-AD1000G3

Armari has a long history of developing specialist workstations, so it’s no surprise to see the UK firm offering the AMD Ryzen 9 7950X3D processor as an alternative to the AMD Ryzen 9 7950X.

Both processors are available in its Magnetar M16R7-AD1000G3 workstation (as reviewed here), which was recently replaced by the Magnetar M16R7-AW1350G4. In the new machine you get the same core specifications, but a few chassis tweaks, a new PSU, and better cooling to handle dual monster gaming GPUs.


Our review machine has a serious all-in-one liquid cooling system for the CPU with a colossal side mounted 360mm radiator. One wonders if this is overengineered for the power efficient AMD Ryzen 9 7950X3D, but you can’t argue with the impressive acoustics under heavy loads.

As we’ve found, the AMD Ryzen 9 7950X3D does well in data-intensive workloads, so you want to ensure the CPU can be fed as quickly from memory and storage as it is from cache. With 64 GB of dual-channel 6,000MHz memory pushing 52 GB/sec in the SiSoft Sandra memory bandwidth benchmark, and a 4 TB NVMe RAID 0 array delivering 13,970 MB/sec read and 12,709 MB/sec write in the CrystalDiskMark benchmark, it doesn’t disappoint for a system of this type. WiFi 6E is built into the Asus ProArt X670E-Creator WiFi motherboard, and you also get 10 Gb Ethernet for the fastest data transfer rates.

For graphics, there’s a single Radeon Pro W7900, AMD’s brand-new high-end workstation GPU with a whopping 48 GB of memory.

As we found in our in-depth review, it doesn’t hit the heights of the Nvidia RTX 6000 Ada Generation, but then it’s half the price. It’s a great GPU for those working with colossal datasets in specific viz workflows, although arguably the AMD Ryzen 9 7950X would be a better fit here.

If you play to the strengths of the Ryzen 9 7950X3D and intend to use the machine for engineering simulation, then downgrade the GPU, ramp up the RAM to 128 GB and set your CFD solver to work.


Specifications

  • AMD Ryzen 9 7950X3D processor (16 cores, 4.2 GHz base, 5.7 GHz max boost)
  • 64 GB (2 x 32 GB) Corsair Vengeance DDR5-6000 C40 memory
  • AMD Radeon Pro W7900 (48 GB) pro GPU
  • Asus ProArt X670E-Creator WiFi motherboard
  • 4 TB AMD NVMe RAID 0 array with 2 x 2 TB Solidigm P44 Pro SSDs
  • Microsoft Windows 11 Pro
  • 3 Year RTB workstation warranty
  • £6,195 (Ex VAT)
  • + £150 for upgrade to M16R7-AW1350G4 workstation base

Revizto reflections https://aecmag.com/collaboration/revizto-reflections/ Tue, 25 Jul 2023 05:47:34 +0000 Martyn Day caught up with Revizto CEO, Arman Gukasyan, to discuss the company’s origins, development path and latest release

While attending Revizto’s London Field Day earlier this year, Martyn Day caught up with CEO, Arman Gukasyan, to discuss the company’s origins, development path and latest release

In the world of digital coordination and project information dissemination, Revizto has been pushing the boundaries since 2011. Headquartered in Lausanne, Switzerland, the company says it has over 150,000 AEC users in 150 countries. After 12 years of development it’s still a private company, with a user base that’s growing rapidly and includes firms such as AECOM, BAM, Atkins, Grimshaw, BDP and Balfour Beatty.

The software has evolved from basic viewing and filtering to being an essential tech stack element. The Revizto cloud-based hub provides a highly performant single source of truth for 2D and 3D project data, together with issue tracking, VR, clash detection and now with a full-power iPhone and tablet client.


Martyn Day: Revizto has appeared in AEC Magazine for some time, but I don’t think we have ever covered the origins of the company and where its technology came from. How did Revizto start?

Arman Gukasyan: It all started following a stint working for Infomap. Part of my role was developing the business, and in meetings and discussions with C-level executives about city planning, I found out that not only were they getting different sets of data from different disciplines, but each city and county/region also had its own set of standards and guidelines to follow!

With buildings, infrastructure and cities becoming more and more complex, the data they were getting was insufficient for their needs. At the time it was all about CAD data, as BIM was only just being talked about.

I started to experiment to see which technologies could handle the heavy 3D data without distorting it, and create a lightweight, interactive version, which could be used for communication and collaboration.

It was pretty clear to me that gaming technologies were the way to go, as a number of games included expansive maps and models of cities.

With angel investment I started the business in 2008 and hired the first employee, who was a game developer. We wanted to disrupt the AEC industry with technology that supported project coordination, collaboration and project communication, and that was uncomplicated and scalable.

Our experiments started with 2D AutoCAD data and us creating 3ds Max models. At first, we provided a service for small to medium-sized construction projects, delivering an EXE file which created an interactive way to explore designs. This created a huge clash between owners and architects, as the architects didn’t recognise their designs, having never seen all their data brought into one place before! It gave them a whole new context on the project.


Martyn Day: Autodesk bought UK developer Navisworks in 2007. To some extent this was their BIM viewer for the masses. It sounds like rather than delivering an application you were more of a service at the time?

Arman Gukasyan: We delivered two main projects as a service business before deciding that this wasn’t where I wanted the company to be. [The first project was] creating an exact replica of the Olympic Village, both buildings and infrastructure for the Olympic Committee.

They used this model to train 4,500 volunteers six months before the Olympics took place, which hadn’t been done before. One of the Olympic sponsors, Coca-Cola, used the model to help decide the placement of its billboards by assessing viewing points.

Revizto CEO, Arman Gukasyan

The second project was a model for La Sagrera in Barcelona, the station for the high-speed line connecting Paris and Barcelona. Our model and simulations helped to identify that the platforms would be too narrow to manage rush-hour capacity, so changes were made to address this issue.

We developed Software as a Service in 2011, when Revit and SketchUp were the main authoring tools. We developed plugins so that data could be ‘sucked out’ of Revit, taken into Revizto and optimised, allowing users to interactively explore their designs in a basic way.

Seeing that this was not enough, we then created an issue tracking component to sit on top of it, based on Jira from Atlassian and adapted for the AEC industry.

I travelled between Switzerland and the US a fair bit in the early days, so the US is where I recruited our first dedicated sales representative. Then we came to the UK and EMEA, followed by APAC.

We launched Revizto at Autodesk University (AU) in 2012. We were a very different offering from what was on the market; Revizto’s focus is, and always will be, on ease of use.

After the initial successful launch, we started to develop the issue tracker more, bringing 2D and 3D together, because even today 2D is still a big part of the process when it comes to contractual documents.

In 2015 we developed an automatic overlay with the 3D [model], which proved very popular, and in turn we were trusted by larger organisations and worked on bigger projects.

Although it seemed like the main authoring tool being used by the industry for BIM was Revit, we realised that we had to remain platform agnostic and also support others like Bentley, Nemetschek and Trimble, so we developed integrations for all the platforms. Every market vertical (civil, rail, oil and gas, mining, architecture etc.) has different tools that it uses. You can bring your data into Revizto no matter how you create it!

In 2017/2018 we added support for point clouds, as firms increasingly started to check reality against the BIM data. We were the first to work out how to get point clouds onto phones and tablets. Our customers can load huge point clouds without any challenges: there is no streaming, as data is cached locally and automatically. When you are using a streaming-based solution, you are very dependent on your bandwidth, and on where and when you can open a project. In Revizto, once you open a project, it’s cached on your local machine or mobile device (phones and tablets).
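Revizto’s implementation isn’t public, but the open-then-cache behaviour described here boils down to a familiar pattern: download a project once, then serve every subsequent open from local storage. A generic sketch, with hypothetical names throughout:

```python
from pathlib import Path
import urllib.request

CACHE_DIR = Path.home() / ".project_cache"  # hypothetical cache location

def open_project(project_id: str, remote_url: str) -> bytes:
    """Return project data, downloading it only on the first open.

    Every later open reads straight from disk, so viewing no longer
    depends on bandwidth or connectivity."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / f"{project_id}.bin"
    if not cached.exists():
        with urllib.request.urlopen(remote_url) as resp:
            cached.write_bytes(resp.read())
    return cached.read_bytes()
```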



Martyn Day: In all Common Data Environments (CDEs) and model viewers, clash detection is always high on the end-user wish list. Last year you released a major update with a very good, mature clash detection capability. This caused some issues with Autodesk, which refused you a stand at Autodesk University. With Spacemaker and Navisworks, were you suddenly deemed perhaps too competitive?

Arman Gukasyan: We spent several years (2018 – 2021) developing clash detection whilst watching the market to see if solutions such as Navisworks or Solibri were going to be developed further in this direction. We saw no evidence of this and as our customers were asking for a clash tool, we released the capability within Revizto in 2021 to create an integrated collaboration platform.


Martyn Day: Autodesk really confused the Navisworks product, moving some functionality to the cloud and leaving some on the desktop, which meant your data needed to be in two places, depending on what functions you wanted to do. I’ve come across users who had been told that Navisworks was end of life.

Arman Gukasyan: Our philosophy here at Revizto is to listen to what our customers would like, really listen. This is done in a number of ways, with the main one being through our implementation team who have all come from industry. This team encourages our customers to show us their challenges and we aim to develop a solution.

Our clash solution came about from these conversations – a truly collaborative process. Revizto’s cloud architecture allows everyone involved in the project (no matter where they are based) to view which clash tests are being worked on; however, no one else can touch those clash issues until the person working on them finishes. Once finished, everyone can view them in real time.

Clash detection is not about creating millions of clashes and stuffing them into an issue tracker. It is about how you create a clash-free model and then feed only the real clashes into the issue tracker. With other solutions, it’s like opening a mailbox with 1,000 or 2,000 unread messages: you will never get to the bottom of it. With Revizto everything happens in real time.


Martyn Day: While clash detection is highly asked for, and highly valuable in protecting against millions of dollars of errors on site, the number of people who actually do clash detection seems to be a loud minority?

Arman Gukasyan: That may be so, but clash detection is part of the process in managing data. Revizto is an integrated collaborative platform where clash detection is a main part of the platform. There may be a loud minority, but it is a powerful minority.


Martyn Day: Moving on to the iPhone version, how much data can you hold on it? Can it be used on an iPad? Is there a memory limitation on your app, or a limit on the generation of iPhone that can open it?

Arman Gukasyan: It depends on the project you’re opening, of course, starting with the size of the model. We don’t have a specific limitation. We did a lot of testing on both iPhone and Android, and we can push the limits all the time with the iPhone.

Android is less powerful, due to the operating system taking a lot of the available RAM. Our brand-new mobile app isn’t streaming the data; it loads highly optimised data and can open models that other solutions and desktop apps can’t.

The most important thing the mobile app does is render only what you can see (occlusion culling). When the mobile app first opens, it asks you what data you want – only the cached data, old updates, or whether you want to see a particular pipe. It can load files which contain hundreds of models, thousands of sheets, even point clouds.
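Production engines implement occlusion culling with hierarchical visibility queries; as a much-simplified illustration of the ‘only render what you see’ idea, a view-cone visibility test can be sketched as follows (the scene data and thresholds are hypothetical, and true occlusion – objects hidden behind other objects – is left out):

```python
import math

# Hypothetical scene: (object_id, x, y, z)
scene = [("wall", 2.0, 0.0, 5.0), ("distant pipe", 0.5, 3.0, 40.0)]

def visible(obj, cam_pos, cam_dir, fov_deg=70.0, max_dist=30.0):
    """Keep objects inside the view cone and within draw distance.
    cam_dir must be a unit-length forward vector."""
    dx = obj[1] - cam_pos[0]
    dy = obj[2] - cam_pos[1]
    dz = obj[3] - cam_pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist == 0.0:
        return True          # camera is inside the object
    if dist > max_dist:
        return False         # beyond draw distance
    # Angle between the camera's forward vector and the object direction
    dot = (dx * cam_dir[0] + dy * cam_dir[1] + dz * cam_dir[2]) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= fov_deg / 2

to_draw = [o for o in scene if visible(o, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))]
print([o[0] for o in to_draw])  # ['wall'] - the distant pipe is culled
```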


Martyn Day: There are an increasing number of iPads / tablets on building sites, why the bias to phones?

Arman Gukasyan: Not everyone has a tablet, but nearly everyone has a phone. Again, we listened to our customers who wanted to be more productive by using Revizto on their phone, anytime anywhere.


Martyn Day: So what capabilities are missing from it?

Arman Gukasyan: You wouldn’t actually do clash detection on the phone. You would use the phone app to see all the clashes that have been highlighted in the issue tracker.


Martyn Day: With clash detection and now the mobile app rewritten, which areas are you looking to develop next?

Arman Gukasyan: We currently cover the design and build part of the process and have started to focus on the handover and FM (Facility management) stage. Other areas of interest include Digital Twins and Augmented Reality.

A final point from me is to reiterate that we always watch closely the problems our users have and bring a solution with our own twist. I believe we have made it easy to access your data and to foster collaboration between all project team members, and our main focus will be making that collaboration seamless across the project lifecycle.
