Rachel Berry's Virtually Visual blog

A CAD Bunny's Big Adventure Through Virtualisation... frolicking with GPUs along the way - by Rachel Berry

Review of Additive Manufacture and Generative Design for PLM/Design at Develop 3D Live 2018

Wed, 05/16/2018 - 13:54

A couple of months ago, back at D3DLive!, I had the pleasure of chairing the Additive Manufacturing (AM) track. This event, in my opinion, alongside a few others (e.g. Siggraph and COFES), is one of the key technology and futures events for the CAD/graphics ecosystem. It is also free, thanks in part to sponsorship from HP, Intel, AMD and Dell.

A few years ago the 3D-printing offerings at such events were interesting and quirky, but not really mainstream manufacturing or CAD. There were 3D-printing vendors and a few niche consultancies, but it certainly wasn't technology making keynotes or being mentioned by the CAD/design software giants. This year the second session of the day on the keynote stage (video here) featured a generative design demo from Bradley Rothenberg of nTopology.

With a full track dedicated to Additive Manufacture (AM) this year, including the large mainstream CAD software vendors such as Dassault, Siemens PLM and Autodesk, this technology really has hit the mainstream. The track was well attended; when polled, approximately half of the attendees were actually involved in implementing additive manufacture, and a significant proportion were using it in production.

There was, in general, significant overlap between many of the sessions; the technology has become so mainstream that, as with mainstream CAD, rather than new concepts we are seeing more of an emphasis on specific product implementations and GUIs.

The morning session was kicked off by Sophie Jones, General Manager of Added Scientific, a specialist consultancy with strong academic research links that investigates future technologies. This really was futures stuff rather than the mainstream, covering 3D-printing of tailored pharmaceuticals and healthcare electronics.

Kieron Salter from KWSP then talked through some of their customer case studies; as a specialist consultancy, they are often brought in to bridge gaps in customers' understanding. Their work in the motorsports sector stood out as cutting-edge, novel automotive design.

Jesse Blankenship from Frustum gave a nice overview of their products and their integration into Solid Edge, Siemens NX and Onshape, but he also showed the developer tools and GUIs that other CAD vendors and third parties can use to integrate generative design technologies. In the world of CAD components, Frustum looks well placed to become a key component vendor.

Andy Roberts from Desktop Metal gave a rather beautiful demonstration, walking through the generative design of a part and literally watching it iterate from a few constraints to an optimised part. This highlighted how different many of these parts can be from those produced by traditional techniques.

The afternoon's schedule started with a bonus session that hadn't made the printed schedule, from Johannes Mann of Volume Graphics. It was a very insightful overview of the challenges of fidelity-checking additively manufactured parts and of running simulations on them (including some from Airbus).

Bradley Rothenberg of nTopology reappeared to elaborate on his keynote demo, covering some of the quality-control and simulation issues for generative design that CAM/CAE have already solved for conventional manufacturing techniques.

Autodesk’s Andy Harris’ talk focused on how AM was enabling new genres of parts that simply aren’t feasible via other techniques. The complexity and quality of some of the resulting parts were impressive and often incredibly beautiful.

Dassault's session was given by a last-minute substitute speaker, David Reid; I hadn't seen David talk before and he's a great speaker. It was great to see a session led from the Simulia side of Dassault, showing how their AM technology integrates with their wider products. A case study on Airbus' choice and usage of Simulia was particularly interesting, as it covered how even the most safety-critical, traditional big manufacturers are taking AM seriously and successfully integrating it into their complex PLM and regulatory frameworks.

The final session of the day was probably my personal favourite: Louise Geekie from Croft AM gave a brilliant talk on metal AM, but what made it for me was her theme of understanding AM's limitations and when you shouldn't use it – basically, just because you can… should you? This covered long-term considerations around production volumes, compromises on material yield for surface quality, failure rates and the cost of post-production finishing. And just because a part has been designed by engineering optimisation doesn't mean an end user finds it aesthetically appealing – the case of a motorcycle manufacturer that wants the front fork to "look" solid.

Overall my key takeaways were:

·       Just because you can doesn't mean you should: choosing AM requires an understanding of the limitations and compromises, and an overall plan if volume manufacture is a consideration.

·       The big CAD players are involved, but there's still work to be done to harden the surrounding frameworks – in particular reliable simulation, search and fidelity testing.

·       How well the surrounding products and technologies handle the types of topologies and geometries generative design throws out will be interesting. In particular, it'll be interesting to watch how Siemens Synchronous Technology and direct modellers cope, and how part search engines such as Siemens Geolus fare too.

·       Generative manufacture is computationally heavy, so the quality of your CPU and GPU is worth thinking about.

Hardware OEMs and CPU/GPU Vendors taking CAD/PLM seriously

These new technologies are all hardware-hungry and computationally demanding compared to the modelling kernels of 20 years ago. AMD were showcasing and talking about all the pro-viz, rendering and cloud graphics technologies you'd expect, but it was pleasing to see their product and solution teams, and those from Dell, Intel, HP etc., talking about computationally intensive technologies that benefit from GPU and CPU horsepower, such as CAE/FEA and of course generative design. It's been noticeable in recent years how hardware OEMs and GPU vendors have increased their involvement in and support for end-user and ISV CAD/design events and forums such as COFES, the Siemens PLM Community and Dassault's Community of Experts; that should bode well for future hardware platform developments for CAD/design.

Afterthoughts

A few weeks ago Al Dean from Develop3D wrote an article (bordering on a rant) about how poorly positioned a lot of the information around generative design (topology optimisation) and its link to additive manufacture is. I think many reading it simply thought: yes!

After reading it, I came to the conclusion that many people think generative design and additive manufacture are inextricably linked. Whilst they can be used in conjunction, there are vast numbers of use cases where only one of the technologies is appropriate.

Generative design, to my mind, is computationally optimising a design against physical constraints – the mass of material, or physical forces (stress/strain) – possibly with additional constraints: must have a connector like this in this area, must be this long, or even must be constructed so it can be moulded (with appropriate tapers so it falls out of the mould).
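
As a toy illustration of what "computationally optimising a design against physical constraints" means, here is a minimal sketch in Python: it minimises the mass of a simple cantilever beam while keeping the bending stress under a limit, using an off-the-shelf constrained optimiser. The load, length and material numbers are made-up placeholders, and real generative design tools iterate over a full 3D topology rather than two parameters – this just shows the shape of the loop.

```python
# Toy "generative design": minimise mass subject to a stress constraint.
# All numbers are illustrative placeholders, not from any real part.
from scipy.optimize import minimize

LENGTH = 1.0       # m, fixed design constraint ("must be this long")
LOAD = 1000.0      # N, tip load the part must carry
YIELD = 250e6      # Pa, allowable stress for the material
DENSITY = 7850.0   # kg/m^3, steel

def mass(x):
    width, height = x
    return DENSITY * LENGTH * width * height

def stress_margin(x):
    # Max bending stress at the root of a cantilever:
    # sigma = M/Z = 6*F*L / (w*h^2); margin must stay >= 0.
    width, height = x
    sigma = 6.0 * LOAD * LENGTH / (width * height ** 2)
    return YIELD - sigma

result = minimize(
    mass,
    x0=[0.05, 0.05],                       # initial guess (m)
    bounds=[(0.005, 0.2), (0.005, 0.2)],   # manufacturable size range
    constraints=[{"type": "ineq", "fun": stress_margin}],
)
print("optimised section:", result.x, "mass:", mass(result.x))
```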

Additive manufacture is essentially 3D printing, often in metals: adding material, rather than the traditional machining mentality of CAD (Booleans, often described as target and tool) of removing material from a block of metal.
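
The contrast is easy to see in a toy voxel model: the subtractive mindset boolean-subtracts a "tool" shape from a solid block, while the additive one builds the part up as a union of deposited layers. Purely illustrative – the shapes and grid size below are arbitrary – but it captures target-and-tool versus material deposition.

```python
# Voxel toy: subtractive (block minus tool) vs additive (union of layers).
import numpy as np

N = 64
grid = np.indices((N, N, N))

# Subtractive: solid block boolean-minus a spherical "tool".
block = np.ones((N, N, N), dtype=bool)
sphere = ((grid - N // 2) ** 2).sum(axis=0) < (N // 4) ** 2
machined = block & ~sphere                 # target minus tool

# Additive: deposit a cylinder one z-layer at a time.
printed = np.zeros((N, N, N), dtype=bool)
disc = ((grid[0] - N // 2) ** 2 + (grid[1] - N // 2) ** 2) < (N // 8) ** 2
for z in range(N // 2):                    # union of deposited layers
    printed[:, :, z] |= disc[:, :, z]

print(machined.sum(), printed.sum())       # voxel volumes of each part
```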

My feeling is that generative design has far greater potential for reducing costs and optimising parts for traditional manufacturing techniques – machining (with its 3/5-axis, G-code-like considerations) and injection moulding – than has been highlighted, whilst AM as a prototyping workflow for those techniques is less mature than it could be, because the focus has been on the weird and wonderful organic parts you couldn't make before AM/3D printing.

AWS and NICE DCV – a happy marriage! … resulting in a free protocol on AWS

Thu, 05/03/2018 - 13:12

It's now two years since Amazon bought NICE, along with their DCV and EnginFrame products. NICE were very good at what they did: for a long time they were one of the few vendors who could offer a decent VDI solution that supported Linux VMs, and with a history in HPC and Linux they truly understood virtualisation and compute as well as graphics. They'd also developed their own remoting protocol, akin to Citrix's ICA/HDX, and it was one of the first to leverage GPUs for tasks like H.264 encode.

Because they did Linux VMs and neither Citrix nor VMware did, NICE were often a complementary partner rather than a competitor, although with both Citrix and VMware adding Linux support that has shifted a little. AWS promised to leave the NICE products alone and have been true to that. However, the fact that Amazon now owns one of the best and most experienced protocol teams around has always raised the possibility that they could do something a bit more interesting than most other clouds.

Just before Christmas 2017, without much fuss or publicity, Amazon announced that they'd throw NICE DCV in for free on AWS instances.

NICE DCV is a well-proven product with standalone customers, and for many users it offers an alternative to the Citrix/VMware offerings; which raises the question: why run VMware/Citrix on AWS if NICE will do?

There are also an awful lot of ISVs looking to offer cloud-based services and products, including many with high graphical demands. To run these applications well in the cloud you need a decent protocol. Some ISVs have developed their own, which tend to be fairly basic H.264; others have bought in technology from the likes of Colorado Code Craft or Teradici's standalone Cloud Access Software, based around the PCoIP protocol. Throwing in a free protocol removes the need to license a third party such as Teradici, which cuts the overall solution cost with no impact on the price AWS gets for an instance. This could be a significant driver for ISVs and end users to choose AWS over competitors.
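
For a sense of what a "fairly basic H.264" remoting pipeline looks like, here is a minimal sketch: grab the desktop, hardware-encode on the GPU, and stream the result to a single client. It assumes a Linux host with an ffmpeg build that includes x11grab and NVENC; the display name, bitrate and port are illustrative placeholders, and a real protocol adds an input channel, adaptive bitrate, audio and much more on top.

```python
# Minimal one-way "remoting" sketch: screen grab -> GPU H.264 -> TCP.
# Assumes ffmpeg with x11grab and NVENC support; values are placeholders.
import subprocess

cmd = [
    "ffmpeg",
    "-f", "x11grab", "-framerate", "30",
    "-video_size", "1920x1080", "-i", ":0.0",     # capture the desktop
    "-c:v", "h264_nvenc",                          # GPU H.264 encode
    "-b:v", "8M",                                  # illustrative bitrate
    "-f", "mpegts", "tcp://0.0.0.0:9999?listen",   # serve to one client
]
subprocess.run(cmd, check=True)
```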

Owning and controlling a protocol was a smart move on Amazon's part: the protocol is a key element of remoting and of the performance of a cloud solution, so it makes perfect sense to own one. Microsoft, and hence Azure, already have RDS/RDP under their control. Will we see moves from Google or Huawei in this area?

One niggle is that many users need not just a protocol but a broker. At the moment Teradici and many others do not offer one themselves, and users need to go to another third party such as Leostream for the functionality to spin up and manage the VMs; Leostream have made a nice little niche supporting a wide range of protocols. It turns out that AWS do offer a broker via the NICE EnginFrame technologies. It is, however, an additional paid-for component, though the single-vendor offering may well appeal. This was really hard to find out – I couldn't work it out from the AWS documentation and product overviews, and in the end I had to contact the AWS product managers for NICE directly to be certain.
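
To make "broker" concrete, here is a minimal sketch of the core job one does – handing each user a VM and spinning one up if none is free – written against the EC2 API via boto3. The AMI ID, instance type, region and tag scheme are hypothetical placeholders, and a real broker such as Leostream or EnginFrame layers authentication, pooling, power management and protocol hand-off on top of this.

```python
# Minimal broker sketch: assign a desktop VM per user via EC2.
# AMI, instance type, region and tags are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def assign_desktop(user: str) -> str:
    """Return the instance ID of a desktop VM assigned to `user`."""
    # Prefer an idle, already-running desktop from the pool.
    idle = ec2.describe_instances(Filters=[
        {"Name": "tag:role", "Values": ["desktop"]},
        {"Name": "tag:assigned-to", "Values": ["none"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    for reservation in idle["Reservations"]:
        for inst in reservation["Instances"]:
            ec2.create_tags(Resources=[inst["InstanceId"]],
                            Tags=[{"Key": "assigned-to", "Value": user}])
            return inst["InstanceId"]
    # Nothing free: spin up a new GPU instance for this user.
    new = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder desktop image
        InstanceType="g3.4xlarge",         # GPU instance for graphics
        MinCount=1, MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "desktop"},
                     {"Key": "assigned-to", "Value": user}],
        }],
    )
    return new["Instances"][0]["InstanceId"]
```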

Teradici do have a broker in development, the details of which they discussed with Jack on brianmadden.com.

So, today there is the option of a free protocol with a paid-for broker (NICE DCV plus EnginFrame, albeit tied to AWS), and soon there will be a paid protocol from Teradici with a broker thrown in; the Teradici protocol is already available on the AWS Marketplace.

This is just one example of many where cloud providers can take functionality in-house and boost their appeal by cutting out VDI, broker or protocol vendors. The niche protocol and broker vendors will need to offer value through platform independence and any-ness (the ability to choose AWS, Azure or Google Cloud) against the out-of-the-box, one-stop offerings of the cloud giants. Some will probably succeed, but a few may well be squeezed. It may indeed push some to widen their offerings, e.g. protocol vendors adding basic broker capabilities (as we are seeing with Teradici) or widening Linux support to match the strong NICE offering.

In particular, broker vendor Leostream may be pushed as other protocol vendors follow Teradici's lead. However, analysts such as Gabe Knuth have reported for many years on Leostream's ability to evolve and add value.

We've seen so many acquisitions in VDI/cloud where a good small company gets consumed by a giant and eventually fails – the successful product dropped and the technologies never adopted by the mainstream business. AWS seem to have achieved the opposite with NICE, continuing to invest in a successful team and product whilst leveraging exactly what they do best. What a nice change! It's also good to see a bit more innovation and competition in the protocol and broker space.