Industry news

How to Limit CPU & RAM via the Windows Boot Configuration

Helge Klein - Wed, 11/21/2018 - 00:40

Testing the effects of different CPU and memory configurations is easiest when you run the tests on a powerful machine and restrict it to the required number of CPU cores and amount of RAM. Microsoft’s documentation of the relevant command is missing an essential parameter. Here are the commands you need.

Limiting the CPU to N Cores

At an elevated command prompt, run:

bcdedit /set {current} numproc NUMBER_OF_CORES

Note: strangely, the numproc parameter is missing from the Microsoft documentation of bcdedit. However, it still works fine on Windows 10 1803.
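For example, to restrict Windows to four cores (the number four is purely illustrative):

bcdedit /set {current} numproc 4

As with other boot configuration changes, this only takes effect after a reboot.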

Limiting the RAM to N MB

At an elevated command prompt, run:

bcdedit /set {current} removememory MB_TO_REMOVE_FROM_INSTALLED_RAM

With: MB_TO_REMOVE_FROM_INSTALLED_RAM = INSTALLED_RAM - DESIRED_RAM

This is unnecessarily complicated. Instead of specifying the total RAM you want Windows to see, you specify how much of the installed RAM to remove (in MB).
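As a worked example (the RAM figures are illustrative): on a machine with 65536 MB (64 GB) installed, limiting Windows to 8192 MB (8 GB) means removing 65536 - 8192 = 57344 MB:

bcdedit /set {current} removememory 57344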

Removing a Bcdedit Setting

To remove a setting, run the following at an elevated command prompt:

bcdedit /deletevalue {current} SETTING_NAME

E.g.:

bcdedit /deletevalue {current} numproc
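To check which of these values are currently set, enumerate the boot entry:

bcdedit /enum {current}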

The post How to Limit CPU & RAM via the Windows Boot Configuration appeared first on Helge Klein.

New - Latest EPA Libraries

Netscaler Gateway downloads - Mon, 11/19/2018 - 18:30
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

PackMan in practice, part 2

The Iconbar - Fri, 11/16/2018 - 09:00
As mentioned at the end of part one, this article about creating PackMan packages is going to look at what's necessary to generate distribution index files and ROOL pointer files, and how these tasks can be automated. Towards the end I'll also be taking a look at some options for automating the uploading of the files to your website.

Index files and pointer files

Distribution index files

Distribution index files are the mechanism by which the PackMan app learns about packages - each 'distribution' (i.e. collection of package zip files hosted on a website) is accompanied by an index file. This index file contains information about each package; at first glance it looks like it's just a copy of the RiscPkg.Control file from within the package, but there are actually a handful of extra fields that are included within each 'control record':

  • Size - The size of the zip file
  • MD5Sum - The MD5 hash of the zip file
  • URL - The (relative) URL to the zip file
Additionally, by convention the distribution index file should only list the most recent version of each package. It's also common (but not necessary) for the package files to contain the package version as part of their name, e.g. SunEd_2.33-1.zip. This way, although the index only lists the most recent version, you can still host the previous versions on your site if you want or need to.
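As an illustration, a single control record in a distribution index might look something like this (the size and checksum values are invented for the example; the leading fields come straight from the package's RiscPkg.Control file):

Package: SunEd
Version: 2.33-1
Size: 123456
MD5Sum: 0123456789abcdef0123456789abcdef
URL: packages/SunEd_2.33-1.zip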

ROOL pointer files

For the community distribution that's managed by ROOL, package maintainers must provide a 'pointer file' containing a simple list of links/URLs to packages. ROOL's server will then regularly fetch this pointer file, fetch all the (changed) packages, and merge everything into the central distribution index file.
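For illustration, a pointer file really is just a plain list of absolute URLs, one per line (the hostname and package names here are made up):

https://www.example.com/packages/SunEd_2.33-1.zip
https://www.example.com/packages/MyApp_1.00-1.zip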

What we need is some kind of packaging tool

These rules for distributions, index files, and pointer files mean that the task of preparing package zip files for publishing is too complex to attempt with simple Obey file or BASIC scripting. Admittedly, if you were only interested in ROOL pointer files you could probably cobble something together without too much effort, but in most cases the use of a 'proper' tool is going to be required.

The good news is that to generate a distribution index file, a tool only needs two bits of information about each package: the URL the package is going to be hosted at, and a copy of the zip file itself. Everything else can be determined by examining the contents of the zip. This means that once you've got such a tool for generating index files, it should be very easy to plug it into your publishing pipeline.

And the even better news is that I've already written a tool - skopt, a.k.a. Some Kind Of Packaging Tool. Currently its feature set is quite small, but since it appears to be the only tool of its kind in the standard PackMan distributions, it's something that's very much needed if the ecosystem is to continue to grow. Additionally, it's built on top of the same LibPkg library that RiscPkg/PackMan use internally - helping to simplify the code (just a few hundred lines of C++) and ensure consistent behaviour with PackMan itself.

Functions

The current version of skopt has three modes of operation:

  • copy - Copy package file(s) from one location to another. However, unlike a simple *Copy command, this will rename the package zip to the 'canonical' form (i.e. SunEd_2.33-1.zip). It'll also raise an error if it tries to overwrite a package which has the same name but different content (which could indicate that you're trying to publish a new version of a package without increasing the version number).
  • gen-binary-index - Scans a list of folders containing (binary) package zip files and generates the corresponding distribution index file. It assumes that the filenames of the packages within the folders will match the names by which the files will be uploaded to your website. However, you can specify per-folder URL prefixes.
  • gen-pointer-file - Scans a list of folders containing binary packages and generates a ROOL pointer file. As with gen-binary-index, it's assumed that the names of the files will match the names used once uploaded, and URL prefixes are applied on a per-folder basis.
Uploading packages

With skopt handling generation of the index file(s) (and optionally package file naming), there's only one piece of the puzzle left: how can you upload the packages to your site?

Ignoring the obvious choice of doing it manually, there are at least a few RISC OS-friendly solutions for automation.

!FTPc

!FTPc can be driven programmatically by the use of Wimp messages, and comes with a handy BASIC library to simplify usage. So if you have FTP/FTPS access to your web site, only a few lines of BASIC are needed to upload files or folders.

scp, sftp

If you have SSH access to the web server, then scp (as included in the OpenSSH PackMan package) may be an easy solution for you.

The OpenSSH package also includes a copy of the sftp command, which is useful if your web site supports SFTP rather than the older FTP & FTPS protocols. sftp can easily be driven from scripts by the use of the -b option, which reads a series of commands from a text file.
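As a minimal sketch (the host, user and paths are placeholders, and the local filename syntax will differ on RISC OS), a batch file called, say, UploadCmds might contain:

put SunEd_2.33-1.zip htdocs/packages/
put pkgindex htdocs/packages/

which could then be run non-interactively with:

sftp -b UploadCmds user@example.com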

rsync

rsync (also available via PackMan) is another option for copying files via SSH connections (or via the bespoke rsync protocol).
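A typical invocation (host and paths are again placeholders) to mirror a local packages folder to the web server might be:

rsync -av packages/ user@example.com:htdocs/packages/

The -a flag preserves file attributes and recurses into directories, while -v reports what was transferred.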

Next time

Uploading binary packages to PackMan distributions is all well and good, but what about the source code? Or what if you want to have regular web pages through which people can also download your software? Stay tuned!

Categories: RISC OS

New - NetScaler Gateway (Maintenance Phase) Plug-ins and Clients for Build 12.0-59.9

Netscaler Gateway downloads - Thu, 11/15/2018 - 22:00
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

New - NetScaler Gateway (Maintenance Phase) 12.0 Build 59.9

Netscaler Gateway downloads - Thu, 11/15/2018 - 22:00
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

Troubleshooting Insights for Success Within your Epic Deployment

Theresa Miller - Thu, 11/15/2018 - 18:34

Healthcare organizations that deploy Epic to support the patients they see every day have a responsibility to ensure that Epic is always online and performing well. Take, for example, a hospital that uses Epic for all its patient services: to keep patients healthy and to ensure their safety, Epic needs to be online […]

The post Troubleshooting Insights for Success Within your Epic Deployment appeared first on 24x7ITConnection.

Using Procmon To Find Registry Settings

Theresa Miller - Thu, 11/15/2018 - 06:30

Process Monitor (a.k.a. Procmon) is a free Microsoft utility that is part of the Sysinternals Suite, created by the famous Mark Russinovich. The suite contains a large number of incredibly useful tools for Microsoft IT Pros and Developers, but can be overwhelming at first glance. Procmon is a great one to start with, […]

The post Using Procmon To Find Registry Settings appeared first on 24x7ITConnection.

GPS becomes Data Logger

The Iconbar - Fri, 11/09/2018 - 06:21
At the recent London Show Chris Hall was showing his new Data Logger. Here he gives some insight into the new hardware.

Version 2.40 of my SatNav software and the compact hardware unit with just an OLED display meant I could stop trying to fix things that were still unfinished. I had full battery management, conditional data logging, robust and error-tolerant data downloading on demand and power management that avoided any SD card corruption.

Where next?

 
There were things still unfinished: I wanted the unit to be able to use WiFi to transmit data instead of making do with manual downloads to a USB pen drive; I would have liked to remove the code which drives a liquid ink display (Papirus) into a more general purpose module, where it should be, but had never tried writing one. RiscBASIC could compile an application to a module but only at 26 bit. Both these aspirations were therefore not immediate.

It was not long before an idea came to me: as usual it hit me at 0400. Putting detail aside, the unit’s fundamental purpose was to monitor incoming data (GPS data) and selectively log the data along with time and date of receipt. The data could then be downloaded on demand to a USB pen drive in an appropriate format. My university training had strongly counselled me never to press something designed for a specific task into service for a quite different purpose. At least not without careful thought.

The front panel showing the push buttons and the USB socket, as well as the 12-way screw terminal block, which can easily be unplugged.

The idea I got was that I could build a data monitor which would examine a number of input lines and log any state change. The inspiration for this came from my volunteer work on the Severn Valley Railway: we had experienced a few faults on S&T equipment that had proved very difficult to fault find. They were intermittent and on complex equipment where faults would be difficult for the signalman to report in a way that could help us diagnose the problem. Accurate details of exactly what had occurred would help enormously.

Removing the GPS module (which used the serial port to communicate), adding a CJE real time clock module and using the GPIO pins to monitor incoming data would mean that very little in the SatNav programme would need to be altered to fulfil this new purpose.

Data Logger

The main purpose of the data logger is to monitor signalling circuits so that a precise sequence of relay operations can be logged. Eight inputs are scanned 1500 times a second and any changes of state are logged. The GPS module has gone, replaced by a simple real time clock card. Otherwise the software is very similar, with changes of state of any one of the eight inputs being the parameter that is being monitored rather than GPS location. The unit is designed to operate for weeks at a time, with regular visits to download data to a USB pen drive simply by pressing the 'ON & INFO' button. Making the device fault tolerant was quite tricky as there is no desktop display to show errors. (A desktop display is being created but no monitor is plugged in.)

The GPIO inputs themselves on the Raspberry Pi used 3.3V logic and had software-controlled pull-up or pull-down resistors. The circuits being monitored used 12V, 24V or 50V and comprised a number of relay contacts in series with a relay coil. Some relays had spare contacts (making them easy to monitor) but some did not, particularly those monitoring track circuits.

The ‘simple’ DC track circuit exploits the hysteresis of the relay so that it falls away at a known voltage and requires a higher voltage to pick up again when the train has gone. Monitoring the single contact provided is the only option, which means finding a way of breaking into the signalling circuits without disturbing them.

The signalling circuit shown is a 24V circuit with the fused positive supply (B12fx) on the left and the negative return (N12) on the right. Inputs A to D are ‘active low’ and connected to GPIO pins GP5-GP8 which have, by default, pull-up resistors on chip. Inputs E to H are ‘active high’ and are connected to GPIO pins 17-19 and 23 with pull-down resistors.

That also means being tolerant of relay back EMFs as the relay coil circuit is broken. How resilient the unit will be in service is still unproven.

My first stab at protecting the GPIO 3.3V logic from the 24V signalling supplies and the hundreds of volts that might appear as relay coil circuits are broken. Each input is inactive when open circuit and can be activated in the voltage ranges noted. This gives a choice of +1 to -30V (which includes 0V); -6 to -30V (with a series resistor and excludes 0V) or +6 to +30V. Voltages in the range specified will make the input ‘active’.

Monitoring of the inputs has a ‘test’ mode built in so that holding down certain keys on the keyboard will toggle the measured status of the particular input.

Construction

The case itself is made from polycarbonate sheet, cut to several different widths by the supplier, which I then cut to length with a hacksaw.

The case with printed labels stuck onto the protective film after cutting the polycarbonate sheets to size.

Most of the breakout circuit board is taken up with the various components needed to protect the GPIO inputs from harmful external voltages but otherwise it is a very similar layout to that used for the GPS unit. Circuitry to control power switching is identical.

Internal wiring. Signal diodes, 1N914, resistors and capacitors should protect the 3.3V level logic from +24V and relay coil back EMFs. Inputs will sink no more than 200µA to avoid disturbing the signalling circuits being monitored.

A side view shows the plug-in screw terminal block which can be connected to the circuits to be monitored using up to 2.5mm² cable.

Eight inputs plus three voltage-measuring inputs (approx. 0 to 30V with a resolution of 33mV). The terminal block can be unplugged.

All I have to do now is to check for any dry soldered joints!

Testing

Power consumption is about 150mA but the unit is designed to operate with an external mains power supply. The internal battery allows it to continue monitoring the signalling circuits for 17 hours after external power is lost. The signalling circuits themselves are also battery backed.

The unit has an ADC board with three spare voltage inputs, allowing any positive voltage to be recorded at any state change.

The OLED display shows the time of day and the status of the eight inputs and the three voltages (plus the supply voltage, which is the battery voltage unless it is being charged by a healthy external supply).

The OLED display shows the time (in GMT) updating once per second on the first line, and the exact time of the last log entry on the second line, with the status of each input as logged. On the line below, an ‘X’ appears under each of the letters A to H if that input is active. The line showing ‘Man Poff’ indicates that the unit was switched on manually with no external power and that this was the last change of state (an ‘A’ would mean input A went active, an ‘a’ inactive). The four voltages are those measured at that time; 3.807V implies 54% battery life. The other voltage inputs are not connected.

An analysis of the sampling shows that the sample interval is about 650µs (pink dots) while ‘lost time’ (for example to update the OLED display - 8cs - or to measure voltages - 5cs) is generally less than 12cs.

The battery should last about 18 hours (at 175mA) and the voltage reading can be used to indicate the battery life remaining, as above.

Routine observations

The OLED display shows the current time and date in Greenwich Mean Time as well as the last occasion (time only, but shown to the nearest hundredth of a second) when one of the eight inputs being monitored changed state. The status of each input (A to H) is also shown.

No inputs are connected so the unit shows the initial condition: manually turned on with no external power supply.

Logging

The unit keeps a log of the various inputs monitored (making a new entry at each state change, or every 30 minutes), which can be downloaded on demand by inserting a USB pen drive in the USB socket and pressing the ‘ON & INFO’ push button.

Download request at &573EC486
Opening file for day 60 as SCSI::0.$.Lg28-09-2018/csv
Closing file for day 60 length=50173
Writing valid lines from DataLog/csv (length=C457) to
.RL28Sep2018/csv
Closing copy file at length=50263
Reopening copy file from 57 to C457
Logfile now truncated to length=C400
When a log is downloaded, the daily log files are written to the pen drive along with a file (‘LogFile.csv’, shown above) which shows detailed information to allow any errors to be followed up.

Pressing the ‘INFO’ button will bring up this screen and show the progress of the download as shown. The ‘ON’ button is used to ensure the messages don’t disappear.

The ‘ON’ button is used to make sure that the messages remain on screen for long enough for the user to read them.

As each day’s log details are downloaded, a progress report is shown.

If the process completes with no errors, this screen is shown.

Releasing the ‘ON’ button confirms you have read the message.

Logging format

The logging of the inputs uses a ‘CSV’ file, the syntax of which includes the time of each entry and the various parameters being monitored. The logging now looks like the example below.

28-09-2018,11:29:34.71,&573E97E7FF,0,0,0,0,0,0,0,0,4593mV,04.98V, .00V, .00V,Man POn ,0
28-09-2018,11:29:49.57,&573E97EDCD,0,0,1,0,0,0,0,0,4587mV,04.98V, .00V, .00V,C,21675
28-09-2018,11:29:49.84,&573E97EDE8,0,0,0,0,0,0,0,0,4596mV,04.98V, .00V, .00V,c,238
28-09-2018,11:30:52.50,&573E980662,0,0,1,0,0,0,0,0,4593mV,04.98V, .00V, .00V,C,92043
28-09-2018,11:30:52.71,&573E980677,0,0,0,0,0,0,0,0,4596mV,04.98V, .00V, .00V,c,0
28-09-2018,11:31:29.20,&573E9814B8,0,0,1,0,0,0,0,0,4596mV, .00V,05.04V, .00V,C,53452
28-09-2018,11:31:31.42,&573E981596,0,0,0,0,0,0,0,0,4596mV, .00V,05.04V, .00V,c,3104
28-09-2018,11:33:46.94,&573E984A86,0,0,0,0,0,0,0,0,0000mV, 0.00V, 0.00V, 0.00V,Quit,0
A download request will copy a log file for each of the last 60 days for which data are available onto a USB pen drive. It will archive previous days’ log entries as a record of what was extracted. Subsequent downloads will thus still capture full data for the current date. The example above shows log entries created by touching input ‘C’ with a wire connected to 0V with 5V applied to K or L. Times are in GMT.

Performing a log download

Place a USB pen drive in the USB socket, wait a few seconds for the presence of the drive to be recognised by the operating system and then press the ‘ON & INFO’ button. Log entries up to and including the current day will be downloaded, the data for the last 60 days being placed in separate files on the USB stick. Progress of the download will be shown on the OLED display and once reported complete the USB pen drive should be removed and the ‘ON & INFO’ button used to confirm this has been done, as prompted.

If an error message is shown on the OLED display (such as ‘disc drive is empty’) this can be recorded. The date and time at which the download was carried out and the date and time shown on the OLED display should also be recorded (so that any clock drift can be measured).

A continuous log is stored on the internal 16GB SD card and historical data are archived on completion of any download (in case the pen drive is mislaid) with just the data for the current date retained in the accessible log.

Analysis of the data downloaded

Analysis of the data downloaded will be done in an Excel spreadsheet which will allow any abnormal operation to be examined in detail.

Logging

Daily logs are created when a download request is made, with dates, times and circuit activity recorded every 30min or at any change of state.

Power consumption

Although it cannot be seen with only an OLED display, a full desktop is being generated. With no HDMI display connected, power consumption drops from 550mA (excluding the display itself) to about 170mA so it is quite efficient.

Powerboost board

The ‘power booster’ board allows an internal 3.3V Lithium-Polymer battery to produce a 5.2V output and will use an external 5V power source to take over this rôle and charge the internal battery until fully charged. Switching on and off is controlled by an ‘ENABLE’ input.

A blue LED lights if power is being supplied to the computer. With the booster board output disabled, only a minimal current is drawn from the internal battery. A red LED lights if the internal battery becomes discharged below 3.2V (a diode is fitted to disable the output automatically). Fully discharging the internal battery is likely to damage it.

While the internal battery is being charged a yellow LED lights, turning green when it is fully charged. A small current drain to light the green LED to show a full charge seems enough to keep some power banks happy even whilst the unit is otherwise powered down.

This means the external source can be connected and disconnected without affecting the operation of the device except to extend battery life.

Analogue to digital conversion

The ADC board is the 12 bit ADS 1015, by Adafruit, and it takes about 5cs to check all four voltages. I did try a 16 bit version but it took quite a bit longer to do the same sampling. I had also included the voltage sampling in the same block of code as that examining the eight GPIO pins for any state change. This meant that the unit was sampling at about 10Hz.

I realised that state changes of the GPIO pins needed to be monitored more frequently and voltages only really needed to be recorded at any state change. This improved the data sampling rate to 20Hz.

The next inspiration was that the OLED display was being updated each time there was a state change.

The OLED module keeps a sprite in memory which is updated line by line as text is written to the display drivers. Every so often, the display is updated via the IIC bus, which takes about 12cs.

There seemed no point in updating the OLED display more quickly than the eye could see so I held back updates if the display had been updated in the last 2s. The sampling rate rose to 1500Hz.

The Organic LED Display

The OLED display shows the active/inactive status of the eight inputs under the headings A to H, with an ‘X’ appearing beneath a heading if the circuit concerned is active, updating every 30 mins or whenever any of the inputs changes state. The three voltages being monitored are also displayed, updating at the same time. The time and date are also shown, in GMT.

This continuous display allows the set up of the unit to be checked easily and the last recorded state change is displayed along with the precise time (to the nearest hundredth of a second) that it occurred.

Loss of power

Whilst the unit should continue operating in the event of a mains failure, it will shut down if its internal battery becomes exhausted and would, in that event, need to be restarted manually. Before doing so, the fact that mains power has been restored should be confirmed (a yellow or green LED indication on the powerboost board confirms that the internal battery is being charged).

The unit takes about 11s from power on to be functional and the real time clock is able to keep time whilst the unit is switched off by having its own 3V 70mAh coin cell. The unit will continue logging for about 17 hours after power has been removed after which it will shut down.

The ‘OFF’ button allows the unit to be shut down manually. The status of the ‘ON’ and ‘OFF’ buttons is held internally so that it can be determined which button was the last to be pressed.

Battery endurance

The internal rechargeable battery is monitored for state of charge each time any of the eight inputs changes state: it will show either ‘xx%’ (while power is off) or ‘chgng’ (whilst being charged). Below 3.2V it is discharged and the power boost board will force power off. The battery should last for 500 charge/discharge cycles.

If a keyboard, mouse and monitor are connected, then a multi-tasking graphical user interface will be displayed. Power consumption rises from 1W to about 3W with an HDMI display connected (excluding the power that the display itself requires). Direct WiFi access is not yet supported by RISC OS.

Power on/off

The ‘OFF’ button does not switch the unit off directly. Power is removed under software control so that corruption of the SD card, which could occur if power was removed whilst the log was being updated, is avoided.

Pressing the ‘OFF’ button will warn the user before removing power, as shown.

Provided that the unit has been operating for at least six seconds (enough time for the RISC OS desktop to start), the ‘off’ button will pull GPIO 26 low but do nothing else. The software will notice this, complete any logging and then do a system shutdown using the command SYS "TaskManager_Shutdown",162 which will shut down all applications tidily and restart RISC OS. The effect of this is to update the CMOS ‘last time on’ setting and restart the ROM. As the ROM reinitialises, GPIO 4 becomes high impedance, thus removing power.

Software update

A specially prepared USB pen drive may be used to update the software automatically or to extract archived data, but this is not a routine operation.

Silent errors

With a monitor using the HDMI output to display a desktop, any error that might be generated can be recorded, with its line number, and investigated. It is very frustrating to see the software ‘freeze’ and realise that an error message is being displayed but cannot be seen. If you simply plug in an HDMI display at that point, the computer will not send data to it as the HDMI system is not powered up. I therefore added an error display to the OLED drivers.

The finished unit clamps to a steel surface:

The finished unit.

Categories: RISC OS

Supercomputing 2018, a conference for enterprise architects?

Theresa Miller - Tue, 11/06/2018 - 08:01

I’m in Barcelona for VMworld as the 2018 tech conference winds down, but I’m eager to attend Supercomputing next week in Dallas.  This conference has been around since 1988, and about 11,000 people attend. It is billed as the international conference for HPC (High Performance Computing), networking, storage, and analysis. That sounds like what most […]

The post Supercomputing 2018, a conference for enterprise architects? appeared first on 24x7ITConnection.

New - NetScaler Gateway (Maintenance Phase) Plug-ins and Clients for Build 11.1-60.13

Netscaler Gateway downloads - Mon, 11/05/2018 - 22:00
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

New - NetScaler Gateway (Maintenance Phase) 11.1 Build 60.13

Netscaler Gateway downloads - Mon, 11/05/2018 - 22:00
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

Review of Additive Manufacture and Generative Design for PLM/Design at Develop 3D Live 2018

Rachel Berrys Virtually Visual blog - Wed, 05/16/2018 - 13:54

A couple of months ago, back at D3DLive!, I had the pleasure of chairing the Additive Manufacturing (AM) track. This event, in my opinion alongside a few others (e.g. Siggraph and COFES), is one of the key technology and futures events for the CAD/Graphics ecosystem. It is also free, thanks in part to major sponsors HP, Intel, AMD and Dell.

A few years ago at such events, the 3D-printing offerings were interesting and quirky, but not really mainstream manufacturing or CAD. There were 3D-printing vendors and a few niche consultancies, but it certainly wasn’t technology making keynotes or being mentioned by the CAD/design software giants. This year saw the second session of the day on the keynote stage (video here) featuring a generative design demo from Bradley Rothenberg of nTopology.

With a full track dedicated to Additive Manufacture (AM) this year, including the large mainstream CAD software vendors such as Dassault, Siemens PLM and Autodesk, this technology really has hit the mainstream. The track was well attended; when polled, approximately half of the attendees were actually involved in implementing additive manufacture, and a significant proportion were using it in production.

There was in general a significant overlap between many of the sessions; this technology has now become so mainstream that, rather than seeing new concepts, we are seeing, as with mainstream CAD, more of an emphasis on specific product implementations and GUIs.

The morning session was kicked off by Sophie Jones, General Manager of Added Scientific, a specialist consultancy with strong academic research links who investigate future technologies. This really was futures stuff rather than the mainstream, covering 3D-printing of tailored pharmaceuticals and healthcare electronics.

Kieron Salter from KWSP then talked about some of their user case studies; as a specialist consultancy, they’ve been needed by some customers to bridge the gaps in understanding. Some of their work in the Motorsports sector was particularly interesting as cutting-edge, novel automotive design.

Jesse Blankenship from Frustum gave a nice overview of their products and their integration into Solid Edge, Siemens NX and Onshape but he also showed the developer tools and GUIs that other CAD vendors and third-parties can use to integrate generative design technologies. In the world of CAD components, Frustum look well-placed to become a key component vendor.

Andy Roberts from Desktop Metal gave a rather beautiful demonstration walking through the generative design of a part, literally watching the iteration from a few constraints to an optimised part. This highlighted how different many of these parts can be compared to traditional techniques.

The afternoon started with a bonus session that hadn’t made the printed schedule, from Johannes Mann of Volume Graphics. It was a very insightful overview of the challenges in fidelity checking additive manufacturing and simulations on such parts (including some from Airbus).

Bradley Rothenberg of nTopology reappeared to elaborate on his keynote demo and covered some of the issues for quality control and simulation for generative design that CAM/CAE have solved for conventional manufacturing techniques.

Autodesk’s Andy Harris’ talk focused on how AM was enabling new genres of parts that simply aren’t feasible via other techniques. The complexity and quality of some of the resulting parts were impressive and often incredibly beautiful.

Dassault’s session was given by last-minute substitute speaker David Reid; I haven’t seen David talk before and he’s a great speaker. It was great to see a session led from the Simulia side of Dassault and how their AM technology integrates with their wider products. A case study on Airbus’ choice and usage of Simulia was particularly interesting, as it covered how even the most safety-critical, traditional big manufacturers are taking AM seriously and successfully integrating it into their complex PLM and regulatory frameworks.

The final session of the day was probably my personal favourite: Louise Geekie from Croft AM gave a brilliant talk on metal AM, but what made it for me was her theme of understanding AM’s limitations and when you shouldn’t use it – basically, just because you can… should you? This covered long-term considerations on production volumes, compromises on material yield for surface quality, failure rates and the costs of post-production finishing. Just because a part has been designed by engineering optimisation doesn’t mean an end user finds it aesthetically appealing – one case involved a motorcycle manufacturer that wanted the front fork to “look” solid.

Overall my key takeaways were:

·       Just because you can doesn’t mean you should; choosing AM requires an understanding of the limitations and compromises, and an overall plan if volume manufacture is an issue.

·       The big CAD players are involved, but there’s still work to be done to harden the surrounding frameworks, in particular reliable simulation, search and fidelity testing.

·       How well the surrounding products and technologies handle the types of topologies and geometries GM throws out will be worth watching; in particular how Siemens Synchronous Technology and direct modellers cope, and the part search engines such as Siemens Geolus too.

·       Generative manufacture is computationally heavy and the quality of your CPU and GPU is worth thinking about.

Hardware OEMs and CPU/GPU vendors taking CAD/PLM seriously

These new technologies are all hardware- and computationally-demanding compared to the modelling kernels of 20 years ago. AMD were showcasing and talking about all the pro-viz, rendering and cloud graphics technologies you’d expect, but it was pleasing to see their product and solution teams, and those from Dell, Intel, HP etc, talking about computationally intensive technologies that benefit from GPU and CPU horsepower, such as CAE/FEA and of course generative design. There has been a noticeable increase in recent years in the involvement and support from hardware OEMs and GPU vendors for end-user and ISV CAD/Design events and forums such as COFES, the Siemens PLM Community and Dassault’s Community of Experts; which should hopefully bode well for future platform developments in hardware for CAD/Design.

Afterthoughts

A few weeks ago Al Dean from Develop3D wrote an article (bordering on a rant) about how poorly positioned a lot of the information around generative design (topology optimisation) and its link to additive manufacture is. I think many reading it simply thought – yes!

After reading it, I came to the conclusion that many people think generative design and additive manufacture are inextricably linked. Whilst they can be used in conjunction, there are vast numbers of use cases where the use of only one of the technologies is appropriate.

Generative design, in my mind, is computationally optimising a design to some physical constraints – it could be mass of material, or physical forces (stress/strain) – and could include additional constraints: must have a connector like this in this area, must be this long, or even must be tapered and constructed so it can be moulded (with appropriate tapers, so it falls out of the mould).

Additive manufacture is essentially 3D printing, often of metals: adding material, rather than the traditional machining mentality of CAD (Booleans are often described as target and tool), where stuff is removed from a block of metal.

My feeling is that generative design has far greater potential for reducing costs and optimising parts for traditional manufacturing techniques (e.g. 3/5-axis G-code-like considerations, machining, injection moulding) than has been highlighted, whilst AM as a prototyping workflow for those techniques is less mature than it could be, because the focus has been on the weird and wonderful organic parts you couldn’t make before without AM/3D printing.

AWS and NICE DCV – a happy marriage! … resulting in a free protocol on AWS

Rachel Berrys Virtually Visual blog - Thu, 05/03/2018 - 13:12

It’s now two years since Amazon bought NICE and their DCV and EnginFrame products. NICE were very good at what they did. For a long time they were one of the few vendors who could offer a decent VDI solution that supported Linux VMs; with a history in HPC and Linux, they truly understood virtualisation and compute as well as graphics. They’d also developed their own remoting protocol, akin to Citrix’s ICA/HDX, and it was one of the first to leverage GPUs for tasks like H.264 encode.

Because they did Linux VMs and neither Citrix nor VMware did, NICE were often a complementary partner rather than a competitor, although with both Citrix and VMware adding Linux support that has shifted a little. AWS promised to leave the NICE DCV products alone and have been true to that. However, the fact that Amazon now owns one of the best and most experienced protocol teams around has always raised the possibility that they could do something a bit more interesting than most other clouds.

Just before Xmas, in December 2017, without much fuss or publicity, Amazon announced that they’d throw NICE DCV in for free on AWS instances.

NICE DCV is a well-proven product with standalone customers, and for many users it offers an alternative to the Citrix/VMware offerings; which raises the question: why run VMware/Citrix on AWS if NICE will do?

There are also an awful lot of ISVs looking to offer cloud-based services and products, including many with high graphical demands. To run these applications well in the cloud you need a decent protocol; some have developed their own (which tend to be fairly basic H.264), while others have bought in technology from the likes of Colorado Code Craft or Teradici’s standalone Cloud Access Software, based around the PCoIP protocol. Throwing in a free protocol removes the need to license a third-party such as Teradici, which means the overall solution cost is cut with no impact on the price AWS get for an instance. This could be a significant driver for ISVs and end-users to choose AWS above competitors.

Owning and controlling a protocol was a smart move on Amazon’s part; a protocol is a key element of remoting and of the performance of a cloud solution, so it makes perfect sense to own one. Microsoft, and hence Azure, already have RDS/RDP under their control. Will we see moves from Google or Huawei in this area?

One niggle is that many users need not just a protocol but a broker; at the moment Teradici and many others do not offer one themselves, and users need to go to another third-party such as Leostream to get the functionality to spin up and manage the VMs. Leostream have made a nice little niche supporting a wide range of protocols. It turns out that AWS are also offering a broker via the NICE EnginFrame technologies; this is however an additional paid-for component, but the single-vendor offering may well appeal. This was really hard to find out from the AWS documentation and product overviews; in the end I had to contact the AWS product managers for NICE directly to be certain.

Teradici do have a broker in development, the details of which they discussed with Jack on brianmadden.com.

So, today there is the option of a free protocol and a paid-for broker (NICE plus EnginFrame, albeit tied to AWS), and soon there will be a paid protocol from Teradici with a broker thrown in; the Teradici protocol is already available on the AWS marketplace.

This is just one example of many where cloud providers can take functionality in-house and boost their appeal by cutting out VDI, broker or protocol vendors. Niche protocol and broker vendors will need to offer value through platform independence and any-ness (the ability to choose AWS, Azure or Google Cloud) against out-of-the-box, one-stop cloud giant offerings. Some will probably succeed, but a few may well be squeezed. It may indeed push some to widen their offerings, e.g. protocol vendors adding basic broker capabilities (as we are seeing with Teradici) or widening Linux support to match the strong NICE offering.

In particular broker vendor Leostream may be pushed, as other protocol vendors may well follow Teradici’s lead. However, analysts such as Gabe Knuth have reported for many years on Leostream’s ability to evolve and add value.

We’ve seen so many acquisitions in VDI/Cloud where a good small company gets consumed by a giant and eventually fails, the successful product dropped and the technologies never adopted by the mainstream business. AWS seem to have achieved the opposite with NICE, continuing to invest in a successful team and product whilst leveraging exactly what they do best. What a nice change! It’s also good to see a bit more innovation and competition in the protocol and broker space.

Open-sourced Virtualized GPU-sharing for KVM

Rachel Berrys Virtually Visual blog - Thu, 03/22/2018 - 12:05

About a month ago, Jack Madden’s Friday EUC news-blast (worth signing up for) highlighted a recent announcement from AMD around open-sourcing their GPU drivers for hardware shared-GPU (MxGPU) on the open-source KVM hypervisor.

The actual announcement was made by Michael De Neffe on the AMD site, here.

KVM is an open source hypervisor, favoured by many in the Linux ecosystem and segments such as education. Some commercial hypervisors are built upon KVM adding certain features and commercial support such as Red Hat RHEL. Many large users including cloud giants such as Google, take the open source KVM and roll their own version.

There is a large open source KVM user base who are quite happy to self-support, including a large academic research community. Open-sourced drivers enable both vendors and others to innovate and develop specialist enhancements. KVM is also a very popular choice in the cloud OpenStack ecosystem.

As far as I know, this is the first open-sourced GPU-sharing technology available to the open source KVM base. AMD’s hardware sales model also suits this community well, with no software license or compulsory support; a model paralleling how CPUs/servers are purchased.

Shared GPU reduces the cost of providing graphics and suits the economies of scale and cost demanded in the cloud. I imagine that for the commercial and cloud-based KVM hypervisors, ready access to drivers can only help accelerate and smooth their development on top of KVM.

The drivers are available to download here:

https://support.amd.com/en-us/download/workstation?os=KVM#. Currently there are only guest drivers for Windows OSs; however, being open source, this opens up the possibility for a whole host of third-parties to develop variants for other platforms.

There is also an AMD community forum where you can ask more questions if this technology is of interest to you, and read about the various stacks and applications other users are interested in.

Significant announcements for AR/VR for the CAD / AEC Industries

Rachel Berrys Virtually Visual blog - Fri, 03/09/2018 - 16:22
Why CAD should care about AR/VR?

VR (Virtual Reality) is all niche headsets and gaming? Or putting bunny ears on selfies… VR basically has a marketing problem. It looks cool, but to many in enterprise it seems a niche technology for previewing architectural buildings etc. In fact, the use cases are far wider if you get past those big boxy headsets. AR (Augmented Reality) is essentially bits of VR on top of something see-through. There’s a nice overview video of the Microsoft Hololens from Leila Martine at Microsoft, including some good industrial case studies (towards the end of the video), here. Sublime have some really insightful examples too, such as a Crossrail project using AR for digital twin maintenance.

This week there have been some _really_ very significant announcements from two “gaming” engines, Unity and the Unreal Engine (UE) from Epic. The gaming engines themselves take data about models (which could be CAD/AEC models) together with lighting and material information and put it all together in a “game” which you can explore – or thinking of it another way they make a VR experience. Traditionally these technologies have been focused on gaming and film/media (VFX) industries. Whilst these games can be run with a VR headset, like true games they can be used on a big screen for collaborative views.

Getting CAD parts into gaming engines has been very fiddly:
  • The meshed formats in VFX industries are quite different from those generated in CAD.
  • Enterprise CAD/AEC users are also unfamiliar with the very complex VFX industry software used to generate lighting and materials.
  • CAD/AEC parts are frequently very large and go through multiple design iterations, so a large degree of automation is needed to fix them up repeatedly (or a lot of manual hard work).
  • Large engineering projects usually consist of thousands of CAD parts, in different formats from different suppliers.

Many have focused on the Autodesk FBX ecosystem and 3DS Max, which, with tools like the Slate material editor, allowed the materials/lighting information to be added to the CAD data. This week both Unreal and Unity announced what amount to end-to-end solutions for a CAD-to-VR pipeline.

Unreal Engine

Last year at Siggraph, in July 2017, Epic announced Datasmith for 3DS Max, with the implication of another 20 or so formats to follow (they were listed on the initial beta sign-up dropdown) including ESRI, Solidworks, Revit, Rhino, Catia, Autodesk, Siemens NX and Sketchup; the website today lists fewer, but more explicitly, here. This basically promises the technology to get CAD data from multiple formats/sources into a form suitable for VFX.

This week they followed it up with the launch of a beta of Unreal Studio. Develop3D have a good overview of the announcement, here. This reminds me a lot of the Slate editor in 3DS Max, and it looks sleek enough that your average CAD/AEC user could probably use it without significant training (there are a lot of tutorial resources). With an advertised launch price of $49 per month it’s within the budget of your average small architectural firm, and the per-month billing makes it friendly to project-based billing.

Epic are taking on a big task in delivering the end-to-end solution themselves, but they seem to know what they are doing. Watching their hiring website over the last six months, they seem to have been hiring a large number of staff both in development (often in Canada) and in sales/business for these projects (hint: the roles are often tagged with ‘enterprise’, so easy to spot). Over the last couple of years they’ve also built up a leadership team for these projects, including Marc Petit, Simon Jones and Christopher Murray, and it’s worth reviewing the marketing material those folks are putting out.

Unity Announcement

On the same day as the UE announcement, Unity countered with the announcement of a similar end-to-end solution, provided via a partnership with PiXYZ, a small but specialist CAD toolkit provider.

Whilst the beta is not yet released, PiXYZ’s existing offerings look a very good and established technology match. Their website is remarkably high on detail of specific functionality, and it looks good. PiXYZ Studio, for example, has all the mesh fix-up tools you’d like for cleaning up CAD data for visualisation and VFX, and PiXYZ Pipeline seems to cover all your import needs. I’ve heard credible rumours that a lot of the CAD-focused functionality is built on top of some of the most robust industry-licensed toolkits, so the signs are positive that this will be a robust, mature solution rather fast. This partnership seems to place Unity in a position to match the Datasmith UE offering.

It’s less clear what Unity will provide on the materials / lighting front, but I imagine something like the Unreal Studio offering will be needed.

What did we learn from iRay and vRay in CAD?

Regarding static rendering in VFX land: vRay, Renderman, Arnold and iRay compete, with iRay taking a fairly small share. However, via strong GPU, hardware and software vendor partnerships, iRay has become the dominant choice in enterprise CAD (e.g. Solidworks Visualize etc). CAD loves to standardise, so it will be interesting to see whether a similar battle of Unity vs Unreal unfolds, with an eventual dominant force.

Licensing and vendor lock-in

This has all been enabled by a shift in the licensing models of the gaming engines, demonstrating they are serious about the enterprise space. For gaming, a game manufacturer would pay a percentage (such as 9%) to use a gaming engine to create their game. This makes no sense in the enterprise space, where integration with a gaming engine is a tiny additional feature on the overall CAD/PLM deployment. So you will see lots of headlines about “royalty free” offerings; the revenues are in the products such as Datasmith and Studio. The degree to which both vendors rely on third-party toolkits and libraries under the hood (e.g. CAD translators, the PiXYZ functionality etc) will also dictate profitability, via how much Unreal or Unity have to pay in licensing costs.

These single vendor/ecosystem pipelines are attractive, but relying on the gaming engine provider for the CAD import and materials could potentially lead to lock-in, which always makes some customers nervous. Having done all the work of converting CAD data into something fit for rendering and VR, I could see the attraction of being able to output it to iRay, Unity or Unreal; which of course is the opposite of what these products are.

Opportunities

There’s a large greenfield market in CAD/AEC of customers who make very limited or no use of visualisation. Whilst the large AEC firms may have little pockets of specialist VFX, your average 10-man architecture firm doesn’t, and likewise for the bulk of the Solidworks base. This technology looks simple enough for those users, but I suspect uptake by SMBs may be slower than you might presume: for projects won on the lowest bid, why add a VR/AR/professional render component if Sketchup or similar is sufficient?

In enterprise CAD, AEC and GIS there are already VR users with bespoke solutions and strong specialist software offerings (often expensive) and it will be interesting to see the dynamics between these mass-market offerings and the established high-end vendors such as ESI.io or Optis.

These announcements also set Unity and Unreal up to start nibbling into the VFX, film and media ecosystems where specialist, complex materials and lighting products are used. For many in AEC/CAD these products are a bit overkill. A lot of these users are likely to be less inclined to build their own materials, and simply want libraries mapping the CAD materials (“this part is Steel”) to the VFX materials (“this is Steel and Steel should behave like this in response to light”). In the last month or so we’ve also seen UE move into traditional VFX territory, with headlines such as “Visually Stunning Animated Feature ‘Allahyar and the Legend of Markhor’ is the First Produced Entirely in Unreal Engine” and Zafari, a new children’s cartoon TV series made using UE.

I haven’t seen any evidence of integrations with the CAD materials ecosystems bridging the CAD material (“this part is Steel”) to the VFX material (“this is Steel and Steel should behave like this in response to light”) part of the solution. If this type of solution becomes mainstream, it would be nice to see the material specialists (e.g. Granta Design) and CAD catalogues (e.g. Cadenas) carry information about how VFX-type visualisation should be done based on the engineering material data. One to look out for.

Overall, I’m very interested in these announcements: lots of sound technology and use cases. But whether the mass market is quite over the silly VR headset focus just yet… we’ll soon find out.

IoT Lifecycle attacks – lessons learned from Flash in VDI/Cloud

Rachel Berrys Virtually Visual blog - Wed, 08/23/2017 - 12:55
There are lots of parallels between cloud/VDI deployments and “the Internet of Things (IoT)”: basically, they both involve connecting an end-point to a network.

One of the pain points in VDI for many years has been Flash redirection. Flash is a product that its maker, Adobe, seems to have been effectively de-investing in for years. With redirection there is both server and client software. Adobe dropped development for Linux clients many years ago, then surprisingly resurrected it late last year (presumably after customer pressure). Adobe have since said they will kill the Flash player on all platforms in 2020.

Flash was plagued by security issues and compatibility issues (client versions that wouldn’t work with certain server versions). In a cloud/VDI environment the end-points and the cloud/data center are often maintained by different teams or even different companies. This is exactly the same challenge that the Internet of Things faces: a user’s smart lightbulb or washing machine is bought with a certain version of firmware, OEM software etc., and how it is maintained is a challenge.

It’s impossible for vendors to develop products that can predict the architecture of future security attacks, and patches are frequent. Flash incompatibility often led to VDI users using registry hacks to disable the version matching between client and server software, simply to keep their applications working. When Linux Flash clients were discontinued, it left users unsupported, as Adobe no longer developed the code and VDI vendors were unable to support closed-source Adobe code.

The Flash Challenges for The Internet of Things
  • Customers need commitments from OEMs and software vendors for support matrices, how long a product will be updated/maintained.
  • IoT vendors need to implement version checking to protect end-clients/devices from being downgraded to vulnerable versions of firmware/software, and from life-cycle attacks.
  • In the same way that VDI can manage/patch end-points, vendors will need to implement ways to manage IoT end-points.
  • What happens to a smart device if the vendor drops support or goes out of business? Is the consumer left with an expensive brick? Can it even be used safely?

There was a recent article in the Washington Post on Whirlpool’s lack of success with a connected washing machine; it comes with an app that allows you to “allocate laundry tasks to family members” and share “stain-removing tips with other users”. With uptake low, it raises the question of how long OEMs will maintain apps and services like these. Many consumer devices such as washing machines are expected to last 5+ years. Again, this is a challenge VDI/Cloud has largely solved for thin clients, devices with long 5-10 year refresh cycles.

Android Rooting and IoS Jailbreaking – lessons learned for IoT Security

Rachel Berrys Virtually Visual blog - Mon, 08/21/2017 - 11:38

Many security experts regard Android as the wild west of IT: an OS based on Linux, developed by Google primarily for mobile devices but now becoming key to many end-points associated with IoT, automotive, televisions etc. With over 80% of smartphones running Android and most of the rest using Apple’s iOS, Android is well established, and security is a big concern.

Imagine you are a big bank and you want 20000 employees to be able to access your secure network from their own phones (BYOD, Bring Your Own Device) or you want to offer your millions of customers your bank’s branded payment application on their own phone. How do you do it?


Android and iOS have very different security models and very different ways they can be circumvented. Apple with iOS have gone down the route of only allowing verified applications from the Apple store to be installed. If users want to install other applications they can compromise their devices by jailbreaking their iPhone or similar. Jailbreaking can allow not only the end user but also malicious third-parties to circumvent Apple’s controls in iOS. iOS implements a locked bootloader to prevent modification of the OS itself or allowing applications root privileges.

Many people describe “rooting” on Android as equivalent to jailbreaking. It isn’t. Android already allows users to add additional applications (via side-loading). Rooting of an Android device can allow the OS itself to be modified. This can present a huge security risk: once the OS on which applications run has potentially been compromised, an application running on it can’t really establish if the device is secure. Asking pure software on a device “hello, compromised device – are you compromised?” is simply a risky and silly question. Software alone can theoretically never guarantee to detect that a device is secure.

There are pure software applications that purport to establish if a device is compromised, usually via techniques such as looking for common apps that can only be installed if a device is rooted/jailbroken, for characteristics left by rooting/jailbreaking applications, or for signs of known malicious viruses/worms etc. These often present a rather falsely reassuring picture: because they detect the simplest and most common compromises, it looks like such applications can detect a potentially unsecure device. However, for the most sophisticated of compromises, where the OS itself is compromised, the OS can supply such applications with the answer that the device is secure even if it isn’t. Being able to patch and upgrade the OS has a number of technical benefits, so some OEMs ship Android devices rooted, and there is a huge ecosystem of rooting kits to enable it to be done. Rootkits can be very sinister and hide themselves, though, lurking and waiting to be exploited.

Knowing your OS is compromised is a problem comparable to that faced with hypervisors in virtualisation, and one that can be solved by relying on hardware security, where the hardware below the OS can detect if the OS is compromised. Technologies such as Intel TXT on servers take a fingerprint of a hypervisor, lock it away in a hardware unit and compare the hypervisor to the reference on each boot; if the hypervisor is meddled with, the administrator is alerted.

Recognising the need for security for Android and other rich OSs, technologies have emerged from OEMs and chip designers that rely on hardware security. Usually these technologies include hardware support for trusted execution, trusted root and isolation, with a stack of partners involved to ensure end applications can access the benefits of hardware security.

Typically, there is some isolation whereby both trusted and untrusted processors and memory are provided (some technologies allow the trusted and untrusted “worlds” to be on the same processor). The trusted world is where tested firmware can be kept; it remains a safe haven that knows what the stack above it, including the OS, should look like. Trusted execution environments (TEEs) and trusted root are common in cloud and mobile, and have enabled the widespread adoption of and confidence in mobile pay applications etc.

Many IoT products have been built upon chips designed for mobile phones, thin clients etc. and as such, with Linux/Android OSs, have the capability to support hardware-backed security. However, many embedded devices were never designed to “be connected” and lack such security considerations. For the IoT (Internet of Things) to succeed, the embedded and OEM ecosystems need to look to hardware-based security, following the success of the datacentre and mobile in largely solving secure connection.

Of course, it all depends on the quality of execution. Enabling hardware security is a must for a secure platform; however, if a software stack is then added in which a webcam’s default password is hardcoded, the device can be compromised.
