Manufacturers are constantly innovating on design in response to consumer expectations, and to explore new ways of delivering increased value at reduced cost. Often, this design innovation happens through a combination of CAD revisions, rapid prototyping, digital and physical testing, short-run production and other iterative design approaches.

The pervasiveness of CAD software and the ease of exploring digital prototypes have made it much easier for design engineers to offset physical prototyping with simulated prototypes. Where engineers once had to develop expensive, time-consuming physical iterations of a design and test them in a well-controlled experimental environment, it is now common, and fairly robust, to run those same experiments digitally at a significant speed and cost advantage.

A commonly used approach to digital design engineering is Design of Experiments (DOE). In a digital DOE, an entire design space can be explored by running a large number of samples and fitting a response surface to the resulting data. The response surface connects the inputs to the outputs and is usually based on some form of linear regression. It can be used to quickly understand the tradeoffs between design changes.
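As a minimal sketch of the response-surface idea, the snippet below fits a quadratic surface to synthetic DOE samples over two hypothetical inputs (theta and radius); the assumed output relationship is invented for illustration, whereas a real DOE would use solver or measurement results:

```python
import numpy as np

# Synthetic DOE: sample two hypothetical design inputs and an assumed output.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 3.0, 60)
radius = rng.uniform(1.0, 3.0, 60)
output = 2.0 + 0.5 * theta + radius**2 + 0.01 * rng.standard_normal(60)

# Design matrix with constant, linear, quadratic and cross terms,
# fit by linear least squares (the "linear regression" in the text).
X = np.column_stack([np.ones_like(theta), theta, radius,
                     theta**2, radius**2, theta * radius])
coeffs, *_ = np.linalg.lstsq(X, output, rcond=None)

def predict(t, r):
    """Evaluate the fitted response surface at a new design point."""
    return np.array([1.0, t, r, t**2, r**2, t * r]) @ coeffs
```

Once fitted, `predict` answers "what if" questions about new design points at negligible cost, which is exactly the tradeoff-exploration role the response surface plays.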

These linear regression-based response surfaces, however, often fail to capture complex, non-linear interactions in the design space, so running enough samples to reach the required accuracy becomes expensive.


Example response surfaces comparing kriging vs. polynomial interpolation methods. The DOE samples are shown as red dots. Two inputs are presented (theta and radius), with the Z axis representing the output. Notice how different interpolation methods produce dramatically different result landscapes.


Example DOE results presented as a sensitivity plot. DOE inputs are along the X axis, and the output (in this case part_mass) is along the Y axis. For this part, the R1 parameter has the highest positive correlation to mass (as R1 increases, mass increases substantially). D2 has a slightly negative correlation to mass (as D2 increases, mass decreases slightly).
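One way such a sensitivity plot can be built is by computing the Pearson correlation between each DOE input column and the output. The sketch below assumes synthetic data; the input names (R1, D2) mirror the figure, but the coefficients are invented:

```python
import numpy as np

# Synthetic DOE results: R1 strongly increases mass, D2 slightly decreases it.
rng = np.random.default_rng(4)
R1 = rng.uniform(1, 5, 200)
D2 = rng.uniform(10, 20, 200)
part_mass = 3.0 * R1 - 0.2 * D2 + rng.normal(0, 0.5, 200)

# Pearson correlation of each input against the output: the bar heights
# of a sensitivity plot.
inputs = {"R1": R1, "D2": D2}
sensitivity = {name: float(np.corrcoef(col, part_mass)[0, 1])
               for name, col in inputs.items()}
```

A correlation near +1 or -1 flags an input worth tightening or exploiting; one near 0 suggests the input can be fixed at a convenient value to shrink the design space.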

Design optimization is another common technique for improving or innovating on a design. In this design modality, an engineer outlines a design space in a similar manner to a DOE, but instead of randomly sampling the space and generating a response surface, a mathematical method drives the selection of the samples until a set of convergence criteria is reached. Genetic Algorithms (GAs), for example, sample the design space given a population size, evaluate the samples in waves using a provided merit function, and iterate the population waves using various crossover and mutation approaches until the convergence criteria are reached. Often, the more samples in a wave, the fewer generations are needed to find the optimum.
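The selection/crossover/mutation loop can be sketched in a few lines. This is a deliberately minimal GA minimizing a cheap stand-in merit function (a sphere function with its optimum at the origin), not the app's actual optimizer; population size, blend crossover and Gaussian mutation are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])  # design-space bounds

def merit(pop):
    # Sphere function: stand-in for an expensive solver; optimum at origin.
    return np.sum(pop**2, axis=1)

pop = rng.uniform(lo, hi, size=(40, 2))            # initial population wave
for generation in range(60):
    scores = merit(pop)
    parents = pop[np.argsort(scores)[:20]]          # selection: keep best half
    # Crossover: blend randomly chosen parent pairs.
    pairs = rng.integers(0, 20, size=(40, 2))
    w = rng.random((40, 1))
    children = w * parents[pairs[:, 0]] + (1 - w) * parents[pairs[:, 1]]
    # Mutation: small Gaussian perturbation, clipped to the design bounds.
    pop = np.clip(children + rng.normal(0.0, 0.1, children.shape), lo, hi)

best = pop[np.argmin(merit(pop))]
```

Note that each generation calls `merit` on the whole wave; when that call is a CFD or FEA run instead of a one-line formula, the cost problem described next becomes obvious.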

Example GA and local search optimization of an Onshape bottle aiming to minimize the mass of the shell.


While often accurate at finding an optimal solution, GAs are considerably time-intensive and expensive to evaluate. A GA frequently takes thousands of iterations to reach convergence, and if a given case is expensive to solve (such as a physics-based CFD or FEA problem), a GA could end up running for several months even on a large computing cluster of hundreds to thousands of nodes.

Solution: The SuperLearner Approach

A consortium of technical partners, including Argonne National Laboratory, Convergent Science, Onshape and Parallel Works, has been exploring new ways of solving the design optimization challenges described above. More specifically, data-driven machine learning (ML) models have emerged as a unique way to help solve some of these challenges, primarily in the areas of inaccurate linear regression-based DOE response models and computationally expensive design optimization using GAs.

This unique ML-GA approach, called SuperLearner, allows a design engineer to create a training dataset by sampling a digital design space specified by the practitioner, build an ML-based response surface model, define one or more merit functions, and run various GA optimization methods over the model, usually in seconds. This rapid optimization allows the practitioner to go back, redefine their objectives and quickly rerun the optimizations, something that could otherwise take days, weeks or months depending on the computational cost of a single case. The approach is named SuperLearner because, when building the ML-based response surfaces, several ML algorithms are stacked upon one another to deliver the best results across multiple methods.
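The stacking idea can be illustrated with a toy example. Below, two hypothetical base "learners" (a linear fit and a nearest-neighbour lookup) are combined with weights derived from held-out error; this mirrors the spirit of stacking several response-surface models, not the app's actual algorithm:

```python
import numpy as np

# Synthetic training data: a non-linear response with a little noise.
rng = np.random.default_rng(2)
x_train = rng.uniform(0, 1, (80, 1))
y_train = np.sin(3 * x_train[:, 0]) + 0.05 * rng.standard_normal(80)
x_hold = rng.uniform(0, 1, (20, 1))          # held-out points for weighting
y_hold = np.sin(3 * x_hold[:, 0])

def linear_model(xq):
    # Base learner 1: ordinary least-squares line.
    A = np.column_stack([np.ones(len(x_train)), x_train[:, 0]])
    c, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return c[0] + c[1] * xq[:, 0]

def nearest_model(xq):
    # Base learner 2: predict the output of the closest training sample.
    idx = np.abs(x_train[:, 0][None, :] - xq[:, 0][:, None]).argmin(axis=1)
    return y_train[idx]

# Weight each learner by its inverse held-out mean-squared error.
errs = np.array([np.mean((m(x_hold) - y_hold)**2)
                 for m in (linear_model, nearest_model)])
weights = (1 / errs) / np.sum(1 / errs)

def superlearner(xq):
    return weights[0] * linear_model(xq) + weights[1] * nearest_model(xq)
```

Because the weights favor whichever learner fits the data best, the stacked model is never worse than its weakest member and often beats any single one, which is the motivation for stacking across multiple ML methods.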


Example plots demonstrating (left) DOE sampling points creating an interpolated response surface (red dots), and a predicted output (blue dot) using the response, and (right) an example local search optimization minimizing an output (red dots) on a polynomial smoothed response surface.

Introducing the SuperLearner Onshape App

Creating parametric geometry for a given DOE or GA problem is often a bottleneck in executing these types of studies. Business as usual often involves coupling diverse sets of software together using some connective framework, an entire systems-level expertise in its own right. Connecting these workflows to high-performance computers to evaluate the large number of cases required presents yet another set of challenges that few organizations have the expertise to meet.

With the emergence of Onshape’s configuration variable paradigm, it was clear to the Parallel Works team that a streamlined approach for generating a parametric part, driving it using an API and exporting it to various formats for analysis was now possible. It became a perfect environment for integrating the SuperLearner approach and getting it closely into the hands of design engineers.

Parallel Works, along with its technical partners, has developed a fully integrated Onshape app (now available in the Onshape App Store) that brings the power of the SuperLearner approach to any Onshape Part Studio. The SuperLearner app provides a wizard-like approach to defining an optimization design space, performance objectives based on measurable elements of an Onshape model, a DOE using various sampling methods to create training data, and finally an ML-GA optimization for a user-provided merit function. Workflows were developed on Parallel Works that fully automate running the DOE executor, the ML-GA optimizer, and finally a video generator showing the optimization convergence.

The Onshape-integrated SuperLearner app attaches to an existing Part Studio that has been created with Configuration Variables defining min, max and default values. A new public Measurement FeatureScript was created to easily identify the performance metrics, or outputs, of a given optimization problem. At the moment, the SuperLearner app is restricted to using Onshape measurements as performance outputs, such as mass, volume, center of gravity, distances and angles, but it can also evaluate any user workflow defined in Parallel Works (for example, to run large-scale CFD and FEA studies using various open and/or licensed tools). Parallel Works is also exploring integrating several simulation tools directly into the SuperLearner setup process to make physics-based optimizations on CFD or FEA problems fully integrated.

How to Use the SuperLearner App in an Onshape Part Studio

The Parallel Works team developed the SuperLearner app with numerous use cases in mind, particularly those where each simulation result takes a long time to generate. Currently, the app is limited to mass properties and Onshape-measurable outputs, but it can readily be coupled to any custom simulation workflow consisting of open-source, licensed or proprietary tools (contact Parallel Works to learn more about enabling this). Now, let's walk through an example optimization problem: minimizing the shell mass of a parametric CAD bottle.

1. Define and test configuration variables and part measurements

The SuperLearner app can easily attach to any existing Onshape Part Studio. In either a new or existing Part Studio, add Configuration Variables to the aspects of the part you want to optimize. Select appropriate min, max and default values, and assign them to the part's features (sketches, extrusions, offsets, etc.).

Once these Configuration Variable assignments are complete, test the parameters of the model and ensure it builds as desired. You will notice that some model configurations may break Onshape's regeneration. The SuperLearner optimizer anticipates this and flags these cases as invalid solutions, but it's good practice to minimize these failures. These Configuration Variables will form the default parameters for the SuperLearner-based optimization:

Example bottle Part Studio parametric testing with six Configuration Variables

Example bottle Part Studio parametric testing with six Configuration Variables

Once your geometrical parameters (inputs) are defined and tested using Configuration Variables, you can use the public Measure FeatureScript to define the outputs that you want to optimize for. At the moment, you can select measurements such as mass properties, distances and angles.

GIF of Onshape, setting Configuration Variables

Finally, open up the SuperLearner app (after subscribing via the Onshape App Store) and select the Part Studio you want to optimize within your workspace:

Gif of utilizing the SuperLearner app in Onshape

2. Select DOE parameters and outputs

With your optimization Part Studio selected, you can now define the design space you want to optimize over. You can also select the specific Measurements you want to calculate for each iteration of the DOE:

Gif of design space optimization in Onshape

3. Generate the DOE training data

After specifying your DOE inputs and outputs, select the number of DOE samples you want to evaluate, as well as the design space sampling method. After setting these values, a plot is generated below showing the inputs on the X and Y axes. This plot shows you at a glance the specific cases that will be evaluated as part of the DOE and, on the diagonals, the distribution of points within each input range. The SuperLearner app currently supports six sampling methods, including Latin hypercube, space-filling, Monte Carlo and k-means clustering.

GIF showing some of the sampling methods in the SuperLearner app in Onshape
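To give a feel for one of these sampling methods, here is a minimal Latin hypercube sketch for a hypothetical two-variable bottle design space (the variable names and bounds are invented, not the app's defaults):

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 16
# Hypothetical Configuration Variable ranges (min, max).
bounds = {"neck_radius": (5.0, 15.0), "body_height": (60.0, 120.0)}

def latin_hypercube(n, dims, rng):
    # One random point per stratum in each dimension...
    u = (rng.random((n, dims)) + np.arange(n)[:, None]) / n
    # ...then shuffle each dimension independently so strata don't align.
    for d in range(dims):
        u[:, d] = u[rng.permutation(n), d]
    return u

unit = latin_hypercube(n_samples, len(bounds), rng)
lows = np.array([lo for lo, _ in bounds.values()])
highs = np.array([hi for _, hi in bounds.values()])
samples = lows + unit * (highs - lows)   # scale to the configured min/max
```

Unlike pure Monte Carlo, every input range is guaranteed to be covered evenly, which usually gives a better response surface from the same number of (expensive) DOE evaluations.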

After specifying your DOE sampling space, submit the job and watch the results stream back in real time. As DOE runs complete, they are presented in the form of a sensitivity plot showing which inputs have the highest or lowest impact on the outputs:

GIF showing sensitivity plots with the SuperLearner App in Onshape

4. Set up the optimization and run

Once your training data has been generated, you can proceed to run an ML-GA optimization over it. Enter a Python-based merit function, select the number of evaluations to predict, and hit the go button. Note that the more iterations you select, the more accurate the final result may be, as the optimizer can discover more potential local minima or maxima. If your optimizations are not finding the optimum you expect, you likely need to regenerate the DOE training dataset with more samples (i.e., create a more accurate response landscape):

GIF of running a Machine Learning optimization in Onshape

5. View the optimal part

An optimization should only take 20-30 seconds to generate, because it isn't actually evaluating each iteration; it is only making ML-based predictions from the training data you provided. Once it completes, you can navigate a plot showing how the values change over the course of the optimization. You can select a particular evaluation, apply its values to your part's configuration, and watch the part regenerate to the optimum:

GIF of optimization via Machine Learning in Onshape

One of the most useful things about the SuperLearner approach is that once an optimization completes, you can quickly go back, define a new Python-based merit function, and rerun the optimization within 20-30 seconds, exploring the design space with the new merit function you provided. For example, if you wanted to keep the volume of the liquid in the bottle consistent at 100 while weighting shell mass minimization more heavily, you could provide a Python statement similar to the one below:

abs( 100 - liquid_mass ) + ( shell_mass * 10 )
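To make the weighting concrete, here is a hypothetical sketch of how such a merit expression ranks candidate designs; the liquid_mass and shell_mass values are invented for illustration, and in the app the candidates would come from ML predictions over the response surface:

```python
def merit(liquid_mass, shell_mass):
    # Penalize deviation from the 100-unit liquid target,
    # and weight shell mass ten times more heavily.
    return abs(100 - liquid_mass) + shell_mass * 10

# Three made-up candidate designs with predicted outputs.
candidates = [
    {"liquid_mass": 101.5, "shell_mass": 8.2},
    {"liquid_mass": 99.8, "shell_mass": 9.0},
    {"liquid_mass": 100.2, "shell_mass": 7.5},
]

# Lower merit is better: the optimizer keeps the candidate that best
# balances the liquid-volume constraint against shell mass.
best = min(candidates, key=lambda c: merit(**c))
```

Changing the `10` multiplier, or swapping the absolute-value penalty for a hard constraint, reshapes what "best" means without regenerating any training data, which is why rerunning with a new merit function only takes seconds.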

You can also generate an optimization video of each SuperLearner run by clicking the “Generate Video” button. When clicked, each iteration in the optimization generates the appropriate Onshape geometry, takes a snapshot, and generates a plot. These individual frames are then combined into an mp4 video. An example bottle optimization video result is shown below:

Video of the bottle optimization result
Please see below for a video walkthrough of the SuperLearner app minimizing mass of a cyclone separator part in Onshape:

Video walkthrough of the cyclone separator optimization

Join the Onshape/SuperLearner Webinar

Interested in learning more about rapidly optimizing your Onshape parts? Join me and Joe Dunne, Onshape’s Director of Developer Relations, at our live webinar, “Virtual Mockup for Onshape” on Tuesday, June 18th at 11 am EST. Joe and I will be demonstrating how and why to use SuperLearner to innovate on your Onshape designs. You can also sign up for a free 30-day trial of the SuperLearner app in the Onshape App Store.

Acknowledgments

Parallel Works would like to thank Onshape and Joe Dunne for seeing what was possible with this optimization technology, as well as Argonne National Laboratory and Convergent Science for jointly participating in the Technology Commercialization Fund (TCF) project sponsored by the Department of Energy to make the core functionality of the app possible. For further information, please reference this SAE technical paper for the original work that invented and inspired the SuperLearner app.