
CIMdata Blog

Tuesday, July 18, 2017

Simulation-Driven Test: A Path to “Smart” Testing?

Written by Keith Meintjes

Let’s start with a pop quiz: Ask a half-dozen of your colleagues, including your managers, “What is the biggest benefit of doing simulation?”

It is highly likely that the answer you will get, particularly in the USA, is, “To reduce testing.” It is also true that, the higher you go up the management ladder, the more pervasive the mantra of “simulation replaces test” becomes.

As you might guess, I think this particular, restricted view of the role of simulation is wrong. Not only that, it can be counterproductive and harmful.

A more “correct” answer, in my opinion, is that you do simulation because it is part of the best process you know for developing products. Indeed, the best role for simulation is to improve first-time product design performance capability.

The rise of computers

A friend of mine at Intel, Don Tappan, observed: “Computers are now infinitely fast and vanishingly inexpensive.” He is correct. If one considers Moore’s Law, that the capability for technical computing doubles every 18 months, we can now compute in one second what would have taken the engineers on the Apollo moon program a thousand years to do! In addition, a Cray-1 supercomputer in the early 1980s cost about $40 million. Now, you can buy a Raspberry Pi for $20 that has 20 times the capability of that Cray-1. The improvement in capability over 50 years is simply astounding.

It will continue, though the death of Moore’s Law has been proclaimed many times. Computers will be faster by a factor of 10 every five years, software will be more capable, and the cost of simulation will continue to decrease dramatically. In 10 years’ time, your computers will be 100 times faster (at less cost), while your employee costs will double. Go figure.
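The growth figures above all follow from the same assumption. As a back-of-the-envelope check (a minimal sketch; the function name is mine, not from the article):

```python
def speedup(years, doubling_period_years=1.5):
    """Capability multiplier after `years`, assuming capability
    doubles every 18 months (the popular reading of Moore's Law
    used in the text)."""
    return 2 ** (years / doubling_period_years)

# Five years of doubling every 18 months gives roughly a factor of 10;
# ten years gives roughly a factor of 100, matching the claims above.
print(f"5 years:  ~{speedup(5):.0f}x")
print(f"10 years: ~{speedup(10):.0f}x")
```

The same arithmetic, run backwards over five decades, is what turns a $40 million Cray-1 into a $20 Raspberry Pi.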

The essential role of test

At some point, companies engaged in mechanical design have to put a physical product in their customers’ hands. They have to do their engineering due diligence, to validate that the product meets the requirements and, among other things, is not dangerous. At least some physical validation testing will always be needed.

In addition, many products face government regulation and compliance standards. Testing may simply be required by external bodies. This is particularly true in fields like commercial aircraft and medical devices.

The evil role of test

There is pressure for companies to rush to prototype hardware, so they can test it to discover failure modes, so they can redesign it. This sort of iterative development testing is, I believe, truly evil. Besides, it simply does not work for large, complex products like automobiles.

Three decades ago, General Motors realized it had a huge problem in product development. New vehicles were taking six or eight years to bring to market; the products were obsolete at their moment of introduction. Those programs involved as many as a dozen iterations of hand-built, very expensive, prototypes.

Today, GM and other auto makers take only 18 to 24 months to develop a new vehicle, and they do it with no prototype builds. Validation is done by testing early production vehicles. There may be a prototype to prove the manufacturing process, but there are no full vehicle prototypes to prove product performance or to discover failure modes.

The fact is, with compressed schedules, prototypes simply take too long to acquire. Also, they are often not representative of production parts, particularly for issues like fatigue and durability. So, manufacturers have altered the content or reach of their production programs, and taken new capability and technology development offline.

Test and simulation, together

The real issue is not how best to do test or to do simulation, but how best to do product (and manufacturing) engineering. The answer, I believe, is to see test and simulation as complementary activities, and then to leverage their synergies.

A manager of mine at GM, Jim Welton, said, “A world-class simulation capability can only exist with a world-class test capability.” At that time, he was responsible for GM’s global CAE simulation effort.

What he meant was, in an era of novel materials (like composites or exotic metal alloys) and production methods (like additive manufacturing) we need great test capabilities to define material properties, perhaps based on our internal, proprietary, production methods. You can’t get the data from outside your company.

Also, what has always been true is that past testing should guide simulation on what to look for, especially with loads and failure modes. And, simulation should guide testing, especially for issues like noise, durability and fatigue, and thermal. Simulation and test should drive each other.

Organization issues

The real problem with the “simulation replaces test” mindset is that it causes an internal competition for resources: “Buy me a supercomputer, and you can cut your test budget.” Every simulation engineer has a war story of a test that failed to uncover a failure mode, just as every test engineer has a tale of a simulation that was wrong.

In the best companies, test and simulation are in a single organization responsible to support both product development and validation. No one from outside comes to question their budget balance between test and simulation. This is particularly true for crash safety performance at large automakers.

At other companies and less mature organizations, test and simulation sit far enough apart in the organization that they compete for resources, at a level of management that does not seem to understand that test and simulation have essentially the same deliverables for product development.

The changing nature of test and simulation

I have already noted the dominant trends: Simulation is becoming faster and less costly, while test is becoming relatively slower and more expensive.

The same drivers (the rise of digital) are forcing ideas like the digital twin, the digital enterprise, and model-based everything. So, where do test and simulation go?

Some, like Rod Dreisbach of NAFEMS, advance the idea of “smart testing.” I have used the descriptor “simulation-driven test.” Both of these terms really get at the integration of simulation and test for a changing product development paradigm.

I think that’s the important idea, and a good descriptor: the integration of test and simulation, serving the common purpose of how better to engineer and develop products.

This idea of test and simulation integration is not new, especially for noise and vibration, and for thermal systems development. Solution suppliers like LMS (acquired by Siemens PLM Software) and their customers understand very well that an integrated test and simulation approach is essential to best address product system issues like noise performance.

So, now what?

As CIMdata often observes, it’s a three-legged stool: People & organization, process, and technology.


Technology is really not an issue here.

So far as the people are concerned, it is important to send the message that test and simulation are NOT in competition, a message that especially needs to go to senior and mid-management. Test and simulation are complementary and should be synergistic activities.

For the organization, try to have test and simulation under the same roof. In large organizations, yes, that is not practical. But the automobile crash safety model is worth emulating: How about merging test and simulation for product durability, or for noise and vibration, or for thermal performance?

For the process, there are a number of things to do:

  1. Make a validation plan, and specify how to execute it. The best organizations know how to cascade customer requirements down to required simulations and testing for validation.
  2. Write and follow standard work. Understand the purpose and contribution of each test and every simulation.
  3. Make a capability matrix. How good is your test, how good is your simulation? Decide how best to go about engineering, with a judicious mix of both. For simulation, CIMdata has suggested the following scale:


    • Level 0: Simulation has no capability
    • Level 1: Simulation has some capability, but is not useful
    • Level 2: Simulation can be used to sort, but not select, alternatives
    • Level 3: Simulation is predictive, but requires physical testing to calibrate models
    • Level 4: Simulation is predictive, confirmation testing is required
    • Level 5: Simulation is predictive, no confirmation tests are required
    • Level 6: Simulation is more capable than test (Six Sigma, robust design, stochastics, optimization, …)
  4. Close the loop. Charter and assign technical experts to provide feedback between test and simulation to continuously improve both. Too often we do the simulation, make a decision, and then validate the product by test. But, no one ever comes back to compare the test against the previous simulation. This is a lost (and possibly very expensive) learning cycle.
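The capability matrix in step 3 is just an ordered scale, which makes it easy to encode and apply when deciding where physical testing is still mandatory. A minimal sketch (the class and level names are my own illustration of CIMdata's scale, not an official API):

```python
from enum import IntEnum

class SimCapability(IntEnum):
    """CIMdata's suggested 0-6 scale for simulation capability."""
    NO_CAPABILITY = 0          # Simulation has no capability
    NOT_USEFUL = 1             # Some capability, but not useful
    SORT_ONLY = 2              # Can sort, but not select, alternatives
    PREDICTIVE_CALIBRATED = 3  # Predictive; physical test calibrates models
    PREDICTIVE_CONFIRMED = 4   # Predictive; confirmation testing required
    PREDICTIVE_STANDALONE = 5  # Predictive; no confirmation tests required
    BEYOND_TEST = 6            # More capable than test (robust design, etc.)

def needs_physical_test(level: SimCapability) -> bool:
    """Below level 5, some physical testing is still required,
    whether for calibration, sorting, or confirmation."""
    return level < SimCapability.PREDICTIVE_STANDALONE
```

A matrix like this, filled in per load case or failure mode, is what lets an organization decide its mix of test and simulation deliberately rather than by budget fights.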

Some best practices

At GM Powertrain, John Givens instituted the practice that simulation should sign off on the acquisition of prototype parts for testing. That was simply to say the appropriate simulations had been done. GMPT was actually solving a slightly different problem: 80% of the prototype component parts they ordered were never tested. They needed a hurdle.

Test and simulation procedures need to be documented as standard work. Appropriate tests and simulations should be planned as part of any product development program. Those tests and simulations should be cross-referenced and reconciled, to ensure that lessons learned are documented and continuous improvement is put in place.

It’s about how best to do engineering.

Do you agree? Let me know!

Keith

Keith Meintjes
