Keith Meintjes, a CIMdata fellow and executive consultant, is a veteran of the auto industry. Before becoming a consultant and industry analyst, he spent three decades at GM, first as a simulation manager and then managing the automaker's global CAE IT infrastructure. For him, many of the headline-making product disasters can be summed up as the failure to identify a failure mode.
“We also have a failure to deliver on the promises of systems engineering,” says Meintjes. “I think proper systems engineering would have allowed us to identify and avoid many of these failure modes.”
With systems engineering, products are simulated and tested with all the disparate components included at the systems level. That means testing is done with mechanical, electrical and software components all in the loop. The last two pieces—electronics and software—take on more critical roles as Internet of Things (IoT) devices increasingly rely on sensors and software to trigger and execute functions powered by chips and processors. Some failure modes may not be uncovered during individual component testing, because they are triggered by the interplay between the electromechanical parts and the control software. Systems-level simulation and testing could expose such failure modes.
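The point can be illustrated with a minimal closed-loop sketch. In the hypothetical example below (all names, parameters and dynamics are illustrative, not drawn from any real product or tool), a thermostat-style software controller drives a heater, but its temperature sensor reports with a lag. Tested in isolation, the plant model is stable and the control logic is trivially correct; only when both run together in the loop does the sensor latency produce a large temperature overshoot.

```python
from collections import deque

def max_overshoot(sensor_delay_steps, dt=0.01, sim_time=60.0):
    """Closed-loop simulation of a heater (plant) plus control software.

    Hypothetical example: the plant is a first-order thermal model, the
    controller is simple on/off logic reading a sensor value that is
    `sensor_delay_steps` simulation steps old. Returns the worst overshoot
    above the setpoint, in degrees.
    """
    ambient, setpoint = 20.0, 70.0
    heater_power = 10.0    # degC/s temperature rise when heater is on
    loss_coeff = 0.1       # cooling rate toward ambient, per second
    temp = ambient
    # Fixed-length buffer: the oldest entry is what the sensor reports now.
    readings = deque([temp] * (sensor_delay_steps + 1),
                     maxlen=sensor_delay_steps + 1)
    overshoot = 0.0
    for _ in range(int(sim_time / dt)):
        measured = readings[0]                    # stale sensor reading
        heat = heater_power if measured < setpoint else 0.0  # control software
        temp += (heat - loss_coeff * (temp - ambient)) * dt  # plant dynamics
        readings.append(temp)                     # oldest reading drops out
        overshoot = max(overshoot, temp - setpoint)
    return overshoot
```

Run component-style (near-zero sensor delay) the loop barely overshoots; with a two-second lag (`sensor_delay_steps=200` at `dt=0.01`) the same hardware model and the same control logic overshoot by several degrees. That gap between the parts passing in isolation and the assembled system misbehaving is exactly the class of failure mode systems-level testing is meant to surface.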
Systems engineering as a concept has been around for quite some time, but most of the software supporting the process began to appear about two decades ago. Though the engineering and manufacturing communities have shown growing interest, they haven't embraced these tools widely.
The reason? "It's the complexity of the tools," says Meintjes. "Tools like SysML [a standardized graphical modeling language for systems engineering] are not executable, very difficult to use and require a large number of people at the end user companies to understand it."
To read the full article by Kenneth Wong, please click here.