Simulation Governance

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Digital transformation, digital twins, certification by analysis, and AI-assisted simulation projects are generating considerable interest in engineering communities. For these initiatives to succeed, the reliability of numerical simulations must be assured. This can happen only if management understands that simulation governance is an essential prerequisite for success and undertakes to establish and enforce quality control standards for all simulation projects.

The idea of simulation governance is so simple that it is self-evident: Management is responsible for the exercise of command and control over all aspects of numerical simulation. The formulation of technical requirements is not at all simple, however. A notable obstacle is the widespread confusion of the practice of finite element modeling with numerical simulation. This misconception is fueled by marketing hyperbole, falsely suggesting that purchasing a suite of software products is equivalent to outsourcing numerical simulation.  

At present, a very substantial unrealized potential exists in numerical simulation. Simulation technology has matured to the point where management can realistically expect the reliability of predictions based on numerical simulations to match the reliability of observations in physical experimentation. This will require management to upgrade simulation practices through exercising simulation governance.

The Kuhn Cycle

The development of numerical simulation technology falls under the broad category of scientific research programs, a category that also encompasses model development projects in the engineering and applied sciences. By and large, these programs follow the pattern of the Kuhn Cycle [1], illustrated schematically in blue in Fig. 1:

Figure 1: Schematic illustration of the Kuhn cycle.

A period of pre-science is followed by normal science. In this period, researchers have agreed on an explanatory framework (paradigm) that guides the development of their models and algorithms.  Program (or model) drift sets in when problems are identified for which solutions cannot be found within the confines of the current paradigm. A program crisis occurs when the drift becomes excessive and attempts to remove the limitations are unsuccessful. Program revolution begins when candidates for a new approach are proposed. This eventually leads to the emergence of a new paradigm, which then becomes the explanatory framework for the new normal science.

The Development of Finite Element Analysis

The development of finite element analysis followed a similar pattern. The period of pre-science began in 1956 and lasted until about 1970. In this period, engineers who were familiar with the matrix methods of structural analysis were trying to extend those methods to stress analysis. The formulation of the algorithms was based on intuition; testing was based on trial and error, and arguing from the particular to the general (a logical fallacy) was common.

Normal science began in the early 1970s when the mathematical foundations of finite element analysis were addressed in the applied mathematics community. By that time, the major finite element modeling software products in use today were under development. Those development efforts were largely motivated by the needs of the US space program. The developers adopted a software architecture based on pre-science thinking. I will refer to these products as legacy FE software: For example, NASTRAN, ANSYS, MARC, and Abaqus are all based on the understanding of the finite element method (FEM) that existed before 1970.

Mathematical analysis of the finite element method identified a number of conceptual errors. However, the conceptual framework of mathematical analysis and the language used by mathematicians were foreign to the engineering community, and there was no meaningful interaction between the two communities.

The scientific foundations of finite element analysis were firmly established by 1990, and finite element analysis became a branch of applied mathematics. This means that, for a very large class of problems that includes linear elasticity, the conditions for stability and consistency were established, estimates were obtained for convergence rates, and solution verification procedures were developed, as were elegant algorithms for superconvergent extraction of quantities of interest such as stress intensity factors. I was privileged to have worked closely with Ivo Babuška, an outstanding mathematician who is rightfully credited for many key contributions.

Normal science continues in the mathematical sphere, but it has no influence on the practice of finite element modeling. As indicated in Fig. 1, the practice of finite element modeling is rooted in the pre-science period of finite element analysis, and having bypassed the period of normal science, it had reached the stage of program crisis decades ago.

Evidence of Program Crisis

The knowledge base of the finite element method in the pre-science period was a small fraction of what it is today. The technical differences between finite element modeling and numerical simulation are addressed in one of my earlier blog posts [2]. Here, I note that decision-makers who have to rely on computed information have reasons to be disappointed. For example, the Air Force Chief of Staff,  Gen. Norton Schwartz, was quoted in Defense News, 2012 [3] saying: “There was a view that we had advanced to a stage of aircraft design where we could design an airplane that would be near perfect the first time it flew. I think we actually believed that. And I think we’ve demonstrated in a compelling way that that’s foolishness.”

General Schwartz expected that the reliability of predictions based on numerical simulation would be similar to the reliability of observations in physical tests. This expectation was not unreasonable considering that by that time, legacy FE software tools had been under development for more than 40 years. What the general did not know was that, while the user interfaces greatly improved and impressive graphic representations could be produced, the underlying solution methodology was (and still is) based on pre-1970s thinking.

As a result, efforts to integrate finite element modeling with artificial intelligence and to establish digital twins based on finite element modeling will surely end in failure.

Paradigm Change Is Necessary

Paradigm change is never easy. Max Planck observed: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” This is often paraphrased, saying: “Progress occurs one funeral at a time.” Planck was referring to the foundational sciences and changing academic minds.  The situation is more challenging in the engineering sciences, where practices and procedures are often deeply embedded in established workflows and changing workflows is typically difficult and expensive.

What Should Management Do?

First and foremost, management should understand that simulation is one of the most abused words in the English language. Furthermore:

  • Treat any marketing claim involving simulation with an extra dose of skepticism. Prior to undertaking projects in the areas of digital transformation, certification by analysis, digital twins, and AI-assisted simulation, ensure that the mathematical models produce reliable predictions.
  • Recognize the difference between finite element modeling and numerical simulation.
  • Understand that mathematical models produce reliable predictions only within their domains of calibration.
  • Treat model form and numerical approximation errors separately and require error control in the formulation and application of mathematical models.
  • Do not accept computed data without error metrics.
  • Understand that model development projects are open-ended.
  • Establish conditions favorable for the evolutionary development of mathematical models.
  • Become familiar with the concepts and terminology in reference [4]. For additional information on simulation governance, I recommend ESRD’s website.


References

[1] Kuhn, T. S., The Structure of Scientific Revolutions. University of Chicago Press, 1997.

[2] Szabó B. Why Finite Element Modeling is Not Numerical Simulation? ESRD Blog. November 2, 2023. https://www.esrd.com/why-finite-element-modeling-is-not-numerical-simulation/.

[3] Weisgerber, M. DoD Anticipates Better Price on Next F-35 Batch, Gannett Government Media Corporation, 8 March 2012. [Online]. Available: https://tinyurl.com/282cbwhs.

[4] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications. Vol. 162, pp. 206–214, 2024. 


Digital Transformation

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Digital transformation is a multifaceted concept with plenty of room for interpretation. Its common theme emphasizes the proactive adoption of digital technologies to reshape business practices with the goal of gaining a competitive edge. The scope, timeline, and resource allocation of digital transformation projects depend on the specific goals and objectives. Here, I address digital transformation in the engineering sciences, focusing on numerical simulation.

Digital Technologies in the Engineering Sciences

Digital technologies have been integrated into the engineering sciences since the 1950s.  The adoption process has not been uniform across all disciplines. Some fields (like aerospace) adopted technologies early, while others were slower to change. The development and adoption of these technologies are ongoing. Engineering today is increasingly digital, and innovations are constantly changing the way engineers approach their work. Here are some important milestones:

Early Adoption (1950s-1970s)

  • Mainframe computers were used for engineering calculations that would have been impossible or extremely time-consuming to perform by hand.
  • Numerical control (NC) machines used punched tape or cards to control tool movements, streamlining machining processes.
  • Early Computer-Aided Design (CAD) systems revolutionized drafting in the 1960s. They allowed engineers to create and manipulate drawings on a computer, making design iterations much faster than previously possible.

Period of Rapid Growth (1980s-1990s)

  • Affordable Personal Computers (PCs) made computing power accessible to individual engineers and small firms.
  • Development of CAD software brought 3D modeling from specialized applications into mainstream design.
  • Finite Element Modeling software became commercially available, allowing engineers to perform structural and strength calculations.
  • The mathematical foundations of the finite element method (FEM) were established, and finite element analysis (FEA) became a branch of Applied Mathematics.

Post-Millennial Development  (2000s-Present)

  • Cloud-based solutions offer scalable computing power and collaboration tools, making complex calculations accessible without massive hardware investment.
  • Building Information Modeling (BIM) revolutionized the architecture, engineering, and construction (AEC) industries.
  • Internet of Things (IoT): Networked sensors and devices provide engineers with real-time data to monitor structures, predict maintenance needs, and optimize operations.
  • Additive Manufacturing (3D Printing) allows for the rapid creation of complex prototypes and even functional end-use parts.

Given that digital technologies have been successfully integrated into engineering practice, it may appear that not much else needs to be done. However, important challenges remain, and there are many opportunities for improvement. This is discussed next.

Outlook: Opportunities and Challenges

Bearing in mind that the primary goal of digital transformation is to enhance competitiveness, in the field of numerical simulation, this translates to improving the predictive performance of mathematical models. Ideally, we aim to reach a reliability level in model predictions comparable to that of physical experimentation. From the technological point of view, this goal is achievable: We have the theoretical understanding of how to maximize the predictive performance of mathematical models through the application of verification, validation, and uncertainty quantification procedures. Furthermore, advancements in explainable artificial intelligence (XAI) technology can be utilized to optimize the management of numerical simulation projects so as to maximize their reliability and effectiveness.  

The primary challenge in the field of engineering sciences is that further progress in digital transformation will require fundamental changes in how numerical simulation is currently understood by the engineering community and how it is practiced in industrial settings. It is essential to keep in mind the differences between finite element modeling and numerical simulation. I explained the reasons for this in an earlier blog post [1]. The art of finite element modeling will have to be replaced by the science of finite element analysis, and the verification, validation, and uncertainty quantification (VVUQ) procedures will have to be applied [2].

Paradoxically, the successful early integration of finite element modeling practices and software tools into engineering workflows now impedes attempts to utilize technological advances that occurred after the 1970s. The software architecture of legacy finite element codes was substantially set by 1970, based on the understanding of the finite element method that existed at that time. Limitations of that architecture prevented the incorporation of subsequent advances, such as a posteriori error estimation in terms of the quantities of interest and control of model form errors, both of which are essential for meeting the reliability requirements of numerical simulation. Abandoning finite element modeling practices and embracing the methodology of numerical simulation technology is a major challenge for the engineering community.

The “I Believe” Button

An ANSYS blog [3] tells the story of a presentation made to an A&D executive. The presentation was intended to make a case for transforming his department using digital engineering. At the end of the presentation, the executive pointed to a coaster on his desk. “See this? That’s the ‘I believe’ button. I can’t hit it. I just can’t hit it. Help me hit it.” Clearly, the executive was asking for convincing evidence that the computed information was sufficiently reliable to support decision-making in his department. Put another way, he did not have the courage to sign the blueprint on the basis of data generated by digital engineering. What it takes to gather such courage was addressed in one of my earlier blogs [4]. Reliability considerations significantly influence the implementation of simulation process data management (SPDM).

Change Is Necessary

The frequently cited remark by W. Edwards Deming: “Change is not obligatory, but neither is survival,” reminds us of the criticality of embracing change.


References

[1] Szabó B. Why Finite Element Modeling is Not Numerical Simulation? ESRD Blog. November 2, 2023.
https://www.esrd.com/why-finite-element-modeling-is-not-numerical-simulation/
[2] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications. 162 pp. 206–214, 2024. The publisher is providing free access to this article until May 22, 2024. Anyone may download it without registration or fees by clicking on this link:
https://authors.elsevier.com/c/1isOB3CDPQAe0b
[3] Bleymaier, S. Hit the “I Believe” Button for Digital Transformation. ANSYS Blog. June 14, 2023. https://www.ansys.com/blog/believe-in-digital-transformation
[4] Szabó B. Where do you get the courage to sign the blueprint? ESRD Blog. October 6, 2023.
https://www.esrd.com/where-do-you-get-the-courage-to-sign-the-blueprint/


Digital Twins

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The idea of a digital twin originated at NASA in the 1960s as a “living model” of the Apollo program. When Apollo 13 experienced an oxygen tank explosion, NASA utilized multiple simulators and extended a physical model of the spacecraft to include digital simulations, creating a digital twin. This twin was used to analyze the events leading up to the accident and investigate ideas for a solution. The term “digital twin” was coined by NASA engineer John Vickers much later. While the term is commonly associated with modeling physical objects, it is also employed to represent organizational processes. Here, we consider digital twins of physical entities only.

Digital Twins: An Overview

An overview of the current understanding of the idea of digital twins at NASA is available in a keynote presentation delivered in 2021 [1]. This presentation contains the following quote from reference [2]:

“The Digital Twin (DT) is a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. At its optimum, any information that could be obtained from inspecting a physical manufactured product can be obtained from its Digital Twin.”

I think that this is closer to being an aspirational statement than a functional definition of digital twins.  On the positive side, this statement articulates that the reliability of the results of the simulation should be comparable to that of a physical experiment. Note that this is possible only when mathematical models are used within their domains of calibration [3]. On the negative side, the description of a product “from the micro atomic level to the macro geometrical level” is neither necessary nor feasible. The goal of a simulation project is not to describe a physical system from A to Z but rather to predict the quantities of interest, such as expected fatigue life, margins of safety, limit load, deformation, natural frequency, and the like. In view of this, I propose the following definition:

“A Digital Twin (DT) is a set of mathematical models formulated to predict quantities of interest that characterize the functioning of a potential or actual manufactured product. When the mathematical models are used within their domains of calibration, the reliability of the predictions is comparable to that of a physical experiment.”

The set of mathematical models may comprise a single model of a component or several interacting component models. The motivation for creating digital twins typically comes from the requirements of product lifecycle management: High-value assets are monitored throughout their lifecycles, and the models that constitute a digital twin are updated with new data as they become available. This fits into the framework of model development projects discussed in one of my blogs, “Model Development in the Engineering Sciences,” and in greater detail in reference [3]. An essential attribute of any mathematical model is its domain of calibration.
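The update cycle described above can be sketched in a few lines of Python. Everything in the sketch is hypothetical (the class name, the parameters, and the way the calibration intervals are grown); it only illustrates the bookkeeping implied by the definition: a digital twin holds calibrated models together with object-specific data, and each new measurement from the monitored asset extends the recorded history and the domain of calibration.

```python
from dataclasses import dataclass, field

@dataclass
class TwinComponentModel:
    """One member of the set of mathematical models constituting a digital twin
    (hypothetical sketch, not a prescribed architecture)."""
    parameters: dict                              # calibrated physical properties
    calibration_domain: dict                      # variable name -> (lower, upper)
    history: list = field(default_factory=list)   # object-specific data over the lifecycle

    def update(self, measurement: dict) -> None:
        """Record new asset-specific data and widen the domain of calibration to
        cover the observed conditions. A real implementation would also
        recalibrate the parameters against the accumulated history."""
        self.history.append(measurement)
        for name, value in measurement.items():
            lo, hi = self.calibration_domain.get(name, (value, value))
            self.calibration_domain[name] = (min(lo, value), max(hi, value))

# Hypothetical usage: a strain measurement taken at a temperature outside the
# original calibration interval extends that interval.
dam_block = TwinComponentModel(parameters={"E_eff": 30.0e9},
                               calibration_domain={"temperature": (-5.0, 30.0)})
dam_block.update({"temperature": 34.2, "measured_strain": 1.8e-4})
```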

Example 1: Component Twin

The Single Fastener Analysis Tool (SFAT) is a smart application engineered for comprehensive analyses of single and double shear joints of metal or composite plates. It also serves as an example of a component twin and highlights the technical challenges involved in the development of digital twins.

Figure 1. Single Fastener Analysis Tool (SFAT). Examples of use cases.

SFAT offers the flexibility to model laminates either ply-by-ply or as homogenized entities. It can accommodate various types of fastener heads, such as protruding and countersunk, including those with hollow shafts. It is capable of supporting different fits, such as neat, interference, and clearance.

SFAT also provides additional input options to account for factors like shimmed and unshimmed gaps, bushings, and washers. The application allows for the specification of shear load and fastener pre-load as loading conditions. It provides estimates of the errors of approximation in terms of the quantities of interest.

Example 2: Asset Twin

A good example of asset twins is the structural health monitoring of large concrete dams. Following the collapse of the Malpasset dam in Provence, France, in 1959, the World Bank mandated that all dam projects seeking financial backing must undergo modeling and testing at the Experimental Institute for Models and Structures in Bergamo, Italy (ISMES). Subsequently, ISMES was commissioned to develop a system that would monitor the structural health of large dams. The dams would be instrumented, and a numerical simulation framework, now called a digital twin, would be used to evaluate anomalies indicated by the instruments.

It was understood that numerical approximation errors would have to be controlled to small tolerances to ensure that they were negligibly small in comparison with the errors in measurements. To perform the calculations, a finite element program based on the p-version was written at ISMES in the second half of the 1970s under the direction of Dr. Alberto Peano, my former D.Sc. student. That program is still in use today under the name FIESTA [4].

Simulation Governance: Essential for Digital Twin Creation

Creating digital twins encompasses all aspects of model development, necessitating separate treatment of the model form and approximation errors. In other words, the verification, validation, and uncertainty quantification (VVUQ) procedures have to be applied. The model must be updated and recalibrated when new ideas are proposed or new data become available. The only difference is that in the case of digital twins, the updates involve individual object-specific data collected over the life span of the physical object.

Model development projects are classified as progressive, stagnant, and improper. A model development project is progressive if the domain of calibration is increasing, stagnant if it is not increasing, and improper if the problem-solving machinery is not consistent with the formulation of the mathematical model or lacks the ability to support solution verification [3]. The goal of simulation governance is to ensure that digital twin projects are progressive. Unfortunately, owing to a lack of simulation governance, the large majority of model development projects are improper, and hence, most digital twins fail to meet the required standards of reliability.


References

[1]  Allen, D. B. Digital Twins and Living Models at NASA. Keynote presentation at the ASME Digital Twin Summit. November 3, 2021.

[2] Grieves, M. and Vickers, J. Digital Twin: Mitigating Unpredictable, Undesirable Emergent Behavior in Complex Systems. In: Transdisciplinary Perspectives on Complex Systems. F-J. Kahlen, S. Flumerfelt and A. Alves (eds) Springer International Publishing, Switzerland, pp. 85-113, 2017.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications. 162 pp. 206–214, 2024.  The publisher is providing free access to this article until May 22, 2024.  Anyone may download it without registration or fees by clicking on this link: https://authors.elsevier.com/c/1isOB3CDPQAe0b

[4] Angeloni, P., Boccellato, R., Bonacina, E., Pasini, A., Peano, A.  Accuracy Assessment by Finite Element P-Version Software. In: Adey, R.A. (ed) Engineering Software IV. Springer, Berlin, Heidelberg, 1985. https://doi.org/10.1007/978-3-662-21877-8_24


Not All Models Are Wrong

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


I never understood the statement: “All models are wrong, but some are useful”, attributed to George E. P. Box, a statistician, quoted in many papers and presentations. If that were the case, why should we try to build models and how would we know when and for what purposes they may be useful? We construct models with the objective of making reliable predictions, the degree of reliability being comparable to that of a physical experiment.

Consider, for example, the problem in Fig. 1 showing a sub-assembly of an aircraft structure. The quantity of interest is the margin of safety: Given multiple load conditions and design criteria, estimate the minimum value of the margin of safety and show that the numerical approximation error is less than 5%.   We must have sufficient reason to trust the results of simulation tasks like this.

Figure 1: Sub-assembly of an aircraft structure.

Trying to understand what George Box meant, I read the paper in which he supposedly made the statement that all models are wrong [1], but I did not find it very enlightening. Nor did I find that statement in its often-quoted form. What I found is this non sequitur: “Since all models are wrong the scientist must be alert to what is importantly wrong.” This makes the matter much more complicated: Now we have to classify wrongness into two categories: important and unimportant. By what criteria? – That is not explained.

Box did not have the same understanding as we do of what a mathematical model is. This is evidenced by the sentence: “In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless.” Our goal is not to model the “real world”, a vague concept, but to model specific aspects of physical reality, the quantities of interest having been clearly defined as, for example, in the case of the problem shown in Fig. 1. Our current understanding of mathematical models is based on the concept of model-dependent realism, which was developed well after Box’s 1976 paper was written.

Model-Dependent Realism

The term model-dependent realism was introduced by Stephen Hawking and Leonard Mlodinow in their 2010 book, The Grand Design [2] but the distinction between physical reality and ideas of physical reality is older. For example, Wolfgang Pauli wrote in 1948: “The layman always means, when he says `reality’ that he is speaking of something self-evidently known; whereas to me it seems the most important and exceedingly difficult task of our time is to work on the construction of a new idea of reality.” [From a letter to Markus Fierz.]

If two different models describe a set of physical phenomena equally well then both models are equally valid: It is meaningless to speak about “true reality”. In Hawking’s own words [3]: “I take the positivist viewpoint that a physical theory is just a mathematical model and that it is meaningless to ask whether it corresponds to reality. All that one can ask is that its predictions should be in agreement with observation.” In other words, mathematical models are, essentially, phenomenological models.

What is a Mathematical Model?

A mathematical model is an operator that transforms one set of data D, the input, into another set, the quantities of interest F. In shorthand notation we have:

\boldsymbol D\xrightarrow[(I,\boldsymbol p)]{}\boldsymbol F,\quad (\boldsymbol D, \boldsymbol p) \in ℂ \quad (1)

where the right arrow represents the mathematical model. The letters I and p under the right arrow indicate that the transformation involves an idealization (I) as well as parameters (physical properties) p that are determined through calibration experiments. Restrictions on D and p define the domain of calibration ℂ. The domain of calibration is an essential feature of any mathematical model [4], [5].

Most mathematical models used in engineering have the property that the quantities of interest F continuously depend on D and p. This means that small changes in D and/or p will result in correspondingly small changes in F which is a prerequisite to making reliable predictions.
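To make the notation in equation (1) concrete, here is a minimal Python sketch. The beam example, the parameter names, and the interval values are invented for illustration only; the point is that the model is an operator acting on (D, p) and that it should refuse to predict outside its domain of calibration ℂ.

```python
from dataclasses import dataclass

@dataclass
class DomainOfCalibration:
    """Intervals of the input data and parameters on which the model was calibrated."""
    bounds: dict  # variable name -> (lower, upper)

    def contains(self, values: dict) -> bool:
        return all(lo <= values[name] <= hi for name, (lo, hi) in self.bounds.items())

def max_bending_stress(D: dict, p: dict, C: DomainOfCalibration) -> float:
    """Hypothetical instance of (1): the operator maps the input data D and the
    calibrated parameter p to the quantity of interest F = M*c/I for a linearly
    elastic beam section. Predictions are refused outside the domain of calibration C."""
    if not C.contains({**D, **p}):
        raise ValueError("inputs outside the domain of calibration; prediction not reliable")
    return D["moment"] * D["c"] / p["I_eff"]

# Illustrative use: the intervals reflect the range covered by calibration experiments.
C = DomainOfCalibration({"moment": (0.0, 5.0e3), "c": (0.01, 0.05), "I_eff": (1.0e-6, 1.0e-4)})
F = max_bending_stress(D={"moment": 2.0e3, "c": 0.02}, p={"I_eff": 4.0e-5}, C=C)
```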

To ensure that the predictions based on a mathematical model are reliable, it is necessary to control two types of error: The model form error and the numerical approximation errors.

Model Form Errors

The formulation of mathematical models invariably involves making restrictive assumptions such as neglecting certain geometric features, idealizing the physical properties of the material, idealizing boundary conditions, neglecting the effects of residual stresses, etc. Therefore, any mathematical model should be understood to be a special case of a more comprehensive model. This is the hierarchic view of models.

To test whether a restrictive assumption is acceptable for a particular application, it is necessary to estimate the influence of that assumption on the quantities of interest and, if necessary, revise the model. An exploration of the influence of modeling assumptions on the quantities of interest is called virtual experimentation [6]. Simulation software tools must have the capability to support virtual experimentation.

Approximation Errors

Approximation errors occur when the quantities of interest are estimated through a numerical process.  This means that we get a numerical approximation to F, denoted by Fnum. It is necessary to show that the relative error in Fnum does not exceed an allowable value τall:

| \boldsymbol F - \boldsymbol F_{num} |/|\boldsymbol F| \le \tau_{all} \quad (2)

This is the requirement of solution verification. To meet this requirement, it is necessary to obtain a converging sequence of numerical solutions with respect to increasing degrees of freedom [6].
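A minimal sketch of this verification step, with purely illustrative numbers: assuming algebraic convergence and a constant ratio between successive numbers of degrees of freedom, the limit value F is estimated by extrapolation from the last three computed values, and the estimated relative error of the finest solution is compared with the tolerance in (2).

```python
def solution_verification(F_num, tau_all):
    """Estimate the limit F from the last three members of a converging sequence
    F_num (computed with a constant refinement ratio of the degrees of freedom),
    assuming algebraic convergence F_num ≈ F + k*N**(-beta). Returns the
    extrapolated value, the estimated relative error of the finest solution,
    and whether requirement (2) is met."""
    F1, F2, F3 = F_num[-3:]
    rate = (F1 - F2) / (F2 - F3)            # equals r**beta for refinement ratio r
    F_est = F3 + (F3 - F2) / (rate - 1.0)   # extrapolated estimate of the exact value
    rel_err = abs(F_est - F3) / abs(F_est)
    return F_est, rel_err, rel_err <= tau_all

# Illustrative sequence of a quantity of interest from three successive discretizations:
F_est, rel_err, ok = solution_verification([10.92, 11.48, 11.62], tau_all=0.05)
```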

Model Development Projects

The formulation of mathematical models is a creative, open-ended activity, guided by insight, experience, and personal preferences. Objective criteria are used to validate and rank mathematical models [4], [5]. 

Model development projects have been classified as progressive, stagnant, and improper [5]. A model development project is progressive if the domain of calibration is increasing, stagnant if the domain of calibration is not increasing, and improper if one or more algorithms are inconsistent with the formulation or the problem-solving method does not have the capability to estimate and control the numerical approximation errors in the quantities of interest. The most important objective of simulation governance is to provide favorable conditions for the evolutionary development of mathematical models and to ensure that the procedures of verification, validation and uncertainty quantification (VVUQ) are properly applied.

Not All Models Are Wrong, but Many of Them Are…

Box’s statement that all models are wrong is not correct. Models, developed under the discipline of VVUQ, can be relied on to make correct predictions within their domains of calibration. However, model development projects lacking the discipline of VVUQ tend to produce wrong models. And there are models, not tethered to scientific principles and methods, that are not even wrong.


References

[1] Box, G. E. P. Science and Statistics. Journal of the American Statistical Association, Vol. 71, No. 356, pp. 791-799, 1976.

[2] Hawking, S. and Mlodinow, L. The Grand Design. Random House 2010.

[3] Hawking, S. and Penrose, R. The Nature of Space and Time. Princeton University Press, 2010.

[4] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp. 75-86, 2021 [open source].

[5] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications. 162 pp. 206–214, 2024. Note: the publisher is providing free access to this article until May 22, 2024.  Anyone may download it without registration or fees by clicking on this link: https://authors.elsevier.com/c/1isOB3CDPQAe0b.

[6] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation. 2nd edition, John Wiley & Sons, Inc., 2021.


Certification by Analysis (CbA) – Are We There Yet?

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


While reading David McCullough’s book “The Wright Brothers”, a fascinating story about the development of the first flying machine, this question occurred to me: Would the Wright brothers have succeeded if they had used substantially fewer physical experiments and relied on finite element modeling instead?  I believe that the answer is: no.  Consider what happened in the JSF program.

Lessons from the JSF Program

In 1992, eighty-nine years after the Wright brothers’ Flying Machine first flew at Kitty Hawk, the US government decided to fund the design and manufacture of a fifth-generation fighter aircraft that combines air-to-air, strike, and ground attack capabilities. Persuaded that numerical simulation technology was sufficiently mature, the decision-makers permitted the manufacturer to concurrently build and test the aircraft, known as the Joint Strike Fighter (JSF). The JSF, also known as the F-35, was first flown in 2006. By 2014, the program was 163 billion dollars over budget and seven years behind schedule.

Two senior officers illuminated the situation in these words:

Vice Admiral David Venlet, the Program Executive Officer, quoted in AOL Defense in 2011 [1]: “JSF’s build and test was a miscalculation…. Fatigue testing and analysis are turning up so many potential cracks and hot spots in the Joint Strike Fighter’s airframe that the production rate of the F-35 should be slowed further over the next few years… The cost burden sucks the wind out of your lungs“.

Gen. Norton Schwartz, Air Force Chief of Staff, quoted in Defense News, 2012 [2]: “There was a view that we had advanced to a stage of aircraft design where we could design an airplane that would be near perfect the first time it flew. I think we actually believed that. And I think we’ve demonstrated in a compelling way that that’s foolishness.”

These officers believed that the software tools were so advanced that testing would confirm the validity of design decisions based on them. This turned out to be wrong. However, their mistaken belief was not entirely unreasonable: by the start of the JSF program, commercial finite element analysis (FEA) software products were 30+ years old, so the officers could reasonably have assumed that the reliability of these products had greatly improved, as had the hardware systems and visualization tools capable of creating impressive color images, tacitly suggesting that the underlying methodology could guarantee the quality and reliability of the output quantities. Indeed, there were very significant advancements in the science of finite element analysis, which became a bona fide branch of applied mathematics in that period. The problem was that commercial FEA software tools did not keep pace with those important scientific developments.

There are at least two reasons for this: First, the software architecture of the commercial finite element codes was based on the thinking of the 1960s and 70s, when the theoretical foundations of FEA were not yet established. As a result, several limitations were incorporated. Those limitations kept code developers from incorporating later advancements, such as a posteriori error estimation, advanced discretization strategies, and stability criteria. Second, decision-makers who rely on computed information failed to specify the technical requirements that simulation software must meet, for example, reporting not just the quantities of interest but also their estimated relative errors. To fulfill this key requirement, legacy FE software would have had to be overhauled to such an extent that only their nameplates would have remained the same.

Technical Requirements for CbA

Certification by Analysis (CbA) uses validated computer simulations to demonstrate compliance with regulations, replacing some traditional physical tests. CbA allows for exploring a wide range of design scenarios, accelerates innovation, lowers expenses, and upholds rigorous safety standards. The key to CbA is reliability. This means that the data generated by numerical simulation should be as trustworthy as if they were generated by carefully conducted physical experiments. To achieve that goal, it is necessary to control two fundamentally different types of error, the model form error and the numerical approximation error, and to use the models within their domains of calibration.

Model form errors occur because we invariably make simplifying assumptions when we formulate mathematical models.  For example, formulations based on the theory of linear elasticity include the assumptions that the stress-strain relationship is a linear function, independent of the size of the strain and that the deformation is so small that the difference between the equilibrium equations written on the undeformed and deformed configurations can be neglected.  As long as these assumptions are valid, the linear theory of elasticity provides reliable estimates of the response of elastic bodies to applied loads.  The linear solution also provides information on the extent to which the assumptions were violated in a particular model.  For example, if it is found that the strains exceed the proportional limit, it is advisable to check the effects of plastic deformation.  This is done iteratively until a convergence criterion is satisfied.  Similarly, the effects of large deformation can be estimated.  Model form errors are controlled by viewing any mathematical model as one in a sequence of hierarchic models of increasing complexity and selecting the model that is consistent with the conditions of the simulation.
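The iterative selection described above can be sketched as follows. The solver names and numbers are stand-ins (a real workflow would call the analysis software); the sketch only shows the hierarchic logic: move to the next, more comprehensive model until the quantity of interest no longer changes appreciably.

```python
def hierarchic_solve(models, qoi, tol=0.02):
    """Sketch of the hierarchic view of models: `models` is an ordered list of
    callables, from the simplest formulation to the most comprehensive; `qoi`
    extracts the quantity of interest from a solution. Escalation stops when
    the relative change in the quantity of interest falls below `tol`."""
    F_prev = None
    for solve in models:
        F = qoi(solve())
        if F_prev is not None and abs(F - F_prev) <= tol * abs(F):
            return F, solve.__name__        # model form error judged negligible for this QoI
        F_prev = F
    return F, models[-1].__name__           # best available estimate

# Hypothetical stand-in solvers returning a maximum stress (MPa):
def linear_elasticity():        return {"max_stress": 412.0}
def deformation_plasticity():   return {"max_stress": 398.0}
def large_deformation():        return {"max_stress": 396.5}

F, model_used = hierarchic_solve(
    [linear_elasticity, deformation_plasticity, large_deformation],
    qoi=lambda s: s["max_stress"])
```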

Numerical errors are the errors associated with approximating the exact solution of mathematical problems, such as the equations of elasticity, Navier-Stokes, and Maxwell, and the method used to extract the quantities of interest from the approximate solution.   The goal of solution verification is to show that the numerical errors in the quantities of interest are within acceptable bounds.

The domain of calibration defines the intervals of physical parameters and input data on which the model was calibrated.  This is a relatively new concept, introduced in 2021 [3], that is also addressed in a forthcoming paper [4].  A common mistake in simulation is to use models outside of their domains of calibration.

Organizational Aspects

To achieve the level of reliability in numerical simulation that is necessary for CbA, management will have to implement simulation governance [5] and apply the protocols of verification, validation, and uncertainty quantification.

Are We There Yet?

No, we are not there yet. Although we have made significant progress in controlling errors in model form and numerical approximation, one very large obstacle remains: Management has yet to recognize that they are responsible for simulation governance, which is a critical prerequisite for CbA.


References

[1] Whittle, R. JSF’s Build and Test was ‘Miscalculation,’ Adm. Venlet Says; Production Must Slow. [Online] https://breakingdefense.com/2011/12/jsf-build-and-test-was-miscalculation-production-must-slow-v/ [Accessed 21 February 2024].

[2] Weisgerber, M. DoD Anticipates Better Price on Next F-35 Batch. Gannett Government Media Corporation, 8 March 2012. [Online]. https://tinyurl.com/282cbwhs [Accessed 22 February 2024].

[3] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp.75-86, 2021 [open source].

[4] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  To appear in Computers & Mathematics with Applications in 2024.  The manuscript is available on request.

[5] Szabó, B. and Actis, R. Planning for Simulation Governance and Management:  Ensuring Simulation is an Asset, not a Liability. Benchmark, July 2021.


The Demarcation Problem in the Engineering Sciences

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Generally speaking, philosophers are much better at asking questions than answering them. The question of distinguishing between science and pseudoscience, known as the demarcation problem, is one of their hotly debated issues. Some even argued that the demarcation problem is unsolvable [1]. That may well be true when the question is posed in its broadest generality. However, this question can and must be answered clearly and unequivocally in the engineering sciences.

That is because, in the engineering sciences, we rely on validated models of broad applicability, such as the theories of heat transfer and continuum mechanics, the Maxwell equations, and the Navier-Stokes equations.  Therefore, we can be confident that we are building on a solid scientific foundation. A solid foundation does not guarantee a sound structure, however. We must ensure that the algorithms used to estimate the quantities of interest are also based on solid scientific principles. This entails checking that there are no errors in the formulation, implementation, or application of models.

In engineering sciences, we classify mathematical models as ‘proper’ or ‘improper’ rather than ‘scientific’ or ‘pseudoscientific’. A model is said to be proper if it is consistent with the relevant mathematical theorems that guarantee the existence and, when applicable, the uniqueness of the exact solution. Otherwise, the model is improper. At present, the large majority of models used in engineering practice are improper. Following are examples of frequently occurring types of error, with brief explanations.

Conceptual Errors

Conceptual errors, also known as “variational crimes”, occur when the input data and/or the numerical implementation is inconsistent with the formulation of the mathematical model. For example, in the displacement formulation in two and three dimensions, point constraints are permitted only as rigid body constraints when the body is in equilibrium, point forces are permitted only in the domain of secondary interest [2], and non-conforming elements and reduced integration are not permitted.

When conceptual errors are present, the numerical solution is not an approximation to the solution of the mathematical problem we have in mind, in which case it is not possible to estimate the errors of approximation. In other words, it is not possible to perform solution verification.

Model Form Errors

Model form errors are associated with the assumptions incorporated in mathematical models. Those assumptions impose limitations on the applicability of the model. Various approaches exist for estimating the effects of those limitations on the quantities of interest. The following examples illustrate two such approaches.

Example 1

Linear elasticity problems limit the stresses and strains to the elastic range, the displacement formulation imposes limitations on Poisson’s ratio, and pointwise stresses or strains are considered averages over a representative volume element. This is because the assumptions of continuum theory do not apply to real materials on the micro-scale.

Linear elasticity problems should be understood to be special cases of nonlinear problems that account for the effects of large displacements and large strains and one of many possible material laws. Having solved a linear problem, we can check whether and to what extent the simplifying assumptions were violated, and then we can decide if it is necessary to solve the appropriate nonlinear problem. This is the hierarchic view of models: Each model is understood to be a special case of a more comprehensive model [2].

Remark

Theoretically, one could make the model form error arbitrarily small by moving up the model hierarchy.  In practice, however, increasing complexity in model form entails an increasing number of parameters that have to be determined experimentally. This introduces uncertainties, which increase the dispersion of the predicted values of the quantities of interest.

Example 2

In many practical applications, the mathematical problem is simplified by dimensional reduction. Within the framework of linear elasticity, for instance, we have hierarchies of plate and shell models where the variation of displacements along the normal to the reference surface is restricted to polynomials or, in the case of laminated plates and shells, piecewise polynomials of low order [3]. In these models, boundary layer effects occur. The boundary layers are typically strong at free edges. These effects are caused by edge singularities that perturb the dimensionally reduced solution. The perturbation depends on the hierarchic order of the model. Typically, the goal of computation is strength analysis, that is, estimation of the values of predictors of failure initiation. It must be shown that the predictors are independent of the hierarchic order. This challenging problem is typically overlooked in finite element modeling. In the absence of an analytical tool capable of guaranteeing the accuracy of predictors of failure initiation, it is not possible to determine whether a design rule is satisfied or not.
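The check called for above can be sketched as a small convergence study over the hierarchic order of the dimensionally reduced model. The interface `solve_plate_model(order)` and the numbers are hypothetical; the point is that the failure-initiation predictor is accepted only if it becomes insensitive to further increases of the hierarchic order.

```python
def predictor_vs_hierarchic_order(solve_plate_model, max_order=5, tol=0.01):
    """Compute a failure-initiation predictor with plate models of increasing
    hierarchic order and report whether it has stabilized. The callable
    `solve_plate_model(order)` stands in for the analysis code."""
    values = [solve_plate_model(order) for order in range(1, max_order + 1)]
    stabilized = abs(values[-1] - values[-2]) <= tol * abs(values[-1])
    return values, stabilized

# Illustrative stand-in: a predictor perturbed by edge effects at low order.
values, ok = predictor_vs_hierarchic_order(lambda order: 100.0 + 12.0 / order**2)
```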

Figure 1: T-joint of laminated plates.

Numerical Errors

Since the quantities of interest are computed numerically, it is necessary to verify that the numerical values are sufficiently close to their exact counterparts. The meaning of “sufficiently close” is context-dependent: For example, when formulating design rules, an interpretation of experimental information is involved. It has to be ensured that the numerical error in the quantities of interest is negligibly small in comparison with the size of the experimental errors. Otherwise, preventable uncertainties are introduced in the calibration process.

Realizing the Potential of Numerical Simulation

If we examine a representative sample of mathematical models used in the various branches of engineering, we find that the large majority of them suffer from one or more errors like those described above. In other words, the large majority of models used in engineering practice are improper. There are many reasons for this, chief among them the obsolete notion of finite element modeling, which is deeply entrenched in the engineering community.

As noted in my earlier blog, entitled Obstacles to Progress, the art of finite element modeling evolved well before the theoretical foundations of finite element analysis were established. Engineering books, academic courses, and professional workshops emphasize the practical, intuitive aspects of finite element modeling and typically omit cautioning against variational crimes. Even some of the fundamental concepts and terminology needed for understanding the scientific foundations of numerical simulation are missing. For example, a senior engineer of a Fortune 100 company, with impeccable academic credentials earned more than three decades before, told me that, in his opinion, the exact solution is the outcome of a physical experiment. This statement revealed a lack of awareness of the meaning and relevance of the terms: verification, validation, and uncertainty quantification.

To realize the potential of numerical simulation, management will have to exercise simulation governance [4]. This will necessitate learning to distinguish between proper and improper modeling practices and establishing the technical requirements needed to ensure that both the model form and approximation errors in the quantities of interest are within acceptable bounds.


References

[1] Laudan L. The Demise of the Demarcation Problem. In: Cohen R.S., Laudan L. (eds) Physics, Philosophy and Psychoanalysis. Boston Studies in the Philosophy of Science, vol 76. Springer, Dordrecht, 1983.

[2] Szabό, B. and Babuška, I. Finite Element Analysis. Method, Verification, and Validation (Section 4.1). John Wiley & Sons, Inc., 2021.

[3] Actis, R., Szabó, B. and Schwab, C. Hierarchic models for laminated plates and shells. Computer Methods in Applied Mechanics and Engineering, 172(1-4), pp. 79-107, 1999.

[4] Szabó, B. and Actis, R. Simulation governance: Technical requirements for mechanical design. Computer Methods in Applied Mechanics and Engineering, 249, pp.158-168, 2012.


Where Do You Get the Courage to Sign the Blueprint?

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Mathematical models have become indispensable sources of information on which technical and business decisions are based. It is therefore vitally important for decision-makers to know whether relying on the predictions of mathematical models is justified. When properly used, numerical simulation can be a major corporate asset. However, it can become a major corporate liability if the reliability of predictions is not guaranteed.

Resource Allocation

Project management is responsible for allocating resources to numerical simulation and physical experimentation. Consider the two extreme cases: (a) If the decision is to forgo numerical simulation and rely on experimentation alone, then management is adopting the methodology of the Wright brothers. (b) If the decision is to forgo experiments and rely on finite element modeling alone, then management risks repeating the costly mistakes of the F-35 program. The correct balance depends on the justified degree of confidence in the predictive performance of numerical simulation [1].

Simulation Governance

Simulation governance is a managerial function concerned with the assurance of reliability of information generated by numerical simulation. The term was introduced in 2011 and specific technical requirements were addressed from the perspective of mechanical design in 2012 [2]. At the 2017 NAFEMS World Congress in Stockholm, simulation governance was identified as the first of eight “big issues” in numerical simulation.

A plan for simulation governance has to be tailored to fit the mission of each organization or department within an organization.  We consider three types of mission in the following.  A summary of the main points is presented in the table below.

  • If the mission is the application of established design rules, then the goal is to verify that a quantity of interest does not exceed its allowable value. Simulation governance is concerned with standardization of recurring numerical simulation tasks through the creation of smart applications. Smart applications, also called “simulation apps”, are expert-designed in such a way that the use of those applications does not require expertise in numerical simulation. The preservation and maintenance of corporate know-how and institutional knowledge are among the important objectives of simulation governance. The productivity of newly hired engineers significantly increases if routine simulation procedures are standardized so that applications consistently produce certifiable results. Economic benefits are realized through improved productivity and improved reliability.
Table 1. Mission-dependence of numerical simulation tasks.
  • If the mission is the formulation of design rules, for example, to establish allowable values for a new material system, then the plan should focus on the collection, maintenance, and documentation of experimental data, management of solution and data verification procedures, revision and updating mathematical models in the light of new information collected from physical experiments and field observations. Economic benefits: Substantial savings through reduction of the number of tests on the sub-component, component, sub-assembly, and assembly levels.
  • If the mission is to support condition-based maintenance (CBM), then the goal is to determine the probability that the number of cycles to failure  is smaller than a given number of cycles .  The main activities are: Collection, maintenance, and documentation of fatigue data and unit-specific information on service data, standardization of recurring analysis tasks. Economic benefits: Substantial savings through improved disposition decisions.
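The two quantitative goals named above, checking that a quantity of interest does not exceed its allowable value and estimating the probability that the cycles to failure fall short of a target life, can be written down in a few lines. The sketch below is only an illustration of the checks themselves, not of any particular simulation product or procedure; the function names, the example numbers, and the choice of a lognormal fit for the fatigue data are assumptions made for this example.

```python
import math
from statistics import NormalDist

def margin_of_safety(quantity_of_interest, allowable):
    """Design-rule check: MS = allowable / applied - 1; MS >= 0 passes."""
    return allowable / quantity_of_interest - 1.0

def prob_failure_before(n_target, cycles_to_failure):
    """CBM check: estimate P(N_f < n_target) from test data, assuming a
    lognormal distribution of cycles to failure (an assumption of this
    sketch, not a recommendation)."""
    logs = [math.log(n) for n in cycles_to_failure]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / (len(logs) - 1))
    return NormalDist(mu, sigma).cdf(math.log(n_target))

# Hypothetical numbers, for illustration only
print(margin_of_safety(quantity_of_interest=312.0, allowable=350.0))    # ~0.12
print(prob_failure_before(1.0e5, [2.1e5, 3.4e5, 1.8e5, 2.9e5, 2.5e5]))  # ~1.6e-4
```

In a governed workflow, checks of this kind would be wrapped inside a smart application so that the pass/fail criteria, units, and data sources are standardized rather than re-implemented by each analyst.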

Observe that solution, data, and code verification are common to all three types of mission.  Validation and uncertainty quantification are performed in model development projects.

Recognizing that technology advances and the available information increases over time, planning must incorporate data management and systematic updates of simulation practices so as to take advantage of new data, ideas, and technology. Model development projects are open-ended. Validation and the definition of the domain of calibration are conditional on the available data. Since the available data increase over time, and new ideas are likely to be proposed, there will be opportunities to revise and update mathematical models. No one has the final word in model development [3].

Questions for Management

  1. Does the engineering team possess the technical expertise and software tools to perform numerical simulation projects effectively?
  2. Are mathematical models being defined independently of the way numerical approximations are obtained? – A common mistake is to conflate the definition of a model with its numerical solution, as in “finite element modeling”.
  3. Are the errors in numerical approximation properly estimated, controlled, and reported?
  4. Physical testing is necessarily tied to the specific test conditions. Generalization to a larger set of conditions requires a mathematical model, and testing without a plan to generalize the results does not make sense. – Are physical testing projects properly planned, executed, and analyzed in your organization?
  5. Do decision-makers have sufficient confidence in the predictive performance of mathematical models to reduce the number and complexity of physical tests through reliance on numerical simulation?
  6. Are the experimental data properly documented and archived?
  7. How well are the various professional skills needed for successful execution of numerical simulation projects coordinated?
  8. Are mathematical models properly calibrated and their domains of calibration properly defined and documented?
  9. How are new data incorporated into model updates when the data fall (a) within the domain of calibration and (b) outside of the domain of calibration? (A minimal domain-of-calibration check is sketched after this list.)
  10. Are the procedures of verification, validation, and uncertainty quantification (VVUQ) properly and consistently applied?
  11. Have opportunities to improve design workflows through standardization been fully explored?
  12. What is the estimated economic value of your current numerical simulation activities? – Without simulation governance, that value can be a large negative number.
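Questions 8 and 9 become concrete once the domain of calibration is recorded in machine-readable form. The sketch below treats it, for simplicity, as a set of parameter ranges and reports which parameters of a new data point fall outside it; the parameter names and ranges are hypothetical, and real domains of calibration need not be simple rectangular boxes.

```python
# Hypothetical domain of calibration: parameter name -> (lower, upper) bound
DOMAIN_OF_CALIBRATION = {
    "temperature_C": (-55.0, 120.0),
    "stress_ratio_R": (0.0, 0.5),
    "thickness_mm": (1.6, 6.4),
}

def parameters_outside(point, domain=DOMAIN_OF_CALIBRATION):
    """Return the names of the parameters of `point` that fall outside the
    recorded domain of calibration (an empty list means the point is inside)."""
    return [name for name, (lo, hi) in domain.items()
            if not (lo <= point[name] <= hi)]

new_test = {"temperature_C": 140.0, "stress_ratio_R": 0.1, "thickness_mm": 3.2}
outside = parameters_outside(new_test)
if outside:
    print("Outside the domain of calibration:", outside)  # case (b): re-validate
else:
    print("Within the domain of calibration")             # case (a): update calibration
```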

[1] B. Szabó and R. Actis, “Planning for Simulation Governance and Management: Ensuring Simulation is an Asset, not a Liability,” Benchmark, July 2021, pp. 8-12.

[2] B. Szabó and R. Actis, “Simulation governance: Technical requirements for mechanical design,” Computer Methods in Applied Mechanics and Engineering, vol. 249, pp. 158-168, 2012.

[3] B. Szabó and I. Babuška, “Methodology of model development in the applied sciences,” Journal of Computational and Applied Mechanics, vol. 16, no. 2, pp. 75-86, 2021 (open access).

S.A.F.E.R. Numerical Simulation for Structural Analysis in the Aerospace Industry Part 5: An Introduction to StressCheck for High-Fidelity Aero-structure Analysis https://www.esrd.com/safer-numerical-simulation-structural-analysis-part-5/ https://www.esrd.com/safer-numerical-simulation-structural-analysis-part-5/#respond Mon, 02 Apr 2018 20:39:32 +0000 https://esrd.com/?p=6447 In this final post of our "S.A.F.E.R. Numerical Simulation for Structural Analysis in the Aerospace Industry" series, we will profile the stress analysis software product StressCheck®, what makes it different from other FEA software and the applications for which it is used in A&D engineering.[...]]]>
SAINT LOUIS, MISSOURI – April 2, 2018

In our last S.A.F.E.R. Simulation post, we explored the growing importance of Verification and Validation (V&V) as the use of simulation software becomes more widespread, not just among FEA specialists but also among design engineers who are not FEA experts. The emphasis on increased V&V has driven a need for improved Simulation Governance to provide managerial oversight of the methods, standards, best practices, processes, and software needed to ensure the reliable use of simulation technologies by expert and novice alike.

In this final post of our current series, we will profile the stress analysis software product StressCheck and the applications for which it is used in A&D engineering. StressCheck incorporates the latest advances in numerical simulation technology: intrinsic, automatic capabilities for solution verification through the use of hierarchic finite element spaces, and a hierarchic modeling framework for evaluating the effect of simplifying modeling assumptions on the predictions. We will detail what that actually means for engineering users and how StressCheck enables the practice of Simulation Governance by engineering managers to make simulation Simple, Accurate, Fast, Efficient, and Reliable – S.A.F.E.R. – for experts and non-experts alike.

What is StressCheck?

StressCheck live results extraction showing the convergence of maximum stress on a small blend in an imported legacy FEA bulkhead mesh.

StressCheck is an engineering structural analysis software tool developed from its inception by Engineering Software Research & Development (ESRD) to exploit the most recent advances in numerical simulation that support Verification and Validation procedures and enable the practice of Simulation Governance. While StressCheck is based on the finite element method, it implements a different mathematical foundation from that of legacy-generation FEA software: it is based on hierarchic finite element spaces capable of producing a sequence of converging solutions of verifiable computational accuracy. This approach not only improves the quality of analysis results but also reforms the time-consuming and error-prone steps of FEA pre-processing, solving, and post-processing as they have been performed for decades.
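To make “a sequence of converging solutions of verifiable computational accuracy” concrete, the sketch below shows one textbook-style a posteriori estimate that such a sequence permits: assuming the error in strain energy decreases as a power of the number of degrees of freedom, three consecutive solutions suffice to estimate the limit value and hence the relative error in the energy norm. This is offered only as an illustration of the principle, not as a description of StressCheck’s internal algorithms; the degrees of freedom and energies in the example are hypothetical.

```python
import math

def estimate_limit_energy(dof, energy):
    """Estimate the limit (exact) strain energy from three consecutive
    solutions of a hierarchic sequence.

    dof    : degrees of freedom of three consecutive runs (increasing)
    energy : corresponding strain energies, assumed (for this sketch) to
             approach the limit monotonically from below

    Model: U_exact - U_i ~ k * N_i**(-2*beta). The three data points fix
    U_exact, found here by bisection on the rate-matching condition.
    """
    (n1, n2, n3), (u1, u2, u3) = dof, energy

    def rate_mismatch(u_ex):
        r12 = math.log((u_ex - u1) / (u_ex - u2)) / math.log(n2 / n1)
        r23 = math.log((u_ex - u2) / (u_ex - u3)) / math.log(n3 / n2)
        return r12 - r23

    lo = u3 + 1e-12 * abs(u3)        # the limit must exceed the best value
    hi = u3 + 10.0 * (u3 - u1)       # generous upper bracket
    for _ in range(200):             # plain bisection is sufficient here
        mid = 0.5 * (lo + hi)
        if rate_mismatch(lo) * rate_mismatch(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical run data: DOF and strain energy at three consecutive p-levels
dof, energy = [742, 1356, 2278], [12.874, 12.912, 12.921]
u_ex = estimate_limit_energy(dof, energy)
rel_err = math.sqrt((u_ex - energy[-1]) / u_ex)   # est. relative error in energy norm
print(f"estimated limit energy {u_ex:.4f}, relative error ~ {100 * rel_err:.1f}%")
```

The same three-solution data also yield the observed convergence rate, which is why reporting a converging sequence, rather than a single solution, is what makes solution verification possible.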

The origins of StressCheck extend from R&D work performed by ESRD in support of military aircraft programs of the U.S. Department of Defense. The motivation behind the development of StressCheck was to help structural engineers tackle some of the most elusive analysis problems encountered by A&D OEM suppliers and their contracting agencies in the design, manufacture, test, and sustainment of both new and aging aircraft. Historically, many of these problem types required highly experienced analysts using expert-only software tools, and even then the validity of the results depended on the same expert's own assessment.

During the development of StressCheck, ESRD realized that many aerospace contractors were frustrated with the complexity, time, and uncertainty of stress analysis performed using the results of legacy finite element modeling software. As a consequence, it was not uncommon for engineering groups to rely on, or even prefer, design curves, handbooks, empirical methods, look-up tables, previous design calculations, and closed-form solutions. The time required to create, debug, and then tune elaborately constructed and intricately meshed finite element models was simply too great, especially early in the design cycle, when changes to geometry and loads are frequent.

StressCheck was developed to address these deficiencies. Since its introduction, it has been used by every leading U.S. aircraft contractor, along with many of their supply-chain and sustainment partners.

What are the applications for StressCheck in the A&D industry?

StressCheck is ideally suited for engineering analysis problems in solid mechanics which require a high-fidelity solution of a known computational accuracy that is independent of the user’s expertise or the model’s mesh. In the aviation, aerospace, and defense industries these application problem classes include: structural strength analysis, detail stress analysis, buckling analysis, global/local workflows, fastened and bonded joint analysis, composite laminates, multi-body contact, engineered residual stresses, structural repairs, and fatigue and fracture mechanics in support of durability and damage tolerance (DaDT). To explore examples of these applications visit our Applications showcase area and click on any of the featured tiles.

StressCheck is not intended to be a replacement for general-purpose finite element codes used for internal loads modeling of large aero-structures or complete aircraft. In these global loads models, an artisan-like approach of building up a digital structure from an assortment of 2D frame and shell element types, typically of mixed element formulations with incompatible theories, may be sufficient when accuracy beyond that of approximate relative load distributions is unimportant. Most of the strength, stress, and fatigue analyses performed by aerospace structures groups occur downstream of the global loads modeling. Historically, these analysis workflows required a series of models, each progressively adding structural details that had previously been approximated, often crudely, or ignored altogether.

Multi-scale, global-local including multi-body contact analysis of wing rib structure in StressCheck.

With StressCheck, it is now feasible to employ FEA for analysis problems that require modeling large spans of an aero-structure with widely varying geometric dimensions and numerous joints, fasteners, cutouts, material types, and stress concentrations. Previously, with traditional FEA methods, it was often impossible to use solid elements throughout a multi-scale model built on geometry taken directly from CAD data. So much time, and so many tricks, were required to simplify, defeature, approximate, and repair the design topology that engineering managers were reluctant to approve the use of FEA for some analysis types.

Because of its inherent robustness and reliability, StressCheck is also ideal as the solver engine powering a new generation of Simulation Apps which help to democratize the power of simulation. Smart Sim Apps based on StressCheck can help to simplify, standardize, automate, and optimize recurring analysis workflows such that non-expert engineers may employ FEA-based analysis tools with even greater confidence than expert analysts can using legacy software tools.


How is StressCheck’s numerical simulation technology different from that used by legacy or traditional FEA software?

In a previous S.A.F.E.R. Simulation post we exposed the limitations of finite element modeling as it has been practiced to date. Most of these constraints are attributable to decisions made early in the development of the first generation of FEA software, years before high-performance computing was available on the engineer’s desktop. Unfortunately, those limitations became so entrenched in the thinking, expectations, and practices of CAE solution providers that each new generation of FEA software was still polluted by these artifacts. To learn how this occurred and what makes StressCheck’s numerical simulation technology so different, we encourage you to view the 3.5-minute StressCheck Differentiators video:

 

What are the key differences and advantages of StressCheck for users?

StressCheck has numerous intrinsic features that support hierarchic modeling, live dynamic results processing, automatic reporting of approximation errors & more.

The most visible difference to the new user is that StressCheck employs a much smaller, simpler, and smarter library of elements. There are only five element types to approximate the solution of a problem of elasticity, whether it is planar, axisymmetric, or three-dimensional. This compares with the many dozens of element types in legacy FEA software, which often require a wizard to know which one to select, where to use (or not use) them, and, more importantly, how to understand their idiosyncrasies and interpret their often erratic behavior.

The second big difference for users is that StressCheck elements map to geometry without the need for simplification or defeaturing. The available higher-order mapping means that the elements are far more robust with respect to size, aspect ratio, and distortion. As such, a relatively coarse mesh created just to follow geometry may be used across variant-scale topologies. There is no loss of resolution or a need for intermediate highly simplified “stick & frame” or “plate & beam” models.

StressCheck meshes are much easier to create, check, and change because the elements and their mesh no longer have to be the principal focus of the analyst’s attention. StressCheck models are not fragile: they do not break, and therefore do not have to be recreated, when the design geometry, boundary conditions, or analysis type (e.g., linear, nonlinear, buckling) changes. For example, a linear analysis result is the starting point for a subsequent nonlinear analysis, so the analyst simply switches solver tabs to obtain a nonlinear solution. Because hierarchic spaces are used during solution execution, the approximation space of each run contains that of the previous run, making it possible to estimate the error in any result of interest, anywhere in the model, once a sequence of solutions has been obtained.
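The term “hierarchic spaces” can also be made concrete with a small example. One common textbook construction of hierarchic shape functions in one dimension uses integrated Legendre polynomials: raising the polynomial degree only appends functions, so the basis for degree p is contained in the basis for degree p + 1, which is what allows each run in the sequence to extend, rather than replace, the previous one. The sketch below is that generic construction, not ESRD’s implementation.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def hierarchic_shape_functions(p):
    """1-D hierarchic shape functions on [-1, 1] for polynomial degree p.

    The first two are the usual linear 'hat' functions; the internal mode
    of degree j (j >= 2) is a scaled integral of the Legendre polynomial
    P_{j-1}, so it vanishes at both endpoints. Because raising p only
    appends functions, the degree-p basis is a subset of the degree-(p+1)
    basis -- the 'hierarchic' property.
    """
    funcs = [lambda x: 0.5 * (1.0 - x), lambda x: 0.5 * (1.0 + x)]
    for j in range(2, p + 1):
        antiderivative = Legendre.basis(j - 1).integ()   # integral of P_{j-1}
        offset = antiderivative(-1.0)                    # enforce zero at x = -1
        scale = np.sqrt((2.0 * j - 1.0) / 2.0)
        funcs.append(lambda x, F=antiderivative, c=offset, s=scale: s * (F(x) - c))
    return funcs

# The first len(basis_p3) functions of basis_p5 coincide with basis_p3:
basis_p3, basis_p5 = hierarchic_shape_functions(3), hierarchic_shape_functions(5)
xi = np.linspace(-1.0, 1.0, 5)
print(np.allclose(basis_p3[3](xi), basis_p5[3](xi)))     # True
```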

So, what’s the bottom line? High-fidelity solutions can be obtained from low-density meshes while preserving an explicit automatic measurement of solution quality.  No guesswork is required to determine if the FEA result can be trusted.

Detailed stress concentrations represented on “low-density” StressCheck meshes.

The errors of idealization are separated from those due to discretization/approximation (e.g., is the mesh fine enough? Are there enough DOF? Is the element curvature adequate?). Sources of inaccuracies and errors are immediately identifiable, not because an expert catches them, but because the software is intelligent enough to report them. For each analysis, users are provided with a dashboard of convergence curves that show the error in any one of a number of engineering quantities, such as stress, strain, and energy norm.
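As a minimal illustration of how such convergence data can be used, not of how StressCheck computes or displays it, the sketch below takes a quantity of interest extracted from a sequence of runs at increasing degrees of freedom and reports the relative change between consecutive runs, the kind of acceptance check an analyst or a smart application could apply automatically. The numbers are hypothetical.

```python
def converged(values, tol=0.02):
    """Check a sequence of extractions of one quantity of interest.

    Returns (is_converged, changes), where changes[i] is the relative change
    between run i and run i+1. The quantity is taken as converged when the
    last relative change is below the tolerance (2% by default).
    """
    changes = [abs(b - a) / abs(b) for a, b in zip(values, values[1:])]
    return changes[-1] <= tol, changes

# Hypothetical maximum-stress extractions (MPa) from five consecutive runs
max_stress = [402.0, 431.5, 445.2, 449.8, 450.6]
ok, changes = converged(max_stress)
print("relative changes:", [f"{100 * c:.1f}%" for c in changes])
print("accept result" if ok else "refine further")
```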

Because the solutions are continuous, a priori knowledge or educated guesses about where stress concentrations may occur are no longer needed. Any engineering quantity of interest can be extracted dynamically at any location within the continuous domain, at any time, without the loss of precision caused by interpolation and other post-processing manipulations required when only nodal results are available, as is characteristic of legacy FEA codes. Proof of solution convergence is also provided for any function at any location, regardless of the element mesh and nodal locations. As a consequence, the post-processing of fixed solutions common in legacy FEA becomes, in StressCheck, dynamic, instantaneous extraction of live results:

 

What is the benefit to engineering groups and value to A&D programs from the use of StressCheck?

StressCheck automatically increases the approximation of stresses on a fixed mesh, making solution verification simple, accurate, fast, efficient & reliable.

With the use of StressCheck, the results of FEA-based structural analysis are far less dependent on user expertise, modeling approximations, or mesh details. High-fidelity stress analysis of complex 3D solid model geometries, with the numerous joints and fastener connections typical of aero-structures, may be obtained in less time, with reduced complexity and greater confidence.

As a result, the stress analysis function becomes an inherently more reliable and repeatable competency for the engineering organization. FEA-based structural analysis performed with StressCheck is not an error-prone process in which every different combination of user, software, elements, and mesh risks generating different answers, to the dismay of engineering leads and program managers.

By using industry- and application-focused, advanced numerical simulation software like StressCheck, it is now possible to simplify, standardize, and automate recurring analysis tasks so that they become robust enough for less experienced engineers to perform. New engineers are productive sooner with access to safer analysis tools that are intelligent enough to capture institutional methods and incorporate best practices. The role and value of the expert engineering analyst evolve to a higher level: the expert creates improved methods and custom tools, such as automated global-local workflow templates and Sim Apps.

As presented in the first post of this series, the business drivers to produce higher-performing, damage-tolerant aero-structures require a near hyper-level of engineering productivity, precision, and confidence from the use of simulation technologies earlier in the design cycle. This is also true in the later stages, as digital simulation replaces more physical prototyping and flight testing to facilitate concurrency of engineering and build.

Status-quo methodologies that depend on expert-only software and add time, risk, and uncertainty to the project plan are no longer adequate to meet these demands. Next-generation simulation technologies implemented in software like StressCheck can help to encapsulate complexity, contain cost, improve reliability, mitigate risk, accelerate maturity, and support better governance of the engineering simulation function.

With StressCheck engineering simulation is Simple, Accurate, Fast, Efficient, and Reliable.

Coming Up Next…

We will discuss why StressCheck is an ideal numerical simulation tool for both benchmarking and digital engineering handbook development (i.e. StressCheck CAE handbooks).  In addition, we will provide examples of how StressCheck CAE handbooks are a robust form of Smart Sim Apps that serve to encapsulate both tribal knowledge and state-of-the-art simulation best practices.


ESRD’s ASME VVUQ 2023 Symposium Keynote Presentation Recording Now Available https://www.esrd.com/asme-vvuq-2023-symposium-keynote-presentation-recording/ https://www.esrd.com/asme-vvuq-2023-symposium-keynote-presentation-recording/#respond Wed, 11 Oct 2023 20:35:34 +0000 https://www.esrd.com/?p=30067 In mid-May 2023, ESRD’s Co-Founder and Chairman Dr. Barna Szabó delivered a keynote presentation at the ASME VVUQ 2023 Symposium in Baltimore, Maryland, USA. Dr. Szabó’s presentation, entitled “Simulation Governance: An Idea Whose Time Has Come”, will focus on the goals and means of Simulation Governance with reference to mechanical/aerospace engineering practice. We are pleased to announce that the recording of the keynote presentation is now available.]]>
Courtesy ASME.

In mid-May 2023, ESRD’s Co-Founder and Chairman Dr. Barna Szabó delivered a keynote presentation at the ASME VVUQ 2023 Symposium in Baltimore, Maryland, USA. Dr. Szabó’s presentation, entitled “Simulation Governance: An Idea Whose Time Has Come”, focused on the goals and means of Simulation Governance with reference to mechanical/aerospace engineering practice.

The abstract of the keynote presentation was as follows:

Mathematical models have become indispensable sources of information on which technical and business decisions are based.  It is therefore vitally important for decision-makers to know whether or not  they should rely on the predictions of a particular mathematical model.

The presentation will focus on the reliability of information generated by mathematical models.  Reliability is ensured through proper application of the procedures of verification, validation and uncertainty quantification.  Examples will be presented.

It will be shown that mathematical models are products of open-ended evolutionary processes.  One of the key objectives of simulation governance is to establish and maintain a hospitable environment for the evolutionary development of mathematical models.  A very substantial unrealized potential exists in numerical simulation technology.  It is the responsibility of management to establish conditions that will make realization of that potential possible.

Dr. Barna Szabó

We are pleased to announce that the 45-minute recording of Dr. Szabó’s keynote presentation is now available for playback:


Would You Like a Simulation Governance Briefing?

Would you like to connect with Dr. Szabó on this topic? Feel free to complete the following form and we will be happy to schedule a Simulation Governance briefing with you:

New Simulation Governance Page https://www.esrd.com/new-simulation-governance-page/ https://www.esrd.com/new-simulation-governance-page/#respond Fri, 05 Jan 2018 21:59:33 +0000 https://esrd.com/?p=5504 Learn how Simulation Governance was introduced, how it came to be one of the Big Issues of NAFEMS, and how ESRD's leadership and other world-renowned simulation experts are using this powerful function for enhancing reliability of modern numerical simulation [...]]]>

NAFEMS has named Simulation Governance a “Big Issue”. But what is it?

We hope you had Happy (and S.A.F.E.R.) Holidays! We look forward to rolling up our sleeves and getting back to work for you.

You may have heard of Simulation Governance, or may be familiar with the phrase. As we are heavily involved in the conceptualization and implementation of Simulation Governance standards and practice, we have developed a one-stop page that provides historical background, thought leadership and resources on this topic.

Learn how Simulation Governance was introduced, how it came to be one of the Big Issues of NAFEMS, and how ESRD’s leadership and other world-renowned simulation experts are using this powerful function for enhancing reliability of modern numerical simulation.

