Variational Crimes
July 8, 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In Thomas Kuhn’s terminology, “pre-science” refers to a period of early development in a field of research [1]. During this period, there is no established explanatory framework (paradigm) mature enough to solve the main problems. In the case of the finite element method (FEM), the period of pre-science started when reference [2] was published in 1956 and ended in the early 1970s when scientific investigation began in the applied mathematics community. The publication of lectures at the University of Maryland [3] and the first mathematical book on FEM [4] marked the transition to what Kuhn termed “normal science”.

Two Views

Engineers view FEM as an intuitive modeling tool, whereas mathematicians see it as a method for approximating the solutions of partial differential equations cast in variational form. On the engineering side, the emphasis is on implementation and applications, while mathematicians are concerned with clarifying the conditions for stability and consistency, establishing error estimates, and formulating extraction procedures for various quantities of interest. 

From the beginning, a significant communication gap existed between the engineering and mathematical communities. Engineers did not understand why mathematicians would worry so much about the number of square-integrable derivatives, and mathematicians did not understand how it is possible that engineers can find useful solutions even when the rules of variational calculus are violated. This gap widened over the years: On one hand, the art of finite element modeling became an integral part of engineering practice. On the other hand, the science of finite element analysis became an established branch of applied mathematics.

The Art of Finite Element Modeling

The art of finite element modeling has its roots in the pre-science period of finite element analysis when engineers sought to extend the matrix methods of structural analysis, developed for trusses and frames, to complex structures such as plates, shells, and solids. The major finite element modeling software products in use today, such as NASTRAN, ANSYS, MARC, and Abaqus, are all based on the understanding of the finite element method (FEM) that existed before 1970. As long as the goal is to find force-displacement relationships, such as in load models of airframes and crash dynamics models of automobiles, finite element modeling can provide useful information. However, problems arise when the quantities of interest include (or depend on) the pointwise derivatives of the solution, as in strength analysis where stresses and strains are of interest.

Misplaced Accusations

The first mathematical book on the finite element method [4] dedicated a chapter to violations of the rules of variational calculus in various implementations of the finite element method. The title of the chapter is “Variational Crimes,” a catchphrase that quickly caught on. The variational crimes are charged as follows:

  1. Using non-conforming elements: Non-conforming elements are those that do not satisfy the interelement continuity requirements of the variational formulation.
  2. Using numerical integration.
  3. Approximating domains and boundary conditions.

Item 1 is a serious crime; however, the motivations for committing this crime can be eliminated by properly formulating mathematical models. Items 2 and 3 are not crimes; they are essential features of the finite element method, and the associated errors can be easily controlled. The authors were thinking about asymptotic error estimators (what happens when the diameter of the largest element goes to zero) that did not account for items 2 and 3. They did not want to bother with the complications caused by numerical integration and the approximation of the domains and boundary conditions, so they declared those features to be crimes. This may have been a clever move but certainly not a helpful one.

Sherlock Holmes investigating variational crimes in Victorian London. Image generated by Microsoft Copilot.

Egregious Variational Crimes

The authors of reference [4] failed to mention the truly egregious variational crimes that are very common in the practice of finite element modeling today and will have to be abandoned if the reliability of predictions based on finite element computations is to be established:

  1. Using point constraints. Perhaps the most common variational crime is using point constraints for anything other than rigid-body constraints. The finite element solution will converge to a solution that ignores the point constraints if such a solution exists; otherwise, it will diverge. However, the rates of convergence or divergence are typically very slow and, for the discretizations used in practice, hardly noticeable. Why, then, should we worry about it? Either we are not approximating the solution to the problem we had in mind, or we are “approximating” a problem that has no solution. Finding an approximation to a solution that does not exist makes no sense, yet such occurrences are very common in finite element modeling practice. The apparent credibility of the finite element solution is owed to the near cancellation of two large errors: the conceptual error of using illegal constraints and the numerical error of not using a discretization fine enough to make the conceptual error visible. A detailed explanation is available in reference [5], Section 5.2.8.
  2. Using point forces in 2D and 3D elasticity (or more generally in 2D and 3D problems). In linear elasticity, the exact solution does not have finite strain energy when point forces are applied. Hence, any finite element solution “approximates” a problem that does not have a solution in energy space.  Once again, divergence is very slow. When point forces are applied, element-by-element equilibrium is satisfied, and the effects of point forces are local, whereas the effects of point constraints are global. Generally, it is permissible to apply point forces in the region of secondary interest but not in the region of primary interest, where the goal is to compute quantities that depend on the derivatives, such as stresses and strains [5].
  3. Using reduced integration. At the time of the publication of their book [4], Strang and Fix could not have known about reduced integration, which was introduced a few years later [6]. Reduced integration was justified in typical finite element modeling fashion: Low-order elements exhibit shear locking and Poisson ratio locking. Since the elements that lock “are too stiff,” it is possible to make them softer by using fewer than the necessary integration points. The consequences were that the elements exhibited spurious “zero energy modes,” called “hourglassing,” that had to be controlled by various tuning parameters. For example, in the Abaqus Analysis User’s Manual, C3D8RHT(S) is identified as an “8-node trilinear displacement and temperature, reduced integration with hourglass control, hybrid with constant pressure” element. Tinkering with the integration rules may be useful in the art of finite element modeling when the goal is to tune stiffness relationships (as, for example, in crash dynamics models), but it is an egregious crime in finite element analysis because it introduces a source of error that cannot be controlled by mesh refinement or by increasing the polynomial degree and makes a posteriori error estimation impossible.
  4. Reporting computed data that do not converge to a finite value. For example, if a domain has one or more sharp reentrant corners in the region of primary interest, then the maximum stress computed from a finite element solution will be a finite number, but it will tend to infinity as the degrees of freedom are increased. It is not meaningful to report such a computed value: the error is infinitely large (see the convergence check sketched after this list).
  5. Tricks used when connecting elements based on different formulations. For example, connecting an axisymmetric shell element (3 degrees of freedom per node) with an axisymmetric solid element (2 degrees of freedom) involves tricks of various sorts, most of which are illegal.
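To illustrate item 4, the following minimal sketch in Python checks whether a sequence of extracted values appears to converge as the degrees of freedom are increased. It is not part of any FEA product, and the numbers are hypothetical; a quantity whose increments keep growing under refinement, such as the maximum stress at a sharp reentrant corner, must not be reported as a converged result.

def appears_convergent(values, ratio_tol=0.9):
    """True if successive changes shrink, i.e., the sequence appears to converge."""
    diffs = [abs(b - a) for a, b in zip(values, values[1:])]
    return all(d2 < ratio_tol * d1 for d1, d2 in zip(diffs, diffs[1:]))

# Hypothetical maximum-stress extractions at a reentrant corner for a sequence of
# discretizations with increasing degrees of freedom:
sigma_max = [212.0, 240.0, 275.0, 318.0]
print(appears_convergent(sigma_max))   # False: the increments grow, so the value is diverging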

Takeaway

The deeply ingrained practice of finite element modeling has its roots in the pre-science period of the development of the finite element method. To meet the current reliability expectations in numerical simulation, it will be necessary to routinely perform solution verification. This is possible only through the science of finite element analysis, respecting the rules of variational calculus. When thinking about digital transformation, digital twins, certification by analysis, and linking simulation with artificial intelligence tools, one must think about the science of finite element analysis and not the art of finite element modeling rooted in pre-1970s thinking.


References

[1] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[2] Turner, M.J., Clough, R.W., Martin, H.C. and Topp, L.J. Stiffness and deflection analysis of complex structures. Journal of the Aeronautical Sciences, 23(9), pp. 805-823, 1956.

[3] Babuška, I. and Aziz, A.K. Survey lectures on the mathematical foundations of the finite element method.  The mathematical foundations of the finite element method with applications to partial differential equations (A. K. Aziz, ed.) Academic Press, 1972.

[4] Strang, G. and Fix, G. An analysis of the finite element method. Prentice Hall, 1973.

[5] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation, 2nd ed., Hoboken, NJ: John Wiley & Sons, Inc., 2021.

[6] Hughes, T.J., Cohen, M. and Haroun, M. Reduced and selective integration techniques in the finite element analysis of plates. Nuclear Engineering and Design, 46(1), pp. 203-222, 1978.


Simulation Governance
June 13, 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Digital transformation, digital twins, certification by analysis, and AI-assisted simulation projects are generating considerable interest in engineering communities. For these initiatives to succeed, the reliability of numerical simulations must be assured. This can happen only if management understands that simulation governance is an essential prerequisite for success and undertakes to establish and enforce quality control standards for all simulation projects.

The idea of simulation governance is so simple that it is self-evident: Management is responsible for the exercise of command and control over all aspects of numerical simulation. The formulation of technical requirements is not at all simple, however. A notable obstacle is the widespread confusion of the practice of finite element modeling with numerical simulation. This misconception is fueled by marketing hyperbole, falsely suggesting that purchasing a suite of software products is equivalent to outsourcing numerical simulation.  

At present, a very substantial unrealized potential exists in numerical simulation. Simulation technology has matured to the point where management can realistically expect the reliability of predictions based on numerical simulations to match the reliability of observations in physical experimentation. This will require management to upgrade simulation practices through exercising simulation governance.

The Kuhn Cycle

The development of numerical simulation technology falls under the broad category of scientific research programs, which encompass model development projects in the engineering and applied sciences as well. By and large, these programs follow the pattern of the Kuhn Cycle [1] illustrated schematically in Fig. 1 in blue:

Figure 1: Schematic illustration of the Kuhn cycle.

A period of pre-science is followed by normal science. In this period, researchers have agreed on an explanatory framework (paradigm) that guides the development of their models and algorithms.  Program (or model) drift sets in when problems are identified for which solutions cannot be found within the confines of the current paradigm. A program crisis occurs when the drift becomes excessive and attempts to remove the limitations are unsuccessful. Program revolution begins when candidates for a new approach are proposed. This eventually leads to the emergence of a new paradigm, which then becomes the explanatory framework for the new normal science.

The Development of Finite Element Analysis

The development of finite element analysis followed a similar pattern. The period of pre-science began in 1956 and lasted until about 1970. In this period, engineers who were familiar with the matrix methods of structural analysis were trying to extend those methods to stress analysis. The formulation of the algorithms was based on intuition; testing was based on trial and error, and arguing from the particular to the general (a logical fallacy) was common.

Normal science began in the early 1970s when the mathematical foundations of finite element analysis were addressed in the applied mathematics community. By that time, the major finite element modeling software products in use today were under development. Those development efforts were largely motivated by the needs of the US space program. The developers adopted a software architecture based on pre-science thinking. I will refer to these products as legacy FE software: For example, NASTRAN, ANSYS, MARC, and Abaqus are all based on the understanding of the finite element method (FEM) that existed before 1970.

Mathematical analysis of the finite element method identified a number of conceptual errors. However, the conceptual framework of mathematical analysis and the language used by mathematicians were foreign to the engineering community, and there was no meaningful interaction between the two communities.

The scientific foundations of finite element analysis were firmly established by 1990, and finite element analysis became a branch of applied mathematics. This means that, for a very large class of problems that includes linear elasticity, the conditions for stability and consistency were established, estimates were obtained for convergence rates, and solution verification procedures were developed, as were elegant algorithms for superconvergent extraction of quantities of interest such as stress intensity factors. I was privileged to have worked closely with Ivo Babuška, an outstanding mathematician who is rightfully credited for many key contributions.

Normal science continues in the mathematical sphere, but it has no influence on the practice of finite element modeling. As indicated in Fig. 1, the practice of finite element modeling is rooted in the pre-science period of finite element analysis, and having bypassed the period of normal science, it had reached the stage of program crisis decades ago.

Evidence of Program Crisis

The knowledge base of the finite element method in the pre-science period was a small fraction of what it is today. The technical differences between finite element modeling and numerical simulation are addressed in one of my earlier blog posts [2]. Here, I note that decision-makers who have to rely on computed information have reasons to be disappointed. For example, the Air Force Chief of Staff,  Gen. Norton Schwartz, was quoted in Defense News, 2012 [3] saying: “There was a view that we had advanced to a stage of aircraft design where we could design an airplane that would be near perfect the first time it flew. I think we actually believed that. And I think we’ve demonstrated in a compelling way that that’s foolishness.”

General Schwartz expected that the reliability of predictions based on numerical simulation would be similar to the reliability of observations in physical tests. This expectation was not unreasonable considering that by that time, legacy FE software tools had been under development for more than 40 years. What the general did not know was that, while the user interfaces greatly improved and impressive graphic representations could be produced, the underlying solution methodology was (and still is) based on pre-1970s thinking.

As a result, efforts to integrate finite element modeling with artificial intelligence and to establish digital twins based on finite element modeling will surely end in failure.

Paradigm Change Is Necessary

Paradigm change is never easy. Max Planck observed: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” This is often paraphrased, saying: “Progress occurs one funeral at a time.” Planck was referring to the foundational sciences and changing academic minds.  The situation is more challenging in the engineering sciences, where practices and procedures are often deeply embedded in established workflows and changing workflows is typically difficult and expensive.

What Should Management Do?

First and foremost, management should understand that simulation is one of the most abused words in the English language. Furthermore:

  • Treat any marketing claim involving simulation with an extra dose of skepticism. Prior to undertaking projects in the areas of digital transformation, certification by analysis, digital twins, and AI-assisted simulation, ensure that the mathematical models produce reliable predictions.
  • Recognize the difference between finite element modeling and numerical simulation.
  • Understand that mathematical models produce reliable predictions only within their domains of calibration.
  • Treat model form and numerical approximation errors separately and require error control in the formulation and application of mathematical models.
  • Do not accept computed data without error metrics.
  • Understand that model development projects are open-ended.
  • Establish conditions favorable for the evolutionary development of mathematical models.
  • Become familiar with the concepts and terminology in reference [4]. For additional information on simulation governance, I recommend ESRD’s website.


References

[1] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[2] Szabó B. Why Finite Element Modeling is Not Numerical Simulation? ESRD Blog. November 2, 2023. https://www.esrd.com/why-finite-element-modeling-is-not-numerical-simulation/.

[3] Weisgerber, M. DoD Anticipates Better Price on Next F-35 Batch, Gannett Government Media Corporation, 8 March 2012. [Online]. Available: https://tinyurl.com/282cbwhs.

[4] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications. Vol. 162, pp. 206–214, 2024. 


Digital Transformation
May 17, 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Digital transformation is a multifaceted concept with plenty of room for interpretation. Its common theme emphasizes the proactive adoption of digital technologies to reshape business practices with the goal of gaining a competitive edge. The scope, timeline, and resource allocation of digital transformation projects depend on the specific goals and objectives. Here, I address digital transformation in the engineering sciences, focusing on numerical simulation.

Digital Technologies in the Engineering Sciences

Digital technologies have been integrated into the engineering sciences since the 1950s.  The adoption process has not been uniform across all disciplines. Some fields (like aerospace) adopted technologies early, while others were slower to change. The development and adoption of these technologies are ongoing. Engineering today is increasingly digital, and innovations are constantly changing the way engineers approach their work. Here are some important milestones:

Early Adoption (1950s-1970s)

  • Mainframe computers were used for engineering calculations that would have been impossible or extremely time-consuming to perform by hand.
  • Numerical control (NC) machines used punched tape or cards to control tool movements, streamlining machining processes.
  • Early Computer-Aided Design (CAD) systems revolutionized drafting in the 1960s. They allowed engineers to create and manipulate drawings on a computer, making design iterations much faster than previously possible.

Period of Rapid Growth (1980s-1990s)

  • Affordable Personal Computers (PCs) made computing power accessible to individual engineers and small firms.
  • Development of CAD software brought 3D modeling from specialized applications into mainstream design.
  • Finite Element Modeling software became commercially available, allowing engineers to perform structural and strength calculations.
  • The mathematical foundations of the finite element method (FEM) were established, and finite element analysis (FEA) became a branch of Applied Mathematics.

Post-Millennial Development  (2000s-Present)

  • Cloud-based solutions offer scalable computing power and collaboration tools, making complex calculations accessible without massive hardware investment.
  • Building Information Modeling (BIM) revolutionized the architecture, engineering, and construction (AEC) industries.
  • Internet of Things (IoT): Networked sensors and devices provide engineers with real-time data to monitor structures, predict maintenance needs, and optimize operations.
  • Additive Manufacturing (3D Printing) allows for the rapid creation of complex prototypes and even functional end-use parts.

Given that digital technologies have been successfully integrated into engineering practice, it may appear that not much else needs to be done. However, important challenges remain, and there are many opportunities for improvement. This is discussed next.

Outlook: Opportunities and Challenges

Bearing in mind that the primary goal of digital transformation is to enhance competitiveness, in the field of numerical simulation, this translates to improving the predictive performance of mathematical models. Ideally, we aim to reach a reliability level in model predictions comparable to that of physical experimentation. From the technological point of view, this goal is achievable: We have the theoretical understanding of how to maximize the predictive performance of mathematical models through the application of verification, validation, and uncertainty quantification procedures. Furthermore, advancements in explainable artificial intelligence (XAI) technology can be utilized to optimize the management of numerical simulation projects so as to maximize their reliability and effectiveness.  

The primary challenge in the field of engineering sciences is that further progress in digital transformation will require fundamental changes in how numerical simulation is currently understood by the engineering community and how it is practiced in industrial settings. It is essential to keep in mind the differences between finite element modeling and numerical simulation. I explained the reasons for this in an earlier blog post [1]. The art of finite element modeling will have to be replaced by the science of finite element analysis, and the verification, validation, and uncertainty quantification (VVUQ) procedures will have to be applied [2].

Paradoxically, the successful early integration of finite element modeling practices and software tools into engineering workflows now impedes attempts to utilize technological advances that occurred after the 1970s. The software architecture of legacy finite element codes was substantially set by 1970, based on the understanding of the finite element method that existed at that time. Limitations of the software architecture prevented subsequent advances, such as a posteriori error estimation in terms of the quantities of interest and control of model form errors, both of which are essential for meeting the reliability requirements in numerical simulation. Abandoning finite element modeling practices and embracing the methodology of numerical simulation technology is a major challenge for the engineering community.

The “I Believe” Button

An ANSYS blog [3] tells the story of a presentation made to an A&D executive. The presentation was to make a case for transforming his department using digital engineering. At the end of the presentation, the executive pointed to a coaster on his desk. “See this? That’s the ‘I believe’ button. I can’t hit it. I just can’t hit it. Help me hit it.” Clearly, the executive was asking for convincing evidence that the computed information was sufficiently reliable to support decision-making in his department. Put another way, he did not have the courage to sign the blueprint on the basis of data generated by digital engineering. What it takes to gather such courage was addressed in one of my earlier blogs [4]. Reliability considerations significantly influence the implementation of simulation process data management (SPDM).

Change Is Necessary

The frequently cited remark by W. Edwards Deming: “Change is not obligatory, but neither is survival,” reminds us of the criticality of embracing change.


References

[1] Szabó B. Why Finite Element Modeling is Not Numerical Simulation? ESRD Blog. November 2, 2023.
https://www.esrd.com/why-finite-element-modeling-is-not-numerical-simulation/
[2] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications. 162 pp. 206–214, 2024. The publisher is providing free access to this article until May 22, 2024. Anyone may download it without registration or fees by clicking on this link:
https://authors.elsevier.com/c/1isOB3CDPQAe0b
[3] Bleymaier, S. Hit the “I Believe” Button for Digital Transformation. ANSYS Blog. June 14, 2023. https://www.ansys.com/blog/believe-in-digital-transformation
[4] Szabó B. Where do you get the courage to sign the blueprint? ESRD Blog. October 6, 2023.
https://www.esrd.com/where-do-you-get-the-courage-to-sign-the-blueprint/


Digital Twins
May 2, 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The idea of a digital twin originated at NASA in the 1960s as a “living model” of the Apollo program. When Apollo 13 experienced an oxygen tank explosion, NASA utilized multiple simulators and extended a physical model of the spacecraft to include digital simulations, creating a digital twin. This twin was used to analyze the events leading up to the accident and investigate ideas for a solution. The term “digital twin” was coined by NASA engineer John Vickers much later. While the term is commonly associated with modeling physical objects, it is also employed to represent organizational processes. Here, we consider digital twins of physical entities only.

Digital Twins: An Overview

An overview of the current understanding of the idea of digital twins at NASA is available in a keynote presentation delivered in 2021 [1]. This presentation contains the following quote from reference [2]:

“The Digital Twin (DT) is a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. At its optimum, any information that could be obtained from inspecting a physical manufactured product can be obtained from its Digital Twin.”

I think that this is closer to being an aspirational statement than a functional definition of digital twins.  On the positive side, this statement articulates that the reliability of the results of the simulation should be comparable to that of a physical experiment. Note that this is possible only when mathematical models are used within their domains of calibration [3]. On the negative side, the description of a product “from the micro atomic level to the macro geometrical level” is neither necessary nor feasible. The goal of a simulation project is not to describe a physical system from A to Z but rather to predict the quantities of interest, such as expected fatigue life, margins of safety, limit load, deformation, natural frequency, and the like. In view of this, I propose the following definition:

“A Digital Twin (DT) is a set of mathematical models formulated to predict quantities of interest that characterize the functioning of a potential or actual manufactured product. When the mathematical models are used within their domains of calibration, the reliability of the predictions is comparable to that of a physical experiment.”

The set of mathematical models may comprise a single model of a component or several interacting component models. The motivation for creating digital twins typically comes from the requirements of product lifecycle management: High-value assets are monitored throughout their lifecycles, and the models that constitute a digital twin are updated with new data as they become available. This fits into the framework of model development projects discussed in one of my blogs, “Model Development in the Engineering Sciences,” and in greater detail in reference [3]. An essential attribute of any mathematical model is its domain of calibration.
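As a minimal illustration of this definition, the Python sketch below represents a digital twin as a collection of component models, each carrying calibrated parameters and a domain of calibration, with an update method for the object-specific data collected over the life span of the physical asset. All names and numbers (ComponentModel, fastener_joint, the Delta_K interval) are hypothetical and are not taken from any particular product.

from dataclasses import dataclass, field

@dataclass
class ComponentModel:
    name: str
    parameters: dict            # physical parameters fixed by calibration
    calibration_domain: dict    # e.g. {"Delta_K": (5.0, 40.0)}, intervals of the calibrated inputs

    def in_domain(self, inputs: dict) -> bool:
        # A prediction is trusted only if every input lies within its calibrated interval.
        return all(lo <= inputs[k] <= hi for k, (lo, hi) in self.calibration_domain.items())

@dataclass
class DigitalTwin:
    asset_id: str
    models: list = field(default_factory=list)

    def update(self, model_name: str, new_parameters: dict, new_domain: dict) -> None:
        # Recalibrate one component model with object-specific data collected in service.
        for m in self.models:
            if m.name == model_name:
                m.parameters.update(new_parameters)
                m.calibration_domain.update(new_domain)

twin = DigitalTwin("tail-number-001", [ComponentModel(
    "fastener_joint", {"C": 1.2e-10, "m": 3.1}, {"Delta_K": (5.0, 40.0)})])
print(twin.models[0].in_domain({"Delta_K": 12.0}))   # True: within the calibrated range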

Example 1: Component Twin

The Single Fastener Analysis Tool (SFAT) is a smart application engineered for comprehensive analyses of single and double shear joints of metal or composite plates. It also serves as an example of a component twin and highlights the technical challenges involved in the development of digital twins.

Figure 1. Single Fastener Analysis Tool (SFAT). Examples of use cases.

SFAT offers the flexibility to model laminates either as ply-by-ply or homogenized entities. It can accommodate various types of fastener heads, such as protruding and countersunk, including those with hollow shafts. It is capable of supporting different fits such as neat, interference, and clearance.

SFAT also provides additional input options to account for factors like shimmed and unshimmed gaps, bushings, and washers. The application allows for the specification of shear load and fastener pre-load as loading conditions. It provides estimates of the errors of approximation in terms of the quantities of interest.

Example 2: Asset Twin

A good example of asset twins is the structural health monitoring of large concrete dams. Following the collapse of the Malpasset dam in Provence, France, in 1959, the World Bank mandated that all dam projects seeking financial backing must undergo modeling and testing at the Experimental Institute for Models and Structures in Bergamo, Italy (ISMES). Subsequently, ISMES was commissioned to develop a system that would monitor the structural health of large dams. The dams would be instrumented, and a numerical simulation framework, now called a digital twin, would be used to evaluate anomalies indicated by the instruments.

It was understood that numerical approximation errors would have to be controlled to small tolerances to ensure that they were negligibly small in comparison with the errors in measurements. To perform the calculations, a finite element program based on the p-version was written at ISMES in the second half of the 1970s under the direction of Dr. Alberto Peano, my former D.Sc. student. That program is still in use today under the name FIESTA [4].

Simulation Governance: Essential for Digital Twin Creation

Creating digital twins encompasses all aspects of model development, necessitating separate treatment of the model form and approximation errors. In other words, the verification, validation, and uncertainty quantification (VVUQ) procedures have to be applied. The model must be updated and recalibrated when new ideas are proposed or new data become available. The only difference is that in the case of digital twins, the updates involve individual object-specific data collected over the life span of the physical object.

Model development projects are classified as progressive, stagnant, and improper. A model development project is progressive if the domain of calibration is increasing, stagnant if it is not increasing, and improper if the problem-solving machinery is not consistent with the formulation of the mathematical model or lacks the ability to support solution verification [3]. The goal of simulation governance is to ensure that digital twin projects are progressive. Unfortunately, owing to a lack of simulation governance, the large majority of model development projects are improper, and hence, most digital twins fail to meet the required standards of reliability.


References

[1]  Allen, D. B. Digital Twins and Living Models at NASA. Keynote presentation at the ASME Digital Twin Summit. November 3, 2021.

[2] Grieves, M. and Vickers, J. Digital Twin: Mitigating Unpredictable, Undesirable Emergent Behavior in Complex Systems. In: Transdisciplinary Perspectives on Complex Systems. F-J. Kahlen, S. Flumerfelt and A. Alves (eds) Springer International Publishing, Switzerland, pp. 85-113, 2017.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications. 162 pp. 206–214, 2024.  The publisher is providing free access to this article until May 22, 2024.  Anyone may download it without registration or fees by clicking on this link: https://authors.elsevier.com/c/1isOB3CDPQAe0b

[4] Angeloni, P., Boccellato, R., Bonacina, E., Pasini, A., Peano, A.  Accuracy Assessment by Finite Element P-Version Software. In: Adey, R.A. (ed) Engineering Software IV. Springer, Berlin, Heidelberg, 1985. https://doi.org/10.1007/978-3-662-21877-8_24


Not All Models Are Wrong
April 11, 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


I never understood the statement: “All models are wrong, but some are useful”, attributed to George E. P. Box, a statistician, quoted in many papers and presentations. If that were the case, why should we try to build models and how would we know when and for what purposes they may be useful? We construct models with the objective of making reliable predictions, the degree of reliability being comparable to that of a physical experiment.

Consider, for example, the problem in Fig. 1 showing a sub-assembly of an aircraft structure. The quantity of interest is the margin of safety: Given multiple load conditions and design criteria, estimate the minimum value of the margin of safety and show that the numerical approximation error is less than 5%.   We must have sufficient reason to trust the results of simulation tasks like this.

Figure 1: Sub-assembly of an aircraft structure.

Trying to understand what George Box meant, I read the paper in which he supposedly made the statement that all models are wrong [1], but I did not find it very enlightening. Nor did I find that statement in its often-quoted form. What I found is this non sequitur: “Since all models are wrong the scientist must be alert to what is importantly wrong.” This makes the matter much more complicated: Now we have to classify wrongness into two categories: important and unimportant. By what criteria? – That is not explained.

Box did not have the same understanding as we do of what a mathematical model is. This is evidenced by the sentence: “In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless.” Our goal is not to model the “real world”, a vague concept, but to model specific aspects of physical reality, the quantities of interest having been clearly defined as, for example, in the case of the problem shown in Fig. 1. Our current understanding of mathematical models is based on the concept of model-dependent realism which was developed well after Box’s 1978 paper was written.

Model-Dependent Realism

The term model-dependent realism was introduced by Stephen Hawking and Leonard Mlodinow in their 2010 book, The Grand Design [2] but the distinction between physical reality and ideas of physical reality is older. For example, Wolfgang Pauli wrote in 1948: “The layman always means, when he says `reality’ that he is speaking of something self-evidently known; whereas to me it seems the most important and exceedingly difficult task of our time is to work on the construction of a new idea of reality.” [From a letter to Markus Fierz.]

If two different models describe a set of physical phenomena equally well then both models are equally valid: It is meaningless to speak about “true reality”. In Hawking’s own words [3]: “I take the positivist viewpoint that a physical theory is just a mathematical model and that it is meaningless to ask whether it corresponds to reality. All that one can ask is that its predictions should be in agreement with observation.” In other words, mathematical models are, essentially, phenomenological models.

What is a Mathematical Model?

A mathematical model is an operator that transforms one set of data D, the input, into another set, the quantities of interest F. In shorthand notation we have:

\boldsymbol D\xrightarrow[(I,\boldsymbol p)]{}\boldsymbol F,\quad (\boldsymbol D, \boldsymbol p) \in ℂ \quad (1)

where the right arrow represents the mathematical model. The letters I and p under the right arrow indicate that the transformation involves an idealization (I) as well as parameters (physical properties) p that are determined through calibration experiments. Restrictions on D and p define the domain of calibration ℂ. The domain of calibration is an essential feature of any mathematical model [4], [5].

Most mathematical models used in engineering have the property that the quantities of interest F continuously depend on D and p. This means that small changes in D and/or p will result in correspondingly small changes in F, which is a prerequisite for making reliable predictions.
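A minimal sketch of the mapping (1), using a hypothetical cantilever-deflection model written in Python: the input data D and the calibrated parameters p are transformed into the quantity of interest F, and the evaluation is refused when (D, p) falls outside the domain of calibration. The formula, parameter values, and intervals are illustrative only, not taken from the references.

def quantity_of_interest(D, p, domain):
    # Idealization I: linear Euler-Bernoulli cantilever; F = P * L**3 / (3 * E * I).
    inputs = {**D, **p}
    if not all(lo <= inputs[k] <= hi for k, (lo, hi) in domain.items()):
        raise ValueError("(D, p) lies outside the domain of calibration")
    return D["P"] * D["L"] ** 3 / (3.0 * p["E"] * p["I"])

D = {"P": 100.0, "L": 0.5}                    # input data: load [N], span [m]
p = {"E": 70.0e9, "I": 8.3e-8}                # calibrated parameters: modulus [Pa], inertia [m^4]
C = {"P": (0.0, 500.0), "L": (0.1, 1.0), "E": (60.0e9, 80.0e9), "I": (1.0e-8, 1.0e-6)}
print(quantity_of_interest(D, p, C))          # small changes in D or p give small changes in F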

To ensure that the predictions based on a mathematical model are reliable, it is necessary to control two types of error: the model form error and the numerical approximation error.

Model Form Errors

The formulation of mathematical models invariably involves making restrictive assumptions such as neglecting certain geometric features, idealizing the physical properties of the material, idealizing boundary conditions, neglecting the effects of residual stresses, etc. Therefore, any mathematical model should be understood to be a special case of a more comprehensive model. This is the hierarchic view of models.

To test whether a restrictive assumption is acceptable for a particular application, it is necessary to estimate the influence of that assumption on the quantities of interest and, if necessary, revise the model. An exploration of the influence of modeling assumptions on the quantities of interest is called virtual experimentation [6]. Simulation software tools must have the capability to support virtual experimentation.

Approximation Errors

Approximation errors occur when the quantities of interest are estimated through a numerical process.  This means that we get a numerical approximation to F, denoted by Fnum. It is necessary to show that the relative error in Fnum does not exceed an allowable value τall:

| \boldsymbol F - \boldsymbol F_{num} |/|\boldsymbol F| \le \tau_{all} \quad (2)

This is the requirement of solution verification. To meet this requirement, it is necessary to obtain a converging sequence of numerical solutions with respect to increasing degrees of freedom [6].
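The sketch below illustrates one way requirement (2) can be checked in practice. Assuming smooth, monotonic convergence of the form Fnum ≈ F + k N^(-β), three solutions obtained with geometrically increasing degrees of freedom are extrapolated to estimate F and the relative error of the last solution. The numbers are hypothetical, and the smoothness assumption must itself be verified.

import math

def estimate_limit(N, F):
    # Assume F_i ~ F_inf + k * N_i**(-beta), with N growing by a constant factor r.
    r = N[1] / N[0]
    Q = (F[1] - F[0]) / (F[2] - F[1])          # equals r**beta for geometric DOF growth
    beta = math.log(Q) / math.log(r)
    F_inf = F[2] + (F[2] - F[1]) / (Q - 1.0)   # extrapolated limit
    return F_inf, beta

N = [2_000, 8_000, 32_000]                     # degrees of freedom (hypothetical)
F = [98.1, 101.2, 102.0]                       # computed quantity of interest (hypothetical)
F_inf, beta = estimate_limit(N, F)
rel_err = abs(F_inf - F[-1]) / abs(F_inf)
print(f"extrapolated F = {F_inf:.2f}, estimated relative error = {rel_err:.3%}")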

Model Development Projects

The formulation of mathematical models is a creative, open-ended activity, guided by insight, experience, and personal preferences. Objective criteria are used to validate and rank mathematical models [4], [5]. 

Model development projects have been classified as progressive, stagnant, and improper [5]. A model development project is progressive if the domain of calibration is increasing, stagnant if the domain of calibration is not increasing, and improper if one or more algorithms are inconsistent with the formulation or the problem-solving method does not have the capability to estimate and control the numerical approximation errors in the quantities of interest. The most important objective of simulation governance is to provide favorable conditions for the evolutionary development of mathematical models and to ensure that the procedures of verification, validation and uncertainty quantification (VVUQ) are properly applied.

Not All Models Are Wrong, but Many of Them Are…

Box’s statement that all models are wrong is not correct. Models, developed under the discipline of VVUQ, can be relied on to make correct predictions within their domains of calibration. However, model development projects lacking the discipline of VVUQ tend to produce wrong models. And there are models, not tethered to scientific principles and methods, that are not even wrong.


References

[1] Box, G. E. P. Science and Statistics. Journal of the American Statistical Association, Vol. 71, No. 356, pp. 791-799, 1976.

[2] Hawking, S. and Mlodinow, L. The Grand Design. Random House 2010.

[3] Hawking, S. The nature of space and time.  Princeton University Press, 2010 (with Roger Penrose).

[4] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp. 75-86, 2021 [open source].

[5] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications. 162 pp. 206–214, 2024. Note: the publisher is providing free access to this article until May 22, 2024.  Anyone may download it without registration or fees by clicking on this link: https://authors.elsevier.com/c/1isOB3CDPQAe0b.

[6] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation. 2nd edition, John Wiley & Sons, Inc., 2021.


Certification by Analysis (CbA) – Are We There Yet?
March 7, 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


While reading David McCullough’s book “The Wright Brothers”, a fascinating story about the development of the first flying machine, this question occurred to me: Would the Wright brothers have succeeded if they had used substantially fewer physical experiments and relied on finite element modeling instead?  I believe that the answer is: no.  Consider what happened in the JSF program.

Lessons from the JSF Program

In 1992, eighty-nine years after the Wright brothers’ Flying Machine first flew at Kitty Hawk, the US government decided to fund the design and manufacture of a fifth-generation fighter aircraft that combines air-to-air, strike, and ground attack capabilities. Persuaded that numerical simulation technology was sufficiently mature, the decision-makers permitted the manufacturer to concurrently build and test the aircraft, known as Joint Strike Fighter (JSF). The JSF, also known as the F-35, was first flown in 2006. By 2014, the program was 163 billion dollars over budget and seven years behind schedule.

Two senior officers illuminated the situation in these words:

Vice Admiral David Venlet, the Program Executive Officer, quoted in AOL Defense in 2011 [1]: “JSF’s build and test was a miscalculation…. Fatigue testing and analysis are turning up so many potential cracks and hot spots in the Joint Strike Fighter’s airframe that the production rate of the F-35 should be slowed further over the next few years… The cost burden sucks the wind out of your lungs“.

Gen. Norton Schwartz, Air Force Chief of Staff, quoted in Defense News, 2012 [2]: “There was a view that we had advanced to a stage of aircraft design where we could design an airplane that would be near perfect the first time it flew. I think we actually believed that. And I think we’ve demonstrated in a compelling way that that’s foolishness.”

These officers believed that the software tools were so advanced that testing would confirm the validity of design decisions based on them. This turned out to be wrong. However, their mistaken belief was not entirely unreasonable: by the start of the JSF program, commercial finite element analysis (FEA) software products were more than 30 years old, so they could reasonably have assumed that the reliability of these products had greatly improved, as had the hardware systems and visualization tools capable of creating impressive color images, tacitly suggesting that the underlying methodology could guarantee the quality and reliability of the output quantities. Indeed, there were very significant advancements in the science of finite element analysis, which became a bona fide branch of applied mathematics in that period. The problem was that commercial FEA software tools did not keep pace with those important scientific developments.

There are at least two reasons for this: First, the software architecture of the commercial finite element codes was based on the thinking of the 1960s and 70s, when the theoretical foundations of FEA were not yet established. As a result, several limitations were incorporated. Those limitations kept code developers from incorporating later advancements, such as a posteriori error estimation, advanced discretization strategies, and stability criteria. Second, decision-makers who rely on computed information failed to specify the technical requirements that simulation software must meet, such as reporting not just the quantities of interest but also their estimated relative errors. To fulfill this key requirement, legacy FE software would have had to be overhauled to such an extent that only their nameplates would have remained the same.

Technical Requirements for CbA

Certification by Analysis (CbA) uses validated computer simulations to demonstrate compliance with regulations, replacing some traditional physical tests. CbA allows for exploring a wide range of design scenarios, accelerates innovation, lowers expenses, and upholds rigorous safety standards.  The key to CbA is reliability.  This means that the data generated by numerical simulation should be as trustworthy as if they were generated by carefully conducted physical experiments.   To achieve that goal, it is necessary to control two fundamentally different types of error; the model form error and the numerical approximation error, and use the models within their domains of calibration.

Model form errors occur because we invariably make simplifying assumptions when we formulate mathematical models.  For example, formulations based on the theory of linear elasticity include the assumptions that the stress-strain relationship is a linear function, independent of the size of the strain and that the deformation is so small that the difference between the equilibrium equations written on the undeformed and deformed configurations can be neglected.  As long as these assumptions are valid, the linear theory of elasticity provides reliable estimates of the response of elastic bodies to applied loads.  The linear solution also provides information on the extent to which the assumptions were violated in a particular model.  For example, if it is found that the strains exceed the proportional limit, it is advisable to check the effects of plastic deformation.  This is done iteratively until a convergence criterion is satisfied.  Similarly, the effects of large deformation can be estimated.  Model form errors are controlled by viewing any mathematical model as one in a sequence of hierarchic models of increasing complexity and selecting the model that is consistent with the conditions of the simulation.
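The following minimal sketch illustrates this hierarchic view in the simplest possible terms: the linear-elastic result is accepted only if its own output does not contradict the assumptions behind it; otherwise, the next model in the hierarchy is selected. The thresholds, stress values, and model names are hypothetical.

def select_model(max_stress_linear, proportional_limit, max_rotation, rotation_limit=0.05):
    # Accept the linear model only if its own output is consistent with its assumptions.
    needs_plasticity = max_stress_linear > proportional_limit
    needs_large_deformation = max_rotation > rotation_limit
    if not needs_plasticity and not needs_large_deformation:
        return "linear elasticity"
    if needs_plasticity and not needs_large_deformation:
        return "small-strain elastoplasticity"
    return "finite deformation with plasticity"

# Hypothetical quantities extracted from a verified linear solution:
print(select_model(max_stress_linear=310.0,    # MPa
                   proportional_limit=250.0,   # MPa
                   max_rotation=0.01))         # radians -> "small-strain elastoplasticity"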

Numerical errors are the errors associated with approximating the exact solution of mathematical problems, such as the equations of elasticity, Navier-Stokes, and Maxwell, and the method used to extract the quantities of interest from the approximate solution.   The goal of solution verification is to show that the numerical errors in the quantities of interest are within acceptable bounds.

The domain of calibration defines the intervals of physical parameters and input data on which the model was calibrated.  This is a relatively new concept, introduced in 2021 [3], that is also addressed in a forthcoming paper [4].  A common mistake in simulation is to use models outside of their domains of calibration.

Organizational Aspects

To achieve the level of reliability in numerical simulation, necessary for the utilization of CbA, management will have to implement simulation governance [5] and apply the protocols of verification, validation, and uncertainty quantification.

Are We There Yet?

No, we are not there yet. Although we have made significant progress in controlling errors in model form and numerical approximation, one very large obstacle remains: Management has yet to recognize that they are responsible for simulation governance, which is a critical prerequisite for CbA.


References

[1] Whittle, R. JSF’s Build and Test was ‘Miscalculation,’ Adm. Venlet Says; Production Must Slow. [Online] https://breakingdefense.com/2011/12/jsf-build-and-test-was-miscalculation-production-must-slow-v/ [Accessed 21 February 2024].

[2] Weisgerber, M. DoD Anticipates Better Price on Next F-35 Batch. Gannett Government Media Corporation, 8 March 2012. [Online]. https://tinyurl.com/282cbwhs [Accessed 22 February 2024].

[3] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp.75-86, 2021 [open source].

[4] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  To appear in Computers & Mathematics with Applications in 2024.  The manuscript is available on request.

[5] Szabó, B. and Actis, R. Planning for Simulation Governance and Management:  Ensuring Simulation is an Asset, not a Liability. Benchmark, July 2021.


Model Development in the Engineering Sciences
February 12, 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In the engineering sciences, mathematical models are based on the equations of continuum mechanics, heat flow, Maxwell, Navier-Stokes, or some combination of these. These equations have been validated and their domains of calibration are generally much larger than the expected domain of calibration of the model being developed. In the terminology introduced by Lakatos [1], the assumptions incorporated in these equations are called hardcore assumptions, and the assumptions incorporated in the other constituents of a model are called auxiliary hypotheses. Model development is concerned with the formulation, calibration, and validation of auxiliary hypotheses. 

Assume, for example, that we are interested in predicting the length of a small crack in a flight-critical aircraft component, caused by the application of a load spectrum. In this case, the mathematical model comprises the equations of continuum mechanics (the hardcore assumptions) and the following auxiliary hypotheses: (a) a predictor of crack propagation, (b) an algorithm that accounts for the statistical dispersion of the calibration data, and (c) an algorithm that accounts for the retardation effects of tensile overload events and the acceleration effects of compressive overload events.

The auxiliary hypotheses introduce parameters that have to be determined by calibration. In our example, we are concerned with crack propagation caused by variable-cycle loading. In linear elastic fracture mechanics (LEFM), for example, the commonly used predictor of the crack increment per cycle is the difference between the maximum and minimum positive values of the stress intensity factor within a load cycle, denoted by ΔK.

The relationship between crack increment per cycle, denoted by Δa, and the corresponding ΔK value is determined through calibration experiments. Various hypotheses are used to account for the cycle ratio. Additional auxiliary hypotheses account for the statistical dispersion of crack length and the retardation and acceleration events caused by loading sequence effects. The formulation of auxiliary hypotheses is a creative process. Therefore, model development projects must be open to new ideas. Many plausible hypotheses have been and can yet be proposed. Ideally, the predictive performance of competing alternatives would be evaluated using all of the qualified data available for calibration and the models ranked accordingly. Given the stochastic nature of experimental data, predictions should be in terms of probabilities of outcome. Consequently, the proper measure of predictive performance is the likelihood function. Ranking must also account for the size of the domain of calibration [2]. The volume of experimental information tends to increase over time. Consequently, model development is an open-ended activity encompassing subjective and objective elements.

Example: Calibration and Ranking Models of Crack Growth in LEFM

Let us suppose that we want to decide whether we should prefer the Walker [3] or the Forman [4] version of the predictor of crack propagation based on experimental data consisting of specimen dimensions, elastic properties, and tabular data of measured crack length (a) vs. the observed number of load cycles (N) for each cycle ratio (R). For the sake of simplicity, we assume constant cycle loading conditions.

The first step is to construct a statistical model for the probability density of crack length, given the number of cycles and the characteristics of the load spectrum. The second step is to extract the Δa/ΔN vs. ΔK data from the a vs. N data, where ΔK is determined from the specimen dimensions and loading conditions. The third step is to calibrate each of the candidate hypotheses. This involves setting the predictor’s parameters so that the likelihood of the observed data is maximized. This process is illustrated schematically by the flow chart shown in Fig. 1.

Figure 1: Schematic illustration of the calibration process.
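The following is a minimal sketch of the third step for the Walker predictor, assuming lognormal scatter of Δa/ΔN about the predicted value. The data values, the scatter model, and the function names are hypothetical placeholders, not qualified calibration data; in practice, the same routine would be repeated with the Forman predictor and the maximized likelihood values compared to obtain the ranking described below.

```python
# Minimal sketch: maximum-likelihood calibration of a Walker-type predictor.
# All data and parameter values below are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def walker_dadn(dK, R, C, m, gamma):
    """Walker predictor: da/dN = C * (dK / (1 - R)**(1 - gamma))**m."""
    return C * (dK / (1.0 - R) ** (1.0 - gamma)) ** m

def neg_log_likelihood(params, dK, R, dadn_obs):
    """Lognormal scatter about the predicted da/dN; sigma is the dispersion parameter."""
    logC, m, gamma, log_sigma = params
    pred = walker_dadn(dK, R, np.exp(logC), m, gamma)
    resid = np.log(dadn_obs) - np.log(pred)
    return -np.sum(norm.logpdf(resid, scale=np.exp(log_sigma)))

# Hypothetical (dK, R, da/dN) triples extracted from a vs. N records:
dK = np.array([8.0, 10.0, 12.0, 15.0, 9.0, 11.0, 14.0])        # MPa*sqrt(m)
R = np.array([0.1, 0.1, 0.1, 0.1, 0.5, 0.5, 0.5])
dadn = np.array([2e-8, 5e-8, 1e-7, 2.5e-7, 4e-8, 9e-8, 2e-7])  # m/cycle

res = minimize(neg_log_likelihood, x0=[np.log(1e-10), 3.0, 0.5, np.log(0.3)],
               args=(dK, R, dadn), method="Nelder-Mead")
print("maximized log-likelihood:", -res.fun)
# Running the same procedure with the Forman predictor and comparing the
# maximized likelihood values yields the ranking discussed in the text.
```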

Finally, the calibration process is documented and the domain of calibration is defined. The model that scored the highest likelihood value is preferred. The ranking is, of course, conditioned on the data available for calibration. As new data are acquired, the calibration process has to be repeated, and the ranking may change. It is also possible that the likelihood values are so close that the results do not justify preferring one model over another. Those models are deemed equivalent. Model development is an open-ended process. No one has the final say.

Opportunities for Improvement

To my knowledge, none of the predictors of crack propagation used in current professional practice have been put through a process of verification, validation, and uncertainty quantification (VVUQ) as outlined in the foregoing section. Rather, investigators tend to follow an unstructured process: they have an idea for a predictor and, using their own experimental data, show that, with a suitable choice of parameters, their definition of the predictor works. Typically, the domain of calibration is not defined explicitly but can be inferred from the documentation. The result is that the relative merit of the ideas put forward by various investigators is unknown, and the domains of calibration tend to be very small. In addition, no assurances are given regarding the quality of the data on which the calibration depends. In many instances, only the derived data (i.e., the Δa/ΔN vs. ΔK data), rather than the original records of observation (i.e., the a vs. N data), are made available. This leaves unanswered the question of whether the ΔK values were properly verified.

The situation is similar in the development of design rules for metallic and composite materials: Much work is being done without the disciplined application of VVUQ protocols.  As a result, most of that work is being wasted. 

For example, the World Wide Failure Exercise (WWFE), an international project with the mission to find the best method to accurately predict the strength of composite materials, failed to produce the desired result. See, for example, [5]. A highly disturbing observation was made by Professor Mike Hinton, one of the organizers of the WWFE, in his keynote address to the 2011 NAFEMS World Congress [6]: “The theories coded into current FE tools almost certainly differ from the original theory and from the original creator’s intent.” I do not believe that significant improvements in predictive performance have occurred since then.

In my view, progress will not be possible unless and until VVUQ protocols are adopted for model development projects.  These protocols play a crucial role in the evolutionary development of mathematical models. 


References

[1] Lakatos, I. The methodology of scientific research programmes, vol. 1, J. Worrall and G. Currie, Eds., Cambridge University Press, 1978.

[2] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp.75-86, 2021.

[3] Walker, K. The Effect of Stress Ratio During Crack Propagation and Fatigue for 2024-T3 and 7075-T6 Aluminum. Effects of Environment and Complex Load History on Fatigue Life, ASTM International, pp. 1–14, 1970. doi:10.1520/stp32032s, ISBN 9780803100329

[4] Forman, R. G., Kearney, V. E.  and Engle, R. M.  Numerical analysis of crack propagation in cyclic-loaded structures. Journal of Basic Engineering, pp. 459-463, September 1967.

[5] Christensen, R. M. Letter to World Wide Failure Exercise, WWFE-II. https://www.failurecriteria.com/lettertoworldwid.html

[6] Hinton, M. Failure Criteria in Fibre Reinforced Polymer Composites: Can any of the Predictive Theories be Trusted?  NAFEMS World Congress, Boston, May 2011.


The Demarcation Problem in the Engineering Sciences https://www.esrd.com/demarcation-problem-in-engineering-sciences/ https://www.esrd.com/demarcation-problem-in-engineering-sciences/#respond Thu, 01 Feb 2024 14:52:11 +0000 https://www.esrd.com/?p=30871 In engineering sciences, we classify mathematical models as ‘proper’ or ‘improper’ rather than ‘scientific’ or ‘pseudoscientific’. A model is said to be proper if it is consistent with the relevant mathematical theorems that guarantee the existence and, when applicable, the uniqueness of the exact solution. Otherwise, the model is improper. At present, the large majority of models used in engineering practice are improper. Following are examples of frequently occurring types of error, with brief explanations.]]>

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Generally speaking, philosophers are much better at asking questions than answering them. The question of distinguishing between science and pseudoscience, known as the demarcation problem, is one of their hotly debated issues. Some even argued that the demarcation problem is unsolvable [1]. That may well be true when the question is posed in its broadest generality. However, this question can and must be answered clearly and unequivocally in the engineering sciences.

That is because, in the engineering sciences, we rely on validated models of broad applicability, such as the theories of heat transfer and continuum mechanics, the Maxwell equations, and the Navier-Stokes equations.  Therefore, we can be confident that we are building on a solid scientific foundation. A solid foundation does not guarantee a sound structure, however. We must ensure that the algorithms used to estimate the quantities of interest are also based on solid scientific principles. This entails checking that there are no errors in the formulation, implementation, or application of models.

In engineering sciences, we classify mathematical models as ‘proper’ or ‘improper’ rather than ‘scientific’ or ‘pseudoscientific’. A model is said to be proper if it is consistent with the relevant mathematical theorems that guarantee the existence and, when applicable, the uniqueness of the exact solution. Otherwise, the model is improper. At present, the large majority of models used in engineering practice are improper. Following are examples of frequently occurring types of error, with brief explanations.

Conceptual Errors

Conceptual errors, also known as “variational crimes”, occur when the input data and/or the numerical implementation is inconsistent with the formulation of the mathematical model. For example, in the displacement formulation in two and three dimensions, point constraints are permitted only as rigid-body constraints when the body is in equilibrium, point forces are permitted only in the domain of secondary interest [2], and non-conforming elements and reduced integration are not permitted.

When conceptual errors are present, the numerical solution is not an approximation to the solution of the mathematical problem we have in mind, in which case it is not possible to estimate the errors of approximation. In other words, it is not possible to perform solution verification.

Model Form Errors

Model form errors are associated with the assumptions incorporated in mathematical models. Those assumptions impose limitations on the applicability of the model. Various approaches exist for estimating the effects of those limitations on the quantities of interest. The following examples illustrate two such approaches.

Example 1

Linear elasticity problems limit the stresses and strains to the elastic range, the displacement formulation imposes limitations on Poisson’s ratio, and pointwise stresses or strains are considered averages over a representative volume element. This is because the assumptions of continuum theory do not apply to real materials on the micro-scale.

Linear elasticity problems should be understood to be special cases of nonlinear problems that account for the effects of large displacements and large strains and employ one of many possible material laws. Having solved a linear problem, we can check whether and to what extent the simplifying assumptions were violated, and then we can decide if it is necessary to solve the appropriate nonlinear problem. This is the hierarchic view of models: each model is understood to be a special case of a more comprehensive model [2].
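As a minimal illustration of this hierarchic view, the sketch below checks a verified linear solution against its own simplifying assumptions. The threshold values, the field maxima, and the function name are hypothetical placeholders chosen only to show the logic of the decision, not a prescribed acceptance criterion.

```python
def linear_model_adequate(max_vm_stress, yield_stress, max_strain, max_rotation,
                          small_strain_limit=0.01, rotation_limit=0.05):
    """Return the list of violated simplifying assumptions; an empty list suggests
    that the linear model may be adequate for the quantities of interest."""
    violations = []
    if max_vm_stress > yield_stress:
        violations.append("stresses exceed the elastic range -> consider an elastic-plastic material law")
    if max_strain > small_strain_limit:
        violations.append("strains are not small -> consider a finite-strain formulation")
    if max_rotation > rotation_limit:
        violations.append("rotations are not small -> consider a large-displacement formulation")
    return violations

# Hypothetical maxima extracted from a verified linear solution (stress in MPa):
print(linear_model_adequate(max_vm_stress=410.0, yield_stress=350.0,
                            max_strain=0.004, max_rotation=0.01))
```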

Remark

Theoretically, one could make the model form error arbitrarily small by moving up the model hierarchy.  In practice, however, increasing complexity in model form entails an increasing number of parameters that have to be determined experimentally. This introduces uncertainties, which increase the dispersion of the predicted values of the quantities of interest.

Example 2

In many practical applications, the mathematical problem is simplified by dimensional reduction. Within the framework of linear elasticity, for instance, we have hierarchies of plate and shell models where the variation of displacements along the normal to the reference surface is restricted to polynomials or, in the case of laminated plates and shells, piecewise polynomials of low order [3]. In these models, boundary layer effects occur. The boundary layers are typically strong at free edges. These effects are caused by edge singularities that perturb the dimensionally reduced solution. The perturbation depends on the hierarchic order of the model. Typically, the goal of computation is strength analysis, that is, estimation of the values of predictors of failure initiation. It must be shown that the predictors are independent of the hierarchic order. This challenging problem is typically overlooked in finite element modeling. In the absence of an analytical tool capable of guaranteeing the accuracy of predictors of failure initiation, it is not possible to determine whether a design rule is satisfied or not.

Figure 1: T-joint of laminated plates.

Numerical Errors

Since the quantities of interest are computed numerically, it is necessary to verify that the numerical values are sufficiently close to their exact counterparts. The meaning of “sufficiently close” is context-dependent: For example, when formulating design rules, an interpretation of experimental information is involved. It has to be ensured that the numerical error in the quantities of interest is negligibly small in comparison with the size of the experimental errors. Otherwise, preventable uncertainties are introduced in the calibration process.
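The following sketch illustrates one common way to estimate the numerical error in a quantity of interest: computing the QoI on a sequence of progressively enriched discretizations and extrapolating to the limit, assuming an algebraic rate of convergence. The (DOF, QoI) values are hypothetical placeholders; this is an illustration of the idea, not any particular software's implementation.

```python
# Minimal sketch: estimate the limit of a QoI from three successive solutions,
# assuming Q_exact - Q_i is proportional to N_i**(-beta).
import math
from scipy.optimize import brentq

def estimated_limit(N, Q):
    """Estimate the limit of a monotonically converging QoI from three solutions."""
    (n1, n2, n3), (q1, q2, q3) = N, Q
    target = math.log(n2 / n1) / math.log(n3 / n2)

    def residual(q_ex):
        return (math.log((q_ex - q1) / (q_ex - q2)) /
                math.log((q_ex - q2) / (q_ex - q3)) - target)

    step = abs(q3 - q2)
    return brentq(residual, q3 + 1e-6 * step, q3 + 1000 * step)

# Hypothetical results of three successive refinements:
N = [1200, 2600, 5400]     # degrees of freedom
Q = [14.32, 14.51, 14.58]  # quantity of interest (e.g., a stress resultant)
q_limit = estimated_limit(N, Q)
print(f"estimated limit: {q_limit:.4f}")
print(f"estimated relative error of the last solution: "
      f"{100 * abs(q_limit - Q[-1]) / abs(q_limit):.2f}%")
```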

Realizing the Potential of Numerical Simulation

If we examine a representative sample of mathematical models used in the various branches of engineering, we find that the large majority of them suffer from one or more errors like those described above. In other words, the large majority of models used in engineering practice are improper. There are many reasons for this, chief among them the obsolete notion of finite element modeling, which remains deeply entrenched in the engineering community.

As noted in my earlier blog, entitled Obstacles to Progress, the art of finite element modeling evolved well before the theoretical foundations of finite element analysis were established. Engineering books, academic courses, and professional workshops emphasize the practical, intuitive aspects of finite element modeling and typically omit cautioning against variational crimes. Even some of the fundamental concepts and terminology needed for understanding the scientific foundations of numerical simulation are missing. For example, a senior engineer of a Fortune 100 company, with impeccable academic credentials earned more than three decades before, told me that, in his opinion, the exact solution is the outcome of a physical experiment. This statement revealed a lack of awareness of the meaning and relevance of the terms: verification, validation, and uncertainty quantification.

To realize the potential of numerical simulation, management will have to exercise simulation governance [4]. This will necessitate learning to distinguish between proper and improper modeling practices and establishing the technical requirements needed to ensure that both the model form and approximation errors in the quantities of interest are within acceptable bounds.


References

[1] Laudan L. The Demise of the Demarcation Problem. In: Cohen R.S., Laudan L. (eds) Physics, Philosophy and Psychoanalysis. Boston Studies in the Philosophy of Science, vol 76. Springer, Dordrecht, 1983.

[2] Szabó, B. and Babuška, I. Finite Element Analysis. Method, Verification, and Validation (Section 4.1). John Wiley & Sons, Inc., 2021.

[3] Actis, R., Szabó, B. and Schwab, C. Hierarchic models for laminated plates and shells. Computer Methods in Applied Mechanics and Engineering, 172(1-4), pp. 79-107, 1999.

[4] Szabó, B. and Actis, R. Simulation governance: Technical requirements for mechanical design. Computer Methods in Applied Mechanics and Engineering, 249, pp.158-168, 2012.


A Low-Hanging Fruit: Smart Engineering Simulation Applications https://www.esrd.com/low-hanging-fruit-smart-engineering-simulation-apps/ https://www.esrd.com/low-hanging-fruit-smart-engineering-simulation-apps/#respond Fri, 19 Jan 2024 14:30:27 +0000 https://www.esrd.com/?p=30799 Smart Engineering Simulation Applications, (SESA or Sim Apps), are software tools crafted by expert analysts to standardize recurring simulation tasks. Learn more about how Sim Apps and AI tools can work together to enhance engineering simulation workflows.]]>

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Smart Engineering Simulation Applications (SESA or Sim Apps) are software tools crafted by expert analysts to standardize recurring simulation tasks. Sim Apps encapsulate expert knowledge and provide important advantages: (a) Help streamline decisions pertaining to design and certification, (b) Ensure the reliability and consistency of results, (c) Provide transparency, (d) Make simulation tools accessible to users whose expertise is in other fields, and (e) Serve to accumulate and preserve tribal knowledge.

The key requirements of explainable artificial intelligence (XAI) are reliability and transparency. In the context of Sim Apps, reliability means: (a) The model form error is within acceptable tolerances, (b) the data and parameters are within the domain of calibration, and (c) code, data, and solution verification are performed. Transparency means the ability to independently and objectively assess the trustworthiness of predictions. Transparency implies traceability and auditability.

ESRD developed and successfully deployed Sim Apps designed to improve engineering workflows concerned with the application of design rules. This involves either selecting design parameters such that a structural or mechanical component has a positive margin of safety, given a particular loading condition, or certifying that a component, characterized by a set of parameters, conforms with the applicable design rules, given a loading condition.

Deployment Through AI Tools

Augmented Language Model (ALM) is a term that refers to various techniques aimed at addressing the limitations of standard Large Language Models (LLMs), such as Microsoft’s Bing and Google’s Bard. ALMs enhance the capabilities of LLMs by augmenting them with the ability to use software tools, as well as the information necessary to correctly understand the output of those tools. Properly designed and implemented Sim Apps are ideally suited for this purpose.
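As a minimal, hypothetical illustration of the augmentation idea, the sketch below shows how a Sim App might be described to a language model as a callable tool. The schema, names, and fields are placeholders; they do not represent any particular vendor's tool-calling interface or an actual ESRD product API.

```python
# Hypothetical descriptor of a Sim App exposed to an augmented language model.
# Field names and values are illustrative placeholders only.
pi_joint_sim_app = {
    "name": "composite_pi_joint_delamination",
    "description": ("Computes the energy release rate along the delamination fronts "
                    "of a composite Pi joint for a given loading condition, with "
                    "built-in solution verification."),
    "parameters": {
        "dimensions_mm": "overall joint dimensions and delamination length",
        "material": "lamina elastic properties",
        "stacking_sequence": "one of the permissible layups",
        "applied_load_kN": "loading condition",
    },
    "returns": ["energy_release_rate", "estimated_relative_error", "analysis_report"],
    "documentation": "link to the Sim App's theory manual and worked examples",
}
# The language model matches the user's problem statement to the description,
# collects the parameter values, invokes the Sim App, and interprets the verified
# output with the help of the attached documentation.
```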

Use Case

The following use case illustrates how users will interact with ALMs: The user states a problem in English professional language. For example: “I am interested in estimating the strength of a composite Pi joint with partial delamination. My goal is to estimate whether the delamination will increase, given a loading condition. I need to calculate the energy release rate along the delamination fronts.” Problems like this occur, for example, when making disposition decisions in condition-based maintenance. The ALM responds by recommending one or more expert-designed applications and providing an interface with access to the relevant documentation, and examples, as indicated in Fig. 1.

Figure 1: User interface of an expert-designed Sim App.

This figure shows a Sim App, developed by ESRD and available as a COTS tool. The user, who is not required to have expertise in numerical simulation, replaces the default model parameters with the applicable dimensions, material properties, stacking sequence, and loads. The parameters are restricted to permissible values. The solution process is initiated next. This process, performed automatically, produces the quantity of interest (QoI), in the present case the energy release rate associated with the delamination front, and an error estimation in terms of the QoI. In other words, solution verification is built into the application. Finally, the Sim App produces a report which contains the results and all records relevant to the analysis.

In formulating Sim Apps, the expert designer is responsible for controlling the model form errors and for setting limits on the parameters so that the Sim App can be used only within the domain of calibration of the model. Solution verification is provided by the Sim App and the user is advised of the size of the estimated approximation error in the quantities of interest. The conceptual and theoretical basis of this process is available in [1].
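The following is a minimal sketch of the two responsibilities just described: restricting inputs to the domain of calibration and advising the user when the estimated approximation error exceeds a tolerance. The parameter names, bounds, and tolerance are hypothetical placeholders, not values taken from an actual Sim App.

```python
# Hypothetical bounds defining the domain of calibration of a Sim App:
CALIBRATION_DOMAIN = {
    "flange_thickness_mm": (1.0, 10.0),
    "delamination_length_mm": (0.5, 50.0),
    "applied_load_kN": (0.0, 25.0),
}

def check_inputs(params, domain=CALIBRATION_DOMAIN):
    """Reject parameter sets that fall outside the domain of calibration."""
    out_of_range = {name: value for name, value in params.items()
                    if name in domain and not (domain[name][0] <= value <= domain[name][1])}
    if out_of_range:
        raise ValueError(f"Outside the domain of calibration: {out_of_range}")

def report_verification(qoi, estimated_relative_error, tolerance=0.02):
    """Advise the user of the estimated approximation error in the QoI."""
    status = "acceptable" if estimated_relative_error <= tolerance else "NOT acceptable"
    print(f"QoI = {qoi:.4g}, estimated relative error = "
          f"{100 * estimated_relative_error:.2f}% ({status}, tolerance {100 * tolerance:.0f}%)")

check_inputs({"flange_thickness_mm": 4.0, "delamination_length_mm": 12.5,
              "applied_load_kN": 18.0})          # passes silently
report_verification(qoi=0.315, estimated_relative_error=0.012)
```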

ESRD has developed and successfully deployed Sim Apps based on the scientific principles of finite element analysis (as opposed to the art of finite element modeling) that are fully capable of being used to augment LLMs. One should think of these applications as engineering handbooks on steroids: The essential difference between ESRD’s Sim Apps and classical engineering handbooks is that classical handbooks present results for parameterized problems that can be solved by classical methods. This severely limits the complexity of the problems; consequently, users rarely find the problems they are truly interested in solving. Although the classical solutions are highly accurate, these limitations force users to interpolate between entries, which typically leads to large model form errors.

In contrast, ESRD’s Sim Apps produce verified FEA solutions for parameterized models. This enables the standardization of more complex models, offering reliability and accuracy on par with traditional engineering handbooks.

The Role of Generative AI

Generative AI can be trained to connect problem statements with the available solutions. Training will require curating and labeling items in the documentation of Sim Apps. The importance of proper training is highlighted by the following example:

In July 2023, I asked Microsoft Bing the following question: Is it correct to say: “finite element modeling is to numerical simulation like astrology is to astronomy”?

Bing replied:

Yes, it is correct to say that “finite element modeling is to numerical simulation like astrology is to astronomy”. The analogy is used to describe the difference between the two methods. Finite element modeling is a subset of numerical simulation that uses a mesh of small elements to approximate the solution of a differential equation. Numerical simulation is a more general term that includes other methods such as finite difference and finite volume methods. Similarly, astrology is a subset of astronomy that uses celestial objects’ positions and movements to predict human affairs. Astronomy is a more general term that includes other methods such as astrophysics and cosmology.

My response:

You gave the correct answer “yes” but your explanation was wrong: Finite element modeling is not a subset of numerical simulation and astrology cannot be a subset of astronomy because astrology is not a scientific field.

Bing replied:

I’m sorry if my previous response was not helpful. I’m still learning so I appreciate your understanding and patience.

I asked the same question six months later. This time Bing correctly distinguished between astrology and astronomy but was confused about the relationship between finite element modeling and numerical simulation.  Clearly, there is much more training to do. For information on the present state of AI, and the direction of its evolution, I recommend an excellent lecture by Professor Michael Wooldridge on the future of generative AI [2].


References

[1] Szabó, B. and Babuška, I. Finite Element Analysis. Method, Verification, and Validation. John Wiley & Sons, Inc., 2021.

[2] Wooldridge, M. The Turing Lectures: The future of generative AI (December 2023).  Video recording.  https://www.youtube.com/watch?v=b76gsOSkHB4&t=3270s.


Questions About Singularities https://www.esrd.com/questions-about-singularities/ https://www.esrd.com/questions-about-singularities/#respond Fri, 05 Jan 2024 14:50:05 +0000 https://www.esrd.com/?p=30716 Engineering students and professionals alike want to know: are singularities "real"? And if so, when they appear in our solutions, what are we supposed to do with them? Regarding the practical question of what to do with singularities; we need to distinguish between cases where singularities are just nuisances and where a singularity is the object of simulation.]]>

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In my many years of teaching finite element analysis to engineering students, I had to answer many questions about singularities. The usual question, typically in a skeptical tone, was: Are singularities real? 

As I was teaching mechanical engineering students, I understood that the question was in the context of continuum mechanics, and the tone suggested that the student found the idea of a material withstanding infinitely large stresses to be utterly absurd. I also understood that he was really interested in knowing why singularities appear in our solutions, and, as a practical matter, what we are supposed to do about them. 

I seized such teachable moments to discuss the relationship between mathematical models and physical reality. I explained that a mathematical model is a precisely formulated idea about some specific aspect of physical reality, and should never be confused with reality. Except for marketing pronouncements, there is no such thing as ‘real-world simulation’ or ‘simulating reality’.

Mathematical models are based on certain assumptions, which impose limitations on the scope of applicability of the model. For example, models that incorporate the assumptions of linear elasticity require the strains to be much smaller than unity, assume that stress is proportional to strain, independent of the magnitude of strain, and assume that the material is homogeneous. As long as these assumptions are satisfied, the model will make reasonable predictions of deformation and stress distribution. However, the model will produce distorted images of reality when those limitations are exceeded. A common mistake in interpreting model predictions is not taking the limitations of the model into account.

Regarding the practical question of what to do with singularities; we need to distinguish between cases where singularities are just nuisances and where a singularity is the object of simulation.

Singularities as Nuisances

Singularities usually occur due to some minor simplification: For example, in a complicated mechanical or structural component, we may omit fillets, represent the applied forces by point loads, allow abrupt changes in constraint conditions, and so on. In other words, we make the a priori judgment that those simplifications will not significantly influence the quantities of interest.

It is useful to think of the solution domain Ω as consisting of a region of primary interest Ω1 and a region of secondary interest Ω2. The quantities of interest are defined on Ω1. The role of Ω2 is to provide the boundary conditions for Ω1. It is sufficient to ensure that the error, measured in the norm of the formulation, is small on Ω2, a condition that is usually not difficult to satisfy, even when minor features, such as fillets and details of load distribution are omitted. 

Using the terminology of structural mechanics, the problem is one of load-displacement relationships on the region of secondary interest, whereas, on the region of primary interest, it is one of strength relationships, that is, the exact values of the quantities of interest have to be finite numbers.

Singularities as the Objects of Simulation

Linear elastic fracture mechanics (LEFM) is an important sub-field of structural mechanics. The goal of the simulation is to predict the size of a crack in a structural or mechanical component, given an initial crack configuration and a load spectrum. Since crack propagation involves highly nonlinear, irreversible processes, it may seem surprising that the predictor can be determined from the stress field of a problem of linear elasticity. A brief explanation follows.

Consider a crack-like notch in an elastic plate, loaded by forces F, as shown in Fig. 1. At the notch tip, the solution of linear elasticity predicts infinitely high stresses. However, pointwise stresses (and strains) have no meaning for real materials. The smallest volume on which stress is meaningfully defined for a real material is the representative volume element (RVE). Failure theories are formulated with reference to stresses or strains averaged over RVEs, not points.

Figure 1. Crack-like notch. Notation.

Surrounding the notch tip is a process zone, bounded by the curve ΓPZ shown in Fig. 1. In the process zone, large dislocations and voids form and coalesce. These processes are not only outside of the scope of linear elasticity but outside of the scope of continuum mechanics as well. In the zone labeled ΩNL, continuum mechanics with non-linear material properties is applicable. On and outside of ΓNL, the linear theory of elasticity is applicable.

The fundamental modeling assumptions of LEFM are that (a) there is a small circle of radius R, on which the solutions of the nonlinear continuum problem and the linear elasticity problem are virtually identical, and (b) the entire process inside the circle is characterized by the stress intensity factor(s) [1]. These assumptions permit experimental determination of the relationship between crack increments and the difference between the stress intensity factors corresponding to the maximum and minimum load levels in a load cycle, denoted by ΔK. The prediction of crack lengths is based on such empirical relationships.
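As a minimal illustration of how such an empirical relationship is used, the sketch below integrates a Paris-type law, da/dN = C(ΔK)^m, for an idealized through-crack under constant-amplitude loading, with ΔK = Δσ√(πa). The coefficients, loading, and geometry are hypothetical placeholders, not calibrated material data, and retardation and acceleration effects are ignored.

```python
import math

def predict_crack_length(a0, delta_sigma, C, m, cycles, block=1000):
    """Integrate da/dN = C * (dK)**m in blocks of constant-amplitude load cycles,
    using the idealized center-crack relation dK = delta_sigma * sqrt(pi * a)."""
    a = a0
    for _ in range(0, cycles, block):
        dK = delta_sigma * math.sqrt(math.pi * a)   # MPa*sqrt(m) for a in m, stress in MPa
        a += C * dK ** m * block                    # crack growth over one block of cycles, in m
    return a

# Hypothetical inputs: initial half-crack length 2 mm, stress range 100 MPa
a_final = predict_crack_length(a0=0.002, delta_sigma=100.0, C=1.0e-11, m=3.0, cycles=200_000)
print(f"predicted crack length after 200,000 cycles: {1000 * a_final:.2f} mm")
```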

LEFM models have been validated under constant cycle loading for long cracks in thin plates.  Prediction of the growth of small cracks in 3-dimensional stress fields is much more difficult and several ad-hoc procedures are in use [2]. It is safe to say that none of those procedures have been validated under proper validation protocols, such as those outlined in [3].

Conceptual Issues

LEFM is based on the assumption that the driver of crack propagation is a function of the stress intensity factors defined on two-dimensional stress fields. There are two major conceptual problems:

  • The relationship between crack increments and ΔK can only be calibrated using 3-dimensional test specimens which have singular points where the crack front intersects the surface of the specimens.  Those singularities, not present in two dimensions, influence the relationship between ΔK and the crack increment, hence that relationship is not purely a material property but also depends on the thickness dimension of the test article.
  • Application of current LEFM models to very short cracks, such as those that occur at fastener holes in aircraft structures, is highly problematic since the stress field is very different from the two-dimensional stress field on which the stress intensity factors are defined. Other drivers of crack propagation, defined on three-dimensional stress fields, have not been explored. Rather, correction factors have been used. However, the domains of calibration of the correction factors are generally unknown.

We now have reliable methods available to address these issues using the procedures of verification, validation, and uncertainty quantification (VVUQ) [3]. It will take a substantial investment, however, to upgrade the predictive performance of the currently used  LEFM models.  


References

[1] Szabó, B. and Babuška, I. Finite Element Analysis. Method, Verification, and Validation. John Wiley & Sons, Inc., 2021.

[2] AFGROW DTD Handbook. https://afgrow.net/applications/DTDHandbook.

[3] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp. 75-86, 2021 (open source).

