Variational Crimes

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In Thomas Kuhn’s terminology, “pre-science” refers to a period of early development in a field of research [1]. During this period, there is no established explanatory framework (paradigm) mature enough to solve the main problems. In the case of the finite element method (FEM), the period of pre-science started when reference [2] was published in 1956 and ended in the early 1970s when scientific investigation began in the applied mathematics community. The publication of lectures at the University of Maryland [3] and the first mathematical book on FEM [4] marked the transition to what Kuhn termed “normal science”.

Two Views

Engineers view FEM as an intuitive modeling tool, whereas mathematicians see it as a method for approximating the solutions of partial differential equations cast in variational form. On the engineering side, the emphasis is on implementation and applications, while mathematicians are concerned with clarifying the conditions for stability and consistency, establishing error estimates, and formulating extraction procedures for various quantities of interest. 

From the beginning, a significant communication gap existed between the engineering and mathematical communities. Engineers did not understand why mathematicians would worry so much about the number of square-integrable derivatives, and mathematicians did not understand how engineers could find useful solutions even when the rules of variational calculus were violated. This gap widened over the years: On one hand, the art of finite element modeling became an integral part of engineering practice. On the other hand, the science of finite element analysis became an established branch of applied mathematics.

The Art of Finite Element Modeling

The art of finite element modeling has its roots in the pre-science period of finite element analysis, when engineers sought to extend the matrix methods of structural analysis, developed for trusses and frames, to complex structures such as plates, shells, and solids. The major finite element modeling software products in use today, such as NASTRAN, ANSYS, MARC, and Abaqus, are all based on the understanding of the finite element method (FEM) that existed before 1970. As long as the goal is to find force-displacement relationships, such as in load models of airframes and crash dynamics models of automobiles, finite element modeling can provide useful information. However, problems arise when the quantities of interest include (or depend on) the pointwise derivatives of the solution, as in strength analysis, where stresses and strains are of interest.

Misplaced Accusations

The first mathematical book on the finite element method [4] dedicated a chapter to violations of the rules of variational calculus in various implementations of the finite element method. The title of the chapter is “Variational Crimes,” a catchphrase that quickly caught on. The variational crimes are charged as follows:

  1. Using non-conforming elements: Non-conforming elements are those that do not satisfy the interelement continuity requirements of the variational formulation.
  2. Using numerical integration.
  3. Approximating domains and boundary conditions.

Item 1 is a serious crime; however, the motivations for committing it can be negated by properly formulating mathematical models. Items 2 and 3 are not crimes; they are essential features of the finite element method, and the associated errors can be easily controlled. The authors were thinking about asymptotic error estimators (what happens when the diameter of the largest element goes to zero) that did not account for items 2 and 3. They did not want to bother with the complications caused by numerical integration and the approximation of the domains and boundary conditions, so they declared those features to be crimes. This may have been a clever move but certainly not a helpful one.

Sherlock Holmes investigating variational crimes in Victorian London. Image generated by Microsoft Copilot.

Egregious Variational Crimes

The authors of reference [4] failed to mention the truly egregious variational crimes that are very common in the practice of finite element modeling today and will have to be abandoned if the reliability of predictions based on finite element computations is to be established:

  1. Using point constraints. Perhaps the most common variational crime is using point constraints for other than rigid body constraints. The finite element solution will converge to a solution that ignores the point constraints if such a solution exists; otherwise it will diverge. However, the rates of convergence or divergence are typically very slow, and for the discretizations used in practice the effect is hardly noticeable. So then, why should we worry about it? Either we are not approximating the solution to the problem we had in mind, or we are “approximating” a problem that has no solution. Finding an approximation to a solution that does not exist makes no sense, yet such occurrences are very common in finite element modeling practice. The apparent credibility of the finite element solution is owed to the near cancellation of two large errors: the conceptual error of using illegal constraints and the numerical error of not using a sufficiently fine discretization to make the conceptual error visible. A detailed explanation is available in reference [5], Section 5.2.8.
  2. Using point forces in 2D and 3D elasticity (or more generally in 2D and 3D problems). In linear elasticity, the exact solution does not have finite strain energy when point forces are applied. Hence, any finite element solution “approximates” a problem that does not have a solution in energy space.  Once again, divergence is very slow. When point forces are applied, element-by-element equilibrium is satisfied, and the effects of point forces are local, whereas the effects of point constraints are global. Generally, it is permissible to apply point forces in the region of secondary interest but not in the region of primary interest, where the goal is to compute quantities that depend on the derivatives, such as stresses and strains [5].
  3. Using reduced integration. At the time of the publication of their book [4], Strang and Fix could not have known about reduced integration, which was introduced a few years later [6]. Reduced integration was justified in typical finite element modeling fashion: low-order elements exhibit shear locking and Poisson ratio locking. Since the elements that lock “are too stiff,” it is possible to make them softer by using fewer than the necessary integration points. The consequence was that the elements exhibited spurious “zero energy modes,” called “hourglassing,” which had to be controlled by various tuning parameters. For example, in the Abaqus Analysis User’s Manual, C3D8RHT(S) is identified as an “8-node trilinear displacement and temperature, reduced integration with hourglass control, hybrid with constant pressure” element. Tinkering with the integration rules may be useful in the art of finite element modeling when the goal is to tune stiffness relationships (as, for example, in crash dynamics models), but it is an egregious crime in finite element analysis because it introduces a source of error that cannot be controlled by mesh refinement or by increasing the polynomial degree, and it makes a posteriori error estimation impossible.
  4. Reporting computed data that do not converge to a finite value. For example, if a domain has one or more sharp reentrant corners in the region of primary interest, then the maximum stress computed from a finite element solution will be a finite number but will tend to infinity as the degrees of freedom are increased. It is not meaningful to report such a computed value: the error is infinitely large. A convergence check of the kind sketched after this list exposes such cases.
  5. Tricks used when connecting elements based on different formulations. For example, connecting an axisymmetric shell element (3 degrees of freedom per node) with an axisymmetric solid element (2 degrees of freedom) involves tricks of various sorts, most of which are illegal.
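The following is a minimal sketch, not taken from the article, of the kind of acceptance check implied by item 4: a quantity of interest is computed from a sequence of solutions with increasing degrees of freedom and is reported only if the sequence settles. The function name, the tolerance, and the numerical values are illustrative assumptions.

```python
import numpy as np

def is_converging(dof, q, rel_tol=0.05):
    """Crude check of whether a reported quantity q (e.g., a peak stress)
    settles as the number of degrees of freedom (dof) is increased."""
    dof, q = np.asarray(dof, float), np.asarray(q, float)
    assert np.all(np.diff(dof) > 0), "solutions must be ordered by increasing dof"
    rel_change = np.abs(np.diff(q)) / np.abs(q[1:])
    # A quantity is reportable only if its relative change keeps shrinking
    # under refinement and the last change is within the tolerance.
    shrinking = np.all(np.diff(rel_change) < 0)
    return bool(shrinking and rel_change[-1] <= rel_tol), float(rel_change[-1])

# Made-up values of a corner stress: grows without bound under refinement.
print(is_converging([1e3, 4e3, 16e3, 64e3], [212.0, 268.0, 335.0, 419.0]))
# Made-up values of a fillet stress: settles toward a limit.
print(is_converging([1e3, 4e3, 16e3, 64e3], [180.0, 195.0, 199.0, 199.8]))
```

In the first (made-up) sequence the relative change does not shrink under refinement, so the value should not be reported; in the second it does, and the estimated relative change is well below the tolerance.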

Takeaway

The deeply ingrained practice of finite element modeling has its roots in the pre-science period of the development of the finite element method. To meet the current reliability expectations in numerical simulation, it will be necessary to routinely perform solution verification. This is possible only through the science of finite element analysis, respecting the rules of variational calculus. When thinking about digital transformation, digital twins, certification by analysis, and linking simulation with artificial intelligence tools, one must think about the science of finite element analysis and not the art of finite element modeling rooted in pre-1970s thinking.


References

[1] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[2] Turner, M.J., Clough, R.W., Martin, H.C. and Topp, L.J. Stiffness and deflection analysis of complex structures. Journal of the Aeronautical Sciences, 23(9), pp. 805-823, 1956.

[3] Babuška, I. and Aziz, A.K. Survey lectures on the mathematical foundations of the finite element method.  The mathematical foundations of the finite element method with applications to partial differential equations (A. K. Aziz, ed.) Academic Press, 1972.

[4] Strang, G. and Fix, G. An analysis of the finite element method. Prentice Hall, 1973.

[5] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation, 2nd ed., Hoboken, NJ: John Wiley & Sons, Inc., 2021.

[6] Hughes, T.J., Cohen, M. and Haroun, M. Reduced and selective integration techniques in the finite element analysis of plates. Nuclear Engineering and Design, 46(1), pp. 203-222, 1978.


Simulation Governance

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Digital transformation, digital twins, certification by analysis, and AI-assisted simulation projects are generating considerable interest in engineering communities. For these initiatives to succeed, the reliability of numerical simulations must be assured. This can happen only if management understands that simulation governance is an essential prerequisite for success and undertakes to establish and enforce quality control standards for all simulation projects.

The idea of simulation governance is so simple that it is self-evident: Management is responsible for the exercise of command and control over all aspects of numerical simulation. The formulation of technical requirements is not at all simple, however. A notable obstacle is the widespread confusion of the practice of finite element modeling with numerical simulation. This misconception is fueled by marketing hyperbole, falsely suggesting that purchasing a suite of software products is equivalent to outsourcing numerical simulation.  

At present, a very substantial unrealized potential exists in numerical simulation. Simulation technology has matured to the point where management can realistically expect the reliability of predictions based on numerical simulations to match the reliability of observations in physical experimentation. This will require management to upgrade simulation practices through exercising simulation governance.

The Kuhn Cycle

The development of numerical simulation technology falls under the broad category of scientific research programs, which encompass model development projects in the engineering and applied sciences as well. By and large, these programs follow the pattern of the Kuhn Cycle [1] illustrated schematically in Fig. 1 in blue:

Figure 1: Schematic illustration of the Kuhn cycle.

A period of pre-science is followed by normal science. In this period, researchers have agreed on an explanatory framework (paradigm) that guides the development of their models and algorithms.  Program (or model) drift sets in when problems are identified for which solutions cannot be found within the confines of the current paradigm. A program crisis occurs when the drift becomes excessive and attempts to remove the limitations are unsuccessful. Program revolution begins when candidates for a new approach are proposed. This eventually leads to the emergence of a new paradigm, which then becomes the explanatory framework for the new normal science.

The Development of Finite Element Analysis

The development of finite element analysis followed a similar pattern. The period of pre-science began in 1956 and lasted until about 1970. In this period, engineers who were familiar with the matrix methods of structural analysis were trying to extend those methods to stress analysis. The formulation of the algorithms was based on intuition; testing was based on trial and error, and arguing from the particular to the general (a logical fallacy) was common.

Normal science began in the early 1970s when the mathematical foundations of finite element analysis were addressed in the applied mathematics community. By that time, the major finite element modeling software products in use today were under development. Those development efforts were largely motivated by the needs of the US space program. The developers adopted a software architecture based on pre-science thinking. I will refer to these products as legacy FE software: For example, NASTRAN, ANSYS, MARC, and Abaqus are all based on the understanding of the finite element method (FEM) that existed before 1970.

Mathematical analysis of the finite element method identified a number of conceptual errors. However, the conceptual framework of mathematical analysis and the language used by mathematicians were foreign to the engineering community, and there was no meaningful interaction between the two communities.

The scientific foundations of finite element analysis were firmly established by 1990, and finite element analysis became a branch of applied mathematics. This means that, for a very large class of problems that includes linear elasticity, the conditions for stability and consistency were established, estimates were obtained for convergence rates, and solution verification procedures were developed, as were elegant algorithms for superconvergent extraction of quantities of interest such as stress intensity factors. I was privileged to have worked closely with Ivo Babuška, an outstanding mathematician who is rightfully credited for many key contributions.

Normal science continues in the mathematical sphere, but it has no influence on the practice of finite element modeling. As indicated in Fig. 1, the practice of finite element modeling is rooted in the pre-science period of finite element analysis, and having bypassed the period of normal science, it had reached the stage of program crisis decades ago.

Evidence of Program Crisis

The knowledge base of the finite element method in the pre-science period was a small fraction of what it is today. The technical differences between finite element modeling and numerical simulation are addressed in one of my earlier blog posts [2]. Here, I note that decision-makers who have to rely on computed information have reasons to be disappointed. For example, the Air Force Chief of Staff,  Gen. Norton Schwartz, was quoted in Defense News, 2012 [3] saying: “There was a view that we had advanced to a stage of aircraft design where we could design an airplane that would be near perfect the first time it flew. I think we actually believed that. And I think we’ve demonstrated in a compelling way that that’s foolishness.”

General Schwartz expected that the reliability of predictions based on numerical simulation would be similar to the reliability of observations in physical tests. This expectation was not unreasonable considering that by that time, legacy FE software tools had been under development for more than 40 years. What the general did not know was that, while the user interfaces greatly improved and impressive graphic representations could be produced, the underlying solution methodology was (and still is) based on pre-1970s thinking.

As a result, efforts to integrate finite element modeling with artificial intelligence and to establish digital twins based on finite element modeling will surely end in failure.

Paradigm Change Is Necessary

Paradigm change is never easy. Max Planck observed: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” This is often paraphrased, saying: “Progress occurs one funeral at a time.” Planck was referring to the foundational sciences and changing academic minds.  The situation is more challenging in the engineering sciences, where practices and procedures are often deeply embedded in established workflows and changing workflows is typically difficult and expensive.

What Should Management Do?

First and foremost, management should understand that simulation is one of the most abused words in the English language. Furthermore:

  • Treat any marketing claim involving simulation with an extra dose of skepticism. Prior to undertaking projects in the areas of digital transformation, certification by analysis, digital twins, and AI-assisted simulation, ensure that the mathematical models produce reliable predictions.
  • Recognize the difference between finite element modeling and numerical simulation.
  • Understand that mathematical models produce reliable predictions only within their domains of calibration.
  • Treat model form and numerical approximation errors separately and require error control in the formulation and application of mathematical models.
  • Do not accept computed data without error metrics.
  • Understand that model development projects are open-ended.
  • Establish conditions favorable for the evolutionary development of mathematical models.
  • Become familiar with the concepts and terminology in reference [4]. For additional information on simulation governance, I recommend ESRD’s website.


References

[1] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[2] Szabó B. Why Finite Element Modeling is Not Numerical Simulation? ESRD Blog. November 2, 2023. https://www.esrd.com/why-finite-element-modeling-is-not-numerical-simulation/.

[3] Weisgerber, M. DoD Anticipates Better Price on Next F-35 Batch, Gannett Government Media Corporation, 8 March 2012. [Online]. Available: https://tinyurl.com/282cbwhs.

[4] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications. Vol. 162, pp. 206–214, 2024. 


Digital Transformation

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Digital transformation is a multifaceted concept with plenty of room for interpretation. Its common theme emphasizes the proactive adoption of digital technologies to reshape business practices with the goal of gaining a competitive edge. The scope, timeline, and resource allocation of digital transformation projects depend on the specific goals and objectives. Here, I address digital transformation in the engineering sciences, focusing on numerical simulation.

Digital Technologies in the Engineering Sciences

Digital technologies have been integrated into the engineering sciences since the 1950s.  The adoption process has not been uniform across all disciplines. Some fields (like aerospace) adopted technologies early, while others were slower to change. The development and adoption of these technologies are ongoing. Engineering today is increasingly digital, and innovations are constantly changing the way engineers approach their work. Here are some important milestones:

Early Adoption (1950s-1970s)

  • Mainframe computers were used for engineering calculations that would have been impossible or extremely time-consuming to perform by hand.
  • Numerical control (NC) machines used punched tape or cards to control tool movements, streamlining machining processes.
  • Early Computer-Aided Design (CAD) systems revolutionized drafting in the 1960s. They allowed engineers to create and manipulate drawings on a computer, making design iterations much faster than previously possible.

Period of Rapid Growth (1980s-1990s)

  • Affordable Personal Computers (PCs) made computing power accessible to individual engineers and small firms.
  • Development of CAD software brought 3D modeling from specialized applications into mainstream design.
  • Finite Element Modeling software became commercially available, allowing engineers to perform structural and strength calculations.
  • The mathematical foundations of the finite element method (FEM) were established, and finite element analysis (FEA) became a branch of Applied Mathematics.

Post-Millennial Development  (2000s-Present)

  • Cloud-based solutions offer scalable computing power and collaboration tools, making complex calculations accessible without massive hardware investment.
  • Building Information Modeling (BIM) revolutionized the architecture, engineering, and construction (AEC) industries.
  • Internet of Things (IoT): Networked sensors and devices provide engineers with real-time data to monitor structures, predict maintenance needs, and optimize operations.
  • Additive Manufacturing (3D Printing) allows for the rapid creation of complex prototypes and even functional end-use parts.

Given that digital technologies have been successfully integrated into engineering practice, it may appear that not much else needs to be done. However, important challenges remain, and there are many opportunities for improvement. This is discussed next.

Outlook: Opportunities and Challenges

Bearing in mind that the primary goal of digital transformation is to enhance competitiveness, in the field of numerical simulation, this translates to improving the predictive performance of mathematical models. Ideally, we aim to reach a reliability level in model predictions comparable to that of physical experimentation. From the technological point of view, this goal is achievable: We have the theoretical understanding of how to maximize the predictive performance of mathematical models through the application of verification, validation, and uncertainty quantification procedures. Furthermore, advancements in explainable artificial intelligence (XAI) technology can be utilized to optimize the management of numerical simulation projects so as to maximize their reliability and effectiveness.  

The primary challenge in the field of engineering sciences is that further progress in digital transformation will require fundamental changes in how numerical simulation is currently understood by the engineering community and how it is practiced in industrial settings. It is essential to keep in mind the differences between finite element modeling and numerical simulation. I explained the reasons for this in an earlier blog post [1]. The art of finite element modeling will have to be replaced by the science of finite element analysis, and the verification, validation, and uncertainty quantification (VVUQ) procedures will have to be applied [2].

Paradoxically, the successful early integration of finite element modeling practices and software tools into engineering workflows now impedes attempts to utilize technological advances that occurred after the 1970s. The software architecture of legacy finite element codes was substantially set by 1970, based on the understanding of the finite element method that existed at that time. Limitations of the software architecture prevented subsequent advances, such as a posteriori error estimation in terms of the quantities of interest and control of model form errors, both of which are essential for meeting the reliability requirements in numerical simulation. Abandoning finite element modeling practices and embracing the methodology of numerical simulation technology is a major challenge for the engineering community.

The “I Believe” Button

An ANSYS blog [3] tells the story of a presentation made to an A&D executive. The presentation was to make a case for transforming his department using digital engineering. At the end of the presentation, the executive pointed to a coaster on his desk. “See this? That’s the ‘I believe’ button. I can’t hit it. I just can’t hit it. Help me hit it.” Clearly, the executive was asking for convincing evidence that the computed information was sufficiently reliable to support decision-making in his department. Put in another way, he did not have the courage to sign the blueprint on the basis of data generated by digital engineering. What it takes to gather such courage was addressed in one of my earlier blogs [4]. Reliability considerations significantly influence the implementation of simulation process data management (SPDM).

Change Is Necessary

The frequently cited remark by W. Edwards Deming: “Change is not obligatory, but neither is survival,” reminds us of the criticality of embracing change.


References

[1] Szabó B. Why Finite Element Modeling is Not Numerical Simulation? ESRD Blog. November 2, 2023.
https://www.esrd.com/why-finite-element-modeling-is-not-numerical-simulation/
[2] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications. 162 pp. 206–214, 2024. The publisher is providing free access to this article until May 22, 2024. Anyone may download it without registration or fees by clicking on this link:
https://authors.elsevier.com/c/1isOB3CDPQAe0b
[3] Bleymaier, S. Hit the “I Believe” Button for Digital Transformation. ANSYS Blog. June 14, 2023. https://www.ansys.com/blog/believe-in-digital-transformation
[4] Szabó B. Where do you get the courage to sign the blueprint? ESRD Blog. October 6, 2023.
https://www.esrd.com/where-do-you-get-the-courage-to-sign-the-blueprint/


Digital Twins

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The idea of a digital twin originated at NASA in the 1960s as a “living model” of the Apollo program. When Apollo 13 experienced an oxygen tank explosion, NASA utilized multiple simulators and extended a physical model of the spacecraft to include digital simulations, creating a digital twin. This twin was used to analyze the events leading up to the accident and investigate ideas for a solution. The term “digital twin” was coined by NASA engineer John Vickers much later. While the term is commonly associated with modeling physical objects, it is also employed to represent organizational processes. Here, we consider digital twins of physical entities only.

Digital Twins: An Overview

An overview of the current understanding of the idea of digital twins at NASA is available in a keynote presentation delivered in 2021 [1]. This presentation contains the following quote from reference [2]:

“The Digital Twin (DT) is a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. At its optimum, any information that could be obtained from inspecting a physical manufactured product can be obtained from its Digital Twin.”

I think that this is closer to being an aspirational statement than a functional definition of digital twins.  On the positive side, this statement articulates that the reliability of the results of the simulation should be comparable to that of a physical experiment. Note that this is possible only when mathematical models are used within their domains of calibration [3]. On the negative side, the description of a product “from the micro atomic level to the macro geometrical level” is neither necessary nor feasible. The goal of a simulation project is not to describe a physical system from A to Z but rather to predict the quantities of interest, such as expected fatigue life, margins of safety, limit load, deformation, natural frequency, and the like. In view of this, I propose the following definition:

“A Digital Twin (DT) is a set of mathematical models formulated to predict quantities of interest that characterize the functioning of a potential or actual manufactured product. When the mathematical models are used within their domains of calibration, the reliability of the predictions is comparable to that of a physical experiment.”

The set of mathematical models may comprise a single model of a component or several interacting component models. The motivation for creating digital twins typically comes from the requirements of product lifecycle management: High-value assets are monitored throughout their lifecycles, and the models that constitute a digital twin are updated with new data as they become available. This fits into the framework of model development projects discussed in one of my blogs, “Model Development in the Engineering Sciences,” and in greater detail in reference [3]. An essential attribute of any mathematical model is its domain of calibration.

Example 1: Component Twin

The Single Fastener Analysis Tool (SFAT) is a smart application engineered for comprehensive analyses of single and double shear joints of metal or composite plates. It also serves as an example of a component twin and highlights the technical challenges involved in the development of digital twins.

Figure 1. Single Fastener Analysis Tool (SFAT). Examples of use cases.

SFAT offers the flexibility to model laminates either as ply-by-ply or homogenized entities. It can accommodate various types of fastener heads, such as protruding and countersunk, including those with hollow shafts. It is capable of supporting different fits such as neat, interference, and clearance.

SFAT also provides additional input options to account for factors like shimmed and unshimmed gaps, bushings, and washers. The application allows for the specification of shear load and fastener pre-load as loading conditions. It provides estimates of the errors of approximation in terms of the quantities of interest.

Example 2: Asset Twin

A good example of asset twins is the structural health monitoring of large concrete dams. Following the collapse of the Malpasset dam in Provence, France, in 1959, the World Bank mandated that all dam projects seeking financial backing must undergo modeling and testing at the Experimental Institute for Models and Structures in Bergamo, Italy (ISMES). Subsequently, ISMES was commissioned to develop a system that would monitor the structural health of large dams. The dams would be instrumented, and a numerical simulation framework, now called a digital twin, would be used to evaluate anomalies indicated by the instruments.

It was understood that numerical approximation errors would have to be controlled to small tolerances to ensure that they were negligibly small in comparison with the errors in measurements. To perform the calculations, a finite element program based on the p-version was written at ISMES in the second half of the 1970s under the direction of Dr. Alberto Peano, my former D.Sc. student. That program is still in use today under the name FIESTA [4].

Simulation Governance: Essential for Digital Twin Creation

Creating digital twins encompasses all aspects of model development, necessitating separate treatment of the model form and approximation errors. In other words, the verification, validation, and uncertainty quantification (VVUQ) procedures have to be applied. The model must be updated and recalibrated when new ideas are proposed or new data become available. The only difference is that in the case of digital twins, the updates involve individual object-specific data collected over the life span of the physical object.

Model development projects are classified as progressive, stagnant, and improper. A model development project is progressive if the domain of calibration is increasing, stagnant if it is not increasing, and improper if the problem-solving machinery is not consistent with the formulation of the mathematical model or lacks the ability to support solution verification [3]. The goal of simulation governance is to ensure that digital twin projects are progressive. Unfortunately, owing to a lack of simulation governance, the large majority of model development projects are improper, and hence, most digital twins fail to meet the required standards of reliability.


References

[1]  Allen, D. B. Digital Twins and Living Models at NASA. Keynote presentation at the ASME Digital Twin Summit. November 3, 2021.

[2] Grieves, M. and Vickers, J. Digital Twin: Mitigating Unpredictable, Undesirable Emergent Behavior in Complex Systems. In: Transdisciplinary Perspectives on Complex Systems. F-J. Kahlen, S. Flumerfelt and A. Alves (eds) Springer International Publishing, Switzerland, pp. 85-113, 2017.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications. 162 pp. 206–214, 2024.  The publisher is providing free access to this article until May 22, 2024.  Anyone may download it without registration or fees by clicking on this link: https://authors.elsevier.com/c/1isOB3CDPQAe0b

[4] Angeloni, P., Boccellato, R., Bonacina, E., Pasini, A., Peano, A.  Accuracy Assessment by Finite Element P-Version Software. In: Adey, R.A. (ed) Engineering Software IV. Springer, Berlin, Heidelberg, 1985. https://doi.org/10.1007/978-3-662-21877-8_24


Not All Models Are Wrong

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


I never understood the statement: “All models are wrong, but some are useful”, attributed to George E. P. Box, a statistician, quoted in many papers and presentations. If that were the case, why should we try to build models and how would we know when and for what purposes they may be useful? We construct models with the objective of making reliable predictions, the degree of reliability being comparable to that of a physical experiment.

Consider, for example, the problem in Fig. 1 showing a sub-assembly of an aircraft structure. The quantity of interest is the margin of safety: Given multiple load conditions and design criteria, estimate the minimum value of the margin of safety and show that the numerical approximation error is less than 5%.   We must have sufficient reason to trust the results of simulation tasks like this.

Figure 1: Sub-assembly of an aircraft structure.

Trying to understand what George Box meant, I read the paper in which he supposedly made the statement that all models are wrong [1], but I did not find it very enlightening. Nor did I find that statement in its often-quoted form. What I found is this non sequitur: “Since all models are wrong the scientist must be alert to what is importantly wrong.” This makes the matter much more complicated: now we have to classify wrongness into two categories, important and unimportant. By what criteria? That is not explained.

Box did not have the same understanding as we do of what a mathematical model is. This is evidenced by the sentence: “In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless.” Our goal is not to model the “real world”, a vague concept, but to model specific aspects of physical reality, the quantities of interest having been clearly defined as, for example, in the case of the problem shown in Fig. 1. Our current understanding of mathematical models is based on the concept of model-dependent realism, which was developed well after Box’s 1976 paper was written.

Model-Dependent Realism

The term model-dependent realism was introduced by Stephen Hawking and Leonard Mlodinow in their 2010 book, The Grand Design [2] but the distinction between physical reality and ideas of physical reality is older. For example, Wolfgang Pauli wrote in 1948: “The layman always means, when he says `reality’ that he is speaking of something self-evidently known; whereas to me it seems the most important and exceedingly difficult task of our time is to work on the construction of a new idea of reality.” [From a letter to Markus Fierz.]

If two different models describe a set of physical phenomena equally well, then both models are equally valid: it is meaningless to speak about “true reality”. In Hawking’s own words [3]: “I take the positivist viewpoint that a physical theory is just a mathematical model and that it is meaningless to ask whether it corresponds to reality. All that one can ask is that its predictions should be in agreement with observation.” In other words, mathematical models are, essentially, phenomenological models.

What is a Mathematical Model?

A mathematical model is an operator that transforms one set of data D, the input, into another set, the quantities of interest F. In shorthand notation we have:

$$\boldsymbol{D} \xrightarrow[(I,\,\boldsymbol{p})]{} \boldsymbol{F}, \quad (\boldsymbol{D}, \boldsymbol{p}) \in \mathbb{C} \quad (1)$$

where the right arrow represents the mathematical model. The letters I and p under the right arrow indicate that the transformation involves an idealization (I) as well as parameters (physical properties) p that are determined through calibration experiments. Restrictions on D and p define the domain of calibration ℂ. The domain of calibration is an essential feature of any mathematical model [4], [5].
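As a minimal illustration of equation (1), and not code taken from references [4] or [5], the sketch below wraps a mathematical model as a callable that maps input data D and calibrated parameters p to a quantity of interest F and refuses to extrapolate outside a box-shaped domain of calibration. The class name, the bar-elongation example, and the parameter intervals are assumptions chosen for demonstration.

```python
from dataclasses import dataclass

@dataclass
class CalibratedModel:
    """A mathematical model D -> F with an explicit domain of calibration."""
    predict: callable          # implements the operator (idealization I)
    params: dict               # calibrated physical parameters p
    calibration_box: dict      # allowed interval for each input and parameter

    def __call__(self, **inputs):
        # Refuse to make predictions outside the domain of calibration.
        for name, value in {**inputs, **self.params}.items():
            lo, hi = self.calibration_box[name]
            if not (lo <= value <= hi):
                raise ValueError(f"{name}={value} is outside the domain "
                                 f"of calibration [{lo}, {hi}]")
        return self.predict(self.params, **inputs)

# Hypothetical example: axial elongation of a bar, F = P*L/(E*A).
bar = CalibratedModel(
    predict=lambda p, P, L, A: P * L / (p["E"] * A),
    params={"E": 70e9},                       # calibrated Young's modulus [Pa]
    calibration_box={"E": (60e9, 80e9), "P": (0.0, 5e4),
                     "L": (0.1, 2.0), "A": (1e-5, 1e-3)},
)
print(bar(P=1e4, L=1.0, A=1e-4))   # within the calibration domain
# bar(P=1e6, L=1.0, A=1e-4)        # would raise: load outside the calibration box
```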

Most mathematical models used in engineering have the property that the quantities of interest F depend continuously on D and p. This means that small changes in D and/or p will result in correspondingly small changes in F, which is a prerequisite for making reliable predictions.

To ensure that the predictions based on a mathematical model are reliable, it is necessary to control two types of error: the model form error and the numerical approximation error.

Model Form Errors

The formulation of mathematical models invariably involves making restrictive assumptions such as neglecting certain geometric features, idealizing the physical properties of the material, idealizing boundary conditions, neglecting the effects of residual stresses, etc. Therefore, any mathematical model should be understood to be a special case of a more comprehensive model. This is the hierarchic view of models.

To test whether a restrictive assumption is acceptable for a particular application, it is necessary to estimate the influence of that assumption on the quantities of interest and, if necessary, revise the model. An exploration of the influence of modeling assumptions on the quantities of interest is called virtual experimentation [6]. Simulation software tools must have the capability to support virtual experimentation.

Approximation Errors

Approximation errors occur when the quantities of interest are estimated through a numerical process. This means that we get a numerical approximation to F, denoted by F_num. It is necessary to show that the relative error in F_num does not exceed an allowable value τ_all:

$$\left| \boldsymbol{F} - \boldsymbol{F}_{num} \right| / \left| \boldsymbol{F} \right| \le \tau_{all} \quad (2)$$

This is the requirement of solution verification. To meet this requirement, it is necessary to obtain a converging sequence of numerical solutions with respect to increasing degrees of freedom [6].
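Here is a minimal sketch, not taken from reference [6], of one way requirement (2) can be checked in practice: three values of the quantity of interest, computed on successively refined discretizations, are extrapolated with Aitken's Δ² formula (which presumes the error shrinks by a roughly constant factor from one solution to the next), and the relative error of the finest value is estimated against the extrapolated limit. The function name, tolerance, and numerical values are illustrative assumptions.

```python
def estimate_relative_error(F1, F2, F3, tau_all=0.05):
    """Aitken delta-squared extrapolation of three quantities of interest
    computed on successively refined discretizations (F1 coarsest, F3 finest).
    Returns the extrapolated limit, the estimated relative error of F3,
    and whether the solution-verification tolerance tau_all is met."""
    denom = F1 - 2.0 * F2 + F3
    if denom == 0.0:
        raise ValueError("sequence is not suitable for extrapolation")
    F_limit = F3 - (F3 - F2) ** 2 / denom        # estimate of the exact value F
    rel_err = abs(F_limit - F3) / abs(F_limit)   # estimate of |F - F_num| / |F|
    return F_limit, rel_err, rel_err <= tau_all

# Illustrative (made-up) values of a stress concentration factor on three meshes:
print(estimate_relative_error(2.91, 3.02, 3.05))
```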

Model Development Projects

The formulation of mathematical models is a creative, open-ended activity, guided by insight, experience, and personal preferences. Objective criteria are used to validate and rank mathematical models [4], [5]. 

Model development projects have been classified as progressive, stagnant, and improper [5]. A model development project is progressive if the domain of calibration is increasing, stagnant if the domain of calibration is not increasing, and improper if one or more algorithms are inconsistent with the formulation or the problem-solving method does not have the capability to estimate and control the numerical approximation errors in the quantities of interest. The most important objective of simulation governance is to provide favorable conditions for the evolutionary development of mathematical models and to ensure that the procedures of verification, validation and uncertainty quantification (VVUQ) are properly applied.

Not All Models Are Wrong, but Many of Them Are…

Box’s statement that all models are wrong is not correct. Models, developed under the discipline of VVUQ, can be relied on to make correct predictions within their domains of calibration. However, model development projects lacking the discipline of VVUQ tend to produce wrong models. And there are models, not tethered to scientific principles and methods, that are not even wrong.


References

[1] Box, G. E. P. Science and Statistics. Journal of the American Statistical Association, Vol. 71, No. 356, pp. 791-799, 1976.

[2] Hawking, S. and Mlodinow, L. The Grand Design. Random House 2010.

[3] Hawking, S. The nature of space and time.  Princeton University Press, 2010 (with Roger Penrose).

[4] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp. 75-86, 2021 [open source].

[5] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications. 162 pp. 206–214, 2024. Note: the publisher is providing free access to this article until May 22, 2024.  Anyone may download it without registration or fees by clicking on this link: https://authors.elsevier.com/c/1isOB3CDPQAe0b.

[6] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation, 2nd ed., John Wiley & Sons, Inc., 2021.


Certification by Analysis (CbA) – Are We There Yet?

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


While reading David McCullough’s book “The Wright Brothers”, a fascinating story about the development of the first flying machine, this question occurred to me: Would the Wright brothers have succeeded if they had used substantially fewer physical experiments and relied on finite element modeling instead?  I believe that the answer is: no.  Consider what happened in the JSF program.

Lessons from the JSF Program

In 1992, eighty-nine years after the Wright brothers’ Flying Machine first flew at Kitty Hawk, the US government decided to fund the design and manufacture of a fifth-generation fighter aircraft that combines air-to-air, strike, and ground attack capabilities. Persuaded that numerical simulation technology was sufficiently mature, the decision-makers permitted the manufacturer to concurrently build and test the aircraft, known as the Joint Strike Fighter (JSF). The JSF, also known as the F-35, was first flown in 2006. By 2014, the program was 163 billion dollars over budget and seven years behind schedule.

Two senior officers illuminated the situation in these words:

Vice Admiral David Venlet, the Program Executive Officer, quoted in AOL Defense in 2011 [1]: “JSF’s build and test was a miscalculation…. Fatigue testing and analysis are turning up so many potential cracks and hot spots in the Joint Strike Fighter’s airframe that the production rate of the F-35 should be slowed further over the next few years… The cost burden sucks the wind out of your lungs“.

Gen. Norton Schwartz, Air Force Chief of Staff, quoted in Defense News, 2012 [2]: “There was a view that we had advanced to a stage of aircraft design where we could design an airplane that would be near perfect the first time it flew. I think we actually believed that. And I think we’ve demonstrated in a compelling way that that’s foolishness.”

These officers believed that the software tools were so advanced that testing would confirm the validity of design decisions based on them. This turned out to be wrong. However, their mistaken belief was not entirely unreasonable: by the start of the JSF program, commercial finite element analysis (FEA) software products were 30+ years old, so they could reasonably have assumed that the reliability of these products had greatly improved, along with the hardware systems and visualization tools capable of producing impressive color images, tacitly suggesting that the underlying methodology could guarantee the quality and reliability of the output quantities. Indeed, there were very significant advancements in the science of finite element analysis, which became a bona fide branch of applied mathematics in that period. The problem was that commercial FEA software tools did not keep pace with those important scientific developments.

There are at least two reasons for this: First, the software architecture of the commercial finite element codes was based on the thinking of the 1960s and 70s, when the theoretical foundations of FEA were not yet established. As a result, several limitations were incorporated. Those limitations kept code developers from incorporating later advancements, such as a posteriori error estimation, advanced discretization strategies, and stability criteria. Second, decision-makers who rely on computed information failed to specify the technical requirements that simulation software must meet, for example, reporting not just the quantities of interest but also their estimated relative errors. To fulfill this key requirement, legacy FE software would have had to be overhauled to such an extent that only their nameplates would have remained the same.

Technical Requirements for CbA

Certification by Analysis (CbA) uses validated computer simulations to demonstrate compliance with regulations, replacing some traditional physical tests. CbA allows for exploring a wide range of design scenarios, accelerates innovation, lowers expenses, and upholds rigorous safety standards.  The key to CbA is reliability.  This means that the data generated by numerical simulation should be as trustworthy as if they were generated by carefully conducted physical experiments.   To achieve that goal, it is necessary to control two fundamentally different types of error; the model form error and the numerical approximation error, and use the models within their domains of calibration.

Model form errors occur because we invariably make simplifying assumptions when we formulate mathematical models. For example, formulations based on the theory of linear elasticity include the assumptions that the stress-strain relationship is a linear function, independent of the size of the strain, and that the deformation is so small that the difference between the equilibrium equations written on the undeformed and deformed configurations can be neglected. As long as these assumptions are valid, the linear theory of elasticity provides reliable estimates of the response of elastic bodies to applied loads. The linear solution also provides information on the extent to which the assumptions were violated in a particular model. For example, if it is found that the strains exceed the proportional limit, it is advisable to check the effects of plastic deformation. This is done iteratively until a convergence criterion is satisfied. Similarly, the effects of large deformation can be estimated. Model form errors are controlled by viewing any mathematical model as one in a sequence of hierarchic models of increasing complexity and selecting the model that is consistent with the conditions of the simulation.
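The hierarchic view can be pictured as a simple control loop. The sketch below is an illustration under assumptions of my own, not a description of any particular code: solve_linear and solve_nonlinear are hypothetical solver callables supplied by the user, each assumed to return an object with max_strain and quantity_of_interest attributes; a linear solution is computed first, and a more comprehensive model is invoked only if the linearity check fails.

```python
def solve_hierarchically(model, solve_linear, solve_nonlinear,
                         proportional_limit, tol=0.02, max_iter=5):
    """Control one model form error (material nonlinearity) hierarchically.

    solve_linear(model) and solve_nonlinear(model, previous) are placeholders
    for whatever analysis code is used; each is assumed to return an object
    with .max_strain and .quantity_of_interest attributes.
    """
    solution = solve_linear(model)
    if solution.max_strain <= proportional_limit:
        return solution                 # linear elasticity is adequate

    # Linearity assumption violated: move up the model hierarchy and iterate
    # until the quantity of interest no longer changes appreciably.
    previous = solution
    for _ in range(max_iter):
        solution = solve_nonlinear(model, previous)
        change = abs(solution.quantity_of_interest - previous.quantity_of_interest)
        if change <= tol * abs(solution.quantity_of_interest):
            return solution
        previous = solution
    raise RuntimeError("model-form iteration did not converge")
```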

Numerical errors are the errors associated with approximating the exact solution of mathematical problems, such as the equations of elasticity, Navier-Stokes, and Maxwell, and with the method used to extract the quantities of interest from the approximate solution. The goal of solution verification is to show that the numerical errors in the quantities of interest are within acceptable bounds.

The domain of calibration defines the intervals of physical parameters and input data on which the model was calibrated.  This is a relatively new concept, introduced in 2021 [3], that is also addressed in a forthcoming paper [4].  A common mistake in simulation is to use models outside of their domains of calibration.

Organizational Aspects

To achieve the level of reliability in numerical simulation necessary for the utilization of CbA, management will have to implement simulation governance [5] and apply the protocols of verification, validation, and uncertainty quantification.

Are We There Yet?

No, we are not there yet. Although we have made significant progress in controlling errors in model form and numerical approximation, one very large obstacle remains: Management has yet to recognize that they are responsible for simulation governance, which is a critical prerequisite for CbA.


References

[1] Whittle, R. JSF’s Build and Test was ‘Miscalculation,’ Adm. Venlet Says; Production Must Slow. [Online] https://breakingdefense.com/2011/12/jsf-build-and-test-was-miscalculation-production-must-slow-v/ [Accessed 21 February 2024].

[2] Weisgerber, M. DoD Anticipates Better Price on Next F-35 Batch. Gannett Government Media Corporation, 8 March 2012. [Online]. https://tinyurl.com/282cbwhs [Accessed 22 February 2024].

[3] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp.75-86, 2021 [open source].

[4] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  To appear in Computers & Mathematics with Applications in 2024.  The manuscript is available on request.

[5] Szabó, B. and Actis, R. Planning for Simulation Governance and Management:  Ensuring Simulation is an Asset, not a Liability. Benchmark, July 2021.


Model Development in the Engineering Sciences

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In the engineering sciences, mathematical models are based on the equations of continuum mechanics, heat flow, Maxwell, Navier-Stokes, or some combination of these. These equations have been validated and their domains of calibration are generally much larger than the expected domain of calibration of the model being developed. In the terminology introduced by Lakatos [1], the assumptions incorporated in these equations are called hardcore assumptions, and the assumptions incorporated in the other constituents of a model are called auxiliary hypotheses. Model development is concerned with the formulation, calibration, and validation of auxiliary hypotheses. 

Assume, for example, that we are interested in predicting the length of a small crack in a flight-critical aircraft component, caused by the application of a load spectrum. In this case, the mathematical model comprises the equations of continuum mechanics (the hardcore assumptions) and the following auxiliary hypotheses: (a) a predictor of crack propagation, (b) an algorithm that accounts for the statistical dispersion of the calibration data, and (c) an algorithm that accounts for the retardation effects of tensile overload events and the acceleration effects of compressive overload events.

The auxiliary hypotheses introduce parameters that have to be determined by calibration. In our example, we are concerned with crack propagation caused by variable-cycle loading. In linear elastic fracture mechanics (LEFM), the commonly used predictor of crack increment per cycle is the difference between consecutive high and low positive values of the stress intensity factor, denoted by ΔK.

The relationship between crack increment per cycle, denoted by Δa, and the corresponding ΔK value is determined through calibration experiments. Various hypotheses are used to account for the cycle ratio. Additional auxiliary hypotheses account for the statistical dispersion of crack length and the retardation and acceleration events caused by loading sequence effects. The formulation of auxiliary hypotheses is a creative process. Therefore, model development projects must be open to new ideas. Many plausible hypotheses have been and can yet be proposed. Ideally, the predictive performance of competing alternatives would be evaluated using all of the qualified data available for calibration and the models ranked accordingly. Given the stochastic nature of experimental data, predictions should be in terms of probabilities of outcome. Consequently, the proper measure of predictive performance is the likelihood function. Ranking must also account for the size of the domain of calibration [2]. The volume of experimental information tends to increase over time. Consequently, model development is an open-ended activity encompassing subjective and objective elements.

Example: Calibration and Ranking Models of Crack Growth in LEFM

Let us suppose that we want to decide whether we should prefer the Walker [3] or the Forman [4] version of the predictor of crack propagation, based on experimental data consisting of specimen dimensions, elastic properties, and tabular data of measured crack length (a) vs. the observed number of load cycles (N) for each cycle ratio (R). For the sake of simplicity, we assume constant-cycle loading conditions.

The first step is to construct a statistical model for the probability density of crack length, given the number of cycles and the characteristics of the load spectrum. The second step is to extract the Δa vs. ΔK data from the a vs. N data, where ΔK is determined from the specimen dimensions and loading conditions. The third step is to calibrate each of the candidate hypotheses. This involves setting the predictor’s parameters so that the likelihood of the predicted data is maximum. This process is illustrated schematically by the flow chart shown in Fig. 1.

Figure 1: Schematic illustration of the calibration process.

Finally, the calibration process is documented and the domain of calibration is defined. The model that scored the highest likelihood value is preferred. The ranking is, of course, conditioned on the data available for calibration. As new data are acquired, the calibration process has to be repeated, and the ranking may change. It is also possible that the likelihood values are so close that the results do not justify preferring one model over another. Those models are deemed equivalent. Model development is an open-ended process. No one has the final say.
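
To make the calibration and ranking steps concrete, the following Python sketch shows one way they could be organized. It is illustrative only and is not taken from any ESRD tool or from the references: it assumes the Δa vs. ΔK data have already been extracted, adopts a lognormal scatter model for the crack increments, and uses common textbook forms of the Walker and Forman relations; the data arrays are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def walker_log_dadN(dK, R, params):
    """log10 of crack increment per cycle, Walker relation (one common form):
    da/dN = C * [dK / (1 - R)**(1 - gamma)]**m."""
    logC, m, gamma = params
    return logC + m * np.log10(dK / (1.0 - R) ** (1.0 - gamma))

def forman_log_dadN(dK, R, params):
    """log10 of crack increment per cycle, Forman relation (one common form):
    da/dN = C * dK**m / ((1 - R) * Kc - dK)."""
    logC, m, Kc = params
    denom = (1.0 - R) * Kc - dK
    if np.any(denom <= 0.0):
        return np.full_like(dK, np.nan)   # invalid parameter region
    return logC + m * np.log10(dK) - np.log10(denom)

def neg_log_likelihood(x, predictor, dK, R, log_da_obs):
    """Lognormal scatter model: log10(da) ~ Normal(prediction, sigma)."""
    params, sigma = x[:-1], x[-1]
    if sigma <= 0.0:
        return np.inf
    pred = predictor(dK, R, params)
    if not np.all(np.isfinite(pred)):
        return np.inf
    return -np.sum(norm.logpdf(log_da_obs, loc=pred, scale=sigma))

def calibrate(predictor, x0, dK, R, log_da_obs):
    """Maximize the likelihood (minimize its negative); return the calibrated
    parameters and the maximized log-likelihood used for ranking."""
    res = minimize(neg_log_likelihood, x0, args=(predictor, dK, R, log_da_obs),
                   method="Nelder-Mead")
    return res.x, -res.fun

# Synthetic placeholder data standing in for the extracted (Δa, ΔK, R) records.
rng = np.random.default_rng(0)
dK = rng.uniform(5.0, 25.0, 200)            # stress intensity factor range
R = rng.choice([0.1, 0.3], 200)             # cycle ratios
log_da = -9.0 + 3.0 * np.log10(dK / (1.0 - R) ** 0.5) + rng.normal(0.0, 0.1, 200)

_, ll_walker = calibrate(walker_log_dadN, [-9.0, 3.0, 0.5, 0.2], dK, R, log_da)
_, ll_forman = calibrate(forman_log_dadN, [-9.0, 3.0, 60.0, 0.2], dK, R, log_da)
print("maximized log-likelihood  Walker:", ll_walker, " Forman:", ll_forman)
```

With this structure, adding a new candidate predictor only requires one more prediction function; the ranking by maximized likelihood (and, as noted above, by the size of the domain of calibration) is unchanged.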

Opportunities for Improvement

To my knowledge, none of the predictors of crack propagation used in current professional practice have been put through a process of verification, validation, and uncertainty quantification (VVUQ) as outlined in the foregoing section. Rather, investigators tend to follow an unstructured process, whereby they have an idea for a predictor and, using their experimental data, show that, with a suitable choice of parameters, their definition of the predictor works. Typically, the domain of calibration is not defined explicitly but can be inferred from the documentation. The result is that the relative merit of the ideas put forward by various investigators is unknown, and the domains of calibration tend to be very small. In addition, no assurances are given regarding the quality of the data on which the calibration depends. In many instances, only the derived data (i.e. the Δa vs. ΔK data), rather than the original records of observation (i.e. the a vs. N data), are made available. This leaves unanswered the question of whether the ΔK values were properly verified.

The situation is similar in the development of design rules for metallic and composite materials: Much work is being done without the disciplined application of VVUQ protocols.  As a result, most of that work is being wasted. 

For example, the World Wide Failure Exercise (WWFE), an international project whose mission was to find the best method for accurately predicting the strength of composite materials, failed to produce the desired result.  See, for example, [5].  A highly disturbing observation was made by Professor Mike Hinton, one of the organizers of the WWFE, in his keynote address to the 2011 NAFEMS World Congress [6]: “The theories coded into current FE tools almost certainly differ from the original theory and from the original creator’s intent.”  I do not believe that significant improvements in predictive performance have occurred since then.

In my view, progress will not be possible unless and until VVUQ protocols are adopted for model development projects.  These protocols play a crucial role in the evolutionary development of mathematical models. 


References

[1] Lakatos, I. The methodology of scientific research programmes, vol. 1, J. Worrall and G. Currie, Eds., Cambridge University Press, 1972.

[2] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp.75-86, 2021.

[3] Walker, K. The Effect of Stress Ratio During Crack Propagation and Fatigue for 2024-T3 and 7075-T6 Aluminum. Effects of Environment and Complex Load History on Fatigue Life, ASTM International, pp. 1–14, 1970. doi:10.1520/stp32032s, ISBN 9780803100329

[4] Forman, R. G., Kearney, V. E.  and Engle, R. M.  Numerical analysis of crack propagation in cyclic-loaded structures. Journal of Basic Engineering, pp. 459-463, September 1967.

[5] Christensen, R. M. Letter to World Wide Failure Exercise, WWFE-II. https://www.failurecriteria.com/lettertoworldwid.html

[6] Hinton, M. Failure Criteria in Fibre Reinforced Polymer Composites: Can any of the Predictive Theories be Trusted?  NAFEMS World Congress, Boston, May 2011.


Why Finite Element Modeling is Not Numerical Simulation?

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The term “simulation” is often used interchangeably with “finite element modeling” in the engineering literature and marketing materials.  It is important to understand the difference between the two.

The Origins of Finite Element Modeling

Finite element modeling is a practice rooted in the 1960s and 70s.  The development of the finite element method began in 1956 and was greatly accelerated during the US space program in the 1960s. The pioneers were engineers who were familiar with the matrix methods of structural analysis and sought to extend those methods to solve the partial differential equations that model the behavior of elastic bodies of arbitrary geometry subjected to various loads.   The early papers and the first book on the finite element method [1], written when our understanding of the subject was just a small fraction of what it is today, greatly influenced the idea of finite element modeling and its subsequent implementations.

Guided by their understanding of models for structural trusses and frames, the early code developers formulated finite elements for two- and three-dimensional elasticity problems, plate and shell problems, etc. They focused on getting the stiffness relationships right, subject to the limitations imposed by the software architecture on the number of nodes per element and the number of degrees of freedom per node.  They observed that elements of low polynomial degree were “too stiff”.  The elements were then “softened” by using fewer integration points than necessary.  This caused “hourglassing” (zero energy modes) to occur which was fixed by “hourglass control”.  For example, the formulation of the element designated as C3D8R and described as “8-node linear brick, reduced integration with hourglass control” in the Abaqus Analysis User’s Guide [2] was based on such considerations.

Through an artful combination of elements and the finite element mesh, the code developers were able to show reasonable correspondence between the solutions of some simple problems and the finite element solutions.  It is a logical fallacy, called the fallacy of composition, to assume that elements that performed well in particular situations will also perform well in all situations.

The Science of Finite Element Analysis

Investigation of the mathematical foundations of finite element analysis (FEA) began in the early 1970s.  Mathematicians understand FEA as a method for obtaining an approximation to the exact solution of a well-defined mathematical problem, such as a problem of elasticity.  Specifically, the finite element solution uFE has to converge to the exact solution uEX in a norm (which depends on the formulation) as the number of degrees of freedom n is increased; that is, ‖uEX − uFE‖ → 0 as n → ∞.

Under conditions that are usually satisfied in practice, it is known that uEX exists and is unique.

The first mathematical book on finite element analysis was published in 1973 [3].  Looking at the engineering papers and contemporary implementations, the authors identified four types of error, called “variational crimes”. These are (1) non-conforming elements, (2) numerical integration, (3) approximation of the domain and boundary conditions, and (4) mixed methods. In fact, many other kinds of variational crimes commonly occur in finite element modeling, such as using point forces, point constraints, and reduced integration.

By the mid-1980s the mathematical foundations of FEA were substantially established.   It was known how to design finite element meshes and assign polynomial degrees so as to achieve optimal or nearly optimal rates of convergence, how to extract the quantities of interest from the finite element solution, and how to estimate their errors.  Finite element analysis became a branch of applied mathematics.

By that time the software architectures of the large finite element codes used in current engineering practice were firmly established. Unfortunately, they were not flexible enough to accommodate the new technical requirements that arose from scientific understanding of the finite element method. Thus, the pre-scientific origins of finite element analysis became petrified in today’s legacy finite element codes.

Figure 1 shows an example that would be extremely difficult, if not impossible, to solve using legacy finite element analysis tools:

Figure 1: Lug-clevis-pin assembly. The lug is made of 16 fiber-matrix composite plies and 5 titanium plies. The model accounts for mechanical contact as well as the nonlinear deformation of the titanium plies. Solution verification was performed.

Notes on Tuning

On a sufficiently small domain of calibration any model, even a finite element model laden with variational crimes, can produce results that appear reasonable and can be tuned to match experimental observations. We use the term tuning to refer to the artful practice of balancing two large errors in such a way that they nearly cancel each other out. One error is conceptual:  Owing to variational crimes, the numerical solution does not converge to a limit value in the norm of the formulation as the number of degrees of freedom is increased. The other error is numerical: The discretization error is large enough to mask the conceptual error [4].

Tuning can be effective in structural problems, such as automobile crash dynamics and load models of airframes, where the force-displacement relationships are of interest.  Tuning is not effective, however, when the quantities of interest are stresses or strains at stress concentrations.  Therefore finite element modeling is not well suited for strength calculations.

Solution Verification is Mandatory

Solution verification is an essential technical requirement for democratization, model development, and applications of mathematical models.  Legacy FEA software products were not designed to meet this requirement. 

There is a general consensus that numerical simulation will have to be integrated with explainable artificial intelligence (XAI) tools.  This can be successful only if mathematical models are free from variational crimes.

The Main Points

Owing to limitations in their infrastructure, legacy finite element codes have not kept pace with important developments that occurred after the mid-1970s.

The practice of finite element modeling will have to be replaced by numerical simulation.  The changes will be forced by the technical requirements of XAI.

References

[1]  O. C. Zienkiewicz and Y. K. Cheung, The Finite Element Method in Structural and Continuum Mechanics, London: McGraw-Hill, 1967.

[2]  Abaqus Analysis User’s Guide. http://130.149.89.49:2080/v6.14/books/usb/default.htm

[3] G. Strang and G. J. Fix, An Analysis of the Finite Element Method, Englewood Cliffs, NJ: Prentice-Hall, 1973.

[4] B. Szabó and I. Babuška, Finite Element Analysis: Method, Verification and Validation, 2nd ed., Hoboken, NJ: John Wiley & Sons, Inc., 2021.

Why Is a Hierarchic Modeling Framework Important?

Selecting the simplest model for an analysis is not always trivial for engineers. A Hierarchic Modeling framework eases this burden by providing support for investigating model form errors.

In our previous S.A.F.E.R. Simulation articles, we have explored the concepts of Numerical Simulation, Challenges of Legacy FEA, Finite Element Modeling, Simulation Governance and High-Fidelity Aerostructure Analysis. We worked to establish a lexicon and a foundational basis for how ESRD’s technological framework supports the solution of the increasingly complex applications facing today’s engineering community.

In this S.A.F.E.R. Simulation article, we explore the concept of Hierarchic Modeling, some practical applications of Hierarchic Modeling, and the importance of implementing a Hierarchic Modeling framework in CAE software tools to support the practice of Simulation Governance.

What Is Hierarchic Modeling?

“Hierarchic models for laminated plates and shells” by Drs. Ricardo Actis, Barna Szabó and Christoph Schwab. Comput. Methods Appl. Mech. Engrg. 172 (1999) 79-107.

The concept of Hierarchic Modeling is not new; it was introduced in the 1990s and, together with hierarchic finite element spaces and hierarchic basis functions, was implemented in StressCheck Professional. From the introduction to the 1999 Computer Methods in Applied Mechanics and Engineering technical paper “Hierarchic models for laminated plates and shells” by Drs. Actis, Szabó and Schwab:

The notion of hierarchic models differs from the notions of hierarchic finite element spaces and hierarchic basis functions. Hierarchic models provide means for systematic control of modeling errors whereas hierarchic finite element spaces provide means for controlling discretization errors. The basis functions employed to span hierarchic finite element spaces may or may not be hierarchic. Brief explanations follow:

Hierarchic models are a sequence of mathematical models, the exact solutions of which constitute a converging sequence of functions in the norm or norms appropriate for the formulation and the objectives of analysis. Of interest is the exact solution of the highest model, which is the limit of the converging sequence of solutions. In the case of elastic beams, plates and shells the highest model is the fully three-dimensional model of linear elasticity, although even the fully three-dimensional elastic model can be viewed as only the first in a sequence of hierarchic models that account for nonlinear effects, such as geometric, material and contact nonlinearities.

Hierarchic Modeling makes it possible to identify the simplest model that accounts for all features that influence the quantities of interest given the expected accuracy. This is related to the problem-solving principle, known as Occam’s razor, that when presented with competing models to solve a problem, one should select the model with the fewest assumptions, subject to the constraint of required accuracy.

Not all CAE software tools are capable of supporting Hierarchic Modeling in practice, especially for complex applications for which many modeling assumptions are to be examined.

What Do CAE Software Tools Need to Support Hierarchic Modeling?

As previously discussed in our S.A.F.E.R. Simulation blog article on Numerical Simulation, to enable support for a Hierarchic Modeling framework, and by extension the practice of Simulation Governance, CAE software tools must meet three basic requirements:

  1. The model definition must be independent from the approximation.
  2. Simple procedures must be available for assessing the influence of modeling assumptions (in support of model validation).
  3. Simple procedures must be available for objective assessment of the errors of approximation (in support of solution verification).

 

The above requirements, and how they are met in practice, are explained in greater detail on our Brief History of FEA page and its narrated video. The first implementation of model hierarchies in a CAE software tool, as explained in the video, was released in 1991 (ESRD’s StressCheck Professional).

An implementation framework meeting these three requirements enables the practice of Simulation Governance, providing the basis for the creation and deployment of engineering Sim Apps. Democratization of Simulation for standardization and automation of new technologies, such as Sim Apps, can be done with proper safeguards provided that the software tools used for the creation and deployment meet these technical requirements.

Why Should Engineers Care About Hierarchic Modeling?

“On the role of hierarchic spaces and models in verification and validation” by Drs. Barna Szabó and Ricardo Actis. Comput. Methods Appl. Mech. Engrg. 198 (2009) 1273-1280.

Legacy CAE tools used for Finite Element Modeling were not designed to support Hierarchic Modeling. This is because the concept of Hierarchic Modeling was established many years after the infrastructure of legacy FEA tools was created. Their main limitation is that the model definition and the approximation are not treated separately. Different orders of model complexity cannot be objectively compared by engineering analysts; therefore, there is no basis for establishing confidence in the modeling assumptions.

From 2009’s Computer methods in applied mechanics and engineering technical paper “On the role of hierarchic spaces and models in verification and validation” by Drs. Actis and Szabó:

It is also necessary for the computer implementation to support hierarchic sequences of models, allowing investigation of the sensitivities of the data of interest and the data measured in validation experiments to the various assumptions incorporated in the model… There is a strong predisposition in the engineering community to view each model class as a separate entity. It is much more useful however to view any mathematical model as a special case of a more comprehensive model, rather than a member of a conventionally defined model class.

For example, the usual beam, plate and shell models are special cases of a model based on the three-dimensional linear theory of elasticity, which in turn is a special case of large families of models based on the equations of continuum mechanics that account for a variety of hyperelastic, elastic-plastic and other material laws, large deformation, contact, etc. This is the hierarchic view of mathematical models.

Comparison of maximum von Mises stress convergence for different hierarchic fastened connection models.

To aid in finding the simplest model, sensitivity studies via virtual experimentation are recommended. For example, modeling fastened joints may or may not require full multi-body contact effects if the data of interest are sufficiently far from the region of load transfer; bearing load applications, distributed normal springs or partial contact via “plugs” may be sufficient. By extension, if a structural support is to be approximated by distributed springs, the spring coefficients should be defined parametrically so that sensitivity studies are easy to perform, as in the sketch below.
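
A minimal sketch of such a parametric sensitivity study is shown below. The “model” here is a deliberately simple closed-form surrogate (a cantilever tip resting on a grounded spring) standing in for a full numerical model, and all numerical values are arbitrary; the point is only the workflow of sweeping a parametrically defined spring coefficient and reporting the change in the quantity of interest.

```python
# Illustrative parametric sensitivity study over a spring-coefficient parameter.
# The surrogate below is a toy closed-form model, not a StressCheck interface.

def tip_deflection(k_spring, P=1000.0, E=10.9e6, I=2.0, L=30.0):
    """Tip deflection of a cantilever (stiffness 3EI/L^3) whose tip also rests
    on a grounded spring of stiffness k_spring; units are illustrative only."""
    k_beam = 3.0 * E * I / L**3
    return P / (k_beam + k_spring)

def sensitivity_study(spring_values, qoi):
    """Evaluate the quantity of interest over a range of spring coefficients and
    report the change relative to the first (baseline) value."""
    baseline = qoi(spring_values[0])
    for k in spring_values:
        q = qoi(k)
        change = 100.0 * (q - baseline) / baseline
        print(f"k = {k:10.1f} lbf/in  QoI = {q:.5f} in  ({change:+.2f}% vs. baseline)")

# Example: vary the support stiffness over several orders of magnitude.
sensitivity_study([0.0, 1.0e3, 1.0e4, 1.0e5], tip_deflection)
```

Replacing the surrogate with a call into an actual parametric model turns the same loop into a virtual experiment of the kind described above.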

The following examples and practical applications illustrate how a Hierarchic Modeling framework leads to increased control over and confidence in the engineering decision-making process.

Applications of Hierarchic Modeling In Engineering Practice

We will focus on two practical applications common to many aerospace engineers: fastened (bolted) joint analysis, and the influences of nonlinear effects such as plasticity. Both engineering applications require high-fidelity analyses to represent the data of interest, and are typically sensitive to the modeling assumptions.

Fastened Joint Analysis

ESRD recently provided a webinar titled “Hierarchic Approaches to Modeling Fastened Connections”, which incorporated the main points from the above discussion. The webinar can be viewed below in its entirety.

Through StressCheck Professional‘s Hierarchic Modeling framework, different modeling assumptions are tested for several classes of fastened joints and connections, including lap joints, splice joints and fittings, and in many cases a simpler model was found that represented the data of interest within a sufficient tolerance:

Some of the fastened joint modeling assumptions explored included the following:

  • In-plane only vs out-of-plane bending effects on load transfer and detailed stresses
  • 2D structural shear connections vs 3D detailed multi-body contact
  • 2D bearing loading vs 3D bearing loading vs 3D multi-body contact
  • Compression-only normal springs vs multi-body contact
  • Fused fasteners vs multi-body contact
  • Linear elastic vs. elastic-plastic materials

 

Without a Hierarchic Modeling framework, exploring these modeling assumptions would be infeasible in engineering practice and would create a “simulation bottleneck” for engineering analysts.

Linear vs Nonlinear Effects

In some aerospace engineering applications, it may be necessary to investigate the influence of nonlinearities, such as plasticity and/or large deformations, on the results of interest. For that reason, a linear solution (i.e. small strain, small deformation and linear elastic material coefficients) must be viewed as the first in a hierarchy of models that includes nonlinear constitutive relations, finite deformation and mechanical contact.

In a Hierarchic Modeling framework, engineering analysts should not need to change the discretization (i.e. mesh, element types and mapping) when transitioning from a linear to nonlinear model analysis for example; the switch should be seamless and simple, allowing the order of the model to increase on demand.

The following demo videos examine two case studies in nonlinear effects, in which the Hierarchic Modeling framework of StressCheck Professional was used to assess the influence of simplifying modeling assumptions without changing the discretization.

Geometric Nonlinearities

In the first case study, a linear vs. geometric nonlinear (large strain/large displacement) analysis for a 3D helical spring was performed:

Performing a geometric nonlinear analysis for the helical spring, in which equilibrium is satisfied in the deformed configuration, required no interaction with the model inputs or discretization parameters. The engineering analyst simply starts from a converged linear solution as the first step in the geometric nonlinear iterations.

The model hierarchies were then compared in live results processing with minimal effort, allowing the engineering analyst to quickly assess how accounting for large displacements/rotations affects the outcome of the results.

Material Nonlinearities

In the second case study, elastic-plastic materials are assigned to a detailed 3D eyebolt geometry, allowing plasticity to develop as the eyebolt is overloaded in tension:

To incorporate plasticity into the model, the only required change was to update the material properties from linear to elastic-plastic; no other model inputs were modified. Then, after a converged linear solution was available, a material nonlinear analysis was seamlessly initiated.

As in the previous case study, both models were available for assessment in live results processing, allowing the engineering analyst to determine whether material nonlinear effects are significant at the given load level (i.e. is the plasticity extensive) or the plastic zone is fully confined by elastic material.

Summary

As demonstrated in the above examples, and articulated in the technical paper excerpts, support for a seamless transition between model orders and theories is made possible by the implementation of a Hierarchic Modeling framework. To implement Hierarchic Modeling, CAE software tools must also allow separation of model definition from the discretization (in legacy FEA software, the definition of the model and the numerical approximation are combined, necessitating large element libraries). Without this clear separation, it is not feasible to reliably perform verification and validation in engineering practice.

Additionally, engineering analysts should expect modern FEA and CAE tools to support “what if” and sensitivity studies, such that modeling assumptions can be easily assessed and the simplest model used with confidence. As more and more engineering organizations look to democratize simulation, and virtual experimentation is increasingly used, it is essential to have numerical simulation tools that treat model definition separately from the approximation.

Finally, through the use of hierarchic finite element spaces and mathematical models it is possible to control approximation errors separately from modeling errors, while providing objective measures of solution quality for every result, anywhere in the model, in support of the increasing simulation demands on engineers.

What Are the Key Quality Checks for FEA Solution Verification?

Verifying the accuracy of FEA solutions is straightforward when employing the following Key Quality Checks.

In a recent ESRD webinar, we asked a simple but powerful question: if you routinely perform Numerical Simulation via finite element analysis (FEA), how do you verify the accuracy of your engineering simulations? During this webinar, we reviewed ‘The Four Key Quality Checks’ that should be performed for any detailed stress analysis as part of the solution verification process:

  • Global Error: how fast is the estimated relative error in the energy norm reduced as the degrees of freedom (DOF) are increased? And, is the associated convergence rate indicative of a smooth solution?
  • Deformed Shape: based on the boundary conditions and material properties, does the overall model deformation at a reasonable scale make sense? Are there any unreasonable displacements and/or rotations?
  • Stress Fringes Continuity: are the unaveraged, unblended stress fringes smooth or are there noticeable “jumps” across element boundaries? Note: stress averaging should ALWAYS be off when performing detailed stress analysis. Significant stress jumps across element boundaries are an indication that the error of approximation is still high.
  • Peak Stress Convergence: is the peak (most tensile or compressive) stress in your region of interest converging to a limit as the DOF are increased? OR is the peak stress diverging?

 

When the stress gradients are also of interest, there is an additional Key Quality Check that should be performed:

  • Stress Gradient Overlays: when stress distributions are extracted across or through a feature containing the peak stress, are these gradients relatively unchanged with increasing DOF? Or are the stress distribution overlays dissimilar in shape?

 

In this S.A.F.E.R. Simulation blog, we’ll explore each of the above Key Quality Checks as well as additional best practices for verifying the accuracy of FEA solutions. To help us drive the conversation in a practical manner, we selected a widely available and well understood benchmark problem to model, solve and perform each Key Quality Check using ESRD’s flagship FEA software, StressCheck Professional.

Note: the following Key Quality Checks for FEA Solution Verification focus on results processing for linear and nonlinear detailed stress analyses applications. Webinars containing solution verification best practices have been previously presented for fracture mechanics applications, global-local analysis (co-hosted by Altair), and fastened connection and bolted joint analysis.

Benchmark Problem: Tension Bar of Circular Cross Section with Semi-Circular Groove

Benchmark problem for Key Quality Checks for FEA Solution Verification.

The benchmark problem for the following discussion focuses on accurately computing a very common stress concentration factor, the classical solution(s) of which may be found in myriad engineering handbook publications and used often by many practicing structural engineers: tension bar of circular cross section with a semi-circular groove.

Since the available literature supports numerous classical solutions, we will limit our coverage to three (3) of the most popular classical stress concentration factor approximation sources: Peterson, Shigley and Roark.

Classical Source #1: ‘Peterson’s Stress Concentration Factors’ (Pilkey)

Our first classical source comes from Section 2.5.2 and Chart 2.19 (‘Stress concentration factors Ktn for a tension bar of circular cross section with a U-shaped groove’) in ‘Peterson’s Stress Concentration Factors’, 2nd Edition, by Walter D. Pilkey:

Courtesy ‘Stress Concentration Factors’, 2nd Edition (Pilkey).

Courtesy ‘Stress Concentration Factors’, 2nd Edition (Pilkey).

The curve marked ‘Semicircular’ will be used for the classical stress concentration factor approximation.

Note: as is documented in Section 2.5.2 above, Chart 2.19 is computed from the Neuber 3D case Ktn curve (Chart 2.18, see below) for a nominal Poisson’s ratio of 0.3:

Courtesy ‘Stress Concentration Factors’, 2nd Edition (Pilkey).

Pilkey notes in Section 1.4 (‘Stress Concentration as a Three-Dimensional Problem’) that the Poisson’s ratio will have an effect on the Ktn for cases such as the above.

Classical Source #2: ‘Shigley’s Mechanical Engineering Design’ (Budynas & Nisbett)

Our second classical source comes from Figure A-15-13, Table A-15, in ‘Shigley’s Mechanical Engineering Design’, 9th edition, by Richard G. Budynas & J. Keith Nisbett:

Courtesy ‘Shigley’s Mechanical Engineering Design’, 9th edition (Budynas & Nisbett).

Classical Source #3: ‘Roark’s Formulas for Stress and Strain’ (Young & Budynas)

Our third source comes from the equation in Table 17.1, ’15. U-notch in a circular shaft’, ‘Roark’s Formulas for Stress and Strain’, 7th Edition, by Warren C. Young and Richard D. Budynas:

Courtesy Roark’s ‘Formulas for Stress and Strain’, 7th Edition (Young & Budynas).

We will use the equation for the semi-circular notch (h/r = 1) for the classical stress concentration factor approximation.

Classical Stress Concentration Factor Comparison:

For this benchmark case study, the dimensions and axial tension force were defined as follows (in US Customary units):

  • D = 9″
  • d = 6″
  • h = 1.5″
  • r = 1.5″
  • P = 10,000 lbf
  • σnom = 4P/(πd²) = 354 psi
  • r/d = 0.25
  • D/d = 1.5
  • h/r = 1.0

 

These values result in the following classical solutions for the stress concentration factor:

Classical Source    Ktn     σmax = Ktn·σnom
Peterson            1.78    630.12 psi
Shigley             1.69    598.26 psi
Roark               1.82    644.28 psi
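
As a quick arithmetic cross-check, the short Python sketch below recomputes σnom and the classical σmax estimates from the quantities listed above. It simply reuses the Ktn values quoted in the table; it does not evaluate the handbook charts or formulas themselves.

```python
import math

# Cross-check of the nominal stress and classical sigma_max values quoted above.
P = 10_000.0   # lbf, axial tension
d = 6.0        # in, diameter at the groove (net section)

sigma_nom = 4.0 * P / (math.pi * d**2)   # = 353.7 psi, rounded to 354 psi above
print(f"sigma_nom = {sigma_nom:.1f} psi")

Ktn = {"Peterson": 1.78, "Shigley": 1.69, "Roark": 1.82}
for source, k in Ktn.items():
    # The handbook comparison above uses the rounded value of 354 psi.
    print(f"{source:8s}  Ktn = {k:.2f}  sigma_max = {k * round(sigma_nom):.2f} psi")
```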

The above classical solutions are noted by the authors as approximations of the stress concentration factor, given the configuration of geometric and axial loading parameter values; the exact solution can be obtained by solving the 3D elasticity problem. An approximation to the solution of the elasticity problem can be obtained using the finite element method (e.g. via StressCheck Professional or another FEA implementation).

A reasonable goal of our benchmark case study is to determine which (if any) of the classical solutions best approximates this particular configuration.

Modeling Process: CAD + Automesh + BC’s + Material Properties

The solid geometry for the benchmark case study was constructed in StressCheck Professional using 3D solid modeling techniques, an automesh of 3665 curved tetrahedral elements was generated, and boundary conditions (axial loads, rigid body constraints) were applied:

Curved Tetrahedral Automesh (courtesy StressCheck Professional)

The linear elastic material properties selected for the benchmark case study are representative of a 2014-T6 aluminum extrusion (i.e. E = 10.9 Msi, v = 0.397).

Solution Process: Linear P-extension + Fixed Mesh

The model was analyzed in StressCheck Professional’s Linear solver via an hierarchic p-extension process, in which the orders of all elements on the fixed mesh were uniformly increased from 2nd order (p=2) to 8th order (p=8) for a total of seven (7) runs.

Note: before executing the solution, the mesh was converted to geometric (blended) mapping, which ensured the optimal representation of the geometric boundaries. This conversion was required for the solution order to exceed p=5, as by default StressCheck Professional’s tetrahedral elements are curved using 2nd order functions (Isopar).

Since StressCheck Professional automatically stores all completed runs of increasing DOF for results processing, we can determine the minimum DOF for which the benchmark case study was well approximated for each Key Quality Check.

Note: it is not necessary to always increase the order of all elements to 8th order, unless the mesh is a) generated manually and is a minimum mesh of high-aspect ratio elements, or b) a solution of exceedingly low discretization error in the data of interest is the goal (our reason). Many times a sufficiently refined mesh at a lower order (p<6) will achieve an acceptable discretization error for most practical engineering applications.

Results Extraction: Do We Pass Each Key Quality Check?

After the solution process completed, the estimated relative error in the energy norm (EREEN) was automatically reported as 0.01%, indicating no significant discretization errors but telling us very little about our data of interest, the stress concentration factor.

Then, how do we determine if we have an accurate enough FEA solution to approximate the stress concentration factor for the benchmark case study? Let’s go through each Key Quality Check to determine if our discretization is sufficient.

Key Quality Check #1: Global Error

Key Quality Check #1: Global Error (courtesy StressCheck Professional)

Studying how the global error (% Error column), as represented by EREEN, decreases with increasing DOF is our first ‘Key Quality Check’. This value is a measure of how well we are approximating the exact solution of the 3D elasticity problem in energy norm.

Additionally, a Convergence Rate of >1.0 is also a good indicator of the overall smoothness of the solution. Note: in problems with mathematical singularities, such as the simulation of cracks in fracture mechanics applications, the convergence rate is typically <1.0.
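
For readers who want to reproduce this check from a run table, the estimated convergence rate can be recovered from any two successive runs by assuming the usual a priori form ‖e‖ ≈ C·N^(−β) and taking the log-log slope. The sketch below is generic; the (DOF, error) pairs are made up for illustration, since the actual run table is not reproduced here.

```python
import math

def convergence_rates(dof, rel_error_pct):
    """Rate beta in ||e|| ~ C * N**(-beta), estimated from the log-log slope
    between consecutive (DOF, estimated relative error) pairs."""
    return [
        math.log(rel_error_pct[i - 1] / rel_error_pct[i]) / math.log(dof[i] / dof[i - 1])
        for i in range(1, len(dof))
    ]

# Made-up run table: degrees of freedom and estimated relative error in the
# energy norm (%), of the kind produced by a p-extension sequence.
dof = [5_000, 12_000, 25_000, 48_000, 85_000]
err = [2.10, 0.85, 0.31, 0.10, 0.03]
print([round(b, 2) for b in convergence_rates(dof, err)])  # rates > 1.0 suggest a smooth solution
```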

VERDICT: Pass

Key Quality Check #2: Deformed Shape

Key Quality Check #2: Deformed Shape (courtesy StressCheck Professional)

Since the benchmark case study was loaded axially under self-equilibrating loads of P=10,000 lbf, rigid body constraints were applied to three nodes at the leftmost side to cancel the six rigid body modes in 3D elasticity.

The deformed shape for the highest DOF run indicates the model is behaving as expected at a 2,000:1 deformed scale (red outlines are the undeformed configuration).

VERDICT: Pass

Key Quality Check #3: Stress Fringes Continuity

Key Quality Check #3: Stress Fringes Continuity (courtesy StressCheck Professional)

When assessing the stress fringes for quality, it is important to ensure that there are no significant “jumps” across element boundaries (edges/faces) in regions where the stresses are expected to be smooth, continuous and unperturbed. This assessment requires that the stresses be plotted without any averaging or blending features enabled.

The 1st principal stress (S1) fringe continuity for the highest DOF run is quite smooth across element boundaries, with no significant “jumps” detected in the region of interest (root of the notch). The maximum 1st principal stress value (S1max) is computed as 619.3 psi.

However, we will need to verify that this value has converged to a limit (e.g. independent of DOF) before it is compared with the benchmark case study’s theoretical Ktn and σmax.

VERDICT: Pass

Key Quality Check #4: Peak Stress Convergence

Key Quality Check #4: Peak Stress Convergence (courtesy StressCheck Professional)

For this benchmark case study, our data of interest was the peak stress at the root of the circumferential groove. Since StressCheck Professional automatically keeps all solutions for ‘deep dive’ results processing, it is very simple and easy to ‘check the stress’.

Selecting the StressCheck model’s curve which encircles the root of the groove, an extraction of maximum 1st principal stress (S1max) vs. each run of increasing DOF was performed. Even though we have a fairly refined mesh in the groove, note the large differences between the first three runs (p=2 to 4) and the final four runs (p=5 to 8). For this reason, it is simply not enough to have a “good mesh” or smooth stress fringes that pass the “eyeball norm”; the peak stress values must be rigorously proven to be independent of mesh and DOF.

It can be observed from the table that convergence in S1max was achieved by the 4th or 5th run, with a converged value of S1max = 619.3 psi. Here is a summary of how the classical stress concentration factor approximations Ktn compare for this particular configuration:

Classical Source    Ktn     σmax = Ktn·σnom    Converged S1max    % Relative Difference
Peterson            1.78    630.12 psi         619.3 psi           1.75
Shigley             1.69    598.26 psi         619.3 psi          -3.39
Roark               1.82    644.28 psi         619.3 psi           4.03

% Relative Difference = 100*(σmax – S1max)/S1max

It appears that Peterson’s classical stress concentration factor approximation is most appropriate, with a relative difference of 1.75% when compared to the estimated exact solution from the numerical simulation.

Note: the S1max convergence table confirms that it was not necessary to continue increasing the DOF by p-extension past the 5th run (p=6 in this particular case); we could have stopped the p-extension process once the error in the S1max was sufficiently small for our purposes.
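
The stopping criterion described in the note can be automated by monitoring the run-to-run change in the peak stress. In the sketch below, only the converged value of 619.3 psi comes from the text; the intermediate S1max values and the 0.5% tolerance are made up for illustration.

```python
def relative_changes_pct(values):
    """Percent change of a converging quantity (e.g., S1max) between successive runs."""
    return [100.0 * (values[i] - values[i - 1]) / values[i] for i in range(1, len(values))]

def first_converged_run(values, tol_pct=0.5):
    """First run (1-based) whose change from the previous run is below tol_pct, or None."""
    for i in range(1, len(values)):
        if 100.0 * abs(values[i] - values[i - 1]) / values[i] < tol_pct:
            return i + 1
    return None

# Hypothetical S1max (psi) for runs 1..7 (p = 2..8); only the converged value of
# 619.3 psi is quoted in the text -- the intermediate values are made up.
s1max = [570.0, 601.0, 612.0, 617.9, 619.1, 619.3, 619.3]
print([round(c, 2) for c in relative_changes_pct(s1max)])
print("run-to-run change first drops below 0.5% at run", first_converged_run(s1max))
```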

VERDICT: Pass

A Note on the Poisson’s Ratio Effect

Recalling the derivation of the Peterson Ktn, the value used in the benchmark case study assumed v=0.3 for its approximation, while v=0.397 was used in StressCheck Professional. This highlights the importance of understanding the derivation and limitations of classical solutions.

If we “eyeball” Chart 2.18 for r/d = 0.25 and a v~0.4, we get a Ktn of ~1.78 (vs. Ktn~1.81 for v=0.3). We then multiply the Peterson Ktn by 1.78/1.81 to get an ‘adjusted’ Peterson Ktn ~1.75 for v=0.397. This results in a σmax = 619.67, a difference of 0.06%.


That being said, it is always up to the engineer and management to determine an acceptable classical solution error in practical engineering applications.

Key Quality Check #5: Stress Gradient Overlays

Key Quality Check #5: Stress Gradient Overlays (courtesy StressCheck Professional)

As an additional Key Quality Check, we should ensure that the stresses near the location of peak stress are also well represented and do not change much with increasing DOF. In StressCheck Professional we can dynamically extract the stresses across or through any feature, for any resolution and available solution, and overlay these stress gradients on the same chart for an assessment of quality.

The stress gradient extraction was performed across the groove for the final three runs (p=6 to 8), and the automatic stress gradient overlay showed that there was practically no difference between the point-wise values. Again, this proves that the 5th run (p=6) was sufficient for representing both the peak stress and the groove stress gradient.

Note: as for the stress fringe continuity check (Key Quality Check #3), it is important to perform this extraction without averaging features enabled.

VERDICT: Pass

In Summary…

Example of the democratization of classical engineering handbook methods via FEA-based digital engineering applications.

Solutions of typical structural details in 2D and 3D elasticity obtained by classical methods are approximations obtained using various techniques developed in the pre-computer age. This benchmark case study shows that in order to rank results obtained by classical methods, they have to be compared with the corresponding values obtained from the exact solution of the problem of elasticity. Alternatively, when the exact solution is not available, classical methods can be compared with the results from an approximate solution of the same problem of elasticity obtained by FEA.

It was also shown that strict solution verification procedures are required to provide evidence that the approximation error in the quantities of interest is much smaller than the difference observed among the results obtained by classical solutions, an essential technical requirement of Simulation Governance and any benchmarking-by-FEA process.

Finally, this example also highlights another important point: Classical engineering handbooks and design manuals are examples of democratization practiced in the pre-computer age. With the maturing of numerical simulation technology it is now possible to remove the manifold limitations of classical engineering solutions and provide parametric solutions for the problems engineers actually need to solve. This is the main goal of democratization.

There is a fundamentally important prerequisite, however: The exceptionally rare talents of engineer-scientists who populated conventional handbooks have to be democratized, that is, digitally mapped into the world of modern-day analysis.

The time has come for democratization to be reinvented.
