Reliability Archives - ESRD
https://www.esrd.com/tag/reliability/

Meshless Methods
https://www.esrd.com/meshless-methods/
Thu, 07 Nov 2024 16:25:07 +0000

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Meshless methods, also known as mesh-free methods, are computational techniques used for the approximation of the solutions of partial differential equations in the engineering and applied sciences. The advertised advantage of the method is that users do not have to worry about meshing. However, eliminating the meshing problem has introduced other, more complex issues. Oftentimes, advocates of meshless methods fail to mention their numerous disadvantages.

When meshless methods were first proposed as an alternative to the finite element method, creating a finite element mesh was more burdensome than it is today. Undoubtedly, mesh generation will become even less problematic with the application of artificial intelligence tools, and the main argument for using meshless methods will weaken over time.

An artistic rendering of the idea of meshless clouds. The spheres represent the supports of the basis functions associated with the centers of the spheres. Image generated by Microsoft Copilot.

Setting Criteria

First and foremost, numerical solution methods must be reliable. This is not just a desirable feature but an essential prerequisite for realizing the potential of numerical simulation and achieving success with initiatives such as digital transformation, digital twins, and explainable artificial intelligence, all of which rely on predictions based on numerical simulation. Assurance of reliability means that (a) the data and parameters fall within the domain of calibration of a validated model, and (b) the numerical solutions have been verified.
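Conditions (a) and (b) can be expressed as a simple acceptance test applied before a prediction is trusted. The sketch below is purely illustrative; the parameter names, bounds, and tolerance are hypothetical, not taken from any particular model:

```python
def prediction_is_reliable(params, calibration_domain, est_rel_error, tol=0.01):
    """True only if (a) every parameter lies inside the domain of calibration
    and (b) the verified relative error is within the permissible tolerance."""
    in_domain = all(lo <= params[name] <= hi
                    for name, (lo, hi) in calibration_domain.items())
    return in_domain and est_rel_error <= tol

# Hypothetical domain of calibration for some validated model
domain = {"stress_MPa": (0.0, 350.0), "temperature_C": (-40.0, 120.0)}
ok = prediction_is_reliable({"stress_MPa": 200.0, "temperature_C": 20.0},
                            domain, est_rel_error=0.004)
extrapolating = prediction_is_reliable({"stress_MPa": 500.0, "temperature_C": 20.0},
                                       domain, est_rel_error=0.004)
```

The second call fails the test because the stress lies outside the calibrated range: the model is being asked to extrapolate, and no error estimate can rescue that.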

In the following, I compare the finite element and meshless methods from the point of view of reliability. The basis for comparison is the finite element method as it would be implemented today, not as it was implemented in legacy codes which are based on pre-1970s thinking. Currently, ESRD’s StressCheck is the only commercially available implementation that supports procedures for estimating and controlling model form and approximation errors in terms of the quantities of interest.

The Finite Element Method (FEM)

The finite element method (FEM) has a solid scientific foundation, developed post-1970. It is supported by theorems that establish conditions for its stability, consistency, and convergence rates. Algorithms exist for estimating the relative errors in approximations of quantities of interest, alongside procedures for controlling model form errors [1].

The Partition of Unity Finite Element Method (PUFEM)

The finite element method has been shown to work well for a wide range of problems, covering most engineering problems. However, it is not without limitations: For the convergence rates to be reasonable, the exact solution of the underlying problem has to have some regularity. Resorting to alternative techniques is warranted when standard implementations of the finite element method are not applicable.  One such technique is the Partition of Unity Finite Element Method (PUFEM), which can be understood as a generalization of the h, p, and hp versions of the finite element method [2]. It provides the ability to incorporate analytical information specific to the problem being solved in the finite element space.

The FEM Challenge

Any method proposed to rival FEM should, at the very least, demonstrate superior performance for a clearly defined set of problems. The benchmark for comparison should involve computing a specific quantity of interest and proving that the relative error is less than, for example, 1%. I am not aware of any publication on meshless methods that has tackled this challenge.
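To make concrete what meeting such a challenge involves, the sketch below estimates the relative error of a quantity of interest by Richardson extrapolation from a sequence of refined discretizations. A toy quantity (a trapezoid-rule integral whose exact limit is 2) stands in for a finite element quantity of interest:

```python
import math

def qoi(n):
    """Toy quantity of interest: trapezoid-rule value of the integral of
    sin(x) on [0, pi] with n subintervals (exact limit: 2)."""
    h = math.pi / n
    fs = [math.sin(i * h) for i in range(n + 1)]
    return h * (sum(fs) - 0.5 * (fs[0] + fs[-1]))

q0, q1, q2 = qoi(4), qoi(8), qoi(16)        # sequence of refined discretizations
p = math.log2((q0 - q1) / (q1 - q2))        # observed convergence rate
q_limit = q2 + (q2 - q1) / (2.0**p - 1.0)   # Richardson-extrapolated limit
rel_err = abs(q_limit - q2) / abs(q_limit)  # error estimate for finest solution
```

Here the estimated relative error of the finest solution is well under 1%, and the claim can be checked because the error estimate is problem-specific, not a generic benchmark result.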

Meshless Methods

Various meshless methods, such as the Element-Free Galerkin (EFG) method, Moving Least Squares (MLS), and Smoothed Particle Hydrodynamics (SPH), using weak and strong formulations of the underlying partial differential equations, have been proposed. The theoretical foundations of meshless methods are not as well-developed as those of the Finite Element Method (FEM). The users of meshless methods have to cope with the following issues:

  1. Enforcement of boundary conditions: The enforcement of essential boundary conditions in meshless methods is generally more complex and less intuitive than in FEM. The size of errors incurred from enforcing boundary conditions can be substantial.
  2. Sensitivity to the choice of basis functions: The stability of meshless methods can be highly sensitive to the choice of basis functions.
  3. Verification: Solution verification with meshless methods poses significant challenges.
  4. Most meshless methods are not really meshless: It is true that traditional meshing is not required, but in weak formulations, the products of the derivatives of the basis functions have to be integrated. Numerical integration is performed over the domains defined by the intersection of supports (support is the subdomain on which the basis function is not zero), which requires a “background mesh.”
  5. Computational cost: Meshless methods often require greater computational power because the supports of the shape functions overlap more widely than in FEM (in some methods, they are global), which can lead to much denser matrices.
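Points 4 and 5 can be illustrated with a one-dimensional sketch: even with compactly supported basis functions, evaluating the overlap integrals of the weak form requires a background quadrature grid, and the resulting matrix has more nonzeros than the tridiagonal pattern of 1D linear finite elements. The node locations and support radius below are arbitrary, for illustration only:

```python
import numpy as np

# Scattered nodes and an assumed support radius (all values illustrative).
nodes = np.array([0.00, 0.07, 0.18, 0.26, 0.40, 0.49, 0.61, 0.72, 0.83, 1.00])
r = 0.25

def w(x, xi):
    """Compact quartic bump: nonzero only where |x - xi| < r."""
    s = np.abs(x - xi) / r
    return np.where(s < 1.0, (1.0 - s**2) ** 2, 0.0)

# The "background mesh": a quadrature grid covering the whole domain.
xq = np.linspace(0.0, 1.0, 2001)
dx = xq[1] - xq[0]
G = np.array([[np.sum(w(xq, xi) * w(xq, xj)) * dx   # overlap integrals
               for xj in nodes] for xi in nodes])

nnz_meshless = int(np.count_nonzero(G > 1e-14))
nnz_fem_1d = 3 * len(nodes) - 2   # tridiagonal pattern of 1D linear elements
```

Every pair of nodes closer than twice the support radius contributes a nonzero entry, so the matrix fills in well beyond nearest neighbors, and the quadrature grid is, in effect, a mesh.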

Advice to Management

Decision-makers need solid evidence supporting the reliability of data generated by numerical simulation. Otherwise, where would they find the courage to sign the “blueprint”? They should require estimates of the error of approximation for the quantities of interest. Without such estimates, the value of the computed information is greatly diminished because the unknown approximation errors increase the uncertainty in the predicted data.

Management should treat claims of accuracy in marketing materials for legacy finite element software and any software implementing meshless methods with a healthy dose of skepticism. Assertions that a software product was tested against benchmarks and found to perform well should never be taken to mean that it will perform similarly well in all cases. Management should require problem-specific estimates of relative errors in the quantities of interest.


References

[1] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation, 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc., 2021.

[2] Melenk, J. M. and Babuška, I. The partition of unity finite element method: Basic theory and applications. Computer Methods in Applied Mechanics and Engineering. Vol 139(1-4), pp. 289-314, 1996.


A Critique of the World Wide Failure Exercise
https://www.esrd.com/critique-of-the-wwfe/
Thu, 03 Oct 2024 13:00:00 +0000

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The World-Wide Failure Exercise (WWFE) was an international research project with the goal of assessing the predictive performance of competing failure models for composite materials. Part I (WWFE-I) focused on failure in fiber-reinforced polymer composites under two-dimensional (2D) stresses and ran from 1996 until 2004. Part II was concerned with failure criteria under both 2D and 3D stresses. It ran between 2007 and 2013. Quoting from reference [1]: “Twelve challenging test problems were defined by the organizers of WWFE-II, encompassing a range of materials (polymer, glass/epoxy, carbon/epoxy), lay-ups (unidirectional, angle ply, cross-ply, and quasi-isotropic laminates) and various 3D stress states”. Part III, also launched in 2007, was concerned with damage development in multi-directional composite laminates.

The von Mises stress in an ideal fiber-matrix composite subjected to shearing deformation. The displacements are magnified 15X. Verified solution by StressCheck.

Composite Failure Model Development

According to Thomas Kuhn, the period of normal science begins when investigators have agreed upon a paradigm, that is, the fundamental ideas, methods, language, and theories that guide their research and development activities [2]. We can understand WWFE as an effort by the composite materials research community to formulate such a paradigm. While some steps were taken toward achieving that goal, the goal was not reached. The final results of WWFE-II were inconclusive. The main reason is that the project lacked some of the essential constituents of a model development program. To establish favorable conditions for the evolutionary development of failure criteria for composite materials, procedures similar to those outlined in reference [3] will be necessary. The main points are briefly described below.

  1. Formulation of the mathematical model: The operators that transform the input data into the quantities of interest are defined. In the case of WWFE, a predictor of failure is part of the mathematical model. In WWFE II, twelve different predictors were investigated. These predictors were formulated based on subjective factors: intuition, insight, and personal preferences. A properly conceived model development project provides an objective framework for ranking candidate models based on their predictive performance. Additionally, given the stochastic outcomes of experiments, a statistical model that accounts for the natural dispersion of failure events must be included in the mathematical model.
  2. Calibration: Mathematical models have physical and statistical parameters that are determined in calibration experiments. Invariably, there are limitations on the available experimental data; those limitations define the domain of calibration. The participants of WWFE failed to grasp the crucial role of calibration in the development of mathematical models. Quoting from reference [1]: “One of the undesirable features, which was shared among a number of theories, is their tendency to calibrate the predictions against test data and then predict the same using the empirical constants extracted from the experiments.” On the contrary, calibration is not an undesirable feature; it is an essential part of any model development project. Mathematical models will produce reliable predictions only when the parameters and data are within their domains of calibration. One of the important goals of model development projects is to ensure that the domain of calibration is sufficiently large to cover all applications, given the intended use of the model. However, calibration and validation are separate activities: the dataset used for validation has to be different from the dataset used for calibration [3]. Predicting the calibration data after calibration has been performed cannot lead to meaningful conclusions regarding the suitability or fitness of a model.
  3. Validation: Developers are provided complete descriptions of the validation experiments and, based on this information, predict the probabilities of the outcomes of validation experiments. The validation metric is the likelihood of the outcomes.
  4. Solution verification: It must be shown that the numerical errors in the quantities of interest are negligibly small compared to the errors in experimental observations.
  5. Disposition: Candidate models are ranked based on their predictive performance, measured by the ratio of predicted to realized likelihood values. The calibration domain is updated using all available data. At the end of the validation experiments, the calibration data is augmented with the validation data.
  6. Data management: Experimental data must be collected, curated, and archived to ensure its quality, usability, and accessibility.
  7. Model development projects are open-ended: New ideas can be proposed anytime, and the available experimental data will increase over time. Therefore, no one has the final word in a model development project. Models and their domains of calibration are updated as new data become available.
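The validation metric mentioned in step 3 can be made concrete. In the sketch below, two hypothetical candidate models predict the failure load as a normal distribution, and the candidates are ranked by the log-likelihood they assign to made-up validation observations; none of the numbers come from WWFE:

```python
import math

def log_likelihood(observations, mu, sigma):
    """Log-likelihood of the observed outcomes under a candidate model that
    predicts a normally distributed failure load (a modeling assumption)."""
    return sum(-0.5 * math.log(2.0 * math.pi * sigma**2)
               - (x - mu)**2 / (2.0 * sigma**2) for x in observations)

observed = [312.0, 298.0, 305.0, 321.0]      # made-up validation data, MPa
candidates = {"model_A": (310.0, 12.0),      # hypothetical (mean, std) predictions
              "model_B": (350.0, 12.0)}
ranked = sorted(candidates, reverse=True,
                key=lambda name: log_likelihood(observed, *candidates[name]))
```

The ranking rewards candidates whose predicted distribution concentrates probability where the validation outcomes actually fell, which is the essence of steps 3 and 5.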

The Tale of Two Model Development Projects

It is interesting to compare the status of model development for predicting failure events in composite materials with linear elastic fracture mechanics (LEFM), which is concerned with predicting crack propagation in metals, a much less complicated problem. Although no consensus emerged from WWFE-II, there was no shortage of ideas on formulating predictors. In the case of LEFM, on the other hand, the consensus that the stress intensity factor is the predictor of crack propagation emerged in the 1970s, effectively halting further investigation of predictors and causing prolonged stagnation [3]. Undertaking a model development program and applying verification, validation, and uncertainty quantification procedures are essential prerequisites for progress in both cases.

Two Candid Observations

Professor Mike Hinton, one of the organizers of WWFE, delivered a keynote presentation at the NAFEMS World Congress in Boston in May 2011 titled “Failure Criteria in Fibre Reinforced Polymer Composites: Can any of the Predictive Theories be Trusted?” In this presentation, he shared two candid observations that shed light on the status of models created to predict failure events in composite materials:

  1. “The theories coded into current FE tools almost certainly differ from the original theory and from the original creator’s intent.” – In other words, in the absence of properly validated and implemented models, the predictions are unreliable.
  2. He disclosed that Professor Zvi Hashin had declined the invitation to participate in WWFE-I, explaining his reason in a letter. Hashin wrote: “My only work in this subject relates to unidirectional fibre composites, not to laminates” … “I must say to you that I personally do not know how to predict the failure of a laminate (and furthermore, that I do not believe that anybody else does).”

Although these observations are dated, I believe they remain relevant today. Contrary to numerous marketing claims, we are still very far from realizing the benefits of numerical simulation in composite materials.

A Sustained Model Development Program Is Essential

To advance the development of design rules for composite materials, stakeholders need to initiate a long-term model development project, as outlined in reference [3]. This approach will provide a structured and systematic framework for research and innovation. Without such a coordinated effort, the industry has no choice but to rely on the inefficient and costly method of make-and-break engineering, hindering overall progress and leading to inconsistent results. Establishing a comprehensive model development project will create favorable conditions for the evolutionary development of design rules for composite materials.

The WWFE project was large and ambitious. However, a much larger effort will be needed to develop design rules for composite materials.


References

[1] Kaddour, A. S. and Hinton, M. J. Maturity of 3D Failure Criteria for Fibre-Reinforced Composites: Comparison Between Theories and Experiments: Part B of WWFE-II. J. Comp. Mats., Vol. 47, pp. 925-966, 2013.

[2] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.


Finite Element Libraries: Mixing the “What” with the “How”
https://www.esrd.com/finite-element-libraries-mixing-the-what-with-the-how/
Tue, 03 Sep 2024 15:12:16 +0000

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Engineering students first learn statics, then strength of materials, and progress to the theories of plates and shells, continuum mechanics, and so on. As the course material advances from simple to complex, students often think that each theory (model) stands on its own, overlooking the fact that simpler models are special cases of complex ones. This view shaped the development of the finite element (FE) method in the 1960s and 70s. The software architecture of the legacy FE codes was established in that period.

The Element-Centric View

Richard MacNeal, a principal developer of NASTRAN and co-founder of the MacNeal-Schwendler Corporation (MSC), once told me that his dream was to formulate “the perfect 4-node shell element”. His background was in analog computers, and he thought of finite elements as tuneable objects: If one tunes an element just right, as potentiometers are tuned in analog computers, then a perfect element can be created. This element-centric view led to the implementation of large element libraries, which are still in use today. These libraries mix what we wish to solve (in this instance, a shell model) with how we wish to solve it (using 4-node finite elements).

A cluttered, unattractive library, emblematic of finite element libraries in legacy FE codes. Image generated by Gemini.

In formulating his shell element, MacNeal was constrained by the limitations of the architecture of NASTRAN. Quoting from reference [1]: “An important general feature of NASTRAN which limits the choice of element formulation  is that, with rare exceptions, the degrees of freedom consist of the three components of translation and the three components of rotation at discrete points.” This feature originated from models of structural frames where the joints of beams and columns are allowed to translate and rotate in three mutually orthogonal directions. Such restrictions, common to all legacy FE codes, prevented those codes from keeping pace with the subsequent scientific development of FE analysis.

MacNeal’s formulation of his shell element was entirely intuitive. There is no proof that the finite element solutions corresponding to progressively refined meshes converge to the exact solution of a particular shell model, or that they converge at all. Model form and approximation errors are intertwined.

The classical shell model, also known as the Novozhilov-Koiter (N-K) model, taught in advanced strength of materials classes, is based on the assumption that normals to the mid-surface in the undeformed configuration remain normal after deformation. Making this assumption was necessary in the pre-computer era to allow the solution of simple shell problems by classical methods. Today, the N-K shell model is only of theoretical and historical interest. Instead, we have a hierarchic sequence of shell models of increasing complexity. The next shell model is the Naghdi model, which is based on the assumption that normals to the mid-surface in the undeformed configuration remain straight lines but not necessarily normal. Higher-order models permit the normal to deform in ways that can be well approximated by polynomials [2].

Shells behave like three-dimensional solids in the neighborhoods of support attachments, stiffeners, nozzles, and cutouts. Therefore, restrictions on the transverse variation of the displacement components are not warranted in those locations. Whether a shell is thin or thick depends not only on the ratio of the thickness to the radius of curvature but also on the smoothness of the exact solution. The proper choice of a shell model depends on the problem at hand and the goals of computation. Consider, for example, the free vibration of a shell. When the wavelengths of the mode shapes are close to the thickness, the shearing deformations cannot be neglected, and hence, the shell behaves as a thick shell. Perfect shell elements do not exist. Furthermore, there is no such thing as a perfect element of any kind.

The Model-Centric View

In the model-centric view, we recognize that any model is a special case of a more comprehensive model. For instance, in solid mechanics problems, we typically start with a problem of linear elasticity, where one of the assumptions is that stress is proportional to strain, regardless of the size of the strain. Once the solution is available, we check whether the proportional limit was exceeded. If it was, we solve a nonlinear problem, for example, using the deformation theory of plasticity with a suitable material law. In that case, the linear solution is the first iteration in solving the nonlinear problem. If the displacements are large, we continue with the iterations to solve the geometric nonlinear problem. It is important to ensure that the errors of approximation are negligibly small throughout the numerical solution process.
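The solution sequence described above, linear solution first, then nonlinear iteration only if the proportional limit was exceeded, can be sketched for a single material point. The bilinear hardening law and all numerical values below are illustrative assumptions, not a recommended material model:

```python
E, H, sig_y = 70.0e3, 7.0e3, 250.0   # MPa: modulus, hardening, yield (assumed)
eps_y = sig_y / E

def stress(eps):
    """Bilinear stress-strain law standing in for a deformation-theory model."""
    return E * eps if eps <= eps_y else sig_y + H * (eps - eps_y)

def strain_for(sig_applied, tol=1e-10):
    """Strain at a material point: linear solution first; iterate only if the
    proportional limit was exceeded."""
    eps = sig_applied / E            # linear elastic solution = first iterate
    if stress(eps) <= sig_y:         # proportional limit not exceeded: done
        return eps
    for _ in range(50):              # Newton iterations on the nonlinear law
        slope = E if eps <= eps_y else H
        eps -= (stress(eps) - sig_applied) / slope
        if abs(stress(eps) - sig_applied) < tol:
            break
    return eps
```

Below the proportional limit, the linear solution is returned unchanged; above it, the linear solution serves as the first iterate of the nonlinear solve, exactly as described in the text.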

At first glance, it might seem that model form errors can be made arbitrarily small. However, this is generally not possible. As the complexity of the model increases, so does the number of physical parameters. For instance, transitioning from linear elasticity to accounting for plastic deformation requires introducing empirical constants to characterize nonlinear material behavior. These constants have statistical variations, which increase prediction uncertainty. Ultimately, these uncertainties will likely outweigh the benefits of more complex models.

Implementation

An FE code should allow users to control both the model form and the approximation errors. To achieve this, model and element definitions must be separate, and seamless transitions from one model to another and from one discretization to another must be made possible. In principle, it is possible to control both types of error using legacy FE codes, but since model and element definitions are mixed in the element libraries, the process becomes so complicated that it is impractical to use in industrial settings.

Model form errors are controlled through hierarchic sequences of models, while approximation errors are controlled through hierarchic sequences of finite element spaces [2]. The stopping criterion is that the quantities of interest should remain substantially unchanged in the next level of the hierarchy.
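The stopping criterion can be expressed as a simple loop over hierarchy levels. In this sketch, least-squares polynomial spaces of increasing degree stand in for a hierarchic sequence of finite element spaces, and the fitted value at a point stands in for the quantity of interest; all of this is a toy illustration, not StressCheck's algorithm:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 201)
f = np.exp(xs)                 # stand-in for the exact solution of a problem

def qoi(p):
    """Quantity of interest extracted from the degree-p approximation space:
    here, the value of the least-squares fit at x = 1 (a stand-in only)."""
    return np.polyval(np.polyfit(xs, f, p), 1.0)

tol, p = 1e-4, 1
q_prev = qoi(p)
while True:
    p += 1                     # move to the next level of the hierarchy
    q = qoi(p)
    if abs(q - q_prev) <= tol * abs(q):   # QoI substantially unchanged: stop
        break
    q_prev = q
```

Because each space contains the preceding one, the sequence of quantities of interest settles down, and the loop terminates at a modest level of the hierarchy.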

Advice to Management

To ensure the reliability of predictions, it must be shown that the model form errors and the approximation errors do not exceed pre-specified tolerances. Moreover, the model parameters and data must be within the domain of calibration [3]. Management should not trust model-generated predictions unless evidence is provided showing that these conditions are satisfied.

When considering various marketing claims regarding the promised benefits of numerical simulation, digital twins, and digital transformation, management is well advised to keep this statement by the philosopher David Hume in mind: “A wise man proportions his belief to the evidence.”


References

[1] MacNeal, R. H. A simple quadrilateral shell element.  Computers & Structures, Vol. 8, pp. 175-183, 1978.

[2] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation, 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc., 2021.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.


The Kuhn Cycle in the Engineering Sciences
https://www.esrd.com/kuhn-cycle-in-engineering-sciences/
Thu, 01 Aug 2024 14:05:06 +0000

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In the engineering sciences, mathematical models are used as sources of information for making technical decisions. Consequently, decision-makers need convincing evidence that relying on predictions from a mathematical model is justified. Such reliance is warranted only if:

  • the model has been validated, and its domain of calibration is clearly defined;
  • the errors of approximation are known to be within permissible tolerances [1].

Model development projects are essentially scientific research projects. As such, they are subject to the operation of the Kuhn Cycle, named after Thomas Kuhn, who identified five stages in scientific research projects [2]:

  • Normal Science – Development of mathematical models based on the best scientific understanding of the subject matter.
  • Model Drift – Limitations of the model are encountered. Certain quantities of interest cannot be predicted by the model with sufficient reliability.
  • Model Crisis – Model drift becomes excessive.  Attempts to remove the limitations of the model are unsuccessful.
  • Model Revolution – This begins when candidates for a new model are proposed. The domain of calibration of the new model is sufficiently large to resolve most, if not all, of the issues identified with the preceding model.
  • Paradigm Change – A paradigm consists of the fundamental ideas, methods, language, and theories that are accepted by the members of a scientific or professional community. In this phase, a new paradigm emerges, which then becomes the new Normal Science.

The Kuhn cycle is a valuable concept for understanding how mathematical models evolve. It highlights the importance of paradigms in shaping model development and the role of paradigm shifts in the process.

Example: Linear Elastic Fracture Mechanics

In linear elastic fracture mechanics (LEFM), the goal is to predict the size of a crack, given a geometrical description, an initial crack configuration, material properties, and a load spectrum. The mathematical model comprises (a) the equations of the theory of elasticity, (b) a predictor that establishes a relationship between a functional defined on the elastic stress field (usually the stress intensity factor), and increments in crack length caused by the application of constant amplitude cyclic loads, (c) a statistical model that accounts for the natural dispersion of crack lengths, and (d) an algorithm that accounts for the effects of tensile and compressive overload events.

Evolution of LEFM

The period of normal science in LEFM began around 1920 and ended in the 1970s. Many important contributions were made in that period. For a historical overview and commentaries, see reference [3]. Here, I mention only three seminal contributions: Alan A. Griffith’s investigation of brittle fracture, George R. Irwin’s modification of Griffith’s theory for the fracturing of metals, and Paul C. Paris’s proposal of the following relationship between the increment in crack length per cycle of loading and the stress intensity factor K:

\frac{da}{dN} = C(K_{max}-K_{min})^m \qquad (1)

where N is the cycle count, C and m are constants determined by calibration. This empirical formula is known as Paris’ law. Numerous variants have been proposed to account for cycle ratios and limiting conditions.
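For a long center crack in a wide plate under constant-amplitude loading, where K = σ√(πa), equation (1) can be integrated cycle by cycle. The constants below are placeholders chosen for illustration, not calibrated values:

```python
import math

C, m = 1.0e-10, 3.0            # placeholder Paris constants (not calibrated)
d_sigma = 100.0                # constant-amplitude stress range, MPa (K_min = 0)
a, a_final = 0.001, 0.020      # half crack length, m: initial and final
N = 0
while a < a_final:
    dK = d_sigma * math.sqrt(math.pi * a)  # MPa*sqrt(m): wide plate, center crack
    a += C * dK ** m                       # increment from one load cycle
    N += 1
# N now holds the predicted number of cycles to grow from 1 mm to 20 mm
```

The loop makes the role of calibration plain: the predicted cycle count depends entirely on C and m, which are meaningful only within their domain of calibration.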

In 1972, the US Air Force adopted damage-tolerant design as part of the Airplane Structural Integrity Program (ASIP) [MIL-STD-1530, 1972]. Damage-tolerant design requires showing that a specified maximum initial damage would not produce a crack large enough to endanger flight safety. The paradigm that Paris’ law is the predictor of crack growth under cyclic loading is now universally accepted.

Fly in the Ointment

Paris’ law is defined on two-dimensional stress fields. However, it is not possible to calibrate any predictor in two dimensions. The specimens used in calibration experiments are typically plate-like objects. In the neighborhood of the points where the crack front intersects the surfaces, the stress field is very different from what is assumed in Paris’ law. Therefore, the parameters C and m in equation (1) are not purely material properties but also depend on the thickness of the test specimen. Nevertheless, as long as Paris’ law is applied to long cracks in plates, the predictions are accurate enough to be useful for practical purposes. However, problems arise when a crack is small relative to the thickness of the plate, for instance, a small corner crack at a fastener hole, which is one of the very important cases in damage-tolerant design. Attempts to fix this problem through the introduction of correction factors have not been successful. First, model drift and then model crisis set in. 

The consensus that the stress intensity factor drives crack propagation consolidated into a dogma about 50 years ago. New generations of engineers have been indoctrinated with this belief, and today, any challenge to this belief is met with utmost skepticism and even hostility. An unfortunate consequence of this is that healthy model development stalled about 50 years ago. The key requirement of damage-tolerant design, which is to reliably predict the size of a crack after the application of a load spectrum, is not met even in those cases where Paris’ law is applicable. This point is illustrated in the following section.

Evidence of the Model Crisis

A round-robin exercise was conducted in 2022. The problem statement was as follows: given a centrally cracked 7075-T651 aluminum panel of thickness 0.245 inches and width 3.954 inches, a load spectrum, and an initial half crack length of 0.070 inches, the quantity of interest was the half crack length as a function of the number of cycles of loading. The specimen configuration and notation are shown in Fig. 1(a). The load spectrum was characterized by two load maxima given in terms of the nominal stress values σ1 = 22.5 ksi and σ2 = 2σ1/3. The load σ = σ1 was applied in cycles numbered 1, 101, 201, etc.; the load σ = σ2 was applied in every other cycle. The minimum load was zero for all cycles. In comparison with typical design load spectra, this is a highly simplified spectrum. The participants in this round-robin were professional organizations that routinely provide estimates of this kind in support of design and certification decisions.

Calibration data were provided in the form of tabular records of da/dN corresponding to (Kmax – Kmin) for various (Kmin/Kmax) ratios. The participants were asked to account for the effects of the periodic overload events on the crack length. 

A positive overload causes a larger increment of the crack length in accordance with Paris’ law, and it also causes compressive residual stress to develop ahead of the crack tip. This residual stress retards crack growth in subsequent cycles while the crack traverses the zone of compressive residual stress. Various models have been formulated to account for retardation (see, for example, AFGROW – DTD Handbook Section 5.2.1.2). Each participant chose a different model. No information was given on whether or how those models were validated. The results of the experiments were revealed only after the predictions were made.
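As an illustration of the retardation mechanism described above, here is a sketch of a Wheeler-type retardation model, one family among the models the participants could have chosen. Growth in each cycle is scaled by a factor φ ≤ 1 while the current crack-tip plastic zone lies inside the zone left by the last overload. The Paris parameters, the assumed yield stress, and the Wheeler exponent p are illustrative assumptions, not calibrated values.

```python
import numpy as np

# Hedged sketch of Wheeler-type retardation. All constants are illustrative.

def K_max(smax, a, W=3.954):
    # Maximum stress intensity factor for a center crack, secant correction
    return smax * np.sqrt(np.pi * a) / np.sqrt(np.cos(np.pi * a / W))

def wheeler(a0, spectrum, C, m, sigma_y, p):
    a, a_ol, rp_ol = a0, a0, 0.0          # crack length, overload state
    for smax in spectrum:
        K = K_max(smax, a)
        rp = (K / sigma_y) ** 2 / (2.0 * np.pi)   # plane-stress plastic zone
        if a + rp >= a_ol + rp_ol:        # outside the overload zone: no retardation
            phi, a_ol, rp_ol = 1.0, a, rp
        else:
            phi = (rp / (a_ol + rp_ol - a)) ** p  # Wheeler retardation factor
        a += phi * C * K ** m             # minimum load is zero, so dK = Kmax
    return a

# Overload sigma1 = 22.5 ksi in cycles 1, 101, 201, ...; sigma2 = 15 ksi otherwise
spectrum = [22.5 if i % 100 == 0 else 15.0 for i in range(5000)]
a_ret  = wheeler(0.070, spectrum, C=1e-9, m=3.2, sigma_y=73.0, p=1.5)
a_none = wheeler(0.070, spectrum, C=1e-9, m=3.2, sigma_y=73.0, p=0.0)  # no retardation
```

Setting p = 0 switches retardation off, so the difference between the two runs isolates the retardation effect. The spread among the round-robin predictions reflects, in part, the fact that each participant chose a different model of this kind with differently tuned parameters.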

Fig. 1 (b) shows the results of the experiments and four of the predicted outcomes. In three of the four cases, the predicted number of cycles is substantially greater than the number of load cycles observed in the experiments, and there is a large spread among the predictions.

Figure 1: (a) Test article. (b) The results of experiments and predicted crack lengths.

This problem is within the domain of calibration of Paris’ law, and the available calibration records cover the interval of the (Kmax – Kmin) values used in the round robin exercise. Therefore, in this instance, the suitability of the stress intensity factor to serve as a predictor of crack propagation is not in question.

Given that the primary objective of LEFM is to provide estimates of crack length following the application of a load spectrum, and that this was a highly simplified problem, these results suggest that retardation models based on LEFM are in a state of crisis. The crisis can be resolved through the application of the principles and procedures of verification, validation, and uncertainty quantification (VVUQ) in a model development project conducted in accordance with the procedures described in [1].


Outlook

Damage-tolerant design necessitates reliable prediction of crack size, given an initial flaw and a load spectrum. However, the outcome of the round-robin exercise indicates that this key requirement is not currently met. While I’m not in a position to estimate the economic costs of this, it’s safe to say they must be a significant part of military aircraft sustainment programs.

I believe that to advance LEFM beyond the crisis stage, organizations that rely on damage-tolerant design procedures must mandate the application of verification, validation, and uncertainty quantification procedures, as outlined in reference [1]. This will not be an easy task, however. A paradigm shift can be a controversial and messy process. As W. Edwards Deming, American engineer, economist, and composer, observed: “Two basic rules of life are: 1) Change is inevitable. 2) Everybody resists change.”


References

[1] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications. Vol. 162, pp. 206–214, 2024.

[2] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[3] Rossmanith, H. P., Ed., Fracture Mechanics Research in Retrospect. An Anniversary Volume in Honour of George R. Irwin’s 90th Birthday, Rotterdam: A. A. Balkema, 1997.


Variational Crimes https://www.esrd.com/variational-crimes/ https://www.esrd.com/variational-crimes/#respond Mon, 08 Jul 2024 11:00:00 +0000 https://www.esrd.com/?p=31948

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In Thomas Kuhn’s terminology, “pre-science” refers to a period of early development in a field of research [1]. During this period, there is no established explanatory framework (paradigm) mature enough to solve the main problems. In the case of the finite element method (FEM), the period of pre-science started when reference [2] was published in 1956 and ended in the early 1970s when scientific investigation began in the applied mathematics community. The publication of lectures at the University of Maryland [3] and the first mathematical book on FEM [4] marked the transition to what Kuhn termed “normal science”.

Two Views

Engineers view FEM as an intuitive modeling tool, whereas mathematicians see it as a method for approximating the solutions of partial differential equations cast in variational form. On the engineering side, the emphasis is on implementation and applications, while mathematicians are concerned with clarifying the conditions for stability and consistency, establishing error estimates, and formulating extraction procedures for various quantities of interest. 

From the beginning, a significant communication gap existed between the engineering and mathematical communities. Engineers did not understand why mathematicians would worry so much about the number of square-integrable derivatives, and mathematicians did not understand how it is possible that engineers can find useful solutions even when the rules of variational calculus are violated. This gap widened over the years: On one hand, the art of finite element modeling became an integral part of engineering practice. On the other hand, the science of finite element analysis became an established branch of applied mathematics.

The Art of Finite Element Modeling

The art of finite element modeling has its roots in the pre-science period of finite element analysis when engineers sought to extend the matrix methods of structural analysis, developed for trusses and frames, to complex structures such as plates, shells, and solids. The major finite element modeling software products in use today, such as NASTRAN, ANSYS, MARC, and Abaqus, are all based on the understanding of the finite element method (FEM) that existed before 1970. As long as the goal is to find force-displacement relationships, such as in load models of airframes and crash dynamics models of automobiles, finite element modeling can provide useful information. However, problems arise when the quantities of interest include (or depend on) the pointwise derivatives of the solution, as in strength analysis where stresses and strains are of interest.

Misplaced Accusations

The first mathematical book on the finite element method [4] dedicated a chapter to violations of the rules of variational calculus in various implementations of the finite element method. The title of the chapter is “Variational Crimes,” a catchphrase that quickly caught on. The variational crimes are charged as follows:

  1. Using non-conforming elements: Non-conforming elements are those that do not satisfy the interelement continuity requirements of the variational formulation.
  2. Using numerical integration.
  3. Approximating domains and boundary conditions.

Item 1 is a serious crime; however, the motivations for committing this crime can be negated by properly formulating mathematical models. Items 2 and 3 are not crimes; they are essential features of the finite element method, and the associated errors can be easily controlled. The authors were thinking about asymptotic error estimators (what happens when the diameter of the largest element goes to zero) that did not account for items 2 and 3. They did not want to bother with the complications caused by numerical integration and the approximation of the domains and boundary conditions, so they declared those features to be crimes. This may have been a clever move but certainly not a helpful one.
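The claim that quadrature error is easily controlled rests on a classical fact: an n-point Gauss-Legendre rule integrates polynomials of degree 2n-1 exactly, so the rule can always be chosen to match the polynomial degree of the element integrand. A quick illustrative check, using NumPy's Gauss-Legendre nodes and weights:

```python
import numpy as np

# An n-point Gauss-Legendre rule is exact for polynomials of degree 2n-1,
# which is why quadrature error in finite element computations is controllable:
# choose the rule to match the degree of the integrand.
def gauss_integrate(f, n):
    x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    return np.dot(w, f(x))

exact = 2.0 / 7.0   # integral of t**6 over [-1, 1]
err3 = abs(gauss_integrate(lambda t: t**6, 3) - exact)  # 3 points: degree 6 not exact
err4 = abs(gauss_integrate(lambda t: t**6, 4) - exact)  # 4 points: exact through degree 7
```

With three points the degree-6 integrand is under-integrated and the error is visible; with four points the result is exact to machine precision. The error is therefore a deliberate, controllable choice, not an uncontrolled defect.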

Sherlock Holmes investigating variational crimes in Victorian London. Image generated by Microsoft Copilot.

Egregious Variational Crimes

The authors of reference [4] failed to mention the truly egregious variational crimes that are very common in the practice of finite element modeling today and will have to be abandoned if the reliability of predictions based on finite element computations is to be established:

  1. Using point constraints. Perhaps the most common variational crime is using point constraints for other than rigid body constraints. The finite element solution will converge to a solution that ignores the point constraints if such a solution exists; otherwise, it will diverge. However, the rates of convergence or divergence are typically very slow. For the discretizations used in practice, the effect is hardly noticeable. So then, why should we worry about it? Because either we are not approximating the solution to the problem we had in mind, or we are “approximating” a problem that has no solution. Finding an approximation to a solution that does not exist makes no sense, yet such occurrences are very common in finite element modeling practice. The apparent credibility of the finite element solution is owed to the near cancellation of two large errors: The conceptual error of using illegal constraints and the numerical error of not using sufficiently fine discretization to make the conceptual error visible. A detailed explanation is available in reference [5], Section 5.2.8.
  2. Using point forces in 2D and 3D elasticity (or more generally in 2D and 3D problems). In linear elasticity, the exact solution does not have finite strain energy when point forces are applied. Hence, any finite element solution “approximates” a problem that does not have a solution in energy space.  Once again, divergence is very slow. When point forces are applied, element-by-element equilibrium is satisfied, and the effects of point forces are local, whereas the effects of point constraints are global. Generally, it is permissible to apply point forces in the region of secondary interest but not in the region of primary interest, where the goal is to compute quantities that depend on the derivatives, such as stresses and strains [5].
  3. Using reduced integration. At the time of the publication of their book [4], Strang and Fix could not have known about reduced integration, which was introduced a few years later [6]. Reduced integration was justified in typical finite element modeling fashion: Low-order elements exhibit shear locking and Poisson ratio locking. Since the elements that lock “are too stiff,” it is possible to make them softer by using fewer integration points than necessary. The consequences were that the elements exhibited spurious “zero energy modes,” called “hourglassing,” that had to be controlled by various tuning parameters. For example, in the Abaqus Analysis User’s Manual, C3D8RHT(S) is identified as an “8-node trilinear displacement and temperature, reduced integration with hourglass control, hybrid with constant pressure” element. Tinkering with the integration rules may be useful in the art of finite element modeling when the goal is to tune stiffness relationships (as, for example, in crash dynamics models), but it is an egregious crime in finite element analysis because it introduces a source of error that cannot be controlled by mesh refinement or by increasing the polynomial degree, and makes a posteriori error estimation impossible.
  4. Reporting computed data that do not converge to a finite value. For example, if a domain has one or more sharp reentrant corners in the region of primary interest, then the maximum stress computed from a finite element solution will be a finite number but will tend to infinity when the degrees of freedom are increased. It is not meaningful to report such a computed value: The error is infinitely large.
  5. Tricks used when connecting elements based on different formulations. For example, connecting an axisymmetric shell element (3 degrees of freedom per node) with an axisymmetric solid element (2 degrees of freedom) involves tricks of various sorts, most of which are illegal.
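The hourglassing mentioned in item 3 can be demonstrated directly by counting the zero-energy modes of a single element stiffness matrix. The sketch below, assuming a bilinear (Q4) plane-stress quadrilateral on a unit square with illustrative material constants, shows that full 2x2 Gauss integration leaves only the three rigid-body modes at zero strain energy, while one-point integration adds two spurious modes:

```python
import numpy as np

# Hedged sketch, not production code: zero-energy modes of a Q4 plane-stress
# element under full (2x2) vs reduced (1-point) integration. Material constants
# and the unit-square geometry are illustrative.

E, nu = 1.0, 0.3
D = E / (1 - nu**2) * np.array([[1, nu, 0], [nu, 1, 0], [0, 0, (1 - nu) / 2]])
X = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # node coordinates

def B_matrix(xi, eta):
    # derivatives of the four bilinear shape functions w.r.t. (xi, eta)
    dN = 0.25 * np.array([[-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)],
                          [-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)]])
    J = dN @ X                            # Jacobian of the element mapping
    dNxy = np.linalg.solve(J, dN)         # derivatives w.r.t. (x, y)
    B = np.zeros((3, 8))
    B[0, 0::2] = dNxy[0]                  # strain eps_x
    B[1, 1::2] = dNxy[1]                  # strain eps_y
    B[2, 0::2] = dNxy[1]; B[2, 1::2] = dNxy[0]   # engineering shear strain
    return B, np.linalg.det(J)

def zero_energy_modes(points):
    K = np.zeros((8, 8))
    for xi, eta, w in points:
        B, detJ = B_matrix(xi, eta)
        K += w * detJ * (B.T @ D @ B)
    return int(np.sum(np.linalg.eigvalsh(K) < 1e-10))

g = 1.0 / np.sqrt(3.0)
full = [(s * g, t * g, 1.0) for s in (-1, 1) for t in (-1, 1)]  # 2x2 Gauss rule
reduced = [(0.0, 0.0, 4.0)]                                     # 1-point rule
n_full, n_reduced = zero_energy_modes(full), zero_energy_modes(reduced)
```

With full integration, only the three rigid-body motions produce zero strain energy; the two extra zero-energy modes under one-point integration are the hourglass modes that must then be suppressed by stabilization parameters of the kind named in the Abaqus element designation above.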
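The non-convergence described in item 4 has a simple analytical explanation. Near a reentrant corner with traction-free edges and interior angle w, the stresses behave like r**(lambda - 1), where the smallest exponent lambda solves the symmetric (Williams-type) eigenvalue equation sin(lambda*w) = -lambda*sin(w); for lambda < 1 the exact maximum stress is infinite, so computed values grow without bound under refinement. An illustrative sketch, using plain bisection:

```python
import math

# Hedged sketch: smallest stress singularity exponent at a traction-free
# reentrant corner of interior angle w, from sin(lam*w) = -lam*sin(w).
# For lam < 1 the pointwise maximum stress is infinite at the corner.

def smallest_exponent(w):
    f = lambda lam: math.sin(lam * w) + lam * math.sin(w)
    lo, hi = 1e-9, 1.0 - 1e-9            # bisection on (0, 1)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam_crack  = smallest_exponent(2.0 * math.pi)   # crack tip: lam = 0.5
lam_corner = smallest_exponent(1.5 * math.pi)   # L-shaped corner: lam close to 0.5445
```

Since both exponents are below one, a "maximum stress" reported at such a point from any finite element solution is a discretization-dependent number with an infinitely large error, exactly as stated in item 4.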

Takeaway

The deeply ingrained practice of finite element modeling has its roots in the pre-science period of the development of the finite element method. To meet the current reliability expectations in numerical simulation, it will be necessary to routinely perform solution verification. This is possible only through the science of finite element analysis, respecting the rules of variational calculus. When thinking about digital transformation, digital twins, certification by analysis, and linking simulation with artificial intelligence tools, one must think about the science of finite element analysis and not the art of finite element modeling rooted in pre-1970s thinking.
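Solution verification, in its simplest form, can be illustrated by estimating the convergence rate and the limit of a quantity of interest from a sequence of discretizations. The sketch below uses manufactured data and assumes smooth convergence, Q_h approximately Q + C*h**p, with refinement ratio 2; it is illustrative, not a substitute for proper a posteriori error estimation:

```python
import math

# Hedged sketch: Richardson extrapolation from three uniformly refined meshes
# (refinement ratio r = 2). Q1, Q2, Q3 are the coarse-to-fine computed values
# of a quantity of interest; the data below are manufactured for illustration.

def richardson(Q1, Q2, Q3, r=2.0):
    p = math.log((Q1 - Q2) / (Q2 - Q3)) / math.log(r)  # observed convergence rate
    Q = Q3 + (Q3 - Q2) / (r**p - 1.0)                  # extrapolated limit value
    err = abs(Q - Q3)                                  # error estimate, finest mesh
    return p, Q, err

# Manufactured data: Q_h = 10 + h**2 at h = 0.4, 0.2, 0.1
p, Q, err = richardson(10.16, 10.04, 10.01)
```

Reporting the extrapolated limit together with the observed rate and an error estimate, rather than a single number, is the minimal form of the solution verification called for above.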


References

[1] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[2] Turner, M.J., Clough, R.W., Martin, H.C. and Topp, L.J. Stiffness and deflection analysis of complex structures. Journal of the Aeronautical Sciences, 23(9), pp. 805-823, 1956.

[3] Babuška, I. and Aziz, A.K. Survey lectures on the mathematical foundations of the finite element method.  The mathematical foundations of the finite element method with applications to partial differential equations (A. K. Aziz, ed.) Academic Press, 1972.

[4] Strang, G. and Fix, G. An analysis of the finite element method. Prentice Hall, 1973.

[5] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation. 2nd ed., Hoboken, NJ: John Wiley & Sons, Inc., 2021.

[6] Hughes, T.J., Cohen, M. and Haroun, M. Reduced and selective integration techniques in the finite element analysis of plates. Nuclear Engineering and Design, 46(1), pp. 203-222, 1978.


Simulation Governance https://www.esrd.com/simulation-governance-at-the-present/ https://www.esrd.com/simulation-governance-at-the-present/#respond Thu, 13 Jun 2024 20:23:21 +0000 https://www.esrd.com/?p=31866

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Digital transformation, digital twins, certification by analysis, and AI-assisted simulation projects are generating considerable interest in engineering communities. For these initiatives to succeed, the reliability of numerical simulations must be assured. This can happen only if management understands that simulation governance is an essential prerequisite for success and undertakes to establish and enforce quality control standards for all simulation projects.

The idea of simulation governance is so simple that it is self-evident: Management is responsible for the exercise of command and control over all aspects of numerical simulation. The formulation of technical requirements is not at all simple, however. A notable obstacle is the widespread confusion of the practice of finite element modeling with numerical simulation. This misconception is fueled by marketing hyperbole, falsely suggesting that purchasing a suite of software products is equivalent to outsourcing numerical simulation.  

At present, a very substantial unrealized potential exists in numerical simulation. Simulation technology has matured to the point where management can realistically expect the reliability of predictions based on numerical simulations to match the reliability of observations in physical experimentation. This will require management to upgrade simulation practices through exercising simulation governance.

The Kuhn Cycle

The development of numerical simulation technology falls under the broad category of scientific research programs, which encompass model development projects in the engineering and applied sciences as well. By and large, these programs follow the pattern of the Kuhn Cycle [1] illustrated schematically in Fig. 1 in blue:

Figure 1: Schematic illustration of the Kuhn cycle.

A period of pre-science is followed by normal science. In this period, researchers have agreed on an explanatory framework (paradigm) that guides the development of their models and algorithms.  Program (or model) drift sets in when problems are identified for which solutions cannot be found within the confines of the current paradigm. A program crisis occurs when the drift becomes excessive and attempts to remove the limitations are unsuccessful. Program revolution begins when candidates for a new approach are proposed. This eventually leads to the emergence of a new paradigm, which then becomes the explanatory framework for the new normal science.

The Development of Finite Element Analysis

The development of finite element analysis followed a similar pattern. The period of pre-science began in 1956 and lasted until about 1970. In this period, engineers who were familiar with the matrix methods of structural analysis were trying to extend those methods to stress analysis. The formulation of the algorithms was based on intuition; testing was based on trial and error, and arguing from the particular to the general (a logical fallacy) was common.

Normal science began in the early 1970s when the mathematical foundations of finite element analysis were addressed in the applied mathematics community. By that time, the major finite element modeling software products in use today were under development. Those development efforts were largely motivated by the needs of the US space program. The developers adopted a software architecture based on pre-science thinking. I will refer to these products as legacy FE software: For example, NASTRAN, ANSYS, MARC, and Abaqus are all based on the understanding of the finite element method (FEM) that existed before 1970.

Mathematical analysis of the finite element method identified a number of conceptual errors. However, the conceptual framework of mathematical analysis and the language used by mathematicians were foreign to the engineering community, and there was no meaningful interaction between the two communities.

The scientific foundations of finite element analysis were firmly established by 1990, and finite element analysis became a branch of applied mathematics. This means that, for a very large class of problems that includes linear elasticity, the conditions for stability and consistency were established, estimates were obtained for convergence rates, and solution verification procedures were developed, as were elegant algorithms for superconvergent extraction of quantities of interest such as stress intensity factors. I was privileged to have worked closely with Ivo Babuška, an outstanding mathematician who is rightfully credited for many key contributions.

Normal science continues in the mathematical sphere, but it has no influence on the practice of finite element modeling. As indicated in Fig. 1, the practice of finite element modeling is rooted in the pre-science period of finite element analysis, and having bypassed the period of normal science, it had reached the stage of program crisis decades ago.

Evidence of Program Crisis

The knowledge base of the finite element method in the pre-science period was a small fraction of what it is today. The technical differences between finite element modeling and numerical simulation are addressed in one of my earlier blog posts [2]. Here, I note that decision-makers who have to rely on computed information have reasons to be disappointed. For example, the Air Force Chief of Staff,  Gen. Norton Schwartz, was quoted in Defense News, 2012 [3] saying: “There was a view that we had advanced to a stage of aircraft design where we could design an airplane that would be near perfect the first time it flew. I think we actually believed that. And I think we’ve demonstrated in a compelling way that that’s foolishness.”

General Schwartz expected that the reliability of predictions based on numerical simulation would be similar to the reliability of observations in physical tests. This expectation was not unreasonable considering that by that time, legacy FE software tools had been under development for more than 40 years. What the general did not know was that, while the user interfaces greatly improved and impressive graphic representations could be produced, the underlying solution methodology was (and still is) based on pre-1970s thinking.

As a result, efforts to integrate finite element modeling with artificial intelligence and to establish digital twins based on finite element modeling will surely end in failure.

Paradigm Change Is Necessary

Paradigm change is never easy. Max Planck observed: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” This is often paraphrased, saying: “Progress occurs one funeral at a time.” Planck was referring to the foundational sciences and changing academic minds.  The situation is more challenging in the engineering sciences, where practices and procedures are often deeply embedded in established workflows and changing workflows is typically difficult and expensive.

What Should Management Do?

First and foremost, management should understand that simulation is one of the most abused words in the English language. Furthermore:

  • Treat any marketing claim involving simulation with an extra dose of skepticism. Prior to undertaking projects in the areas of digital transformation, certification by analysis, digital twins, and AI-assisted simulation, ensure that the mathematical models produce reliable predictions.
  • Recognize the difference between finite element modeling and numerical simulation.
  • Understand that mathematical models produce reliable predictions only within their domains of calibration.
  • Treat model form and numerical approximation errors separately and require error control in the formulation and application of mathematical models.
  • Do not accept computed data without error metrics.
  • Understand that model development projects are open-ended.
  • Establish conditions favorable for the evolutionary development of mathematical models.
  • Become familiar with the concepts and terminology in reference [4]. For additional information on simulation governance, I recommend ESRD’s website.


References

[1] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[2] Szabó B. Why Finite Element Modeling is Not Numerical Simulation? ESRD Blog. November 2, 2023. https://www.esrd.com/why-finite-element-modeling-is-not-numerical-simulation/.

[3] Weisgerber, M. DoD Anticipates Better Price on Next F-35 Batch, Gannett Government Media Corporation, 8 March 2012. [Online]. Available: https://tinyurl.com/282cbwhs.

[4] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications. Vol. 162, pp. 206–214, 2024. 


Digital Transformation https://www.esrd.com/digital-transformation/ https://www.esrd.com/digital-transformation/#respond Fri, 17 May 2024 01:31:22 +0000 https://www.esrd.com/?p=31765

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Digital transformation is a multifaceted concept with plenty of room for interpretation. Its common theme emphasizes the proactive adoption of digital technologies to reshape business practices with the goal of gaining a competitive edge. The scope, timeline, and resource allocation of digital transformation projects depend on the specific goals and objectives. Here, I address digital transformation in the engineering sciences, focusing on numerical simulation.

Digital Technologies in the Engineering Sciences

Digital technologies have been integrated into the engineering sciences since the 1950s.  The adoption process has not been uniform across all disciplines. Some fields (like aerospace) adopted technologies early, while others were slower to change. The development and adoption of these technologies are ongoing. Engineering today is increasingly digital, and innovations are constantly changing the way engineers approach their work. Here are some important milestones:

Early Adoption (1950s-1970s)

  • Mainframe computers were used for engineering calculations that would have been impossible or extremely time-consuming to perform by hand.
  • Numerical control (NC) machines used punched tape or cards to control tool movements, streamlining machining processes.
  • Early Computer-Aided Design (CAD) systems revolutionized drafting in the 1960s. They allowed engineers to create and manipulate drawings on a computer, making design iterations much faster than previously possible.

Period of Rapid Growth (1980s-1990s)

  • Affordable Personal Computers (PCs) made computing power accessible to individual engineers and small firms.
  • Development of CAD software brought 3D modeling from specialized applications into mainstream design.
  • Finite Element Modeling software became commercially available, allowing engineers to perform structural and strength calculations.
  • The mathematical foundations of the finite element method (FEM) were established, and finite element analysis (FEA) became a branch of Applied Mathematics.

Post-Millennial Development  (2000s-Present)

  • Cloud-based solutions offer scalable computing power and collaboration tools, making complex calculations accessible without massive hardware investment.
  • Building Information Modeling (BIM) revolutionized the architecture, engineering, and construction (AEC) industries.
  • Internet of Things (IoT): Networked sensors and devices provide engineers with real-time data to monitor structures, predict maintenance needs, and optimize operations.
  • Additive Manufacturing (3D Printing) allows for the rapid creation of complex prototypes and even functional end-use parts.

Given that digital technologies have been successfully integrated into engineering practice, it may appear that not much else needs to be done. However, important challenges remain, and there are many opportunities for improvement. This is discussed next.

Outlook: Opportunities and Challenges

Bearing in mind that the primary goal of digital transformation is to enhance competitiveness, in the field of numerical simulation, this translates to improving the predictive performance of mathematical models. Ideally, we aim to reach a reliability level in model predictions comparable to that of physical experimentation. From the technological point of view, this goal is achievable: We have the theoretical understanding of how to maximize the predictive performance of mathematical models through the application of verification, validation, and uncertainty quantification procedures. Furthermore, advancements in explainable artificial intelligence (XAI) technology can be utilized to optimize the management of numerical simulation projects so as to maximize their reliability and effectiveness.  

The primary challenge in the field of engineering sciences is that further progress in digital transformation will require fundamental changes in how numerical simulation is currently understood by the engineering community and how it is practiced in industrial settings. It is essential to keep in mind the differences between finite element modeling and numerical simulation. I explained the reasons for this in an earlier blog post [1]. The art of finite element modeling will have to be replaced by the science of finite element analysis, and the verification, validation, and uncertainty quantification (VVUQ) procedures will have to be applied [2].

Paradoxically, the successful early integration of finite element modeling practices and software tools into engineering workflows now impedes attempts to utilize technological advances that occurred after the 1970s. The software architecture of legacy finite element codes was substantially set by 1970, based on the understanding of the finite element method that existed at that time. Limitations of the software architecture prevented subsequent advances, such as a posteriori error estimation in terms of the quantities of interest and control of model form errors, both of which are essential for meeting the reliability requirements in numerical simulation. Abandoning finite element modeling practices and embracing the methodology of numerical simulation technology is a major challenge for the engineering community.

The “I Believe” Button

An ANSYS blog [3] tells the story of a presentation made to an A&D executive. The presentation was to make a case for transforming his department using digital engineering. At the end of the presentation, the executive pointed to a coaster on his desk. “See this? That’s the ‘I believe’ button. I can’t hit it. I just can’t hit it. Help me hit it.” Clearly, the executive was asking for convincing evidence that the computed information was sufficiently reliable to support decision-making in his department. Put another way, he did not have the courage to sign the blueprint on the basis of data generated by digital engineering. What it takes to gather such courage was addressed in one of my earlier blogs [4]. Reliability considerations significantly influence the implementation of simulation process data management (SPDM).

Change Is Necessary

The frequently cited remark by W. Edwards Deming: “Change is not obligatory, but neither is survival,” reminds us of the criticality of embracing change.


References

[1] Szabó B. Why Finite Element Modeling is Not Numerical Simulation? ESRD Blog. November 2, 2023.
https://www.esrd.com/why-finite-element-modeling-is-not-numerical-simulation/
[2] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications. Vol. 162, pp. 206–214, 2024. The publisher is providing free access to this article until May 22, 2024. Anyone may download it without registration or fees by clicking on this link:
https://authors.elsevier.com/c/1isOB3CDPQAe0b
[3] Bleymaier, S. Hit the “I Believe” Button for Digital Transformation. ANSYS Blog. June 14, 2023. https://www.ansys.com/blog/believe-in-digital-transformation
[4] Szabó B. Where do you get the courage to sign the blueprint? ESRD Blog. October 6, 2023.
https://www.esrd.com/where-do-you-get-the-courage-to-sign-the-blueprint/


Digital Twins https://www.esrd.com/digital-twins/ https://www.esrd.com/digital-twins/#respond Thu, 02 May 2024 15:33:33 +0000 https://www.esrd.com/?p=31726

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The idea of a digital twin originated at NASA in the 1960s as a “living model” of the Apollo program. When Apollo 13 experienced an oxygen tank explosion, NASA utilized multiple simulators and extended a physical model of the spacecraft to include digital simulations, creating a digital twin. This twin was used to analyze the events leading up to the accident and investigate ideas for a solution. The term “digital twin” was coined by NASA engineer John Vickers much later. While the term is commonly associated with modeling physical objects, it is also employed to represent organizational processes. Here, we consider digital twins of physical entities only.

Digital Twins: An Overview

An overview of the current understanding of the idea of digital twins at NASA is available in a keynote presentation delivered in 2021 [1]. This presentation contains the following quote from reference [2]:

“The Digital Twin (DT) is a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. At its optimum, any information that could be obtained from inspecting a physical manufactured product can be obtained from its Digital Twin.”

I think that this is closer to being an aspirational statement than a functional definition of digital twins.  On the positive side, this statement articulates that the reliability of the results of the simulation should be comparable to that of a physical experiment. Note that this is possible only when mathematical models are used within their domains of calibration [3]. On the negative side, the description of a product “from the micro atomic level to the macro geometrical level” is neither necessary nor feasible. The goal of a simulation project is not to describe a physical system from A to Z but rather to predict the quantities of interest, such as expected fatigue life, margins of safety, limit load, deformation, natural frequency, and the like. In view of this, I propose the following definition:

“A Digital Twin (DT) is a set of mathematical models formulated to predict quantities of interest that characterize the functioning of a potential or actual manufactured product. When the mathematical models are used within their domains of calibration, the reliability of the predictions is comparable to that of a physical experiment.”

The set of mathematical models may comprise a single model of a component or several interacting component models. The motivation for creating digital twins typically comes from the requirements of product lifecycle management: High-value assets are monitored throughout their lifecycles, and the models that constitute a digital twin are updated with new data as they become available. This fits into the framework of model development projects discussed in one of my blogs, “Model Development in the Engineering Sciences,” and in greater detail in reference [3]. An essential attribute of any mathematical model is its domain of calibration.

Example 1: Component Twin

The Single Fastener Analysis Tool (SFAT) is a smart application engineered for comprehensive analyses of single and double shear joints of metal or composite plates. It also serves as an example of a component twin and highlights the technical challenges involved in the development of digital twins.

Figure 1. Single Fastener Analysis Tool (SFAT). Examples of use cases.

SFAT offers the flexibility to model laminates either ply by ply or as homogenized entities. It can accommodate various types of fastener heads, such as protruding and countersunk, including those with hollow shafts. It supports different fits, such as neat, interference, and clearance.

SFAT also provides additional input options to account for factors like shimmed and unshimmed gaps, bushings, and washers. The application allows for the specification of shear load and fastener pre-load as loading conditions. It provides estimates of the errors of approximation in terms of the quantities of interest.

Example 2: Asset Twin

A good example of asset twins is the structural health monitoring of large concrete dams. Following the collapse of the Malpasset dam in Provence, France, in 1959, the World Bank mandated that all dam projects seeking financial backing must undergo modeling and testing at the Experimental Institute for Models and Structures (ISMES) in Bergamo, Italy. Subsequently, ISMES was commissioned to develop a system that would monitor the structural health of large dams. The dams would be instrumented, and a numerical simulation framework, now called a digital twin, would be used to evaluate anomalies indicated by the instruments.

It was understood that numerical approximation errors would have to be controlled to small tolerances to ensure that they were negligibly small in comparison with the errors in measurements. To perform the calculations, a finite element program based on the p-version was written at ISMES in the second half of the 1970s under the direction of Dr. Alberto Peano, my former D.Sc. student. That program is still in use today under the name FIESTA [4].

Simulation Governance: Essential for Digital Twin Creation

Creating digital twins encompasses all aspects of model development, necessitating separate treatment of the model form and approximation errors. In other words, the verification, validation, and uncertainty quantification (VVUQ) procedures have to be applied. The model must be updated and recalibrated when new ideas are proposed or new data become available. The only difference is that in the case of digital twins, the updates involve individual object-specific data collected over the life span of the physical object.

Model development projects are classified as progressive, stagnant, and improper. A model development project is progressive if the domain of calibration is increasing, stagnant if it is not increasing, and improper if the problem-solving machinery is not consistent with the formulation of the mathematical model or lacks the ability to support solution verification [3]. The goal of simulation governance is to ensure that digital twin projects are progressive. Unfortunately, owing to a lack of simulation governance, the large majority of model development projects are improper, and hence, most digital twins fail to meet the required standards of reliability.


References

[1]  Allen, D. B. Digital Twins and Living Models at NASA. Keynote presentation at the ASME Digital Twin Summit. November 3, 2021.

[2] Grieves, M. and Vickers, J. Digital Twin: Mitigating Unpredictable, Undesirable Emergent Behavior in Complex Systems. In: Transdisciplinary Perspectives on Complex Systems. F-J. Kahlen, S. Flumerfelt and A. Alves (eds) Springer International Publishing, Switzerland, pp. 85-113, 2017.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications, 162, pp. 206–214, 2024. https://authors.elsevier.com/c/1isOB3CDPQAe0b

[4] Angeloni, P., Boccellato, R., Bonacina, E., Pasini, A., Peano, A.  Accuracy Assessment by Finite Element P-Version Software. In: Adey, R.A. (ed) Engineering Software IV. Springer, Berlin, Heidelberg, 1985. https://doi.org/10.1007/978-3-662-21877-8_24


Related Blogs:

]]>
https://www.esrd.com/digital-twins/feed/ 0
Not All Models Are Wrong https://www.esrd.com/not-all-models-are-wrong/ https://www.esrd.com/not-all-models-are-wrong/#respond Thu, 11 Apr 2024 15:55:43 +0000 https://www.esrd.com/?p=31628 Models, developed under the discipline of VVUQ, can be relied on to make correct predictions within their domains of calibration. However, model development projects lacking the discipline of VVUQ tend to produce wrong models.]]>

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


I never understood the statement “All models are wrong, but some are useful,” attributed to the statistician George E. P. Box and quoted in many papers and presentations. If that were the case, why should we try to build models, and how would we know when and for what purposes they may be useful? We construct models with the objective of making reliable predictions, the degree of reliability being comparable to that of a physical experiment.

Consider, for example, the problem in Fig. 1 showing a sub-assembly of an aircraft structure. The quantity of interest is the margin of safety: Given multiple load conditions and design criteria, estimate the minimum value of the margin of safety and show that the numerical approximation error is less than 5%.   We must have sufficient reason to trust the results of simulation tasks like this.

Figure 1: Sub-assembly of an aircraft structure.
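As a simple illustration of such a task, the minimum margin of safety over several load conditions can be computed as follows. All names and values below are assumed, for illustration only; they are not taken from the structure in Figure 1:

```python
# Illustrative sketch: minimum margin of safety over multiple load
# conditions. All names and values are assumed, for illustration only.

allowable_stress = 400.0  # MPa, assumed design allowable
computed_stress = {"cond_A": 310.0, "cond_B": 355.0, "cond_C": 280.0}  # MPa

# Margin of safety for each load condition: MS = allowable/actual - 1
margins = {c: allowable_stress / s - 1.0 for c, s in computed_stress.items()}
critical = min(margins, key=margins.get)
print(f"minimum margin of safety: {margins[critical]:.3f} ({critical})")
# prints: minimum margin of safety: 0.127 (cond_B)
```

In a real simulation task, each computed stress would come with an estimate of its numerical approximation error, which must be shown to be below the allowable tolerance.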

Trying to understand what George Box meant, I read the paper in which he supposedly made the statement that all models are wrong [1], but I did not find it very enlightening. Nor did I find that statement in its often-quoted form. What I found is this non sequitur: “Since all models are wrong the scientist must be alert to what is importantly wrong.” This makes the matter much more complicated: Now we have to classify wrongness into two categories, important and unimportant. By what criteria? That is not explained.

Box did not have the same understanding as we do of what a mathematical model is. This is evidenced by the sentence: “In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless.” Our goal is not to model the “real world”, a vague concept, but to model specific aspects of physical reality, the quantities of interest having been clearly defined as, for example, in the case of the problem shown in Fig. 1. Our current understanding of mathematical models is based on the concept of model-dependent realism which was developed well after Box’s 1978 paper was written.

Model-Dependent Realism

The term model-dependent realism was introduced by Stephen Hawking and Leonard Mlodinow in their 2010 book, The Grand Design [2] but the distinction between physical reality and ideas of physical reality is older. For example, Wolfgang Pauli wrote in 1948: “The layman always means, when he says `reality’ that he is speaking of something self-evidently known; whereas to me it seems the most important and exceedingly difficult task of our time is to work on the construction of a new idea of reality.” [From a letter to Markus Fierz.]

If two different models describe a set of physical phenomena equally well then both models are equally valid: It is meaningless to speak about “true reality”. In Hawking’s own words [3]: “I take the positivist viewpoint that a physical theory is just a mathematical model and that it is meaningless to ask whether it corresponds to reality. All that one can ask is that its predictions should be in agreement with observation.” In other words, mathematical models are, essentially, phenomenological models.

What is a Mathematical Model?

A mathematical model is an operator that transforms one set of data D, the input, into another set, the quantities of interest F. In shorthand notation we have:

\boldsymbol D\xrightarrow[(I,\boldsymbol p)]{}\boldsymbol F,\quad (\boldsymbol D, \boldsymbol p) \in \mathbb{C} \quad (1)

where the right arrow represents the mathematical model. The letters I and p under the right arrow indicate that the transformation involves an idealization (I) as well as parameters (physical properties) p that are determined through calibration experiments. Restrictions on D and p define the domain of calibration ℂ. The domain of calibration is an essential feature of any mathematical model [4], [5].

Most mathematical models used in engineering have the property that the quantities of interest F continuously depend on D and p. This means that small changes in D and/or p will result in correspondingly small changes in F which is a prerequisite to making reliable predictions.
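The continuous dependence of F on D and p can be illustrated with a minimal sketch. The cantilever model and all values below are assumed, not taken from the source:

```python
# A minimal sketch (assumed, not from the source) of a mathematical
# model as an operator mapping input data D and calibrated parameters p
# to a quantity of interest F: here, the tip deflection of a cantilever.

def tip_deflection(P, L, E, I):
    """Quantity of interest F = P*L^3/(3*E*I), linear elasticity."""
    return P * L**3 / (3.0 * E * I)

# Continuous dependence: a small change in the input data produces a
# correspondingly small change in the quantity of interest.
F0 = tip_deflection(P=1000.0, L=2.0, E=200e9, I=8e-6)
F1 = tip_deflection(P=1010.0, L=2.0, E=200e9, I=8e-6)  # load increased by 1%
relative_change = abs(F1 - F0) / abs(F0)
print(f"relative change in F: {relative_change:.4f}")  # ~0.01
```

Because this quantity of interest depends linearly on the load, a 1% change in P produces exactly a 1% change in F; for nonlinear models the change is still small, but not proportional.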

To ensure that the predictions based on a mathematical model are reliable, it is necessary to control two types of error: The model form error and the numerical approximation errors.

Model Form Errors

The formulation of mathematical models invariably involves making restrictive assumptions such as neglecting certain geometric features, idealizing the physical properties of the material, idealizing boundary conditions, neglecting the effects of residual stresses, etc. Therefore, any mathematical model should be understood to be a special case of a more comprehensive model. This is the hierarchic view of models.

To test whether a restrictive assumption is acceptable for a particular application, it is necessary to estimate the influence of that assumption on the quantities of interest and, if necessary, revise the model. An exploration of the influence of modeling assumptions on the quantities of interest is called virtual experimentation [6]. Simulation software tools must have the capability to support virtual experimentation.
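A virtual experiment of this kind can be sketched with two hierarchic beam models, the restrictive Euler-Bernoulli model and the more comprehensive Timoshenko model. All values below are assumed, for illustration only:

```python
# Illustrative virtual experiment (all values assumed): estimate the
# influence of neglecting shear deformation by comparing two hierarchic
# models of the same cantilever under a tip load.

def deflection_euler_bernoulli(P, L, E, I):
    # Restrictive model: shear deformation neglected.
    return P * L**3 / (3.0 * E * I)

def deflection_timoshenko(P, L, E, I, G, A, k=5.0 / 6.0):
    # More comprehensive model: bending plus shear deformation.
    return P * L**3 / (3.0 * E * I) + P * L / (k * G * A)

P, L, E, I = 1000.0, 2.0, 200e9, 8e-6
G, A = 77e9, 0.01
d1 = deflection_euler_bernoulli(P, L, E, I)
d2 = deflection_timoshenko(P, L, E, I, G, A)
influence = abs(d2 - d1) / abs(d2)  # relative effect of the assumption
print(f"influence of neglecting shear: {influence:.2%}")
```

If the influence on the quantity of interest is negligible relative to the accuracy requirements, the restrictive model is acceptable for that application; otherwise the model must be revised.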

Approximation Errors

Approximation errors occur when the quantities of interest are estimated through a numerical process. This means that we obtain a numerical approximation to F, denoted by F_num. It is necessary to show that the relative error in F_num does not exceed an allowable value τ_all:

| \boldsymbol F - \boldsymbol F_{num} |/|\boldsymbol F| \le \tau_{all} \quad (2)

This is the requirement of solution verification. To meet this requirement, it is necessary to obtain a converging sequence of numerical solutions with respect to increasing degrees of freedom [6].
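One common way to meet this requirement is to extrapolate the limit from the converging sequence of numerical solutions. The following is a minimal sketch, not from the source, assuming the errors behave like C·N^(-q):

```python
# A minimal sketch of solution verification (assumed, not from the
# source): quantities of interest F_1, F_2, F_3 computed on a sequence
# of discretizations with N_1 < N_2 < N_3 degrees of freedom are
# extrapolated to N -> infinity, assuming F_i ~ F - C * N_i**(-q).

import math

def extrapolate(F1, F2, F3, N1, N2, N3):
    """Richardson-type extrapolation; uniform refinement N2/N1 == N3/N2 assumed."""
    r = N2 / N1
    q = math.log((F2 - F1) / (F3 - F2)) / math.log(r)  # observed convergence rate
    return F3 + (F3 - F2) / (r**q - 1.0)

# Synthetic convergent sequence with known limit F = 2.0:
F_exact = 2.0
Ns = [100, 200, 400]
Fs = [F_exact - 0.5 * N**-1.0 for N in Ns]
F_est = extrapolate(*Fs, *Ns)
rel_err = abs(F_est - Fs[-1]) / abs(F_est)  # estimated error of finest run
print(f"extrapolated F = {F_est:.6f}, estimated relative error = {rel_err:.2e}")
```

The estimated relative error of the finest solution is then compared against τ_all; if it is too large, the discretization must be refined further.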

Model Development Projects

The formulation of mathematical models is a creative, open-ended activity, guided by insight, experience, and personal preferences. Objective criteria are used to validate and rank mathematical models [4], [5]. 

Model development projects have been classified as progressive, stagnant, and improper [5]. A model development project is progressive if the domain of calibration is increasing, stagnant if the domain of calibration is not increasing, and improper if one or more algorithms are inconsistent with the formulation or the problem-solving method does not have the capability to estimate and control the numerical approximation errors in the quantities of interest. The most important objective of simulation governance is to provide favorable conditions for the evolutionary development of mathematical models and to ensure that the procedures of verification, validation and uncertainty quantification (VVUQ) are properly applied.

Not All Models Are Wrong, but Many of Them Are…

Box’s statement that all models are wrong is not correct. Models, developed under the discipline of VVUQ, can be relied on to make correct predictions within their domains of calibration. However, model development projects lacking the discipline of VVUQ tend to produce wrong models. And there are models, not tethered to scientific principles and methods, that are not even wrong.


References

[1] Box, G. E. P. Science and Statistics. Journal of the American Statistical Association, Vol. 71, No. 356, pp. 791-799, 1976.

[2] Hawking, S. and Mlodinow, L. The Grand Design. Random House 2010.

[3] Hawking, S. and Penrose, R. The Nature of Space and Time. Princeton University Press, 2010.

[4] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp. 75-86, 2021 [open source].

[5] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications, 162, pp. 206–214, 2024. https://authors.elsevier.com/c/1isOB3CDPQAe0b

[6] B. Szabó and I. Babuška,  Finite Element Analysis.  Method, Verification and Validation. 2nd edition, John Wiley & Sons, Inc., 2021.  


Related Blogs:

]]>
https://www.esrd.com/not-all-models-are-wrong/feed/ 0
Certification by Analysis (CbA) – Are We There Yet? https://www.esrd.com/certification-by-analysis-are-we-there-yet/ https://www.esrd.com/certification-by-analysis-are-we-there-yet/#respond Thu, 07 Mar 2024 21:36:09 +0000 https://www.esrd.com/?p=31410 Certification by Analysis (CbA) uses validated computer simulations to demonstrate compliance with regulations, replacing some traditional physical tests. CbA allows for exploring a wide range of design scenarios, accelerates innovation, lowers expenses, and upholds rigorous safety standards. The key to CbA is reliability. This means that the data generated by numerical simulation should be as trustworthy as if they were generated by carefully conducted physical experiments. To achieve that goal, it is necessary to control two fundamentally different types of error; the model form error and the numerical approximation error, and use the models within their domains of calibration.]]>

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


While reading David McCullough’s book “The Wright Brothers”, a fascinating story about the development of the first flying machine, this question occurred to me: Would the Wright brothers have succeeded if they had used substantially fewer physical experiments and relied on finite element modeling instead? I believe the answer is no. Consider what happened in the JSF program.

Lessons from the JSF Program

In 1992, eighty-nine years after the Wright brothers’ Flying Machine first flew at Kitty Hawk, the US government decided to fund the design and manufacture of a fifth-generation fighter aircraft that combines air-to-air, strike, and ground attack capabilities. Persuaded that numerical simulation technology was sufficiently mature, the decision-makers permitted the manufacturer to concurrently build and test the aircraft, known as the Joint Strike Fighter (JSF). The JSF, also known as the F-35, was first flown in 2006. By 2014, the program was 163 billion dollars over budget and seven years behind schedule.

Two senior officers illuminated the situation in these words:

Vice Admiral David Venlet, the Program Executive Officer, quoted in AOL Defense in 2011 [1]: “JSF’s build and test was a miscalculation…. Fatigue testing and analysis are turning up so many potential cracks and hot spots in the Joint Strike Fighter’s airframe that the production rate of the F-35 should be slowed further over the next few years… The cost burden sucks the wind out of your lungs“.

Gen. Norton Schwartz, Air Force Chief of Staff, quoted in Defense News, 2012 [2]: “There was a view that we had advanced to a stage of aircraft design where we could design an airplane that would be near perfect the first time it flew. I think we actually believed that. And I think we’ve demonstrated in a compelling way that that’s foolishness.”

These officers believed that the software tools were so advanced that testing would merely confirm the validity of design decisions based on them. This turned out to be wrong. However, their mistaken belief was not entirely unreasonable. By the start of the JSF program, commercial finite element analysis (FEA) software products were more than 30 years old, so it was plausible to assume that their reliability had greatly improved. Hardware systems had certainly improved, and visualization tools capable of creating impressive color images tacitly suggested that the underlying methodology could guarantee the quality and reliability of the output quantities. Indeed, there were very significant advancements in the science of finite element analysis, which became a bona fide branch of applied mathematics in that period. The problem was that commercial FEA software tools did not keep pace with those important scientific developments.

There are at least two reasons for this: First, the software architecture of the commercial finite element codes was based on the thinking of the 1960s and 70s, when the theoretical foundations of FEA were not yet established. As a result, several limitations were built in. Those limitations kept code developers from incorporating later advancements, such as a posteriori error estimation, advanced discretization strategies, and stability criteria. Second, decision-makers who rely on computed information failed to specify the technical requirements that simulation software must meet, such as reporting not just the quantities of interest but also their estimated relative errors. To fulfill this key requirement, legacy FE software would have had to be overhauled to such an extent that only their nameplates would have remained the same.

Technical Requirements for CbA

Certification by Analysis (CbA) uses validated computer simulations to demonstrate compliance with regulations, replacing some traditional physical tests. CbA allows for exploring a wide range of design scenarios, accelerates innovation, lowers expenses, and upholds rigorous safety standards.  The key to CbA is reliability.  This means that the data generated by numerical simulation should be as trustworthy as if they were generated by carefully conducted physical experiments.   To achieve that goal, it is necessary to control two fundamentally different types of error; the model form error and the numerical approximation error, and use the models within their domains of calibration.

Model form errors occur because we invariably make simplifying assumptions when we formulate mathematical models.  For example, formulations based on the theory of linear elasticity include the assumptions that the stress-strain relationship is a linear function, independent of the size of the strain and that the deformation is so small that the difference between the equilibrium equations written on the undeformed and deformed configurations can be neglected.  As long as these assumptions are valid, the linear theory of elasticity provides reliable estimates of the response of elastic bodies to applied loads.  The linear solution also provides information on the extent to which the assumptions were violated in a particular model.  For example, if it is found that the strains exceed the proportional limit, it is advisable to check the effects of plastic deformation.  This is done iteratively until a convergence criterion is satisfied.  Similarly, the effects of large deformation can be estimated.  Model form errors are controlled by viewing any mathematical model as one in a sequence of hierarchic models of increasing complexity and selecting the model that is consistent with the conditions of the simulation.
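The iterative correction described above can be sketched as follows. The secant-modulus update and all material values are assumed, purely for illustration:

```python
# A hedged sketch of the iterative correction described above (material
# law and all values are assumed, for illustration only): start from the
# linear-elastic solution and update a secant modulus until the strain
# satisfies a convergence criterion.

def secant_modulus(strain, E=200e9, eps_p=0.002, n=0.1):
    # Illustrative softening law beyond the proportional limit eps_p.
    if strain <= eps_p:
        return E
    return E * (eps_p / strain) ** n

stress = 500e6        # applied stress, Pa (assumed)
eps = stress / 200e9  # linear-elastic first estimate; exceeds eps_p here
for _ in range(50):
    eps_new = stress / secant_modulus(eps)
    if abs(eps_new - eps) / eps_new < 1e-8:  # convergence criterion
        eps = eps_new
        break
    eps = eps_new
print(f"converged strain: {eps:.6e}")
```

The linear solution signals that the proportional limit is exceeded, and the iteration settles on a strain consistent with the softened modulus; a production code would of course use a calibrated material law rather than this illustrative one.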

Numerical errors are the errors associated with approximating the exact solution of mathematical problems, such as the equations of elasticity, Navier-Stokes, and Maxwell, and the method used to extract the quantities of interest from the approximate solution.   The goal of solution verification is to show that the numerical errors in the quantities of interest are within acceptable bounds.

The domain of calibration defines the intervals of physical parameters and input data on which the model was calibrated.  This is a relatively new concept, introduced in 2021 [3], that is also addressed in a forthcoming paper [4].  A common mistake in simulation is to use models outside of their domains of calibration.
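A guard against this mistake can be sketched as follows; the parameter names and calibrated intervals are assumed, for illustration only:

```python
# A minimal sketch (parameter names and intervals are assumed) of
# guarding against the use of a model outside its domain of calibration:
# the calibrated intervals are stored with the model, and every
# evaluation is checked against them.

CALIBRATION_DOMAIN = {
    "temperature_C": (-50.0, 120.0),  # assumed calibrated interval
    "load_kN": (0.0, 250.0),
}

def outside_calibration(inputs, domain=CALIBRATION_DOMAIN):
    """Return the parameters that fall outside their calibrated intervals."""
    return [name for name, value in inputs.items()
            if not (domain[name][0] <= value <= domain[name][1])]

print(outside_calibration({"temperature_C": 20.0, "load_kN": 100.0}))   # []
print(outside_calibration({"temperature_C": 150.0, "load_kN": 100.0}))  # ['temperature_C']
```

A prediction requested outside the calibrated intervals should be flagged as an extrapolation whose reliability cannot be guaranteed.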

Organizational Aspects

To achieve the level of reliability in numerical simulation that CbA requires, management will have to implement simulation governance [5] and apply the protocols of verification, validation, and uncertainty quantification (VVUQ).

Are We There Yet?

No, we are not there yet. Although we have made significant progress in controlling errors in model form and numerical approximation, one very large obstacle remains: Management has yet to recognize that they are responsible for simulation governance, which is a critical prerequisite for CbA.


References

[1] Whittle, R. JSF’s Build and Test was ‘Miscalculation,’ Adm. Venlet Says; Production Must Slow. [Online] https://breakingdefense.com/2011/12/jsf-build-and-test-was-miscalculation-production-must-slow-v/ [Accessed 21 February 2024].

[2] Weisgerber, M. DoD Anticipates Better Price on Next F-35 Batch. Gannett Government Media Corporation, 8 March 2012. [Online]. https://tinyurl.com/282cbwhs [Accessed 22 February 2024].

[3] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp.75-86, 2021 [open source].

[4] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  To appear in Computers & Mathematics with Applications in 2024.  The manuscript is available on request.

[5] Szabó, B. and Actis, R. Planning for Simulation Governance and Management:  Ensuring Simulation is an Asset, not a Liability. Benchmark, July 2021.


Related Blogs:

]]>
https://www.esrd.com/certification-by-analysis-are-we-there-yet/feed/ 0