The ESRD Blog Archives - ESRD
https://www.esrd.com/category/the-esrd-blog/

Chaos in the Brickyard Revisited
https://www.esrd.com/chaos-in-the-brickyard-revisited/ | Wed, 15 Jan 2025

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In a letter published in Science in 1963, Bernard K. Forscher used the metaphor of building edifices to represent the construction of scientific models, also called laws. These models explain observed phenomena and make predictions beyond the observations made [1]. Quoting from Forscher’s letter: “The making of bricks was a difficult and expensive undertaking, and the wise builder avoided waste by making only bricks of the shape and size necessary for the enterprise at hand.”

Progress was limited by the availability of bricks. To speed things up, artisans, referred to as junior scientists, were hired to work on brickmaking. Initially, this arrangement worked well. Unfortunately, however, the brickmakers became obsessed with making bricks. They argued that if enough bricks were available, the builders would be able to select what was necessary. Large sums of money were allocated, and the number of brickmakers mushroomed. They came to believe that producing a sufficient number of bricks was equivalent to building an edifice. The land became flooded with bricks, and more and more storage places, called journals, had to be created. Forscher concluded with this cheerless note: “And saddest of all, sometimes no effort was made even to maintain a distinction between a pile of bricks and a true edifice.”

A chaotic brickyard. Image produced by Microsoft Copilot.

This was the situation sixty years ago. Over time, the “publish or perish” ethos intensified within the academic culture, prioritizing quantity over quality. This led to a surge in the production of academic papers, metaphorically akin to “brickmaking.” Additionally, a consensus emerged among researchers regarding what constitutes acceptable ideas and methods worthy of funding and publication. This consensus is upheld by the peer-review systems of granting agencies and journals, which tend to discourage challenges to mainstream views, thereby reinforcing established norms and practices and discouraging innovation. Successful grantsmanship requires that the topics proposed for investigation be aligned with the mainstream.

Stagnation in the Fundamental Sciences

Sabine Hossenfelder, a theoretical physicist, argues that physics, particularly in its foundational aspects, has been stagnant for the past 50 years, even though the number of physicists and the number of papers published in the field have been increasing steadily. In her view, the foundations of physics have not seen significant progress since the completion of the standard model of particle physics in the mid-1970s. She criticizes the field for relying too much on mathematics rather than empirical evidence, which has led to physicists being more heavily focused on the aesthetics of their theories than on nature [2]. She also shared a compelling personal account of her experience with the “publish-or-perish” world in a podcast [3].

I think that one possible explanation for this stagnation is that human intelligence, much like animal intelligence, has its limits. For example, while we can teach dogs to recognize several words, we cannot teach them to appreciate a Shakespearean sonnet. Nobel laureate Richard Feynman famously said: “I think I can safely say that nobody understands quantum mechanics.” We may have to be content with model-dependent realism, as suggested by Hawking and Mlodinow [4].

Stagnation in the Applied Sciences

All of the counter-selective elements identified by Hossenfelder are also present in engineering and applied sciences. However, in these disciplines, the causes of stagnation are entirely man-made. I will focus on numerical simulation, which spans all engineering disciplines and happens to be my own field. First, a brief historical retrospection is necessary.

In numerical simulation, the primary method used for approximating the solutions of partial differential equations is the finite element method (FEM). Interest in this method started with the publication of a paper in 1956, about a year before the space race began. In the following years, research and development activities concerned with FEM received generous amounts of funding. Many ideas, rooted in engineering intuition and informed by prior experience with matrix methods of structural analysis, were advanced and tested through numerical experimentation. Some ideas worked, others did not. Because the theoretical foundations of FEM had not yet been established, it was impossible to tell whether ideas that worked in particular cases were sound or not.

Current engineering practice is dominated by finite element modeling, an intuitive approach rooted in pre-1970s thinking. In contrast, numerical simulation is based on the science of finite element analysis (FEA), which matured later. Although these are conceptually different approaches, the two terms are frequently used interchangeably in engineering parlance. Whereas finite element modeling is an intuition-based practice, numerical simulation demands a disciplined science-based approach to the formulation and validation of mathematical models. The goal is to control both the model-form and approximation errors. An essential constituent of any mathematical model is the domain of calibration [5]. This is generally overlooked in current engineering practice.

During the 1960s and 1970s, when the FEM was still quite immature, several design decisions were made concerning the software architecture for FEM implementations. Although these decisions were reasonable at the time, they introduced limitations that significantly hindered the future development of FEM software, leading to prolonged stagnation.

The theoretical foundations of FEM were developed by mathematicians after 1970. Many important results emerged in the 1980s, leading to FEA becoming a branch of applied mathematics. However, the engineering community largely failed to grasp the importance and relevance of these advances due to a lack of common terminology and conceptual framework. A significant contributing factor was the difficulty and expense involved in upgrading the software infrastructure from the 1960s and 70s. As a result, these developments have not significantly influenced mainstream FEA engineering practices to the present day.

Example: Making Piles of Faulty Bricks

One of the limitations imposed by the software architecture designed for FEM in the 1960s was the restriction on the number of nodes and nodal variables. It was found that some elements were ‘too stiff.’ To address this, the idea of using reduced integration was proposed, meaning that fewer integration points were used than necessary for the integration error to be negligibly small. This approach tried to correct the stiffness problem by committing variational crimes.

Many papers were published showing that reduced integration worked well. However, it was later discovered that while reduced integration can be effective in some situations, it can cause “hourglassing,” that is, zero energy modes. Subsequently, many papers were published on how to control hourglassing. All these papers added to the brickyard’s clutter, and worst of all, hourglassing remains in legacy finite element codes even today.
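
Why reduced integration produces zero-energy modes can be demonstrated in a few lines. The sketch below (illustrative only; the material values are hypothetical) assembles the stiffness matrix of a single 4-node plane-stress quadrilateral with a full 2x2 Gauss rule and with a one-point rule, and counts the zero eigenvalues: the one-point rule adds two hourglass modes to the three rigid-body modes.

```python
import numpy as np

E, nu = 200e3, 0.3                                # hypothetical material (MPa)
D = E / (1 - nu**2) * np.array([[1, nu, 0],
                                [nu, 1, 0],
                                [0, 0, (1 - nu) / 2]])   # plane-stress material matrix

# Square element occupying [-1, 1] x [-1, 1]: physical and natural coordinates
# coincide, so the Jacobian is the identity and B follows directly from dN.
xi_n  = np.array([-1.0,  1.0, 1.0, -1.0])
eta_n = np.array([-1.0, -1.0, 1.0,  1.0])

def B_matrix(xi, eta):
    dN_dxi  = 0.25 * xi_n  * (1 + eta * eta_n)
    dN_deta = 0.25 * eta_n * (1 + xi * xi_n)
    B = np.zeros((3, 8))
    B[0, 0::2] = dN_dxi        # eps_xx
    B[1, 1::2] = dN_deta       # eps_yy
    B[2, 0::2] = dN_deta       # engineering shear strain gamma_xy
    B[2, 1::2] = dN_dxi
    return B

def stiffness(rule):
    K = np.zeros((8, 8))
    for xi, eta, w in rule:
        B = B_matrix(xi, eta)
        K += w * B.T @ D @ B   # det(J) = 1 for this element
    return K

g = 1.0 / np.sqrt(3.0)
full_rule    = [(s * g, t * g, 1.0) for s in (-1, 1) for t in (-1, 1)]
reduced_rule = [(0.0, 0.0, 4.0)]

for name, rule in (("full 2x2", full_rule), ("reduced 1-point", reduced_rule)):
    eig = np.linalg.eigvalsh(stiffness(rule))
    n_zero = int(np.sum(eig < 1e-9 * eig.max()))
    print(f"{name}: {n_zero} zero-energy modes")  # expect 3 (rigid body) vs. 5 (3 + 2 hourglass)
```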

Challenges and Opportunities

There is a broad consensus that numerical simulation must be integrated with explainable artificial intelligence (XAI). Indeed, XAI has the potential to elevate numerical simulation to a much higher level than would be possible otherwise. This integration can succeed only if the mathematical models are properly formulated, calibrated, and validated. It is essential to ensure that the numerical errors are estimated and controlled.

Legacy FEA codes are not equipped to meet these requirements; nevertheless, claims are being advanced suggesting that simulation will become fast, easy, and inexpensive, requiring little expertise because AI will take care of the rest. These claims should be treated with extreme caution, as they do not come from those who can tell the difference between an edifice and a pile of bricks.


References

[1] Forscher, B. K. Chaos in the Brickyard. Science, 18 October 1963, Vol. 142, p. 339.

[2] Hossenfelder, S. Lost in Math: How Beauty Leads Physics Astray. Basic Books, 2018.

[3] Hossenfelder, S. My dream died, and now I’m here. Podcast: https://www.youtube.com/watch?v=LKiBlGDfRU8&t=12s.

[4] Hawking, S. and Mlodinow, L. The Grand Design. Random House 2010.

[5] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications. Vol. 162, pp. 206–214, 2024. 


Isogeometric Analysis (IGA)
https://www.esrd.com/isogeometric-analysis-iga/ | Mon, 09 Dec 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Isogeometric Analysis (IGA) was introduced in a paper published in 2005 [1], where it was presented as an alternative to the standard, polynomial-based finite element analysis (FEA). By that time, finite element analysis had a solid scientific foundation and was firmly established as a branch of applied mathematics. Also, implementations based on the algorithms described in reference [2] were in professional use. Bearing this in mind is important to avoid confusion with earlier implementations of the finite element method found in legacy FEA software. Those implementations are based on the pre-scientific thinking of the 1960s and 70s. When evaluating IGA, it should be compared with the finite element method based on the science of finite element analysis, not the outdated implementations preserved in legacy FEA software.

Finite Element Fundamentals

We should understand the finite element method as a set of algorithms designed to approximate the exact solutions of partial differential equations cast in variational form. The approximating functions are mapped polynomials characterized by the finite element mesh Δ, the assigned polynomial degrees p, and the mapping functions Q [2]. The boldface symbols indicate arrays, signifying that each element may have a different polynomial degree and mapping function. These entities define the finite element space S(Δ,p,Q). The finite element solution uFE is the function in S(Δ,p,Q) that minimizes the error measured in a norm that depends on the formulation, usually the energy norm. Formally, we have:

||u_{EX} - u_{FE}||_{E} = \min_{u \in S(\Delta,\mathbf{p},\mathbf{Q})}||u_{EX}-u||_{E}\qquad (1)

where uEX is the exact solution and ‖·‖E is the energy norm [2]. The basis functions are continuous, but their derivatives normal to the element boundaries may be discontinuous. Of particular interest are hierarchic sequences of finite element spaces S1 ⊂ S2 ⊂ S3 ⊂ … because the corresponding finite element solutions are guaranteed to converge to the exact solution, allowing estimation of the limit values of the quantities of interest. This makes solution verification possible, which is an essential technical requirement in numerical simulation.
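
This convergence is the basis of the extrapolation procedures used for solution verification. The following minimal sketch (the degree-of-freedom counts and values of the quantity of interest are made up) estimates the limit value and the rate of convergence from three solutions in a hierarchic sequence, assuming algebraic convergence of the form F_i ≈ F∞ + C·N_i^(−β):

```python
import numpy as np

N = np.array([250.0, 980.0, 3900.0])      # degrees of freedom for S1 ⊂ S2 ⊂ S3
F = np.array([101.80, 100.45, 100.11])    # corresponding values of the quantity of interest

def extrapolate(N, F, lo=1e-3, hi=10.0, tol=1e-12):
    """Estimate F_inf and beta from F_i ~ F_inf + C*N_i**(-beta) (three solutions)."""
    target = (F[0] - F[1]) / (F[1] - F[2])
    g = lambda b: (N[0]**-b - N[1]**-b) / (N[1]**-b - N[2]**-b) - target
    a, c = lo, hi
    assert g(a) * g(c) < 0.0, "the convergence rate is not bracketed"
    while c - a > tol:                    # bisection for the rate beta
        m = 0.5 * (a + c)
        a, c = (m, c) if g(a) * g(m) > 0.0 else (a, m)
    beta = 0.5 * (a + c)
    C = (F[1] - F[2]) / (N[1]**-beta - N[2]**-beta)
    return F[2] - C * N[2]**-beta, beta

F_inf, beta = extrapolate(N, F)
print(f"estimated limit: {F_inf:.2f}, estimated rate: {beta:.2f}")
print("estimated relative errors:", np.abs(F - F_inf) / abs(F_inf))
```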

Isogeometric Analysis

IGA is one possible implementation of the p-version of the finite element method. Its distinguishing features are: (a) it retains the CAD geometry, (b) the basis functions are the same as those used for CAD representation, typically B-splines or NURBS, and (c) it provides for the enforcement of the inter-element continuity of the derivatives of the basis functions.

At first glance, retaining the CAD model appears to be very advantageous. However, CAD models typically consist of multiple NURBS patches containing small gaps or overlaps. These patches may have been trimmed or joined in ways that disrupt the continuity required for IGA. Therefore, these models need to be edited, or “repaired”, before IGA can be applied. The necessity for editing to produce watertight CAD models also applies to mesh generators, computer numerical control (CNC) machining, and 3D printing.

In addition, the CAD models must be transformed into a format suitable for IGA. This transformation may involve reconstructing a CAD solid using IGA-compatible patches. In the context of IGA, the term “patch” refers to a continuous segment of the domain that is described by a single set of basis functions. Enforcement of the continuity of the basis functions across patch boundaries is a requirement. 

IGA provides the option to enforce the continuity of one or more derivatives of the basis functions. In addition to the basis functions, the components of a patch include control points that define the geometric configuration and a knot vector that defines where the basis functions transition or change. The knot vector determines the degree of continuity of the basis functions at these transitions. – If this sounds complicated, that is because it is. Fortunately, a simpler way of handling the mapping problem associated with high-order finite element methods exists. This is discussed in the next section.

The term “k-refinement” is used to indicate that all derivatives up to and including the kth derivative of the basis functions are continuous. This is then put on par with the terms “h-refinement” and “p-refinement.” However, this is misleading for the following reason: In h- and p-refinements, the number of degrees of freedom (DOF) is increased either by decreasing the size of the largest element in the mesh (h) or by increasing the lowest polynomial degree of elements (p). In k-refinement, on the other hand, the DOF are decreased when higher continuity is enforced than the minimum required by the formulation. Therefore, “k-restriction” would be the correct term. Referring to equation (1), we see that imposing any restriction on the finite element space cannot possibly decrease the error of approximation in the norm of the formulation. Therefore, we can speak of h-convergence or p-convergence but not of k-convergence.
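
The degree-of-freedom count behind this argument is easy to make explicit in one dimension: a degree-p spline space on n elements with C^k continuity at the interior knots has dimension n(p − k) + k + 1. The numbers in the short sketch below are arbitrary.

```python
def spline_dofs(n_elements: int, p: int, k: int) -> int:
    """Dimension of a 1D spline space: n elements, degree p, C^k interior continuity."""
    assert 0 <= k < p
    return n_elements * (p - k) + k + 1

n, p = 20, 5
print("C^0 continuity (standard p-version basis):", spline_dofs(n, p, k=0))      # 101
print("C^(p-1) continuity (maximally smooth IGA):", spline_dofs(n, p, k=p - 1))  # 25
# Every C^(p-1) function also lies in the C^0 space, so by equation (1) the extra
# continuity cannot decrease the error of the best approximation.
```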

Mapping for High-Order Finite Element Methods

High-order finite element methods, when used in conjunction with properly defined meshes, have been shown to be effective in controlling the errors of approximation, provided that the primary sources of error are the size of the elements and the polynomial degrees assigned to the elements. Additional errors arise from using numerical integration, approximating the boundaries of the solution domain by mapped polynomials, and enforcing essential boundary conditions. It is necessary to ensure that these secondary errors are negligibly small in comparison to the primary errors. 

A method for approximating curved surfaces with polynomials was developed in the early 1990s and implemented in ESRD’s StressCheck®. The approximation is based on using optimal (or nearly optimal) collocation points. Although the approximated surface is only continuous, and its derivatives may be discontinuous, the errors of approximation caused by such mappings have been shown to be negligibly small. The following example illustrates that even when all derivatives of the underlying exact solution are continuous, mapping elements using 5th-order polynomials and the Chen-Babuška collocation points [3] and enforcing only the minimal continuity required by the formulation yields very satisfactory results.
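
Before turning to that example, the short sketch below illustrates the mapping idea on a quarter circle. Chebyshev-Lobatto points are used here as a convenient stand-in for the Chen-Babuška points of reference [3]; the point is only that the geometric error of such a degree-5 polynomial mapping is orders of magnitude smaller than typical discretization errors.

```python
import numpy as np
from numpy.polynomial import polynomial as P

p = 5
xi = -np.cos(np.pi * np.arange(p + 1) / p)    # p+1 Chebyshev-Lobatto points in [-1, 1]
theta = np.pi / 4 * (xi + 1)                  # parameter values on a quarter circle
cx = P.polyfit(xi, np.cos(theta), p)          # degree-p interpolants of the
cy = P.polyfit(xi, np.sin(theta), p)          # boundary coordinates x(xi), y(xi)

s = np.linspace(-1, 1, 2001)                  # sample the polynomial mapping finely
geom_error = np.max(np.abs(np.hypot(P.polyval(s, cx), P.polyval(s, cy)) - 1.0))
print(f"max deviation of the degree-{p} mapped boundary from the unit circle: {geom_error:.1e}")
```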

Example: Free Vibration of a Spherical Shell

We consider the free vibration of a spherical shell. The shell, shown in Fig. 1(a), has a radius of 100 mm, wall thickness of 1.00 mm, modulus of elasticity of 1.80E5 N/mm², Poisson’s ratio of 0.333, and mass density of 7.86E-9 Ns²/mm⁴ (7860 kg/m³). The goal of computation is to find the first twenty eigenvalues and show that the numerical error is less than 1%.

For the shell model, we chose the anisotropic product space (p,p,3), meaning that the displacement components are approximated by polynomials of degree p in the tangential directions to the mid-surface of the shell but fixed at degree 3 in the direction of the normal. The formulation requires that the displacement components be continuous across element boundaries.

The 20th eigenfunction and the contours of the corresponding first principal stress are shown in Fig. 1(b). The contours were not smoothed, yet there is no noticeable discontinuity in the stress at the inter-element boundaries.

Since the sphere is not constrained, the first six eigenvalues are zero. All eigenvalues converge strongly. For example, the 20th eigenvalue is 7021 Hz, which does not change as p increases from 6 to 8. Repeated eigenvalues occur; hence, the mode shapes are not uniquely defined.

Figure 1: (a) Spherical shell, automesh, 16 elements. (b) The 20th eigenfunction and the contours of the  corresponding first principal stress (p=8, product space). The images were generated by StressCheck 12.0.

The IGA Challenge

Contrary to various claims, IGA is not a new paradigm in the numerical approximation of partial differential equations; it is simply an alternative implementation of the p-version of the finite element method. I will refer to the implementation of the p-version documented in reference [2] as the standard implementation. In this standard implementation, only the minimal required level of continuity (denoted as C0) is enforced to ensure applicability to the broadest class of problems admitted by the formulation.

Advocates of IGA have presented examples demonstrating that it requires fewer DOF to solve certain problems compared to the standard implementation of the p-version. However, the basis for comparison should be the operation count, not the DOF. This is because, while enforcing the continuity of derivatives decreases the number of DOF, it increases the density of the stiffness and mass matrices. A challenge must be based on a class of problems sufficiently large to be of interest from the perspective of engineering practice, justifying the investment associated with code development. It must be demonstrated that IGA performs better in obtaining quantities of interest and estimating their relative errors. As far as I know, no such claim has been formulated and substantiated.


References

[1] Hughes, T.J., Cottrell, J.A. and Bazilevs, Y. Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Computer Methods in Applied Mechanics and Engineering, 194(39-41), pp. 4135-4195, 2005.

[2] Szabó, B. and Babuška, I. Finite Element Analysis. Hoboken, NJ: John Wiley & Sons, Inc., 1991. The 2nd edition, Finite Element Analysis: Method, Verification and Validation, was published in 2021.

[3] Chen, Q. and Babuška, I. Approximate optimal points for polynomial interpolation of real functions in an interval and in a triangle. Computer Methods in Applied Mechanics and Engineering, 128(3-4), pp. 405-417, 1995.


XAI Will Force Clear Thinking About the Nature of Mathematical Models
https://www.esrd.com/xai-and-mathematical-model-reliability/ | Wed, 15 Nov 2023

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


It is generally recognized that explainable artificial intelligence (XAI) will play an important role in numerical simulation where it will impose the requirements of reliability, traceability, and auditability. These requirements will necessitate clear thinking about the nature of mathematical models, the trustworthiness of their predictions, and ways to improve their reliability.

Courtesy Gerd Altmann/geralt.

What is a Mathematical Model?

A mathematical model is an operator that transforms one set of data D, the input, into another set, the quantities of interest F. In shorthand notation we have:

\boldsymbol D\xrightarrow[(I,\boldsymbol p)]{}\boldsymbol F,\quad (\boldsymbol D, \boldsymbol p) \in ℂ \quad (1)

where the right arrow represents the mathematical model. The letters I and p under the right arrow indicate that the transformation involves an idealization (I) as well as parameters (physical properties) p that are determined by calibration. Restrictions on D and p define the domain of calibration ℂ, which is also called the domain of application of the mathematical model.
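
The following sketch illustrates this view of a model as an operator that should refuse to extrapolate outside its domain of calibration. The parameter names, bounds, and the formula standing in for the idealization are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DomainOfCalibration:
    bounds: dict[str, tuple[float, float]]          # name -> (lower, upper)

    def contains(self, values: dict[str, float]) -> bool:
        return all(lo <= values[k] <= hi for k, (lo, hi) in self.bounds.items())

def model(data: dict[str, float], params: dict[str, float],
          domain: DomainOfCalibration) -> float:
    """Transform the input data D into a quantity of interest F, guarding (D, p) ∈ ℂ."""
    if not domain.contains({**data, **params}):
        raise ValueError("(D, p) is outside the domain of calibration")
    # the idealization I: a stand-in formula for illustration only
    return params["C"] * data["load"] ** params["m"]

domain = DomainOfCalibration(bounds={"load": (10.0, 150.0),
                                     "C": (1e-9, 1e-6),
                                     "m": (2.0, 4.0)})
print(model({"load": 100.0}, {"C": 5e-8, "m": 3.0}, domain))   # inside ℂ: returns F
```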

The formulation of mathematical models is a creative, open-ended activity, guided by insight, experience, and personal preferences. The validation and ranking of mathematical models, on the other hand, are based on objective criteria.

The systematic improvement of the predictive performance of mathematical models and their validation is, essentially, a scientific research program. According to Lakatos [1], a scientific research program has three constituent elements: (a) a set of hardcore assumptions, (b) a set of auxiliary hypotheses, and (c) a problem-solving machinery.

In the applied sciences, the hardcore assumptions are the assumptions incorporated in validated models of broad applicability, such as the theory of elasticity, the Navier-Stokes equations, and the Maxwell equations. The objects of investigation are the auxiliary hypotheses.

For example, in linear elastic fracture mechanics (LEFM), the goal is to predict the probability distribution of the length of a crack in a structural component, given the initial crack configuration and a load spectrum.  In this case, the hardcore assumptions are the assumptions incorporated in the theory of elasticity. One auxiliary hypothesis establishes a relationship between a functional defined on the elastic stress field, such as the stress intensity factor, and increments in crack length caused by the application of cyclic loads. The second auxiliary hypothesis accounts for the effects of overload and underload events.  The third auxiliary hypothesis models the statistical dispersion of crack length.

The parameters characterize the relationships defined by the auxiliary hypotheses and define the material properties of the hardcore problem. The domain of calibration ℂ is the set of restrictions on the parameters imposed by the assumptions in the hardcore hypothesis and limitations in the available calibration data.

Problem-Solving

The problem-solving machinery is a numerical method, typically the finite element method. It generates an approximate solution from which the quantities of interest Fnum are computed. It is necessary to show that the relative error in Fnum does not exceed an allowable value τall:

| \boldsymbol F - \boldsymbol F_{num} |/|\boldsymbol F| \le \tau_{all} \quad (2)

To achieve this goal, it is necessary to obtain a sequence of numerical solutions with increasing degrees of freedom [2].
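
The sketch below illustrates the requirement of equation (2) with a made-up convergent sequence: the discretization is enlarged until the estimated relative error falls below the allowable tolerance. Using the change between successive solutions is the crudest possible estimate; in practice, extrapolation as described in [2] is preferable.

```python
def verify(solve, dof_sequence, tau_all=0.01):
    """Enlarge the discretization until the estimated relative error is <= tau_all."""
    previous = None
    for dof in dof_sequence:
        F_num = solve(dof)
        if previous is not None:
            estimate = abs(F_num - previous) / abs(F_num)   # crude error estimate
            if estimate <= tau_all:
                return F_num, estimate, dof
        previous = F_num
    raise RuntimeError("tolerance not reached; extend the hierarchic sequence")

# Stand-in 'solver': a made-up quantity of interest converging to 3.0.
F_num, estimate, dof = verify(lambda n: 3.0 + 40.0 / n, [100, 400, 1600, 6400])
print(f"accepted F_num = {F_num:.5f} at {dof} DOF, estimated relative error {estimate:.4f}")
```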

Demarcation

Not all model development projects (MDPs) are created equal. It is useful to differentiate between progressive, stagnant, and improper MDPs: An MDP is progressive if the domain of calibration is increasing; stagnant if the domain of calibration is not increasing; and improper if the auxiliary hypotheses do not conform with the hardcore assumptions, or the problem-solving method does not have the capability to estimate and control the numerical approximation errors in the quantities of interest. Linear elastic fracture mechanics is an example of a stagnant model development project [3].

Presently, the large majority of engineering model development projects are improper. The primary reason for this is that finite element modeling rather than numerical simulation is used; hence, the capability to estimate and control the numerical approximation errors is absent.

Finite element modeling is formally similar to equation (1):

\boldsymbol D\xrightarrow[(i,\boldsymbol p)]{} \overline {\boldsymbol F}_{num} \quad (3)

where lowercase i is used to indicate intuition in the place of idealization (I) and F̄num replaces Fnum. The overbar is used to distinguish the solutions obtained by finite element modeling from those obtained by proper application of the finite element method.

In finite element modeling, elements are intuitively selected from the library of a finite element software tool and assembled to represent the object of analysis. Constraints and loads are imposed to produce a numerical problem. The right arrow in equation (3) represents a “numerical model”, which may not be an approximation to a well-defined mathematical model, in which case F is not defined and F̄num does not converge to a limit value as the number of degrees of freedom is increased. Consequently, error estimation is not possible. Also, the domain of calibration has a different meaning in finite element modeling than in numerical simulation.

Opportunities for Improving the Predictive Performance of Models

There is a very substantial unrealized potential in numerical simulation technology. To realize that potential, it will be necessary to replace the practice of finite element modeling with numerical simulation and utilize XAI tools to aid analysts in performing simulation projects:

  • Rapid advancements are anticipated in the standardization of engineering workflows, initially through the use of expert-designed engineering simulation applications equipped with autonomous error control procedures.
  • XAI will make it possible to control the errors of approximation very effectively.  Ideally, the information in the input will be used to design the initial mesh and assignment of polynomial degrees in such a way that in one or two adaptive steps the desired accuracies are reached.
  • XAI will be less helpful in controlling model form errors. This is because the formulation of models involves creative input for which no algorithm exists. Nevertheless, XAI will be useful in tracking the evolutionary changes in model development and the relevant experimental data.
  • XAI will help navigate numerical simulation projects.
    • Prevent the use of intuitively plausible but conceptually wrong input data.
    • Shorten training time for the operators of simulation software tools.

The Main Points

  • The reliability and effectiveness of numerical simulation can be greatly enhanced through integration with XAI processes. 
  • The main elements of XAI-integrated numerical simulation processes are shown in Figure 1:

Figure 1: The main elements of XAI-integrated numerical simulation.
  • The integration of numerical simulation with explainable artificial intelligence tools will force the adoption of science-based algorithms for solution verification and hierarchic modeling approaches. 

References

[1] I. Lakatos, The methodology of scientific research programmes, vol. 1, J. Worrall and G. Currie, eds., Cambridge University Press, 1972.

[2] B. Szabó and I. Babuška, Finite Element Analysis: Method, Verification and Validation. 2nd edition, John Wiley & Sons, Inc., 2021.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.


Meshless Methods
https://www.esrd.com/meshless-methods/ | Thu, 07 Nov 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Meshless methods, also known as mesh-free methods, are computational techniques used for the approximation of the solutions of partial differential equations in the engineering and applied sciences. The advertised advantage of the method is that users do not have to worry about meshing. However, eliminating the meshing problem has introduced other, more complex issues. Oftentimes, advocates of meshless methods fail to mention their numerous disadvantages.

When meshless methods were first proposed as an alternative to the finite element method, creating a finite element mesh was more burdensome than it is today. Undoubtedly, mesh generation will become even less problematic with the application of artificial intelligence tools, and the main argument for using meshless methods will weaken over time.

An artistic rendering of the idea of meshless clouds. The spheres represent the supports of the basis functions associated with the centers of the spheres. Image generated by Microsoft Copilot.

Setting Criteria

First and foremost, numerical solution methods must be reliable. This is not just a desirable feature but an essential prerequisite for realizing the potential of numerical simulation and achieving success with initiatives such as digital transformation, digital twins, and explainable artificial intelligence, all of which rely on predictions based on numerical simulation. Assurance of reliability means that (a) the data and parameters fall within the domain of calibration of a validated model, and (b) the numerical solutions have been verified.

In the following, I compare the finite element and meshless methods from the point of view of reliability. The basis for comparison is the finite element method as it would be implemented today, not as it was implemented in legacy codes which are based on pre-1970s thinking. Currently, ESRD’s StressCheck is the only commercially available implementation that supports procedures for estimating and controlling model form and approximation errors in terms of the quantities of interest.

The Finite Element Method (FEM)

The finite element method (FEM) has a solid scientific foundation, developed post-1970. It is supported by theorems that establish conditions for its stability, consistency, and convergence rates. Algorithms exist for estimating the relative errors in approximations of quantities of interest, alongside procedures for controlling model form errors [1].

The Partition of Unity Finite Element Method (PUFEM)

The finite element method has been shown to work well for a wide range of problems, covering most engineering problems. However, it is not without limitations: For the convergence rates to be reasonable, the exact solution of the underlying problem has to have some regularity. Resorting to alternative techniques is warranted when standard implementations of the finite element method are not applicable.  One such technique is the Partition of Unity Finite Element Method (PUFEM), which can be understood as a generalization of the h, p, and hp versions of the finite element method [2]. It provides the ability to incorporate analytical information specific to the problem being solved in the finite element space.

The FEM Challenge

Any method proposed to rival FEM should, at the very least, demonstrate superior performance for a clearly defined set of problems. The benchmark for comparison should involve computing a specific quantity of interest and proving that the relative error is less than, for example, 1%. I am not aware of any publication on meshless methods that has tackled this challenge.

Meshless Methods

Various meshless methods, such as the Element-Free Galerkin (EFG) method, Moving Least Squares (MLS), and Smoothed Particle Hydrodynamics (SPH), using weak and strong formulations of the underlying partial differential equations, have been proposed. The theoretical foundations of meshless methods are not as well-developed as those of the Finite Element Method (FEM). The users of meshless methods have to cope with the following issues:

  1. Enforcement of boundary conditions: The enforcement of essential boundary conditions in meshless methods is generally more complex and less intuitive than in FEM. The size of errors incurred from enforcing boundary conditions can be substantial.
  2. Sensitivity to the choice of basis functions: The stability of meshless methods can be highly sensitive to the choice of basis functions.
  3. Verification: Solution verification with meshless methods poses significant challenges.
  4. Most meshless methods are not really meshless: It is true that traditional meshing is not required, but in weak formulations, the products of the derivatives of the basis functions have to be integrated. Numerical integration is performed over the domains defined by the intersection of supports (support is the subdomain on which the basis function is not zero), which requires a “background mesh.”
  5. Computational power: Meshless methods often require greater computational power due to the global nature of the shape functions used, which can lead to denser matrices compared to FEM. A small numerical illustration of points 2 and 5 is given after this list.
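
The following sketch (a one-dimensional Gaussian radial-basis example, with arbitrary point sets and shape parameters) illustrates points 2 and 5: the system matrix is essentially dense, and its conditioning deteriorates rapidly as the basis functions are flattened.

```python
import numpy as np

def rbf_matrix(centers, eps):
    """Gaussian radial-basis collocation matrix A_ij = exp(-(eps*|x_i - x_j|)^2)."""
    r = np.abs(centers[:, None] - centers[None, :])
    return np.exp(-(eps * r) ** 2)

x = np.linspace(0.0, 1.0, 40)                 # 40 "particles" on a 1D domain
for eps in (20.0, 5.0, 1.0):                  # smaller eps = flatter basis functions
    A = rbf_matrix(x, eps)
    fill = np.count_nonzero(np.abs(A) > 1e-14) / A.size
    print(f"eps = {eps:4.1f}: {100*fill:5.1f}% filled, cond(A) = {np.linalg.cond(A):.1e}")
# For comparison, a 1D finite element stiffness matrix of the same size is
# tridiagonal (about 7% filled), and its conditioning grows only algebraically.
```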

Advice to Management

Decision-makers need solid evidence supporting the reliability of data generated by numerical simulation. Otherwise, where would they get their courage to sign the “blueprint”? They should require estimates of the error of approximation for the quantities of interest. Without such estimates, the value of the computed information is greatly diminished because the unknown approximation errors increase uncertainties in the predicted data.

Management should treat claims of accuracy in marketing materials for legacy finite element software and any software implementing meshless methods with a healthy dose of skepticism. Assertions that a software product was tested against benchmarks and found to perform well should never be taken to mean that it will perform similarly well in all cases. Management should require problem-specific estimates of relative errors in the quantities of interest.


References

[1] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation. 2nd edition, Hoboken, NJ: John Wiley & Sons, Inc., 2021.

[2] Melenk, J. M. and Babuška, I. The partition of unity finite element method: Basic theory and applications. Computer Methods in Applied Mechanics and Engineering, Vol. 139(1-4), pp. 289-314, 1996.


A Critique of the World Wide Failure Exercise
https://www.esrd.com/critique-of-the-wwfe/ | Thu, 03 Oct 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The World-Wide Failure Exercise (WWFE) was an international research project with the goal of assessing the predictive performance of competing failure models for composite materials. Part I (WWFE-I) focused on failure in fiber-reinforced polymer composites under two-dimensional (2D) stresses and ran from 1996 until 2004. Part II was concerned with failure criteria under both 2D and 3D stresses. It ran between 2007 and 2013. Quoting from reference [1]: “Twelve challenging test problems were defined by the organizers of WWFE-II, encompassing a range of materials (polymer, glass/epoxy, carbon/epoxy), lay-ups (unidirectional, angle ply, cross-ply, and quasi-isotropic laminates) and various 3D stress states”. Part III, also launched in 2007, was concerned with damage development in multi-directional composite laminates.

The von Mises stress in an ideal fiber-matrix composite subjected to shearing deformation. The displacements are magnified 15X. Verified solution by StressCheck.

Composite Failure Model Development

According to Thomas Kuhn, the period of normal science begins when investigators have agreed upon a paradigm, that is, the fundamental ideas, methods, language, and theories that guide their research and development activities [2]. We can understand WWFE as an effort by the composite materials research community to formulate such a paradigm. While some steps were taken toward achieving that goal, the goal was not reached. The final results of WWFE-II were inconclusive. The main reason is that the project lacked some of the essential constituents of a model development program. To establish favorable conditions for the evolutionary development of failure criteria for composite materials, procedures similar to those outlined in reference [3] will be necessary. The main points are briefly described below.

  1. Formulation of the mathematical model: The operators that transform the input data into the quantities of interest are defined. In the case of WWFE, a predictor of failure is part of the mathematical model. In WWFE II, twelve different predictors were investigated. These predictors were formulated based on subjective factors: intuition, insight, and personal preferences. A properly conceived model development project provides an objective framework for ranking candidate models based on their predictive performance. Additionally, given the stochastic outcomes of experiments, a statistical model that accounts for the natural dispersion of failure events must be included in the mathematical model.
  2. Calibration: Mathematical models have physical and statistical parameters that are determined in calibration experiments. Invariably, there are limitations on the available experimental data. Those limitations define the domain of calibration. The participants of WWFE failed to grasp the crucial role of calibration in the development of mathematical models. Quoting from reference [1]: “One of the undesirable features, which was shared among a number of theories, is their tendency to calibrate the predictions against test data and then predict the same using the empirical constants extracted from the experiments.” – Calibration is not an undesirable feature. It is an essential part of any model development project. Mathematical models will produce reliable predictions only when the parameters and data are within their domains of calibration. One of the important goals of model development projects is to ensure that the domain of calibration is sufficiently large to cover all applications, given the intended use of the model. However, calibration and validation are separate activities. The dataset used for validation has to be different from the dataset used for calibration [3]. Predicting the calibration data once calibration was performed cannot lead to meaningful conclusions regarding the suitability or fitness of a model.
  3. Validation: Developers are provided with complete descriptions of the validation experiments and, based on this information, predict the probabilities of the outcomes of validation experiments. The validation metric is the likelihood of the outcomes. A toy numerical illustration of points 2 and 3 is given after this list.
  4. Solution verification: It must be shown that the numerical errors in the quantities of interest are negligibly small compared to the errors in experimental observations.
  5. Disposition: Candidate models are ranked based on their predictive performance, measured by the ratio of predicted to realized likelihood values. The calibration domain is updated using all available data. At the end of the validation experiments, the calibration data is augmented with the validation data.
  6. Data management: Experimental data must be collected, curated, and archived to ensure its quality, usability, and accessibility.
  7. Model development projects are open-ended: New ideas can be proposed anytime, and the available experimental data will increase over time. Therefore, no one has the final word in a model development project. Models and their domains of calibration are updated as new data become available.
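
The following toy sketch (with entirely made-up numbers and a Gaussian scatter model assumed purely for illustration) shows the mechanics of points 2 and 3: the parameters are fitted to a calibration set, and the calibrated model is then scored by the likelihood it assigns to a separate, held-out validation set.

```python
import numpy as np

# Hypothetical failure-stress data (MPa) versus off-axis angle (degrees).
angle_cal = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
sigma_cal = np.array([1210.0, 955.0, 770.0, 615.0, 540.0])   # calibration set
angle_val = np.array([10.0, 37.5, 52.0])                     # validation set,
sigma_val = np.array([1030.0, 680.0, 570.0])                 # never used in fitting

# Calibration: fit a stand-in two-parameter trend plus a scatter parameter.
A = np.column_stack([np.ones_like(angle_cal), angle_cal])
theta, *_ = np.linalg.lstsq(A, sigma_cal, rcond=None)
resid = sigma_cal - A @ theta
s = np.sqrt(resid @ resid / (len(sigma_cal) - 2))

# Validation metric: log-likelihood of the held-out observations.
pred = np.column_stack([np.ones_like(angle_val), angle_val]) @ theta
loglik = np.sum(-0.5 * np.log(2.0 * np.pi * s**2) - (sigma_val - pred) ** 2 / (2.0 * s**2))
print(f"calibrated trend: {theta}, scatter: {s:.1f} MPa, validation log-likelihood: {loglik:.2f}")
```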

The Tale of Two Model Development Projects

It is interesting to compare the status of model development for predicting failure events in composite materials with linear elastic fracture mechanics (LEFM), which is concerned with predicting crack propagation in metals, a much less complicated problem. Although no consensus emerged from WWFE-II, there was no shortage of ideas on formulating predictors. In the case of LEFM, on the other hand, the consensus that the stress intensity factor is the predictor of crack propagation emerged in the 1970s, effectively halting further investigation of predictors and causing prolonged stagnation [3]. Undertaking a model development program and applying verification, validation, and uncertainty quantification procedures are essential prerequisites for progress in both cases.

Two Candid Observations

Professor Mike Hinton, one of the organizers of WWFE, delivered a keynote presentation at the NAFEMS World Congress in Boston in May 2011 titled “Failure Criteria in Fibre Reinforced Polymer Composites: Can any of the Predictive Theories be Trusted?” In this presentation, he shared two candid observations that shed light on the status of models created to predict failure events in composite materials:

  1. “The theories coded into current FE tools almost certainly differ from the original theory and from the original creator’s intent.” – In other words, in the absence of properly validated and implemented models, the predictions are unreliable.
  2. He disclosed that Professor Zvi Hashin declined the invitation to participate in WWFE-I, explaining his reason in a letter. He wrote: “My only work in this subject relates to unidirectional fibre composites, not to laminates” … “I must say to you that I personally do not know how to predict the failure of a laminate (and furthermore, that I do not believe that anybody else does).”

Although these observations are dated, I believe they remain relevant today. Contrary to numerous marketing claims, we are still very far from realizing the benefits of numerical simulation in composite materials.

A Sustained Model Development Program Is Essential

To advance the development of design rules for composite materials, stakeholders need to initiate a long-term model development project, as outlined in reference [3]. This approach will provide a structured and systematic framework for research and innovation. Without such a coordinated effort, the industry has no choice but to rely on the inefficient and costly method of make-and-break engineering, hindering overall progress and leading to inconsistent results. Establishing a comprehensive model development project will create favorable conditions for the evolutionary development of design rules for composite materials.

The WWFE project was large and ambitious. However, a much larger effort will be needed to develop design rules for composite materials.


References

[1] Kaddour, A. S. and Hinton, M. J. Maturity of 3D Failure Criteria for Fibre-Reinforced Composites: Comparison Between Theories and Experiments: Part B of WWFE-II. J. Comp. Mats., 47, 925-966, 2013.

[2] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.


Finite Element Libraries: Mixing the “What” with the “How”
https://www.esrd.com/finite-element-libraries-mixing-the-what-with-the-how/ | Tue, 03 Sep 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Engineering students first learn statics, then strength of materials, and progress to the theories of plates and shells, continuum mechanics, and so on. As the course material advances from simple to complex, students often think that each theory (model) stands on its own, overlooking the fact that simpler models are special cases of complex ones. This view shaped the development of the finite element (FE) method in the 1960s and 70s. The software architecture of the legacy FE codes was established in that period.

The Element-Centric View

Richard MacNeal, a principal developer of NASTRAN and co-founder of the MacNeal-Schwendler Corporation (MSC), once told me that his dream was to formulate “the perfect 4-node shell element”. His background was in analog computers, and he thought of finite elements as tuneable objects: If one tunes an element just right, as potentiometers are tuned in analog computers, then a perfect element can be created. This element-centric view led to the implementation of large element libraries, which are still in use today. These libraries mix what we wish to solve (in this instance, a shell model) with how we wish to solve it (using 4-node finite elements).

A cluttered, unattractive library, emblematic of finite element libraries in legacy FE codes. Image generated by Gemini.

In formulating his shell element, MacNeal was constrained by the limitations of the architecture of NASTRAN. Quoting from reference [1]: “An important general feature of NASTRAN which limits the choice of element formulation  is that, with rare exceptions, the degrees of freedom consist of the three components of translation and the three components of rotation at discrete points.” This feature originated from models of structural frames where the joints of beams and columns are allowed to translate and rotate in three mutually orthogonal directions. Such restrictions, common to all legacy FE codes, prevented those codes from keeping pace with the subsequent scientific development of FE analysis.

MacNeal’s formulation of his shell element was entirely intuitive. There is no proof that the finite element solutions corresponding to progressively refined meshes will converge to the exact solution of a particular shell model or even converge at all. Model form and approximation are intertwined.

The classical shell model, also known as the Novozhilov-Koiter (N-K) model, taught in advanced strength of materials classes, is based on the assumption that normals to the mid-surface in the undeformed configuration remain normal after deformation. Making this assumption was necessary in the pre-computer era to allow the solution of simple shell problems by classical methods. Today, the N-K shell model is only of theoretical and historical interest. Instead, we have a hierarchic sequence of shell models of increasing complexity. The next shell model is the Naghdi model, which is based on the assumption that normals to the mid-surface in the undeformed configuration remain straight lines but not necessarily normal. Higher models permit the normal to deform in ways that can be well approximated by polynomials [2]. 

Shells behave like three-dimensional solids in the neighborhoods of support attachments, stiffeners, nozzles, and cutouts. Therefore, restrictions on the transverse variation of the displacement components are not warranted in those locations. Whether a shell is thin or thick depends not only on the ratio of the thickness to the radius of curvature but also on the smoothness of the exact solution. The proper choice of a shell model depends on the problem at hand and the goals of computation. Consider, for example, the free vibration of a shell. When the wavelengths of the mode shapes are close to the thickness, the shearing deformations cannot be neglected, and hence, the shell behaves as a thick shell. Perfect shell elements do not exist. Furthermore, there is no such thing as a perfect element of any kind.

The Model-Centric View

In the model-centric view, we recognize that any model is a special case of a more comprehensive model. For instance, in solid mechanics problems, we typically start with a problem of linear elasticity, where one of the assumptions is that stress is proportional to strain, regardless of the size of the strain. Once the solution is available, we check whether the proportional limit was exceeded. If it was, we solve a nonlinear problem, for example, using the deformation theory of plasticity with a suitable material law. In that case, the linear solution is the first iteration in solving the nonlinear problem. If the displacements are large, we continue with the iterations to solve the geometric nonlinear problem. It is important to ensure that the errors of approximation are negligibly small throughout the numerical solution process.
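
The sketch below illustrates this workflow for a single uniaxial stress point. The Ramberg-Osgood-type law and the material constants are illustrative choices, not taken from the post; the point is that the linear solution serves as the first iterate whenever the proportional limit is exceeded.

```python
# Illustrative material data (MPa) and a Ramberg-Osgood-type uniaxial law:
#   strain(sigma) = sigma/E + alpha*(sigma0/E)*(sigma/sigma0)**n
E, sigma0, alpha, n = 200e3, 350.0, 0.5, 8.0
prop_limit = 250.0

def strain(sigma):
    return sigma / E + alpha * sigma0 / E * (sigma / sigma0) ** n

def stress_at(eps_target, tol=1e-10, max_iter=50):
    sigma = E * eps_target                       # step 1: the linear solution
    if sigma <= prop_limit:
        return sigma                             # linear theory is adequate
    for _ in range(max_iter):                    # step 2: Newton iterations
        residual = strain(sigma) - eps_target
        slope = 1.0 / E + alpha * n / E * (sigma / sigma0) ** (n - 1)
        sigma -= residual / slope
        if abs(residual) < tol * eps_target:
            return sigma
    raise RuntimeError("Newton iteration did not converge")

for eps in (0.0008, 0.0030):                     # below and above the proportional limit
    print(f"strain {eps:.4f}  ->  stress {stress_at(eps):7.1f} MPa")
```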

At first glance, it might seem that model form errors can be made arbitrarily small. However, this is generally not possible. As the complexity of the model increases, so does the number of physical parameters. For instance, transitioning from linear elasticity to accounting for plastic deformation requires introducing empirical constants to characterize nonlinear material behavior. These constants have statistical variations, which increase prediction uncertainty. Ultimately, these uncertainties will likely outweigh the benefits of more complex models.

Implementation

An FE code should allow users to control both the model form and the approximation errors. To achieve this, model and element definitions must be separate, and seamless transitions from one model to another and from one discretization to another must be made possible. In principle, it is possible to control both types of error using legacy FE codes, but since model and element definitions are mixed in the element libraries, the process becomes so complicated that it is impractical to use in industrial settings.

Model form errors are controlled through hierarchic sequences of models, while approximation errors are controlled through hierarchic sequences of finite element spaces [2]. The stopping criterion is that the quantities of interest should remain substantially unchanged in the next level of the hierarchy.

Advice to Management

To ensure the reliability of predictions, it must be shown that the model form errors and the approximation errors do not exceed pre-specified tolerances. Moreover, the model parameters and data must be within the domain of calibration [3]. Management should not trust model-generated predictions unless evidence is provided showing that these conditions are satisfied.

When considering various marketing claims regarding the promised benefits of numerical simulation, digital twins, and digital transformation, management is well advised to keep this statement by philosopher David Hume in mind: “A wise man apportions his beliefs to the evidence.”


References

[1] MacNeal, R. H. A simple quadrilateral shell element.  Computers & Structures, Vol. 8, pp. 175-183, 1978.

[2] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation. 2nd edition, Hoboken, NJ: John Wiley & Sons, Inc., 2021.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.


The Kuhn Cycle in the Engineering Sciences
https://www.esrd.com/kuhn-cycle-in-engineering-sciences/ | Thu, 01 Aug 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In the engineering sciences, mathematical models are used as sources of information for making technical decisions. Consequently, decision-makers need convincing evidence that relying on predictions from a mathematical model is justified. Such reliance is warranted only if:

  • the model has been validated, and its domain of calibration is clearly defined;
  • the errors of approximation are known to be within permissible tolerances [1].

Model development projects are essentially scientific research projects. As such, they are subject to the operation of the Kuhn Cycle, named after Thomas Kuhn, who identified five stages in scientific research projects [2]:

  • Normal Science – Development of mathematical models based on the best scientific understanding of the subject matter.
  • Model Drift – Limitations of the model are encountered. Certain quantities of interest cannot be predicted by the model with sufficient reliability.
  • Model Crisis – Model drift becomes excessive.  Attempts to remove the limitations of the model are unsuccessful.
  • Model Revolution – This begins when candidates for a new model are proposed. The domain of calibration of the new model is sufficiently large to resolve most if not all, issues identified with the preceding model.
  • Paradigm Change – A paradigm consists of the fundamental ideas, methods, language, and theories that are accepted by the members of a scientific or professional community. In this phase, a new paradigm emerges, which then becomes the new Normal Science.

The Kuhn cycle is a valuable concept for understanding how mathematical models evolve. It highlights the importance of paradigms in shaping model development and the role of paradigm shifts in the process.

Example: Linear Elastic Fracture Mechanics

In linear elastic fracture mechanics (LEFM), the goal is to predict the size of a crack, given a geometrical description, an initial crack configuration, material properties, and a load spectrum. The mathematical model comprises (a) the equations of the theory of elasticity, (b) a predictor that establishes a relationship between a functional defined on the elastic stress field (usually the stress intensity factor) and the increments in crack length caused by the application of constant-amplitude cyclic loads, (c) a statistical model that accounts for the natural dispersion of crack lengths, and (d) an algorithm that accounts for the effects of tensile and compressive overload events.

Evolution of LEFM

The period of normal science in LEFM began around 1920 and ended in the 1970s. Many important contributions were made in that period. For a historical overview and commentaries, see reference [3]. Here, I mention only three seminal contributions: Alan A. Griffith investigated brittle fracture; George R. Irwin modified Griffith’s theory for the fracturing of metals; and Paul C. Paris proposed the following relationship between the increment in crack length per cycle of loading and the stress intensity factor K:

{da\over dN} = C(K_{max}-K_{min})^m \qquad (1)

where N is the cycle count, and C and m are constants determined by calibration. This empirical formula is known as Paris’ law. Numerous variants have been proposed to account for cycle ratios and limiting conditions.
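
To illustrate how the calibrated constants enter a life estimate, consider the idealized case of a long center crack in a wide panel under constant-amplitude loading with K_min = 0, so that K_max − K_min ≈ Δσ√(πa), with the geometry correction factor taken as unity. Separating variables in equation (1) and integrating from an initial half-crack length a_i to a final half-crack length a_f gives, for m ≠ 2,

N_f = \int_{a_i}^{a_f} {da \over C\left(\Delta\sigma\sqrt{\pi a}\right)^m} = {a_f^{1-m/2}-a_i^{1-m/2} \over C\left(\Delta\sigma\sqrt{\pi}\right)^m \left(1-m/2\right)}

This closed form is only a sketch of the mechanics of equation (1); in practice, geometry corrections, stress-ratio effects, and variable-amplitude spectra require cycle-by-cycle integration against the calibration data.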

In 1972, the US Air Force adopted damage-tolerant design as part of the Airplane Structural Integrity Program (ASIP) [MIL-STD-1530, 1972]. Damage-tolerant design requires showing that a specified maximum initial damage would not produce a crack large enough to endanger flight safety. The paradigm that Paris’ law is the predictor of crack growth under cyclic loading is now universally accepted.

Fly in the Ointment

Paris’ law is defined on two-dimensional stress fields. However, it is not possible to calibrate any predictor in two dimensions. The specimens used in calibration experiments are typically plate-like objects. In the neighborhood of the points where the crack front intersects the surfaces, the stress field is very different from what is assumed in Paris’ law. Therefore, the parameters C and m in equation (1) are not purely material properties but also depend on the thickness of the test specimen. Nevertheless, as long as Paris’ law is applied to long cracks in plates, the predictions are accurate enough to be useful for practical purposes. However, problems arise when a crack is small relative to the thickness of the plate, for instance, a small corner crack at a fastener hole, which is one of the very important cases in damage-tolerant design. Attempts to fix this problem through the introduction of correction factors have not been successful. First, model drift and then model crisis set in. 

The consensus that the stress intensity factor drives crack propagation consolidated into a dogma about 50 years ago. New generations of engineers have been indoctrinated with this belief, and today, any challenge to this belief is met with utmost skepticism and even hostility. An unfortunate consequence of this is that healthy model development stalled about 50 years ago. The key requirement of damage-tolerant design, which is to reliably predict the size of a crack after the application of a load spectrum, is not met even in those cases where Paris’ law is applicable. This point is illustrated in the following section.

Evidence of the Model Crisis

A round-robin exercise was conducted in 2022. The problem statement was as follows: a centrally cracked 7075-T651 aluminum panel of thickness 0.245 inches and width 3.954 inches was subjected to a load spectrum, starting from an initial half-crack length of 0.070 inches. The quantity of interest was the half-crack length as a function of the number of cycles of loading. The specimen configuration and notation are shown in Fig. 1(a). The load spectrum was characterized by two load maxima given in terms of the nominal stress values σ1 = 22.5 ksi and σ2 = 2σ1/3. The load σ = σ1 was applied in cycles numbered 1, 101, 201, etc.; the load σ = σ2 was applied in all other cycles. The minimum load was zero for all cycles. In comparison with typical design load spectra, this is a highly simplified spectrum. The participants in this round-robin exercise were professional organizations that routinely provide estimates of this kind in support of design and certification decisions.

Calibration data were provided in the form of tabular records of da/dN corresponding to (Kmax – Kmin) for various (Kmin/Kmax) ratios. The participants were asked to account for the effects of the periodic overload events on the crack length. 
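
To illustrate how tabulated calibration records of this kind enter a crack-growth calculation, the Python sketch below interpolates da/dN in log-log coordinates for a given (Kmax − Kmin) at a tabulated stress ratio. The numbers in the table are invented placeholders, not the data supplied to the participants; refusing to extrapolate outside the tabulated range reflects the requirement that inputs stay within the domain of calibration.

import bisect, math

# Hypothetical tabulated calibration data (NOT the round-robin records):
# for each R = Kmin/Kmax, pairs of (delta_K [ksi*sqrt(in)], da/dN [in/cycle]).
TABLE = {
    0.0: [(4.0, 2.0e-7), (8.0, 2.5e-6), (16.0, 3.0e-5), (32.0, 4.0e-4)],
    0.5: [(3.0, 3.0e-7), (6.0, 3.5e-6), (12.0, 4.0e-5), (24.0, 5.0e-4)],
}

def dadn(delta_k, r=0.0):
    """Log-log interpolation of the tabulated growth-rate data for a given
    stress ratio R (only tabulated R values are supported in this sketch).
    Values outside the tabulated delta_K range are not extrapolated."""
    tab = TABLE[r]
    dks = [dk for dk, _ in tab]
    if not dks[0] <= delta_k <= dks[-1]:
        raise ValueError("delta_K is outside the calibrated range")
    i = bisect.bisect_left(dks, delta_k)
    if dks[i] == delta_k:
        return tab[i][1]
    (x0, y0), (x1, y1) = tab[i - 1], tab[i]
    t = (math.log(delta_k) - math.log(x0)) / (math.log(x1) - math.log(x0))
    return math.exp(math.log(y0) + t * (math.log(y1) - math.log(y0)))

# e.g., growth rate at delta_K = 10 ksi*sqrt(in), R = 0:
print(dadn(10.0, r=0.0))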

A positive overload causes a larger increment of the crack length in accordance with Paris’ law, and it also causes compressive residual stress to develop ahead of the crack tip. This residual stress retards crack growth in subsequent cycles while the crack traverses the zone of compressive residual stress. Various models have been formulated to account for retardation (see, for example, AFGROW – DTD Handbook Section 5.2.1.2). Each participant chose a different model. No information was given on whether or how those models were validated. The results of the experiments were revealed only after the predictions were made.
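
To make the role of a retardation model concrete, here is a minimal cycle-by-cycle sketch of the round-robin spectrum with a Wheeler-type retardation factor. It is not any participant's model: the Paris constants, the Wheeler exponent, the plane-stress plastic-zone estimate, and the secant finite-width correction are all placeholder assumptions, and in practice the growth rate would be interpolated from the supplied calibration records rather than computed from a closed-form Paris law.

import math

def k_max(sigma, a, width):
    # Center crack: K = sigma*sqrt(pi*a)*Y, with the secant finite-width
    # correction Y = sqrt(sec(pi*a/width)); a is the half-crack length.
    y = math.sqrt(1.0 / math.cos(math.pi * a / width))
    return sigma * math.sqrt(math.pi * a) * y

def grow(a0=0.070, cycles=50_000, width=3.954, s1=22.5, s2=15.0,
         c=2.0e-9, m=3.5, gamma=1.5, sigma_y=73.0):
    # Spectrum: s1 (overload) in cycles 1, 101, 201, ...; s2 otherwise; R = 0.
    # Wheeler-type retardation: growth is scaled by phi < 1 while the current
    # crack-tip plastic zone lies inside the overload plastic zone.
    a = a0
    a_ol, rp_ol = a0, 0.0                 # location and size of the overload zone
    history = [(0, a)]
    for n in range(1, cycles + 1):
        s = s1 if n % 100 == 1 else s2
        k = k_max(s, a, width)
        rp = (k / sigma_y) ** 2 / (2.0 * math.pi)   # plane-stress estimate
        if s == s1:
            a_ol, rp_ol = a, rp           # a new overload resets the zone
        if a + rp < a_ol + rp_ol:
            phi = (rp / (a_ol + rp_ol - a)) ** gamma
        else:
            phi = 1.0
        a += phi * c * k ** m             # R = 0, so K_max - K_min = K_max
        history.append((n, a))
        if a >= 0.45 * width:             # stop before the width correction blows up
            break
    return history

# e.g., half-crack length versus cycle count for the simplified spectrum:
# a_vs_n = grow(); print(a_vs_n[-1])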

Fig. 1(b) shows the results of the experiments and four of the predicted outcomes. In three of the four cases, the predicted number of cycles is substantially greater than the number of cycles observed in the experiments, and there is a large spread among the predictions.

Figure 1: (a) Test article. (b) The results of experiments and predicted crack lengths.

This problem is within the domain of calibration of Paris’ law, and the available calibration records cover the interval of the (Kmax – Kmin) values used in the round robin exercise. Therefore, in this instance, the suitability of the stress intensity factor to serve as a predictor of crack propagation is not in question.

Given that the primary objective of LEFM is to provide estimates of crack length following the application of a load spectrum, and that this was a highly simplified problem, these results suggest that retardation models based on LEFM are in a state of crisis. This crisis can be resolved through the application of the principles and procedures of verification, validation, and uncertainty quantification (VVUQ) in a model development project conducted in accordance with the procedures described in [1].


Outlook

Damage-tolerant design necessitates reliable prediction of crack size, given an initial flaw and a load spectrum. However, the outcome of the round-robin exercise indicates that this key requirement is not currently met. While I am not in a position to estimate the economic costs of this shortfall, it is safe to say that they account for a significant part of the cost of military aircraft sustainment programs.

I believe that to advance LEFM beyond the crisis stage, organizations that rely on damage-tolerant design procedures must mandate the application of verification, validation, and uncertainty quantification procedures, as outlined in reference [1]. This will not be an easy task, however. A paradigm shift can be a controversial and messy process. As W. Edwards Deming, the American engineer, statistician, and management consultant, observed: “Two basic rules of life are: 1) Change is inevitable. 2) Everybody resists change.”


References

[1] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.

[2] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[3] Rossmanith, H. P., Ed., Fracture Mechanics Research in Retrospect. An Anniversary Volume in Honour of George R. Irwin’s 90th Birthday, Rotterdam: A. A. Balkema, 1997.


Variational Crimes
https://www.esrd.com/variational-crimes/
Mon, 08 Jul 2024 11:00:00 +0000

From the beginning of FEM acceptance, a significant communication gap existed between the engineering and mathematical communities. Engineers did not understand why mathematicians would worry so much about the number of square-integrable derivatives, and mathematicians did not understand how it is possible that engineers can find useful solutions even when the rules of variational calculus are violated (variational crimes). This gap widened over the years: On one hand, the art of finite element modeling became an integral part of engineering practice. On the other hand, the science of finite element analysis became an established branch of applied mathematics.

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In Thomas Kuhn’s terminology, “pre-science” refers to a period of early development in a field of research [1]. During this period, there is no established explanatory framework (paradigm) mature enough to solve the main problems. In the case of the finite element method (FEM), the period of pre-science started when reference [2] was published in 1956 and ended in the early 1970s when scientific investigation began in the applied mathematics community. The publication of lectures at the University of Maryland [3] and the first mathematical book on FEM [4] marked the transition to what Kuhn termed “normal science”.

Two Views

Engineers view FEM as an intuitive modeling tool, whereas mathematicians see it as a method for approximating the solutions of partial differential equations cast in variational form. On the engineering side, the emphasis is on implementation and applications, while mathematicians are concerned with clarifying the conditions for stability and consistency, establishing error estimates, and formulating extraction procedures for various quantities of interest. 

From the beginning, a significant communication gap existed between the engineering and mathematical communities. Engineers did not understand why mathematicians would worry so much about the number of square-integrable derivatives, and mathematicians did not understand how it is possible that engineers can find useful solutions even when the rules of variational calculus are violated. This gap widened over the years: On one hand, the art of finite element modeling became an integral part of engineering practice. On the other hand, the science of finite element analysis became an established branch of applied mathematics.

The Art of Finite Element Modeling

The art of finite element modeling has its roots in the pre-science period of finite element analysis when engineers sought to extend the matrix methods of structural analysis, developed for trusses and frames, to complex structures such as plates, shells, and solids. The major finite element modeling software products in use today, such as NASTRAN, ANSYS, MARC, and Abaqus, are all based on the understanding of the finite element method (FEM) that existed before 1970. As long as the goal is to find force-displacement relationships, such as in load models of airframes and crash dynamics models of automobiles, finite element modeling can provide useful information. However, problems arise when the quantities of interest include (or depend on) the pointwise derivatives of the solution, as in strength analysis, where stresses and strains are of interest.

Misplaced Accusations

The first mathematical book on the finite element method [4] dedicated a chapter to violations of the rules of variational calculus in various implementations of the finite element method. The title of the chapter is “Variational Crimes,” a catchphrase that quickly caught on. The variational crimes are charged as follows:

  1. Using non-conforming elements: Non-conforming elements are those that do not satisfy the interelement continuity requirements of the variational formulation.
  2. Using numerical integration.
  3. Approximating domains and boundary conditions.

Item 1 is a serious crime; however, the motivations for committing it can be eliminated by properly formulating mathematical models. Items 2 and 3 are not crimes; they are essential features of the finite element method, and the associated errors can be easily controlled. The authors were thinking about asymptotic error estimators (what happens when the diameter of the largest element goes to zero) that did not account for items 2 and 3. They did not want to bother with the complications caused by numerical integration and the approximation of the domains and boundary conditions, so they declared those features to be crimes. This may have been a clever move, but certainly not a helpful one.

Sherlock Holmes investigating variational crimes in Victorian London. Image generated by Microsoft Copilot.

Egregious Variational Crimes

The authors of reference [4] failed to mention the truly egregious variational crimes that are very common in the practice of finite element modeling today and will have to be abandoned if the reliability of predictions based on finite element computations is to be established:

  1. Using point constraints. Perhaps the most common variational crime is using point constraints for anything other than rigid-body constraints. The finite element solution will converge to a solution that ignores the point constraints if such a solution exists; otherwise, it will diverge. However, the rates of convergence or divergence are typically very slow and, for the discretizations used in practice, hardly noticeable. So why should we worry about it? Because either we are not approximating the solution to the problem we had in mind, or we are “approximating” a problem that has no solution. Finding an approximation to a solution that does not exist makes no sense, yet such occurrences are very common in finite element modeling practice. The apparent credibility of the finite element solution is owed to the near cancellation of two large errors: the conceptual error of using illegal constraints and the numerical error of not using a discretization fine enough to make the conceptual error visible. A detailed explanation is available in reference [5], Section 5.2.8.
  2. Using point forces in 2D and 3D elasticity (or more generally in 2D and 3D problems). In linear elasticity, the exact solution does not have finite strain energy when point forces are applied. Hence, any finite element solution “approximates” a problem that does not have a solution in energy space.  Once again, divergence is very slow. When point forces are applied, element-by-element equilibrium is satisfied, and the effects of point forces are local, whereas the effects of point constraints are global. Generally, it is permissible to apply point forces in the region of secondary interest but not in the region of primary interest, where the goal is to compute quantities that depend on the derivatives, such as stresses and strains [5].
  3. Using reduced integration. At the time of the publication of their book [4], Strang and Fix could not have known about reduced integration, which was introduced a few years later [6]. Reduced integration was justified in typical finite element modeling fashion: low-order elements exhibit shear locking and Poisson-ratio locking. Since the elements that lock “are too stiff,” it is possible to make them softer by using fewer than the necessary integration points. The consequence was that the elements exhibited spurious “zero energy modes,” called “hourglassing,” which had to be controlled by various tuning parameters. For example, in the Abaqus Analysis User’s Manual, C3D8RHT(S) is identified as an “8-node trilinear displacement and temperature, reduced integration with hourglass control, hybrid with constant pressure” element. Tinkering with the integration rules may be useful in the art of finite element modeling when the goal is to tune stiffness relationships (as, for example, in crash dynamics models), but it is an egregious crime in finite element analysis because it introduces a source of error that cannot be controlled by mesh refinement or by increasing the polynomial degree, and it makes a posteriori error estimation impossible.
  4. Reporting computed data that do not converge to a finite value. For example, if a domain has one or more sharp reentrant corners in the region of primary interest, then the maximum stress computed from a finite element solution will be a finite number but will tend to infinity when the degrees of freedom are increased. It is not meaningful to report such a computed value: the error is infinitely large. A minimal convergence check is sketched after this list.
  5. Tricks used when connecting elements based on different formulations. For example, connecting an axisymmetric shell element (3 degrees of freedom per node) with an axisymmetric solid element (2 degrees of freedom) involves tricks of various sorts, most of which are illegal.
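
The point raised in item 4 can be turned into a routine check: compute the quantity of interest on a sequence of increasingly refined discretizations and examine whether the values approach a limit. The sketch below uses Aitken's delta-squared formula as a generic extrapolation device; it is not the error estimator of reference [5], and the numbers in the usage example are invented for illustration.

def check_convergence(values):
    """values: a quantity of interest computed on a hierarchic sequence of
    discretizations with increasing numbers of degrees of freedom (at least
    three entries). Reports whether the sequence appears to approach a limit
    and, if so, an extrapolated limit and a rough relative-error estimate for
    the finest solution. Aitken's delta-squared formula is used as a generic
    accelerator; it is not the estimator described in reference [5]."""
    q1, q2, q3 = values[-3:]
    d1, d2 = q2 - q1, q3 - q2
    if d1 == 0 and d2 == 0:
        return {"converging": True, "limit_estimate": q3,
                "relative_error_estimate": 0.0}
    if abs(d2) >= abs(d1):
        return {"converging": False,
                "note": "differences are not shrinking; the reported value "
                        "does not approximate a finite limit"}
    q_inf = q3 - d2 * d2 / (d2 - d1)      # Aitken delta-squared extrapolation
    return {"converging": True,
            "limit_estimate": q_inf,
            "relative_error_estimate": abs(q_inf - q3) / abs(q_inf)}

# Invented numbers for illustration: peak stress (ksi) from three refinements.
print(check_convergence([41.2, 42.6, 42.9]))   # appears to converge
print(check_convergence([55.0, 71.0, 96.0]))   # diverging, e.g., at a reentrant corner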

Takeaway

The deeply ingrained practice of finite element modeling has its roots in the pre-science period of the development of the finite element method. To meet the current reliability expectations in numerical simulation, it will be necessary to routinely perform solution verification. This is possible only through the science of finite element analysis, respecting the rules of variational calculus. When thinking about digital transformation, digital twins, certification by analysis, and linking simulation with artificial intelligence tools, one must think about the science of finite element analysis and not the art of finite element modeling rooted in pre-1970s thinking.


References

[1] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[2] Turner, M.J., Clough, R.W., Martin, H.C. and Topp, L.J. Stiffness and deflection analysis of complex structures. Journal of the Aeronautical Sciences, Vol. 23, No. 9, pp. 805-823, 1956.

[3] Babuška, I. and Aziz, A.K. Survey lectures on the mathematical foundations of the finite element method.  The mathematical foundations of the finite element method with applications to partial differential equations (A. K. Aziz, ed.) Academic Press, 1972.

[4] Strang, G. and Fix, G. An analysis of the finite element method. Prentice Hall, 1973.

[5] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation, 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc., 2021.

[6] Hughes, T.J., Cohen, M. and Haroun, M. Reduced and selective integration techniques in the finite element analysis of plates. Nuclear Engineering and Design, Vol. 46, No. 1, pp. 203-222, 1978.


Simulation Governance
https://www.esrd.com/simulation-governance-at-the-present/
Thu, 13 Jun 2024 20:23:21 +0000

At present, a very substantial unrealized potential exists in numerical simulation. Simulation technology has matured to the point where management can realistically expect the reliability of predictions based on numerical simulations to match the reliability of observations in physical experimentation. This will require management to upgrade simulation practices through exercising simulation governance.

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Digital transformation, digital twins, certification by analysis, and AI-assisted simulation projects are generating considerable interest in engineering communities. For these initiatives to succeed, the reliability of numerical simulations must be assured. This can happen only if management understands that simulation governance is an essential prerequisite for success and undertakes to establish and enforce quality control standards for all simulation projects.

The idea of simulation governance is so simple that it is self-evident: Management is responsible for the exercise of command and control over all aspects of numerical simulation. The formulation of technical requirements is not at all simple, however. A notable obstacle is the widespread confusion of the practice of finite element modeling with numerical simulation. This misconception is fueled by marketing hyperbole, falsely suggesting that purchasing a suite of software products is equivalent to outsourcing numerical simulation.  

At present, a very substantial unrealized potential exists in numerical simulation. Simulation technology has matured to the point where management can realistically expect the reliability of predictions based on numerical simulations to match the reliability of observations in physical experimentation. This will require management to upgrade simulation practices through exercising simulation governance.

The Kuhn Cycle

The development of numerical simulation technology falls under the broad category of scientific research programs, which encompass model development projects in the engineering and applied sciences as well. By and large, these programs follow the pattern of the Kuhn Cycle [1] illustrated schematically in Fig. 1 in blue:

Figure 1: Schematic illustration of the Kuhn cycle.

A period of pre-science is followed by normal science. In this period, researchers have agreed on an explanatory framework (paradigm) that guides the development of their models and algorithms.  Program (or model) drift sets in when problems are identified for which solutions cannot be found within the confines of the current paradigm. A program crisis occurs when the drift becomes excessive and attempts to remove the limitations are unsuccessful. Program revolution begins when candidates for a new approach are proposed. This eventually leads to the emergence of a new paradigm, which then becomes the explanatory framework for the new normal science.

The Development of Finite Element Analysis

The development of finite element analysis followed a similar pattern. The period of pre-science began in 1956 and lasted until about 1970. In this period, engineers who were familiar with the matrix methods of structural analysis were trying to extend those methods to stress analysis. The formulation of the algorithms was based on intuition, testing was based on trial and error, and arguing from the particular to the general (a logical fallacy) was common.

Normal science began in the early 1970s when the mathematical foundations of finite element analysis were addressed in the applied mathematics community. By that time, the major finite element modeling software products in use today were under development. Those development efforts were largely motivated by the needs of the US space program. The developers adopted a software architecture based on pre-science thinking. I will refer to these products as legacy FE software: For example, NASTRAN, ANSYS, MARC, and Abaqus are all based on the understanding of the finite element method (FEM) that existed before 1970.

Mathematical analysis of the finite element method identified a number of conceptual errors. However, the conceptual framework of mathematical analysis and the language used by mathematicians were foreign to the engineering community, and there was no meaningful interaction between the two communities.

The scientific foundations of finite element analysis were firmly established by 1990, and finite element analysis became a branch of applied mathematics. This means that, for a very large class of problems that includes linear elasticity, the conditions for stability and consistency were established, estimates were obtained for convergence rates, and solution verification procedures were developed, as were elegant algorithms for superconvergent extraction of quantities of interest such as stress intensity factors. I was privileged to have worked closely with Ivo Babuška, an outstanding mathematician who is rightfully credited for many key contributions.

Normal science continues in the mathematical sphere, but it has no influence on the practice of finite element modeling. As indicated in Fig. 1, the practice of finite element modeling is rooted in the pre-science period of finite element analysis, and having bypassed the period of normal science, it had reached the stage of program crisis decades ago.

Evidence of Program Crisis

The knowledge base of the finite element method in the pre-science period was a small fraction of what it is today. The technical differences between finite element modeling and numerical simulation are addressed in one of my earlier blog posts [2]. Here, I note that decision-makers who have to rely on computed information have reasons to be disappointed. For example, the Air Force Chief of Staff, Gen. Norton Schwartz, was quoted in Defense News in 2012 [3] as saying: “There was a view that we had advanced to a stage of aircraft design where we could design an airplane that would be near perfect the first time it flew. I think we actually believed that. And I think we’ve demonstrated in a compelling way that that’s foolishness.”

General Schwartz expected that the reliability of predictions based on numerical simulation would be similar to the reliability of observations in physical tests. This expectation was not unreasonable considering that by that time, legacy FE software tools had been under development for more than 40 years. What the general did not know was that, while the user interfaces greatly improved and impressive graphic representations could be produced, the underlying solution methodology was (and still is) based on pre-1970s thinking.

As a result, efforts to integrate finite element modeling with artificial intelligence and to establish digital twins based on finite element modeling will surely end in failure.

Paradigm Change Is Necessary

Paradigm change is never easy. Max Planck observed: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” This is often paraphrased, saying: “Progress occurs one funeral at a time.” Planck was referring to the foundational sciences and changing academic minds.  The situation is more challenging in the engineering sciences, where practices and procedures are often deeply embedded in established workflows and changing workflows is typically difficult and expensive.

What Should Management Do?

First and foremost, management should understand that simulation is one of the most abused words in the English language. Furthermore:

  • Treat any marketing claim involving simulation with an extra dose of skepticism. Prior to undertaking projects in the areas of digital transformation, certification by analysis, digital twins, and AI-assisted simulation, ensure that the mathematical models produce reliable predictions.
  • Recognize the difference between finite element modeling and numerical simulation.
  • Understand that mathematical models produce reliable predictions only within their domains of calibration.
  • Treat model form and numerical approximation errors separately and require error control in the formulation and application of mathematical models.
  • Do not accept computed data without error metrics.
  • Understand that model development projects are open-ended.
  • Establish conditions favorable for the evolutionary development of mathematical models.
  • Become familiar with the concepts and terminology in reference [4]. For additional information on simulation governance, I recommend ESRD’s website.


References

[1] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[2] Szabó B. Why Finite Element Modeling is Not Numerical Simulation? ESRD Blog. November 2, 2023. https://www.esrd.com/why-finite-element-modeling-is-not-numerical-simulation/.

[3] Weisgerber, M. DoD Anticipates Better Price on Next F-35 Batch, Gannett Government Media Corporation, 8 March 2012. [Online]. Available: https://tinyurl.com/282cbwhs.

[4] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.


Digital Transformation
https://www.esrd.com/digital-transformation/
Fri, 17 May 2024 01:31:22 +0000

Digital transformation is a multifaceted concept with plenty of room for interpretation. Its common theme emphasizes the proactive adoption of digital technologies to reshape business practices with the goal of gaining a competitive edge. The scope, timeline, and resource allocation of digital transformation projects depend on the specific goals and objectives. Here, we address digital transformation in the engineering sciences, focusing on numerical simulation.

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Digital transformation is a multifaceted concept with plenty of room for interpretation. Its common theme emphasizes the proactive adoption of digital technologies to reshape business practices with the goal of gaining a competitive edge. The scope, timeline, and resource allocation of digital transformation projects depend on the specific goals and objectives. Here, I address digital transformation in the engineering sciences, focusing on numerical simulation.

Digital Technologies in the Engineering Sciences

Digital technologies have been integrated into the engineering sciences since the 1950s.  The adoption process has not been uniform across all disciplines. Some fields (like aerospace) adopted technologies early, while others were slower to change. The development and adoption of these technologies are ongoing. Engineering today is increasingly digital, and innovations are constantly changing the way engineers approach their work. Here are some important milestones:

Early Adoption (1950s-1970s)

  • Mainframe computers were used for engineering calculations that would have been impossible or extremely time-consuming to perform by hand.
  • Numerical control (NC) machines used punched tape or cards to control tool movements, streamlining machining processes.
  • Early Computer-Aided Design (CAD) systems revolutionized drafting in the 1960s. They allowed engineers to create and manipulate drawings on a computer, making design iterations much faster than previously possible.

Period of Rapid Growth (1980s-1990s)

  • Affordable Personal Computers (PCs) made computing power accessible to individual engineers and small firms.
  • Development of CAD software brought 3D modeling from specialized applications into mainstream design.
  • Finite Element Modeling software became commercially available, allowing engineers to perform structural and strength calculations.
  • The mathematical foundations of the finite element method (FEM) were established, and finite element analysis (FEA) became a branch of Applied Mathematics.

Post-Millennial Development  (2000s-Present)

  • Cloud-based solutions offer scalable computing power and collaboration tools, making complex calculations accessible without massive hardware investment.
  • Building Information Modeling (BIM) revolutionized the architecture, engineering, and construction (AEC) industries.
  • Internet of Things (IoT): Networked sensors and devices provide engineers with real-time data to monitor structures, predict maintenance needs, and optimize operations.
  • Additive Manufacturing (3D Printing) allows for the rapid creation of complex prototypes and even functional end-use parts.

Given that digital technologies have been successfully integrated into engineering practice, it may appear that not much else needs to be done. However, important challenges remain, and there are many opportunities for improvement. This is discussed next.

Outlook: Opportunities and Challenges

Bearing in mind that the primary goal of digital transformation is to enhance competitiveness, in the field of numerical simulation, this translates to improving the predictive performance of mathematical models. Ideally, we aim to reach a reliability level in model predictions comparable to that of physical experimentation. From the technological point of view, this goal is achievable: We have the theoretical understanding of how to maximize the predictive performance of mathematical models through the application of verification, validation, and uncertainty quantification procedures. Furthermore, advancements in explainable artificial intelligence (XAI) technology can be utilized to optimize the management of numerical simulation projects so as to maximize their reliability and effectiveness.  

The primary challenge in the field of engineering sciences is that further progress in digital transformation will require fundamental changes in how numerical simulation is currently understood by the engineering community and how it is practiced in industrial settings. It is essential to keep in mind the differences between finite element modeling and numerical simulation. I explained the reasons for this in an earlier blog post [1]. The art of finite element modeling will have to be replaced by the science of finite element analysis, and the verification, validation, and uncertainty quantification (VVUQ) procedures will have to be applied [2].

Paradoxically, the successful early integration of finite element modeling practices and software tools into engineering workflows now impedes attempts to utilize technological advances that occurred after the 1970s. The software architecture of legacy finite element codes was substantially set by 1970, based on the understanding of the finite element method that existed at that time. Limitations of the software architecture prevented subsequent advances, such as a posteriori error estimation in terms of the quantities of interest and control of model form errors, both of which are essential for meeting the reliability requirements in numerical simulation. Abandoning finite element modeling practices and embracing the methodology of numerical simulation technology is a major challenge for the engineering community.

The “I Believe” Button

An ANSYS blog [3] tells the story of a presentation made to an A&D executive. The presentation was to make a case for transforming his department using digital engineering. At the end of the presentation, the executive pointed to a coaster on his desk. “See this? That’s the ‘I believe’ button. I can’t hit it. I just can’t hit it. Help me hit it.” Clearly, the executive was asking for convincing evidence that the computed information was sufficiently reliable to support decision-making in his department. Put another way, he did not have the courage to sign the blueprint on the basis of data generated by digital engineering. What it takes to gather that courage was addressed in one of my earlier blogs [4]. Reliability considerations significantly influence the implementation of simulation process data management (SPDM).

Change Is Necessary

The frequently cited remark by W. Edwards Deming: “Change is not obligatory, but neither is survival,” reminds us of the criticality of embracing change.


References

[1] Szabó, B. Why Finite Element Modeling is Not Numerical Simulation? ESRD Blog. November 2, 2023. https://www.esrd.com/why-finite-element-modeling-is-not-numerical-simulation/

[2] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024. https://authors.elsevier.com/c/1isOB3CDPQAe0b

[3] Bleymaier, S. Hit the “I Believe” Button for Digital Transformation. ANSYS Blog. June 14, 2023. https://www.ansys.com/blog/believe-in-digital-transformation

[4] Szabó, B. Where do you get the courage to sign the blueprint? ESRD Blog. October 6, 2023. https://www.esrd.com/where-do-you-get-the-courage-to-sign-the-blueprint/

