The Demarcation Problem in the Engineering Sciences
https://www.esrd.com/demarcation-problem-in-engineering-sciences/
Thu, 01 Feb 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Generally speaking, philosophers are much better at asking questions than answering them. The question of distinguishing between science and pseudoscience, known as the demarcation problem, is one of their hotly debated issues. Some even argued that the demarcation problem is unsolvable [1]. That may well be true when the question is posed in its broadest generality. However, this question can and must be answered clearly and unequivocally in the engineering sciences.

That is because, in the engineering sciences, we rely on validated models of broad applicability, such as the theories of heat transfer and continuum mechanics, the Maxwell equations, and the Navier-Stokes equations.  Therefore, we can be confident that we are building on a solid scientific foundation. A solid foundation does not guarantee a sound structure, however. We must ensure that the algorithms used to estimate the quantities of interest are also based on solid scientific principles. This entails checking that there are no errors in the formulation, implementation, or application of models.

In engineering sciences, we classify mathematical models as ‘proper’ or ‘improper’ rather than ‘scientific’ or ‘pseudoscientific’. A model is said to be proper if it is consistent with the relevant mathematical theorems that guarantee the existence and, when applicable, the uniqueness of the exact solution. Otherwise, the model is improper. At present, the large majority of models used in engineering practice are improper. Following are examples of frequently occurring types of error, with brief explanations.

Conceptual Errors

Conceptual errors, also known as “variational crimes”, occur when the input data and/or the numerical implementation are inconsistent with the formulation of the mathematical model. For example, in the displacement formulation in two and three dimensions, point constraints are permitted only as rigid-body constraints, when the body is in equilibrium; point forces are permitted only in the domain of secondary interest [2]; and non-conforming elements and reduced integration are not permitted.

When conceptual errors are present, the numerical solution is not an approximation to the solution of the mathematical problem we have in mind, in which case it is not possible to estimate the errors of approximation. In other words, it is not possible to perform solution verification.

Model Form Errors

Model form errors are associated with the assumptions incorporated in mathematical models. Those assumptions impose limitations on the applicability of the model. Various approaches exist for estimating the effects of those limitations on the quantities of interest. The following examples illustrate two such approaches.

Example 1

Linear elasticity problems limit the stresses and strains to the elastic range, the displacement formulation imposes limitations on Poisson’s ratio, and pointwise stresses or strains are considered averages over a representative volume element. This is because the assumptions of continuum theory do not apply to real materials on the micro-scale.

Linear elasticity problems should be understood to be special cases of nonlinear problems that account for the effects of large displacements and large strains and for one of many possible material laws. Having solved a linear problem, we can check whether, and to what extent, the simplifying assumptions were violated, and then decide whether it is necessary to solve the appropriate nonlinear problem. This is the hierarchic view of models: Each model is understood to be a special case of a more comprehensive model [2].
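As a concrete illustration of such a check, the short Python sketch below examines the output of a hypothetical linear solution against its own assumptions. The field values, the yield limit, and the small-strain threshold are placeholders standing in for data that would come from the actual solution and the material specification.

max_von_mises = 310.0         # MPa, largest von Mises stress in the linear solution (hypothetical)
max_principal_strain = 0.004  # largest principal strain in the linear solution (hypothetical)
yield_limit = 280.0           # MPa, material yield stress (hypothetical)
small_strain_limit = 0.01     # rough threshold for the small-strain assumption (assumed)

violations = []
if max_von_mises > yield_limit:
    violations.append(f"max von Mises stress {max_von_mises:.0f} MPa exceeds the yield stress ({yield_limit:.0f} MPa)")
if max_principal_strain > small_strain_limit:
    violations.append(f"max principal strain {max_principal_strain:.3f} exceeds the small-strain threshold ({small_strain_limit})")

if violations:
    print("Simplifying assumptions of the linear model were violated; consider the next model in the hierarchy:")
    for v in violations:
        print(" -", v)
else:
    print("The linear solution does not contradict its own assumptions.")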

Remark

Theoretically, one could make the model form error arbitrarily small by moving up the model hierarchy.  In practice, however, increasing complexity in model form entails an increasing number of parameters that have to be determined experimentally. This introduces uncertainties, which increase the dispersion of the predicted values of the quantities of interest.

Example 2

In many practical applications, the mathematical problem is simplified by dimensional reduction. Within the framework of linear elasticity, for instance, we have hierarchies of plate and shell models where the variation of displacements along the normal to the reference surface is restricted to polynomials or, in the case of laminated plates and shells, piecewise polynomials of low order [3]. In these models, boundary layer effects occur. The boundary layers are typically strong at free edges. These effects are caused by edge singularities that perturb the dimensionally reduced solution. The perturbation depends on the hierarchic order of the model. Typically, the goal of computation is strength analysis, that is, estimation of the values of predictors of failure initiation. It must be shown that the predictors are independent of the hierarchic order. This challenging problem is typically overlooked in finite element modeling. In the absence of an analytical tool capable of guaranteeing the accuracy of predictors of failure initiation, it is not possible to determine whether a design rule is satisfied or not.

Figure 1: T-joint of laminated plates.

Numerical Errors

Since the quantities of interest are computed numerically, it is necessary to verify that the numerical values are sufficiently close to their exact counterparts. The meaning of “sufficiently close” is context-dependent: For example, when formulating design rules, an interpretation of experimental information is involved. It has to be ensured that the numerical error in the quantities of interest is negligibly small in comparison with the size of the experimental errors. Otherwise, preventable uncertainties are introduced in the calibration process.
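One way to carry out this comparison is sketched below in Python. The quantity-of-interest values (from a sequence of increasingly refined solutions) and the experimental measurements are hypothetical; Aitken extrapolation is used only as a generic estimator of the converged value, and the acceptance criterion (numerical error below one-tenth of the experimental scatter) is an assumption, not a rule taken from the text.

import numpy as np

# Quantity of interest computed on three successively enriched discretizations (hypothetical)
q1, q2, q3 = 101.8, 100.6, 100.3

# Aitken delta-squared estimate of the converged value and of the numerical error
q_limit = q3 - (q3 - q2)**2 / ((q3 - q2) - (q2 - q1))
numerical_rel_error = abs(q3 - q_limit) / abs(q_limit)

# Repeated experimental measurements of the same quantity (hypothetical)
experiments = np.array([99.1, 101.5, 100.9, 98.7, 100.2])
experimental_cov = experiments.std(ddof=1) / experiments.mean()

print(f"estimated numerical relative error:    {numerical_rel_error:.2%}")
print(f"experimental coefficient of variation: {experimental_cov:.2%}")
print("numerical error negligible relative to experimental scatter:",
      numerical_rel_error < 0.1 * experimental_cov)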

Realizing the Potential of Numerical Simulation

If we examine a representative sample of mathematical models used in the various branches of engineering, we find that the large majority of models suffer from one or more errors like those described above. In other words, the large majority of models used in engineering practice are improper. There are many reasons for this, chief among them the obsolete notion of finite element modeling, which is deeply entrenched in the engineering community.

As noted in my earlier blog, entitled Obstacles to Progress, the art of finite element modeling evolved well before the theoretical foundations of finite element analysis were established. Engineering books, academic courses, and professional workshops emphasize the practical, intuitive aspects of finite element modeling and typically omit cautioning against variational crimes. Even some of the fundamental concepts and terminology needed for understanding the scientific foundations of numerical simulation are missing. For example, a senior engineer of a Fortune 100 company, with impeccable academic credentials earned more than three decades earlier, told me that, in his opinion, the exact solution is the outcome of a physical experiment. This statement revealed a lack of awareness of the meaning and relevance of the terms verification, validation, and uncertainty quantification.

To realize the potential of numerical simulation, management will have to exercise simulation governance [4]. This will necessitate learning to distinguish between proper and improper modeling practices and establishing the technical requirements needed to ensure that both the model form and approximation errors in the quantities of interest are within acceptable bounds.


References

[1] Laudan L. The Demise of the Demarcation Problem. In: Cohen R.S., Laudan L. (eds) Physics, Philosophy and Psychoanalysis. Boston Studies in the Philosophy of Science, vol 76. Springer, Dordrecht, 1983.

[2] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification, and Validation, 2nd ed., Section 4.1. John Wiley & Sons, Inc., 2021.

[3] Actis, R., Szabó, B. and Schwab, C. Hierarchic models for laminated plates and shells. Computer Methods in Applied Mechanics and Engineering, 172(1-4), pp. 79-107, 1999.

[4] Szabó, B. and Actis, R. Simulation governance: Technical requirements for mechanical design. Computer Methods in Applied Mechanics and Engineering, 249, pp.158-168, 2012.


Why Worry About Singularities?
https://www.esrd.com/why-worry-about-singularities/
Thu, 14 Dec 2023

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


A mathematician delivered a keynote presentation at an engineering conference some years ago. At the coffee break following the presentation, a highly respected senior developer of a legacy finite element code remarked: “I do not understand why the speaker was so worried about singularities. We never see them.”

In the context of the keynote presentation, singularities were understood to be properties of the exact solutions of mathematical problems approximated by the finite element method. Singularities occur at points where the exact solution lacks differentiability or analyticity. The remark, on the other hand, was made in the context of finite element modeling, where a numerical problem is constructed without considering the underlying mathematical problem. The remark highlights the lack of a common language between the pre-scientific notion of finite element modeling and finite element analysis, which is a branch of applied mathematics.

Why Do Mathematicians Worry About Singularities?

Mathematicians understand finite element analysis (FEA) as a method for obtaining an approximation to the exact solution of a well-defined mathematical problem, such as a problem of elasticity, cast in variational form. Specifically, the finite element solution uFE converges to the exact solution uEX in a norm (which depends on the variational form) as the number of degrees of freedom N is increased. An important question is, how fast does it converge?  For most practical problems, convergence is quantified by the inequality:

||\boldsymbol u_{EX} -\boldsymbol u_{FE}||_E \le \frac{C}{N^{\beta}}  \quad (1)

where on the left is the energy norm measure of the difference between the exact and the finite element solution (which is closely related to the root-mean-square error in stress), C and β are positive constants, and β is called the rate of convergence. The size of β depends on the regularity (smoothness) of uEX and on the scheme used for increasing N. The details are available in textbooks, see (for example) [1].  The smoothness of uEX is quantified by a positive number λ. In many practical problems 0 < λ < 1.

For instance, consider the two-dimensional elasticity problem on the L-shaped domain, a frequently used benchmark problem, where λ equals 0.544. This is a manufactured problem with a known exact solution, allowing for the calculation of approximation errors [2].

Referring to Figure 1, if uniform mesh refinement is used at a fixed polynomial degree (h-extension), then β = λ/2 = 0.272. If the polynomial degree is increased on a fixed uniform mesh (p-extension), then β = λ = 0.544. If p-extension is used on a mesh that is graded in a geometric progression toward the singular point then, for large N, we still have β = λ = 0.544; however, convergence is much faster at small N values.

Assume that we wish to reduce the relative error in energy norm to 1 percent. If we increase the polynomial degree uniformly (p-extension) on a geometrically graded mesh, then we have to solve less than 10³ simultaneous equations. In contrast, if we use uniform mesh refinement and p = 2 (h-extension), then we have to solve about 10⁷ equations. The ratio is roughly 10⁴. It took less than one second on a desktop computer to solve 10³ equations. If we assume that the solution time is proportional to the number of degrees of freedom squared, then achieving 1% relative error with uniform mesh refinement would take about 10⁸ seconds, or 3.2 years. This shows that the errors of approximation can be controlled only through proper design of the discretization scheme, which involves taking the characteristics of the underlying mathematical problem into consideration.
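The arithmetic behind such comparisons follows directly from inequality (1): if one reference point (N0, e0) on a convergence curve with rate β is known, the number of degrees of freedom needed to reach a target error is N = N0(e0/e_target)^(1/β). The short Python sketch below illustrates how strongly β enters; the reference point is hypothetical, whereas the figures quoted above come from the computed convergence curves shown in Figure 1.

def dofs_for_target_error(N_ref, e_ref, e_target, beta):
    """N needed to reach e_target if the error follows e = C / N**beta and
    the curve passes through the reference point (N_ref, e_ref)."""
    return N_ref * (e_ref / e_target) ** (1.0 / beta)

lam = 0.544                      # smoothness of the exact solution, L-shaped domain
N_ref, e_ref = 500, 0.05         # hypothetical reference point: 5% error at 500 DOF
target = 0.01                    # goal: 1% relative error in energy norm

N_p = dofs_for_target_error(N_ref, e_ref, target, beta=lam)        # p-extension: beta = lambda
N_h = dofs_for_target_error(N_ref, e_ref, target, beta=lam / 2.0)  # h-extension at fixed p: beta = lambda/2

print(f"p-extension: roughly {N_p:.0f} degrees of freedom")
print(f"h-extension: roughly {N_h:.2e} degrees of freedom")
print(f"solve-time ratio if time scales with N**2: about {(N_h / N_p) ** 2:.1e}")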

Figure 1: The L-shaped domain problem. Convergence curves for uniform mesh refinement at a fixed polynomial degree (h-extension), increasing polynomial degree on a fixed uniform mesh (p-extension), and increasing polynomial degree on a geometrically graded fixed finite element mesh consisting of 18 elements.

Why Should Engineers Worry About Singularities?

If the solution of the underlying mathematical problem has singular points, as in the case of the L-shaped domain problem, then the goal of the computation cannot be the determination of the maximum stress. The finite element solution predicts finite values for stress; however, the predicted maximum stress increases without bound as N is increased. The error in the maximum stress is infinitely large even if the root-mean-square error in stress on the entire domain is negligibly small. This is illustrated in Figure 2, where the von Mises stress corresponding to the finite element solution on the 18-element geometrically graded mesh with p = 8 is displayed.
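A back-of-the-envelope illustration of the divergence: near a reentrant corner the exact stress behaves like r^(λ-1) with 0 < λ < 1, so values sampled ever closer to the corner, which is in effect what refinement does, grow without bound. In the short Python sketch below the stress intensity scale is arbitrary.

lam = 0.544                        # L-shaped domain, plane elasticity
A = 1.0                            # arbitrary stress intensity scale
for h in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6]:
    r = h / 2.0                    # sampling point roughly half an element away from the corner
    sigma = A * r ** (lam - 1.0)   # exact stress grows like r**(-0.456) as r -> 0
    print(f"element size {h:.0e}:  sampled stress ~ {sigma:10.1f}")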

In engineering applications of the finite element method, small geometric features, such as fillets, are often neglected, resulting in sharp corners and edges. This may be permissible outside of the domain of primary interest, however, the quantities of interest within the domain of primary interest may be polluted by errors coming from singular points or edges [3].

Figure 2: Contours of the von Mises stress corresponding to the finite element solution on an 18-element geometrically graded mesh, p = 8.

In this model problem, the singularity was caused by a sharp corner. Singularities can be caused by abrupt changes in material properties, loading, and constraint conditions as well.

Outlook

A high level of expertise is required for properly designing a discretization scheme. Experts take into consideration the information contained in the input data and use that information to estimate the regularity of the exact solution. This guides the design of the finite element mesh and the assignment of polynomial degrees.  Feedback information can be utilized to revise and update the discretization scheme when necessary [4].

Explainable artificial intelligence (XAI) tools can provide high-quality guidance in the design of the initial discretization, based on the information content of the input data, and in the management of feedback information. It is essential that these tools be trained on the scientific principles of finite element analysis.


References

[1] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation, 2nd ed. John Wiley & Sons, Inc., 2021.

[2] Szabó, B. and Babuška, I. Finite Element Analysis. John Wiley & Sons, Inc., 1991.

[3] Babuška, I., Strouboulis, T., Upadhyay, C.S. and Gangaraj, S.K. A posteriori estimation and adaptive control of the pollution error in the h‐version of the finite element method. International Journal for Numerical Methods in Engineering, 38(24), pp. 4207-4235, 1995.

[4] Babuška, I. and Rank, E. An expert-system-like feedback approach in the hp-version of the finite element method. Finite Elements in Analysis and Design, 3(2), pp.127-147, 1987.


The Story of the P-version in a Nutshell
https://www.esrd.com/story-of-p-version-in-a-nutshell/
Fri, 01 Dec 2023

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The idea of achieving convergence by increasing the polynomial degree (p) of the approximating functions on a fixed mesh, known as the p-version of the finite element method, was at odds with the prevailing view in the finite element research community in the 1960s and 70s.

The accepted paradigm was that elements should have a fixed polynomial degree, and convergence should be achieved by decreasing the size of the largest element of the mesh, denoted by h. This approach came to be called the h-version of the finite element method. This view shaped the software architecture of legacy finite element codes in ways that made them inhospitable to later developments.

The finite element research community rejected the idea of the p-version of the finite element method with nearly perfect unanimity, predicting that “it would never work”. The reasons given are listed below.

Why the “p-version would never work”?

The first objection was that the system of equations would become ill-conditioned at high p-levels. − This problem was solved by proper selection of the basis functions [1].

The second objection was that high-order elements would require excessive computer time. −  This problem was solved by proper ordering of the operations. If the task is stated in this way: “Compute (say) the maximum normal stress and verify that the result is accurate to within (say) 5 percent relative error”, then the p-version will require substantially fewer machine cycles than the h-version and virtually no user intervention.

The third objection was that mappings, other than isoparametric and subparametric mappings, fail to represent rigid body displacements exactly.  − This is true but unimportant because the errors associated with rigid body modes converge to zero very fast [1].

The fourth objection was that solutions obtained using high-order elements oscillate in the neighborhoods of singular points. – This is true; nevertheless, the rate of convergence of the p-version is higher because the oscillations are confined to the neighborhoods of singular points and the p-version is very efficient elsewhere [1].

The fifth objection was the hardest one to overcome. There was a theoretical estimate of the error of approximation in energy norm which states:

||\boldsymbol u_{EX} -\boldsymbol u_{FE}||_E \le Ch^{\min(\lambda,p)}  \quad (1)

On the left of this inequality is the error of approximation in energy norm; on the right, C is a positive constant, h is the size of the largest element of the mesh, λ is a measure of the regularity of the exact solution, usually a number less than one, and p is the polynomial degree. The argument was that since λ is usually a small number, it does not matter how high p is; it will not affect the rate of convergence. This estimate is correct for the h-version; however, because C depends on p, it is not correct for the p-version [2].
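For comparison, the estimate established for the p-version in [2], for the common case in which the singular point (with smoothness index λ) coincides with a vertex of the mesh, can be written roughly as

||\boldsymbol u_{EX} -\boldsymbol u_{FE}||_E \le \frac{C(\lambda)}{p^{2\lambda}} \approx \frac{C'}{N^{\lambda}}  \quad (2)

where N is proportional to p squared on a fixed two-dimensional mesh. In terms of N, this is twice the asymptotic rate delivered by the h-version at fixed p, which is why the p-dependence hidden in the constant C of inequality (1) matters.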

(From left to right) Norman Katz, Ivo Babuška and Barna Szabó.

The sixth objection was that the p-version is not suitable for solving nonlinear problems. – This objection was answered when the German Research Foundation (DFG) launched a project in 1994 that involved nine university research institutes. The aim was to investigate adaptive finite element methods with reference to problems in the mechanics of solids [3]. The research was led by professors of mathematics and engineering.

As part of this project, the question of whether the p-version can be used for solving nonlinear problems was addressed. The researchers agreed to investigate a two-dimensional nonlinear model problem. The exact solution of the model problem was not known; therefore, a highly refined mesh with millions of degrees of freedom was used to obtain a reference solution. This is the “overkill” method. The researchers unanimously agreed at the start of the project that the refinement was sufficient, so that the corresponding finite element solution could be used as if it were the exact solution.

Professor Ernst Rank and Dr. Alexander Düster, of the Department of Construction Informatics of the Technical University of Munich, showed that the p-version can achieve significantly better results than the h-version, even when compared with adaptive mesh refinement, and recommended further investigation of complex material models with the p-version [4]. They were also able to show that the reference solution was not accurate enough. With this, the academic debate was decided in favor of the p-version. I attended the concluding conference held at the University of Hannover (now Leibniz University Hannover).

Understanding the Finite Element Method

The finite element method is properly understood as a numerical method for the solution of ordinary and partial differential equations cast in a variational form. The error of approximation is controlled by both the finite element mesh and the assignment of polynomial degrees [2]. 

The separate labels of h- and p-version exist for historical reasons since both the mesh (h) and the assignment of polynomial degrees (p) are important in finite element analysis. Hence, the h- and p-versions should not be seen as competing alternatives, but rather as integral components of an adaptable discretization strategy. Note that a code that has p-version capabilities can always be operated as an h-version code, but not the other way around.

There are other discretization strategies named X-FEM, Isogeometric Analysis, etc. They have advantages for certain classes of problems, but they lack the generality, adaptability, and efficiency of the finite element method implemented with p-version capabilities.

Outlook

Explainable Artificial Intelligence (XAI) will impose the requirements of reliability, traceability, and auditability on numerical simulation. This will lead to the adoption of methods that support solution verification and hierarchic modeling approaches in the engineering sciences.  

Artificial intelligence tools will have the capability to produce smart discretizations based on the information content of the problem definition. The p-version, used in conjunction with properly designed meshes, is expected to play a pivotal role in that process.


References

[1] B. Szabó and I. Babuška, Finite Element Analysis. John Wiley & Sons, Inc., 1991.

[2] I. Babuška, B. Szabó and I. N. Katz, The p-version of the finite element method. SIAM J. Numer. Anal., Vol. 18, pages 515-545, 1981.

[3] E. Ramm, E. Rank, R. Rannacher, K. Schweizerhof, E. Stein, W. Wendland, G. Wittum, P. Wriggers, and W. Wunderlich, Error-controlled Adaptive Finite Elements in Solid Mechanics, edited by E. Stein. John Wiley & Sons Ltd., Chichester 2003.

[4] A. Düster and E. Rank, The p-version of the finite element method compared to an adaptive h-version for the deformation theory of plasticity. Computer Methods in Applied Mechanics and Engineering, Vol. 190, pages 1925-1935, 2001.


Why Finite Element Modeling is Not Numerical Simulation?
https://www.esrd.com/why-finite-element-modeling-is-not-numerical-simulation/
Thu, 02 Nov 2023

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The term “simulation” is often used interchangeably with “finite element modeling” in the engineering literature and marketing materials.  It is important to understand the difference between the two.

The Origins of Finite Element Modeling

Finite element modeling is a practice rooted in the 1960s and 70s.  The development of the finite element method began in 1956 and was greatly accelerated during the US space program in the 1960s. The pioneers were engineers who were familiar with the matrix methods of structural analysis and sought to extend those methods to solve the partial differential equations that model the behavior of elastic bodies of arbitrary geometry subjected to various loads.   The early papers and the first book on the finite element method [1], written when our understanding of the subject was just a small fraction of what it is today, greatly influenced the idea of finite element modeling and its subsequent implementations.

Guided by their understanding of models for structural trusses and frames, the early code developers formulated finite elements for two- and three-dimensional elasticity problems, plate and shell problems, etc. They focused on getting the stiffness relationships right, subject to the limitations imposed by the software architecture on the number of nodes per element and the number of degrees of freedom per node.  They observed that elements of low polynomial degree were “too stiff”.  The elements were then “softened” by using fewer integration points than necessary.  This caused “hourglassing” (zero-energy modes) to occur, which was fixed by “hourglass control”.  For example, the formulation of the element designated as C3D8R and described as “8-node linear brick, reduced integration with hourglass control” in the Abaqus Analysis User’s Guide [2] was based on such considerations.
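The zero-energy modes mentioned above are easy to demonstrate. The following self-contained Python/NumPy sketch, an illustration of the mechanism rather than the formulation of any particular commercial element, assembles the stiffness matrix of a single bilinear plane-stress quadrilateral with full 2×2 Gauss integration and with one-point reduced integration, then counts the zero-energy modes. With full integration only the three rigid-body modes have zero energy; with reduced integration two spurious hourglass modes appear as well. The material values are illustrative.

import numpy as np

# Plane-stress material matrix; E and nu are illustrative values.
E, nu = 70000.0, 0.33
D = (E / (1.0 - nu**2)) * np.array([[1.0, nu, 0.0],
                                    [nu, 1.0, 0.0],
                                    [0.0, 0.0, (1.0 - nu) / 2.0]])

# Square element occupying [-1, 1] x [-1, 1]: physical coordinates coincide with
# the natural coordinates, so the Jacobian is the identity and detJ = 1.
xi_n  = np.array([-1.0,  1.0, 1.0, -1.0])   # nodal xi coordinates
eta_n = np.array([-1.0, -1.0, 1.0,  1.0])   # nodal eta coordinates

def B_matrix(xi, eta):
    """Strain-displacement matrix of the 4-node bilinear quad at (xi, eta)."""
    dN_dx = 0.25 * xi_n  * (1.0 + eta * eta_n)   # = dN/dxi (Jacobian is identity)
    dN_dy = 0.25 * eta_n * (1.0 + xi  * xi_n)    # = dN/deta
    B = np.zeros((3, 8))
    B[0, 0::2] = dN_dx          # eps_xx = du/dx
    B[1, 1::2] = dN_dy          # eps_yy = dv/dy
    B[2, 0::2] = dN_dy          # gamma_xy = du/dy + dv/dx
    B[2, 1::2] = dN_dx
    return B

def stiffness(points, weights):
    K = np.zeros((8, 8))
    for (xi, eta), w in zip(points, weights):
        B = B_matrix(xi, eta)
        K += w * B.T @ D @ B    # detJ = 1 for this element
    return K

g = 1.0 / np.sqrt(3.0)
K_full    = stiffness([(-g, -g), (g, -g), (g, g), (-g, g)], [1.0] * 4)   # 2x2 Gauss
K_reduced = stiffness([(0.0, 0.0)], [4.0])                               # 1-point "reduced"

def count_zero_energy_modes(K, rel_tol=1e-9):
    eigenvalues = np.linalg.eigvalsh(K)
    return int(np.sum(eigenvalues < rel_tol * eigenvalues.max()))

print("zero-energy modes, full integration:   ", count_zero_energy_modes(K_full))     # 3 (rigid-body)
print("zero-energy modes, reduced integration:", count_zero_energy_modes(K_reduced))  # 5 (3 rigid + 2 hourglass)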

Through an artful combination of elements and the finite element mesh, the code developers were able to show reasonable correspondence between the solutions of some simple problems and the finite element solutions.  It is a logical fallacy, called the fallacy of composition, to assume that elements that performed well in particular situations will also perform well in all situations.

The Science of Finite Element Analysis

Investigation of the mathematical foundations of finite element analysis (FEA) began in the early 1970s.  Mathematicians understand FEA as a method for obtaining an approximation to the exact solution of a well-defined mathematical problem, such as a problem of elasticity.  Specifically, the finite element solution uFE has to converge to the exact solution uEX in a norm (which depends on the formulation) as the number of degrees of freedom n is increased:

\lim_{n \to \infty} ||\boldsymbol u_{EX} -\boldsymbol u_{FE}|| = 0

Under conditions that are usually satisfied in practice, it is known that uEX exists and it is unique.

The first mathematical book on finite element analysis was published in 1973 [3].  Looking at the engineering papers and contemporary implementations, the authors identified four types of error, called “variational crimes”. These are (1) non-conforming elements, (2) numerical integration, (3) approximation of the domain and boundary conditions, and (4) mixed methods. In fact, many other kinds of variational crimes commonly occur in finite element modeling, such as using point forces, point constraints, and reduced integration.

By the mid-1980s the mathematical foundations of FEA were substantially established.   It was known how to design finite element meshes and assign polynomial degrees so as to achieve optimal or nearly optimal rates of convergence, how to extract the quantities of interest from the finite element solution, and how to estimate their errors.  Finite element analysis became a branch of applied mathematics.

By that time the software architectures of the large finite element codes used in current engineering practice were firmly established. Unfortunately, they were not flexible enough to accommodate the new technical requirements that arose from scientific understanding of the finite element method. Thus, the pre-scientific origins of finite element analysis became petrified in today’s legacy finite element codes.

Figure 1 shows an example that would be extremely difficult, if not impossible, to solve using legacy finite element analysis tools:

Figure 1: Lug-clevis-pin assembly. The lug is made of 16 fiber-matrix composite plies and 5 titanium plies. The model accounts for mechanical contact as well as the nonlinear deformation of the titanium plies. Solution verification was performed.

Notes on Tuning

On a sufficiently small domain of calibration any model, even a finite element model laden with variational crimes, can produce results that appear reasonable and can be tuned to match experimental observations. We use the term tuning to refer to the artful practice of balancing two large errors in such a way that they nearly cancel each other out. One error is conceptual:  Owing to variational crimes, the numerical solution does not converge to a limit value in the norm of the formulation as the number of degrees of freedom is increased. The other error is numerical: The discretization error is large enough to mask the conceptual error [4].

Tuning can be effective in structural problems, such as automobile crash dynamics and load models of airframes, where the force-displacement relationships are of interest.  Tuning is not effective, however, when the quantities of interest are stresses or strains at stress concentrations.  Therefore finite element modeling is not well suited for strength calculations.

Solution Verification is Mandatory

Solution verification is an essential technical requirement for democratization, model development, and applications of mathematical models.  Legacy FEA software products were not designed to meet this requirement. 

There is a general consensus that numerical simulation will have to be integrated with explainable artificial intelligence (XAI) tools.  This can be successful only if mathematical models are free from variational crimes.

The Main Points

Owing to limitations in their infrastructure, legacy finite element codes have not kept pace with important developments that occurred after the mid-1970s.

The practice of finite element modeling will have to be replaced by numerical simulation.  The changes will be forced by the technical requirements of XAI.

References

[1]  O. C. Zienkiewicz and Y. K. Cheung, The Finite Element Method in Structural and Continuum Mechanics, London: McGraw-Hill, 1967.

[2] Abaqus Analysis User’s Guide. http://130.149.89.49:2080/v6.14/books/usb/default.htm

[3] G. Strang and G. J. Fix, An Analysis of the Finite Element Method, Englewood Cliffs, NJ: Prentice-Hall, 1973.

[4] B. Szabó and I. Babuška, Finite Element Analysis: Method, Verification and Validation, 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc., 2021.

Obstacles to Progress
https://www.esrd.com/obstacles-to-progress/
Tue, 24 Oct 2023

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Thomas Kuhn, a professor at MIT and a highly influential philosopher of science, was interested in how science progresses as opposed to how it is generally believed to be progressing.  He found that progress occurs in fits and starts, rather than through a steady accumulation of knowledge.  Typically, a period of normal science is followed by a period of stagnation which is prolonged by the tendency of professionals to develop dogmatic adherence to a paradigm.  In the period of stagnation, evidence accumulates that the methodology being developed is incapable of handling certain classes of problems.  This leads to a model crisis, followed by a paradigm shift and the start of a new phase of normal science.

Photograph of Thomas Kuhn (via Magdalenaday.com).

While Kuhn was thinking of science as a whole, his observations are particularly fitting in the applied sciences where changing an accepted paradigm is greatly complicated by the fact that methods based on it may have been incorporated in the workflows of industrial organizations.

The development of the finite element method (FEM) follows a similar but more complex pattern consisting of two main branches: the art of finite element modeling and the science of finite element analysis.

The Art of Finite Element Modeling

The art of finite element modeling evolved from the pioneering work of engineers in the aerospace sector.  They were familiar with the matrix methods of structural analysis and sought to extend those methods to the solution of elastostatic problems, initially in two dimensions.  They constructed triangular and quadrilateral elements by establishing linear relationships between nodal forces and displacements.

This work was greatly accelerated by the US space program in the 1960s.   In 1965 NASA awarded a contract for the development of a “general purpose” finite element analysis program, which was later named NASTRAN.  NASTRAN and the other legacy codes were designed based on the understanding of the finite element method that existed in the 1960s.  Unfortunately, the software architecture of legacy codes imposed limitations that prevented these codes from keeping pace with subsequent scientific developments in finite element analysis.

Legacy finite element codes were designed to support finite element modeling, which is the intuitive construction of a numerical problem by assembling elements from the library of a legacy finite element software product.  Through artful selection of the elements, the constraints, and the loads, the force-displacement relationships can be estimated with reasonable accuracy. Note that a nodal force is an abstract entity, derived from the generalized formulation, not to be confused with concentrated forces, which are inadmissible in two- and three-dimensional elasticity.  This point was not yet clearly understood by the developers of legacy codes, who relied on early papers and the first book [1] on the finite element method.
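The sense in which a nodal force is derived from the generalized formulation can be illustrated with the work-equivalent (consistent) load vector, f_i = ∫ N_i t ds, for a uniform traction t acting on an element edge. The short Python sketch below, with an arbitrary traction and edge length, computes these forces for a linear and a quadratic edge; for the quadratic edge the end nodes receive one-sixth of the total load each and the midside node two-thirds, quite unlike an intuitive lumped assignment.

import numpy as np

def consistent_nodal_forces(shape_funcs, traction=1.0, length=2.0, n_gauss=3):
    """Work-equivalent nodal forces f_i = integral of N_i * t ds over an edge
    parametrized by xi in [-1, 1]; ds = (length / 2) dxi."""
    xi, w = np.polynomial.legendre.leggauss(n_gauss)
    forces = np.zeros(len(shape_funcs))
    for x, wt in zip(xi, w):
        for i, N in enumerate(shape_funcs):
            forces[i] += wt * N(x) * traction * (length / 2.0)
    return forces

# Linear (2-node) edge shape functions
linear = [lambda x: 0.5 * (1 - x), lambda x: 0.5 * (1 + x)]
# Quadratic (3-node) edge shape functions: end node, midside node, end node
quadratic = [lambda x: 0.5 * x * (x - 1), lambda x: 1 - x**2, lambda x: 0.5 * x * (x + 1)]

total = 1.0 * 2.0  # traction * edge length
print("linear edge, fraction of total load:   ", consistent_nodal_forces(linear)    / total)  # [0.5, 0.5]
print("quadratic edge, fraction of total load:", consistent_nodal_forces(quadratic) / total)  # [1/6, 2/3, 1/6]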

The Science of Finite Element Analysis

Exploration of the mathematical foundations of the finite element method began in the early 1970s, well after the architecture of legacy finite element software products took shape.  The finite element method was viewed as a method by which the exact solutions of partial differential equations cast in variational form are approximated [2].  Of interest are: (a) the rate of convergence in a norm that depends on the formulation, (b) the stability of the sequence of numerical problems corresponding to an increasing number of degrees of freedom, (c) the estimation and control of the errors of approximation in the quantities of interest.

The mathematical foundations of finite element analysis were substantially established by the mid-1980s, and finite element analysis emerged as a branch of applied mathematics.

Stagnation in Finite Element Modeling

Legacy finite element codes came to be widely used in engineering practice before the theoretical foundations of the finite element method were firmly established. This led to the emergence of a culture of finite element modeling based on the pre-scientific understanding of the finite element method. There were attempts to incorporate adaptive control of the errors of approximation; however, these attempts failed because adaptive error control is possible only when the underlying mathematical problem is well defined (i.e., an exact solution exists), which is not the case for most industrial-scale finite element models.

The primary causes of stagnation are:

  • The organizations that rely on computed information have not required solution verification which is an essential technical requirement in numerical simulation. 
  • The vendors of legacy finite element software tools have not kept pace with the growth of the knowledge base of the finite element method.

Outlook

The knowledge base of finite element analysis (FEA) is currently much larger than what is available to practicing engineers through legacy finite element software tools. Linking numerical simulation with explainable artificial intelligence (XAI) tools will impose requirements for reliability, traceability, and auditability. To meet those requirements, software vendors will have to abandon old paradigms and implement state-of-the-art algorithms for solution verification and hierarchical modeling [3].

References

[1] Zienkiewicz, O.C. and Cheung, Y.K. The Finite Element Method in Structural and Continuum Mechanics. McGraw-Hill, 1967.

[2] Babuška, I. and Aziz, A.K. Lectures on mathematical foundations of the finite element method. Report ORO-3443-42; BN-748. University of Maryland, College Park, Institute for Fluid Dynamics and Applied Mathematics, 1972.

[3] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation, 2nd ed. John Wiley & Sons, Inc., 2021.
