Discretization Strategy Archives - ESRD

Why Worry About Singularities?

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


A mathematician delivered a keynote presentation at an engineering conference some years ago. At the coffee break following the presentation, a highly respected senior developer of a legacy finite element code remarked: “I do not understand why the speaker was so worried about singularities. We never see them.”

In the context of the keynote presentation, singularities were understood to be properties of the exact solutions of mathematical problems approximated by the finite element method. Singularities occur at points where the exact solution lacks differentiability or analyticity. The remark, on the other hand, was made in the context of finite element modeling, where a numerical problem is constructed without considering the underlying mathematical problem. The remark highlights the lack of a common language between the pre-scientific notion of finite element modeling and finite element analysis, which is a branch of applied mathematics.

Why Do Mathematicians Worry About Singularities?

Mathematicians understand finite element analysis (FEA) as a method for obtaining an approximation to the exact solution of a well-defined mathematical problem, such as a problem of elasticity, cast in variational form. Specifically, the finite element solution u_FE converges to the exact solution u_EX in a norm (which depends on the variational form) as the number of degrees of freedom N is increased. An important question is: how fast does it converge? For most practical problems, convergence is quantified by the inequality:

||\boldsymbol u_{EX} -\boldsymbol u_{FE}||_E \le \frac{C}{N^{\beta}}  \quad (1)

where the left-hand side is the energy norm of the difference between the exact and the finite element solutions (which is closely related to the root-mean-square error in stress), C and β are positive constants, and β is called the rate of convergence. The size of β depends on the regularity (smoothness) of u_EX and on the scheme used for increasing N. The details are available in textbooks; see, for example, [1]. The smoothness of u_EX is quantified by a positive number λ. In many practical problems 0 < λ < 1.
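When the exact solution is known, as in benchmark problems, or when the error is estimated by extrapolation, the observed rate can be checked directly. Assuming the error model (1) holds, any two solutions in a sequence of discretizations yield

\beta \approx \frac{\log \left( ||\boldsymbol u_{EX}-\boldsymbol u_{FE}^{(1)}||_E \, / \, ||\boldsymbol u_{EX}-\boldsymbol u_{FE}^{(2)}||_E \right)}{\log \left( N_2/N_1 \right)}

where u_FE^(1) and u_FE^(2) are the solutions corresponding to N_1 and N_2 degrees of freedom. Comparing the observed rate with the expected one is a basic solution verification check.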

For instance, consider the two-dimensional elasticity problem on the L-shaped domain, a frequently used benchmark problem, where λ equals 0.544. This is a manufactured problem with a known exact solution, allowing for the calculation of approximation errors [2].
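To indicate where this number comes from: in the neighborhood of a re-entrant corner the exact solution of the elasticity problem can be written, schematically, as

\boldsymbol u_{EX} = A\, r^{\lambda}\, \boldsymbol \Psi(\theta) + \text{smoother terms}, \qquad \sigma \sim r^{\lambda - 1} \ \text{as} \ r \to 0

where r is the distance from the corner, Ψ(θ) is a vector-valued angular function, and A is a generalized stress intensity factor. Since 0 < λ < 1, the stresses are unbounded at the corner even though the displacement remains finite. The value of λ depends on the corner angle, the material properties, and the boundary conditions; 0.544 is the dominant exponent for this particular configuration.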

Referring to Figure 1, if uniform mesh refinement is used at a fixed polynomial degree (h-extension), then β = λ/2 = 0.272. If the polynomial degree is increased on a fixed uniform mesh (p-extension), then β = λ = 0.544. If p-extension is used on a mesh that is graded in a geometric progression toward the singular point, then, for large N, we still have β = λ = 0.544; however, convergence is much faster at small N values.
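The geometric grading mentioned above is easy to describe. Taking the singular point as the origin and the domain size as unity, the layer boundaries of an m-layer graded mesh are placed at

x_0 = 0, \qquad x_k = q^{\,m-k}, \quad k = 1, 2, \dots, m

where q is the grading factor; a value of about 0.15 is the choice commonly recommended in the p- and hp-version literature, although the theoretical optimum depends on λ, so 0.15 should be read as a typical rather than a universal value.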

Assume that we wish to reduce the relative error in the energy norm to 1 percent. If we increase the polynomial degree uniformly on a geometrically graded mesh (p-extension), then we have to solve fewer than 10^3 simultaneous equations. In contrast, if we use uniform mesh refinement with p = 2 (h-extension), then we have to solve about 10^7 equations; the ratio is roughly 10^4. Solving 10^3 equations took less than one second on a desktop computer. If we assume that the solution time is proportional to the square of the number of degrees of freedom, then achieving 1% relative error with uniform mesh refinement would take about 10^8 seconds, or 3.2 years. This shows that the errors of approximation can be controlled only through proper design of the discretization scheme, which requires taking the characteristics of the underlying mathematical problem into consideration.
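The arithmetic behind such comparisons is easy to reproduce. The short Python sketch below fits the constants in estimate (1) to two data points from a convergence study and extrapolates the number of degrees of freedom needed for a prescribed relative error. The numerical inputs are placeholders chosen only to be consistent with the rates quoted above; they are not results of an actual computation.

import math

def fit_rate(N1, e1, N2, e2):
    # Fit e = C / N**beta to two (N, relative error in energy norm) pairs.
    beta = math.log(e1 / e2) / math.log(N2 / N1)
    C = e1 * N1 ** beta
    return C, beta

def dofs_for_target(C, beta, target):
    # Smallest N for which C / N**beta <= target.
    return (C / target) ** (1.0 / beta)

# Placeholder data points -- replace with errors from a real convergence run.
h_run = [(1.0e3, 0.20), (4.0e3, 0.137)]   # uniform refinement, p fixed
p_run = [(1.0e2, 0.10), (4.0e2, 0.047)]   # increasing p, graded mesh

C_h, beta_h = fit_rate(*h_run[0], *h_run[1])
C_p, beta_p = fit_rate(*p_run[0], *p_run[1])

print("h-extension: beta = %.3f, N for 1%% error = %.2e" % (beta_h, dofs_for_target(C_h, beta_h, 0.01)))
print("p-extension: beta = %.3f, N for 1%% error = %.2e" % (beta_p, dofs_for_target(C_p, beta_p, 0.01)))

Because the p-extension on a graded mesh converges faster than a pure power law at small N, such an extrapolation is conservative for the p-version; the counts quoted above come from the computed convergence curves shown in Figure 1.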

Figure 1: The L-shaped domain problem. Convergence curves for uniform mesh refinement at a fixed polynomial degree (h-extension), increasing polynomial degree on a fixed uniform mesh (p-extension), and increasing polynomial degree on a geometrically graded fixed finite element mesh consisting of 18 elements.

Why Should Engineers Worry About Singularities?

If the solution of the underlying mathematical problem has singular points, as in the case of the L-shaped domain problem, then the goal of the computation cannot be the determination of the maximum stress. The finite element solution predicts finite values for the stress; however, the predicted maximum stress increases without bound as N is increased. The error in the maximum stress is infinitely large even if the root-mean-square error in stress over the entire domain is negligibly small. This is illustrated in Figure 2, where the von Mises stress corresponding to the finite element solution on the 18-element geometrically graded mesh with p = 8 is displayed.

In engineering applications of the finite element method, small geometric features, such as fillets, are often neglected, resulting in sharp corners and edges. This may be permissible outside the domain of primary interest; however, the quantities of interest within the domain of primary interest may be polluted by errors emanating from singular points or edges [3].

Figure 2: Contours of the von Mises stress corresponding to the finite element solution on an 18-element geometrically graded mesh, p = 8.

In this model problem, the singularity was caused by a sharp corner. Singularities can be caused by abrupt changes in material properties, loading, and constraint conditions as well.

Outlook

A high level of expertise is required for properly designing a discretization scheme. Experts take into consideration the information contained in the input data and use that information to estimate the regularity of the exact solution. This guides the design of the finite element mesh and the assignment of polynomial degrees.  Feedback information can be utilized to revise and update the discretization scheme when necessary [4].

Explainable artificial intelligence (XAI) tools can provide high-quality guidance in the design of the initial discretization, based on the information content of the input data, and in the management of feedback information. It is essential that these tools be trained on the scientific principles of finite element analysis.


References

[1] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation, 2nd ed. John Wiley & Sons, Inc., 2021.

[2] Szabó, B. and Babuška, I. Finite Element Analysis. John Wiley & Sons, Inc., 1991.

[3] Babuška, I., Strouboulis, T., Upadhyay, C.S. and Gangaraj, S.K. A posteriori estimation and adaptive control of the pollution error in the h‐version of the finite element method. International Journal for Numerical Methods in Engineering, 38(24), pp. 4207-4235, 1995.

[4] Babuška, I. and Rank, E. An expert-system-like feedback approach in the hp-version of the finite element method. Finite Elements in Analysis and Design, 3(2), pp.127-147, 1987.


The Story of the P-version in a Nutshell

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The idea of achieving convergence by increasing the polynomial degree (p) of the approximating functions on a fixed mesh, known as the p-version of the finite element method, was at odds with the prevailing view in the finite element research community in the 1960s and 70s.

The accepted paradigm was that elements should have a fixed polynomial degree, and convergence should be achieved by decreasing the size of the largest element of the mesh, denoted by h. This approach came to be called the h-version of the finite element method. This view greatly influenced the development of the software architecture of legacy finite element software in ways that made it inhospitable for later developments. 

The finite element research community rejected the idea of the p-version of the finite element method with nearly perfect unanimity, predicting that “it would never work”. The reasons given are listed below.

Why the “p-version would never work”?

The first objection was that the system of equations would become ill-conditioned at high p-levels. – This problem was solved by proper selection of the basis functions [1].

The second objection was that high-order elements would require excessive computer time. – This problem was solved by proper ordering of the operations. If the task is stated as: “Compute (say) the maximum normal stress and verify that the result is accurate to within (say) 5 percent relative error,” then the p-version requires substantially fewer machine cycles than the h-version and virtually no user intervention.

The third objection was that mappings other than isoparametric and subparametric mappings fail to represent rigid body displacements exactly. – This is true but unimportant because the errors associated with rigid body modes converge to zero very fast [1].

The fourth objection was that solutions obtained using high-order elements oscillate in the neighborhoods of singular points. – This is true, but the oscillations are confined to the neighborhood of the singular points, and the p-version is very efficient elsewhere, so its overall rate of convergence remains stronger than that of the h-version [1].

The fifth objection was the hardest one to overcome. There was a theoretical estimate of the error of approximation in the energy norm, which states:

||\boldsymbol u_{ex} -\boldsymbol u_{fe}||_E \le Ch^{\min(\lambda,p)}  \quad (1)

On the left of this inequality is the error of approximation in the energy norm; on the right, C is a positive constant, h is the size of the largest element of the mesh, λ is a measure of the regularity of the exact solution (usually a number less than one), and p is the polynomial degree. The argument was that since λ is usually a small number, it does not matter how high p is; raising p will not affect the rate of convergence. This estimate is correct for the h-version; however, because C depends on p, it is not correct for the p-version [2].
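Expressed in terms of the number of degrees of freedom N rather than the element size h, the difference becomes visible. For a two-dimensional problem with a corner singularity, the asymptotic relations take the form shown below (these are the rates seen in the L-shaped domain benchmark discussed in the preceding post; the constants differ between the two methods):

||\boldsymbol u_{ex} -\boldsymbol u_{fe}||_E \le \frac{C_h}{N^{\lambda/2}} \ \ \text{(h-extension)}, \qquad ||\boldsymbol u_{ex} -\boldsymbol u_{fe}||_E \le \frac{C_p}{N^{\lambda}} \ \ \text{(p-extension)}

so, measured against N, the p-version converges at twice the rate of the h-version when the exact solution has a corner singularity.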

(From left to right) Norman Katz, Ivo Babuška and Barna Szabó.

The sixth objection was that the p-version is not suitable for solving nonlinear problems. – This objection was answered when the German Research Foundation (DFG) launched a project in 1994 that involved nine university research institutes. The aim was to investigate adaptive finite element methods with reference to problems in the mechanics of solids [3]. The research was led by professors of mathematics and engineering.

As part of this project, the question of whether the p-version can be used for solving nonlinear problems was addressed. The researchers agreed to investigate a two-dimensional nonlinear model problem. The exact solution of the model problem was not known; therefore, a highly refined mesh with millions of degrees of freedom was used to obtain a reference solution. This is known as the “overkill” method. The researchers unanimously agreed at the start of the project that the refinement was sufficient, so that the corresponding finite element solution could be used as if it were the exact solution.
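In the overkill approach the relative error of each trial solution is measured against the reference solution in place of the unknown exact solution; schematically, with u_i denoting a trial solution and u_ref the reference,

e_i \approx \frac{||\boldsymbol u_{ref} - \boldsymbol u_i||_E}{||\boldsymbol u_{ref}||_E}

This is legitimate only while the reference is substantially more accurate than every solution being judged; as it turned out in this project, that assumption did not hold.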

Professor Ernst Rank and Dr. Alexander Düster, of the Department of Construction Informatics of the Technical University of Munich, showed that the p-version can achieve significantly better results than the h-version, even when compared with adaptive mesh refinement, and recommended further investigation of complex material models with the p-version [4]. They were also able to show that the reference solution was not accurate enough. With this, the academic debate was decided in favor of the p-version. I attended the concluding conference held at the University of Hannover (now Leibniz University Hannover).

Understanding the Finite Element Method

The finite element method is properly understood as a numerical method for the solution of ordinary and partial differential equations cast in a variational form. The error of approximation is controlled by both the finite element mesh and the assignment of polynomial degrees [2]. 
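In symbols, written generically (here E(Ω) denotes the energy space, B the bilinear form, and F the load functional of the variational problem), the exact and finite element solutions satisfy

B(\boldsymbol u_{EX}, \boldsymbol v) = F(\boldsymbol v) \ \ \text{for all} \ \boldsymbol v \in E(\Omega), \qquad B(\boldsymbol u_{FE}, \boldsymbol v) = F(\boldsymbol v) \ \ \text{for all} \ \boldsymbol v \in S \subset E(\Omega)

where S is the finite element space spanned by the basis functions defined by the mesh and the assigned polynomial degrees. When B is symmetric and positive definite, u_FE is the element of S closest to u_EX in the energy norm, which is why both the mesh and the polynomial degrees control the error of approximation.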

The separate labels of h- and p-version exist for historical reasons since both the mesh (h) and the assignment of polynomial degrees (p) are important in finite element analysis. Hence, the h- and p-versions should not be seen as competing alternatives, but rather as integral components of an adaptable discretization strategy. Note that a code that has p-version capabilities can always be operated as an h-version code, but not the other way around.

There are other discretization strategies, such as X-FEM and Isogeometric Analysis. They have advantages for certain classes of problems, but they lack the generality, adaptability, and efficiency of the finite element method implemented with p-version capabilities.

Outlook

Explainable Artificial Intelligence (XAI) will impose the requirements of reliability, traceability, and auditability on numerical simulation. This will lead to the adoption of methods that support solution verification and hierarchic modeling approaches in the engineering sciences.  

Artificial intelligence tools will have the capability to produce smart discretizations based on the information content of the problem definition. The p-version, used in conjunction with properly designed meshes, is expected to play a pivotal role in that process.


References

[1] B. Szabó and I. Babuška, Finite Element Analysis. John Wiley & Sons, Inc., 1991.

[2] I. Babuška, B. Szabó and I. N. Katz, The p-version of the finite element method. SIAM J. Numer. Anal., Vol. 18, pages 515-545, 1981.

[3] E. Ramm, E. Rank, R. Rannacher, K. Schweizerhof, E. Stein, W. Wendland, G. Wittum, P. Wriggers, and W. Wunderlich, Error-controlled Adaptive Finite Elements in Solid Mechanics, edited by E. Stein. John Wiley & Sons Ltd., Chichester 2003.

[4] A. Düster and E. Rank, The p-version of the finite element method compared to an adaptive h-version for the deformation theory of plasticity. Computer Methods in Applied Mechanics and Engineering, Vol. 190, pages 1925-1935, 2001.

