Why Is Solution Verification Necessary?
By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA
We at ESRD preach and practice solution verification. We believe that reporting data computed by an approximate method is incomplete without providing an estimate of the size of the relative error. This simple and self-evident statement tends to trigger fierce resistance from those who were schooled in the use of legacy finite element modeling tools.
Persuading those folks is not easy because the conceptual framework and language of finite element modeling are very different from those of finite element analysis. I had the privilege of witnessing several debates between Olek Zienkiewicz, one of the pioneers of finite element modeling, and Ivo Babuška, who elevated finite element analysis to a scientific discipline.
In these debates, Zienkiewicz would make an intuitive statement supported by one or more examples. Babuška would then construct a counter-example showing that the statement was not generally valid. Zienkiewicz would modify his statement, and Babuška would show that the modified statement was not generally valid either, and so on. Engineers tend to extrapolate from particular examples, whereas mathematicians delight in constructing counter-examples.
In defense of my fellow engineers, I say this: If we had waited for the mathematicians, we would still be living in caves. However, now that we have come this far, we must not ignore what they have to say. This is especially true in our age of artificial intelligence.
The Key Differences Between FEM and FEA
It is important to understand the difference between finite element modeling (FEM) and finite element analysis (FEA). The development of FEM dates back to the 1960s, when engineers sought to extend the matrix methods of structural mechanics to problems in two- and three-dimensional elasticity, making several intuitively plausible but theoretically flawed assumptions. The development of FEA occurred later, with many important theorems being established in the 1980s.
The essential difference is that, whereas FEA is concerned with the numerical approximation of well-posed mathematical problems cast in variational form, FEM is an intuitive construction of a numerical problem that is usually not an approximation to a mathematical problem but stands on its own. Therefore, the concept of approximation error is well-defined in FEA but has no precise meaning in FEM.
Another important difference is the separation of model form errors from the approximation errors. In FEM, model form and approximation are conflated in the definition of finite elements. In FEA, on the other hand, the model form is associated with the definition of the mathematical problem, and the approximation errors are controlled independently through the finite element mesh and the polynomial degrees assigned to the elements.
Following are three frequently asked questions and the corresponding answers that highlight key technical issues pertaining to solution verification.
Question 1: If I do not know the exact solution of the mathematical problem being approximated by FEA, how can I compute the relative error of approximation?
Answer: It is not necessary to know the exact solution to obtain a reliable estimate of the relative error in the quantities of interest. We know that the exact values of the quantities of interest are finite numbers and independent of the choice of the numerical method used for finding an approximate solution. In the case of the finite element method, they are independent of the finite element mesh and the polynomial degree of the elements. Therefore, if we produce a converging sequence of finite element solutions either by mesh refinement or increasing the polynomial degrees, then the corresponding quantities of interest will converge to a limit. Estimating that limit from the sequence of solutions allows us to estimate the relative errors. This method has been thoroughly researched and explained in textbooks. See, for example, reference [1].
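The extrapolation idea described above can be sketched in a few lines of Python. This is a schematic illustration only, not the procedure from reference [1]: the function name and the sample numbers are made up, and the sketch assumes the error of the quantity of interest decreases roughly geometrically over the sequence of solutions.

```python
def estimate_limit(q1, q2, q3):
    """Estimate the limit of a converging sequence from three successive
    values of a quantity of interest (hypothetical helper; assumes the
    error shrinks by a roughly constant factor per refinement)."""
    denom = (q3 - q2) - (q2 - q1)
    if denom == 0:
        raise ValueError("sequence does not appear to be converging geometrically")
    # Aitken's delta-squared extrapolation of the limit
    return q3 - (q3 - q2) ** 2 / denom

# Made-up quantities of interest from three successive refinements
q = [98.0, 99.5, 99.875]
q_inf = estimate_limit(*q)
# Relative error of the finest solution, estimated against the limit
rel_err = abs(q_inf - q[-1]) / abs(q_inf)
print(f"estimated limit: {q_inf:.4f}, estimated relative error: {rel_err:.3%}")
```

With the sample numbers above, the errors shrink by a factor of four per step, so the extrapolated limit is recovered accurately; the analyst never needed the exact solution, only the converging sequence.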
Question 2: Since we do not know the applied loads with precision, why should we worry about the accuracy of the numerical approximation?
Answer: The premise of this question is wrong. To understand why, consider two scenarios: the application and formulation of design rules. In the application of design rules, engineers must show that certain quantities, like the maximum stress, do not exceed allowed values under specified loading conditions. Therefore, the loading conditions are fixed by the design rules. The design criterion is:
\Phi_{max}(u_{EX}) \le \Phi_{all}\qquad (1)
where Φall is the allowable value of the design variable Φ > 0, and Φmax(uEX) is the maximum value of Φ corresponding to the exact solution uEX. Of course, we do not know uEX; we know only the finite element solution uFE. Suppose that Φmax(uFE) underestimates Φmax(uEX) by a relative error τ:
\Phi_{max}(u_{EX})-\Phi_{max}(u_{FE}) = \tau\Phi_{max}(u_{EX}),\qquad 0\le\tau<1\qquad (2)
In this case, since equation (2) gives Φmax(uEX) = Φmax(uFE)/(1 − τ), equation (1) becomes:
\Phi_{max}(u_{FE}) \le (1-\tau)\Phi_{all}\qquad (3)
Two important conclusions follow from this result: (1) If we do not know the value of τ, then we are not in a position to say whether the design meets the design criteria. In other words, design certification is not possible. (2) Approximation errors penalize design by reducing allowable values. Since these values are chosen conservatively to account for uncertainties, the economic costs of using further reduced values to compensate for errors in the approximate solution far exceed those associated with ensuring the accuracy of the data of interest within a small relative error. Therefore, estimating the relative error in the quantities of interest is essential. Further discussion of this topic can be found in reference [1].
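The penalty described in conclusion (2) can be illustrated numerically. The following is a hypothetical sketch with made-up numbers, not a prescribed certification procedure: it simply restates criterion (3) in code, checking the computed maximum stress against the allowable value reduced by the error bound τ.

```python
def certifiable(phi_fe, phi_all, tau):
    """Check the design criterion given a verified relative error bound tau
    (hypothetical helper). From equation (2), Phi_max(u_EX) = Phi_max(u_FE)/(1 - tau),
    so criterion (1) is equivalent to Phi_max(u_FE) <= (1 - tau) * Phi_all."""
    assert 0.0 <= tau < 1.0
    return phi_fe <= (1.0 - tau) * phi_all

# Made-up numbers: allowable value 100 MPa, computed maximum stress 95 MPa
print(certifiable(95.0, 100.0, tau=0.02))  # tight error bound: criterion is met
print(certifiable(95.0, 100.0, tau=0.10))  # loose error bound: certification fails
```

The same computed stress passes or fails depending only on how tightly the approximation error has been bounded, which is why solution verification is inseparable from certification.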
Let us now turn to the problem of formulating design rules. Here, we are interested in defining the allowable values Φall, which are positive real numbers. These values characterize some hypothesis of failure that must be validated by experiments. For example, the hypothesis that a ductile material begins to yield when the von Mises stress reaches a critical value is one such hypothesis. The critical value of the von Mises stress cannot be observed directly but must be inferred from experiments in which the applied load is controlled, and local displacements and, possibly, strains are measured. Once again, the load is known precisely, and it is necessary to ensure that the relative error in any numerically determined quantity of interest is negligibly small. Similar considerations apply to all model development projects [2].
Question 3: Is it possible to estimate the relative error in the quantities of interest using legacy finite element codes?
Answer: Yes, provided that the elements chosen from the element library are properly formulated, the boundary conditions are properly defined, and the quantities of interest are finite numbers. When the software architecture of legacy codes was established in the 1960s and early 1970s, no provisions were made for error estimation. Therefore, this step remains rather challenging and is usually omitted in practice.
The First V in VVUQ
The first “V” in the acronym VVUQ stands for verification. Verification is like a three-legged stool: one leg is solution verification, and the other two are data and code verification. All three are key technical requirements in numerical simulation. Analysts are responsible for data and solution verification, while code developers are responsible for code verification and for providing the means to perform solution verification efficiently and reliably in industrial settings.
References
[1] Szabó, B. and Babuška, I., Introduction to Finite Element Analysis: Formulation, Verification and Validation. John Wiley & Sons Ltd., Chichester, UK, 2011.

[2] Szabó, B. and Actis, R., The demarcation problem in the applied sciences. Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.