Finite Element Libraries: Mixing the “What” with the “How”
By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA
Engineering students first learn statics, then strength of materials, and progress to the theories of plates and shells, continuum mechanics, and so on. As the course material advances from simple to complex, students often think that each theory (model) stands on its own, overlooking the fact that simpler models are special cases of complex ones. This view shaped the development of the finite element (FE) method in the 1960s and 70s. The software architecture of the legacy FE codes was established in that period.
The Element-Centric View
Richard MacNeal, a principal developer of NASTRAN and co-founder of the MacNeal-Schwendler Corporation (MSC), once told me that his dream was to formulate “the perfect 4-node shell element”. His background was in analog computers, and he thought of finite elements as tuneable objects: If one tunes an element just right, as potentiometers are tuned in analog computers, then a perfect element can be created. This element-centric view led to the implementation of large element libraries, which are still in use today. These libraries mix what we wish to solve (in this instance, a shell model) with how we wish to solve it (using 4-node finite elements).
A cluttered, unattractive library, emblematic of finite element libraries in legacy FE codes. Image generated by Gemini.
In formulating his shell element, MacNeal was constrained by the limitations of the architecture of NASTRAN. Quoting from reference [1]: “An important general feature of NASTRAN which limits the choice of element formulation is that, with rare exceptions, the degrees of freedom consist of the three components of translation and the three components of rotation at discrete points.” This feature originated from models of structural frames where the joints of beams and columns are allowed to translate and rotate in three mutually orthogonal directions. Such restrictions, common to all legacy FE codes, prevented those codes from keeping pace with the subsequent scientific development of FE analysis.
MacNeal’s formulation of his shell element was entirely intuitive. There is no proof that the finite element solutions corresponding to progressively refined meshes will converge to the exact solution of a particular shell model or even converge at all. Model form and approximation are intertwined.
The classical shell model, also known as the Novozhilov-Koiter (N-K) model, taught in advanced strength of materials classes, is based on the assumption that normals to the mid-surface in the undeformed configuration remain normal after deformation. Making this assumption was necessary in the pre-computer era to allow the solution of simple shell problems by classical methods. Today, the N-K shell model is only of theoretical and historical interest. Instead, we have a hierarchic sequence of shell models of increasing complexity. The next shell model is the Naghdi model, which is based on the assumption that normals to the mid-surface in the undeformed configuration remain straight lines but not necessarily normal. Higher-order models in the hierarchy permit the normal to deform in ways that can be well approximated by polynomials [2].
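To make the hierarchy concrete, here is a minimal sketch of the thickness-direction expansion on which hierarchic shell models are built. The notation (the thickness coordinate z, the polynomial orders m_x, m_y, m_z, and the field functions U) is chosen here for illustration and is not taken from the article:

```latex
% Displacement components expanded in the thickness coordinate z,
% with \phi_k(z) polynomials (e.g., Legendre polynomials):
u_x(x,y,z) \approx \sum_{k=0}^{m_x} \phi_k(z)\, U_{x,k}(x,y), \qquad
u_y(x,y,z) \approx \sum_{k=0}^{m_y} \phi_k(z)\, U_{y,k}(x,y), \qquad
u_z(x,y,z) \approx \sum_{k=0}^{m_z} \phi_k(z)\, U_{z,k}(x,y)
```

In this notation, the choice (m_x, m_y, m_z) = (1, 1, 0) corresponds to a Naghdi-type model, and increasing the orders produces the higher members of the hierarchy, which approach the fully three-dimensional formulation.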
Shells behave like three-dimensional solids in the neighborhoods of support attachments, stiffeners, nozzles, and cutouts. Therefore, restrictions on the transverse variation of the displacement components are not warranted in those locations. Whether a shell is thin or thick depends not only on the ratio of the thickness to the radius of curvature but also on the smoothness of the exact solution. The proper choice of a shell model depends on the problem at hand and the goals of computation. Consider, for example, the free vibration of a shell. When the wavelengths of the mode shapes are close to the thickness, the shearing deformations cannot be neglected, and hence, the shell behaves as a thick shell. Perfect shell elements do not exist. Furthermore, there is no such thing as a perfect element of any kind.
The Model-Centric View
In the model-centric view, we recognize that any model is a special case of a more comprehensive model. For instance, in solid mechanics problems, we typically start with a problem of linear elasticity, where one of the assumptions is that stress is proportional to strain, regardless of the size of the strain. Once the solution is available, we check whether the proportional limit was exceeded. If it was, we solve a nonlinear problem, for example, using the deformation theory of plasticity with a suitable material law. In that case, the linear solution is the first iteration in solving the nonlinear problem. If the displacements are large, we continue with the iterations to solve the geometric nonlinear problem. It is important to ensure that the errors of approximation are negligibly small throughout the numerical solution process.
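The control flow described above can be summarized in a short sketch. Everything in it is a hypothetical placeholder rather than the API of any particular code: the solver callables, the checks, and the `.qoi` attribute are assumptions made only to show the sequence of steps.

```python
# Illustrative sketch of the model-centric solution sequence described above.
# All callables passed in are hypothetical placeholders supplied by the analyst;
# only the control flow mirrors the text: linear solution first, then plasticity
# if the proportional limit is exceeded, then geometric nonlinearity if needed.

def model_centric_solution(model,
                           solve_linear,
                           proportional_limit_exceeded,
                           solve_plastic_step,
                           displacements_are_large,
                           solve_large_deformation,
                           rel_tol=1e-3,
                           max_iterations=25):
    solution = solve_linear(model)  # the linear solution is the first iteration

    if proportional_limit_exceeded(solution):
        # Deformation theory of plasticity with a suitable material law:
        # iterate until the quantity of interest (a hypothetical .qoi attribute
        # on the solution object) stops changing appreciably.
        for _ in range(max_iterations):
            updated = solve_plastic_step(model, solution)
            converged = abs(updated.qoi - solution.qoi) <= rel_tol * abs(updated.qoi)
            solution = updated
            if converged:
                break

    if displacements_are_large(solution):
        # Continue the iterations to account for large displacements.
        solution = solve_large_deformation(model, solution)

    # Throughout, the approximation (discretization) error must also be verified
    # to be negligibly small; that check is omitted from this sketch.
    return solution
```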
At first glance, it might seem that model form errors can be made arbitrarily small. However, this is generally not possible. As the complexity of the model increases, so does the number of physical parameters. For instance, transitioning from linear elasticity to accounting for plastic deformation requires introducing empirical constants to characterize nonlinear material behavior. These constants have statistical variations, which increase prediction uncertainty. Ultimately, these uncertainties will likely outweigh the benefits of more complex models.
Implementation
An FE code should allow users to control both the model form and the approximation errors. To achieve this, model and element definitions must be separate, and seamless transitions from one model to another and from one discretization to another must be made possible. In principle, it is possible to control both types of error using legacy FE codes, but since model and element definitions are mixed in the element libraries, the process becomes so complicated that it is impractical to use in industrial settings.
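One way to picture this separation is sketched below under assumptions of my own: the class names `Model` and `Discretization` and their fields are illustrative, not the data model of any existing code. The point is simply that the definition of the mathematical model and the definition of the discretization live in independent objects that a driver combines only at solve time.

```python
# Schematic separation of the "what" (model) from the "how" (discretization).
# The class names and fields are illustrative assumptions, not an existing API.

from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass(frozen=True)
class Model:
    """What we wish to solve: the mathematical model and its data."""
    kind: str                              # e.g. "linear_elasticity", "naghdi_shell"
    parameters: Dict[str, Any] = field(default_factory=dict)  # material data, loads, BCs


@dataclass(frozen=True)
class Discretization:
    """How we wish to solve it: the mesh and the finite element space."""
    mesh: Any                              # geometry-conforming mesh
    polynomial_order: int                  # degree of the element-level basis


def solve(model: Model, discretization: Discretization) -> Any:
    """Driver stub: a real code would assemble and solve the discretized model here."""
    ...
```

Because neither object refers to the other, the model form can be changed (for example, from a Naghdi-type shell model to a fully three-dimensional one) without touching the discretization, and vice versa.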
Model form errors are controlled through hierarchic sequences of models, while approximation errors are controlled through hierarchic sequences of finite element spaces [2]. The stopping criterion is that the quantities of interest should remain substantially unchanged in the next level of the hierarchy.
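A minimal sketch of that stopping criterion follows, assuming a user-supplied `evaluate_qoi` callable that returns the quantity of interest at a given level of the hierarchy; the names and the one-percent default tolerance are illustrative.

```python
# Hedged sketch of the stopping criterion: march through a hierarchic sequence
# (of models or of finite element spaces) and stop when the quantity of interest
# is substantially unchanged at the next level of the hierarchy.

def settle_over_hierarchy(levels, evaluate_qoi, rel_tol=0.01):
    """Return (level, qoi) at the first level whose QoI changed negligibly."""
    previous = None
    for level in levels:                   # e.g. p = 1, 2, 3, ... or model 1, 2, 3, ...
        qoi = evaluate_qoi(level)
        if previous is not None and abs(qoi - previous) <= rel_tol * abs(qoi):
            return level, qoi              # this level left the QoI essentially unchanged
        previous = qoi
    raise RuntimeError("Quantity of interest did not settle within the given hierarchy.")
```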
Advice to Management
To ensure the reliability of predictions, it must be shown that the model form errors and the approximation errors do not exceed pre-specified tolerances. Moreover, the model parameters and data must be within the domain of calibration [3]. Management should not trust model-generated predictions unless evidence is provided showing that these conditions are satisfied.
When considering various marketing claims regarding the promised benefits of numerical simulation, digital twins, and digital transformation, management is well advised to keep this statement by the philosopher David Hume in mind: “A wise man proportions his belief to the evidence.”
References
[1] MacNeal, R. H. A simple quadrilateral shell element. Computers & Structures, Vol. 8, pp. 175–183, 1978.
[2] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation, 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc., 2021.
[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.
Related Blogs:
- Where Do You Get the Courage to Sign the Blueprint?
- A Memo from the 5th Century BC
- Obstacles to Progress
- Why Finite Element Modeling is Not Numerical Simulation?
- XAI Will Force Clear Thinking About the Nature of Mathematical Models
- The Story of the P-version in a Nutshell
- Why Worry About Singularities?
- Questions About Singularities
- A Low-Hanging Fruit: Smart Engineering Simulation Applications
- The Demarcation Problem in the Engineering Sciences
- Model Development in the Engineering Sciences
- Certification by Analysis (CbA) – Are We There Yet?
- Not All Models Are Wrong
- Digital Twins
- Digital Transformation
- Simulation Governance
- Variational Crimes
- The Kuhn Cycle in the Engineering Sciences