By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA
In one of my conference presentations, I discussed variational crimes, noting that the use of point forces and point constraints in finite element analysis serves as an example of such crimes. In the question-and-answer session, I was asked: “If using point constraints is a variational crime, then how is it possible that the structure designed to refloat the Costa Concordia was full of those crimes and yet it worked just fine?” The Costa Concordia was a cruise ship that hit a rock and sank off the coast of Giglio Island in the Mediterranean in January 2012 and was refloated in July 2014. Refloating this vessel, weighing over 114,000 tons, was a major engineering feat. The entire salvage operation cost approximately 2 billion US dollars.
The refloating of the Costa Concordia.
This question presented an opportunity for me to explain that finite element modeling (FEM) and finite element analysis (FEA) are complementary methods when analysts correctly understand their respective domains of application and use them accordingly. However, problems arise when FEM is used outside its scope, an all-too-frequent error.
FEM is an art where engineers are called upon to balance two qualitatively different errors. In contrast, FEA is a scientific method for approximating the solutions of partial differential equations cast in variational form. The objectives and scope of FEM are very different from those of FEA.
The Art of Finite Element Modeling
In the 1950s, engineers began using computers to solve the equations arising from the analysis of structural trusses and frames by the matrix method. This was a great relief from the burden of hand calculations, which were time-consuming and prone to error. By harnessing the power of early computing technology, they could perform complex structural analyses more efficiently and accurately.
In the analysis of trusses and frames, all errors reside in the assumptions incorporated in the classical beam-column theory and the idealizations of the connections between structural elements, which are typically considered either rigid or hinged. The numerical error is just the round-off error which, in the age of computers, is negligibly small. All errors are errors of idealization.
In 1956, it was proposed that matrix methods could be extended to find stress distributions in thin shells, such as the fuselage of an aircraft, and thus the finite element method was born. The idea was that a shell could be divided into smaller, discrete elements connected at nodes. Each element has a stiffness matrix that relates the nodal forces to the nodal displacements. By assembling these element stiffness matrices into a global stiffness matrix, the response of the entire structure can be calculated. This was a significant departure from the naturally discrete systems of trusses and frames. As a consequence, two types of errors occurred: (a) conceptual errors stemming from the use of nodal forces and displacements, and (b) discretization errors arising from the choice of elements and refinement of the finite element mesh.
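The assembly procedure described above can be illustrated with a minimal sketch. This is not the 1956 shell formulation; it uses the simplest possible case, a one-dimensional bar split into three elements, with illustrative values for the material and load. Each 2×2 element stiffness matrix is scattered into the global matrix, a point constraint is applied at one end, and the nodal displacements are obtained by solving the global system.

```python
import numpy as np

# Minimal sketch of stiffness-matrix assembly: a 1D elastic bar of three
# equal elements, fixed at the left end, with an axial force at the right
# end.  E, A, L, and the force are arbitrary illustrative values.
E, A = 200e9, 1e-4           # Young's modulus (Pa), cross-sectional area (m^2)
n_elems, L = 3, 1.5          # three elements over a 1.5 m bar
le = L / n_elems             # element length
k = (E * A / le) * np.array([[1.0, -1.0],
                             [-1.0, 1.0]])   # 2x2 bar-element stiffness

n_nodes = n_elems + 1
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elems):           # assemble: scatter each element
    K[e:e + 2, e:e + 2] += k       # stiffness into the global matrix

F = np.zeros(n_nodes)
F[-1] = 1000.0                     # 1 kN axial tip force

# Impose the constraint u_0 = 0 by deleting the first row and column,
# then solve K u = F for the free nodal displacements.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

# For a uniform bar under an end load, the exact tip displacement
# is F*L/(E*A), and linear elements reproduce it at the nodes.
print(u[-1])
```

For this particular load case the finite element solution happens to coincide with the exact solution at the nodes; for the continuous shell structures discussed above, the discretization error does not vanish and must be managed, which is precisely where the two error types named in the text enter.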
Confidence in this method greatly increased when it was discovered that the nodal forces acting on any element, or group of elements, calculated from the finite element solution, satisfied the equilibrium equations. However, this is not an indicator of the quality of the finite element solution: It is related to the rank deficiency of the unconstrained stiffness matrices. An explanation is available in reference [1].
It was found that reasonably accurate force-displacement relationships in shells and other continuous structures can be predicted when the elements are suitably defined and the mesh is properly configured. This motivated the development of the legacy finite element modeling codes still widely used to support engineering decision-making processes. The development of those codes started in the mid-1960s when the knowledge base of finite element analysis was a very small fraction of what it is today.
Finite element models are useful for finding load distributions in large structures such as airframes, and the structure fabricated for the refloating of the Costa Concordia. Finite element models are also used when large deformations occur, such as in automotive crash dynamics. The conceptual errors associated with using nodal forces and displacements are compensated for by offsetting approximation errors [1]. This compensation is achieved through artful selection of finite elements from a finite element library and mesh design.
Finite Element Analysis
Finite element analysis has the following objectives:
- Given a well-posed mathematical problem in variational form, find an approximate solution such that the error of approximation is minimum in the energy norm. The approximate solution is characterized by a partition of the solution domain into finite elements (the mesh), the polynomial degrees assigned to the elements, and a corresponding set of mapping functions.
- Extract the quantities of interest (QoI) from the approximate solution and show that the relative error in the numerically computed QoI is smaller than a prescribed value.
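The second objective, showing that the relative error is below a prescribed value, is commonly met by extrapolation from a sequence of hierarchic solutions, in the spirit of [1]. The sketch below is an assumption-laden illustration, not a definitive procedure: it uses synthetic strain energies generated from a known limit, assumes the convergence model U_ex − U_i ≈ C·N_i^(−2β), and recovers the extrapolated limit by bisection, from which the relative error in the energy norm is estimated.

```python
import math

# Sketch of a posteriori error estimation by extrapolation.  Given strain
# energies U1 < U2 < U3 from three hierarchic discretizations with
# N1 < N2 < N3 degrees of freedom, assume U_ex - U_i ~ C * N_i**(-2*beta)
# and solve for the extrapolated limit U_ex.  The data are synthetic,
# generated from a known U_ex so the estimate can be checked.
N = [100, 200, 400]
U_true, C, beta = 10.0, 50.0, 0.5
U = [U_true - C * n ** (-2 * beta) for n in N]

Q = math.log(N[2] / N[1]) / math.log(N[1] / N[0])

def residual(Uex):
    # zero when Uex is consistent with the assumed convergence model
    return (math.log((Uex - U[1]) / (Uex - U[2]))
            - Q * math.log((Uex - U[0]) / (Uex - U[1])))

# Bisection for U_ex on an interval just above the finest energy U3.
lo, hi = U[2] + 1e-9, U[2] + 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
U_ex = 0.5 * (lo + hi)

# Estimated relative error in the energy norm for the finest solution.
rel_err = math.sqrt((U_ex - U[2]) / U_ex)
print(U_ex, rel_err)
```

In practice the energies come from an actual hierarchic sequence of finite element spaces rather than a formula, but the estimation step is the same: the quality of the solution is judged from the computed sequence itself, without knowing the exact solution in closed form.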
The Differences Between FEM and FEA
The differences between FEM and FEA can be summarized as follows:
- Goals: FEM was designed for solving structural problems, such as determining load-displacement relationships, natural frequencies, and buckling loads. On the other hand, FEA focuses on strength-related problems, such as calculating stresses, strains, stress intensity factors, and similar quantities. This involves solving mathematical problems in continuum mechanics, extracting quantities of interest (QoI), and estimating their relative errors.
- Existence of an exact solution: In FEA, an exact solution must exist, while in FEM an exact solution may not exist and usually does not. The numerical problem stands on its own. As a consequence, data inadmissible in FEA are admissible in FEM.
- Calibration vs tuning: The material properties and other parameters in the mathematical model and the quantities of interest in FEA are determined by calibration. Calibration is an experimental process in which the loads and corresponding displacements are known, and the values of the parameters are inferred from that information. In contrast, tuning is used in FEM: It is a trial-and-error process by which the properties of the elements and the mesh are adjusted until the observed outcome matches the outcome predicted by the model. It is by artful tuning that errors in discretization offset errors caused by variational crimes. This is why finite element models can still produce useful results even when laden with variational crimes. In many cases, experienced engineers can create reasonably good finite element models, bypassing the trial-and-error process. Some engineering organizations have established rules for the construction of finite element models.
- The domain of calibration: The set of intervals on which the model parameters have been calibrated is called the domain of calibration (DoC). A model is considered validated when all parameters are within the DoC [2]. An important objective of model development is to ensure that the DoC is sufficiently large to encompass all intended applications of the model. Tuned finite element models also have an analogous domain, the domain of tuning (DoT), but this domain is not well defined.
- Implementation: In FEM, the model definition and polynomial degrees are mixed in large element libraries. Variational crimes, such as reduced integration, are incorporated in some elements. In FEA, on the other hand, the definition of elements is strictly consistent with the formulation, and the model definition is distinct from the polynomial degrees, enabling the construction of sequences of hierarchical models and hierarchical finite element spaces [1]. In other words, the implementation of FEM is element-centric, whereas the implementation of FEA is model-centric.
- Solution verification: In FEM, the exact solution is not defined, hence solution verification is not possible. Instead, confidence is gained by tuning and rule-based model construction. By contrast, one of the key objectives of Finite Element Analysis (FEA) is to estimate the relative error in the quantities of interest.
- Integration with AI: Artificial intelligence (AI) can assist in navigating the operations of both FEM and FEA. However, FEA is far better suited to support explainable AI (XAI), offering insights into how predictions are made and highlighting caveats to consider.
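Calibration, as contrasted with tuning above, can be sketched in a few lines. The example is hypothetical: the loads and measured displacements are synthetic (generated from an assumed "true" stiffness plus noise), and the model is the simplest linear relation u = F/k. The parameter k is then inferred from the load-displacement data by least squares, which is the essence of the experimental process described above.

```python
import numpy as np

# Sketch of calibration: loads and corresponding displacements are known,
# and a model parameter (an elastic stiffness k in the hypothetical linear
# model u = F / k) is inferred from that information.  The measurements
# are synthetic: generated from k_true with 2% multiplicative noise.
rng = np.random.default_rng(0)
k_true = 2.0e6                      # N/m, assumed "true" stiffness
F = np.linspace(1e3, 1e4, 10)       # applied loads (N)
u = F / k_true * (1 + 0.02 * rng.standard_normal(F.size))  # noisy displacements

# Least-squares fit of u = c * F, where c = 1/k is the compliance.
c, *_ = np.linalg.lstsq(F[:, None], u, rcond=None)
k_cal = 1.0 / c[0]
print(k_cal)
```

Tuning, by contrast, has no analogous closed procedure: element properties and mesh layout are adjusted until the model reproduces an observed outcome, so the adjusted quantities have no physical meaning outside the configuration in which they were tuned.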
The Main Points
FEM is not only useful in numerical simulation but also indispensable in certain applications. FEM and FEA are complementary tools: In engineering mechanics, FEM is useful for structural calculations, such as finding the load distribution in statically indeterminate systems like airframes and the structure designed to refloat the Costa Concordia, whereas FEA is used for strength, durability, and damage-tolerance calculations. This distinction is often overlooked, and FEM is used for both structural and strength calculations. The typical consequences are project delays, cost overruns, and increased maintenance costs. See, for example, reference [3].
References
[1] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation. John Wiley & Sons, Hoboken, NJ, 2021.
[2] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.
[3] Szabó, B. and Actis, R. Planning for simulation governance and management: Ensuring simulation is an asset, not a liability. Benchmark, a NAFEMS Publication, July 2021.
Related Blogs:
- Where Do You Get the Courage to Sign the Blueprint?
- A Memo from the 5th Century BC
- Obstacles to Progress
- Why Finite Element Modeling is Not Numerical Simulation?
- XAI Will Force Clear Thinking About the Nature of Mathematical Models
- The Story of the P-version in a Nutshell
- Why Worry About Singularities?
- Questions About Singularities
- A Low-Hanging Fruit: Smart Engineering Simulation Applications
- The Demarcation Problem in the Engineering Sciences
- Model Development in the Engineering Sciences
- Certification by Analysis (CbA) – Are We There Yet?
- Not All Models Are Wrong
- Digital Twins
- Digital Transformation
- Simulation Governance
- Variational Crimes
- The Kuhn Cycle in the Engineering Sciences
- Finite Element Libraries: Mixing the “What” with the “How”
- A Critique of the World Wide Failure Exercise
- Meshless Methods
- Isogeometric Analysis (IGA)
- Chaos in the Brickyard Revisited
- Why Is Solution Verification Necessary?