Meshless Methods

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Meshless methods, also known as mesh-free methods, are computational techniques used for approximating the solutions of partial differential equations in the engineering and applied sciences. Their advertised advantage is that users do not have to worry about meshing. However, eliminating the meshing problem has introduced other, more complex issues. Oftentimes, advocates of meshless methods fail to mention their numerous disadvantages.

When meshless methods were first proposed as an alternative to the finite element method, creating a finite element mesh was more burdensome than it is today. Undoubtedly, mesh generation will become even less problematic with the application of artificial intelligence tools, and the main argument for using meshless methods will weaken over time.

An artistic rendering of the idea of meshless clouds. The spheres represent the supports of the basis functions associated with the centers of the spheres. Image generated by Microsoft Copilot.

Setting Criteria

First and foremost, numerical solution methods must be reliable. This is not just a desirable feature but an essential prerequisite for realizing the potential of numerical simulation and achieving success with initiatives such as digital transformation, digital twins, and explainable artificial intelligence, all of which rely on predictions based on numerical simulation. Assurance of reliability means that (a) the data and parameters fall within the domain of calibration of a validated model, and (b) the numerical solutions have been verified.

In the following, I compare the finite element and meshless methods from the point of view of reliability. The basis for comparison is the finite element method as it would be implemented today, not as it was implemented in legacy codes which are based on pre-1970s thinking. Currently, ESRD’s StressCheck is the only commercially available implementation that supports procedures for estimating and controlling model form and approximation errors in terms of the quantities of interest.

The Finite Element Method (FEM)

The finite element method (FEM) has a solid scientific foundation, developed post-1970. It is supported by theorems that establish conditions for its stability, consistency, and convergence rates. Algorithms exist for estimating the relative errors in approximations of quantities of interest, alongside procedures for controlling model form errors [1].
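To give a flavor of these results, a representative a priori estimate for the h-version of the method states that, under the conditions detailed in [1] and with illustrative notation,

\[
\| u_{EX} - u_{FE} \|_{E(\Omega)} \;\le\; C\, h^{\min(p,\;k-1)}\, \| u_{EX} \|_{H^k(\Omega)},
\]

where u_EX is the exact solution (assumed to lie in the Sobolev space H^k(Ω)), u_FE is the finite element solution, h is the mesh size, p is the polynomial degree of the elements, ‖·‖_E(Ω) is the energy norm, and C is a constant independent of h. Estimates of this kind are what make a posteriori error estimation and solution verification possible.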

The Partition of Unity Finite Element Method (PUFEM)

The finite element method has been shown to work well for a wide range of problems, covering most engineering applications. However, it is not without limitations: for the convergence rates to be reasonable, the exact solution of the underlying problem has to have some regularity. When standard implementations of the finite element method are not applicable, resorting to alternative techniques is warranted. One such technique is the Partition of Unity Finite Element Method (PUFEM), which can be understood as a generalization of the h, p, and hp versions of the finite element method [2]. It provides the ability to incorporate analytical information specific to the problem being solved into the finite element space.
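Schematically, following [2], the PUFEM approximation has the form

\[
u_{PUFEM}(x) \;=\; \sum_{i} \varphi_i(x)\, v_i(x), \qquad \sum_{i} \varphi_i(x) = 1 \ \text{on } \Omega, \qquad v_i \in V_i,
\]

where the functions φ_i form a partition of unity subordinate to a cover of the domain Ω and the V_i are local approximation spaces. The key point is that the V_i may be enriched with analytical functions known to characterize the exact solution, for example, functions representing boundary layers or singularities.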

The FEM Challenge

Any method proposed to rival FEM should, at the very least, demonstrate superior performance for a clearly defined set of problems. The benchmark for comparison should involve computing a specific quantity of interest and proving that the relative error is less than, for example, 1%. I am not aware of any publication on meshless methods that has tackled this challenge.
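As an illustration of the kind of evidence such a benchmark would require, the sketch below estimates the limit of a quantity of interest by extrapolation from a hierarchic sequence of solutions. The numbers are hypothetical, and the extrapolation assumes an approximately constant convergence ratio; it is a sketch of the idea, not a substitute for the verification procedures described in [1].

    # Hypothetical values of a quantity of interest Q computed on a hierarchic
    # sequence of discretizations (for example, increasing polynomial degree).
    Q = [103.8, 101.2, 100.4]   # coarsest to finest; hypothetical numbers

    # Aitken extrapolation: applicable when successive differences shrink by a
    # roughly constant factor.
    dQ1 = Q[1] - Q[0]
    dQ2 = Q[2] - Q[1]
    Q_limit = Q[2] - dQ2**2 / (dQ2 - dQ1)            # estimated limit of Q
    rel_error = abs(Q_limit - Q[2]) / abs(Q_limit)   # estimated relative error

    print(f"estimated limit of Q: {Q_limit:.3f}")
    print(f"estimated relative error of the finest solution: {100 * rel_error:.2f}%")

With these hypothetical values, the estimated relative error of the finest solution is about 0.36 percent, which is the form a claim of "less than 1% error" should take.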

Meshless Methods

Various meshless methods, such as the Element-Free Galerkin (EFG) method, Moving Least Squares (MLS), and Smoothed Particle Hydrodynamics (SPH), using weak and strong formulations of the underlying partial differential equations, have been proposed. The theoretical foundations of meshless methods are not as well-developed as those of the Finite Element Method (FEM). The users of meshless methods have to cope with the following issues:

  1. Enforcement of boundary conditions: The enforcement of essential boundary conditions in meshless methods is generally more complex and less intuitive than in FEM, partly because typical meshless shape functions do not interpolate nodal values (see the sketch following this list). The size of the errors incurred from enforcing boundary conditions can be substantial.
  2. Sensitivity to the choice of basis functions: The stability of meshless methods can be highly sensitive to the choice of basis functions.
  3. Verification: Solution verification with meshless methods poses significant challenges.
  4. Most meshless methods are not really meshless: It is true that traditional meshing is not required, but in weak formulations, the products of the derivatives of the basis functions have to be integrated. Numerical integration is performed over the domains defined by the intersection of supports (support is the subdomain on which the basis function is not zero), which requires a “background mesh.”
  5. Computational power: Meshless methods often require greater computational power due to the global nature of the shape functions used, which can lead to denser matrices compared to FEM.
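The following sketch, written for this post (the function names and numbers are illustrative, and NumPy is assumed to be available), constructs one-dimensional moving least squares (MLS) shape functions with a linear basis and a cubic spline weight function. It is not an excerpt from any meshless code. It illustrates two of the issues listed above: the moment matrix becomes singular when too few nodes fall inside a support, and the shape functions generally do not interpolate nodal values, which is one reason essential boundary conditions require special treatment.

    # A minimal 1-D moving least squares (MLS) sketch -- illustrative only.
    import numpy as np

    def cubic_spline_weight(r):
        """Cubic spline weight, nonzero only for r < 1 (compact support)."""
        r = np.abs(r)
        w = np.zeros_like(r)
        m1 = r <= 0.5
        m2 = (r > 0.5) & (r <= 1.0)
        w[m1] = 2.0/3.0 - 4.0*r[m1]**2 + 4.0*r[m1]**3
        w[m2] = 4.0/3.0 - 4.0*r[m2] + 4.0*r[m2]**2 - (4.0/3.0)*r[m2]**3
        return w

    def mls_shape_functions(x, nodes, support_radius):
        """Linear-basis MLS shape functions evaluated at the point x."""
        p_x = np.array([1.0, x])                     # basis at the evaluation point
        w = cubic_spline_weight((x - nodes) / support_radius)
        A = np.zeros((2, 2))
        B = np.zeros((2, nodes.size))
        for I, xI in enumerate(nodes):
            pI = np.array([1.0, xI])
            A += w[I] * np.outer(pI, pI)             # moment matrix
            B[:, I] = w[I] * pI
        # A becomes singular if too few nodes fall inside the support --
        # one source of the sensitivity mentioned in item 2 above.
        return p_x @ np.linalg.solve(A, B)

    nodes = np.linspace(0.0, 1.0, 11)
    phi = mls_shape_functions(nodes[3], nodes, support_radius=0.25)
    print(phi[3])   # generally != 1: no Kronecker-delta property, so essential
                    # boundary conditions need special treatment (item 1 above)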

Advice to Management

Decision-makers need solid evidence supporting the reliability of data generated by numerical simulation. Otherwise, where would they get their courage to sign the “blueprint”? They should require estimates of the error of approximation for the quantities of interest. Without such estimates, the value of the computed information is greatly diminished because the unknown approximation errors increase uncertainties in the predicted data.

Management should treat claims of accuracy in marketing materials for legacy finite element software and any software implementing meshless methods with a healthy dose of skepticism. Assertions that a software product was tested against benchmarks and found to perform well should never be taken to mean that it will perform similarly well in all cases. Management should require problem-specific estimates of relative errors in the quantities of interest.


References

[1] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation. 2nd ed., Hoboken, NJ: John Wiley & Sons, Inc., 2021.

[2] Melenk, J. M. and Babuška, I. The partition of unity finite element method: Basic theory and applications. Computer Methods in Applied Mechanics and Engineering, Vol. 139(1-4), pp. 289-314, 1996.


A Critique of the World Wide Failure Exercise

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The World-Wide Failure Exercise (WWFE) was an international research project with the goal of assessing the predictive performance of competing failure models for composite materials. Part I (WWFE-I) focused on failure in fiber-reinforced polymer composites under two-dimensional (2D) stresses and ran from 1996 until 2004. Part II was concerned with failure criteria under both 2D and 3D stresses. It ran between 2007 and 2013. Quoting from reference [1]: “Twelve challenging test problems were defined by the organizers of WWFE-II, encompassing a range of materials (polymer, glass/epoxy, carbon/epoxy), lay-ups (unidirectional, angle ply, cross-ply, and quasi-isotropic laminates) and various 3D stress states”. Part III, also launched in 2007, was concerned with damage development in multi-directional composite laminates.

The von Mises stress in an ideal fiber-matrix composite subjected to shearing deformation. The displacements are magnified 15X. Verified solution by StressCheck.

Composite Failure Model Development

According to Thomas Kuhn, the period of normal science begins when investigators have agreed upon a paradigm, that is, the fundamental ideas, methods, language, and theories that guide their research and development activities [2]. We can understand WWFE as an effort by the composite materials research community to formulate such a paradigm. While some steps were taken toward achieving that goal, the goal was not reached. The final results of WWFE-II were inconclusive. The main reason is that the project lacked some of the essential constituents of a model development program. To establish favorable conditions for the evolutionary development of failure criteria for composite materials, procedures similar to those outlined in reference [3] will be necessary. The main points are briefly described below.

  1. Formulation of the mathematical model: The operators that transform the input data into the quantities of interest are defined. In the case of WWFE, a predictor of failure is part of the mathematical model. In WWFE II, twelve different predictors were investigated. These predictors were formulated based on subjective factors: intuition, insight, and personal preferences. A properly conceived model development project provides an objective framework for ranking candidate models based on their predictive performance. Additionally, given the stochastic outcomes of experiments, a statistical model that accounts for the natural dispersion of failure events must be included in the mathematical model.
  2. Calibration: Mathematical models have physical and statistical parameters that are determined in calibration experiments. Invariably, there are limitations on the available experimental data. Those limitations define the domain of calibration. The participants of WWFE failed to grasp the crucial role of calibration in the development of mathematical models. Quoting from reference [1]: “One of the undesirable features, which was shared among a number of theories, is their tendency to calibrate the predictions against test data and then predict the same using the empirical constants extracted from the experiments.” Calibration is not an undesirable feature. It is an essential part of any model development project. Mathematical models will produce reliable predictions only when the parameters and data are within their domains of calibration. One of the important goals of model development projects is to ensure that the domain of calibration is sufficiently large to cover all applications, given the intended use of the model. However, calibration and validation are separate activities. The dataset used for validation has to be different from the dataset used for calibration [3]. Predicting the calibration data once calibration was performed cannot lead to meaningful conclusions regarding the suitability or fitness of a model.
  3. Validation: Developers are provided complete descriptions of the validation experiments and, based on this information, predict the probabilities of the outcomes. The validation metric is the likelihood of the observed outcomes (a small numerical sketch of likelihood-based ranking follows this list).
  4. Solution verification: It must be shown that the numerical errors in the quantities of interest are negligibly small compared to the errors in experimental observations.
  5. Disposition: Candidate models are ranked based on their predictive performance, measured by the ratio of predicted to realized likelihood values. The calibration domain is updated using all available data. At the end of the validation experiments, the calibration data is augmented with the validation data.
  6. Data management: Experimental data must be collected, curated, and archived to ensure its quality, usability, and accessibility.
  7. Model development projects are open-ended: New ideas can be proposed anytime, and the available experimental data will increase over time. Therefore, no one has the final word in a model development project. Models and their domains of calibration are updated as new data become available.
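To make items 3 and 5 concrete, the sketch below ranks two hypothetical candidate models by the likelihood they assign to a set of hypothetical validation outcomes. The model names, distributions, and numbers are invented for illustration only; they are not WWFE data, and a real validation exercise would follow the procedures described in [3].

    # Likelihood-based ranking of candidate models -- hypothetical data.
    import math

    # Each candidate model predicts a normal distribution (mean, std) for the
    # failure stress of a given validation coupon, in MPa.
    predictions = {
        "model_A": (612.0, 25.0),
        "model_B": (655.0, 40.0),
    }

    observed_failures = [598.0, 641.0, 620.0]   # hypothetical validation outcomes

    def log_likelihood(mean, std, data):
        """Log-likelihood of the observed outcomes under a normal prediction."""
        return sum(
            -0.5 * math.log(2.0 * math.pi * std**2) - (x - mean)**2 / (2.0 * std**2)
            for x in data
        )

    ranking = sorted(
        ((name, log_likelihood(m, s, observed_failures))
         for name, (m, s) in predictions.items()),
        key=lambda item: item[1],
        reverse=True,
    )
    for name, ll in ranking:
        print(f"{name}: log-likelihood = {ll:.2f}")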

The Tale of Two Model Development Projects

It is interesting to compare the status of model development for predicting failure events in composite materials with linear elastic fracture mechanics (LEFM), which is concerned with predicting crack propagation in metals, a much less complicated problem. Although no consensus emerged from WWFE-II, there was no shortage of ideas on formulating predictors. In the case of LEFM, on the other hand, the consensus that the stress intensity factor is the predictor of crack propagation emerged in the 1970s, effectively halting further investigation of predictors and causing prolonged stagnation [3]. Undertaking a model development program and applying verification, validation, and uncertainty quantification procedures are essential prerequisites for progress in both cases.
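For reference, the LEFM predictor referred to above is the stress intensity factor. In the simplest textbook configuration, a through crack of length 2a in a large plate under remote tension σ,

\[
K_I = \sigma \sqrt{\pi a},
\]

and crack propagation is predicted by comparing K_I, or its cyclic range ΔK_I, with experimentally determined material properties. The contrast is that in LEFM a single predictor was adopted by consensus, whereas in WWFE-II twelve competing predictors were still on the table.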

Two Candid Observations

Professor Mike Hinton, one of the organizers of WWFE, delivered a keynote presentation at the NAFEMS World Congress in Boston in May 2011 titled “Failure Criteria in Fibre Reinforced Polymer Composites: Can any of the Predictive Theories be Trusted?” In this presentation, he shared two candid observations that shed light on the status of models created to predict failure events in composite materials:

  1. “The theories coded into current FE tools almost certainly differ from the original theory and from the original creator’s intent.” – In other words, in the absence of properly validated and implemented models, the predictions are unreliable.
  2. He disclosed that Professor Zvi Hashin had declined the invitation to participate in WWFE-I, explaining his reasons in a letter. Hashin wrote: “My only work in this subject relates to unidirectional fibre composites, not to laminates” … “I must say to you that I personally do not know how to predict the failure of a laminate (and furthermore, that I do not believe that anybody else does).”

Although these observations are dated, I believe they remain relevant today. Contrary to numerous marketing claims, we are still very far from realizing the benefits of numerical simulation in composite materials.

A Sustained Model Development Program Is Essential

To advance the development of design rules for composite materials, stakeholders need to initiate a long-term model development project, as outlined in reference [3]. This approach will provide a structured and systematic framework for research and innovation. Without such a coordinated effort, the industry has no choice but to rely on the inefficient and costly method of make-and-break engineering, hindering overall progress and leading to inconsistent results. Establishing a comprehensive model development project will create favorable conditions for the evolutionary development of design rules for composite materials.

The WWFE project was large and ambitious. However, a much larger effort will be needed to develop design rules for composite materials.


References

[1] Kaddour, A. S. and Hinton, M. J. Maturity of 3D Failure Criteria for Fibre-Reinforced Composites: Comparison Between Theories and Experiments: Part B of WWFE-II. Journal of Composite Materials, Vol. 47, pp. 925-966, 2013.

[2] Kuhn, T. S., The structure of scientific revolutions. Vol. 962. University of Chicago Press, 1997.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.


Finite Element Libraries: Mixing the “What” with the “How”

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Engineering students first learn statics, then strength of materials, and progress to the theories of plates and shells, continuum mechanics, and so on. As the course material advances from simple to complex, students often think that each theory (model) stands on its own, overlooking the fact that simpler models are special cases of complex ones. This view shaped the development of the finite element (FE) method in the 1960s and 70s. The software architecture of the legacy FE codes was established in that period.

The Element-Centric View

Richard MacNeal, a principal developer of NASTRAN and co-founder of the MacNeal-Schwendler Corporation (MSC), once told me that his dream was to formulate “the perfect 4-node shell element”. His background was in analog computers, and he thought of finite elements as tuneable objects: If one tunes an element just right, as potentiometers are tuned in analog computers, then a perfect element can be created. This element-centric view led to the implementation of large element libraries, which are still in use today. These libraries mix what we wish to solve (in this instance, a shell model) with how we wish to solve it (using 4-node finite elements).

A cluttered, unattractive library, emblematic of finite element libraries in legacy FE codes. Image generated by Gemini.

In formulating his shell element, MacNeal was constrained by the limitations of the architecture of NASTRAN. Quoting from reference [1]: “An important general feature of NASTRAN which limits the choice of element formulation  is that, with rare exceptions, the degrees of freedom consist of the three components of translation and the three components of rotation at discrete points.” This feature originated from models of structural frames where the joints of beams and columns are allowed to translate and rotate in three mutually orthogonal directions. Such restrictions, common to all legacy FE codes, prevented those codes from keeping pace with the subsequent scientific development of FE analysis.

MacNeal’s formulation of his shell element was entirely intuitive. There is no proof that the finite element solutions corresponding to progressively refined meshes will converge to the exact solution of a particular shell model or even converge at all. Model form and approximation are intertwined.

The classical shell model, also known as the Novozhilov-Koiter (N-K) model, taught in advanced strength of materials classes, is based on the assumption that normals to the mid-surface in the undeformed configuration remain normal after deformation. Making this assumption was necessary in the pre-computer era to allow the solution of simple shell problems by classical methods. Today, the N-K shell model is only of theoretical and historical interest. Instead, we have a hierarchic sequence of shell models of increasing complexity. The next shell model is the Naghdi model, which is based on the assumption that normals to the mid-surface in the undeformed configuration remain straight lines but not necessarily normal. Higher models permit the normal to deform in ways that can be well approximated by polynomials [2]. 
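One common way to write the hierarchic family of shell models, with illustrative notation, is to expand the displacement components in the coordinate z normal to the mid-surface:

\[
u_x(x,y,z) \approx \sum_{i=0}^{n_x} \varphi_i(z)\, u_{x,i}(x,y), \qquad
u_y(x,y,z) \approx \sum_{i=0}^{n_y} \varphi_i(z)\, u_{y,i}(x,y), \qquad
u_z(x,y,z) \approx \sum_{i=0}^{n_z} \varphi_i(z)\, u_{z,i}(x,y),
\]

where the φ_i(z) are polynomials. The choice (n_x, n_y, n_z) = (1, 1, 0) corresponds to a Naghdi-type model, and increasing n_x, n_y, n_z produces the higher models of the hierarchy [2].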

Shells behave like three-dimensional solids in the neighborhoods of support attachments, stiffeners, nozzles, and cutouts. Therefore, restrictions on the transverse variation of the displacement components are not warranted in those locations. Whether a shell is thin or thick depends not only on the ratio of the thickness to the radius of curvature but also on the smoothness of the exact solution. The proper choice of a shell model depends on the problem at hand and the goals of computation. Consider, for example, the free vibration of a shell. When the wavelengths of the mode shapes are close to the thickness, the shearing deformations cannot be neglected, and hence, the shell behaves as a thick shell. Perfect shell elements do not exist. Furthermore, there is no such thing as a perfect element of any kind.

The Model-Centric View

In the model-centric view, we recognize that any model is a special case of a more comprehensive model. For instance, in solid mechanics problems, we typically start with a problem of linear elasticity, where one of the assumptions is that stress is proportional to strain, regardless of the size of the strain. Once the solution is available, we check whether the proportional limit was exceeded. If it was, we solve a nonlinear problem, for example, using the deformation theory of plasticity with a suitable material law. In that case, the linear solution is the first iteration in solving the nonlinear problem. If the displacements are large, we continue with the iterations to solve the geometric nonlinear problem. It is important to ensure that the errors of approximation are negligibly small throughout the numerical solution process.
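The sketch below mimics this workflow on a deliberately trivial example, a single nonlinear spring; the spring law and the numbers are invented for illustration, and the code is a scalar stand-in for the workflow rather than an excerpt from any FE code. The point it illustrates is that the linear solution serves as the first iterate of the nonlinear solution, and the nonlinear model is invoked only when the linear result violates its own assumptions.

    # Toy model hierarchy: linear solution first, nonlinear iteration if needed.
    F = 120.0            # applied load (hypothetical)
    k0 = 100.0           # initial (linear) stiffness
    u_limit = 1.0        # "proportional limit" of the toy model

    def secant_stiffness(u):
        """Softening spring: stiffness decreases once |u| exceeds u_limit."""
        return k0 if abs(u) <= u_limit else k0 / (1.0 + 0.5 * (abs(u) - u_limit))

    u = F / k0            # step 1: linear solution
    if abs(u) > u_limit:  # step 2: proportional limit exceeded -> iterate
        for _ in range(50):
            u_new = F / secant_stiffness(u)
            if abs(u_new - u) < 1e-10 * abs(u_new):
                break
            u = u_new
    # otherwise the linear solution stands
    print(f"displacement = {u:.6f}")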

At first glance, it might seem that model form errors can be made arbitrarily small. However, this is generally not possible. As the complexity of the model increases, so does the number of physical parameters. For instance, transitioning from linear elasticity to accounting for plastic deformation requires introducing empirical constants to characterize nonlinear material behavior. These constants have statistical variations, which increase prediction uncertainty. Ultimately, these uncertainties will likely outweigh the benefits of more complex models.

Implementation

An FE code should allow users to control both the model form and the approximation errors. To achieve this, model and element definitions must be separate, and seamless transitions from one model to another and from one discretization to another must be made possible. In principle, it is possible to control both types of error using legacy FE codes, but since model and element definitions are mixed in the element libraries, the process becomes so complicated that it is impractical to use in industrial settings.

Model form errors are controlled through hierarchic sequences of models, while approximation errors are controlled through hierarchic sequences of finite element spaces [2]. The stopping criterion is that the quantities of interest should remain substantially unchanged in the next level of the hierarchy.
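With Q_k denoting a quantity of interest computed at level k of the hierarchy (of models or of finite element spaces), the stopping criterion can be written, with illustrative notation, as

\[
\frac{|Q_{k+1} - Q_k|}{|Q_{k+1}|} \;\le\; \tau,
\]

where τ is the pre-specified tolerance.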

Advice to Management

To ensure the reliability of predictions, it must be shown that the model form errors and the approximation errors do not exceed pre-specified tolerances. Moreover, the model parameters and data must be within the domain of calibration [3]. Management should not trust model-generated predictions unless evidence is provided showing that these conditions are satisfied.

When considering various marketing claims regarding the promised benefits of numerical simulation, digital twins, and digital transformation, management is well advised to keep this statement by philosopher David Hume in mind: “A wise man apportions his beliefs to the evidence.”


References

[1] MacNeal, R. H. A simple quadrilateral shell element.  Computers & Structures, Vol. 8, pp. 175-183, 1978.

[2] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation. 2nd ed., Hoboken, NJ: John Wiley & Sons, Inc., 2021.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences.  Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.

