A Critique of the World Wide Failure Exercise
https://www.esrd.com/critique-of-the-wwfe/
Thu, 03 Oct 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


The World-Wide Failure Exercise (WWFE) was an international research project with the goal of assessing the predictive performance of competing failure models for composite materials. Part I (WWFE-I) focused on failure in fiber-reinforced polymer composites under two-dimensional (2D) stresses and ran from 1996 until 2004. Part II was concerned with failure criteria under both 2D and 3D stresses. It ran between 2007 and 2013. Quoting from reference [1]: “Twelve challenging test problems were defined by the organizers of WWFE-II, encompassing a range of materials (polymer, glass/epoxy, carbon/epoxy), lay-ups (unidirectional, angle ply, cross-ply, and quasi-isotropic laminates) and various 3D stress states”. Part III, also launched in 2007, was concerned with damage development in multi-directional composite laminates.

Figure: The von Mises stress in an ideal fiber-matrix composite subjected to shearing deformation; displacements magnified 15X. Verified solution by StressCheck.

Composite Failure Model Development

According to Thomas Kuhn, the period of normal science begins when investigators have agreed upon a paradigm, that is, the fundamental ideas, methods, language, and theories that guide their research and development activities [2]. We can understand WWFE as an effort by the composite materials research community to formulate such a paradigm. While some steps were taken toward achieving that goal, the goal was not reached. The final results of WWFE-II were inconclusive. The main reason is that the project lacked some of the essential constituents of a model development program. To establish favorable conditions for the evolutionary development of failure criteria for composite materials, procedures similar to those outlined in reference [3] will be necessary. The main points are briefly described below.

  1. Formulation of the mathematical model: The operators that transform the input data into the quantities of interest are defined. In the case of WWFE, a predictor of failure is part of the mathematical model. In WWFE-II, twelve different predictors were investigated. These predictors were formulated based on subjective factors: intuition, insight, and personal preferences. A properly conceived model development project provides an objective framework for ranking candidate models based on their predictive performance. Additionally, given the stochastic outcomes of experiments, a statistical model that accounts for the natural dispersion of failure events must be included in the mathematical model.
  2. Calibration: Mathematical models have physical and statistical parameters that are determined in calibration experiments. Invariably, there are limitations on the available experimental data; those limitations define the domain of calibration. The participants of WWFE failed to grasp the crucial role of calibration in the development of mathematical models. Quoting from reference [1]: “One of the undesirable features, which was shared among a number of theories, is their tendency to calibrate the predictions against test data and then predict the same using the empirical constants extracted from the experiments.” Calibration is not an undesirable feature, however; it is an essential part of any model development project. Mathematical models produce reliable predictions only when their parameters and data are within the domain of calibration. One of the important goals of a model development project is to ensure that the domain of calibration is large enough to cover all applications, given the intended use of the model. Calibration and validation are nonetheless separate activities: the dataset used for validation has to be different from the dataset used for calibration [3]. Reproducing the calibration data after calibration has been performed cannot support meaningful conclusions about the suitability or fitness of a model.
  3. Validation: Developers are given complete descriptions of the validation experiments and, based on this information, predict the probabilities of their outcomes. The validation metric is the likelihood of the observed outcomes. A minimal sketch of this calibration and validation bookkeeping follows this list.
  4. Solution verification: It must be shown that the numerical errors in the quantities of interest are negligibly small compared to the errors in experimental observations.
  5. Disposition: Candidate models are ranked based on their predictive performance, measured by the ratio of predicted to realized likelihood values. The calibration domain is updated using all available data. At the end of the validation experiments, the calibration data is augmented with the validation data.
  6. Data management: Experimental data must be collected, curated, and archived to ensure its quality, usability, and accessibility.
  7. Model development projects are open-ended: New ideas can be proposed anytime, and the available experimental data will increase over time. Therefore, no one has the final word in a model development project. Models and their domains of calibration are updated as new data become available.
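
The separation of calibration and validation data (points 2 and 3) and the likelihood-based ranking of candidate models (point 5) can be illustrated with a minimal sketch. Everything in the following Python fragment is hypothetical: the candidate predictor, the datasets, and the normal scatter model are placeholders invented for illustration, not part of WWFE or of any actual failure theory. The sketch only shows the bookkeeping: calibrate on one dataset, compute the likelihood of a disjoint validation dataset, and rank competing models by that likelihood rather than by how well they reproduce the calibration data.

    import math

    # Hypothetical calibration data: (load angle in degrees, observed failure stress in MPa).
    # These numbers are invented for illustration only.
    calibration_data = [(0, 520.0), (15, 480.0), (30, 410.0), (45, 350.0), (60, 300.0)]

    # Disjoint validation data -- never used during calibration.
    validation_data = [(20, 455.0), (50, 330.0)]


    def candidate_model_a(angle_deg, a, b):
        """Hypothetical predictor: linear decay of failure stress with load angle."""
        return a + b * angle_deg


    def calibrate(model, data):
        """Least-squares fit of the two model parameters using the calibration data only."""
        n = len(data)
        sx = sum(x for x, _ in data)
        sy = sum(y for _, y in data)
        sxx = sum(x * x for x, _ in data)
        sxy = sum(x * y for x, y in data)
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        a = (sy - b * sx) / n
        # Residual standard deviation: a crude stand-in for the statistical model of scatter.
        residuals = [y - model(x, a, b) for x, y in data]
        sigma = math.sqrt(sum(r * r for r in residuals) / (n - 2))
        return a, b, sigma


    def log_likelihood(model, params, data):
        """Log-likelihood of the validation observations under a normal scatter model."""
        a, b, sigma = params
        ll = 0.0
        for x, y in data:
            mu = model(x, a, b)
            ll += -0.5 * math.log(2 * math.pi * sigma ** 2) - (y - mu) ** 2 / (2 * sigma ** 2)
        return ll


    params_a = calibrate(candidate_model_a, calibration_data)
    print("calibrated parameters:", params_a)
    print("validation log-likelihood:", log_likelihood(candidate_model_a, params_a, validation_data))
    # Competing candidate models would be ranked by this validation likelihood,
    # not by how well they reproduce the calibration data.

A second candidate model would be calibrated and scored in exactly the same way, and the disposition step would compare the validation likelihoods of the candidates.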

The Tale of Two Model Development Projects

It is interesting to compare the status of model development for predicting failure events in composite materials with linear elastic fracture mechanics (LEFM), which is concerned with predicting crack propagation in metals, a much less complicated problem. Although no consensus emerged from WWFE-II, there was no shortage of ideas on formulating predictors. In the case of LEFM, on the other hand, the consensus that the stress intensity factor is the predictor of crack propagation emerged in the 1970s, effectively halting further investigation of predictors and causing prolonged stagnation [3]. Undertaking a model development program and applying verification, validation, and uncertainty quantification procedures are essential prerequisites for progress in both cases.

Two Candid Observations

Professor Mike Hinton, one of the organizers of WWFE, delivered a keynote presentation at the NAFEMS World Congress in Boston in May 2011 titled “Failure Criteria in Fibre Reinforced Polymer Composites: Can any of the Predictive Theories be Trusted?” In this presentation, he shared two candid observations that shed light on the status of models created to predict failure events in composite materials:

  1. “The theories coded into current FE tools almost certainly differ from the original theory and from the original creator’s intent.” – In other words, in the absence of properly validated and implemented models, the predictions are unreliable.
  2. He disclosed that Professor Zvi Hashin had declined the invitation to participate in WWFE-I, explaining his reasons in a letter. Hashin wrote: “My only work in this subject relates to unidirectional fibre composites, not to laminates” … “I must say to you that I personally do not know how to predict the failure of a laminate (and furthermore, that I do not believe that anybody else does).”

Although these observations are dated, I believe they remain relevant today. Contrary to numerous marketing claims, we are still very far from realizing the benefits of numerical simulation in composite materials.

A Sustained Model Development Program Is Essential

To advance the development of design rules for composite materials, stakeholders need to initiate a long-term model development project, as outlined in reference [3]. This approach will provide a structured and systematic framework for research and innovation. Without such a coordinated effort, the industry has no choice but to rely on the inefficient and costly method of make-and-break engineering, hindering overall progress and leading to inconsistent results. Establishing a comprehensive model development project will create favorable conditions for the evolutionary development of design rules for composite materials.

The WWFE project was large and ambitious. However, a much larger effort will be needed to develop design rules for composite materials.


References

[1] Kaddour, A. S. and Hinton, M. J. “Maturity of 3D Failure Criteria for Fibre-Reinforced Composites: Comparison Between Theories and Experiments: Part B of WWFE-II.” Journal of Composite Materials, Vol. 47, pp. 925–966, 2013.

[2] Kuhn, T. S. The Structure of Scientific Revolutions. University of Chicago Press, 1997.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.


Finite Element Libraries: Mixing the “What” with the “How”
https://www.esrd.com/finite-element-libraries-mixing-the-what-with-the-how/
Tue, 03 Sep 2024

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


Engineering students first learn statics, then strength of materials, and progress to the theories of plates and shells, continuum mechanics, and so on. As the course material advances from simple to complex, students often think that each theory (model) stands on its own, overlooking the fact that the simpler models are special cases of the more complex ones. This view shaped the development of the finite element (FE) method in the 1960s and 1970s, the period in which the software architecture of the legacy FE codes was established.

The Element-Centric View

Richard MacNeal, a principal developer of NASTRAN and co-founder of the MacNeal-Schwendler Corporation (MSC), once told me that his dream was to formulate “the perfect 4-node shell element”. His background was in analog computers, and he thought of finite elements as tunable objects: if one tunes an element just right, as potentiometers are tuned in analog computers, a perfect element can be created. This element-centric view led to the implementation of large element libraries, which are still in use today. These libraries mix what we wish to solve (in this instance, a shell model) with how we wish to solve it (using 4-node finite elements).

Figure: A cluttered, unattractive library, emblematic of finite element libraries in legacy FE codes. Image generated by Gemini.

In formulating his shell element, MacNeal was constrained by the limitations of the architecture of NASTRAN. Quoting from reference [1]: “An important general feature of NASTRAN which limits the choice of element formulation is that, with rare exceptions, the degrees of freedom consist of the three components of translation and the three components of rotation at discrete points.” This feature originated from models of structural frames, in which the joints of beams and columns are allowed to translate and rotate in three mutually orthogonal directions. Such restrictions, common to all legacy FE codes, prevented those codes from keeping pace with the subsequent scientific development of FE analysis.

MacNeal’s formulation of his shell element was entirely intuitive. There is no proof that the finite element solutions corresponding to progressively refined meshes will converge to the exact solution of a particular shell model or even converge at all. Model form and approximation are intertwined.

The classical shell model, also known as the Novozhilov-Koiter (N-K) model and taught in advanced strength of materials classes, is based on the assumption that normals to the mid-surface in the undeformed configuration remain normal after deformation. This assumption was necessary in the pre-computer era to allow the solution of simple shell problems by classical methods. Today, the N-K shell model is only of theoretical and historical interest. Instead, we have a hierarchic sequence of shell models of increasing complexity. The next member of the hierarchy is the Naghdi model, which is based on the assumption that normals to the mid-surface in the undeformed configuration remain straight lines but not necessarily normal. Higher members of the hierarchy permit the normal to deform in ways that are well approximated by polynomials [2].
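
To make the notion of a hierarchic family concrete, the through-the-thickness behavior can be written schematically as a polynomial expansion of the displacement components. The form below is a simplified sketch in the spirit of the hierarchic models described in [2]; the symbols n_1, n_2, n_3 and the field functions are chosen here for illustration only:

    u_x(x,y,z) = \sum_{k=0}^{n_1} \phi_k(z)\, U_k(x,y)
    u_y(x,y,z) = \sum_{k=0}^{n_2} \phi_k(z)\, V_k(x,y)
    u_z(x,y,z) = \sum_{k=0}^{n_3} \phi_k(z)\, W_k(x,y)

Here z is the coordinate in the thickness direction, the \phi_k(z) are polynomials (for example, Legendre polynomials), and U_k, V_k, W_k are functions defined on the mid-surface. The lowest members of this family correspond to the classical and Naghdi-type models; increasing n_1, n_2, n_3 admits progressively richer deformation of the normal.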

Shells behave like three-dimensional solids in the neighborhoods of support attachments, stiffeners, nozzles, and cutouts. Therefore, restrictions on the transverse variation of the displacement components are not warranted in those locations. Whether a shell is thin or thick depends not only on the ratio of the thickness to the radius of curvature but also on the smoothness of the exact solution. The proper choice of a shell model depends on the problem at hand and the goals of computation. Consider, for example, the free vibration of a shell. When the wavelengths of the mode shapes are close to the thickness, the shearing deformations cannot be neglected, and hence, the shell behaves as a thick shell. Perfect shell elements do not exist. Furthermore, there is no such thing as a perfect element of any kind.

The Model-Centric View

In the model-centric view, we recognize that any model is a special case of a more comprehensive model. For instance, in solid mechanics problems, we typically start with a problem of linear elasticity, where one of the assumptions is that stress is proportional to strain, regardless of the size of the strain. Once the solution is available, we check whether the proportional limit was exceeded. If it was, we solve a nonlinear problem, for example, using the deformation theory of plasticity with a suitable material law. In that case, the linear solution is the first iteration in solving the nonlinear problem. If the displacements are large, we continue the iterations to solve the geometrically nonlinear problem. It is important to ensure that the errors of approximation are negligibly small throughout the numerical solution process.
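
The workflow described above is a control flow rather than a particular algorithm, and the following Python sketch shows only that flow. The solver routines, thresholds, and numbers are hypothetical stand-ins chosen so that the script runs as written; in practice, each call would invoke an FE solution whose approximation error has been verified.

    # A minimal control-flow sketch of the model-centric workflow described above.
    # The solver routines are hypothetical stand-ins for calls into an FE code;
    # the numbers are placeholders chosen so the script runs as written.

    PROPORTIONAL_LIMIT = 350.0   # MPa, placeholder material property
    LARGE_DISPLACEMENT = 1.0     # mm, placeholder threshold for geometric nonlinearity
    TOLERANCE = 1.0e-3           # relative change used as the iteration stopping criterion


    def solve_linear_elasticity():
        """Stand-in for a linear elastic solution; returns (max stress, max displacement)."""
        return 420.0, 1.6


    def nonlinear_iteration(previous_solution):
        """Stand-in for one iteration of the deformation theory of plasticity,
        optionally including geometric nonlinearity. Damps the previous solution
        toward a fixed point so the sketch converges."""
        stress, disp = previous_solution
        return 0.7 * stress + 0.3 * 380.0, 0.7 * disp + 0.3 * 1.2


    solution = solve_linear_elasticity()  # the linear solution is the first iterate
    stress, disp = solution

    if stress > PROPORTIONAL_LIMIT or disp > LARGE_DISPLACEMENT:
        while True:
            new_solution = nonlinear_iteration(solution)
            change = max(abs((new_solution[0] - solution[0]) / solution[0]),
                         abs((new_solution[1] - solution[1]) / solution[1]))
            solution = new_solution
            if change < TOLERANCE:
                break
            # In an actual analysis, the approximation error of each iterate would
            # also be verified to be negligibly small before accepting the result.

    print("converged solution (stress, displacement):", solution)

The point of the sketch is simply that the linear solution is the first iterate of the more comprehensive nonlinear model, not a separate, stand-alone analysis.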

At first glance, it might seem that model form errors can be made arbitrarily small. However, this is generally not possible. As the complexity of the model increases, so does the number of physical parameters. For instance, transitioning from linear elasticity to accounting for plastic deformation requires introducing empirical constants to characterize nonlinear material behavior. These constants have statistical variations, which increase prediction uncertainty. Ultimately, these uncertainties will likely outweigh the benefits of more complex models.

Implementation

An FE code should allow users to control both the model form errors and the approximation errors. To achieve this, model and element definitions must be kept separate, and seamless transitions from one model to another and from one discretization to another must be possible. In principle, both types of error can be controlled using legacy FE codes, but because model and element definitions are mixed in the element libraries, the process becomes so complicated that it is impractical in industrial settings.
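
One hypothetical way to express this separation in software is to treat the model definition and the discretization as independent objects that are combined only when a solution is requested. The class and function names in the following Python sketch are invented for illustration; they do not correspond to any existing code, including StressCheck.

    from dataclasses import dataclass

    # Hypothetical data structures illustrating the separation of "what" (the model)
    # from "how" (the discretization). The names are invented for this sketch.

    @dataclass
    class ModelDefinition:
        kind: str                  # e.g. "linear_elasticity", "deformation_plasticity"
        material_parameters: dict  # physical parameters with their domains of calibration


    @dataclass
    class Discretization:
        mesh_id: str               # reference to a mesh, unchanged when the model changes
        polynomial_order: int      # p-level, unchanged when the model changes


    def solve(model: ModelDefinition, discretization: Discretization) -> dict:
        """Stand-in for an FE solution: either argument can be varied independently."""
        return {"model": model.kind, "p": discretization.polynomial_order, "qoi": 0.0}


    # Changing the model form while holding the discretization fixed, or vice versa,
    # requires no change to the other object:
    mesh = Discretization(mesh_id="bracket_v1", polynomial_order=4)
    linear = ModelDefinition("linear_elasticity", {"E": 70e3, "nu": 0.33})
    plastic = ModelDefinition("deformation_plasticity", {"E": 70e3, "nu": 0.33, "n": 10.0})

    for model in (linear, plastic):
        print(solve(model, mesh))

With this arrangement, transitioning from one model to another, or from one discretization to another, changes one object while leaving the other untouched.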

Model form errors are controlled through hierarchic sequences of models, while approximation errors are controlled through hierarchic sequences of finite element spaces [2]. The stopping criterion is that the quantities of interest should remain substantially unchanged in the next level of the hierarchy.
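
The stopping criterion can be expressed as a simple check on successive members of a hierarchy, whether the hierarchy is one of models or of finite element spaces. In the sketch below, the quantity-of-interest values and the tolerance are placeholders; in practice, the values would come from actual solutions computed at successive levels.

    # Placeholder quantity-of-interest values computed on successive levels of a
    # hierarchy (for example, increasing p-level or increasing model order).
    # The numbers are invented for illustration.
    qoi_by_level = [102.7, 98.4, 97.1, 96.9, 96.88]

    TOLERANCE = 0.005  # accept when the relative change between levels is below 0.5%

    accepted_level = None
    for level in range(1, len(qoi_by_level)):
        relative_change = abs(qoi_by_level[level] - qoi_by_level[level - 1]) / abs(qoi_by_level[level])
        if relative_change < TOLERANCE:
            accepted_level = level
            break

    if accepted_level is None:
        print("No level satisfied the tolerance; extend the hierarchy further.")
    else:
        print(f"Quantity of interest accepted at level {accepted_level}: {qoi_by_level[accepted_level]}")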

Advice to Management

To ensure the reliability of predictions, it must be shown that the model form errors and the approximation errors do not exceed pre-specified tolerances. Moreover, the model parameters and data must be within the domain of calibration [3]. Management should not trust model-generated predictions unless evidence is provided showing that these conditions are satisfied.

When considering various marketing claims regarding the promised benefits of numerical simulation, digital twins, and digital transformation, management is well advised to keep this statement by philosopher David Hume in mind: “A wise man apportions his beliefs to the evidence.”


References

[1] MacNeal, R. H. A simple quadrilateral shell element. Computers & Structures, Vol. 8, pp. 175–183, 1978.

[2] Szabó, B. and Babuška, I. Finite Element Analysis: Method, Verification and Validation, 2nd ed., Hoboken, NJ: John Wiley & Sons, Inc., 2021.

[3] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.

