The Kuhn Cycle in the Engineering Sciences

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In the engineering sciences, mathematical models are used as sources of information for making technical decisions. Consequently, decision-makers need convincing evidence that relying on predictions from a mathematical model is justified. Such reliance is warranted only if:

  • the model has been validated, and its domain of calibration is clearly defined;
  • the errors of approximation are known to be within permissible tolerances [1].

Model development projects are essentially scientific research projects. As such, they are subject to the operation of the Kuhn Cycle, named after Thomas Kuhn, who identified five stages in scientific research projects [2]:

  • Normal Science – Development of mathematical models based on the best scientific understanding of the subject matter.
  • Model Drift – Limitations of the model are encountered. Certain quantities of interest cannot be predicted by the model with sufficient reliability.
  • Model Crisis – Model drift becomes excessive.  Attempts to remove the limitations of the model are unsuccessful.
  • Model Revolution – This begins when candidates for a new model are proposed. The domain of calibration of the new model is sufficiently large to resolve most, if not all, issues identified with the preceding model.
  • Paradigm Change – A paradigm consists of the fundamental ideas, methods, language, and theories that are accepted by the members of a scientific or professional community. In this phase, a new paradigm emerges, which then becomes the new Normal Science.

The Kuhn cycle is a valuable concept for understanding how mathematical models evolve. It highlights the importance of paradigms in shaping model development and the role of paradigm shifts in the process.

Example: Linear Elastic Fracture Mechanics

In linear elastic fracture mechanics (LEFM), the goal is to predict the size of a crack, given a geometrical description, an initial crack configuration, material properties, and a load spectrum. The mathematical model comprises (a) the equations of the theory of elasticity, (b) a predictor that establishes a relationship between a functional defined on the elastic stress field (usually the stress intensity factor), and increments in crack length caused by the application of constant amplitude cyclic loads, (c) a statistical model that accounts for the natural dispersion of crack lengths, and (d) an algorithm that accounts for the effects of tensile and compressive overload events.

Evolution of LEFM

The period of normal science in LEFM began around 1920 and ended in the 1970s. Many important contributions were made in that period. For a historical overview and commentaries, see reference [3]. Here, I mention only three seminal contributions: Alan A. Griffith investigated brittle fracture; George R. Irwin modified Griffith’s theory for the fracturing of metals; and Paul C. Paris proposed the following relationship between the increment in crack length per cycle of loading and the stress intensity factor K:

{da\over dN} = C(K_{max}-K_{min})^m \qquad (1)

where N is the cycle count, C and m are constants determined by calibration. This empirical formula is known as Paris’ law. Numerous variants have been proposed to account for cycle ratios and limiting conditions.
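To make the role of the calibration constants concrete, here is a minimal Python sketch of the cycle-by-cycle integration of Paris’ law for a center-cracked panel under constant-amplitude loading. The values of C, m, and the geometry factor β are illustrative placeholders rather than calibrated data, and the textbook relation K = βσ√(πa) with a constant β is assumed; real analyses update the geometry factor as the crack grows.

```python
import math

def paris_increment(delta_K, C, m):
    """Crack-length increment per cycle according to Paris' law, eq. (1)."""
    return C * delta_K**m

def grow_crack(a0, sigma_max, sigma_min, C, m, cycles, beta=1.0):
    """Integrate Paris' law cycle by cycle, assuming K = beta*sigma*sqrt(pi*a)."""
    a = a0
    for _ in range(cycles):
        K_max = beta * sigma_max * math.sqrt(math.pi * a)
        K_min = beta * sigma_min * math.sqrt(math.pi * a)
        a += paris_increment(K_max - K_min, C, m)
    return a

# Illustrative (uncalibrated) values: lengths in inches, stresses in ksi.
a_final = grow_crack(a0=0.070, sigma_max=22.5, sigma_min=0.0,
                     C=1.0e-9, m=3.0, cycles=10_000)
print(f"half-crack length after 10,000 cycles: {a_final:.4f} in")
```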

In 1972, the US Air Force adopted damage-tolerant design as part of the Airplane Structural Integrity Program (ASIP) [MIL-STD-1530, 1972]. Damage-tolerant design requires showing that a specified maximum initial damage would not produce a crack large enough to endanger flight safety. The paradigm that Paris’ law is the predictor of crack growth under cyclic loading is now universally accepted.

Fly in the Ointment

Paris’ law is defined on two-dimensional stress fields. However, it is not possible to calibrate any predictor in two dimensions. The specimens used in calibration experiments are typically plate-like objects. In the neighborhood of the points where the crack front intersects the surfaces, the stress field is very different from what is assumed in Paris’ law. Therefore, the parameters C and m in equation (1) are not purely material properties but also depend on the thickness of the test specimen. Nevertheless, as long as Paris’ law is applied to long cracks in plates, the predictions are accurate enough to be useful for practical purposes. However, problems arise when a crack is small relative to the thickness of the plate, for instance, a small corner crack at a fastener hole, which is among the most important cases in damage-tolerant design. Attempts to fix this problem through the introduction of correction factors have not been successful. First, model drift and then model crisis set in.

The consensus that the stress intensity factor drives crack propagation consolidated into a dogma about 50 years ago. New generations of engineers have been indoctrinated with this belief, and today, any challenge to this belief is met with utmost skepticism and even hostility. An unfortunate consequence of this is that healthy model development stalled about 50 years ago. The key requirement of damage-tolerant design, which is to reliably predict the size of a crack after the application of a load spectrum, is not met even in those cases where Paris’ law is applicable. This point is illustrated in the following section.

Evidence of the Model Crisis

A round-robin exercise was conducted in 2022. The problem statement specified a centrally cracked 7075-T651 aluminum panel of thickness 0.245 inches and width 3.954 inches, a load spectrum, and an initial half-crack length of 0.070 inches. The quantity of interest was the half-crack length as a function of the number of cycles of loading. The specimen configuration and notation are shown in Fig. 1(a). The load spectrum was characterized by two load maxima given in terms of the nominal stress values σ1 = 22.5 ksi and σ2 = 2σ1/3. The load σ = σ1 was applied in cycles numbered 1, 101, 201, etc.; the load σ = σ2 was applied in all other cycles. The minimum load was zero for all cycles. In comparison with typical design load spectra, this is a highly simplified spectrum. The participants in this round-robin were professional organizations that routinely provide estimates of this kind in support of design and certification decisions.
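For readers who wish to reproduce the loading history, the stated spectrum can be written down in a few lines. The sketch below merely restates the problem definition; the function name and structure are mine and are not part of the round-robin statement.

```python
def spectrum_cycle(n, sigma1=22.5):
    """Return (max, min) nominal stress in ksi for cycle n of the spectrum.

    Cycles 1, 101, 201, ... carry the higher load sigma1; all other cycles
    carry sigma2 = 2*sigma1/3. The minimum load is zero for every cycle.
    """
    sigma2 = 2.0 * sigma1 / 3.0
    sigma_max = sigma1 if (n - 1) % 100 == 0 else sigma2
    return sigma_max, 0.0

# Cycle 1 carries the peak load, cycles 2-100 do not, cycle 101 does again.
for n in (1, 2, 100, 101):
    print(n, spectrum_cycle(n))
```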

Calibration data were provided in the form of tabular records of da/dN corresponding to (Kmax – Kmin) for various (Kmin/Kmax) ratios. The participants were asked to account for the effects of the periodic overload events on the crack length. 

A positive overload causes a larger increment of the crack length in accordance with Paris’ law, and it also causes compressive residual stress to develop ahead of the crack tip. This residual stress retards crack growth in subsequent cycles while the crack traverses the zone of compressive residual stress. Various models have been formulated to account for retardation (see, for example, AFGROW – DTD Handbook Section 5.2.1.2). Each participant chose a different model. No information was given on whether or how those models were validated. The results of the experiments were revealed only after the predictions were made.
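Retardation models differ considerably in their details. As one concrete illustration only, the sketch below combines Paris’ law with a Wheeler-type retardation factor and applies it to the round-robin spectrum. The plane-stress plastic-zone estimate, the shaping exponent p, and all numerical values are assumptions made for illustration; this is not the model used by any of the participants, and the parameters are not calibrated.

```python
import math

def wheeler_growth(a0, cycles, C, m, p, sigma_yield, sigma1=22.5, beta=1.0):
    """Cycle-by-cycle crack growth with a Wheeler-type retardation factor.

    Baseline growth follows Paris' law. While the current plastic zone lies
    inside the plastic zone left by a prior overload, the growth increment is
    scaled by (r / (overload_boundary - a))**p < 1. A plane-stress estimate
    r = (K_max / sigma_yield)**2 / (2*pi) is assumed for the plastic zone.
    """
    a = a0
    overload_boundary = 0.0   # farthest extent (a + r) reached by any prior cycle
    for n in range(1, cycles + 1):
        sigma_max = sigma1 if (n - 1) % 100 == 0 else 2.0 * sigma1 / 3.0
        K_max = beta * sigma_max * math.sqrt(math.pi * a)
        r = (K_max / sigma_yield) ** 2 / (2.0 * math.pi)
        if a + r >= overload_boundary:
            phi = 1.0                                   # no retardation
        else:
            phi = (r / (overload_boundary - a)) ** p    # retarded growth
        a += phi * C * K_max**m   # minimum load is zero, so delta_K = K_max
        overload_boundary = max(overload_boundary, a + r)
    return a

# Illustrative values only; 73 ksi is a typical yield strength for 7075-T651.
a_final = wheeler_growth(a0=0.070, cycles=20_000, C=1.0e-9, m=3.0,
                         p=1.5, sigma_yield=73.0)
print(f"predicted half-crack length: {a_final:.3f} in")
```

Whether such a model is adequate cannot be decided by inspection; that is precisely the question a validation protocol is meant to answer.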

Fig. 1 (b) shows the results of the experiments and four of the predicted outcomes. In three of the four cases, the predicted number of cycles is substantially greater than the load cycles in the experiments, and there is a large spread between the predictions.

Figure 1: (a) Test article. (b) The results of experiments and predicted crack lengths.

This problem is within the domain of calibration of Paris’ law, and the available calibration records cover the interval of the (Kmax – Kmin) values used in the round robin exercise. Therefore, in this instance, the suitability of the stress intensity factor to serve as a predictor of crack propagation is not in question.

Given that the primary objective of LEFM is to provide estimates of crack length following the application of a load spectrum, and that this was a highly simplified problem, these results suggest that retardation models based on LEFM are in a state of crisis. This crisis can be resolved through the application of the principles and procedures of verification, validation, and uncertainty quantification (VVUQ) in a model development project conducted in accordance with the procedures described in [1].


Outlook

Damage-tolerant design necessitates reliable prediction of crack size, given an initial flaw and a load spectrum. However, the outcome of the round-robin exercise indicates that this key requirement is not currently met. While I’m not in a position to estimate the economic costs of this, it’s safe to say they impose a significant burden on military aircraft sustainment programs.

I believe that to advance LEFM beyond the crisis stage, organizations that rely on damage-tolerant design procedures must mandate the application of verification, validation, and uncertainty quantification procedures, as outlined in reference [1]. This will not be an easy task, however. A paradigm shift can be a controversial and messy process. As W. Edwards Deming, American engineer, economist, and composer, observed: “Two basic rules of life are: 1) Change is inevitable. 2) Everybody resists change.”


References

[1] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications, vol. 162, pp. 206–214, 2024.

[2] Kuhn, T. S. The structure of scientific revolutions. University of Chicago Press, 1997.

[3] Rossmanith, H. P., Ed., Fracture Mechanics Research in Retrospect. An Anniversary Volume in Honour of George R. Irwin’s 90th Birthday, Rotterdam: A. A. Balkema, 1997.


Model Development in the Engineering Sciences

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In the engineering sciences, mathematical models are based on the equations of continuum mechanics, heat flow, Maxwell, Navier-Stokes, or some combination of these. These equations have been validated and their domains of calibration are generally much larger than the expected domain of calibration of the model being developed. In the terminology introduced by Lakatos [1], the assumptions incorporated in these equations are called hardcore assumptions, and the assumptions incorporated in the other constituents of a model are called auxiliary hypotheses. Model development is concerned with the formulation, calibration, and validation of auxiliary hypotheses. 

Assume, for example, that we are interested in predicting the length of a small crack in a flight-critical aircraft component, caused by the application of a load spectrum. In this case, the mathematical model comprises the equations of continuum mechanics (the hardcore assumptions) and the following auxiliary hypotheses: (a) a predictor of crack propagation, (b) an algorithm that accounts for the statistical dispersion of the calibration data, and (c) an algorithm that accounts for the retardation effects of tensile overload events and the acceleration effects of compressive overload events.

The auxiliary hypotheses introduce parameters that have to be determined by calibration. In our example, we are concerned with crack propagation caused by variable-cycle loading. In linear elastic fracture mechanics (LEFM), the commonly used predictor of the crack increment per cycle is the difference between the maximum and minimum positive values of the stress intensity factor within a load cycle, denoted by ΔK.

The relationship between crack increment per cycle, denoted by Δa, and the corresponding ΔK value is determined through calibration experiments. Various hypotheses are used to account for the cycle ratio. Additional auxiliary hypotheses account for the statistical dispersion of crack length and the retardation and acceleration events caused by loading sequence effects. The formulation of auxiliary hypotheses is a creative process. Therefore, model development projects must be open to new ideas. Many plausible hypotheses have been and can yet be proposed.

Ideally, the predictive performance of competing alternatives would be evaluated using all of the qualified data available for calibration, and the models ranked accordingly. Given the stochastic nature of experimental data, predictions should be in terms of probabilities of outcome. Consequently, the proper measure of predictive performance is the likelihood function. Ranking must also account for the size of the domain of calibration [2]. The volume of experimental information tends to increase over time. Consequently, model development is an open-ended activity encompassing subjective and objective elements.
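To state the preceding point in formulas: if the calibration data consist of observed increments Δa_i at predictor values ΔK_i, and the statistical model assigns a probability density p to each increment given the model parameters θ, then the quantity to be maximized during calibration, and compared across candidate models, is the likelihood or, in practice, its logarithm:

\mathcal{L}(\theta)=\prod_{i=1}^{n} p(\Delta a_i \mid \Delta K_i,\theta), \qquad \log\mathcal{L}(\theta)=\sum_{i=1}^{n}\log p(\Delta a_i \mid \Delta K_i,\theta)

The specific form of the density p is supplied by the statistical model; the notation here is generic.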

Example: Calibration and Ranking Models of Crack Growth in LEFM

Let us suppose that we want to decide whether we should prefer the Walker [3] or Forman [4] version of the predictor of crack propagation based on experimental data consisting of specimen dimensions, elastic properties, and tabular data of measured crack length (a) vs. the observed number of load cycles (N) for each cycle ratio (R). For the sake of simplicity, we assume constant cycle loading conditions.
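For reference, the forms of these two predictors most commonly quoted in the literature are shown below, where R = Kmin/Kmax is the cycle ratio, Kc is the fracture toughness, and C, m, and γ are parameters determined by calibration; the original formulations in [3] and [4], and specific implementations, may use variants of these expressions.

{da\over dN} = C\left[{\Delta K\over (1-R)^{1-\gamma}}\right]^m \qquad \text{(Walker)}

{da\over dN} = {C\,(\Delta K)^m\over (1-R)K_c-\Delta K} \qquad \text{(Forman)}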

The first step is to construct a statistical model for the probability density of crack length, given the number of cycles and the characteristics of the load spectrum. The second step is to extract the Δa/ΔN vs. ΔK data from the a vs. N data, where ΔK is determined from the specimen dimensions and loading conditions. The third step is to calibrate each of the candidate hypotheses. This involves setting the predictor’s parameters so that the likelihood of the predicted data is maximum. This process is illustrated schematically by the flow chart shown in Fig. 1.

Figure 1: Schematic illustration of the calibration process.

Finally, the calibration process is documented and the domain of calibration is defined. The model that scored the highest likelihood value is preferred. The ranking is, of course, conditioned on the data available for calibration. As new data are acquired, the calibration process has to be repeated, and the ranking may change. It is also possible that the likelihood values are so close that the results do not justify preferring one model over another. Those models are deemed equivalent. Model development is an open-ended process. No one has the final say.
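The third step can be illustrated with a minimal sketch. The Python code below fits the Walker predictor to a handful of placeholder Δa/ΔN vs. ΔK records by maximizing a lognormal likelihood; the scatter model, the numerical values, and the optimizer settings are assumptions made for illustration, not the procedure of any particular organization. Repeating the fit with the Forman predictor and comparing the maximized log-likelihood values would complete the ranking described above.

```python
import math
import numpy as np
from scipy.optimize import minimize

def walker_rate(delta_K, R, C, m, gamma):
    """Walker predictor: da/dN = C * (delta_K / (1 - R)**(1 - gamma))**m."""
    return C * (delta_K / (1.0 - R) ** (1.0 - gamma)) ** m

def neg_log_likelihood(params, delta_K, R, rates):
    """Negative log-likelihood, assuming lognormal scatter about the prediction."""
    logC, m, gamma, log_sigma = params
    sigma = math.exp(log_sigma)
    pred = walker_rate(delta_K, R, math.exp(logC), m, gamma)
    z = (np.log(rates) - np.log(pred)) / sigma
    return np.sum(0.5 * z**2 + np.log(sigma) + 0.5 * np.log(2.0 * np.pi)
                  + np.log(rates))   # last term: Jacobian of the log transform

# Placeholder records: (delta_K [ksi*sqrt(in)], cycle ratio R, da/dN [in/cycle]).
delta_K = np.array([ 8.0, 10.0, 12.0, 15.0, 10.0, 12.0])
R       = np.array([ 0.0,  0.0,  0.0,  0.0,  0.5,  0.5])
rates   = np.array([5e-7, 1e-6, 1.8e-6, 3.5e-6, 2.1e-6, 3.9e-6])

x0 = [math.log(1e-9), 3.0, 0.5, math.log(0.3)]   # starting guess for the search
result = minimize(neg_log_likelihood, x0, args=(delta_K, R, rates),
                  method="Nelder-Mead")
print("maximized log-likelihood:", -result.fun)
print("calibrated [ln C, m, gamma, ln sigma]:", result.x)
```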

Opportunities for Improvement

To my knowledge, none of the predictors of crack propagation used in current professional practice have been put through a process of verification, validation, and uncertainty quantification (VVUQ) as outlined in the foregoing section. Rather, investigators tend to follow an unstructured process whereby they have an idea for a predictor and, using their experimental data, show that, with a suitable choice of parameters, their definition of the predictor works. Typically, the domain of calibration is not defined explicitly but can be inferred from the documentation. The result is that the relative merit of the ideas put forward by various investigators is unknown, and the domains of calibration tend to be very small. In addition, no assurances are given regarding the quality of the data on which the calibration depends. In many instances, only the derived data (i.e., the Δa/ΔN vs. ΔK data), rather than the original records of observation (i.e., the a vs. N data), are made available. This leaves unanswered the question of whether the ΔK values were properly verified.

The situation is similar in the development of design rules for metallic and composite materials: Much work is being done without the disciplined application of VVUQ protocols.  As a result, most of that work is being wasted. 

For example, the World Wide Failure Exercise (WWFE), an international project with the mission to find the best method to accurately predict the strength of composite materials, failed to produce the desired result; see, for example, [5]. A highly disturbing observation was made by Professor Mike Hinton, one of the organizers of the WWFE, in his keynote address to the 2011 NAFEMS World Congress [6]: “The theories coded into current FE tools almost certainly differ from the original theory and from the original creator’s intent.” I do not believe that significant improvements in predictive performance have occurred since then.

In my view, progress will not be possible unless and until VVUQ protocols are adopted for model development projects.  These protocols play a crucial role in the evolutionary development of mathematical models. 


References

[1] Lakatos, I. The methodology of scientific research programmes, Vol. 1, J. Worrall and G. Currie, Eds., Cambridge University Press, 1978.

[2] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, 16(2), pp.75-86, 2021.

[3] Walker, K. The Effect of Stress Ratio During Crack Propagation and Fatigue for 2024-T3 and 7075-T6 Aluminum. Effects of Environment and Complex Load History on Fatigue Life, ASTM International, pp. 1–14, 1970. doi:10.1520/stp32032s, ISBN 9780803100329

[4] Forman, R. G., Kearney, V. E.  and Engle, R. M.  Numerical analysis of crack propagation in cyclic-loaded structures. Journal of Basic Engineering, pp. 459-463, September 1967.

[5] Christensen, R. M. Letter to World Wide Failure Exercise, WWFE-II. https://www.failurecriteria.com/lettertoworldwid.html

[6] Hinton, M. Failure Criteria in Fibre Reinforced Polymer Composites: Can any of the Predictive Theories be Trusted?  NAFEMS World Congress, Boston, May 2011.


Questions About Singularities

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In my many years of teaching finite element analysis to engineering students, I had to answer many questions about singularities. The usual question, typically in a skeptical tone, was: Are singularities real? 

As I was teaching mechanical engineering students, I understood that the question was in the context of continuum mechanics, and the tone suggested that the student found the idea of a material withstanding infinitely large stresses to be utterly absurd. I also understood that he was really interested in knowing why singularities appear in our solutions, and, as a practical matter, what we are supposed to do about them. 

I seized such teachable moments to discuss the relationship between mathematical models and physical reality. I explained that a mathematical model is a precisely formulated idea about some specific aspect of physical reality, and should never be confused with reality. Except for marketing pronouncements, there is no such thing as ‘real-world simulation’ or ‘simulating reality’.

Mathematical models are based on certain assumptions which impose limitations on the scope of applicability of the model. For example, models that incorporate the assumptions of linear elasticity require the strains to be much smaller than unity, the stress to be proportional to the strain independently of its magnitude, and the material to be homogeneous. As long as these assumptions are satisfied, the model will make reasonable predictions of deformation and stress distribution. However, the model will produce distorted images of reality when those limitations are exceeded. A common mistake in interpreting model predictions is not taking the limitations of the model into account.

Regarding the practical question of what to do with singularities, we need to distinguish between cases where singularities are just nuisances and cases where a singularity is the object of simulation.

Singularities as Nuisances

Singularities usually occur due to some minor simplification: For example, in a complicated mechanical or structural component, we may omit fillets, represent the applied forces by point loads, allow abrupt changes in constraint conditions, and so on. In other words, we make the a priori judgment that those simplifications will not significantly influence the quantities of interest.

It is useful to think of the solution domain Ω as consisting of a region of primary interest Ω1 and a region of secondary interest Ω2. The quantities of interest are defined on Ω1. The role of Ω2 is to provide the boundary conditions for Ω1. It is sufficient to ensure that the error, measured in the norm of the formulation, is small on Ω2, a condition that is usually not difficult to satisfy, even when minor features, such as fillets and details of load distribution are omitted. 

Using the terminology of structural mechanics, the problem is one of load-displacement relationships on the region of secondary interest, whereas, on the region of primary interest, it is one of strength relationships, that is, the exact values of the quantities of interest have to be finite numbers.

Singularities as the Objects of Simulation

Linear elastic fracture mechanics (LEFM) is an important sub-field of structural mechanics. The goal of the simulation is to predict the size of a crack in a structural or mechanical component, given an initial crack configuration and a load spectrum. Since crack propagation involves highly nonlinear, irreversible processes, it may seem surprising that the predictor can be determined from the stress field of a problem of linear elasticity. A brief explanation follows.

Consider a crack-like notch in an elastic plate, loaded by forces F, as shown in Fig. 1. At the notch tip, the solution of linear elasticity predicts infinitely high stresses. However, pointwise stresses (and strains) have no meaning for real materials. The smallest volume on which stress is meaningfully defined for a real material is the representative volume element (RVE). Failure theories are formulated with reference to stresses or strains averaged over RVEs, not points.

Figure 1. Crack-like notch. Notation.

Surrounding the notch tip is a process zone, bounded by the curve ΓPZ shown in Fig. 1. In the process zone, large dislocations and voids form and coalesce. These processes are not only outside of the scope of linear elasticity but outside of the scope of continuum mechanics as well. In the zone labeled ΩNL, continuum mechanics with nonlinear material properties is applicable. On and outside of ΓNL, the linear theory of elasticity is applicable.

The fundamental modeling assumptions of LEFM are that (a) there is a small circle of radius R, on which the solutions of the nonlinear continuum problem and the linear elasticity problem are virtually identical, and (b) the entire process inside the circle is characterized by the stress intensity factor(s) [1]. These assumptions permit experimental determination of the relationship between crack increments and the difference between the stress intensity factors corresponding to the maximum and minimum load levels in a load cycle, denoted by ΔK. The prediction of crack lengths is based on such empirical relationships.
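To state assumption (b) in formulas: near the crack tip, the solution of the linear elasticity problem has the universal form

\sigma_{ij}(r,\theta)\approx {K\over\sqrt{2\pi r}}\,f_{ij}(\theta)

where (r, θ) are polar coordinates centered on the crack tip and the angular functions f_ij are tabulated in standard texts (one loading mode is shown). Because a single scalar K controls the entire near-tip field, a load cycle can be characterized by ΔK = Kmax − Kmin, and empirical crack-growth relationships can be expressed in terms of ΔK.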

LEFM models have been validated under constant cycle loading for long cracks in thin plates.  Prediction of the growth of small cracks in 3-dimensional stress fields is much more difficult and several ad-hoc procedures are in use [2]. It is safe to say that none of those procedures have been validated under proper validation protocols, such as those outlined in [3].

Conceptual Issues

LEFM is based on the assumption that the driver of crack propagation is a function of the stress intensity factors defined on two-dimensional stress fields. There are two major conceptual problems:

  • The relationship between crack increments and ΔK can only be calibrated using 3-dimensional test specimens which have singular points where the crack front intersects the surface of the specimens.  Those singularities, not present in two dimensions, influence the relationship between ΔK and the crack increment, hence that relationship is not purely a material property but also depends on the thickness dimension of the test article.
  • Application of current LEFM models to very short cracks, such as those that occur at fastener holes in aircraft structures, is highly problematic since the stress field is very different from the two-dimensional stress field on which the stress intensity factors are defined. Other drivers of crack propagation, defined on three-dimensional stress fields, have not been explored. Rather, correction factors have been used. However, the domains of calibration of the correction factors are generally unknown.

We now have reliable methods available to address these issues using the procedures of verification, validation, and uncertainty quantification (VVUQ) [3]. It will take a substantial investment, however, to upgrade the predictive performance of the currently used  LEFM models.  


References

[1] Szabó, B. and Babuška, I. Finite Element Analysis. Method, Verification, and Validation. John Wiley & Sons, Inc., 2021.

[2] AFGROW DTD Handbook. https://afgrow.net/applications/DTDHandbook.

[3] Szabó, B. and Babuška, I. Methodology of model development in the applied sciences. Journal of Computational and Applied Mechanics, vol. 16, no. 2, pp. 75–86, 2021 (open access).

