The Kuhn Cycle in the Engineering Sciences

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


In the engineering sciences, mathematical models are used as sources of information for making technical decisions. Consequently, decision-makers need convincing evidence that relying on predictions from a mathematical model is justified. Such reliance is warranted only if:

  • the model has been validated, and its domain of calibration is clearly defined;
  • the errors of approximation are known to be within permissible tolerances [1].

Model development projects are essentially scientific research projects. As such, they are subject to the operation of the Kuhn Cycle, named after Thomas Kuhn, who identified five stages in scientific research projects [2]:

  • Normal Science – Development of mathematical models based on the best scientific understanding of the subject matter.
  • Model Drift – Limitations of the model are encountered. Certain quantities of interest cannot be predicted by the model with sufficient reliability.
  • Model Crisis – Model drift becomes excessive.  Attempts to remove the limitations of the model are unsuccessful.
  • Model Revolution – This begins when candidates for a new model are proposed. The domain of calibration of the new model is sufficiently large to resolve most, if not all, issues identified with the preceding model.
  • Paradigm Change – A paradigm consists of the fundamental ideas, methods, language, and theories that are accepted by the members of a scientific or professional community. In this phase, a new paradigm emerges, which then becomes the new Normal Science.

The Kuhn cycle is a valuable concept for understanding how mathematical models evolve. It highlights the importance of paradigms in shaping model development and the role of paradigm shifts in the process.

Example: Linear Elastic Fracture Mechanics

In linear elastic fracture mechanics (LEFM), the goal is to predict the size of a crack, given a geometrical description, an initial crack configuration, material properties, and a load spectrum. The mathematical model comprises (a) the equations of the theory of elasticity, (b) a predictor that establishes a relationship between a functional defined on the elastic stress field (usually the stress intensity factor) and increments in crack length caused by the application of constant-amplitude cyclic loads, (c) a statistical model that accounts for the natural dispersion of crack lengths, and (d) an algorithm that accounts for the effects of tensile and compressive overload events.

Evolution of LEFM

The period of normal science in LEFM began around 1920 and ended in the 1970s. Many important contributions were made in that period. For a historical overview and commentaries, see reference [3]. Here, I mention only three seminal contributions: the work of Alan A. Griffith, who investigated brittle fracture; George R. Irwin, who modified Griffith’s theory for the fracturing of metals; and Paul C. Paris, who proposed the following relationship between the increment in crack length per cycle of loading and the stress intensity factor K:

\frac{da}{dN} = C(K_{max}-K_{min})^m \quad (1)

where N is the cycle count, and C and m are constants determined by calibration. This empirical formula is known as Paris’ law. Numerous variants have been proposed to account for cycle ratios and limiting conditions.
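To make equation (1) concrete, the sketch below integrates Paris’ law cycle by cycle for a center crack in a wide plate. This is a minimal illustration, assuming placeholder values of C and m, an assumed load level, and the wide-plate relation K = σ√(πa); none of these are calibrated data.

```python
import math

# Illustrative (uncalibrated) constants -- placeholders, not material data
C = 1.0e-9        # Paris coefficient, assumed, units consistent with ksi*sqrt(in)
m = 3.0           # Paris exponent, assumed
sigma_max = 20.0  # maximum nominal stress, ksi (assumed)
sigma_min = 0.0   # minimum nominal stress, ksi

def delta_K(a):
    # Stress intensity factor range for a center crack of half length a
    # in a wide plate: K = sigma * sqrt(pi * a)  (assumed geometry)
    return (sigma_max - sigma_min) * math.sqrt(math.pi * a)

a = 0.070         # initial half crack length, inches
a_limit = 1.0     # half crack length at which the loop stops, inches
N = 0
while a < a_limit:
    a += C * delta_K(a) ** m   # Paris' law, equation (1), one cycle
    N += 1

print(f"cycles to grow from 0.070 in to {a_limit:.1f} in: {N}")
```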

In 1972, the US Air Force adopted damage-tolerant design as part of the Airplane Structural Integrity Program (ASIP) [MIL-STD-1530, 1972]. Damage-tolerant design requires showing that a specified maximum initial damage would not produce a crack large enough to endanger flight safety. The paradigm that Paris’ law is the predictor of crack growth under cyclic loading is now universally accepted.

Fly in the Ointment

Paris’ law is defined on two-dimensional stress fields. However, it is not possible to calibrate any predictor in two dimensions. The specimens used in calibration experiments are typically plate-like objects. In the neighborhood of the points where the crack front intersects the surfaces, the stress field is very different from what is assumed in Paris’ law. Therefore, the parameters C and m in equation (1) are not purely material properties but also depend on the thickness of the test specimen. Nevertheless, as long as Paris’ law is applied to long cracks in plates, the predictions are accurate enough to be useful for practical purposes. Problems arise, however, when a crack is small relative to the thickness of the plate, for instance, a small corner crack at a fastener hole, which is among the most important cases in damage-tolerant design. Attempts to fix this problem through the introduction of correction factors have not been successful. First model drift and then model crisis set in.

The consensus that the stress intensity factor drives crack propagation consolidated into a dogma about 50 years ago. New generations of engineers have been indoctrinated with this belief, and today, any challenge to this belief is met with utmost skepticism and even hostility. An unfortunate consequence of this is that healthy model development stalled about 50 years ago. The key requirement of damage-tolerant design, which is to reliably predict the size of a crack after the application of a load spectrum, is not met even in those cases where Paris’ law is applicable. This point is illustrated in the following section.

Evidence of the Model Crisis

A round-robin exercise was conducted in 2022. The problem statement was as follows: given a centrally cracked 7075-T651 aluminum panel of thickness 0.245 inches and width 3.954 inches, a load spectrum, and an initial half crack length of 0.070 inches, predict the half crack length as a function of the number of load cycles. The specimen configuration and notation are shown in Fig. 1(a). The load spectrum was characterized by two load maxima given in terms of the nominal stress values σ1 = 22.5 ksi and σ2 = 2σ1/3. The load σ = σ1 was applied in cycles numbered 1, 101, 201, etc.; the load σ = σ2 was applied in all other cycles. The minimum load was zero for all cycles. In comparison with typical design load spectra, this is a highly simplified spectrum. The participants in this round-robin were professional organizations that routinely provide estimates of this kind in support of design and certification decisions.

Calibration data were provided in the form of tabular records of da/dN corresponding to (Kmax – Kmin) for various (Kmin/Kmax) ratios. The participants were asked to account for the effects of the periodic overload events on the crack length. 
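To indicate how such tabular calibration records might be used in a computation, the sketch below interpolates log(da/dN) against log(Kmax – Kmin) within each table and linearly between the bracketing (Kmin/Kmax) ratios. The table entries are hypothetical placeholders, not the round-robin data.

```python
import numpy as np

# Hypothetical calibration records: R = Kmin/Kmax -> (delta_K, da/dN) tables,
# both columns ascending. The numbers are made up for illustration only.
TABLES = {
    0.0: (np.array([5.0, 10.0, 20.0]), np.array([1e-7, 1e-6, 1e-5])),
    0.5: (np.array([5.0, 10.0, 20.0]), np.array([3e-7, 3e-6, 3e-5])),
}

def dadn(delta_K, R):
    """Crack growth rate from tabular data: log-log interpolation in
    delta_K within each table, linear interpolation between R values."""
    Rs = sorted(TABLES)
    R = min(max(R, Rs[0]), Rs[-1])           # clamp to the tabulated R range
    lo = max(r for r in Rs if r <= R)
    hi = min(r for r in Rs if r >= R)

    def rate_at(r):
        dk, rate = TABLES[r]
        return 10.0 ** np.interp(np.log10(delta_K), np.log10(dk), np.log10(rate))

    if lo == hi:
        return rate_at(lo)
    w = (R - lo) / (hi - lo)
    return (1.0 - w) * rate_at(lo) + w * rate_at(hi)

print(dadn(8.0, 0.25))   # growth rate at delta_K = 8, R = 0.25
```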

A positive overload causes a larger increment of the crack length in accordance with Paris’ law, and it also causes compressive residual stress to develop ahead of the crack tip. This residual stress retards crack growth in subsequent cycles while the crack traverses the zone of compressive residual stress. Various models have been formulated to account for retardation (see, for example, AFGROW – DTD Handbook Section 5.2.1.2). Each participant chose a different model. No information was given on whether or how those models were validated. The results of the experiments were revealed only after the predictions were made.
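To give the flavor of one such retardation model, the sketch below implements a Wheeler-type scheme, in which growth increments are scaled by a factor φ ≤ 1 while the current plastic zone lies inside the plastic zone left by an overload. The Paris constants, yield stress, Wheeler exponent, and wide-plate geometry are illustrative assumptions, and no claim is made that any round-robin participant used this particular formulation.

```python
import math

# Illustrative (uncalibrated) constants -- placeholders throughout
C, m = 1.0e-9, 3.0   # Paris constants (assumed)
sigma_y = 70.0       # yield stress, ksi (assumed)
gamma = 1.5          # Wheeler exponent (assumed; fixed by calibration in practice)

def K(a, sigma):
    # Center crack in a wide plate (assumed geometry)
    return sigma * math.sqrt(math.pi * a)

def plastic_zone(K_max):
    # Plane-stress plastic zone size estimate ahead of the crack tip
    return (K_max / sigma_y) ** 2 / (2.0 * math.pi)

# Spectrum resembling the round robin: sigma1 in cycles 1, 101, 201, ...;
# sigma2 = 2*sigma1/3 in all other cycles; minimum load zero.
sigma1 = 22.5
spectrum = [sigma1 if n % 100 == 0 else 2.0 * sigma1 / 3.0 for n in range(20000)]

a = 0.070            # initial half crack length, inches
ol_tip = 0.0         # farthest extent reached by any overload plastic zone

for sigma in spectrum:
    Kmax = K(a, sigma)
    zone_tip = a + plastic_zone(Kmax)
    if zone_tip >= ol_tip:
        phi, ol_tip = 1.0, zone_tip   # outside the overload zone: no retardation
    else:
        phi = (plastic_zone(Kmax) / (ol_tip - a)) ** gamma   # retarded growth
    a += phi * C * Kmax ** m          # Paris increment (R = 0), scaled by phi

print(f"half crack length after {len(spectrum)} cycles: {a:.4f} in")
```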

Fig. 1 (b) shows the results of the experiments and four of the predicted outcomes. In three of the four cases, the predicted number of cycles is substantially greater than the load cycles in the experiments, and there is a large spread between the predictions.

Figure 1: (a) Test article. (b) The results of experiments and predicted crack lengths.

This problem is within the domain of calibration of Paris’ law, and the available calibration records cover the interval of the (Kmax – Kmin) values used in the round robin exercise. Therefore, in this instance, the suitability of the stress intensity factor to serve as a predictor of crack propagation is not in question.

Noting that the primary objective of LEFM is to provide estimates of crack length following the application of a load spectrum, and that this was a highly simplified problem, these results suggest that retardation models based on LEFM are in a state of crisis. This crisis can be resolved through the application of the principles and procedures of verification, validation, and uncertainty quantification (VVUQ) in a model development project conducted in accordance with the procedures described in [1].


Outlook

Damage-tolerant design necessitates reliable prediction of crack size, given an initial flaw and a load spectrum. However, the outcome of the round-robin exercise indicates that this key requirement is not currently met. While I am not in a position to estimate the economic costs of this shortfall, it is safe to say that they account for a significant part of the cost of military aircraft sustainment programs.

I believe that to advance LEFM beyond the crisis stage, organizations that rely on damage-tolerant design procedures must mandate the application of verification, validation, and uncertainty quantification procedures, as outlined in reference [1]. This will not be an easy task, however. A paradigm shift can be a controversial and messy process. As W. Edwards Deming, the American engineer and statistician, observed: “Two basic rules of life are: 1) Change is inevitable. 2) Everybody resists change.”


References

[1] Szabó, B. and Actis, R., The demarcation problem in the applied sciences. Computers and Mathematics with Applications, 162, pp. 206–214, 2024.

[2] Kuhn, T. S., The Structure of Scientific Revolutions. University of Chicago Press, 1962.

[3] Rossmanith, H. P., Ed., Fracture Mechanics Research in Retrospect. An Anniversary Volume in Honour of George R. Irwin’s 90th Birthday, Rotterdam: A. A. Balkema, 1997.


XAI Will Force Clear Thinking About the Nature of Mathematical Models

By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA


It is generally recognized that explainable artificial intelligence (XAI) will play an important role in numerical simulation where it will impose the requirements of reliability, traceability, and auditability. These requirements will necessitate clear thinking about the nature of mathematical models, the trustworthiness of their predictions, and ways to improve their reliability.


What is a Mathematical Model?

A mathematical model is an operator that transforms one set of data D, the input, into another set, the quantities of interest F. In shorthand notation we have:

\boldsymbol D\xrightarrow[(I,\boldsymbol p)]{}\boldsymbol F,\quad (\boldsymbol D, \boldsymbol p) \in ℂ \quad (1)

where the right arrow represents the mathematical model. The letters I and p under the right arrow indicate that the transformation involves an idealization (I) as well as parameters (physical properties) p that are determined by calibration. Restrictions on D and p define the domain of calibration ℂ, which is also called the domain of application of the mathematical model.
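A schematic rendering of equation (1) in code may help fix ideas: the model is a transformation with calibrated parameters p that should, at a minimum, flag inputs falling outside its domain of calibration. The names, bounds, and the stand-in transformation below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Model:
    p: dict                    # calibrated parameters
    calibration_domain: dict   # name -> (low, high) restrictions on the input D

    def in_domain(self, D: dict) -> bool:
        # Check that every restricted input quantity lies within its bounds
        return all(lo <= D[name] <= hi
                   for name, (lo, hi) in self.calibration_domain.items())

    def predict(self, D: dict) -> float:
        if not self.in_domain(D):
            raise ValueError("input lies outside the domain of calibration")
        # Stand-in for the idealization I: a deliberately trivial D -> F map
        return self.p["c"] * D["load"]

model = Model(p={"c": 2.0}, calibration_domain={"load": (0.0, 100.0)})
print(model.predict({"load": 40.0}))    # inside the domain of calibration
# model.predict({"load": 250.0})        # would raise: outside the domain
```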

The formulation of mathematical models is a creative, open-ended activity, guided by insight, experience, and personal preferences. The validation and ranking of mathematical models, on the other hand, are based on objective criteria.

The systematic improvement of the predictive performance of mathematical models and their validation is, essentially, a scientific research program. According to Lakatos [1], a scientific research program has three constituent elements: (a) a set of hardcore assumptions, (b) a set of auxiliary hypotheses, and (c) a problem-solving machinery.

In the applied sciences, the hardcore assumptions are the assumptions incorporated in validated models of broad applicability, such as the theory of elasticity, the Navier-Stokes equations, and the Maxwell equations. The objects of investigation are the auxiliary hypotheses.

For example, in linear elastic fracture mechanics (LEFM), the goal is to predict the probability distribution of the length of a crack in a structural component, given the initial crack configuration and a load spectrum.  In this case, the hardcore assumptions are the assumptions incorporated in the theory of elasticity. One auxiliary hypothesis establishes a relationship between a functional defined on the elastic stress field, such as the stress intensity factor, and increments in crack length caused by the application of cyclic loads. The second auxiliary hypothesis accounts for the effects of overload and underload events.  The third auxiliary hypothesis models the statistical dispersion of crack length.

The parameters p characterize the relationships defined by the auxiliary hypotheses and define the material properties of the hardcore problem. The domain of calibration ℂ is the set of restrictions on the parameters imposed by the assumptions in the hardcore hypothesis and by limitations in the available calibration data.

Problem-Solving

The problem-solving machinery is a numerical method, typically the finite element method. It generates an approximate solution from which the quantities of interest Fnum are computed. It is necessary to show that the relative error in Fnum does not exceed an allowable value τall:

| \boldsymbol F - \boldsymbol F_{num} |/|\boldsymbol F| \le \tau_{all} \quad (2)

To achieve this goal, it is necessary to obtain a sequence of numerical solutions with increasing degrees of freedom [2].
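The sketch below shows what such a verification check can look like: the quantity of interest is computed on a sequence of discretizations with increasing degrees of freedom, the limit value is estimated by extrapolation under an assumed algebraic rate of convergence, and the relative error is tested against the tolerance of equation (2). The sample values are illustrative, not taken from a real analysis.

```python
import numpy as np

# Quantity of interest computed at three refinement levels (assumed values)
dof  = np.array([200.0, 800.0, 3200.0])   # degrees of freedom, uniform ratio
Fnum = np.array([9.21, 9.66, 9.78])       # computed values of the QoI

# Assume F - Fnum ~ k * dof**(-beta); with a uniform refinement ratio r,
# successive increments satisfy (F1 - F0)/(F2 - F1) = r**beta.
q = (Fnum[1] - Fnum[0]) / (Fnum[2] - Fnum[1])
r = dof[1] / dof[0]
beta = np.log(q) / np.log(r)                       # estimated convergence rate
F_est = Fnum[2] + (Fnum[2] - Fnum[1]) / (q - 1.0)  # extrapolated limit value

tau_all = 0.01                                     # allowable relative error
rel_err = abs(F_est - Fnum[2]) / abs(F_est)
print(f"beta ~ {beta:.2f}, F ~ {F_est:.4f}, relative error {rel_err:.3%}:",
      "PASS" if rel_err <= tau_all else "FAIL")
```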

Demarcation

Not all model development projects (MDPs) are created equal. It is useful to differentiate between progressive, stagnant, and improper MDPs: an MDP is progressive if the domain of calibration is increasing; stagnant if the domain of calibration is not increasing; and improper if the auxiliary hypotheses do not conform with the hardcore assumptions, or the problem-solving method does not have the capability to estimate and control the numerical approximation errors in the quantities of interest. Linear elastic fracture mechanics is an example of a stagnant model development project [3].

Presently, the large majority of engineering model development projects are improper. The primary reason for this is that finite element modeling rather than numerical simulation is used, hence the capability to estimate and control the numerical approximation errors is absent.

Finite element modeling is formally similar to equation (1):

\boldsymbol D\xrightarrow[(i,\boldsymbol p)]{} \overline {\boldsymbol F}_{num} \quad (3)

where lowercase i indicates intuition in place of idealization (I), and F̄num replaces F. The overbar distinguishes the solutions obtained by finite element modeling from those obtained by proper application of the finite element method.

In finite element modeling, elements are intuitively selected from the library of a finite element software tool and assembled to represent the object of analysis. Constraints and loads are imposed to produce a numerical problem. The right arrow in equation (3) represents a “numerical model,” which may not be an approximation to a well-defined mathematical model. In that case, F is not defined, and F̄num does not converge to a limit value as the number of degrees of freedom is increased. Consequently, error estimation is not possible. Also, the domain of calibration has a different meaning in finite element modeling than in numerical simulation.

Opportunities for Improving the Predictive Performance of Models

There is a very substantial unrealized potential in numerical simulation technology. To realize that potential, it will be necessary to replace the practice of finite element modeling with numerical simulation and utilize XAI tools to aid analysts in performing simulation projects:

  • Rapid advancements are anticipated in the standardization of engineering workflows, initially through the use of expert-designed engineering simulation applications equipped with autonomous error control procedures.
  • XAI will make it possible to control the errors of approximation very effectively. Ideally, the information in the input will be used to design the initial mesh and the assignment of polynomial degrees so that the desired accuracies are reached in one or two adaptive steps.
  • XAI will be less helpful in controlling model form errors. This is because the formulation of models involves creative input for which no algorithm exists. Nevertheless, XAI will be useful in tracking the evolutionary changes in model development and the relevant experimental data.
  • XAI will help navigate numerical simulation projects by:
    • preventing the use of intuitively plausible but conceptually wrong input data;
    • shortening training time for the operators of simulation software tools.

The Main Points

  • The reliability and effectiveness of numerical simulation can be greatly enhanced through integration with XAI processes. 
  • The main elements of XAI-integrated numerical simulation processes are shown in Figure 1:

Figure 1: The main elements of XAI-integrated numerical simulation.
  • The integration of numerical simulation with explainable artificial intelligence tools will force the adoption of science-based algorithms for solution verification and hierarchic modeling approaches. 

References

[1] I. Lakatos, The Methodology of Scientific Research Programmes. Philosophical Papers, vol. 1, J. Worrall and G. Currie, eds., Cambridge University Press, 1978.

[2] B. Szabó and I. Babuška, Finite Element Analysis: Method, Verification and Validation. 2nd edition, John Wiley & Sons, Inc., 2021.

[3] B. Szabó and R. Actis, The demarcation problem in the applied sciences. Computers and Mathematics with Applications, 162, pp. 206–214, 2024.

