Chaos in the Brickyard Revisited
By Dr. Barna Szabó
Engineering Software Research and Development, Inc.
St. Louis, Missouri USA
In a letter published in Science in 1963, Bernard K. Forscher used the metaphor of building edifices to represent the construction of scientific models, also called laws. These models explain observed phenomena and make predictions that extend beyond the available observations [1]. Quoting from Forscher’s letter: “The making of bricks was a difficult and expensive undertaking, and the wise builder avoided waste by making only bricks of the shape and size necessary for the enterprise at hand.”
Progress was limited by the availability of bricks. To speed things up, artisans, referred to as junior scientists, were hired to work on brickmaking. Initially, this arrangement worked well. Unfortunately, the brickmakers became obsessed with making bricks. They argued that if enough bricks were available, the builders would be able to select what was necessary. Large sums of money were allocated, and the number of brickmakers mushroomed. They came to believe that producing a sufficient number of bricks was equivalent to building an edifice. The land became flooded with bricks, and more and more storage places, called journals, had to be created. Forscher concluded with this cheerless note: “And saddest of all, sometimes no effort was made even to maintain a distinction between a pile of bricks and a true edifice.”
A chaotic brickyard. Image produced by Microsoft Copilot.
This was the situation sixty years ago. Over time, the “publish or perish” ethos intensified within the academic culture, prioritizing quantity over quality. This led to a surge in the production of academic papers, metaphorically akin to “brickmaking.” Additionally, a consensus emerged among researchers regarding what constitutes acceptable ideas and methods worthy of funding and publication. This consensus is upheld by the peer-review systems of granting agencies and journals, which tend to discourage challenges to mainstream views, thereby reinforcing established norms and practices and discouraging innovation. Successful grantsmanship requires that the topics proposed for investigation be aligned with the mainstream.
Stagnation in the Fundamental Sciences
Sabine Hossenfelder, a theoretical physicist, argues that physics, particularly in its foundational aspects, has been stagnant for the past 50 years, even though the number of physicists and the number of papers published in the field have increased steadily. In her view, the foundations of physics have seen no significant progress since the completion of the standard model of particle physics in the mid-1970s. She criticizes the field for relying too heavily on mathematics rather than empirical evidence, which has led physicists to focus more on the aesthetics of their theories than on nature [2]. She also shared a compelling personal account of her experience with the “publish-or-perish” world in a podcast [3].
I think that one possible explanation for this stagnation is that human intelligence, much like animal intelligence, has its limits. For example, while we can teach dogs to recognize several words, we cannot teach them to appreciate a Shakespearean sonnet. Nobel laureate Richard Feynman famously said: “I think I can safely say that nobody understands quantum mechanics.” We may have to be content with model-dependent realism, as suggested by Hawking and Mlodinow [4].
Stagnation in the Applied Sciences
All of the counter-selective elements identified by Hossenfelder are also present in engineering and applied sciences. However, in these disciplines, the causes of stagnation are entirely man-made. I will focus on numerical simulation, which spans all engineering disciplines and happens to be my own field. First, a brief historical retrospection is necessary.
In numerical simulation, the primary method used for approximating the solutions of partial differential equations is the finite element method (FEM). Interest in this method started with the publication of a paper in 1956, about a year before the space race began. In the following years, research and development activities concerned with FEM received generous amounts of funding. Many ideas, rooted in engineering intuition and informed by prior experience with matrix methods of structural analysis, were advanced and tested through numerical experimentation. Some ideas worked, others did not. Because the theoretical foundations of FEM had not yet been established, it was impossible to tell whether ideas that worked in particular cases were sound or not.
Current engineering practice is dominated by finite element modeling, an intuitive approach rooted in pre-1970s thinking. In contrast, numerical simulation is based on the science of finite element analysis (FEA), which matured later. Although these are conceptually different approaches, the two terms are frequently used interchangeably in engineering parlance. Whereas finite element modeling is an intuition-based practice, numerical simulation demands a disciplined science-based approach to the formulation and validation of mathematical models. The goal is to control both the model-form and approximation errors. An essential constituent of any mathematical model is the domain of calibration [5]. This is generally overlooked in current engineering practice.
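As a simple illustration of the idea, a domain of calibration can be recorded explicitly and every prediction checked against it, so that extrapolation beyond the calibrated ranges is flagged rather than silent. The sketch below is hypothetical: the parameter names, bounds, and the example values are illustrative assumptions, not taken from any particular model or code.

```python
# Illustrative sketch only: recording the domain of calibration of a model as simple
# parameter bounds and flagging predictions that would require extrapolation.
# The parameter names and ranges are hypothetical.
from dataclasses import dataclass

@dataclass
class CalibrationDomain:
    bounds: dict  # parameter name -> (lower, upper) range covered by the calibration data

    def covers(self, point: dict) -> bool:
        """True if every parameter of the prediction point lies within the calibrated range."""
        return all(lo <= point[name] <= hi for name, (lo, hi) in self.bounds.items())

# Hypothetical example: a material model calibrated for limited ranges of stress and temperature.
domain = CalibrationDomain(bounds={"stress_MPa": (100.0, 400.0), "temperature_C": (20.0, 150.0)})

prediction_point = {"stress_MPa": 520.0, "temperature_C": 80.0}
if not domain.covers(prediction_point):
    print("Warning: the prediction point lies outside the domain of calibration (extrapolation).")
```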
During the 1960s and 1970s, when the FEM was still quite immature, several design decisions were made concerning the software architecture for FEM implementations. Although these decisions were reasonable at the time, they introduced limitations that significantly hindered the future development of FEM software, leading to prolonged stagnation.
The theoretical foundations of FEM were developed by mathematicians after 1970. Many important results emerged in the 1980s, leading to FEA becoming a branch of applied mathematics. However, the engineering community largely failed to grasp the importance and relevance of these advances due to a lack of common terminology and conceptual framework. A significant contributing factor was the difficulty and expense involved in upgrading the software infrastructure from the 1960s and 70s. As a result, these developments have not significantly influenced mainstream FEA engineering practices to the present day.
Example: Making Piles of Faulty Bricks
One of the limitations imposed by the software architecture designed for FEM in the 1960s was the restriction on the number of nodes and nodal variables. It was found that some elements were ‘too stiff.’ To address this, reduced integration was proposed: fewer integration points are used than are needed to make the integration error negligibly small. In effect, this approach attempts to correct the stiffness problem by committing variational crimes.
Many papers were published showing that reduced integration worked well. It was later discovered, however, that while reduced integration can be effective in some situations, it can also cause “hourglassing,” that is, spurious zero-energy modes. Subsequently, many papers were published on how to control hourglassing. All these papers added to the brickyard’s clutter, and, worst of all, hourglassing remains in legacy finite element codes even today.
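To make the hourglassing phenomenon concrete, here is a minimal sketch, not taken from any production code, that computes the stiffness matrix of a single bilinear plane-stress quadrilateral with a full 2×2 Gauss rule and with a reduced one-point rule, then counts the zero-energy modes. With full integration, only the three rigid-body modes carry zero energy; with reduced integration, two spurious hourglass modes appear as well. The material constants and the tolerance are arbitrary values chosen for the demonstration.

```python
# Minimal sketch: full vs. reduced Gauss integration of one bilinear plane-stress quad,
# followed by a count of the zero-energy modes of the element stiffness matrix.
import numpy as np

E, nu, t = 200e9, 0.3, 1.0                      # illustrative material constants and thickness
D = (E / (1 - nu**2)) * np.array([[1, nu, 0],
                                  [nu, 1, 0],
                                  [0, 0, (1 - nu) / 2]])   # plane-stress material matrix

nodes = np.array([[-1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [-1.0, 1.0]])  # square element

def b_matrix(xi, eta):
    """Strain-displacement matrix B and Jacobian determinant at (xi, eta)."""
    dN = 0.25 * np.array([[-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)],
                          [-(1 - xi), -(1 + xi), (1 + xi),  (1 - xi)]])
    J = dN @ nodes                               # 2x2 Jacobian
    dNxy = np.linalg.solve(J, dN)                # shape function derivatives w.r.t. x, y
    B = np.zeros((3, 8))
    B[0, 0::2] = dNxy[0]                         # eps_xx = du/dx
    B[1, 1::2] = dNxy[1]                         # eps_yy = dv/dy
    B[2, 0::2] = dNxy[1]                         # gamma_xy = du/dy + dv/dx
    B[2, 1::2] = dNxy[0]
    return B, np.linalg.det(J)

def stiffness(gauss_points):
    K = np.zeros((8, 8))
    for xi, eta, w in gauss_points:
        B, detJ = b_matrix(xi, eta)
        K += w * t * detJ * (B.T @ D @ B)
    return K

g = 1.0 / np.sqrt(3.0)
full_rule = [(s * g, r * g, 1.0) for s in (-1, 1) for r in (-1, 1)]   # 2x2 Gauss rule
reduced_rule = [(0.0, 0.0, 4.0)]                                      # one-point rule

for name, rule in [("full (2x2)", full_rule), ("reduced (1x1)", reduced_rule)]:
    K = stiffness(rule)
    zero_modes = np.sum(np.abs(np.linalg.eigvalsh(K)) < 1e-6 * np.max(np.abs(K)))
    print(f"{name}: {zero_modes} zero-energy modes")   # expect 3 (rigid body) vs. 5 (+2 hourglass)
```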
Challenges and Opportunities
There is a broad consensus that numerical simulation must be integrated with explainable artificial intelligence (XAI). Indeed, XAI has the potential to elevate numerical simulation to a much higher level than would be possible otherwise. This integration can succeed only if the mathematical models are properly formulated, calibrated, and validated. It is essential to ensure that the numerical errors are estimated and controlled.
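As an illustration of what estimating the numerical error can mean in practice, the following sketch shows one simple way to extrapolate a quantity of interest from three successive solutions, assuming the error decreases monotonically as C·N^(-q) and the numbers of degrees of freedom form a roughly geometric sequence. The data are synthetic and the function is a minimal stand-in, not part of any particular code.

```python
# Minimal sketch: estimating the limit value, convergence rate, and relative error of a
# quantity of interest from three successive solutions, assuming |Q_N - Q_exact| ~ C * N^(-q).
import math

def estimate_error(N, Q):
    """N: degrees of freedom of three successive solutions (roughly geometric sequence);
       Q: corresponding values of the quantity of interest (e.g., strain energy)."""
    d1, d2 = Q[1] - Q[0], Q[2] - Q[1]
    Q_ex = Q[2] - d2 * d2 / (d2 - d1)                        # Aitken extrapolation to the limit
    rate = math.log(abs(d1 / d2)) / math.log(N[2] / N[1])    # estimated convergence rate q
    rel_err = abs(Q[2] - Q_ex) / abs(Q_ex)                   # estimated relative error, finest solution
    return Q_ex, rate, rel_err

# Synthetic data: Q_N = 1 + 0.5 / N**2, i.e., exact value 1.0 and rate q = 2.
N = [100, 200, 400]
Q = [1 + 0.5 / n**2 for n in N]
Q_ex, rate, rel_err = estimate_error(N, Q)
print(f"extrapolated value {Q_ex:.6f}, estimated rate {rate:.2f}, estimated relative error {rel_err:.2e}")
```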
Legacy FEA codes are not equipped to meet these requirements; nevertheless, claims are being advanced that simulation will become fast, easy, and inexpensive, requiring little expertise because AI will take care of it. Such claims should be treated with extreme caution: they do not come from those who can tell the difference between an edifice and a pile of bricks.
References
[1] Forscher, B. K. Chaos in the Brickyard. Science, 18 October 1963, Vol. 142, p. 339.
[2] Hossenfelder, S. Lost in Math: How Beauty Leads Physics Astray. Basic Books, 2018.
[3] Hossenfelder, S. My dream died, and now I’m here. Podcast: https://www.youtube.com/watch?v=LKiBlGDfRU8&t=12s
[4] Hawking, S. and Mlodinow, L. The Grand Design. Random House, 2010.
[5] Szabó, B. and Actis, R. The demarcation problem in the applied sciences. Computers and Mathematics with Applications, Vol. 162, pp. 206–214, 2024.

Related Blogs:
- Where Do You Get the Courage to Sign the Blueprint?
- A Memo from the 5th Century BC
- Obstacles to Progress
- Why Finite Element Modeling is Not Numerical Simulation?
- XAI Will Force Clear Thinking About the Nature of Mathematical Models
- The Story of the P-version in a Nutshell
- Why Worry About Singularities?
- Questions About Singularities
- A Low-Hanging Fruit: Smart Engineering Simulation Applications
- The Demarcation Problem in the Engineering Sciences
- Model Development in the Engineering Sciences
- Certification by Analysis (CbA) – Are We There Yet?
- Not All Models Are Wrong
- Digital Twins
- Digital Transformation
- Simulation Governance
- Variational Crimes
- The Kuhn Cycle in the Engineering Sciences
- Finite Element Libraries: Mixing the “What” with the “How”
- A Critique of the World Wide Failure Exercise
- Meshless Methods
- Isogeometric Analysis (IGA)