Jacob Fish and Ted Belytschko, A First Course in Finite Elements, Wiley, 2007
Copyright © 2007 John Wiley & Sons, Ltd. All rights reserved. ISBN: 978-0-470-03580-1

Chapter One
Introduction

1.1 BACKGROUND

Many physical phenomena in engineering and science can be described in terms of partial differential equations. In general, solving these equations by classical analytical methods for arbitrary shapes is almost impossible. The finite element method (FEM) is a numerical approach by which these partial differential equations can be solved approximately. From an engineering standpoint, the FEM is a method for solving engineering problems such as stress analysis, heat transfer, fluid flow and electromagnetics by computer simulation.

Millions of engineers and scientists worldwide use the FEM to predict the behavior of structural, mechanical, thermal, electrical and chemical systems for both design and performance analyses. Its popularity can be gleaned from the fact that over $1 billion is spent annually in the United States on FEM software and computer time. A 1991 bibliography (Noor, 1991) lists nearly 400 finite element books in English and other languages. A web search (in 2006) for the phrase 'finite element' using the Google search engine yielded over 14 million pages of results.
Mackerle lists 578 finite element books published between 1967 and 2005.

To explain the basic approach of the FEM, consider a plate with a hole as shown in Figure 1.1 for which we wish to find the temperature distribution. It is straightforward to write a heat balance equation for each point in the plate. However, the solution of the resulting partial differential equation for a complicated geometry, such as an engine block, is impossible by classical methods like separation of variables. Numerical methods such as finite difference methods are also quite awkward for arbitrary shapes; software developers have not marketed finite difference programs that can deal with the complicated geometries that are commonplace in engineering. Similarly, stress analysis requires the solution of partial differential equations that are very difficult to solve by analytical methods except for very simple shapes, such as rectangles, and engineering problems seldom have such simple shapes.

The basic idea of the FEM is to divide the body into finite elements, often just called elements, connected by nodes, and obtain an approximate solution as shown in Figure 1.1. This is called the finite element mesh and the process of making the mesh is called mesh generation.

The FEM provides a systematic methodology by which the solution, in the case of our example the temperature field, can be determined by a computer program. For linear problems, the solution is determined by solving a system of linear equations; the number of unknowns (which are the nodal temperatures) is equal to the number of nodes.
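To make "solving a system of linear equations for the nodal values" concrete, here is a minimal sketch (it is not the plate-with-a-hole example of Figure 1.1 and not code from the book): it assembles the global conductance matrix for a one-dimensional bar modeled with three two-node heat-conduction elements, applies prescribed temperatures at the two ends, and recovers all nodal temperatures from a single linear solve. The element conductance and boundary values are made-up illustrative numbers.

```python
import numpy as np

# Minimal sketch: a 1D bar discretized into 3 two-node heat-conduction
# elements (4 nodes).  The element conductance k*A/L is taken as 1.0 and
# the end temperatures as 100 and 20 -- illustrative values only.
n_nodes = 4
K = np.zeros((n_nodes, n_nodes))    # global conductance matrix
f = np.zeros(n_nodes)               # global nodal source vector (no sources here)

ke = np.array([[ 1.0, -1.0],
               [-1.0,  1.0]])       # 2x2 element conductance matrix

# Assembly: scatter each element matrix into the global matrix.
for e in range(3):                  # element e connects nodes e and e+1
    conn = [e, e + 1]
    for a in range(2):
        for b in range(2):
            K[conn[a], conn[b]] += ke[a, b]

# Essential boundary conditions: prescribe the temperature at both ends,
# then solve the reduced system for the remaining (free) nodes.
prescribed = {0: 100.0, 3: 20.0}
free = [n for n in range(n_nodes) if n not in prescribed]

T = np.zeros(n_nodes)
for n, val in prescribed.items():
    T[n] = val

Kff = K[np.ix_(free, free)]
rhs = f[free] - K[np.ix_(free, list(prescribed))] @ np.array(list(prescribed.values()))
T[free] = np.linalg.solve(Kff, rhs)  # one linear solve gives all unknown nodal values

print(T)  # [100.  73.33  46.67  20.] -- a linear temperature drop, as expected
```

For a realistic mesh the procedure is identical; only K becomes very large and sparse.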
To obtain a reasonably accurate solution, thousands of nodes are usually needed, so computers are essential for solving these equations. Generally, the accuracy of the solution improves as the number of elements (and nodes) increases, but the computer time, and hence the cost, also increases. The finite element program determines the temperature at each node and the heat flow through each element. The results are usually presented as computer visualizations, such as contour plots, although selected results are often output on monitors. This information is then used in the engineering design process.

The same basic approach is used in other types of problems. In stress analysis, the field variables are the displacements; in chemical systems, the field variables are material concentrations; and in electromagnetics, the potential field. The same type of mesh is used to represent the geometry of the structure or component and to develop the finite element equations, and for a linear system, the nodal values are obtained by solving large systems (from 10³ to 10⁶ equations are common today, and in special applications, 10⁹) of linear algebraic equations.

This text is limited to linear finite element analysis (FEA).
The preponderance of finite element analyses in engineering design is today still linear FEM. In heat conduction, linearity requires that the conductance be independent of temperature. In stress analysis, linear FEM is applicable only if the material behavior is linear elastic and the displacements are small. These assumptions are discussed in more depth later in the book. In stress analysis, for most analyses of operational loads, linear analysis is adequate, as it is usually undesirable to have operational loads that can lead to nonlinear material behavior or large displacements. For the simulation of extreme loads, such as crash loads and drop tests of electronic components, nonlinear analysis is required.

The FEM was developed in the 1950s in the aerospace industry. The major players were Boeing and Bell Aerospace (long vanished) in the United States and Rolls Royce in the United Kingdom.
M.J. Turner, R.W. Clough, H.C. Martin and L.J. Topp published one of the first papers that laid out the major ideas in 1956 (Turner et al., 1956). It established the procedures of element matrix assembly and element formulations that you will learn in this book, but did not use the term 'finite elements'.
The second author of this paper, Ray Clough, was a professor at Berkeley, who was at Boeing for a summer job. Subsequently, he wrote a paper that first used the term 'finite elements', and he was given much credit as one of the founders of the method. He worked on finite elements only for a few more years, and then turned to experimental methods, but his work ignited a tremendous effort at Berkeley, led by the younger professors, primarily E. Wilson and R.L. Taylor, and graduate students such as T.J.R. Hughes, C. Felippa and K.J. Bathe, and Berkeley was the center of finite element research for many years.
This research coincided with the rapid growth of computer power, and the method quickly became widely used in the nuclear power, defense, automotive and aeronautics industries.

Much of the academic community first viewed FEM very skeptically, and some of the most prestigious journals refused to publish papers on FEM: the typical resistance of mankind (and particularly academic communities) to the new. Nevertheless, several capable researchers recognized its potential early, most notably O.C. Zienkiewicz and R.H. Gallagher (at Cornell). Zienkiewicz built a renowned group at Swansea in Wales that included B. Irons, R. Owen and many others who pioneered concepts like the isoparametric element and nonlinear analysis methods. Other important early contributors were J.H. Argyris and J.T. Oden.

Subsequently, mathematicians discovered a 1943 paper by Courant (1943), in which he used triangular elements with variational principles to solve vibration problems. Consequently, many mathematicians have claimed that this was the original discovery of the method (though it is somewhat reminiscent of the claim that the Vikings discovered America instead of Columbus). It is interesting that for many years the FEM lacked a theoretical basis, i.e. there was no mathematical proof that finite element solutions give the right answer. In the late 1960s, the field aroused the interest of many mathematicians, who showed that for linear problems, such as the ones we will deal with in this book, finite element solutions converge to the correct solution of the partial differential equation (provided that certain aspects of the problem are sufficiently smooth). In other words, it has been shown that as the number of elements increases, the solutions improve and tend in the limit to the exact solution of the partial differential equations.

E. Wilson developed one of the first finite element programs that was widely used.
Its dissemination was hastened by the fact that it was 'freeware', which was very common in the early 1960s, as the commercial value of software was not widely recognized at that time. The program was limited to two-dimensional stress analysis. It was used and modified by many academic research groups and industrial laboratories and proved instrumental in demonstrating the power and versatility of finite elements to many users.

Then in 1965, NASA funded a project to develop a general-purpose finite element program by a group in California led by Dick MacNeal. This program, which came to be known as NASTRAN, included a large array of capabilities, such as two- and three-dimensional stress analyses, beam and shell elements for analyzing complex structures, such as airframes, and analysis of vibrations and time-dependent response to dynamic loads. NASA funded this project with $3 000 000 (like $30 000 000 today).
The initial program was put in the public domain, but it had many bugs. Shortly after the completion of the program, Dick MacNeal and Bruce McCormick started a software firm that fixed most of the bugs and marketed the program to industry. By 1990, the program was the workhorse of most large industrial firms and the company, MacNeal-Schwendler, was a $100 million company.

At about the same time, John Swanson developed a finite element program at Westinghouse Electric Corp. for the analysis of nuclear reactors.
In 1969, Swanson left Westinghouse to market a program called ANSYS. The program had both linear and nonlinear capabilities, and it was soon widely adopted by many companies.
In 1996, ANSYS went public, and it now (in 2006) has a capitalization of $1.8 billion.

Another nonlinear software package of more recent vintage is LS-DYNA. This program was first developed at Livermore National Laboratory by John Hallquist. In 1989, John Hallquist left the laboratory to found his own company, Livermore Software Technology Corporation, which markets the program. Initially, the program had nonlinear dynamic capabilities only, which were used primarily for crashworthiness, sheet metal forming and prototype simulations such as drop tests. But Hallquist quickly added a large range of capabilities, such as static analysis.
By 2006, the company had almost 60 employees.

ABAQUS was developed by a company called HKS, which was founded in 1978. The program was initially focused on nonlinear applications, but gradually linear capabilities were also added. The program was widely used by researchers because HKS introduced gateways to the program, so that users could add new material models and elements. In 2005, the company was sold to Dassault Systemes for $413 million.

As you can see, even a 5% holding in one of these companies provided a very nice nest egg. That is why young people should always consider starting their own companies; generally, it is much more lucrative and exciting than working for a big corporation.

In many industrial projects, the finite element database becomes a key component of product development because it is used for a large number of different analyses, although in many cases, the mesh has to be tailored for specific applications. The finite element database interfaces with the CAD database and is often generated from the CAD database.
Unfortunately, in today's environment, the two are substantially different. Therefore, finite element systems contain translators, which generate finite element meshes from CAD databases; they can also generate finite element meshes from digitizations of surface data. The need for two databases causes substantial headaches and is one of the major bottlenecks in computerized analysis today, as often the two are not compatible.

The availability of a wide range of analysis capabilities in one program makes possible analyses of many complex real-life problems.
For example, the flow around a car and through the engine compartment can be obtained by a fluid solver, called a computational fluid dynamics (CFD) solver. This enables the designers to predict the drag factor and the lift of the shape and the flow in the engine compartment. The flow in the engine compartment is then used as a basis for heat transfer calculations on the engine block and radiator. These yield temperature distributions, which are combined with the loads to obtain a stress analysis of the engine.

Similarly, in the design of a computer or microdevice, the temperatures in the components can be determined through a combination of fluid analysis (for the air flowing around the components) and heat conduction analysis. The resulting temperatures can then be used to determine the stresses in the components, such as at solder joints, that are crucial to the life of the component.
The same finite element model, with some modifications, can be used to determine the electromagnetic fields in various situations. These are of importance for assessing operability when the component is exposed to various electromagnetic fields.

In aircraft design, loads from CFD calculations and wind tunnel tests are used to predict loads on the airframe. A finite element model is then used with thousands of load cases, which include loads in various maneuvers such as banking, landing, takeoff and so on, to determine the stresses in the airframe. Almost all of these are linear analyses; only determining the ultimate load capacity of an airframe requires a nonlinear analysis. It is interesting that in the 1980s a famous professor predicted that by 1990 wind tunnels would be used only to store computer output. He was wrong on two counts: printed computer output almost completely disappeared, but wind tunnels are still needed because turbulent flow is so difficult to compute that complete reliance on computer simulation is not feasible.

Manufacturing processes are also simulated by finite elements. Thus, the solidification of castings is simulated to ensure good quality of the product.
In the design of sheet metal for applications such as cars and washing machines, the forming process is simulated to ensure that the part can be formed and to check that after springback (when the part is released from the die) the part still conforms to specifications.

Similar procedures apply in most other industries. Indeed, it is amazing how the FEM has transformed the engineering workplace in the past 40 years.
In the 1960s, most engineering design departments consisted of a room of 1.5 m × 3 m tables on which engineers drew their designs with T-squares and other drafting instruments. Stresses in the design were estimated by simple formulas, such as those that you learn in strength of materials for beam stretching, bending and torsion (these formulas are still useful, particularly for checking finite element solutions, because if the finite element solution differs from these formulas by an order of magnitude, the finite element solution is usually wrong; they are recalled below for reference). To verify the soundness of a design, prototypes were made and tested. Of course, prototypes are still used today, but primarily in the last stages of a design. Thus, FEA has led to tremendous reductions in design cycle time, and effective use of this tool is crucial to remaining competitive in many industries.
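For reference, the elementary strength-of-materials results alluded to above are, in the usual notation (ours, not necessarily the book's): for a prismatic bar under axial force P with cross-sectional area A, a beam under bending moment M with second moment of area I and extreme-fiber distance c, and a circular shaft under torque T with polar moment of area J and outer radius r,

$$\sigma_{\mathrm{axial}} = \frac{P}{A}, \qquad \sigma_{\mathrm{bending}} = \frac{Mc}{I}, \qquad \tau_{\mathrm{torsion}} = \frac{Tr}{J}.$$

Comparing a finite element stress against these one-line estimates remains a cheap sanity check on a model.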
A question that may occur to you is: Why has this tremendous change taken place? Undoubtedly, the major contributor has been the exponential growth in the speed of computers and the even greater decline in the cost of computational resources. Figure 1.2 shows the speed of computers, beginning with the first electronic computer, the ENIAC, in 1945. Computer speed here is measured in megaflops, a rather archaic term that means millions of floating point operations per second (in the 1960s, real number multiplies were called floating point operations).

The ENIAC was developed in 1945 to provide ballistic tables.
It occupied 1800 ft² and employed 17 468 vacuum tubes. Yet its computational power was a small fraction of a $20 calculator. It was not until the 1960s that computers had sufficient power to do reasonably sized finite element computations. For example, the 1966 Control Data 6600, the most powerful computer of its time, could handle about 10 000 elements in several hours; today, a PC does this calculation in a matter of minutes. Not only were these computers slow, but they also had very little memory: the CDC 6600 had 32k words of random access memory, which had to accommodate the operating system, the compiler and the program.

As can be seen from Figure 1.2, the increase in computational power has been linear on a log scale, indicating a geometric progression in speed.
This geometric progression was first publicized by Moore, a founder of Intel, in the 1960s. He noticed that the number of transistors that could be packed on a chip, and hence the speed of computers, doubled every 18 months. This came to be known as Moore's law, and remarkably, it still holds.
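As a rough back-of-the-envelope illustration (our arithmetic, not a figure quoted in the excerpt), doubling every 18 months means the speed multiplies by two for every 1.5 years elapsed:

$$\frac{\text{speed}(t)}{\text{speed}(t_0)} = 2^{(t-t_0)/1.5\ \text{yr}}, \qquad \text{e.g.} \quad 2^{(2006-1966)/1.5} = 2^{26.7} \approx 1.1 \times 10^{8},$$

which is the geometric progression that appears as a straight line on the log scale of Figure 1.2.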
Developed from the authors' combined total of 50 years of undergraduate and graduate teaching experience, this book presents the finite element method formulated as a general-purpose numerical procedure for solving engineering problems governed by partial differential equations. Focusing on the formulation and application of the finite element method through the integration of finite element theory, code development, and software application, the book is both introductory and self-contained, as well as being a hands-on experience for any student.
This authoritative text on finite elements:

- Adopts a generic approach to the subject, and is not application specific
- Integrates code development, theory, and application in one book, in conjunction with a web-based chapter
- Provides an accompanying web site that includes the ABAQUS Student Edition, MATLAB data and programs, and instructor resources
- Contains a comprehensive set of homework problems at the end of each chapter
- Produces a practical, meaningful course both for lecturers planning a finite element module and for students using the text in private study

Accompanied by a book companion website housing supplementary material, A First Course in Finite Elements is the ideal practical introductory course for junior and senior undergraduate students from a variety of science and engineering disciplines. The accompanying advanced topics at the end of each chapter…