# Numerical Analysis with Algorithms and Programming

###### Santanu Saha Ray
Publisher: Chapman & Hall/CRC
Publication Date: 2017
Number of Pages: 685
Format: Hardcover
Price: 139.95
ISBN: 9781498741743
Category: Textbook
[Reviewed by Allen Stenger, on 08/24/2017]

This is a comprehensive and up-to-date guide to numerical methods, aimed at engineers. It’s more a handbook, encyclopedia, or catalog than a textbook, although it does include a large number of numerical drill exercises (answers to odd-numbered problems are in the back). It’s not a proofs book, although it usually gives some motivation and plausibility arguments. Coverage is similar to other familiar textbooks such as Burden & Faires’s Numerical Analysis or Dahlquist & Björck’s Numerical Methods. It omits a few commonly found topics, such as Monte Carlo methods and optimization problems such as maximum finding or linear programming. It does include some uncommon topics, such as Padé approximants and successive over-relaxation (SOR) methods for linear systems.

One unfortunate omission, especially considering the target audience, is a survey of numerical software packages. The book uses Mathematica to illustrate the algorithms, but in real life people would either use Mathematica’s built-in functions or a separate software package, and would not write their own algorithms. Unfortunately, the Mathematica examples are not provided in electronic form, so you have to type them in to try them. The examples are written in a FORTRAN or C style, with lots of loops, which is not how Mathematica would normally be used. For example, the main step in the trapezoidal rule (p. 198) is

```
For[i=1;s=0,i<=n-1,i++,s+=2*y[a+i*h]];
```

but a more natural Mathematica expression for this would be `s = Sum[2 y[a + i h], {i, 1, n - 1}]`, i.e., $s = \sum_{i=1}^{n-1} 2\,y(a + h i)$.
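To make the stylistic contrast concrete outside of Mathematica, here is a sketch in Python (the function names and the test integrand are mine, not the book’s): both functions compute the same composite trapezoidal approximation, one in the loop-heavy style quoted above and one with the interior sum written as a single expression.

```python
def trapezoid_loop(y, a, b, n):
    """Composite trapezoidal rule, written FORTRAN/C-style with an explicit loop."""
    h = (b - a) / n
    s = 0.0
    for i in range(1, n):          # accumulate the doubled interior values
        s += 2 * y(a + i * h)
    return (h / 2) * (y(a) + s + y(b))

def trapezoid_expr(y, a, b, n):
    """The same rule with the interior sum written as one summation expression."""
    h = (b - a) / n
    return (h / 2) * (y(a) + sum(2 * y(a + i * h) for i in range(1, n)) + y(b))
```

Both return identical results; the second mirrors the $\sum$ form directly, which is closer to how one would write it in Mathematica with `Sum`.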

The two big weaknesses of this book are in comparing methods and in error estimation. There are multiple solution methods for nearly every kind of problem, which is good. For example, there are five different ways to solve systems of linear equations, and five different ways to find eigenvalues. But the methods are explained in isolation; there’s no explanation of their relative strengths and weaknesses, and no guidance on when to use which method.

Most of the methods have at least a rudimentary error analysis, and this is good as far as it goes. However, these estimates are never used in the worked examples, so the student is not reminded to estimate the error and adjust the method’s parameters accordingly. For example, there are rough error estimates for each of the numerical quadrature methods, but when it comes to the examples, the methods are applied with a number of partitions that was plucked out of the air. The examples often end with an observation like “this is correct to 5 significant digits”, which is true, but we know it is true not from the error estimates but because the exact answer to the example problem is known and can be compared with the computed one.
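As an illustration of the workflow being asked for here, the following is a hedged Python sketch (my own example, not from the book): the standard a-priori error bound for the composite trapezoidal rule, $|E| \le (b-a)^3 M_2 / (12 n^2)$ with $M_2 \ge \max_{[a,b]} |f''|$, is used to choose the number of partitions $n$ for a target tolerance *before* the rule is applied.

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule on n subintervals
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return (h / 2) * (f(a) + 2 * interior + f(b))

def n_for_tolerance(a, b, m2, tol):
    # |E| <= (b-a)^3 * M2 / (12 n^2)  =>  n >= sqrt((b-a)^3 * M2 / (12 tol))
    return math.ceil(math.sqrt((b - a) ** 3 * m2 / (12 * tol)))

# Example: integrate e^x on [0, 1]; f''(x) = e^x, so M2 = e on this interval.
tol = 1e-5
n = n_for_tolerance(0.0, 1.0, math.e, tol)
approx = trapezoid(math.exp, 0.0, 1.0, n)
# The bound then guarantees |approx - (e - 1)| <= tol without knowing e - 1 in advance.
```

The point is that $n$ is derived from the error estimate rather than plucked out of the air, and the exact answer $e - 1$ is then needed only as an after-the-fact check.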

Allen Stenger is a math hobbyist and retired software developer. He is an editor of the Missouri Journal of Mathematical Sciences. His personal web page is allenstenger.com. His mathematical interests are number theory and classical analysis.

Errors in Numerical Computations
Introduction
Preliminary Mathematical Theorems
Approximate Numbers and Significant Figures
Rounding Off Numbers
Truncation Errors
Floating Point Representation of Numbers
Propagation of Errors
General Formula for Errors
Loss of Significance Errors
Numerical Stability, Condition Number, and Convergence
Brief Idea of Convergence

Numerical Solutions of Algebraic and Transcendental Equations
Introduction
Basic Concepts and Definitions
Initial Approximation
Iterative Methods
Generalized Newton’s Method
Graeffe’s Root Squaring Method for Algebraic Equations

Interpolation
Introduction
Polynomial Interpolation

Numerical Differentiation
Introduction
Errors in Computation of Derivatives
Numerical Differentiation for Equispaced Nodes
Numerical Differentiation for Unequally Spaced Nodes
Richardson Extrapolation

Numerical Integration
Introduction
Numerical Integration from Lagrange’s Interpolation
Newton–Cotes Formula for Numerical Integration (Closed Type)
Numerical Integration Formula from Newton’s Forward Interpolation Formula
Richardson Extrapolation
Romberg Integration
Gaussian Quadrature: Determination of Nodes and Weights through Orthogonal Polynomials
Double Integration
Bernoulli Polynomials and Bernoulli Numbers
Euler–Maclaurin Formula

Numerical Solution of System of Linear Algebraic Equations
Introduction
Vector and Matrix Norm
Direct Methods
Iterative Method
Convergent Iteration Matrices
Convergence of Iterative Methods
Inversion of a Matrix by the Gaussian Method
Ill-Conditioned Systems
Thomas Algorithm

Numerical Solutions of Ordinary Differential Equations
Introduction
Single-Step Methods
Multistep Methods
System of Ordinary Differential Equations of First Order
Differential Equations of Higher Order
Boundary Value Problems
Stability of an Initial Value Problem
Stiff Differential Equations
A-Stability and L-Stability

Matrix Eigenvalue Problem
Introduction
Inclusion of Eigenvalues
Householder’s Method
The QR Method
Power Method
Inverse Power Method
Jacobi’s Method
Givens Method

Approximation of Functions
Introduction
Least Square Curve Fitting
Least Squares Approximation
Orthogonal Polynomials
The Minimax Polynomial Approximation
B-Splines

Numerical Solutions of Partial Differential Equations
Introduction
Classification of PDEs of Second Order
Types of Boundary Conditions and Problems
Finite-Difference Approximations to Partial Derivatives
Parabolic PDEs
Hyperbolic PDEs
Elliptic PDEs
Alternating Direction Implicit Method
Stability Analysis of the Numerical Schemes

An Introduction to the Finite Element Method
Introduction
Piecewise Linear Basis Functions
The Rayleigh–Ritz Method
The Galerkin Method

Bibliography