Numerical Methods: Final Exam Notes 2023/1
Linear Algebra Equations (cont.)
Jacobi Iteration Method
\text{From Ax = B} \\~\\
\begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33} \\
\end{bmatrix}
\begin{Bmatrix}
x_1 \\ x_2 \\ x_3
\end{Bmatrix} =
\begin{Bmatrix}
b_1 \\ b_2 \\ b_3
\end{Bmatrix} \\~\\
a_{11}{\color{red}x_1} + a_{12}x_2 + a_{13}x_3 = b_1 \\
\begin{aligned}
x_1 & = \dfrac{b_1 - a_{12}x_2 - a_{13}x_3}{a_{11}} \\
\therefore x_1^{k+1} & = \dfrac{b_1 - a_{12}x_2^k - a_{13}x_3^k}{a_{11}}
\end{aligned} \\~\\
a_{21}x_1 + a_{22}{\color{red}x_2} + a_{23}x_3 = b_2 \\
\begin{aligned}
x_2 & = \dfrac{b_2 - a_{21}x_1 - a_{23}x_3}{a_{22}} \\
\therefore x_2^{k+1} & = \dfrac{b_2 - a_{21}x_1^k - a_{23}x_3^k}{a_{22}}
\end{aligned} \\~\\
a_{31}x_1 + a_{32}x_2 + a_{33}{\color{red}x_3} = b_3 \\
\begin{aligned}
x_3 & = \dfrac{b_3 - a_{31}x_1 - a_{32}x_2}{a_{33}} \\
\therefore x_3^{k+1} & = \dfrac{b_3 - a_{31}x_1^k - a_{32}x_2^k}{a_{33}}
\end{aligned}
Check that the error of every $x_i$ falls below the error threshold:
\text{error} = \left| \dfrac{x_i^{k+1} - x_i^{k}}{x_i^{k+1}} \times 100\% \right|
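A minimal NumPy sketch of this update and stopping rule (the function name and test system are illustrative assumptions, not from the course):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-6, max_iter=100):
    """Jacobi iteration: every x_i^{k+1} uses only values from sweep k."""
    x = x0.astype(float)
    for _ in range(max_iter):
        # off-diagonal part of each row times the *old* vector
        x_new = (b - (A @ x - np.diag(A) * x)) / np.diag(A)
        err = np.abs((x_new - x) / x_new) * 100  # % error per variable
        x = x_new
        if np.all(err < tol):  # every x_i below the error threshold
            break
    return x

# diagonally dominant example so the iteration converges
A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])
b = np.array([2.0, 4, 10])
print(jacobi(A, b, np.zeros(3)))
```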
Gauss-Seidel Iteration Method
Same as the Jacobi iteration method, except that whenever an updated value $x^{k+1}$ has already been computed in the current sweep, it is used immediately.
\text{Example.} \\~\\
\begin{aligned}
x_1^{k+1} & = 100 + x_2^k \\
x_2^{k+1} & = 400 + \dfrac{x_1^k}{4} + \dfrac{x_3^k}{2} \\
x_3^{k+1} & = 100 + \dfrac{x_2^k}{2}
\end{aligned}

Sweep order $x_1 \Rarr x_2 \Rarr x_3$:

\begin{aligned}
& \text{1. } {\color{red}x_1^{k+1}} = 100 + x_2^k \\
& \text{2. } {\color{blue}x_2^{k+1}} = 400 + \dfrac{{\color{red}x_1^{k+1}}}{4} + \dfrac{x_3^k}{2} \\
& \text{3. } x_3^{k+1} = 100 + \dfrac{{\color{blue}x_2^{k+1}}}{2}
\end{aligned}

Sweep order $x_3 \Rarr x_2 \Rarr x_1$:

\begin{aligned}
& \text{1. } {\color{red}x_3^{k+1}} = 100 + \dfrac{x_2^k}{2} \\
& \text{2. } {\color{blue}x_2^{k+1}} = 400 + \dfrac{x_1^k}{4} + \dfrac{{\color{red}x_3^{k+1}}}{2} \\
& \text{3. } x_1^{k+1} = 100 + {\color{blue}x_2^{k+1}}
\end{aligned}
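A sketch of the same loop with the in-place Gauss-Seidel update; the only change from the Jacobi sketch is that `x[i]` is overwritten inside the sweep, so later rows see the new values (names are illustrative):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
    """Gauss-Seidel: each updated x_i is reused immediately in the same sweep."""
    x = x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):            # sweep order x_1 => x_2 => ... => x_n
            s = A[i] @ x - A[i, i] * x[i]  # mixes already-updated and old values
            x[i] = (b[i] - s) / A[i, i]
        err = np.abs((x - x_old) / x) * 100
        if np.all(err < tol):
            break
    return x
```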
Conjugate Gradient Method
$\lfloor M \rfloor$ denotes the transpose of the matrix $[M]$.
\begin{aligned}
& \text{Initial Value} \\
\{R\}^0 & = [A]\{X\}^0 - \{B\} \\~\\
\{D\}^0 & = -\{R\}^0 \\~\\
& \text{Each iteration (start at }k = 0) \\
\lambda_k & = - \dfrac{\lfloor D \rfloor^k\{R\}^k}{\lfloor D \rfloor^k [A] \{D\}^k} \\~\\
\{X\}^{k+1} & = \{X\}^k + \lambda_k \{D\}^k \\~\\
\{R\}^{k+1} & = [A]\{X\}^{k+1} - \{B\} \\~\\
\text{Error} & = \sqrt{\lfloor R \rfloor^{k+1} \{R\}^{k+1}} \\~\\
\alpha_k & = \dfrac{\lfloor R \rfloor^{k+1}[A]\{D\}^k}{\lfloor D \rfloor^k[A]\{D\}^k} \\~\\
\{D\}^{k+1} & = -\{R\}^{k+1} + \alpha_k\{D\}^k
\end{aligned}
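A direct transcription of these formulas into NumPy (assuming $[A]$ is symmetric positive definite, which the method requires; names are illustrative):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-8, max_iter=100):
    x = x0.astype(float)
    r = A @ x - b                       # {R}^0
    d = -r                              # {D}^0
    for _ in range(max_iter):
        Ad = A @ d
        lam = -(d @ r) / (d @ Ad)       # lambda_k
        x = x + lam * d                 # {X}^{k+1}
        r = A @ x - b                   # {R}^{k+1}
        if np.sqrt(r @ r) < tol:        # Error = sqrt(R^T R)
            break
        alpha = (r @ Ad) / (d @ Ad)     # alpha_k
        d = -r + alpha * d              # {D}^{k+1}
    return x
```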
Interpolation
Estimating values that are missing between known data points.
Each method comes in several forms (formulas in the sections below):

| Method | Forms |
| --- | --- |
| Newton divided-difference | Linear, Quadratic, Polynomial |
| Lagrange interpolation | Linear, Quadratic, Polynomial |
| Spline interpolation | Linear, Quadratic, Cubic |
Newton divided-difference
Linear formula: $f(x) = c_0 + c_1(x - x_0)$
where $c_0 = f(x_0)$, $c_1 = \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}$
Quadratic formula: $f(x) = c_0 + c_1(x - x_0) + c_2(x - x_0)(x - x_1)$
where
\begin{align*}
& c_0 = f(x_0) \\
& c_1 = \dfrac{f(x_1) - f(x_0)}{x_1 - x_0} \\
& c_2 = \dfrac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}
\end{align*}
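The $c_0, c_1, c_2$ pattern extends to any number of points via a divided-difference table; a short sketch (helper names are my own):

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients c_0..c_n (c_1, c_2 match the formulas above)."""
    c = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, x):
    """f(x) = c0 + c1(x-x0) + c2(x-x0)(x-x1) + ..."""
    result, term = 0.0, 1.0
    for ci, xi in zip(c, xs):
        result += ci * term
        term *= (x - xi)
    return result

xs, ys = [1.0, 2.0, 4.0], [1.0, 4.0, 16.0]          # sample points on y = x^2
print(newton_eval(xs, newton_coeffs(xs, ys), 3.0))  # 9.0
```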
Lagrange interpolation
Linear formula: $f(x) = L_0 f(x_0) + L_1 f(x_1)$
where
L_0 = \dfrac{(x_1 - x)}{(x_1 - x_0)} \\
L_1 = \dfrac{(x_0 - x)}{(x_0 - x_1)}
Quadratic formula: $f(x) = L_0 f(x_0) + L_1 f(x_1) + L_2 f(x_2)$
where
L_0 = \dfrac{(x_1 - x)(x_2 - x)}{(x_1 - x_0)(x_2 - x_0)} \\
L_1 = \dfrac{(x_0 - x)(x_2 - x)}{(x_0 - x_1)(x_2 - x_1)} \\
L_2 = \dfrac{(x_0 - x)(x_1 - x)}{(x_0 - x_2)(x_1 - x_2)}
Polynomial formula: $f(x) = L_0 f(x_0) + L_1 f(x_1) + \ldots + L_n f(x_n)$
where
L_a = \dfrac{\displaystyle\prod_{i=0,\, i \neq a}^{n}{(x_i - x)}}{\displaystyle\prod_{i=0,\, i \neq a}^{n}{(x_i - x_a)}}
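A sketch evaluating this general form directly; the double loop mirrors the products above (names are my own):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolant: sum of L_a * f(x_a)."""
    total = 0.0
    for a in range(len(xs)):
        L = 1.0
        for i in range(len(xs)):
            if i != a:
                L *= (xs[i] - x) / (xs[i] - xs[a])  # matches L_a above
        total += L * ys[a]
    return total

print(lagrange([1.0, 2.0, 4.0], [1.0, 4.0, 16.0], 3.0))  # 9.0 on y = x^2
```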
Spline interpolation
The function is split into piecewise segments. Linear spline:
f_1(x) = f(x_0) + \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}(x - x_0) ; \quad x_0 \le x \le x_1 \\~\\
f_2(x) = f(x_1) + \dfrac{f(x_2) - f(x_1)}{x_2 - x_1}(x - x_1) ; \quad x_1 \le x \le x_2 \\~\\
f_3(x) = f(x_2) + \dfrac{f(x_3) - f(x_2)}{x_3 - x_2}(x - x_2) ; \quad x_2 \le x \le x_3
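NumPy's `np.interp` implements exactly this piecewise-linear rule; a quick check on made-up knots:

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])   # knots x_0..x_3
fs = np.array([1.0, 3.0, 2.0, 5.0])   # f(x_0)..f(x_3)
print(np.interp(1.5, xs, fs))          # f_2(1.5) = 3 + (2-3)/(2-1)*(1.5-1) = 2.5
```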
Quadratic spline: from the equation $f_i(x) = a_i x^2 + b_i x + c_i$, four rules are used to set up the equations.
Interior knots are the points $x$ that are not at the edges.
End knots (end points) are the points $x$ at the edges (the first and last points).
1. The values of adjacent functions are equal at each interior knot:
\begin{aligned}
& \text{At }x_i \\
f(x_i) & = a_ix_i^2 + b_ix_i + c_i \\
f(x_i) & = a_{i+1}x_i^2 + b_{i+1}x_i + c_{i+1}
\end{aligned}
2. The first and last functions pass through the end knots:
\begin{aligned}
f(x_0) & = a_1x_0^2 + b_1x_0 + c_1 \\
f(x_n) & = a_nx_n^2 + b_nx_n + c_n
\end{aligned}
3. The derivatives (slopes) of adjacent functions are equal at each interior knot:
f'(x) = 2ax + b \\~\\
\text{At }x_i: \\
2a_ix_i + b_i = 2a_{i+1}x_i + b_{i+1}
4. Set $a_1 = 0$ (the first segment becomes linear).
Cubic spline: from the equation $f_i(x) = a_i x^3 + b_i x^2 + c_i x + d_i$, the corresponding rules are:
1. The values of adjacent functions are equal at each interior knot.
2. The first and last functions pass through the end knots.
3. First derivatives (slopes) of adjacent functions are equal at each interior knot.
4. Second derivatives of adjacent functions are equal at each interior knot.
5. Second derivatives at the end knots are equal to 0.
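SciPy's `CubicSpline` with `bc_type='natural'` enforces rules 4 and 5 (continuous second derivatives, zero at the end knots); a quick check on made-up data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

xs = np.array([0.0, 1.0, 2.0, 3.0])
fs = np.array([1.0, 3.0, 2.0, 5.0])
cs = CubicSpline(xs, fs, bc_type='natural')  # 'natural' = rule 5: f'' = 0 at ends
print(cs(1.5))       # interpolated value
print(cs(3.0, 2))    # second derivative at the end knot, ~0 by rule 5
```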
Simple Regression
g(x) = a_0 + a_1x + a_2x^2 + \ldots + a_mx^m
\begin{bmatrix}
n & \displaystyle\sum_{i=1}^n x_i & \displaystyle\sum_{i=1}^n x_i^2 & \cdots & \displaystyle\sum_{i=1}^n x_i^m \\
\displaystyle\sum_{i=1}^n x_i & \displaystyle\sum_{i=1}^n x_i^2 & \displaystyle\sum_{i=1}^n x_i^3 & \cdots & \displaystyle\sum_{i=1}^n x_i^{m+1} \\
\displaystyle\sum_{i=1}^n x_i^2 & \displaystyle\sum_{i=1}^n x_i^3 & \displaystyle\sum_{i=1}^n x_i^4 & \cdots & \displaystyle\sum_{i=1}^n x_i^{m+2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\displaystyle\sum_{i=1}^n x_i^m & \displaystyle\sum_{i=1}^n x_i^{m+1} & \displaystyle\sum_{i=1}^n x_i^{m+2} & \cdots & \displaystyle\sum_{i=1}^n x_i^{2m}
\end{bmatrix}
{\def\arraystretch{2.2}
\begin{Bmatrix}
a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_m
\end{Bmatrix}} =
\begin{Bmatrix}
\displaystyle\sum_{i=1}^n y_i \\
\displaystyle\sum_{i=1}^n x_iy_i \\
\displaystyle\sum_{i=1}^n x_i^2y_i \\
\vdots \\
\displaystyle\sum_{i=1}^n x_i^my_i
\end{Bmatrix}
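A sketch that builds and solves exactly this system for a degree-$m$ fit (function name and sample data are made up):

```python
import numpy as np

def poly_regression(x, y, m):
    """Assemble the normal equations above and solve for a_0..a_m."""
    A = np.array([[np.sum(x ** (r + c)) for c in range(m + 1)]
                  for r in range(m + 1)])          # entry (r, c) = sum x_i^{r+c}
    rhs = np.array([np.sum(x ** r * y) for r in range(m + 1)])
    return np.linalg.solve(A, rhs)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 1.8, 1.3, 2.5, 6.3])
print(poly_regression(x, y, 2))  # a_0, a_1, a_2 of the quadratic fit
```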
Multiple Regression
g(x) = a_0 + a_1x_1 + a_2x_2 + \ldots + a_kx_k
\begin{bmatrix}
n & \displaystyle\sum_{i=1}^n x_{1i} & \displaystyle\sum_{i=1}^n x_{2i} & \cdots & \displaystyle\sum_{i=1}^n x_{ki} \\
\displaystyle\sum_{i=1}^n x_{1i} & \displaystyle\sum_{i=1}^n x_{1i}x_{1i} & \displaystyle\sum_{i=1}^n x_{1i}x_{2i} & \cdots & \displaystyle\sum_{i=1}^n x_{1i}x_{ki} \\
\displaystyle\sum_{i=1}^n x_{2i} & \displaystyle\sum_{i=1}^n x_{2i}x_{1i} & \displaystyle\sum_{i=1}^n x_{2i}x_{2i} & \cdots & \displaystyle\sum_{i=1}^n x_{2i}x_{ki} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\displaystyle\sum_{i=1}^n x_{ki} & \displaystyle\sum_{i=1}^n x_{ki}x_{1i} & \displaystyle\sum_{i=1}^n x_{ki}x_{2i} & \cdots & \displaystyle\sum_{i=1}^n x_{ki}x_{ki}
\end{bmatrix}
{\def\arraystretch{2.2}
\begin{Bmatrix}
a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_k
\end{Bmatrix}} =
\begin{Bmatrix}
\displaystyle\sum_{i=1}^n y_i \\
\displaystyle\sum_{i=1}^n x_{1i}y_i \\
\displaystyle\sum_{i=1}^n x_{2i}y_i \\
\vdots \\
\displaystyle\sum_{i=1}^n x_{ki}y_i
\end{Bmatrix}
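In practice the same coefficients come from a least-squares solve on the data matrix with a leading column of ones for $a_0$; a short sketch on made-up data:

```python
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])  # columns x_1, x_2
y = np.array([5.0, 4.0, 11.0, 10.0])
A = np.column_stack([np.ones(len(y)), X])     # each row: [1, x_1i, x_2i]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # solves the same normal equations
print(coef)  # a_0, a_1, a_2
```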
Integration
Single Trapezoidal Rule
I = \dfrac{b - a}{2} \left[ f(x_0) + f(x_1) \right]
Composite Trapezoidal Rule
\begin{aligned}
h & = (b - a)/n \\
I & = \dfrac{h}{2} \left( f(x_0) + f(x_n) + 2\sum_{i=1}^{n-1} f(x_i) \right)
\end{aligned}
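A minimal implementation of the composite rule (function name is my own):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal segments."""
    x = np.linspace(a, b, n + 1)   # x_0 .. x_n
    fx = f(x)
    h = (b - a) / n
    return h / 2 * (fx[0] + fx[-1] + 2 * np.sum(fx[1:-1]))

print(trapezoid(np.sin, 0, np.pi, 100))  # ~2.0, the exact integral
```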
Simpson's Rule
\begin{aligned}
h & = \dfrac{b - a}{2} \\
I & = \dfrac{h}{3} \left[ f(x_0) + 4f(x_1) + f(x_2) \right]
\end{aligned}
Composite Simpson's Rule
\begin{aligned}
h & = \dfrac{b - a}{n} \quad (n \text{ even}) \\
I & = \dfrac{h}{3} \left[ f(x_0) + f(x_n) + 4\sum_{i=1,3,5}^{n-1} f(x_i) + 2\sum_{i=2,4,6}^{n-2} f(x_i) \right]
\end{aligned}
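The same pattern for composite Simpson: odd-index points get weight 4, even interior points weight 2 (function name is my own):

```python
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson's rule; n (number of segments) must be even."""
    x = np.linspace(a, b, n + 1)
    fx = f(x)
    h = (b - a) / n
    return h / 3 * (fx[0] + fx[-1]
                    + 4 * np.sum(fx[1:-1:2])    # i = 1, 3, 5, ..., n-1
                    + 2 * np.sum(fx[2:-1:2]))   # i = 2, 4, 6, ..., n-2

print(simpson(np.sin, 0, np.pi, 100))  # ~2.0
```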
Differentiation
Approximation:
\dfrac{\Delta y}{\Delta x} = \dfrac{f(x_0 + \Delta x) - f(x_0)}{\Delta x}
Exact value:
\dfrac{dy}{dx}=\lim_{\Delta x \to 0}{\dfrac{f(x_0 + \Delta x) - f(x_0)}{\Delta x}}
Formula summary
Forward divided-difference
| Derivative | Formula | Error |
| --- | --- | --- |
| First | $f'(x_i) = \dfrac{f(x_{i+1}) - f(x_i)}{h}$ | $O(h)$ |
| First | $f'(x_i) = \dfrac{-f(x_{i+2}) + 4f(x_{i+1}) - 3f(x_{i})}{2h}$ | $O(h^2)$ |
| Second | $f''(x_i) = \dfrac{f(x_{i+2}) - 2f(x_{i+1}) + f(x_{i})}{h^2}$ | $O(h)$ |
| Second | $f''(x_i) = \dfrac{-f(x_{i+3}) + 4f(x_{i+2}) - 5f(x_{i+1}) + 2f(x_{i})}{h^2}$ | $O(h^2)$ |
First derivative
First forward divided-difference
From the Taylor series:
f(x) = f(x_0) + (x - x_0)f'(x_0) + \dfrac{(x - x_0)^2}{2!}f''(x_0) + \dfrac{(x - x_0)^3}{3!}f'''(x_0) + \ldots
Substitute $x \to x_{i+1}$, $x_0 \to x_i$, and $x - x_0 \to h$:
f(x_{i+1}) = f(x_i) + hf'(x_i) + \dfrac{h^2}{2!}f''(x_i) + \ldots
Rearranging to solve for $f'(x_i)$:
f(x_{i+1}) - f(x_i) - \dfrac{h^2}{2!}f''(x_i) - \ldots = hf'(x_i) \\
f'(x_i) = \dfrac{f(x_{i+1}) - f(x_i) - \dfrac{h^2}{2!}f''(x_i) - \ldots}{h} \\
f'(x_i) = \dfrac{f(x_{i+1}) - f(x_i)}{h} - {\color{red}\dfrac{h}{2!}f''(x_i) - \ldots}
Collapsing the trailing terms into a truncation error of order $O(h)$:
f'(x_i) = \dfrac{f(x_{i+1}) - f(x_i)}{h} + {\color{red}O(h)}
First backward divided-difference
f'(x_i) = \dfrac{f(x_i) - f(x_{i-1})}{h} + {\color{red}O(h)}
First central divided-difference
f'(x_i) = \dfrac{f(x_{i+1}) - f(x_{i-1})}{2h} + {\color{red}O(h^2)}
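A quick numerical check that the central formula really is one order more accurate, using $f = \sin$ at $x = 1$ (test values are my own):

```python
import numpy as np

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h             # error O(h)

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)   # error O(h^2)

exact = np.cos(1.0)  # derivative of sin at x = 1
for h in (0.1, 0.01):
    print(h,
          abs(forward_diff(np.sin, 1.0, h) - exact),   # shrinks ~10x per step
          abs(central_diff(np.sin, 1.0, h) - exact))   # shrinks ~100x per step
```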