# Polynomial Fitting

```python
import numpy as np
import matplotlib.pyplot as plt
```
## Goal

Since an image is a discrete mapping of pixels, while the real world is "continuous", we want to fit the pixels with a continuous function so that we can extract information.
## Sliding Window Algorithm

- Define a "pixel window" centered at pixel $(x, y)$, so that a 1-D patch can be $(0, y) \sim (2x, y)$
- Fit an $n$-degree polynomial to the intensities (commonly $n \leq 2$)
- Assign the polynomial's derivatives at $x=0$ to the pixel at the window's center
- Slide the window one pixel over until the window reaches the border

## Taylor-series Approximation of Images

Consider a 1-D patch from $(0,y)$ to $(2x,y)$, and consider the Taylor expansion of $I$ centered at $0$:
$$I(x) = \sum_{i=0}^\infty \frac{x^i}{i!}\,\frac{d^iI}{dx^i}(0) = \sum_{i=0}^N \frac{x^i}{i!}\,\frac{d^iI}{dx^i}(0) + R_{N+1}(x)$$

Truncating at degree $N$ leaves the remainder $R_{N+1}(x)$. Thus, we can write it as
$$I(x) = \begin{bmatrix}1 & x & x^2/2 & x^3/6 & \cdots & x^n/n!\end{bmatrix}\begin{bmatrix}I(0) & d_xI(0) & d^2_{x^2}I(0) & \cdots & d^n_{x^n}I(0)\end{bmatrix}^T$$

Note that we have $2x+1$ pixels in the patch, ranging from $-x$ to $x$; let their intensities be $I_x$. So we have
$$\begin{bmatrix} I_{-x}\\ I_{-x+1}\\ \vdots\\ I_{x-1}\\ I_{x} \end{bmatrix}= \begin{bmatrix} 1 &-x &\cdots&(-x)^n/n!\\ 1 &(-x+1) &\cdots&(-x+1)^n/n!\\ \vdots &\vdots &\ddots&\vdots\\ 1 &(x-1) &\cdots&(x-1)^n/n!\\ 1 &x &\cdots&x^n/n!\\ \end{bmatrix} \begin{bmatrix} I(0)\\d_xI(0)\\\vdots\\d^{n-1}_{x^{n-1}}I(0)\\d^n_{x^n}I(0)\end{bmatrix}$$

$$I_{(2x+1)\times 1}=X_{(2x+1)\times (n+1)}\,d_{(n+1)\times 1}$$

where $I, X$ are known and we want to solve for $d$.
Since this system does not always have an exact solution, we instead minimize the "fit error" $\|I-Xd\|^2 = \sum_i \big(I_i - (Xd)_i\big)^2$.
When $n = 0$, the minimizing $d$ is simply the mean intensity, i.e. $d = \frac{1}{2x+1}\sum_{i=-x}^{x} I_i$.
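As a sketch of the fitting step above (the half-width `w`, degree `n`, and synthetic intensities are illustrative assumptions, with `w` playing the role of the half-width called $x$ in the notes), the system $I = Xd$ can be solved in the least-squares sense with `np.linalg.lstsq`:

```python
import numpy as np
from math import factorial

# Sketch: fit a degree-n polynomial to a 1-D patch of 2w+1 intensities
w, n = 3, 2
xs = np.arange(-w, w + 1)                        # pixel offsets -w..w
I = 0.5 + 0.2 * xs + 0.05 * xs**2                # synthetic noiseless patch
# columns of X are x^i / i!, matching the Taylor basis
X = np.stack([xs**i / factorial(i) for i in range(n + 1)], axis=1)
d, *_ = np.linalg.lstsq(X, I, rcond=None)
print(np.round(d, 6))  # recovers [I(0), d_x I(0), d^2_x I(0)] = [0.5, 0.2, 0.1]
```

Since the synthetic patch is exactly quadratic, the recovered derivatives are exact; with noisy intensities, `lstsq` returns the least-squares estimate.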
## Weighted Least Square Estimation

If we want to extract more information from, or weight in favor of, the neighboring points, we can use $I' = \Omega I$ where $\Omega$ is the weight function.
In the 1-D case, let $\Omega = \mathrm{diag}(\omega_1, \ldots, \omega_{2x+1})$; we want to solve $\Omega I = \Omega X d$, which is to minimize $\|\Omega(I-Xd)\|^2$.
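A minimal sketch of the weighted fit, assuming Gaussian weights that favor the window center (the half-width `w`, degree `n`, and `sigma` are illustrative choices):

```python
import numpy as np
from math import factorial

# Sketch: weighted least squares on a synthetic 1-D patch
w, n, sigma = 3, 2, 1.5
xs = np.arange(-w, w + 1)
I = 0.5 + 0.2 * xs + 0.05 * xs**2
X = np.stack([xs**i / factorial(i) for i in range(n + 1)], axis=1)
Omega = np.diag(np.exp(-xs**2 / (2 * sigma**2)))   # diag(w_1, ..., w_{2w+1})
# minimizing ||Omega (I - X d)||^2 is lstsq on the pre-multiplied system
d, *_ = np.linalg.lstsq(Omega @ X, Omega @ I, rcond=None)
```

On this noiseless patch the weighted and unweighted fits agree; with noise, the weights trade off influence between center and border pixels.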
## Random Sample Consensus (RANSAC)

Suppose we have some outliers; then least-squares line fitting is no longer a good solution.
```python
x = np.arange(-10, 10, 0.5)
y = x * 0.5 + np.random.normal(0, 1, x.shape)
y += (np.random.randn(y.shape[0]) < -1.5) * 8   # shift some points up
y -= (np.random.randn(y.shape[0]) < -1.5) * 8   # shift some points down
fit = np.polyfit(x, y, 1)                       # fit[0] = slope, fit[1] = intercept
f = np.polyval(fit, x)
plt.scatter(x, y)
plt.plot(x, f);
```
### Algorithm

Given:

- $n=$ degree of freedom (unknown polynomial coefficients)
- $p=$ fraction of inliers
- $t=$ fit threshold
- $P_s=$ success probability
- $K=$ max number of iterations
RANSAC:

```
for _ in range(K):
    randomly choose n+1 pixels
    fit an n-degree polynomial
    count the pixels whose vertical distance from the polynomial is < t
    if there are >= (2x+1) * p such pixels, mark them as inliers:
        do n-degree poly fitting on the inliers
        return the fitting
# if there are insufficient inliers in every trial
return None
```
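The pseudocode above can be sketched for line fitting ($n = 1$, so each trial samples 2 points); the defaults for `t`, `p`, `Ps`, and the seeded generator are illustrative assumptions:

```python
import numpy as np

def ransac_line(x, y, t=0.5, p=0.5, Ps=0.99, seed=0):
    """Sketch of RANSAC line fitting; t, p, Ps defaults are assumptions."""
    # K trials give success probability Ps when p is the inlier fraction
    K = int(np.ceil(np.log(1 - Ps) / np.log(1 - p**2)))
    rng = np.random.default_rng(seed)
    for _ in range(K):
        i = rng.choice(len(x), size=2, replace=False)   # n+1 = 2 points
        coef = np.polyfit(x[i], y[i], 1)
        inliers = np.abs(y - np.polyval(coef, x)) < t   # vertical distance < t
        if inliers.sum() >= p * len(x):
            return np.polyfit(x[inliers], y[inliers], 1)  # refit on inliers
    return None  # insufficient inliers in every trial

x = np.arange(-10, 10, 0.5)
y = 0.5 * x
y[::7] += 8.0           # plant a few gross outliers
fit = ransac_line(x, y)  # slope close to the true 0.5
```

Because the planted outliers are far from the true line, any trial that samples two clean points recovers the slope almost exactly, while a least-squares fit over all points would be pulled upward.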
Consider the value for $K$: suppose the inliers are independently distributed, hence
$$P(\text{choose } n+1 \text{ inliers}) = p^{n+1}$$

$$P(\text{at least 1 outlier}) = 1 - p^{n+1}$$

$$P(\text{at least 1 outlier in all } K \text{ trials}) = (1-p^{n+1})^K$$

$$P_s = 1 - (1-p^{n+1})^K \Rightarrow K = \frac{\log(1-P_s)}{\log(1-p^{n+1})}$$

## 2D Image Patch

Given an image patch $I(1:W, 1:H)$, $I\in[0,1]$ is the intensity mapping.
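Plugging numbers into the formula for $K$ (the values of $n$, $p$, and $P_s$ below are just an example):

```python
from math import ceil, log

# Line fitting: n = 1, so each trial samples n+1 = 2 points.
# Assume inlier fraction p = 0.7 and desired success probability Ps = 0.99.
n, p, Ps = 1, 0.7, 0.99
K = ceil(log(1 - Ps) / log(1 - p**(n + 1)))
print(K)  # 7 trials suffice
```

Note how quickly $K$ grows as the inlier fraction $p$ drops or the degree $n$ rises, since $p^{n+1}$ shrinks geometrically.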
### 2D Taylor Series Expansion

Let $I(0,0)$ be the center:
$$\begin{align*} I(x,y) &= I(0,0) + x\partial_xI(0,0) + y\partial_yI(0,0) \\ &\quad + \frac{1}{2}\left(x^2\frac{\partial^2I}{\partial x^2}(0,0) + 2xy\frac{\partial^2I}{\partial x\partial y}(0,0) + y^2\frac{\partial^2I}{\partial y^2}(0,0)\right) + \cdots \end{align*}$$

### Directional Derivative

Using the 1st order Taylor series approximation: let $\theta$ be the direction and $v=(\cos \theta, \sin\theta)$ be the unit vector in direction $\theta$. Consider the points $(t\cos\theta, t\sin\theta)$, i.e. $t$ units away from $(0,0)$ in the direction of $\theta$:
$$I(t\cos\theta, t\sin\theta) \approx I(0,0) + t\cos\theta\,\partial_xI(0,0) + t\sin\theta\,\partial_yI(0,0)$$

Then consider the directional derivative
$$\begin{align*}\partial_v I(0,0) &= \lim_{t\rightarrow 0}\frac{I(t\cos\theta, t\sin\theta) - I(0,0)}{t}\\ &\approx \cos\theta\,\partial_xI(0,0) + \sin\theta\,\partial_yI(0,0)\\ &=\begin{bmatrix}\partial_xI(0,0)&\partial_yI(0,0)\end{bmatrix}\begin{bmatrix}\cos\theta\\\sin\theta\end{bmatrix} \end{align*}$$

Let $\nabla I = (\partial_xI, \partial_yI)$. Note that
$$\partial_vI(0,0) = \begin{cases}\|\nabla I(0,0)\| & v \text{ is parallel to } \nabla I(0,0) \\ 0 & v\perp \nabla I(0,0)\ (\text{isophote})\end{cases}$$

Therefore, the gradient $\nabla I$ tells the direction and magnitude of the maximum change in intensity.
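A quick numerical check of $\partial_v I = \nabla I \cdot v$, using an assumed smooth "image" $I(x,y) = \sin x \cos y$ in place of real pixel data (the point, direction, and step size are arbitrary):

```python
import numpy as np

# Compare the forward-difference directional derivative with grad I . v
I = lambda x, y: np.sin(x) * np.cos(y)
x0, y0, theta, t = 0.3, 0.2, np.pi / 5, 1e-6
forward = (I(x0 + t * np.cos(theta), y0 + t * np.sin(theta)) - I(x0, y0)) / t
grad = np.array([np.cos(x0) * np.cos(y0),        # d_x I = cos(x) cos(y)
                 -np.sin(x0) * np.sin(y0)])      # d_y I = -sin(x) sin(y)
v = np.array([np.cos(theta), np.sin(theta)])
# forward and grad @ v agree up to the O(t) truncation error
```
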
### Discrete Derivative

Noting that the image consists of discrete pixels, we can approximate the derivative by its limit definition.
$$\partial_xI(x,y) \approx \frac{1}{2}\big(I(x+1,y) - I(x-1,y)\big)$$

$$\partial_yI(x,y) \approx \frac{1}{2}\big(I(x,y+1) - I(x,y-1)\big)$$

Alternatively, we can use the forward difference $(x+1, x)$ or the backward difference $(x, x-1)$ instead of the central difference $(x-1, x+1)/2$.
- Magnitude: $\|\nabla I(x,y)\| = \sqrt{\partial_xI(x,y)^2 + \partial_yI(x,y)^2}$, the $L_2$ norm
- Direction: $\Theta(x,y) = \arctan(\partial_yI / \partial_xI)$
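A sketch of the discrete gradient magnitude and direction on a synthetic ramp image (`np.gradient` uses central differences in the interior and one-sided differences at the borders):

```python
import numpy as np

def gradient_mag_dir(I):
    dIdy, dIdx = np.gradient(I)          # axis 0 is y (rows), axis 1 is x (cols)
    mag = np.sqrt(dIdx**2 + dIdy**2)
    theta = np.arctan2(dIdy, dIdx)       # direction of maximum change
    return mag, theta

ramp = np.tile(np.arange(5.0), (5, 1))   # intensity increases along x
mag, theta = gradient_mag_dir(ramp)
# the gradient has magnitude 1 everywhere and points in the +x direction
```

Using `arctan2` rather than `arctan` keeps the direction correct in all four quadrants.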
### Laplacian

Based on the central difference, the approximation to the Laplacian of $I$ is
$$\frac{d^2f}{dx^2} \approx f(x+1)-2f(x) + f(x-1)$$

For 2D functions,
$$\frac{\partial^2f}{\partial x^2} \approx f(x+1, y)-2f(x,y) + f(x-1,y)$$

Hence
$$\begin{align*} \nabla^2 f &= \frac{\partial^2f}{\partial x^2}+\frac{\partial^2f}{\partial y^2}\\ &= f(x+1, y)-2f(x,y) + f(x-1,y) + f(x, y+1)-2f(x,y) + f(x,y-1)\\ &= f(x+1, y) + f(x-1,y) + f(x, y+1)+ f(x,y-1)-4f(x,y) \end{align*}$$

which corresponds to the convolution kernel

$$\begin{bmatrix} 0&1&0\\1&-4&1\\0&1&0 \end{bmatrix}$$

*January 11, 2025 · January 9, 2023*
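The 5-point stencil above can be sketched with `np.roll` (which wraps around at the borders, so only the interior values are meaningful); the test image $f(x,y) = x^2 + y^2$ is an assumption chosen because its Laplacian is exactly 4:

```python
import numpy as np

def laplacian(f):
    # f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)
    return (np.roll(f, -1, axis=0) + np.roll(f, 1, axis=0)
            + np.roll(f, -1, axis=1) + np.roll(f, 1, axis=1) - 4 * f)

y, x = np.mgrid[0:8, 0:8].astype(float)
f = x**2 + y**2                  # true Laplacian is 4 everywhere
L = laplacian(f)
# interior values L[1:-1, 1:-1] equal 4: the stencil is exact on quadratics
```

In practice one would convolve the image with the kernel directly (e.g. via `scipy.ndimage.convolve`) and choose an explicit boundary mode instead of wrap-around.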