A. Familiarization of the discrete FFT.
Let us first familiarize ourselves with the discrete FT. In Paint, create a 128x128 image of a white circle on a black background (shown in Figure 1.a). Then, load this image in Scilab, convert it to grayscale, and apply fft2(). Note that the result is a complex-number array; to see the intensity, take the absolute value using the abs() function (shown in Figure 1.b).
Figure 1.a Image of a 128x128 white circle with a black background.
Notice that the result of the FFT looks like a black picture, but don't be fooled!! Look closely at the edges! There are very small white spots at the corners; these carry the image's spatial-frequency content (the zero-frequency term sits at the corners before shifting). Applying fftshift() centers the FFT, as seen in Figure 1.c.
Figure 1.b FFT of the image in Figure 1.a
Figure 1.c Shifted FFT of the image in Figure 1.a (circle)
Figure 1.d Application of the FFT to the image in Figure 1.a twice.
Figure 1. Output images for the familiarization of discrete FFT.
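If Scilab is not at hand, the same steps can be sketched in Python with NumPy (np.fft.fft2, np.abs, and np.fft.fftshift play the roles of the Scilab fft2(), abs(), and fftshift() calls used above; the circle is synthesized in code instead of drawn in Paint, and its radius of 20 pixels is an assumption):

```python
import numpy as np

# Synthesize the 128x128 white circle on a black background
# (stand-in for the Paint image; the radius of 20 px is assumed).
n = 128
y, x = np.mgrid[0:n, 0:n]
circle = ((x - n/2)**2 + (y - n/2)**2 <= 20**2).astype(float)

# FFT of the image. The result is complex, so take abs() for intensity.
F = np.fft.fft2(circle)
intensity = np.abs(F)          # the energy hugs the corners (Figure 1.b)

# fftshift() moves the zero-frequency (DC) term to the center (Figure 1.c).
centered = np.fft.fftshift(intensity)
```

The DC term equals the total brightness of the image, which is why the corner (and, after shifting, the center) is the brightest spot.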
Notice that applying the FFT twice appears to give back the original image (see Figure 1.d). Now, let us apply the same procedure to a 128x128 image of the letter A (shown in Figure 2.a).
Figure 2.a 128x128 image of a letter A.
Figure 2.b Fourier Transform of the image in Figure 2.a
Figure 2.c Shifted FFT of the image in Figure 2.a
Figure 2.d Application of the FFT to the image in Figure 2.a twice.
Figure 2. Output images for the familiarization of the FFT using the image of the letter A.
The resulting image is inverted! What happened?? To understand this, let us look at the formulas for the Fourier Transform and its inverse:

F(k) = ∫ f(x) e^(−i2πkx) dx (1)

f(x) = ∫ F(k) e^(+i2πkx) dk (2)

Equations 1 and 2 give the formulas for the FT and the inverse FT. Based on these two equations, applying the forward FT twice does not undo the transform: because the exponent keeps the same sign both times, the output is f(−x), the original function with its argument negated. This explains the inverted image. To recover the original image, use the inverse FT instead. The inversion is not apparent for the circle because that image is symmetric about its center.
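This flip-through-the-origin behavior is easy to check numerically (Python/NumPy stand-in for the Scilab calls; a random array plays the role of the letter-A image):

```python
import numpy as np

# Any asymmetric image works; a random array stands in for the letter "A".
rng = np.random.default_rng(0)
img = rng.random((8, 8))

# Two forward transforms give M*N * f(-x, -y): the image flipped through
# the origin (indices taken modulo the array size) -- the inverted "A".
twice = np.fft.fft2(np.fft.fft2(img)).real / img.size
flipped = np.roll(img[::-1, ::-1], (1, 1), axis=(0, 1))

# The inverse transform, by contrast, recovers the original image.
back = np.fft.ifft2(np.fft.fft2(img)).real
```

Here `twice` matches `flipped` and `back` matches `img`, confirming both claims in the text.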
B. Simulation of an imaging device.
Now, let us go to the concept of convolution. The convolution of two functions f and g is a third function that expresses how the shape of one is smeared by the other, so it resembles both. In this part of the activity, we will simulate an imaging system: the object is a 128x128 image of the letters "VIP", and the imaging aperture is a centered circle, tried at three different radii.
Figure 3. A 128x128 image of letters VIP.
Figure 4. Circles of different radii to act as an aperture
of the imaging system.
Figure 5. Convolution of the object with circles of increasing radii (left to right).
As the radius of the aperture decreases, the image becomes blurrier. In an imaging system, the aperture limits how much light enters; a smaller aperture also admits less spatial-frequency information from the object, which is why the image blurs as the aperture shrinks.
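The imaging simulation can be sketched as follows (Python/NumPy stand-in for the Scilab steps; the bar-shaped object and the two aperture radii are assumptions, with a simple bar replacing the "VIP" image):

```python
import numpy as np

def image_through_aperture(obj, radius):
    """Mask the object's spectrum with a centered circular aperture:
    smaller radii discard more high frequencies, blurring the image."""
    n = obj.shape[0]
    y, x = np.mgrid[0:n, 0:n]
    aperture = ((x - n/2)**2 + (y - n/2)**2 <= radius**2).astype(float)
    # fftshift() aligns the centered aperture with the unshifted spectrum.
    spectrum = np.fft.fft2(obj) * np.fft.fftshift(aperture)
    return np.abs(np.fft.ifft2(spectrum))

# A simple bar stands in for the "VIP" image.
n = 128
obj = np.zeros((n, n))
obj[40:90, 60:68] = 1.0

small = image_through_aperture(obj, 5)    # small aperture: blurry
large = image_through_aperture(obj, 40)   # large aperture: sharper
```

By Parseval's theorem, the reconstruction error equals the energy of the discarded frequencies, so the small-aperture image deviates more from the object than the large-aperture one.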
C. Template matching using correlation.
The FT also has applications in template matching, a pattern-recognition technique that finds exactly identical patterns in a scene [1]. Figure 6 shows the template used and the pattern to be detected, and the FFT of each was obtained. The complex conjugate of the template's FFT was taken using the function conj() and multiplied element by element with the FFT of the letter A. The FFT of this product was then obtained, giving the result shown in Figure 7.
Figure 6. Image used for pattern recognition (left) and the pattern to be
recognized (right).
Figure 7. Output of the template matching.
The bright spots on the resulting image represent the positions of the letter A. So, the pattern to be detected is indeed detected. :)
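The correlation recipe can be sketched numerically (Python/NumPy; tiny arrays stand in for the text image and the letter A, and the final step uses the inverse FFT, which gives the same peaks as another forward FFT up to a coordinate inversion):

```python
import numpy as np

# A 3x3 patch stands in for the letter "A"; the scene contains two copies.
patch = np.array([[0., 1., 0.],
                  [1., 1., 1.],
                  [1., 0., 1.]])
scene = np.zeros((16, 16))
scene[2:5, 3:6] = patch
scene[9:12, 10:13] = patch

template = np.zeros((16, 16))
template[0:3, 0:3] = patch

# Correlation via FFT: conj(FFT(template)) * FFT(scene), then transform back.
corr = np.fft.ifft2(np.conj(np.fft.fft2(template)) * np.fft.fft2(scene)).real

# The brightest spots mark where the pattern sits in the scene.
peaks = np.argwhere(np.isclose(corr, corr.max()))
```

The two correlation peaks land exactly at the shifts (2, 3) and (9, 10) where the patch was pasted into the scene.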
D. Edge detection using the convolution integral.
This is like the template matching in the previous section, but this time the features to be detected are edges. Using Scilab, horizontal, vertical, and diagonal patterns were generated as 3x3 matrices whose elements sum to zero, as shown in Figure 8.
Figure 8. 3x3 matrix for horizontal (left), vertical (middle), and
diagonal (right) patterns.
Using the function imcorrcoef(), each pattern was convolved with the VIP image. Figures 9-11 show the resulting images.
Figure 9. Horizontal edge detection.
Notice that the horizontal lines are more pronounced than the vertical and diagonal ones; in fact, the vertical lines are completely absent. A vertical edge has no horizontal component, so it produces no response to the horizontal pattern. There is a hint of the diagonal lines because they do have a horizontal component.
Figure 10. Vertical edge detection.
Notice that the vertical lines are more pronounced than the other lines. This time the horizontal lines are missing, which follows from the same argument as in the horizontal edge detection. Again, there is a hint of the diagonal lines because they have a vertical component.
Figure 11. Diagonal edge detection.
For the diagonal edge detection, the diagonal, horizontal, and vertical lines are all apparent. As explained earlier, a diagonal edge has both horizontal and vertical components.
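The edge-detection idea can be sketched with the convolution theorem itself (Python/NumPy stand-in for Scilab's imcorrcoef(); the exact kernel entries of Figure 8 and the test bars are assumptions, though the kernels do sum to zero as required):

```python
import numpy as np

# Sum-zero 3x3 kernels; the exact entries of Figure 8 are assumed.
horizontal = np.array([[-1., -1., -1.],
                       [ 2.,  2.,  2.],
                       [-1., -1., -1.]])
vertical = horizontal.T

def conv_fft(img, kernel):
    """Circular convolution via the convolution theorem (on-theme with
    this activity); output is offset by (1, 1) for a 3x3 kernel because
    the kernel is anchored at the array's top-left corner."""
    kpad = np.zeros_like(img)
    kpad[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kpad)).real

# One horizontal and one vertical bar stand in for the "VIP" image.
img = np.zeros((32, 32))
img[10, 2:14] = 1.0     # horizontal line
img[2:14, 20] = 1.0     # vertical line

h_resp = conv_fft(img, horizontal)   # responds along the horizontal bar
v_resp = conv_fft(img, vertical)     # responds along the vertical bar
```

Along the interior of a bar perpendicular to the kernel's orientation the three kernel rows (or columns) cancel exactly, which is why each detector ignores the other bar.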
And that is the end of this activity. Hurray!!! :)
Though I posted this blog late, I would still give myself a grade of 10/10. I produced all the required outputs and was able to explain the results. I would like to thank Ma'am Jing and Joseph Raphael Bunao for the useful conversations regarding the FFT. :D