Chapter 2.1 Overview:
Image functions
The Dirac distribution and convolution
The Fourier transform
Images as a stochastic process
Images as linear systems
A signal is a function depending on some variable with physical meaning.
Signals can be one-dimensional (e.g., dependent on time), two-dimensional (e.g., an image dependent on two coordinates in a plane), three-dimensional (e.g., describing a volumetric object in space), or higher-dimensional.

A scalar function may be sufficient to describe a monochromatic image, while vector functions represent, for example, color images consisting of three component colors. 
The image can be modeled by a continuous function of two or three variables; arguments are coordinates x, y in a plane. If images change in time, a third variable t might be added.  
The image function values correspond to the brightness at image points. The function value can express other physical quantities as well (temperature, pressure distribution, distance from the observer, etc.). 

The brightness integrates different optical quantities – using brightness as a basic quantity allows us to avoid the description of the very complicated process of image formation.  
The image on the human eye retina or on a TV camera sensor is intrinsically 2D. We shall call such a 2D image, bearing information about brightness at image points, an intensity image.  
The real world which surrounds us is intrinsically 3D. The 2D intensity image is the result of a perspective projection of the 3D scene.  
When 3D objects are mapped into the camera plane by perspective projection, a lot of information disappears, as such a transformation is not one-to-one. Recognizing or reconstructing objects in a 3D scene from one image is an ill-posed problem.  
Recovering information lost by perspective projection is only one, mainly geometric, problem of computer vision.  
The second problem is how to understand image brightness. The only information available in an intensity image is the brightness of the appropriate pixel, which depends on a number of independent factors such as object surface reflectance properties, illumination, and the object surface orientation with respect to the viewer and light source.

Some scientific and technical disciplines work with 2D images directly; for example, a character drawn on a sheet of paper, an image of a fingerprint, or a flat specimen viewed through a microscope with transparent illumination.

Many basic and useful methods used in digital image analysis do not depend on whether the object was originally 2D or 3D. Much of the material in this class restricts itself to the study of such methods — the problem of 3D understanding is addressed in Computer Vision class (EE628). 

Related disciplines are photometry, which is concerned with brightness measurement, and colorimetry, which studies light reflectance or emission depending on wavelength.  
A light source energy distribution C(x,y,t,lambda) depends in general on image coordinates (x, y), time t, and wavelength lambda. 

For the human eye and most technical image sensors (e.g., TV cameras), the brightness f depends on the light source energy distribution C and on the spectral sensitivity of the sensor, S(lambda), which is dependent on the wavelength: f(x, y, t) = Integral of C(x, y, t, lambda) S(lambda) d lambda (Eq. 2.2) 
A monochromatic image f(x,y,t) provides the brightness distribution. Image processing often deals with static images, in which time t is constant. A monochromatic static image is represented by a continuous image function f(x,y) whose arguments are two coordinates in the plane.  
In a color or multispectral image, the image is represented by a real vector function f (Eq. 2.3) where, for example, there may be red, green and blue components.  
Computerized image processing uses digital image functions which are usually represented by matrices, so coordinates are integer numbers. The customary orientation of coordinates in an image is in the normal Cartesian fashion (horizontal x axis, vertical y axis), although the (row, column) orientation used in matrices is also quite often used in digital image processing. 

The range of image function values is also limited; by convention, in monochromatic images the lowest value corresponds to black and the highest to white.  
Brightness values bounded by these limits are gray levels.
Practical Experiment 2.A – Exploring Image Intensity Maps
Open Matlab and run the demo imadjdemo. Explore the gray level properties of the aluminum image and compare the gray level values before and after the gray level transformation. Are dark pixels represented by low values? What seems to be the maximum value of the image corresponding to white? How many bits are probably used to represent the gray level range (number of gray levels = 2^(number of bits))? 

The quality of a digital image increases with its spatial, spectral, radiometric, and time resolution.

Will be covered next week in a separate handout.
Not covered in this section. Refer to section 11.2 The Fourier Transform.
Images f(x,y) can be treated as deterministic functions or as realizations of stochastic processes. 

Mathematical tools used in image description have roots in linear system theory, integral transformations, discrete mathematics and the theory of stochastic processes. 
Many image processing operations can be modeled as a linear system. Some examples are convolution masks and Wiener filters.  
A linear system satisfies the properties of scaling and superposition: T{a f_1 + b f_2} = a T{f_1} + b T{f_2}  
This allows us to apply linear systems theory to the processing of images. In particular, we can use the convolution theorem to implement image processing operations: g(x, y) = (f * h)(x, y)  <=>  G(u, v) = F(u, v) H(u, v)  
Real images are not truly linear systems, since they are limited in size and in the number of quantization levels; however, in many cases they can be approximated by linear systems. 
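The two properties above can be checked numerically. The handout's experiments use Matlab; the sketch below is an illustrative Python/NumPy version (an assumption, not course code) that verifies both the convolution theorem and linearity on a small synthetic image, using circular convolution so the FFT identity holds exactly.

```python
import numpy as np

def circ_conv2(f, h):
    # Direct circular (periodic) 2D convolution:
    # g(x,y) = sum_ij f(i,j) h((x-i) mod M, (y-j) mod N)
    M, N = f.shape
    g = np.zeros_like(f)
    for x in range(M):
        for y in range(N):
            for i in range(M):
                for j in range(N):
                    g[x, y] += f[i, j] * h[(x - i) % M, (y - j) % N]
    return g

rng = np.random.default_rng(0)
f = rng.random((8, 8))            # a small synthetic "image"
h = np.zeros((8, 8))
h[:3, :3] = 1.0 / 9.0             # 3x3 averaging mask, padded to image size

# Convolution theorem: convolving in space equals multiplying in frequency.
g_space = circ_conv2(f, h)
g_freq = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))
print(np.allclose(g_space, g_freq))   # True

# Linearity: T{a f1 + b f2} = a T{f1} + b T{f2} for the convolution operator T.
f1, f2, a, b = rng.random((8, 8)), rng.random((8, 8)), 2.0, -0.5
print(np.allclose(circ_conv2(a * f1 + b * f2, h),
                  a * circ_conv2(f1, h) + b * circ_conv2(f2, h)))  # True
```

For large images the FFT route is the practical one: the direct sum costs O(M^2 N^2) operations, while the FFT product costs O(MN log MN).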
Sampling
Quantization
Color images
An image captured by a sensor is expressed as a continuous function f(x,y) of two coordinates in the plane.  
Image digitization means that the function f(x,y) is sampled into a matrix with M rows and N columns.  
Image quantization assigns to each continuous sample an integer value.


Two questions should be answered in connection with image function sampling: the choice of the sampling interval (the distance between sampling points), and the geometric arrangement of the sampling points (the sampling grid).

A continuous image function f(x,y) can be sampled using a discrete grid of sampling points in the plane. The image is sampled at points x = j Delta_x, y = k Delta_y 

Two neighboring sampling points are separated by distance Delta_x along the x axis and Delta_y along the y axis. The distances Delta_x and Delta_y are called the sampling intervals, and the matrix of samples constitutes the discrete image.  
The ideal sampling s(x,y) in the regular grid can be represented using a collection of Dirac distributions (Eq. 2.31). The sampled image is the product of the continuous image f(x,y) and the sampling function s(x,y) (Eq. 2.32).  
The collection of Dirac distributions in equation 2.32 can be regarded as periodic with periods Delta_x, Delta_y and expanded into a Fourier series, assuming that the sampling grid covers the whole plane (infinite limits) (Eq. 2.33), where the coefficients of the Fourier expansion can be calculated as given in Eq. 2.34.  
Since only the term for j=0 and k=0 in the sum is nonzero in the range of integration, the coefficients are given by Eq. 2.35. 
Since the integral in equation 2.35 is uniformly equal to one, the coefficients can be expressed as given in Eq. 2.36 and 2.32 can be rewritten as Eq. 2.37. In the frequency domain then Eq. 2.38. 
Thus the Fourier transform of the sampled image is the sum of periodically repeated Fourier transforms F(u,v) of the image.  
Periodic repetition of the Fourier transform result F(u,v) may under certain conditions cause distortion of the image which is called aliasing; this happens when individual digitized components F(u,v) overlap.  
There is no aliasing if the image function f(x,y) has a band limited spectrum … its Fourier transform F(u,v) = 0 outside a certain interval of frequencies: |u| > U, |v| > V.  
As you know from general sampling theory, overlapping of the periodically repeated results of the Fourier transform F(u,v) of an image with band limited spectrum can be prevented if the sampling interval is chosen according to Eq. 2.39 

This is the Shannon sampling theorem that has a simple physical interpretation in image analysis: The sampling interval should be chosen in size such that it is less than or equal to half of the smallest interesting detail in the image.  
The sampling function is not the Dirac distribution in real digitizers; narrow impulses with limited amplitude are used instead. As a result, real image digitizers use a sampling interval about ten times smaller than that indicated by the Shannon sampling theorem, because the algorithms for image reconstruction use only a step function. 
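Aliasing is easiest to see in one dimension. The following sketch (an illustrative Python/NumPy example, not from the handout) shows that a 7 Hz cosine sampled at 10 Hz, which violates the Shannon condition (the Nyquist rate would be 14 Hz), produces exactly the same samples as a 3 Hz cosine.

```python
import numpy as np

# Sampling a 7 Hz cosine at 10 Hz: the sampling interval is too large,
# so the signal aliases to |7 - 10| = 3 Hz.
fs = 10.0                             # sampling rate in Hz -> Delta = 1/fs
t = np.arange(0, 1, 1 / fs)           # one second of sampling points
x_under = np.cos(2 * np.pi * 7 * t)   # under-sampled 7 Hz signal
x_alias = np.cos(2 * np.pi * 3 * t)   # 3 Hz signal, adequately sampled
print(np.allclose(x_under, x_alias))  # True: the two sample sets are identical
```

Once the samples coincide, no reconstruction algorithm can tell the two signals apart; the same mechanism produces Moiré patterns when fine image detail is under-sampled.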

Practical examples of digitization such as a flatbed scanner and digital cameras help to understand the reality of sampling. Try experimenting with a flatbed scanner at different sampling rates to see how this works.  
Examples of sampling and resampling:


A continuous image is digitized at sampling points. These sampling points are ordered in the plane and their geometric relation is called the grid. Grids used in practice are mainly square or hexagonal (Figure 2.4). 

One infinitely small sampling point in the grid corresponds to one picture element (pixel) in the digital image. The set of pixels together covers the entire image. Pixels captured by a real digitization device have finite size. The pixel is a unit which is not further divisible; sometimes pixels are also called points. 
The magnitude of the sampled image is expressed as a digital value in image processing.  
The transition between continuous values of the image function (brightness) and its digital equivalent is called quantization.  
The number of quantization levels should be high enough for human perception of fine shading details in the image.  
Most digital image processing devices use uniform quantization into k equal intervals. If b bits are used … the number of brightness levels is k=2^b.  
Eight bits per pixel (256 gray levels) are commonly used; specialized measuring devices use twelve or more bits per pixel. 
Practical Experiment 2.B – Image Quantization
Download the file ex2bquant.m and the image file ssl.256. You also need to download the utility file binread.m. From within Matlab, load the image and run the file as follows:
>> X = binread('ssl.256');
>> imquant(X);
This will display the image in matrix X at 2, 4, 8, 16, 32, 64, 128 and 256 gray levels in two figure windows. At what number of quantization levels do false contours appear?
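Uniform quantization itself is a one-line operation. The sketch below (an illustrative Python/NumPy stand-in for the Matlab demo) reduces an 8-bit image to k = 2^b gray levels by mapping each interval of width 256/k to its lowest level.

```python
import numpy as np

def quantize(img, bits):
    # Uniform quantization of an 8-bit image into k = 2^bits gray levels:
    # integer division collapses each interval of width 256/k to one value.
    step = 256 // (2 ** bits)
    return (img // step) * step

img = np.arange(256, dtype=np.uint8).reshape(16, 16)  # synthetic gradient
q1 = quantize(img, 1)                # b = 1 bit -> k = 2 gray levels
print(np.unique(q1).tolist())        # [0, 128]
q3 = quantize(img, 3)                # b = 3 bits -> k = 8 gray levels
print(len(np.unique(q3)))            # 8
```

On a smooth gradient like this one, low bit depths turn gradual brightness changes into visible steps, which is exactly the false-contour effect the experiment asks about.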
Not covered.
Metric and topological properties of digital images  
Histograms  
Visual perception of the image  
Image quality  
Noise in images 
Metric and topological properties of digital images
Distance is an important example. The distance between two pixels in a digital image is a significant quantitative measure.  
The distance between points with coordinates (i,j) and (h,k) may be defined in several different ways: the Euclidean distance D_E = sqrt((i-h)^2 + (j-k)^2), the city block distance D_4 = |i-h| + |j-k|, and the chessboard distance D_8 = max(|i-h|, |j-k|).

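The three common pixel distances (Euclidean, city block D4, chessboard D8) can be sketched directly; the function names below are illustrative.

```python
import math

def d_euclidean(i, j, h, k):
    # D_E: the usual straight-line distance.
    return math.sqrt((i - h) ** 2 + (j - k) ** 2)

def d_city_block(i, j, h, k):
    # D_4: minimal number of steps using only horizontal/vertical moves.
    return abs(i - h) + abs(j - k)

def d_chessboard(i, j, h, k):
    # D_8: minimal number of king's moves (diagonal steps also cost 1).
    return max(abs(i - h), abs(j - k))

print(d_euclidean(0, 0, 3, 4))       # 5.0
print(d_city_block(0, 0, 3, 4))      # 7
print(d_chessboard(0, 0, 3, 4))      # 4
```

Note that for the same pair of pixels the three definitions generally disagree; the choice matters, for example, when measuring region borders or defining neighborhoods.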
Pixel adjacency is another important concept in digital images: two pixels are 4-neighbors if they share an edge, and 8-neighbors if they share an edge or a corner.

Topological properties of images are invariant to rubber sheet transformations. Stretching does not change contiguity of the object parts and does not change the number of holes in regions.  
One topological image property is the Euler–Poincaré characteristic, defined as the difference between the number of regions and the number of holes in them.  
Convex hulls are used to describe topological properties of objects. The convex hull is the smallest region which contains the object, such that any two points of the region can be connected by a straight line, all points of which belong to the region. 
The brightness histogram provides the frequency of the brightness values z in the image.  
Algorithm 2.1 – Finding a Brightness Histogram for a grayscale image
1. Assign zero values to all elements of the vector h of size k by 1, where k is the number of gray levels in the image.
2. For all pixels (x,y) of the image f, increment the element h(f(x,y)) that corresponds to the gray level by one. 
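Algorithm 2.1 translates almost line for line into code. The sketch below is an illustrative Python/NumPy version (the course itself uses Matlab).

```python
import numpy as np

def brightness_histogram(f, k):
    # Step 1: assign zero values to all k elements of the histogram vector h.
    h = np.zeros(k, dtype=int)
    # Step 2: for all pixels, increment the bin of the pixel's gray level.
    for value in f.ravel():
        h[value] += 1
    return h

img = np.array([[0, 1, 1],
                [2, 1, 0]])          # tiny image with k = 4 gray levels
print(brightness_histogram(img, 4).tolist())   # [2, 3, 1, 0]
```

In practice the loop is replaced by a vectorized call such as np.bincount(f.ravel(), minlength=k), which computes the same counts far faster.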

Histograms may have many local maxima … histogram smoothing. This can be used to help filter out the background in an image, if the background and the object have different intensities. 
Practical Experiment 2.C – Image Histograms
Start Matlab. Start the histogram demos, imadjdemo. Running the demo displays histograms of the original image and the equalized image. Analyzing the image and histogram differences, can you describe the functionality of the histogram equalization routine? In which image can you use the histogram to separate out the background and the image?
We need to take into account what a human perceives, even if we are working on computer vision. People are susceptible to many illusions; understanding these illusions gives clues about how the human visual system works. The sensitivity of human senses is approximately logarithmically proportional to the intensity of the input signal.

Subjective: criteria depend on the perception of a selected group of viewers. Images are appraised according to a list of criteria.  
Objective: depends on a calculated metric; ideally it should correspond to good subjective quality as well. Usually the quality is compared to a known reference image using a mean-squared approach or a maximum difference. Another method is to use calibration points within an image for testing resolution. 
Images are often degraded by random noise.  
Noise can occur during image capture, transmission or processing, and may be dependent on or independent of image content.  
Noise is usually described by its probabilistic characteristics, e.g., white noise, which has a constant power spectrum, or Gaussian noise, whose amplitude follows a Gaussian probability density.

During image transmission, noise which is usually independent of the image signal occurs.  
Noise may be additive, multiplicative, or impulse (salt-and-pepper) noise.
Open up Matlab and open the noise demo, nrfiltdemo. Choose an image and add different amounts of Gaussian and salt-and-pepper noise. Speckle noise is multiplicative noise. Note the effect of each noise type on the image. Noise can be added to any image using the Matlab command imnoise. 
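The two noise types used in the experiment can be generated by hand. The sketch below is an illustrative Python/NumPy stand-in for Matlab's imnoise (an assumption, not course code), applied to a constant gray image with values in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.full((64, 64), 0.5)         # constant mid-gray test image

# Additive Gaussian noise: g = f + n with n ~ N(0, sigma^2), clipped to [0, 1].
gaussian = np.clip(img + rng.normal(0.0, 0.1, img.shape), 0.0, 1.0)

# Salt-and-pepper (impulse) noise: a fraction p of pixels is forced to 0 or 1.
p = 0.05
snp = img.copy()
mask = rng.random(img.shape) < p     # randomly select the corrupted pixels
snp[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))

print(gaussian.shape == snp.shape)                       # True
print(set(np.unique(snp).tolist()) <= {0.0, 0.5, 1.0})   # True
```

The contrast is visible in how the two are filtered: Gaussian noise responds well to averaging, while salt-and-pepper noise is better removed by a median filter, which discards the outlier values entirely.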
Source:
http://www.eng.iastate.edu/ee528/sonkamaterial/chapter_2.htm