
The digitized image and its properties: Basic concepts



Chapter 2.1:    The digitized image and its properties: Basic concepts

Chapter 2.1 Overview: 
Image functions
The Dirac distribution and convolution
The Fourier transform
Images as a stochastic process
Images as linear systems

Basic Concepts 

A signal is a function depending on some variable with physical meaning. 
Signals can be 

one-dimensional (e.g., dependent on time), 
two-dimensional (e.g., images dependent on two co-ordinates in a plane), 
three-dimensional (e.g., describing an object in space), 
higher-dimensional. 
A scalar function may be sufficient to describe a monochromatic image, while vector
functions represent, for example, color images consisting of three component colors.

Image functions 

The image can be modeled by a continuous function of two or three variables; arguments are coordinates x, y in a plane. If images change in time, a third variable t might be added. 
The image function values correspond to the brightness at image points. The function value can express other physical quantities as well (temperature, pressure
distribution, distance from the observer, etc.). 
The brightness integrates different optical quantities – using brightness as a basic quantity allows us to avoid the description of the very complicated process of image formation. 
The image on the human eye retina or on a TV camera sensor is intrinsically 2D. We shall call such a 2D image bearing information about brightness points an intensity image.
The real world which surrounds us is intrinsically 3D.  The 2D intensity image is the result of a perspective projection of the 3D scene.  
When 3D objects are mapped into the camera plane by perspective projection a lot of information disappears, as such a transformation is not one-to-one.  Recognizing or reconstructing objects in a 3D scene from one image is an ill-posed problem.
Recovering information lost by perspective projection is only one, mainly geometric, problem of computer vision. 
The second problem is how to understand image brightness. The only information available in an intensity image is brightness of the appropriate pixel, which is dependent on a number of independent factors such as 

object surface reflectance properties (given by the surface material, microstructure
and marking), 
illumination properties, 
object surface orientation with respect to a viewer and light source. 
Some scientific and technical disciplines work with 2D images directly; for example, 

an image of a flat specimen viewed through a microscope with transmitted illumination, 
a character drawn on a sheet of paper,
the image of a fingerprint, etc.
Many basic and useful methods used in digital image analysis do not depend on whether the object was originally 2D or 3D.  Much of the material in this class restricts itself to the study of such methods — the problem of 3D understanding is addressed in Computer Vision class (EE628).
Related disciplines are photometry which is concerned with brightness measurement, and colorimetry which studies light reflectance or emission depending on wavelength.  
A light source energy distribution C(x,y,t,lambda) depends in general on image co-ordinates (x,y), time t, and wavelength lambda.
For the human eye and most technical image sensors (e.g., TV cameras), the brightness f depends on the light source energy distribution and on the spectral sensitivity of the sensor S(lambda) (dependent on the wavelength): f(x,y,t) = integral of C(x,y,t,lambda) S(lambda) d(lambda) (Eq. 2.2).

A monochromatic image f(x,y,t) provides the brightness distribution. Image processing often deals with static images, in which time t is constant. A monochromatic static image is represented by a continuous image function f(x,y) whose arguments are two co-ordinates in the plane.
In a color or multispectral image, the image is represented by a real vector function f = (f1, f2, f3) (Eq. 2.3), where, for example, the components may be the red, green, and blue channels.
Computerized image processing uses digital image functions, which are usually represented by matrices, so the co-ordinates are integer numbers. The customary orientation of co-ordinates in an image is the normal Cartesian fashion
(horizontal x axis, vertical y axis), although the (row, column) orientation of matrices is also common in digital image processing.
The range of image function values is also limited; by convention, in monochromatic images the lowest value corresponds to black and the highest to white. 
Brightness values bounded by these limits are gray levels.

Practical Experiment 2.A – Exploring Image Intensity Maps

Open Matlab and run the demo imadjdemo. Explore the gray level properties of the aluminum image, compare the gray level values before and after gray level transformation. Are dark pixels represented by low values? What seems to be the maximum value of the image corresponding to white? How many bits are probably used to represent the gray level range (number of gray levels =2^(number of bits))?
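If the imadjdemo demo is not available in your Matlab installation, the same gray-level properties can be inspected directly. The following is a minimal sketch, assuming the Image Processing Toolbox and its bundled test image cameraman.tif (any 8-bit grayscale image will do):

>> I = imread('cameraman.tif');   % 8-bit grayscale test image
>> class(I)                       % uint8, so 2^8 = 256 gray levels (0..255)
>> [min(I(:)) max(I(:))]          % darkest and brightest values present
>> J = imadjust(I);               % stretch gray levels toward the full range
>> imshowpair(I, J, 'montage')    % compare before and after the transformation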

The quality of a digital image grows in proportion to the spatial, spectral, radiometric, and
time resolution.

The spatial resolution is given by the proximity of image samples in the image plane. 
The spectral resolution is given by the bandwidth of the light frequencies captured by the sensor. 
The radiometric resolution corresponds to the number of distinguishable gray levels. 
The time resolution is given by the interval between time samples at which images are captured.

The Dirac distribution and convolution 

Will be covered next week in a separate handout.

The Fourier transform 

Not covered in this section. Refer to section 11.2 The Fourier Transform.

Images as a stochastic process 

Images f(x,y) can be treated as deterministic functions or as realizations of stochastic
processes. 
Mathematical tools used in image description have roots in linear system theory, integral
transformations, discrete mathematics and the theory of stochastic processes.

Images as linear systems 

Many image processing operations can be modeled as linear systems; examples include convolution masks and Wiener filters.
A linear system T satisfies the properties of scaling and superposition: T{a1 f1 + a2 f2} = a1 T{f1} + a2 T{f2}.
This allows us to apply linear systems theory to the processing of images. In particular, we can use the convolution theorem to implement image processing operations: g(x,y) = (f * h)(x,y)  <=>  G(u,v) = F(u,v) H(u,v).
Real imaging is not strictly linear, since images are limited in size and in the number of quantization levels. In many cases, however, it can be approximated well by linear systems.
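To make the convolution theorem concrete, here is a minimal Matlab sketch (the Image Processing Toolbox is assumed only for the test image). It checks that spatial convolution and pointwise multiplication of zero-padded spectra give the same result:

% conv_theorem_sketch.m -- spatial convolution vs. frequency-domain product
f = double(imread('cameraman.tif'));   % test image
h = ones(5) / 25;                      % 5x5 averaging convolution mask
[M, N] = size(f);  [P, Q] = size(h);
g1 = conv2(f, h, 'full');              % g = f * h in the spatial domain
F  = fft2(f, M+P-1, N+Q-1);            % zero-pad to the full output size
H  = fft2(h, M+P-1, N+Q-1);            % so circular wrap-around vanishes
g2 = real(ifft2(F .* H));              % G(u,v) = F(u,v) H(u,v)
disp(max(abs(g1(:) - g2(:))))          % agreement up to round-off error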


Chapter 2.2 Digitized image and its properties: Image digitization

Chapter 2.2 Overview:
Sampling 
Quantization
Color images

Image digitization 

An image captured by a sensor is expressed as a continuous function f(x,y) of two co-ordinates in the plane. 
Image digitization means that the function f(x,y) is sampled into a matrix with M rows and N columns. 
Image quantization assigns to each continuous sample an integer value. 

The continuous range of the image function f(x,y) is split into K intervals. 
The finer the sampling (i.e., the larger M and N) and quantization (the larger K) the better the approximation of the continuous image function f(x,y).
Two questions should be answered in connection with image function sampling: 

The sampling period should be determined — the distance between two neighboring sampling points in the image 
The geometric arrangement of sampling points (sampling grid) should be set. 

Sampling 

A continuous image function f(x,y) can be sampled using a discrete grid of sampling points in the plane. The image is sampled at points x = j Delta_x, y = k Delta_y.
Two neighboring sampling points are separated by distance Delta_x along the x axis and Delta_y along the y axis. Distances Delta_x and Delta_y are called the sampling intervals, and the matrix of samples constitutes the discrete image.
The ideal sampling s(x,y) on the regular grid can be represented using a collection of Dirac distributions (Eq. 2.31). The sampled image is then the product of the continuous image f(x,y) and the sampling function s(x,y) (Eq. 2.32).
The collection of Dirac distributions in Eq. 2.32 can be regarded as periodic with periods Delta_x, Delta_y and expanded into a Fourier series, assuming that the sampling grid covers the whole plane (infinite limits) (Eq. 2.33), where the coefficients of the Fourier expansion are calculated as given in Eq. 2.34.
Only the term with j=0 and k=0 in the sum is nonzero in the range of integration, so the coefficients reduce to the integral given in Eq. 2.35.
Since the integral in Eq. 2.35 is uniformly equal to one, the coefficients take the simple form given in Eq. 2.36, and Eq. 2.32 can be rewritten as Eq. 2.37; in the frequency domain this becomes Eq. 2.38.
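The equations referenced above appear only as images in the original page. Reconstructed from the surrounding definitions (standard sampling theory; the numbering follows the course text), the key ones read, in LaTeX notation:

s(x,y) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \delta(x - j\,\Delta x,\; y - k\,\Delta y)    (Eq. 2.31)

f_s(x,y) = f(x,y)\, s(x,y)    (Eq. 2.32)

F_s(u,v) = \frac{1}{\Delta x\,\Delta y} \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} F\!\left(u - \frac{m}{\Delta x},\; v - \frac{n}{\Delta y}\right)    (Eq. 2.38)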

Thus the Fourier transform of the sampled image is the sum of periodically repeated Fourier transforms F(u,v) of the image. 
Periodic repetition of the Fourier transform result F(u,v) may under certain conditions cause distortion of the image which is called aliasing; this happens when individual digitized components F(u,v) overlap. 
There is no aliasing if the image function f(x,y) has a band-limited spectrum, i.e., its Fourier transform F(u,v) = 0 outside a certain interval of frequencies: for |u| > U or |v| > V.
As is known from general sampling theory, overlapping of the periodically repeated results of the Fourier transform F(u,v) of an image with a band-limited spectrum can be prevented if the sampling intervals are chosen such that Delta_x <= 1/(2U), Delta_y <= 1/(2V) (Eq. 2.39).
This is the Shannon sampling theorem, which has a simple physical interpretation in image analysis: the sampling interval should be chosen so that it is less than or equal to half of the smallest interesting detail in the image.
The sampling function is not the Dirac distribution in real digitizers; narrow impulses with limited amplitude are used instead. As a result, real image digitizers use a sampling interval about ten times smaller than that indicated by the Shannon sampling theorem, because the algorithms for image reconstruction use only a step function.
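Aliasing is easy to provoke in Matlab with a pattern whose local frequency grows with radius. The following sketch (no toolbox required; the pattern and subsampling factor are arbitrary example choices) subsamples coarsely enough to violate Eq. 2.39:

% aliasing_sketch.m -- false low-frequency rings appear after subsampling
[x, y] = meshgrid(-1 : 1/256 : 1);     % 513 x 513 sampling grid
f = cos(200 * (x.^2 + y.^2));          % local frequency increases with radius
figure, imagesc(f), colormap gray, axis image off, title('original pattern');
g = f(1 : 4 : end, 1 : 4 : end);       % keep every 4th sample in x and y
figure, imagesc(g), colormap gray, axis image off, title('subsampled: aliased rings');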
Practical examples of digitization such as a flatbed scanner and digital cameras help to understand the reality of sampling. Try experimenting with a flatbed scanner at different sampling rates to see how this works.
Examples of sampling and resampling (the example images of the Golden Gate Bridge are not reproduced here):

the original image,
the image resampled at half the original resolution,
the image resampled at twice the original resolution (shown with image size adjusted),
the image downsampled to one-tenth resolution and then upsampled at 10x resolution.
A continuous image is digitized at sampling points.  These sampling points are ordered in the plane and their geometric relation is called the grid. Grids used in practice are mainly square or hexagonal (Figure 2.4).
One infinitely small sampling point in the grid corresponds to one picture element (pixel) in the digital image.  The set of pixels together covers the entire image. Pixels captured by a real digitization device have finite size. The pixel is a unit which is not further divisible; sometimes pixels are also called points. 

Quantization 

The magnitude of the sampled image is expressed as a digital value in image processing. 
The transition between continuous values of the image function (brightness) and its digital equivalent is called quantization. 
The number of quantization levels should be high enough for human perception of fine shading details in the image.
Most digital image processing devices use uniform quantization into k equal intervals. If b bits are used … the number of brightness levels is k=2^b
Eight bits per pixel (256 gray levels) are commonly used; specialized measuring devices use twelve or more bits per pixel. 

Practical Experiment 2.B: Quantization Effects

Download the file ex2bquant.m and the image file ssl.256. You also need to download the utility file binread.m. From within Matlab, load the image and run the routine as follows:

>> X = binread('ssl.256');
>> imquant(X);

This will display the image in matrix X at 2, 4, 8, 16, 32, 64, 128, and 256 gray levels in two figure windows. At what number of quantization levels do false contours appear?
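If the course files are unavailable, the same false-contour effect can be reproduced with uniform quantization on any 8-bit image. A minimal sketch (cameraman.tif is assumed to ship with the Image Processing Toolbox):

% quant_sketch.m -- requantize an 8-bit image to b bits per pixel
I = imread('cameraman.tif');
for b = [8 4 2 1]
    k    = 2^b;                                   % number of gray levels
    step = 256 / k;                               % width of each interval
    Q    = uint8(floor(double(I) / step) * step); % uniform quantization
    figure, imshow(Q), title(sprintf('%d gray levels (%d bits)', k, b));
end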

Color images 

Not covered.

Chapter 2.3 Digitized image and its properties: Digital image properties

Overview: 

Metric and topological properties of digital images
Histograms
Visual perception of the image
Image quality
Noise in images

Digital image properties

Metric and topological properties of digital images

Distance is an important example. The distance between two pixels in a digital image is a significant quantitative measure. 
The distance between points with co-ordinates (i,j) and (h,k) may be defined in several different ways (the three metrics are compared in the short sketch after this list):

Euclidean distance: D_E((i,j),(h,k)) = sqrt((i-h)^2 + (j-k)^2) (Eq. 2.42)
city block distance: D_4((i,j),(h,k)) = |i-h| + |j-k| (Eq. 2.43)
chessboard distance: D_8((i,j),(h,k)) = max(|i-h|, |j-k|) (Eq. 2.44)
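As a quick check of these three metrics, here is a minimal Matlab sketch for the two pixels (3,7) and (10,2) (arbitrary example values):

>> p = [3 7];  q = [10 2];   % pixel co-ordinates (i,j) and (h,k)
>> d = p - q;
>> D_E = sqrt(sum(d.^2))     % Euclidean distance: 8.6023
>> D_4 = sum(abs(d))         % city block distance: 12
>> D_8 = max(abs(d))         % chessboard distance: 7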

Pixel adjacency is another important concept in digital images.

4-neighborhood 
8-neighborhood (Fig. 2.6)
It will become necessary to consider important sets consisting of several adjacent pixels — regions. A region is a contiguous set of pixels. 
Contiguity paradoxes of the square grid … Figures 2.7, 2.8
One possible solution to contiguity paradoxes is to treat objects using 4-neighborhood and background using 8-neighborhood (or vice versa). 
A hexagonal grid solves many problems of the square grids … any point in the hexagonal raster has the same distance to all its six neighbors. 
Some common definitions for part of a region are:

Border of a region R is the set of pixels within the region that have one or more neighbors outside R … inner and outer borders exist. 
Edge is a local property of a pixel and its immediate neighborhood; it is a vector given by a magnitude and direction. 
The edge direction is perpendicular to the gradient direction which points in the direction of image function growth. 
Border and edge … the border is a global concept related to a region, while edge expresses local properties of an image function. 
Crack edges … four crack edges are attached to each pixel, which are defined by its relation to its 4-neighbors. The direction of the crack edge is that of increasing brightness, and is a multiple of 90 degrees, while its magnitude is the absolute difference between the brightness of the relevant pair of pixels. (Fig. 2.9)

Topological properties of digital images

Topological properties of images are invariant to rubber sheet transformations. Stretching does not change contiguity of the object parts and does not change the number of holes in regions. 
One topological image property is the Euler-Poincare characteristic, defined as the difference between the number of regions and the number of holes in them. 
Convex hulls are used to describe topological properties of objects. The convex hull is the smallest region which contains the object, such that any two points of the region can be
connected by a straight line, all points of which belong to the region.
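The Euler-Poincare characteristic can be computed directly with the toolbox function bweuler. A minimal sketch on a synthetic binary image:

>> BW = false(64);            % black background
>> BW(16:48, 16:48) = true;   % one white square region
>> BW(28:36, 28:36) = false;  % punch a hole into it
>> bweuler(BW, 4)             % (1 region) - (1 hole) = 0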

Histograms 

The brightness histogram provides the frequency of the brightness values z in the image.
Algorithm 2.1: Finding a brightness histogram for a gray-scale image
1. Assign zero values to all elements of the vector h of size k by 1, where k is the number of gray levels in the image.
2. For all pixels (x,y) of the image f, increment the element h(f(x,y)) corresponding to that gray level by one.
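Algorithm 2.1 translates almost line for line into Matlab. A sketch for an 8-bit image (the toolbox function imhist computes the same result):

% hist_sketch.m -- brightness histogram of an 8-bit gray-scale image
f = imread('cameraman.tif');     % gray levels 0..255, so k = 256
k = 256;
h = zeros(k, 1);                 % step 1: zero the histogram vector
for x = 1 : size(f, 1)           % step 2: visit every pixel and
    for y = 1 : size(f, 2)       %         increment its gray-level bin
        g = double(f(x, y));     % cast avoids uint8 saturation at 255
        h(g + 1) = h(g + 1) + 1; % bins 1..256 hold levels 0..255
    end
end
bar(0 : k-1, h)                  % same plot as imhist(f)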
Histograms may have many local maxima … histogram smoothing. This can be used to help filter out the background in an image, if the background and the object have different intensities.

Practical Experiment 2.C – Image Histograms 
Start Matlab.  Start the histogram demo, imadjdemo. Running the demo displays histograms of the original image and the equalized image. By analyzing the image and histogram differences, can you describe the functionality of the histogram equalization routine?  In which image can you use the histogram to separate the background from the object?
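The effect explored in the demo can also be reproduced with the toolbox functions imhist and histeq. A minimal sketch using the low-contrast test image pout.tif (assumed to be bundled with the Image Processing Toolbox):

>> I = imread('pout.tif');                         % low-contrast image
>> J = histeq(I);                                  % equalize the histogram
>> subplot(2,2,1), imshow(I), title('original')
>> subplot(2,2,2), imhist(I), title('original histogram')
>> subplot(2,2,3), imshow(J), title('equalized')
>> subplot(2,2,4), imhist(J), title('equalized histogram')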

Visual perception of the image 

We need to take into account what a human perceives, even if we are working on computer vision. People are susceptible to many illusions; understanding these illusions gives clues about how the human visual system works. The sensitivity of human senses is approximately logarithmically proportional to the intensity of an input signal.

Contrast: the local change in brightness; it is defined as the ratio between the average brightness of an object and the background brightness.
Acuity: the ability to detect details in an image. Humans are more sensitive to intermediate changes in brightness than to either fast or slow changes.
Color: humans are more sensitive to color than to brightness.

Image quality 

Subjective: criteria depend on the perception of a selected group of viewers; images are appraised according to a list of criteria.
Objective: depends on a calculated metric; ideally it should correspond to good subjective quality as well. Usually the quality is compared to a known reference image using a mean-squared-error or maximum-difference approach. Another method is to use calibration points within an image for testing resolution.

Noise in images 

Images are often degraded by random noise. 
Noise can occur during image capture, transmission or processing, and may be dependent on or independent of image content. 
Noise is usually described by its probabilistic characteristics. 

White noise: constant power spectrum (its intensity does not decrease with increasing frequency); a very crude approximation of image noise. 
Gaussian noise: a very good approximation of noise that occurs in many practical cases; the probability density of the random variable is given by the Gaussian curve. For 1D Gaussian noise, p(x) = (1 / (sigma sqrt(2 pi))) exp(-(x - mu)^2 / (2 sigma^2)), where mu is the mean and sigma is the standard deviation of the random variable. 
During image transmission, noise which is usually independent of the image signal occurs. 
Noise may be 

additive: the noise nu and the image signal g are independent, f(x,y) = g(x,y) + nu(x,y),
multiplicative: the noise is a function of the signal magnitude,
impulsive: individual pixels are corrupted (saturated impulsive noise is called salt-and-pepper noise). 


Practical Experiment 2.D – Noise 

Open Matlab and start the noise demo, nrfiltdemo. Choose an image and add different types and amounts of Gaussian and salt-and-pepper noise (speckle noise is multiplicative noise). Note the effect of each noise type on the image. Noise can be added to any image using the Matlab command imnoise.
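Outside the demo, the three noise types can be generated with imnoise directly. A minimal sketch (the parameter values are arbitrary examples):

>> I = imread('cameraman.tif');
>> G = imnoise(I, 'gaussian', 0, 0.01);    % additive Gaussian: mean 0, variance 0.01
>> S = imnoise(I, 'salt & pepper', 0.05);  % impulsive noise on 5% of the pixels
>> M = imnoise(I, 'speckle', 0.04);        % multiplicative (speckle) noise
>> subplot(2,2,1), imshow(I), title('original')
>> subplot(2,2,2), imshow(G), title('gaussian')
>> subplot(2,2,3), imshow(S), title('salt & pepper')
>> subplot(2,2,4), imshow(M), title('speckle')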


Source:

http://www.eng.iastate.edu/ee528/sonkamaterial/chapter_2.htm
