New tools for processing satellite images

Researchers in the Remote Sensing Group at the Universidad Politécnica de Madrid's School of Computing (FIUPM) have developed a number of satellite image fusion techniques that make the images more useful in fields such as urban planning, natural resource management and precision farming.

To ease the use of these techniques, the Remote Sensing Group has built add-ons for ImageJ, a public-domain Java image processing program. These add-ons provide the functionality needed to process multi-spectral images, starting with an image read/write module that supports specific multi-band formats (for any number of spectral bands).

Additionally, the Remote Sensing Group has developed a module of generic utilities. Another module implements the image transforms most commonly used in image fusion. And yet another includes different multi-spectral and panchromatic image fusion algorithms, including algorithms developed by the Remote Sensing Group, already published in specialized journals.

Information requirements

Some of our planet’s most pressing problems, like population growth, the need for sustainable intensive agriculture, safe food and energy production and distribution, soil management, climate change, public health and social conflicts, cannot be solved without immediate information on a local and a global scale.

Remote sensing is at present the only available technology for acquiring most of this information almost immediately and on any scale, as it supplies a tremendous amount of high-quality (accurate, consistent and reliable) geographic, spatial and spectral data.

One of the most commonly used products in remote sensing is imagery from optical sensors: multi-spectral and panchromatic images. A multi-spectral image is recorded by a sensor that picks up information in different bands of the electromagnetic spectrum. It is composed of as many images of the same area of the earth's surface as there are spectral bands recorded by the sensor (its spectral resolution). Each of these images provides information about the amount of radiation reflected, within one spectral band, by the different surfaces in the area being observed. Another feature of these images is their spatial resolution, which determines the size of the smallest detail that can be observed in the image.
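As an illustration, a multi-spectral image can be thought of as one two-dimensional grid of reflectance values per spectral band. A minimal sketch in plain Python, with entirely hypothetical band names and values:

```python
# A 3-band, 2x2-pixel multi-spectral image: one grid of reflectance
# values per spectral band (all values hypothetical).
multispectral = {
    "green": [[0.08, 0.10], [0.09, 0.11]],
    "red":   [[0.06, 0.07], [0.05, 0.30]],
    "nir":   [[0.40, 0.42], [0.45, 0.50]],
}

spectral_resolution = len(multispectral)   # number of bands recorded
rows = len(multispectral["red"])           # spatial extent in pixels
cols = len(multispectral["red"][0])

# The reflectance recorded for one ground location across all bands:
pixel_spectrum = {band: grid[1][1] for band, grid in multispectral.items()}
```

Spatial resolution is not visible in this structure; it is the ground area each pixel covers (e.g. 30 m for LANDSAT ETM+ multi-spectral bands).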

On the other hand, panchromatic images provide the same type of information for a single, normally the broadest, spectral band.

Information extraction methods

The volume of data to be managed and interpreted can vary significantly depending on the sensor type used and the features of the images they record. Even so, data throughput is always very high and calls for objective and precise information extraction methods.

Traditionally, these methods have been based on automatic classification and interpretation techniques, which on the whole output qualitative information that is not always easily quantifiable. Spectral indices and model inversion are other approaches that have recently proved more effective for extracting quantitative information from satellite images.
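As an example of a spectral index, the widely used Normalized Difference Vegetation Index (NDVI) derives a quantitative per-pixel value from the red and near-infrared bands. A minimal sketch with hypothetical reflectance values (not data from the article):

```python
# NDVI = (NIR - RED) / (NIR + RED), computed per pixel.
# Hypothetical reflectance grids for the red and near-infrared bands:
red = [[0.06, 0.07], [0.05, 0.30]]
nir = [[0.40, 0.42], [0.45, 0.32]]

def ndvi(red_band, nir_band):
    out = []
    for r_row, n_row in zip(red_band, nir_band):
        out.append([(n - r) / (n + r) if (n + r) != 0 else 0.0
                    for r, n in zip(r_row, n_row)])
    return out

index = ndvi(red, nir)
# Values near +0.7 suggest dense vegetation; values near 0 suggest
# bare soil, built-up areas or water.
```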

Whatever approach is taken, the precision of the results provided by remote sensing techniques depends on a series of factors. Current sensor technology (ETM+, ASTER, MERIS, MOS, SPOT, IKONOS, QUICKBIRD...) and, consequently, the features (spatial, spectral and temporal resolution) of the images the sensors record are two key factors. Owing to limitations in transmitting data from space platforms, there is at present a trade-off among these three types of resolution: sensors that provide a high spatial resolution tend to have a lower spectral resolution and vice versa. This is why panchromatic images have a higher spatial resolution than multi-spectral images. Multi-spectral images from space programmes like LANDSAT, SPOT or IRS, for example, have a high spectral resolution compared with the respective panchromatic images, but their spatial resolution is lower.

Image fusion techniques

Still, there is a wide range of remote sensing applications that require satellite images combining both features (high spatial and spectral resolution).

Even though some of today's satellites, like QUICKBIRD and IKONOS, do deliver such high-resolution images, they are costly and not always within reach of ordinary users. One way of getting images with high spatial and spectral resolution at a fairly reasonable cost is to use image fusion techniques. What's more, image fusion techniques can output very high-resolution images if applied to data recorded by latest-generation sensors. The goal of image fusion is to consistently integrate information from different images, assuring that the final fused image retains the key information from the source images.

There are now many optical image fusion methodologies and algorithms. The most common are based on a number of different transforms. Some, like the methodologies based on the Brovey transform, principal component analysis or the IHS transformation method (intensity, hue, saturation), are conceptually very simple.
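The Brovey transform mentioned above can be sketched in a few lines: each multi-spectral band is rescaled, pixel by pixel, by the ratio of the panchromatic value to the sum of the multi-spectral bands, so the pan image's spatial detail is injected while the band ratios are roughly preserved. A toy example with hypothetical values, assuming the pan image has already been resampled to the multi-spectral grid:

```python
# Hypothetical 1x2-pixel multi-spectral bands and a matching pan image.
bands = {
    "red":   [[0.20, 0.25]],
    "green": [[0.30, 0.28]],
    "blue":  [[0.10, 0.12]],
}
pan = [[0.90, 0.75]]

def brovey(bands, pan):
    """Brovey fusion: fused_b = ms_b * pan / sum(all ms bands)."""
    names = list(bands)
    fused = {n: [[0.0] * len(pan[0]) for _ in pan] for n in names}
    for i in range(len(pan)):
        for j in range(len(pan[0])):
            total = sum(bands[n][i][j] for n in names)
            for n in names:
                fused[n][i][j] = bands[n][i][j] * pan[i][j] / total
    return fused

fused = brovey(bands, pan)
# Note that per pixel the fused bands now sum to the pan value, which is
# precisely where the colour distortion discussed below comes from.
```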

As demonstrated in many papers, however, the colour of the fused images output by these methodologies is quite distorted compared with the original multi-spectral images. This means that they cannot be used in a variety of routine tasks in the field of remote sensing, like image classification or change detection.

On the other hand, there are many methods based on multi-resolution analysis techniques, most of which use the discrete wavelet transform (DWT). Generally, these methods introduce only slight distortion into the colour of the fused images with respect to the multi-spectral image. They outperform the techniques described above, but offer no way of controlling the trade-off between the spatial and spectral quality of the fused images.

Remote Sensing Group techniques

The School of Computing’s Remote Sensing Group has proposed a weighted version of DWT-mediated fusion calculated using the à trous algorithm. This algorithm can efficiently separate the background information from the detail of an image, avoiding the decimation process characteristic of other algorithms used to calculate the DWT.
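The idea behind the à trous ("with holes") scheme can be illustrated in one dimension: at each level the signal is smoothed with a B3-spline kernel whose taps are spaced 2^level samples apart, with no decimation, and the wavelet (detail) plane is the difference between successive smoothings. This is only an illustrative sketch with hypothetical values, not the group's actual implementation:

```python
# B3-spline smoothing kernel used by the a trous scheme.
KERNEL = [1 / 16, 1 / 4, 3 / 8, 1 / 4, 1 / 16]

def smooth(signal, level):
    """Convolve with the B3 kernel, taps spaced 2**level apart (no decimation)."""
    step = 2 ** level
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(KERNEL):
            j = min(max(i + (k - 2) * step, 0), n - 1)  # clamp at borders
            acc += w * signal[j]
        out.append(acc)
    return out

def a_trous(signal, levels):
    """Return the detail planes and the residual background."""
    planes, current = [], list(signal)
    for lvl in range(levels):
        smoothed = smooth(current, lvl)
        planes.append([c - s for c, s in zip(current, smoothed)])
        current = smoothed
    return planes, current

pan = [0.2, 0.8, 0.3, 0.9, 0.1, 0.7, 0.4, 0.6]  # hypothetical pan signal
planes, background = a_trous(pan, levels=2)
```

Because the planes telescope, adding the background and all detail planes recovers the original signal exactly; in additive wavelet fusion, the pan image's detail planes would instead be added to a resampled multi-spectral band to inject spatial detail.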

The Remote Sensing Group has also proposed other fusion techniques based on multidirection-multiresolution transforms (MDMR). These methods output better quality fused images than DWT-based techniques and, at the same time, have a built-in facility for controlling the trade-off between the images’ spatial quality and colour distortion.

Accessible tools

To ease the use of these algorithms, already published in reputable international journals, the Remote Sensing Group set itself the goal of integrating the methods into a single tool.

To do this, it set out to build the tools required to process multi-spectral images on top of ImageJ, public-domain image processing software developed in Java.

As mentioned earlier, the modules that are now up and running are:

• Image read/write module with specific formats for multi-band images (any number of spectral bands)
• Generic utilities module
• Implementation of the image transforms most commonly used in image fusion
• Multi-spectral and panchromatic fusion algorithms, including the algorithms developed within the School of Computing's Remote Sensing Group, published in international journals and described above

All these ImageJ add-ons have been packaged as IJFusion.

Eduardo Martínez | alfa