To ease the use of these techniques, the Remote Sensing Group has built add-ons for ImageJ, a public domain Java image processing program. These add-ons are needed to process multi-spectral images, and they now include an image read/write module with specific formats for multi-band images (usable with any number of spectral bands).
Additionally, the Remote Sensing Group has developed a module of generic utilities, another module implementing the image transforms most commonly used in image fusion, and a third containing multi-spectral and panchromatic image fusion algorithms, including algorithms developed by the Remote Sensing Group and already published in specialized journals.
Some of our planet’s most pressing problems, like population growth, the need for sustainable intensive agriculture, safe food and energy production and distribution, soil management, climate change, public health and social conflicts, cannot be solved without immediate information on a local and a global scale.
Remote sensing is at present the only available technology for acquiring most of this information almost immediately and on any scale, as it supplies a tremendous amount of high quality (accurate, consistent and reliable) geographic, spatial and spectral data.
One of the most commonly used products in remote sensing is imagery from remote optical sensors: multi-spectral and panchromatic images. A multi-spectral image is recorded by a sensor that picks up information contained in different bands of the electromagnetic spectrum. It is composed of as many images of the same area of the earth’s surface as there are spectral bands recorded by the sensor (its spectral resolution). Each of these images provides information about the amount of radiation reflected by the different surfaces in the observed area within one of the spectral bands. Another feature of these images is their spatial resolution, which determines the size of the smallest detail that can be observed in the image.
On the other hand, panchromatic images provide the same type of information for a single, normally the broadest, spectral band.
Information extraction methods
The volume of data to be managed and interpreted can vary significantly depending on the sensor type used and the features of the images they record. Even so, data throughput is always very high and calls for objective and precise information extraction methods.
Traditionally these methods have been based on automatic classification and interpretation techniques. On the whole, these techniques output qualitative information that is not always easily quantifiable. Spectral indices or model inversion are other approaches that have recently proved to be more effective for extracting quantitative information from satellite images.
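A classic example of a spectral index is the Normalised Difference Vegetation Index (NDVI), which combines the red and near-infrared bands of a multi-spectral image into a single quantitative map of vegetation vigour. A minimal sketch (the function name and array layout are assumptions for the example, not part of the group's tool):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalised Difference Vegetation Index.

    Healthy vegetation reflects strongly in the near infrared and
    absorbs red light, so NDVI approaches +1 over dense vegetation
    and is near 0 (or negative) over bare soil or water.
    `nir` and `red` are 2-D arrays of reflectance for the two bands,
    co-registered on the same grid; `eps` avoids division by zero.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)
```

Because the index is a per-pixel ratio, it is largely insensitive to overall illumination changes, which is what makes such indices attractive for extracting quantitative information.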
Whatever approach is taken, the precision of the results provided by remote sensing techniques depends on a series of factors. Current sensor technology (ETM+, ASTER, MERIS, MOS, SPOT, IKONOS, QUICKBIRD...) and, consequently, the features (spatial, spectral and temporal resolution) of the images they record are two key factors. Due to limitations in transmitting data from space platforms, there is at present a trade-off between these three types of resolution: sensors that provide a high spatial resolution tend to have a lower spectral resolution and vice versa. This is why panchromatic images have a higher spatial resolution than multi-spectral images. Note, for example, that multi-spectral images from space programmes like LANDSAT, SPOT or IRS have a high spectral resolution compared with the respective panchromatic images, but their spatial resolution is lower.
Image fusion techniques
Still, there is a wide range of remote sensing applications that require satellite images combining both features (high spatial and spectral resolution).
Even though some of today’s satellites, like QUICKBIRD and IKONOS, do deliver such high resolution images, they are costly and not always accessible to ordinary users. One way of getting images with high spatial and spectral resolution at fairly reasonable cost is to use image fusion techniques. What’s more, image fusion techniques can output very high resolution images if applied to data recorded by latest generation sensors. The goal of image fusion is to consistently integrate information from different images, assuring that the final fused image retains the key information from the source images.
There are now many optical image fusion methodologies and algorithms. The most common are based on a number of different transforms. Some, like the methodologies based on the Brovey transform, principal component analysis or the IHS transformation method (intensity, hue, saturation), are conceptually very simple.
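The Brovey transform illustrates how simple these classical methods are: each multi-spectral band is scaled by the ratio of the panchromatic image to the mean intensity of the multi-spectral bands, which injects spatial detail at every pixel. A minimal sketch, assuming the multi-spectral image has already been resampled to the panchromatic grid (function name and array layout are illustrative):

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-12):
    """Brovey-transform image fusion (illustrative sketch).

    `ms` is a (bands, H, W) multi-spectral image resampled to the
    panchromatic grid; `pan` is the (H, W) panchromatic image.
    Each band is multiplied by pan / intensity, where the intensity
    is the per-pixel mean of the multi-spectral bands.  This injects
    the panchromatic spatial detail, but it also rescales the band
    ratios, which is the source of the colour distortion discussed
    in the text.
    """
    intensity = ms.mean(axis=0)           # (H, W) per-pixel intensity
    return ms * (pan / (intensity + eps)) # broadcast over bands
```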
As demonstrated in many papers, however, the colour of the fused images output by these methodologies is quite distorted compared with the original multi-spectral images. This means that they cannot be used in a variety of routine tasks in the field of remote sensing, like image classification or change detection.
On the other hand, there are many methods based on multi-resolution analysis techniques, most of which use the discrete wavelet transform (DWT). Generally, these methods introduce only slight colour distortion in the fused images with respect to the original multi-spectral image. They outperform the techniques described above, but offer no way of controlling the trade-off between the spatial and spectral quality of the fused images.
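The basic substitutive scheme behind these methods can be sketched with a one-level Haar transform, hand-rolled here so the example is self-contained (real fusion systems use deeper decompositions and other wavelets; the function names are illustrative): the fused image keeps the multi-spectral approximation coefficients and takes the detail coefficients from the panchromatic image.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT (even-sized input assumed)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation (low-pass)
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def dwt_fuse(ms_band, pan):
    """Substitutive DWT fusion of one multi-spectral band: keep the
    multi-spectral approximation, inject the panchromatic detail."""
    ll_ms, _, _, _ = haar_dwt2(ms_band.astype(float))
    _, lh_p, hl_p, hh_p = haar_dwt2(pan.astype(float))
    return haar_idwt2(ll_ms, lh_p, hl_p, hh_p)
```

Because the approximation and detail coefficients come from different images, the scheme offers no dial for trading spatial detail against colour fidelity, which is exactly the limitation noted above.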
Remote Sensing Group techniques
The School of Computing’s Remote Sensing Group has proposed a weighted version of DWT-mediated fusion calculated using the à trous algorithm. This algorithm can efficiently separate the background information from the detail of an image, avoiding the decimation process characteristic of other algorithms used to calculate the DWT.
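The decimation-free property of the à trous ("with holes") algorithm can be seen in a short sketch: at each level the smoothing kernel is dilated by inserting zeros instead of downsampling the image, so every wavelet plane keeps the full image size and the planes plus the final residual sum back to the original exactly. This is a generic illustration of the algorithm with the usual B3-spline kernel, not the group's weighted fusion method itself:

```python
import numpy as np

def _convolve_dilated(img, kernel, step, axis):
    """1-D convolution along `axis` with a kernel dilated by `step`
    (zeros inserted between taps -- the 'holes' of a trous)."""
    half = (len(kernel) - 1) // 2
    pad = half * step
    padded = np.pad(img, [(pad, pad) if a == axis else (0, 0)
                          for a in range(img.ndim)], mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for k, w in enumerate(kernel):
        sl = [slice(None)] * img.ndim
        sl[axis] = slice(k * step, k * step + img.shape[axis])
        out += w * padded[tuple(sl)]
    return out

def atrous_decompose(img, levels=3):
    """Undecimated wavelet decomposition via the a trous algorithm.

    Returns `levels` detail planes plus a coarse residual, all the
    same size as the input; summing them reconstructs the input.
    """
    base = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3 spline
    current = img.astype(float)
    planes = []
    for j in range(levels):
        step = 2 ** j                      # kernel dilation per level
        smooth = current
        for axis in (0, 1):                # separable 2-D smoothing
            smooth = _convolve_dilated(smooth, base, step, axis)
        planes.append(current - smooth)    # detail (wavelet) plane
        current = smooth
    return planes, current                 # details + coarse residual
```

Because the planes are additive, a fusion method can weight the detail planes before recombining them, which is the kind of control the group's weighted scheme exploits.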
The Remote Sensing Group has also proposed other fusion techniques based on multidirection-multiresolution transforms (MDMR). These methods output better quality fused images than DWT-based techniques and, at the same time, have a built-in facility for controlling the trade-off between the images’ spatial quality and colour distortion.
With the aim of easing the use of the proposed algorithms already published in reputed international journals, the Remote Sensing Group set itself the goal of integrating the methods into a tool.
To do this, it set out to build the tools required to process multi-spectral images into public domain image processing software developed in Java: ImageJ.
As mentioned earlier, the modules that are now up and running are:
• Image read/write module with specific formats for multi-band images (any number of spectral bands)
• Module of generic utilities
• Module implementing the image transforms most commonly used in image fusion
• Module of multi-spectral and panchromatic image fusion algorithms, including those developed by the Remote Sensing Group
All these ImageJ add-ons have been packaged as IJFusion.