GEOL 315 Introduction to Remote Sensing and GIS

Summary

This document is a course outline for a geology course on remote sensing and geographic information systems. It introduces remote sensing concepts and methods, highlighting spatial data acquisition and different types of sensors commonly used in remote sensing.

Full Transcript


GEOL 315 - Introduction to Remote Sensing and GIS

Course Outline – Remote Sensing

Topic 1 – Introduction to remote sensing
Topic 2 – Electromagnetic energy and remote sensing
Topic 3 – Sensors and platforms
Topic 4 – Aerial cameras
Topic 5 – Multispectral scanners
Topic 6 – RADAR
Topic 7 – Remote sensing below the ground surface
Topic 8 – Radiometric aspects
Topic 9 – Geometric aspects
Topic 10 – Image enhancement and visualization
Topic 11 – Visual image interpretation
Topic 12 – Digital image classification

Introduction to Remote Sensing

Spatial Data Acquisition

All students (not only geology) deal with georeferenced data. They might be involved in collection of the data, analysis of the data or actually using the data for decision making. Data are representations that can be operated upon by a computer. Information is data that has been interpreted by human beings. The need for spatial data is illustrated by the following example: “A geologist is asked to map an area and to produce a geological map.” We at times deal with spatio-temporal phenomena, since time can be an important dimension.

Ground-Based and Remote Sensing Methods

In principle there are two main categories of spatial data acquisition:

1. Ground-based methods such as making field observations, taking in situ measurements and performing land surveying. Using ground-based methods, you operate in the real world environment (Fig. 1).

Fig. 1 The principle of a ground-based method: measurements and observations are performed in the real world.

2. Remote sensing methods are based on the use of image data acquired by a sensor such as aerial cameras, scanners or radar. A remote sensing approach means that information is derived from the image data, which forms a (limited) representation of the real world (Fig. 2).

Fig. 2 The principle of a remote sensing based method: measurements and analysis are performed on image data.

Remote Sensing Definitions

Two definitions of remote sensing are: “Remote sensing is the science of acquiring, processing and interpreting images that record the interaction between electromagnetic energy and matter” and “Remote sensing is the science and art of obtaining information about an object, area, or phenomenon through the analysis of data acquired by a device that is not in contact with the object, area or phenomenon under investigation”. Common to the two definitions is that data on characteristics of the Earth’s surface are acquired by a device that is not in contact with the objects being measured. The result is usually stored as image data. The characteristics measured by a sensor are the electromagnetic energy reflected or emitted by the Earth’s surface. This energy relates to some specific parts of the electromagnetic spectrum: usually visible light, but may also be infrared light or radio waves. Earth observation usually refers to spaceborne remote sensing. Airborne and spaceborne sensors contribute to aerospace surveying, which is the combined use of remote sensing and ground-based methods to collect information. Image data need to be processed to yield the required information about the objects or phenomena of interest. The analysis and information extraction or information production is part of the overall remote sensing process.

Application of Remote Sensing

Remote sensing provides image data - Image data acquired by RS relate to electromagnetic properties of the Earth which can be related to real world parameters or features.
Remote sensing requires ground data - Although remote sensing data can be interpreted and processed without other information, the best results are obtained by linking remote sensing measurements to ground (or surface) measurements and observations (Fig. 3). Remote sensing provides area covering data - remote sensing can cover large areas in much less time compared to ground surveys. E.g. Aeromagnetic and ground magnetic survey. Remote sensing provides surface information - in principle, remote sensing provides information about the upper few millimetres of the Earth’s surface. Some techniques, specifically in the microwave domain, relate to greater depth. The fact that measurements only refer to the surface is a limitation of remote sensing. Additional models or assumption are required to estimate subsurface characteristics. Fig. 3 In most situations, remote sensing based data acquisition is complemented by ground-based measurements and observations. Remote sensing is the only way to do it - A remote sensing approach is specifically suited for areas that are difficult to access. E.g. the Pacific Ocean is known for its unfavourable weather and ocean conditions. It is quite difficult to install and maintain a network of measuring buoys in this region. Remote sensing provides multipurpose image data - For example, sea surface temperature (SST) maps could be used years later to find, for example, a relation between SST and the algae blooms around pacific islands. SST maps are not only of interest to researchers but also to large fishing companies that want to guide their vessels to promising fishing grounds. Remote sensing is cost-effective - The validity of this statement is sometimes hard to assess, especially when dealing with spaceborne remote sensing. Consider an international scientific project; installations and 3 maintenance of buoys cost a lot of money whilst meteorological satellites have already been paid for and the data can be considered free. Remote Sensing Process and Topics Fig. 4 Relationship between the main topics and the remote sensing process Electromagnetic Energy and Remote Sensing Introduction Remote sensing relies on the measurement of electromagnetic (EM) energy. The most important source of EM energy at the Earth’s surface is the sun, which provides us, for example, with (visible) light, heat and UV-light, which can be harmful to our skin. Many sensors used in remote sensing measure reflected sunlight. Some sensors, however, detect energy emitted by the Earth itself or provide their own energy (Fig. 5). 4 Fig. 5 A remote sensing sensor measures reflected or emitted energy. An active sensor has its own source of energy A basic understanding of EM energy, its characteristics and its interactions is required to understand the principles of the remote sensor. This knowledge is also needed to interpret remote sensing data correctly. In between the remote sensor and the Earth’s surface is the atmosphere that influences the energy that travels from the earth’s surface to the sensor. Waves and Photons Electromagnetic (EM) energy can be modelled in two ways: by waves or energy bearing particles called photons. In the wave model, electromagnetic energy is considered to propagate through space in the form of sine waves characterised by two field, electrical (E) and magnetic (M) fields, which are perpendicular to each other. The vibration of both fields is perpendicular to the direction of travel of the wave (Fig. 6). 
Both fields propagate through space at the speed of light c, which is 299,790,000 m/s (approximately 3 x 10^8 m/s). The wavelength λ is defined as the distance between two successive wave crests (Fig. 6). Its unit is metres (m) or some factor of metres such as nanometres (nm, 10^-9 m) or micrometres (µm, 10^-6 m). The frequency, ν, is the number of cycles of a wave passing a fixed point over a specific period of time. It is normally measured in hertz (Hz), which is equivalent to one cycle per second.

Fig. 6 Electric (E) and magnetic (M) vectors of an electromagnetic wave.

Since the speed of light is constant, wavelength and frequency are inversely related to each other:

c = λ x ν

where c = speed of light (3 x 10^8 m/s), λ = wavelength (m), and ν = frequency (cycles per second, Hz). The shorter the wavelength, the higher the frequency and vice versa. The amount of energy held by a photon of a specific wavelength is then given by:

Q = h x ν = h x c / λ

where Q = energy of a photon (J) and h = Planck's constant (6.6262 x 10^-34 J s). From the equation, it follows that the longer the wavelength, the lower its energy content. Gamma rays (around 10^-9 m) are the most energetic, and radio waves (> 1 m) the least energetic.

Sources of EM Energy

All matter with a temperature above absolute zero (0 K; n °C corresponds to n + 273 K) radiates EM energy due to molecular agitation. This means that the Sun, and also the Earth, radiate energy in the form of waves. Matter that is capable of absorbing and re-emitting all EM energy is known as a blackbody; both the emissivity, ϵ, and the absorptance, α, are equal to (the maximum value of) 1. The emitting ability of a real material compared to that of a blackbody is referred to as the material’s emissivity. Most natural objects have emissivities less than one, meaning that only part, usually between 80–90 %, of the received energy is re-emitted. Part of the energy is absorbed. The amount of energy radiated by an object depends on its absolute temperature and its emissivity, and is a function of the wavelength. A higher temperature corresponds to a greater contribution of shorter wavelengths (Fig. 7).

Fig. 7 Blackbody radiation curves (with temperatures, T, in K)

Electromagnetic Spectrum

All matter with a certain temperature radiates electromagnetic waves of various wavelengths. The total range of wavelengths is commonly referred to as the electromagnetic spectrum (Fig. 8). It extends from gamma rays to radio waves. Remote sensing operates in several regions of the EM spectrum.

Fig. 8 The electromagnetic spectrum.

The optical part of the EM spectrum refers to that part of the EM spectrum in which optical laws can be applied. The optical range extends from X-rays (0.02 µm) through the visible part of the EM spectrum up to and including far-infrared (1000 µm). The ultraviolet (UV) portion of the spectrum has the shortest wavelengths that are of practical use for remote sensing. Some of the earth’s surface materials, primarily rocks and minerals, emit or fluoresce visible light when illuminated with UV radiation. The visible region of the spectrum is commonly called ‘light’ and occupies a relatively small portion of the EM spectrum. This is the only portion of the spectrum that we can associate with the concept of colour. Blue, green and red (BGR) are known as the primary colours or wavelengths of the visible spectrum. The longer wavelengths used for remote sensing are in the thermal IR and microwave regions.
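As a quick numerical check of the relations c = λ x ν and Q = h x c / λ above, the short Python sketch below computes the frequency and photon energy for a few representative wavelengths; the example wavelengths are illustrative choices, not values from the notes.

```python
# Frequency and photon energy from wavelength, using c = lambda * nu and Q = h * nu.
C = 2.9979e8       # speed of light (m/s)
H = 6.6262e-34     # Planck's constant (J s)

def frequency(wavelength_m):
    """Frequency (Hz) of EM radiation with the given wavelength (m)."""
    return C / wavelength_m

def photon_energy(wavelength_m):
    """Energy (J) of a single photon with the given wavelength (m)."""
    return H * frequency(wavelength_m)

# Illustrative wavelengths: blue light, thermal infrared, and a radar microwave.
for name, wl in [("blue light (0.45 um)", 0.45e-6),
                 ("thermal IR (10 um)", 10e-6),
                 ("microwave (5 cm)", 0.05)]:
    print(f"{name}: nu = {frequency(wl):.3e} Hz, Q = {photon_energy(wl):.3e} J")
```

The output confirms the statement in the text: the longer the wavelength, the lower the frequency and the photon energy.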
Thermal IR gives information about surface temperature (can be related to mineral composition of rocks or the condition of vegetation). Microwaves can provide information on surface roughness and the properties of the surface such as water content. Active and Passive Remote Sensing Passive remote sensing techniques employ natural sources of energy such as the sun. Active remote sensing techniques, e.g. radar and laser have their own source of energy. Active sensors emit a controlled beam of energy to the surface and measure the amount of energy reflected back to the sensor. Passive sensor system based on reflection of the sun’s energy can only work during daylight. Passive sensor systems that measure the longer wavelength related to the earth’s temperature do not depend on the sun as a source of illumination and can be operated at any time. Passive sensors need to deal with the varying illumination conditions of the sun, which are greatly influenced by atmospheric conditions. The main advantage of active sensor systems is that they can be operated day and night and have controlled illuminating signal. 7 Energy Interaction in the Atmosphere Before the sun’s energy reaches the Earth’s Surface, three fundamental interactions in the atmosphere are possible: absorption, transmission and scattering. The energy transmitted is then reflected or absorbed by the surface material (Fig. 9). Fig. 9 Energy interactions in the atmosphere and on land. Energy Interaction in the Atmosphere – Absorption and Transmission EM energy travelling through the atmosphere is partly absorbed by various molecules. The most efficient absorbers of solar radiation in the atmosphere are ozone (O3), water vapour (H2O) and carbon dioxide (CO2). Fig. 10 gives a schematic representation of the atmospheric transmission in the 0-22 µm wavelength region. Fig. 10 Atmospheric transmittance 8 About half of the spectrum between 0-22 µm is useless for remote sensing of the earth’s surface because none of the corresponding energy can penetrate the atmosphere. Only the wavelength regions outside the main absorption bands of the atmospheric gases can be used for remote sensing. These regions are referred to as the atmospheric transmission windows and include:  A window in the visible and reflected infrared region, between 0.4-2 µm. This is the window where the (optical) remote sensors operate.  Three windows in the thermal infrared region, namely two narrow windows around 3 and 5 µm, and a third, relatively broad, window extending from approximately 8 to 14 µm. Because of the presence of atmospheric moisture, strong absorption bands are found at longer wavelength. There is hardly any transmission of energy in the region from 22 µm to 1 mm. Energy Interaction in the Atmosphere – Atmospheric Scattering It occurs when the particles or gaseous molecules present in the atmosphere cause the EM waves to be redirected from their original path. The amount of scattering depends on several factors including the wavelength of the radiation, the amount of particles and gases, and the distance the radiation travels through the atmosphere. For the visible wavelength, 100 % (in the case of cloud cover) to 5 % (in the case of a clear atmosphere) of the energy received by the sensor is directly contributed by the atmosphere. Three types of scattering take place: Rayleigh scattering, Mie scattering and Non-selective scattering. 
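Before turning to the individual scattering types, the transmission windows listed above can be captured in a small lookup. This is only a rough sketch: the window limits follow the approximate values quoted in the text (0.4–2 µm, two narrow windows around 3 µm and 5 µm, and 8–14 µm), and the exact edges assumed here are illustrative rather than physically precise.

```python
# Rough lookup of the atmospheric transmission windows quoted in the notes.
# The window limits (in micrometres) are approximate, assumed values.
WINDOWS = [
    (0.4, 2.0, "visible / reflected infrared window (optical sensors)"),
    (3.0, 4.0, "narrow thermal infrared window around 3 um"),
    (4.5, 5.0, "narrow thermal infrared window around 5 um"),
    (8.0, 14.0, "broad thermal infrared window"),
]

def transmission_window(wavelength_um):
    """Return the window a wavelength (um) falls in, or None if it is mainly absorbed."""
    for lower, upper, label in WINDOWS:
        if lower <= wavelength_um <= upper:
            return label
    return None

for wl in (0.55, 2.7, 10.0, 25.0):
    window = transmission_window(wl)
    print(f"{wl} um:", window if window else "mainly absorbed by the atmosphere")
```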
Atmospheric Scattering - Rayleigh Scattering

This predominates where electromagnetic radiation interacts with particles that are smaller than the wavelength of the incoming light, e.g. tiny specks of dust and nitrogen (N2) and oxygen (O2) molecules. The effect of Rayleigh scattering is inversely proportional to the fourth power of the wavelength: shorter wavelengths are scattered much more than longer wavelengths (Fig. 11).

Fig. 11 Rayleigh scattering is caused by particles smaller than the wavelength and is maximal for small wavelengths.

In the absence of particles and scattering, the sky would appear black. Rayleigh scattering causes a clear sky to be observed as blue because blue is the shortest wavelength the human eye can observe and is therefore scattered most. At sunrise and sunset, however, the sun rays travel a longer distance through the earth’s atmosphere before they reach the surface. All shorter wavelengths are scattered after some distance and only the longer wavelengths reach the earth’s surface, so the sky appears orange or red (Fig. 12).

Fig. 12 Rayleigh scattering causes us to perceive a blue sky during day time and a red sky at sunset.

Rayleigh scattering is the most important type of scattering in the context of remote sensing. It causes a distortion of the spectral characteristics of the reflected light when compared to measurements taken on the ground: shorter wavelengths are overestimated. In general, Rayleigh scattering diminishes the ‘contrast’ in photos and thus has a negative effect on the possibilities for interpretation.

Atmospheric Scattering - Mie Scattering

It occurs when the wavelength of the incoming radiation is similar in size to the atmospheric particles. The most important cause of Mie scattering is aerosols: a mixture of gases, water vapour and dust. Mie scattering is generally restricted to the lower atmosphere, where larger particles are more abundant, and dominates under overcast cloud conditions. Mie scattering influences the entire spectral region from the near-ultraviolet up to and including the near-infrared.

Atmospheric Scattering - Non-selective Scattering

It occurs when the particle size is much larger than the radiation wavelength. Typical particles responsible for this effect are water droplets and larger dust particles. Non-selective scattering is independent of wavelength, with all wavelengths scattered about equally. The most prominent example of non-selective scattering is the effect of clouds (clouds consist of water droplets). Since all wavelengths are scattered equally, a cloud appears white. Optical remote sensing cannot penetrate clouds. Clouds also have a secondary effect: shadowed regions on the earth’s surface (Fig. 13).

Fig. 13 Direct and indirect effects of clouds in optical remote sensing

Energy Interaction in the Earth’s Surface

In land and water applications of remote sensing we are most interested in the reflected radiation because this tells us something about surface characteristics. Reflection occurs when radiation ‘bounces’ off the target and is then redirected. Absorption occurs when radiation is absorbed by the target. Transmission occurs when radiation passes through a target. Two types of reflection, which represent the two extremes of the way in which energy is reflected from a target, are specular reflection and diffuse reflection (Fig. 14). In the real world, usually a combination of both types is found.

Fig. 14 Schematic diagrams showing (a) specular and (b) diffuse reflection.
Energy Interaction in the Earth’s Surface - Specular Reflection 11 Specular reflection or mirror-like reflection, typically occurs when a surface is smooth and all (or almost all) of the energy is directed away from the surface in a single direction. It is most likely to occur when the sun is high in the sky. Specular reflection can be caused, for example, by a water surface or a glass house. It results in a very bright spot (also called hotspot) in the image. Energy Interaction in the Earth’s Surface - Diffuse Reflection It occurs in situations where the surface is rough and the energy is reflected almost uniformly in all directions. Whether a particular target reflects specularly or diffusely, or somewhere in between, depends on the surface roughness of the feature in comparison to the wavelength of the incoming radiation. Energy Interaction in the Earth’s Surface – Spectral Reflectance Curves We can establish for each material type of interest a reflectance curve. Such a curve shows the portion of the incident energy that is reflected as a function of wave length (expressed as percentage). Reflectance curves are made for the optical part of the electromagnetic spectrum (up to 2.5 µm). Today, large efforts are made to store collections of typical curves in spectral libraries. Reflectance measurements can be carried out in a laboratory or in the field using a field spectrometer. Spectral Reflectance Curves - Vegetation The reflectance characteristics of vegetation depend on the properties of the leaves including the orientation and the structure of the leaf canopy. The proportion of the radiation reflected in the different parts of the spectrum depends on the leaf pigmentation, leaf thickness and composition (cell structure) and on the amount of water in the leaf tissue. Spectral Reflectance Curves - Bare Soil Surface reflectance from bare soil is dependent on so many factors that it is difficult to give one typical soil reflectance curve. However, the main factors influencing the soil reflectance are soil colour, moisture content, the presence of carbonates, and iron oxide content. Spectral Reflectance Curves - Water 12 Compared to vegetation and soils, water has low reflectance. Vegetation may reflect up to 50 %, soils, up to 30 – 40 % while water reflects at the most 10 % of the incoming radiation. Water reflects EM energy in the visible up to the near IR. Beyond 1200 nm all energy is absorbed. Sensors and Platforms Introduction The measurement of EM energy are made by sensors that are attached to static or moving platforms. Different types of sensors have been developed for different applications. Aircraft and satellites are generally used to carry one or more sensors. The sensor-platform combination determines the characteristics of the resulting image data. When a particular sensor is operated from a higher altitude, the total area imaged is increased while the level of detail that can be obtained is reduced. Based on your information needs and on time and budgetary criteria, you can determine which image data are most appropriate. Fig. 15 gives an overview of some types of sensors. Fig. 15 Overview of some sensors. Passive Sensors – Gamma-ray Spectrometer It measures the amount of gamma rays emitted by the upper soil or rock layers due to radioactive decay. The energy measured in specific wavelength bands provides information on the abundance of (radio isotopes that relate to) specific minerals. The main application is found in mineral exploration. 
Gamma rays have a very short wavelength on the order of picometers (pm). Because of large atmospheric absorption of these waves, this type of energy can only be measured up to a few hundred meters above the earth’s surface. Passive Sensors – Multispectral Scanner 13 It mainly measures the reflected sunlight in the optical domain. A scanner systematically ‘scans’ the earth’s surface thereby measuring the energy reflected from the viewed area. This is done simultaneously for several wavelength bands (multispectral). A wave length band is an interval of the EM spectrum for which the average reflected energy is measured. Each band is related to specific characteristic of the earth’s surface. Reflection characteristics of ‘blue’ light give information about the mineral composition; reflection characteristics of ‘infrared light’ tell something about the type and health of vegetation. The definition of the wavebands of a scanner depends on the applications for which the sensor has been designed. Passive Sensors – Thermal Scanner Thermal scanners measure thermal data in the range of 10–14 µm. Wavelength in this range are directly related to an objects temperature. Data on cloud, land and sea surface temperature are extremely useful for weather forecasting. Most remote sensing systems designed for meteorology include a thermal scanner. Used to study the effects of drought (water stress) on agricultural crop, and to monitor the temperature of cooling water discharged from thermal power plants. Another application is in the detection of coal fire. Active Sensors – Laser Scanner Laser scanners are mounted on aircrafts and use a laser beam (infrared light) to measure the distance from the aircraft to points located on the ground. This distance measurement is then combined with exact information on the aircraft’s position to calculate the terrain elevation. Laser scanning is mainly used to produce detailed, high-resolution, digital Terrain Model (DTM) for topographic mapping. Laser scanning is increasingly used for other purposes such as the production of detailed 3D models of city buildings and for measuring tree heights in forestry. Active Sensors – Radar Altimeter They are used to measure the topographic profile parallel to the satellite orbit. They provide profiles (single lines of measurements) rather than image data. They operate in the 1-6 cm domain and are able to determine height with a precision of 2-4 cm. They are useful for measuring relatively smooth surfaces such as oceans and for small scale mapping of continental terrain models. Platforms - Introduction Remote sensing sensors are attached to moving platforms such as aircrafts and satellites. Static platforms are occasionally used in an experimental context. E.g. by using a multispectral sensor mounted on a pole, the 14 changing reflectance characteristics of a specific crop during the day or season can be assessed. Airborne observations are carried out using aircrafts with specific modifications to carry sensors. An aircraft that carries an aerial camera or a scanner needs a hole in the floor on the aircraft. Sometimes Ultra-Light Vehicles (ULV’s), balloons, airship or kites are used for airborne remote sensing. Airborne observations are possible from 100 m up to 30-40 km height. The availability of satellite navigation technology has significantly improved the quality of flight execution. For spaceborne remote sensing, satellites are used. Satellites are launched into space with rockets. 
Satellites for earth observation are positioned in orbits between 150-36000 km altitudes. The specific orbit depends on the objectives of the mission, e.g., continuous observation of large areas or detailed observation of smaller areas. Platforms – Airborne Remote Sensing The speed of the aircraft can vary between 140-600 km/hour and is amongst others related to the mounted sensor system. Apart from the altitude, the aircraft orientation also affects the geometric characteristics of the remote sensing data acquired. The orientation of the aircraft is influenced by wind conditions and can be corrected for to some extent by the pilot. Three different aircraft rotations relative to a reference path are possible: roll, pitch and yaw (Fig. 16). An Inertial Measurement Unit (IMU) can be installed in the aircraft to measure these rotations. The measurements can be used to correct the sensor data for the resulting geometric distortions. Today, aircraft are equipped with satellite navigation technology which yield the approximate position. More precise positioning and navigation (up to decimetre accuracy) are now possible. In aerial photography, the measurements are stored on hard-copy material: the negative film. For other sensors, e.g. a scanner, the digital data can be stored on tape or mass memory devices. Tape recorders offer the fastest way to store the vast amount of data. The recorded data are only available after the aircraft has returned to base. There is an increasing trend towards contracting specialized private aerial survey companies. Still this requires basic understanding of the process involved. A sample contract for outsourcing aerial photography is provided by the American Society of Photogrammetry and Remote Sensing at their ASPRS web site. 15 Fig. 16 Attitude angles and IMU attached to an aerial camera Zeiss RMK-TOP. Platforms – Spaceborne Remote Sensing Spaceborne remote sensing is carried out using sensors that are mounted on satellites. The monitoring capabilities of a sensor are to a large extent determined by the parameters of the satellite’s orbit. Different types of orbits are required to achieve continuous monitoring (meteorology), global mapping (land cover mapping) or selective mapping (urban areas). For remote sensing purposes, the following orbit characteristics are relevant; altitude, inclination angle, period, repeat cycle. Altitude It is the distance (in km) from the satellite to the mean surface level of the earth. Typically remote sensing satellites orbit either at 600 - 800 km (polar orbit) or at 36,000 km (geo-stationary orbit) distance from the earth. The distance influences to a large extent which area is viewed and at which detail. Inclination angle It is the angle (in degrees) between the orbit and the equator. The inclination angle of the orbit determines together with the field of view of the sensor, which latitudes can be observed. If the inclination is 60o then the satellite flies over the earth between the latitudes 60o south and 60o north; it cannot observe parts of the earth at latitudes above 60o. Period 16 It is the time (in minutes) required to complete one full orbit. A polar satellite orbits at 800 km altitude and has a period of 90 minutes with a ground speed of 28,000 km/hour (8 km/s). Compare this figure with the speed of an aircraft, which is around 400 km/hour. The speed of the platform has implications for the type of images that can be acquired (time for exposure). 
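The period and ground-speed figures just quoted can be checked with the standard circular-orbit relation T = 2π·sqrt(a^3/μ). This relation is not given in the notes; the sketch below uses textbook Earth constants and is only approximate.

```python
import math

# Rough circular-orbit calculation for a polar remote sensing satellite.
# Constants are standard textbook values, not taken from the course notes.
MU_EARTH = 3.986e14      # Earth's gravitational parameter (m^3/s^2)
R_EARTH = 6.371e6        # mean Earth radius (m)

def orbit_stats(altitude_km):
    """Return (period in minutes, orbital speed in km/h) for a circular orbit."""
    a = R_EARTH + altitude_km * 1000.0                       # orbit radius (m)
    period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)      # Kepler's third law
    speed_kmh = (2 * math.pi * a / period_s) * 3.6           # orbital speed
    return period_s / 60.0, speed_kmh

period_min, speed_kmh = orbit_stats(800)
print(f"800 km orbit: period ~{period_min:.0f} min, speed ~{speed_kmh:.0f} km/h")
# Gives roughly 100 min and 27,000 km/h, the same order of magnitude as the
# ~90 min and ~28,000 km/h quoted in the notes.
```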
Repeat cycle

It is the time in days between two successive identical orbits. The revisit time, the time between two subsequent images of the same area, is determined by the repeat cycle together with the pointing capability of the sensor. Pointing capability refers to the possibility of the sensor to look sideways.

The following orbit types are most common for remote sensing missions:

Polar, or near-polar, orbit - These are orbits with an inclination angle between 80 and 100 degrees and enable observation of the whole globe. The satellite is typically placed in orbit at 600–800 km altitude.

Sun-synchronous orbit - The orbit is chosen in such a way that the satellite always passes overhead at the same local solar time. Most sun-synchronous orbits cross the equator at mid-morning (around 10:30 h). At that moment, the sun angle is low and the resultant shadows reveal terrain relief. Sun-synchronous orbits allow a satellite to record images at two fixed times during one 24-hour period: one during the day and one at night.

Geostationary orbit - This refers to orbits in which the satellite is placed above the equator (inclination angle is 0°) at a distance of some 36,000 km. At this distance, the period of the satellite is equal to the period of the earth. The result is that the satellite is at a fixed position relative to the earth. Geostationary orbits are used for meteorological and telecommunications satellites.

The data of spaceborne sensors need to be sent to the ground for further analysis and processing. Practically all earth observation satellites apply satellite communication technology for downlink of the data. The acquired data are sent to a receiving station or to another communication satellite that downlinks the data to receiving antennae on the ground. If the satellite is outside the range of a receiving station, the data can be temporarily stored by a tape recorder in the satellite and transmitted later.

Image Data Characteristics

Remote sensing image data are more than a picture – they are measurements of EM energy stored in a regular grid format (rows and columns). The single elements are called pixels, an abbreviation of ‘picture elements’. For each pixel, the measurements are stored as Digital Numbers or DN-values. Typically, for each wavelength band measured, a separate layer is stored (Fig. 17).

Fig. 17 An image file comprises a number of bands. The DN values corresponding to the measurements are stored in a row-column system.

The quality of image data is primarily determined by characteristics of the sensor-platform system. These sensor-platform characteristics are usually referred to as:

- Spectral resolution and radiometric resolution; these refer respectively to the part of the EM spectrum measured and the differences in energy that can be observed.
- Spatial resolution; refers to the smallest unit-area measured. It indicates the minimum size of the objects that can be detected.
- Revisit time; the time between two successive image acquisitions over the same location on Earth.

The characteristics of image data are:

- Image size; the number of rows and columns in a scene.
- Number of bands; the number of wavelength bands stored, e.g. 1 (black/white photography), 4 (SPOT multispectral image) or 256 (imaging spectroscopy data).
- Quantization; the data format used to store the energy measurements. Typically, for each measurement, one byte (8 bits) is used, which represents the (discrete) values of 0-255. These values are known as Digital Numbers.
Using sensor-specific calibration parameters, the DN-values can be converted into measured energy (Watt).

- Ground pixel size; the area coverage of a pixel on the ground. Usually this is a round figure (e.g., 20 m or 30 m). The ground pixel size of image data is related to, but not necessarily the same as, the spatial resolution.

The image size, the number of bands and the quantization allow the disk space required to be calculated. E.g., a SPOT multispectral image requires 3000 (columns) x 3000 (rows) x 4 (bands) x 1 byte = 36 Mbyte of storage.

Aerial Cameras

Introduction

Aerial photography is the oldest, yet most commonly applied, remote sensing technique. Photogrammetry is the science and technique of making measurements from photos or image data. Nowadays, almost all topographic maps are based on aerial photographs. Aerial photographs also provide the accurate data required for many cadastral surveys and civil engineering projects. Two broad categories of aerial photography can be distinguished: vertical photography and oblique photography (Fig. 18). In most mapping applications, vertical photography is required. Vertical aerial photography is produced with a camera mounted in the floor of an aircraft. The resulting image is rather similar to a map and has a scale that is approximately constant throughout the image area. Usually, vertical aerial photography is taken in stereo; successive photos have a degree of overlap to enable stereo-interpretation and stereo measurements. Oblique photographs are obtained when the axis of the camera is not vertical. Oblique photographs can be made using a hand-held camera and shooting through the (open) window of an aircraft. The scale of an oblique photo varies from the foreground to the background. This scale variation complicates the measurement of positions from the image and for this reason, oblique photographs are rarely used for mapping purposes. Nevertheless, oblique images can be useful for purposes of viewing sides of buildings and for inventories of wildlife.

Fig. 18 Vertical (a) and oblique (b) photography

Aerial Camera

A camera used for vertical aerial photography for mapping purposes is called an aerial survey camera. At present there are only two major manufacturers of aerial survey cameras, namely Zeiss and Leica. These two companies produce the RMK-TOP and the RC-30 respectively. Just like a typical hand-held camera, the aerial survey camera contains a number of common components as well as a number of specialised ones necessary for its specific role. The large size of the camera results from the need to acquire images of large areas with a high spatial resolution. This is realized by using a large film size. Modern aerial survey cameras produce negatives measuring 23 cm x 23 cm. Up to 600 photographs may be recorded on a single roll of film.

Film Magazine and Auxiliary Data

The aerial camera is fitted with a system to record various items of relevant information onto the side of the negative: mission identifier, date and time, flying height and frame number (Fig. 19). A vacuum plate is used for flattening the film at the instant of exposure. So-called fiducial marks are recorded in all corners of the film. The fiducial marks are required to determine the optical centre of the photo, needed to align photos for stereo viewing. The fiducials are also used to record the precise position of the film in relation to the optical system, which is required in photogrammetric processes.

Fig. 19 Auxiliary data annotation on an aerial photograph
Spectral and Radiometric Characteristics

Photographic recording is a multi-stage process that involves film exposure and chemical processing (development). It is usually followed by printing. Photographic film comprises a light-sensitive emulsion layer coated onto a base material (Fig. 20). The emulsion layer contains silver halide crystals or grains suspended in gelatine. The emulsion is supported on a stable polyester base. Light changes the silver halide into silver metal, which after processing appears black on the film. The exposed film, before processing, contains a latent image. The film emulsion type applied determines the spectral and radiometric characteristics of the photograph. Two terms are important in this context: general sensitivity and spectral sensitivity.

Fig. 20 Section of a photographic film showing the emulsion layer with the silver halide crystals.

Spectral and Radiometric Characteristics - General Sensitivity

It is a measure of how much light energy is required to bring about a certain change in film density. Given specific illumination conditions, the general sensitivity of a film can be selected, for example to minimise exposure time. The energy of a light photon is inversely proportional to the light wavelength. In the visible range, therefore, blue light has the highest energy. For a normal silver halide grain, only blue light photons have sufficient energy to form the latent image and hence a raw emulsion is only sensitive to blue light. The sensitivity of a film can be increased by increasing the mean grain size of the silver grains: larger grains produce more metallic silver per input light photon. The mean grain size of aerial films is in the order of a few µm. There is a problem related to increasing the grain size: larger grains are unable to record small details, i.e. the spatial resolution is decreased. The other technique to improve the general sensitivity of a film is to perform a sensitization of the emulsion by adding small quantities of chemicals, such as gold or sulphur. General sensitivity is often referred to as film speed. For a scene of a given average brightness, the higher the film speed, the shorter the exposure time required to record the optical image on the film. Similarly, the higher the film speed, the less bright an object needs to be in order to be recorded upon the film.

Spectral and Radiometric Characteristics - Spectral Sensitivity

It describes the range of wavelengths to which the emulsion is sensitive. For the study of vegetation the near-infrared wavelengths yield much information and should be recorded; for other purposes a standard colour photograph normally can yield the optimal basis for interpretation. Sensitization techniques are used not only to increase the general sensitivity but also to produce films that are sensitive to longer wavelengths. By adding sensitizing dyes to the basic silver halide emulsion, the energy of longer light wavelengths becomes sufficient to produce latent images. In this way, a monochrome film can be made sensitive to green, red or infrared wavelengths. A black-and-white (monochrome) type of film has one emulsion layer. Using sensitization techniques, different types of monochrome films are available. Most common are panchromatic and infrared-sensitive film.
Colour photography uses an emulsion with three sensitive layers to record three wavelength bands corresponding to the three primary colours of the spectrum, i.e. blue, green and red. There are two types of colour photography: true colour and false colour infrared.

Spectral and Radiometric Characteristics - Scanning

Classical photogrammetric techniques as well as visual photo-interpretation generally employ hard-copy photographic images. These can be the original negatives, positive prints or diapositives. Digital photogrammetric systems, as well as geographic information systems, require digital photographic images. A scanner is used to convert a film or print into a digital form. The scanner samples the image with an optical detector and measures the brightness of small areas (pixels). The brightness values are then represented as a digital number (DN) on a given scale. In the case of a monochrome image, a single measurement is made for each pixel area. In the case of a coloured image, separate red, green and blue values are measured. For simple visualization purposes, a standard office scanner can be used; but high metric quality scanners are required if the digital photos are to be used in precise photogrammetric procedures. In the scanning process, the setting of the size of the scanning aperture is most relevant. This is also referred to as the scanning density and is expressed in dots per inch (dpi; 1 inch = 2.54 cm). The dpi setting depends on the detail required for the application and is usually limited by the scanner. Office scanners permit around 600 dpi (43 µm) whilst photogrammetric scanners may produce 3600 dpi (7 µm).

Spatial Characteristics - Introduction

Two important properties of an aerial photograph are scale and spatial resolution. These properties are determined by sensor (lens cone and film) and platform (flying height) characteristics. Lens cones are produced with different focal lengths. Focal length is the most important property of a lens cone since, together with flying height, it determines the photo scale. The focal length also determines the angle of view of the camera. The longer the focal length, the narrower the angle of view. The 152 mm lens is the most commonly used lens.

Spatial Characteristics - Scale

The relationship between the photo scale factor, s, flying height, H, and lens focal length, f, is given by:

s = H / f

Hence, the same scale can be achieved with different combinations of focal length and flying height. If the focal length of a lens is decreased whilst the flying height remains constant, then:

a) The image scale factor will increase and the size of the individual details in the image becomes smaller (Fig. 21). In the example shown in Fig. 21, using a 150 mm and a 300 mm lens at H = 2000 m results in scale factors of 13,333 and 6,667 respectively.

Fig. 21 Effects of a different focal length at the same flying height on the ground coverage.

b) The ground coverage increases. A 23 cm negative covers a length (and width) of respectively 3066 m and 1533 m using a 150 mm and 300 mm lens (Fig. 21). This has implications for the number of photos required for the mapping of a certain area, which in turn affects the subsequent processing (in terms of labour) of the photos.

c) The angular field of view increases and the image perspective changes. The total field of view in situations A and B respectively is 74 degrees and 41 degrees.
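The figures above can be reproduced with a few lines of Python. This is a minimal sketch under the values stated in the text (23 cm film side, H = 2000 m); the angle-of-view formula used here, 2·arctan((d/2)/f) across the film side, is standard lens geometry rather than something spelled out in the notes.

```python
import math

FILM_SIDE_M = 0.23   # negative size of an aerial survey camera (23 cm x 23 cm)

def photo_geometry(flying_height_m, focal_length_m):
    """Scale factor s = H/f, ground coverage of one photo, and angle of view."""
    s = flying_height_m / focal_length_m            # photo scale factor (1 : s)
    coverage_m = FILM_SIDE_M * s                    # ground length covered by one side
    fov_deg = 2 * math.degrees(math.atan((FILM_SIDE_M / 2) / focal_length_m))
    return s, coverage_m, fov_deg

for f in (0.150, 0.300):                            # 150 mm and 300 mm lenses
    s, cov, fov = photo_geometry(2000, f)
    print(f"f = {f*1000:.0f} mm: scale 1:{s:,.0f}, coverage {cov:.0f} m, "
          f"angle of view {fov:.0f} deg")
# Output is close to the figures in the text: scale factors of about 13,333 and
# 6,667, coverage of about 3067 m and 1533 m, and angles of view near 75 and 42
# degrees (the notes quote 74 and 41 degrees).
```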
When wide angle photography is used in mapping, the measurement of height information (z dimension) in a stereoscopic model is more accurate than when long focal length lenses are used. The combination of a low flying height with a wide-angle lens can be problematic when there are large terrain differences or high man-made objects in the scene. Some areas may become hidden behind taller objects. This phenomenon is called the dead ground effect. Spatial Characteristics - Spatial Resolution Spatial resolution refers to the ability to record small adjacent objects in an image. The better the resolution of a recording system, the more easily the structure of the objects on the ground can be viewed in the image. The spatial resolution of an aerial photograph depends on: a) The image scale factor: spatial resolution decreases as the scale factor increases; 24 b) The quality of the optical system: expensive high quality aerial lenses give much better performance than the inexpensive lenses on amateur cameras; c) The grain structure of the photographic film – the larger the grains, the poorer the resolution; d) The contrast of the original objects - the higher the target contrast, the better the resolution; e) Atmospheric scattering effects – this leads to loss of contrast and resolution; f) Image motion – the relative motion between the camera and the ground causes blurring and loss of resolution. The most variable factor is the atmospheric condition, which can change from mission to mission and, during a mission. Aerial Photography Missions - Mission Planning When a mapping project requires aerial photographs, one of the first tasks is to select the required photo scale factor, the type of lens to be used, the type of film to be used and the required percentage of overlap. Forward overlap usually is around 60 %; sideways overlap typically is around 20 %. Fig. 22 shows a survey area that is covered by a number of flight lines. The date and time of acquisition should be considered with respect to growing season, light conditions and shadowing effects. Fig. 22 Example survey area for aerial photography. Note the forward and sideways overlap of the photographs If the required scale is defined the following parameters can be determined: The flying height required above the terrain; 25 The ground coverage of a single photograph The number of photos required along a flight line; The number of flight lines required. After completion of the necessary calculations, mission maps are prepared for use by the survey navigator in flight. Aerial Photography Missions - Mission Execution During the execution of the mission, the navigator alternatively observes the terrain (through a navigation sight) and the flight map and advises small corrections to the pilot. The two main corrections required are:  Aircraft heading to compensate for the effect of wind. The wind vector continuously changes in direction and magnitude. The navigator needs to monitor the track of the aircraft and apply small corrections in heading to ensure that the aircraft stays accurately on the required flight line.  Aircraft speed or exposure interval to maintain the required forward overlap. The camera viewfinder or navigation sight employs a system to regulate the overlap by reference to the ground. By keeping a series of moving lines in the viewfinder synchronised with ground features, the forward overlap is maintained. 
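Once the photo scale, film format and overlaps are chosen, the mission parameters listed above follow from simple geometry. The sketch below is a minimal planning calculator under common assumptions (23 cm format, 60 % forward and 20 % sideways overlap, as in the text); the 10 km x 6 km block size, the rounding-up rule and the extra photo added at each end of a line are my own illustrative choices, not requirements from the notes.

```python
import math

def mission_plan(scale_factor, focal_length_m, area_length_m, area_width_m,
                 film_side_m=0.23, forward_overlap=0.60, side_overlap=0.20):
    """Rough flight-planning figures for a block of vertical aerial photography."""
    flying_height = scale_factor * focal_length_m          # H = s * f
    coverage = film_side_m * scale_factor                   # ground size of one photo
    photo_base = coverage * (1 - forward_overlap)           # spacing along a flight line
    line_spacing = coverage * (1 - side_overlap)            # spacing between flight lines
    photos_per_line = math.ceil(area_length_m / photo_base) + 1
    flight_lines = math.ceil(area_width_m / line_spacing) + 1
    return flying_height, coverage, photos_per_line, flight_lines

H, cov, n_photos, n_lines = mission_plan(13333, 0.150, 10000, 6000)
print(f"Flying height ~{H:.0f} m, coverage ~{cov:.0f} m per photo")
print(f"{n_photos} photos per line x {n_lines} flight lines")
```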
Radiometric Aspects

Introduction

The previous lectures have examined remote sensing as a means of producing image data for a variety of purposes. The lectures now will deal with processing of the image data for rectification, visualization and interpretation. The first step in the processing chain, often referred to as pre-processing, involves radiometric and geometric corrections. Two groups of radiometric corrections are identified:

- the cosmetic rectification to compensate for data errors, and
- the atmospheric corrections to compensate for the effect of atmospheric and illumination parameters, such as haze, sun angle and skylight, on the image data.

Cosmetic Corrections

They involve all those operations that are aimed at correcting visible errors and noise in the image data. Defects in the data may be in the form of periodic or random missing lines (line dropouts), line striping, and random or spike noise. These effects can be identified visually and automatically.

Cosmetic Corrections - Periodic Line Dropouts

Periodic line dropouts occur due to recording problems when one of the detectors of the sensor in question either gives wrong data or stops functioning. The Landsat Thematic Mapper, for example, has 16 detectors in all its bands except the thermal band. A loss of one of these detectors would result in every sixteenth scan line being a string of zeros that would plot as a black line on the image (Fig. 27).

Fig. 27 An image with line dropouts (a) and the DN-values (b)

The first step in the restoration process is to calculate the average DN-value per scan line for the entire scene. The average DN-value for each scan line is then compared with the scene average; any scan line deviating from the average by more than a designated threshold value is identified as defective. The next step is to replace the defective lines: for each pixel in a defective line, an average DN is calculated using the DNs for the corresponding pixel in the preceding and succeeding scan lines (Fig. 28), and this average is substituted for the defective pixel. The resulting image is a major improvement, although every sixteenth scan line consists of artificial data.

Fig. 28 The image after correction (a) and the DN-values (b)

Cosmetic Corrections - Line Striping

Line striping is far more common than line dropouts are. Line striping often occurs due to non-identical detector response. Although the detectors for all satellite sensors are carefully calibrated and matched before the launch of the satellite, with time the response of some detectors may drift to higher or lower levels. As a result, every scan line recorded by that detector is brighter or darker than the other lines (Fig. 29). It is important to understand that valid data are present in the defective lines, but these must be corrected to match the overall scene. The most popular correction is histogram matching. Separate histograms corresponding to each detector unit are constructed and matched.

Fig. 29 The image with line striping (a) and the DN-values (b). Note that the destriped image would look almost the same as the original image.

Taking one response as standard, the gain (rate of increase of DN) and offset (relative shift of mean) for all other detector units are suitably adjusted and new DN-values are computed and assigned. This yields a destriped image in which all DN-values conform to the reference level and scale.
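A minimal numpy sketch of the line dropout replacement just described: defective lines are found by comparing each line mean with the scene mean, and are then replaced by the average of the neighbouring lines. The threshold of 50 DN and the small synthetic test image are illustrative choices, not values from the notes.

```python
import numpy as np

def repair_line_dropouts(img, threshold=50):
    """Replace scan lines whose mean DN deviates strongly from the scene mean
    with the average of the preceding and succeeding lines."""
    img = img.astype(float).copy()
    scene_mean = img.mean()
    line_means = img.mean(axis=1)
    defective = np.abs(line_means - scene_mean) > threshold
    for row in np.where(defective)[0]:
        above = img[row - 1] if row > 0 else img[row + 1]
        below = img[row + 1] if row < img.shape[0] - 1 else img[row - 1]
        img[row] = (above + below) / 2.0          # interpolate the missing line
    return img

# Synthetic 6-line image with two dropped lines (all zeros).
rng = np.random.default_rng(0)
image = rng.integers(90, 110, size=(6, 8)).astype(float)
image[2] = 0
image[5] = 0
print(repair_line_dropouts(image))
```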
Cosmetic Corrections - Random Noise or Spike Noise

Random noise or spike noise may be due to errors during transmission of data or to a temporary disturbance. Here, individual pixels acquire DN-values that are much higher or lower than the surrounding pixels (Fig. 30).

Fig. 30 The image with spike errors (a) and the DN-values (b).

In the image, these pixels produce bright and dark spots that interfere with information extraction procedures. Spike noise can be detected by mutually comparing neighbouring pixel values. If neighbouring pixel values differ by more than a specific threshold margin, the pixel is designated as spike noise and its DN is replaced by an interpolated DN-value.

Atmospheric Corrections

All reflected and emitted radiation leaving the earth’s surface is attenuated, mainly due to absorption and scattering by the constituents of the atmosphere. The atmosphere-induced distortions occur twice in the case of reflected sunlight and once in the case of emitted radiation. These distortions are wavelength dependent. Their effect on remote sensing data can be reduced by applying ‘atmospheric correction’ techniques. These corrections are related to the influence of haze, sun angle and skylight.

Atmospheric Corrections - Haze Corrections

Light scattered by the atmospheric constituents that reaches the sensor constitutes the ‘haze’ in remote sensing image data. Haze has an additive effect, resulting in higher DN-values and a decrease in the overall contrast of the image data. The effect is wavelength dependent, being more pronounced in the shorter wavelength range and negligible in the infrared. Haze corrections are based on the assumption that the infrared bands are essentially free of atmospheric effects and that in these bands black bodies, such as large clear water bodies and shallow zones, will have zero DN-values. The DN-values in other bands for the corresponding pixels can be attributed to haze and should be subtracted from all pixels in the corresponding band.

Atmospheric Corrections - Sun Angle Correction

The position of the sun relative to the earth changes depending on the time of day and the day of the year. In the northern hemisphere, the solar elevation angle is smaller in winter than in summer. As a result, image data of different seasons are acquired under different solar illumination. Sun angle correction becomes more important when one wants to generate mosaics of images taken at different times or perform change detection studies. An absolute correction involves dividing the DN-values in the image data by the sine of the solar elevation angle (the size of the angle is given in the header of the image data) as per the following formula:

DN' = DN / sin(α)

where DN is the input pixel value, DN' is the output pixel value, and α is the solar elevation angle. Note that since the angle is smaller than 90°, the sine will be smaller than 1 and DN' will be larger than DN.

Atmospheric Corrections - Skylight Correction

Scattered light reaching the sensor after being reflected from the Earth’s surface constitutes the skylight or sky irradiance. This also causes reduced contrast in image data. Correcting for this effect requires additional information that cannot be extracted from the image grid data itself. This information (e.g., aerosol distribution, gas composition) is difficult to obtain and needs to be later input into a numerical model.
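A minimal sketch of the haze and sun-angle corrections described above. The haze step here is a simplified dark-pixel subtraction applied within one band (a full correction would select the dark target with the help of an infrared band, as the text explains), and the band values and 30° solar elevation are made-up illustrative numbers.

```python
import numpy as np

def haze_correction(band):
    """Subtract the darkest DN in the band, assuming that pixel should be zero
    (e.g. clear deep water) and that the offset is additive haze."""
    return band - band.min()

def sun_angle_correction(band, solar_elevation_deg):
    """Normalise DN-values by the sine of the solar elevation: DN' = DN / sin(a)."""
    return band / np.sin(np.radians(solar_elevation_deg))

# Illustrative 3 x 3 'blue band' with an additive haze offset of about 12 DN.
blue = np.array([[12, 40, 55],
                 [18, 60, 72],
                 [12, 35, 80]], dtype=float)

dehazed = haze_correction(blue)
corrected = sun_angle_correction(dehazed, solar_elevation_deg=30.0)
print(dehazed)
print(corrected)   # values roughly double, since sin(30 deg) = 0.5
```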
Geometric Aspects

Introduction

The following geometric characteristics have to be taken into consideration when the data are used:

- To derive two-dimensional (x, y) and three-dimensional (x, y, z) coordinate information. 2D geometric descriptions of objects (points, lines, areas) can be derived from a single image or photo. 3D geometric descriptions (2.5D terrain relief, 3D objects as volumes) can be derived from stereo pairs of images or photos. Extraction of 3D information from images requires a specific process called orientation.
- To merge different types of image data for integrated processing and analysis. Consider a land cover classification based on multispectral Landsat and SPOT data. Both data sets need to be converted into the same geometric grid before they can be processed simultaneously. This can be achieved by a geocoding process.
- To visualize the image data in a GIS environment. There is a growing amount of image data available that are used as a backdrop for other (vector stored) data. To enable such integration, the image data need to be georeferenced to the coordinate system of the vector stored data.

Relief Displacement

A characteristic of most sensor systems is the distortion of the geometric relationship between the image data and the terrain caused by relief differences on the ground. This effect is most apparent in aerial photographs and airborne scanner data and is illustrated in Fig. 31. Consider the situation on the left, in which a true vertical aerial photograph is taken of a flat terrain. The distances (A – B) and (a – b) are proportional to the total width of the scene and its image on the negative respectively. In the left-hand situation, by using the scale factor, we can compute (A – B) from a measurement of (a – b) in the negative. In the right-hand situation there is a significant terrain relief difference. As you can now observe, the distance between a and b in the negative has become larger, although when measured in the terrain system it is still the same as in the left-hand situation.

Fig. 31 Illustration of the effect of terrain topography on the relationship between A-B (on the ground) and a-b (on the photograph). Flat terrain (a); significant height difference (b).

This phenomenon does not occur in the centre of the photo but becomes increasingly prominent towards the edges of the photo. This effect is called relief displacement: terrain points whose elevation is above or below the reference elevation are displaced respectively away from or towards the nadir point. The magnitude of the displacement, δr (mm), is approximated by:

δr = r x h / H

where r is the distance (mm) from the nadir, h (m) is the height of the terrain above the reference plane and H (m) is the flying height above the reference plane (where the nadir intersects the terrain). The equation shows that the amount of relief displacement is zero at the nadir (r = 0), greatest at the corners of the photograph and inversely proportional to the flying height. In addition to relief displacement, you can imagine that buildings and other tall objects can also cause displacement (height displacement). This effect is, for example, encountered when dealing with large scale photos of urban or forest areas. The main effect of relief displacement is that inaccurate or wrong coordinates might be determined when, for example, digitizing from image data.
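The relief displacement relation above amounts to a one-line function; the radial distance, terrain height and flying height used in the example call below are illustrative values only.

```python
def relief_displacement(r_mm, h_m, H_m):
    """Approximate radial relief displacement (mm) on the photo:
    delta_r = r * h / H, zero at the nadir and largest towards the photo edges."""
    return r_mm * h_m / H_m

# Example: a point 100 mm from the nadir, 50 m above the reference plane,
# photographed from 2000 m above that plane.
print(relief_displacement(r_mm=100, h_m=50, H_m=2000))   # 2.5 mm on the photo
```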
Whether relief displacement should be considered in the geometric processing of the image data depends on its impact on the required accuracy of the geometric information derived from the images. Relief displacement can be corrected for if information on the terrain topography is available (in the form of a DTM).

Two-dimensional Approaches

Here we consider the geometric processing of image data in situations where relief displacement can be neglected. An example of such image data is a digitized aerial photograph of a flat area (flat being defined by h/H < 1/1000). For images taken from space with only a medium resolution, the relief displacement usually is less than a few pixels and thus not important. These data are stored in a column-row system in which columns and rows are indicated by the indices i and j respectively. The objective is to relate the image coordinate system to a specific map coordinate system (Fig. 32).

Fig. 32 Coordinate system of the image defined by rows and columns (a) and map coordinate system with x- and y-axis

Two-dimensional Approaches - Georeferencing

The simplest way to link an image to a map projection system is to use a geometric transformation. A transformation is a function that relates the coordinates of two systems. A transformation relating (x, y) to (i, j) is typically defined by linear equations, such as: x = 3 + 5i, and y = -2 + 2.5j. Using the above transformation, for example, image position (i = 5, j = 8) relates to map coordinates (x = 28, y = 18). Once such a transformation has been determined, the map coordinates for each image pixel can be calculated. The resulting image is called a georeferenced image. It allows the superimposition of vector data and the storage of the data in map coordinates when applying on-screen digitizing. Note that the image as such remains stored in the original (i, j) raster structure, and that its geometry is not altered.

Two-dimensional Approaches - Geocoding

The georeferencing approach is useful in many situations. However, in other situations a geocoding approach, in which the image grid is also transformed, is required. Geocoding is required when different images need to be combined or when the image data are used in a GIS environment that requires all data to be stored in the same map projection. The effect of georeferencing / geocoding is illustrated in Fig. 33. Geocoding is georeferencing with subsequent resampling of the image raster. This means that a new image raster is defined along the xy-axes of the selected map projection. The geocoding process comprises two main steps: first, each new raster element is projected (using the transformation parameters) onto the original image; secondly, a (DN) value for the new pixel is determined and stored.

Fig. 33 Original, georeferenced and geocoded satellite image.

Three-dimensional Approaches - Stereoplotting

The basic process of stereoplotting is to form a stereo model of the terrain and to digitize features by measurements made in this stereo model. A stereo model is a special combination of two photographs of the same area taken from different positions; aerial photographs are usually flown with 60 % overlap between subsequent photos. Stereo pairs can also be derived from other sensors such as multispectral scanners and imaging radar. The measurements made in a stereo model refer to a phenomenon that is called parallax. Parallax refers to the fact that an object photographed from different positions has different relative positions in the two images.
Three-dimensional Approaches - Stereoplotting
The basic process of stereoplotting is to form a stereo model of the terrain and to digitize features by measurements made in this stereo model. A stereo model is a special combination of two photographs of the same area taken from different positions; aerial photographs are usually flown with 60 % overlap between subsequent photos. Stereo pairs can also be derived from other sensors such as multispectral scanners and imaging radar. The measurements made in a stereo model refer to a phenomenon called parallax. Parallax refers to the fact that an object photographed from different positions has different relative positions in the two images. Since this effect is directly related to the relative height, measurement of these parallax differences yields height information (Fig. 34).

Fig. 34 The same tree is present in two (overlapping) photographs. Because of the height of the tree, the positions of the tree top and base relative to the photo centres are different. This difference (parallax) can be used to calculate its height.

A stereo model enables measurement using a special (3D) cursor. If the stereo model is appropriately oriented, the parallax measurements yield (x, y, z) coordinates. To view and navigate in a stereo model, various hardware solutions may be used. So-called analogue and analytical plotters were used in the past. These instruments are called 'plotters' since the features delineated in the stereo model were directly plotted onto film (for production). Today, digital photogrammetric workstations (DPWs) are increasingly used. Stereovision, the impression of depth, in a DPW is realised using a dedicated combination of monitor and special spectacles (e.g., polarized).

To form a stereo model for 3D measurements, the stereo model needs to be oriented. The orientation process involves three steps:
 The relation between the film-photo and the camera system is defined. This is the so-called inner orientation. It requires identification of fiducial marks on the photos and the exact focal length.
 The relative tilts of the two photographs are determined. This is called the relative orientation and requires identification of identical points ('tie points') in both photographs.
 After the inner and relative orientations, a geometrically correct three-dimensional model is formed. This model must be brought to a known scale and levelled with respect to the horizontal reference datum of the terrain coordinate system. For this purpose, 3D ground control points (i.e., x, y and elevation) are identified. This third step is known as absolute orientation.

One of the possibilities in stereoplotting is to superimpose vector data onto a stereo model for updating purposes. In this way the operator can limit his/her work to the changed features rather than recording all information.

Image Enhancement and Visualisation

Introduction
There is a need to visualize image data at most stages of the remote sensing process. It is in the process of information extraction that visualization plays an important role, particularly so in the case of visual interpretation but also during automated classification procedures. An understanding of how we perceive colour is required at two main stages in the remote sensing process. In the first instance, it is required in order to produce optimal pictures from (multispectral) image data on the computer screen or as a (printed) hard-copy. Thereafter, the theory of colour perception plays an important role in the subsequent interpretation of these pictures.

Perception of Colour
Colour perception takes place in the human eye and the associated part of the brain. Colour perception concerns our ability to identify and distinguish colours, which in turn enables us to identify and distinguish entities in the real world. It is not completely known how human vision works, or what exactly happens in the eyes and brain before someone decides that an object is, for example, light blue. Some theoretical models, supported by experimental results, are however generally accepted.
Colour perception theory is applied whenever colours are reproduced, for example in colour photography, TV, printing and computer animation.

Perception of Colour - Tri-stimuli Model
The eye's general sensitivity is to wavelengths between 400 - 700 nm. Different wavelengths in this range are experienced as different colours. The retinas in our eyes have cones (light-sensitive receptors) that send signals to the brain when they are hit by photons with energy levels that correspond to different wavelengths in the visible range of the electromagnetic spectrum. There are three different types of cones, responding to blue, green and red wavelengths (Fig. 35).

Fig. 35 Visible range of the electromagnetic spectrum including the sensitivity curves of the cones in the human eye

The signals sent to our brain by these cones, and the differences between them, give us colour sensations. In addition to cones, we have rods, which do not contribute to colour vision. The rods can operate with less light than cones. For this reason, objects appear less colourful in low light conditions.

Screens of colour television sets and computer monitors are composed of a large number of small dots arranged in a regular pattern of groups of three: a red, a green and a blue dot. At a normal viewing distance from the screen we cannot distinguish the individual dots. Electron guns for red, green and blue are positioned at the back-end of the tube. The number of electrons fired by these guns at a certain position on the screen determines the amount of (red, green and blue) light emitted from that position. All colours visible on such a screen are therefore created by mixing different amounts of red, green and blue. This mixing takes place in our brain. When we see monochromatic yellow light (i.e., with a distinct wavelength of, say, 570 nm) we get the same impression as when we see a mixture of red (say, 700 nm) and green (530 nm). In both cases the cones are stimulated in the same way. According to the tri-stimuli model, therefore, three different kinds of dots are necessary and sufficient.

Perception of Colour - Colour Spaces
The tri-stimuli model of colour states that there are three degrees of freedom in the description of a colour. Various three-dimensional spaces are used to describe and define colours. For our purposes the following three are sufficient:
 Red Green Blue (RGB) space, based on the additive principle of colours.
 Intensity Hue Saturation (IHS) space, which is most related to our intuitive perception of colour.
 Yellow Magenta Cyan (YMC) space, based on the subtractive principle of colours.

RGB
The RGB definition of colours is directly related to the way in which computer and television monitors function. Three channels (RGB), directly related to the red, green and blue dots, are input to the monitor. When we look at the result, our brain combines the stimuli from the red, green and blue dots and enables us to perceive all possible colours from the visible part of the spectrum. During the combination, the three colours are added. When green dots are illuminated in addition to red ones, we see yellow. This principle is called the additive colour scheme. Fig. 36 illustrates the additive colours caused by bundles of light from red, green and blue spotlights shining on a white wall in a dark room.

Fig 36. Comparison of the (a) additive and (b) subtractive colour schemes

When only red and green light occurs, the result is yellow.
In the central area there are equal amounts of light from all three of the spotlights and we experience white. In the additive colour scheme, all visible colours can be expressed as combinations of red, green and blue and can therefore be plotted in a three-dimensional space with R, G and B along the axes.

IHS
In daily speech we do not express colours in the red, green and blue of the RGB system. The IHS system, which refers to intensity, hue and saturation, more naturally reflects our sensation of colour. Intensity describes whether a colour is light or dark. Hue refers to the names we give to colours: red, green, yellow, orange, purple, et cetera. Saturation describes a colour in terms of pale versus vivid. Pastel colours have low saturation; grey has zero saturation. As in the RGB model, three degrees of freedom are sufficient to describe any colour.

YMC
Whereas RGB is used in computer and TV displays, the YMC colour description is used in colour definition on hard copy, for example printed pictures but also photographic films and paper. The principle of the YMC colour definition is to consider each component as a coloured filter (yellow, magenta and cyan). Each filter subtracts one primary colour from the white light: the magenta filter subtracts green, so that only red and blue are left; the cyan filter subtracts red, and the yellow one blue. Where the magenta filter overlaps the cyan one, both green and red are subtracted, and we see blue. In the central area, all light is filtered away and the result is black. Colour printing, which uses white paper and yellow, magenta and cyan ink, is based on the subtractive colour scheme. When white light falls on the document, part is filtered out by the ink layers and the remainder is reflected from the underlying paper (Fig. 36).
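As a rough illustration of the link between the RGB and IHS descriptions, the sketch below uses Python's standard colorsys module. Note that colorsys implements the closely related HSV space (hue, saturation, value), which is used here only as a stand-in for IHS; the RGB values are chosen for illustration.

```python
import colorsys

# RGB values are given as fractions between 0 (no light) and 1 (full intensity).
# Pure red plus pure green (additive mixing) is perceived as yellow.
r, g, b = 1.0, 1.0, 0.0

# "Value" plays the role of intensity; hue and saturation correspond loosely
# to the hue and saturation of the IHS description in the text.
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h * 360, s, v)   # -> 60.0 1.0 1.0: a fully saturated, bright yellow

# A pastel (pale) version of the same hue has lower saturation,
# and grey has zero saturation, as described in the text.
print(colorsys.rgb_to_hsv(1.0, 1.0, 0.6))   # same hue, saturation 0.4
print(colorsys.rgb_to_hsv(0.5, 0.5, 0.5))   # grey: saturation 0
```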
Visual Image Interpretation

Introduction
Up to now we have been dealing with the acquisition of image data. The data acquired still need to be interpreted (or analysed) to extract the required information. In general, information extraction methods from remote sensing imagery can be subdivided into two groups:
 Information extraction based on visual analysis or interpretation of the data; e.g., the generation / updating of topographic maps from aerial photographs is based on visual interpretation.
 Information extraction based on semi-automatic processing by the computer. Examples include automatic generation of DTMs, image classification and calculation of surface parameters.

The most intuitive way to extract information from remote sensing images is by visual image interpretation, which is based on man's ability to relate colours and patterns in an image to real world features. In some situations, pictures are studied to find evidence of the presence of features. Most often the result of the interpretation is made explicit by digitizing the geometric and thematic data of relevant objects ('mapping'). The digitizing of 2D features (points, lines and areas) is carried out using a digitizer tablet or on-screen digitizing. 3D features interpreted in stereo pairs can be digitized using stereoplotters or digital photogrammetric workstations.

Image Understanding and Interpretation - Human Vision
Human vision goes a step beyond the perception of colour: it deals with the ability of a person to draw conclusions from visual observation. In analysing a picture, typically you are somewhere between the following two situations:
 direct and spontaneous recognition, or
 using several clues to draw conclusions by a reasoning process (logical inference).

Spontaneous recognition refers to the ability of an interpreter to identify objects or phenomena at a first glance. For example, most people can directly relate an aerial photo to their local environment. The quote from people who are shown an aerial photograph for the first time, "I see because I know", refers to spontaneous recognition.

Logical inference means that the interpreter applies reasoning. In the reasoning the interpreter will use his/her professional knowledge and experience. Logical inference is, for example, concluding that a rectangular shape is a swimming pool because of its location in a garden and near to a house. Sometimes logical inference alone cannot help you in interpreting images, and field observations are required.

Image Understanding and Interpretation - Interpretation Elements
When dealing with image data visualized as pictures, a set of terms is required to express and define the characteristics present in a picture. These characteristics are called interpretation elements and are used, for example, to define interpretation keys, which provide guidelines on how to recognise certain objects. The following interpretation elements are distinguished: tone/hue, texture, shape, size, pattern, site and association.

Tone is defined as the relative brightness in a black/white image. Hue refers to the colour on the image as defined in the intensity-hue-saturation (IHS) system. Tonal variations are an important interpretation element in image interpretation. The tonal expression of objects in the image is directly related to the amount of light (energy) reflected from the surface. Different types of rocks, soil or vegetation most likely have different tones. Variations in moisture conditions are also reflected as tonal differences in the image: increasing moisture content gives darker grey tones. Variations in hue are primarily related to the spectral characteristics of the measured area and also to the bands selected for visualization. The advantage of hue over tone is that the human eye has a much larger sensitivity to variations in colour (approximately 1000 colours) than to tone (approximately 200 grey levels).

Shape or form characterizes many terrain objects visible in the image. Shape also relates to (relative) height when dealing with stereo-images. Height differences are important to distinguish between different vegetation types and also in geomorphological mapping. The shape of objects often helps to determine the character of the object (built-up areas, roads and railroads, agricultural fields, etc.).

Size of objects can be considered in a relative or absolute sense. The width of a road can be estimated, for example, by comparing it to the size of the cars, which is generally known. Subsequently this width determines the road type, e.g., primary road, secondary road, etc.

Pattern refers to the spatial arrangement of objects and implies the characteristic repetition of certain forms or relationships. Pattern can be described by terms such as concentric, radial or checkerboard. Some land uses, however, have specific and characteristic patterns when observed on aerospace data, e.g. patterns related to erosion.

Texture relates to the frequency of tonal change.
Texture may be described by terms such as coarse or fine, smooth or rough, even or uneven, mottled, speckled, granular, linear, woolly, et cetera. Texture can often be related to terrain roughness. Texture is strongly related to the spatial resolution of the sensor applied.

Site relates to the topographic or geographic location. A typical example of this interpretation element is that back swamps can be found in a flood plain but not in the centre of a city area. Similarly, a large building at the end of a number of converging railroads is likely to be a railway station – we do not expect a hospital at this site.

Association refers to the fact that a combination of objects makes it possible to infer their meaning or function. An example of the use of 'association' is the interpretation of a thermal power plant based on the combined recognition of high chimneys, large buildings, cooling towers, coal heaps and transportation belts.

Tone or hue can be defined for a single pixel; texture is defined for a neighbouring group of pixels, not for a single pixel. The other interpretation elements relate to individual objects or a combination of objects. The simultaneous, and often implicit, use of all these elements is the strength of visual image interpretation. In standard image classification only hue is applied, which explains the limitations of automated methods compared to visual image interpretation.

Image Understanding and Interpretation - Stereoscopic Vision
The impression of depth encountered in the real world can also be realized by images of the same object that are taken from different positions. Such a pair of images, photographs or digital images, is separated and observed at the same time by the two eyes. This gives images on the retinas in which objects at different positions in space are projected onto relatively different positions. We call this stereoscopic vision. Pairs of images that can be viewed stereoscopically are called stereograms.

Stereoscopic vision is explained here because the impression of height and height differences is important in the interpretation of both natural and man-made features from image data. Under normal conditions we can focus on objects between 150 mm distance and infinity. In doing so we direct both eyes to the object (point) of interest. This is known as convergence. To view the stereoscopic model formed by a pair of overlapping photographs, the two images have to be separated so that the left and right eyes see the left and right photographs respectively. In addition, one should not focus on the photo itself but at infinity. Some experienced persons can experience 'stereo' by putting the two photos at a suitable distance from their eyes. Most of us need some help, and different methods have been developed. Pocket and mirror stereoscopes, and also the photogrammetric plotters, use a system of lenses and mirrors to 'feed' one image into one eye. Pocket and mirror stereoscopes are mainly applied in mapping applications related to vegetation, forest, soil and geomorphology (Fig. 37).

Fig. 37 The mirror stereoscope enables stereoscopic vision of stereograms. Each photo is projected onto one eye

Photogrammetric plotters are used in topographic and large scale mapping activities. Another way of achieving stereovision is to project the two images in two colours. Most often red and green colours are applied; the corresponding spectacles comprise one red and one green glass.
This method is known as the anaglyph system and is particularly suited to viewing overlapping images on a computer screen.

Multispectral Scanners

Introduction
Multispectral scanners measure reflected electromagnetic energy by scanning the Earth's surface. This results in digital image data, of which the elementary unit is a picture element: the pixel. As the name multispectral suggests, the measurements are made for different ranges of the EM spectrum. Multispectral scanners have been used in remote sensing since 1972, when the first Landsat satellite was launched. After the aerial camera, it is the most commonly used sensor. Applications of multispectral scanner data are mainly in the mapping of land cover, vegetation, surface mineralogy and surface water. Two types of multispectral scanners are distinguished: the whiskbroom scanner and the pushbroom scanner. Multispectral scanners are mounted on airborne and spaceborne platforms.

Whiskbroom Scanner
A combination of a single detector plus a rotating mirror can be arranged in such a way that the detector beam sweeps in a straight line over the Earth, across the track of the satellite, as the mirror rotates. In this way, the Earth's surface is scanned systematically line by line as the satellite moves forward. Because of this sweeping motion, the whiskbroom scanner is also known as the across-track scanner (Fig. 23). The first multispectral scanners applied the whiskbroom principle. Today, many scanners are still based on this principle: NOAA/AVHRR and Landsat/TM, for instance.

Fig. 23 Principle of the whiskbroom scanner

Spectral Characteristics
Whiskbroom scanners use solid state detectors for measuring the energy transferred by the optical system to the sensor. This optical system separates the incoming radiation into spectral components that each have their own detector. The detector transforms the electromagnetic radiation (photons) into electrons. The electrons are input to an electronic device that quantifies the level of energy into the required units. In digital imaging systems, a discrete value is used to store the level of energy. These discrete levels are referred to as Digital Number values or DN-values. The fact that the input is measured in discrete levels is also referred to as quantization. One can calculate the amount of energy of a photon corresponding to a specific wavelength using:

Q = h × ν

where Q is the energy of a photon (J), h is Planck's constant (6.6262 × 10⁻³⁴ J s) and ν is the frequency (cycles per second, Hz).

The solid state detector measures the amount of energy (J) during a specific time period, which results in J/s = watt (W). The range of input radiance, between a maximum and a minimum level, that a detector can handle is called the dynamic range.

Geometric Characteristics
At any instant the mirror of the whiskbroom scanner 'sees' a circle-like area on the ground. Directly below the platform (at nadir), the diameter, D, depends on the viewing angle of the system, β, and the height H:

D = β × H

where D is expressed in metres, β in radians and H in metres. Consider a scanner with β = 2.5 mrad that is operated at 4000 m. Using the formula, the diameter of the area observed under the platform is 10 m. The viewing angle of the system (β) is also referred to as the instantaneous field of view, abbreviated as IFOV. The IFOV determines the spatial resolution of a scanner.
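A short Python sketch of the two formulas above; β and H are taken from the example in the text, while the 0.55 µm wavelength is an illustrative assumption.

```python
# Planck relation and IFOV ground diameter, using the quantities defined above.
h = 6.6262e-34        # Planck's constant (J s)
c = 3.0e8             # speed of light (m/s), needed to get frequency from wavelength

# Photon energy for an illustrative wavelength of 0.55 µm (green light).
wavelength = 0.55e-6              # m
nu = c / wavelength               # frequency (Hz)
Q = h * nu                        # energy of one photon (J)
print(Q)                          # ~3.6e-19 J

# Ground diameter seen by the scanner: D = beta * H (beta in radians, H in metres)
beta = 2.5e-3                     # 2.5 mrad, as in the text
H = 4000.0                        # flying height (m)
D = beta * H
print(D)                          # -> 10.0 m, matching the example in the text
```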
The field of view (FOV) describes the total angle that is scanned. For aircraft scanners it is usually expressed as an angle; for satellite-based scanners with a fixed height the effective image width is used. Consider a single line scanned by a whiskbroom scanner mounted on a static platform. This results in a series of measurements going from the left to the right side. The value for a single pixel is calculated by integrating over a carefully selected time interval.

Pushbroom Scanner
The pushbroom scanner is based on the use of Charge-Coupled Devices (CCDs) for measuring the electromagnetic energy (Fig. 24). A CCD-array is a line of photo-sensitive detectors that function similarly to solid state detectors. A single element can be as small as 5 µm. Today, two-dimensional CCD-arrays are used in digital cameras and video recorders. The CCD-arrays used in remote sensing are more sensitive and have larger dimensions. The first satellite sensor using this technology was SPOT-1 HRV. High resolution sensors such as IKONOS and Orbview3 also apply the pushbroom principle.

Fig. 24 Principle of the pushbroom scanner

The pushbroom scanner records one entire line at a time. The principal advantage over the whiskbroom scanner is that each position (pixel) in the line has its own detector. This enables a longer period of measurement over a certain area, resulting in less noise and a relatively stable geometry. Since the CCD elements continuously measure along the direction of the platform, this scanner is also referred to as the along-track scanner.

Spectral Characteristics
To a large extent, the characteristics of a solid state detector are also valid for a CCD-array. In principle, one CCD-array corresponds to a spectral band, and all the detectors in the array are sensitive to a specific range of wavelengths. With the current state-of-the-art technology, CCD-array sensitivity stops at 2.5 µm wavelength. If longer wavelengths are to be measured, other detectors need to be used. One drawback of CCD-arrays is that it is difficult to produce an array in which all the elements have similar sensitivity. Differences between the detectors may be visible in the recorded images as vertical banding.

Geometric Characteristics
For each single line, pushbroom scanners have a geometry similar to that of aerial photos (which have a 'central projection'). In the case of flat terrain and a limited total field of view (FOV), the scale is the same over the line, resulting in equally spaced pixels. The concept of IFOV cannot be applied to pushbroom scanners. Typical of most pushbroom scanners is the ability for off-track viewing. In such a situation, the scanner is pointed towards areas to the left or right of the orbit track (off-track) or to the back or front (along-track). This characteristic has two advantages: it can be used to produce stereo-images, and it can be used to image areas away from the orbit track. With off-track viewing, similar to oblique photography, the scale in the image varies and should be corrected for. As with whiskbroom scanners, an integration over time takes place in pushbroom scanners. Consider a moving platform with a pushbroom scanner. Each element of the CCD-array measures the energy related to a small strip below the platform. Every n milliseconds the recorded energy (W) is averaged to determine the DN-value for each pixel along the line.
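To illustrate how the integration and quantization described above could work in principle, here is a minimal sketch; the sample values, the dynamic range and the 8-bit quantization are assumptions chosen for illustration only.

```python
import numpy as np

def to_dn(samples_w, min_w, max_w, bits=8):
    """Average the energy samples recorded during one integration interval and
    quantize the result into a DN within the detector's dynamic range.

    samples_w    : energy samples (W) recorded by one CCD element
    min_w, max_w : dynamic range of the detector (W)
    bits         : number of quantization bits (8 bits -> DN values 0..255)

    Illustrative sketch only; real sensors also apply radiometric calibration.
    """
    mean_w = np.mean(samples_w)
    levels = 2 ** bits - 1
    scaled = (mean_w - min_w) / (max_w - min_w)
    return int(round(np.clip(scaled, 0.0, 1.0) * levels))

# One CCD element, one integration interval of n milliseconds (values invented):
samples = [0.42, 0.45, 0.43, 0.44]           # measured energy in W
print(to_dn(samples, min_w=0.0, max_w=1.0))  # -> 111 on a 0..255 scale
```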
RADAR

What is radar?
So far, we have discussed remote sensing using the visible and infrared parts of the electromagnetic spectrum. Microwave remote sensing uses electromagnetic waves with wavelengths between 1 cm and 1 m (Fig. 8). These relatively long wavelengths have the advantage that they can penetrate clouds and are largely independent of atmospheric conditions such as haze.

In microwave remote sensing there are active and passive sensors. Passive sensors operate similarly to thermal sensors by detecting naturally emitted microwave energy. They are used in meteorology, hydrology and oceanography. In active systems, the antenna transmits microwave signals to the Earth's surface, where they are backscattered. The part of the electromagnetic energy that is scattered back in the direction of the antenna is detected by the sensor, as illustrated in Fig. 25.

Fig. 25 Principle of active microwave remote sensing

There are several advantages to be gained from the use of active sensors, which have their own energy source:
 It is possible to acquire data at any time, including during the night (similar to thermal remote sensing).
 Since the waves are created actively, the signal characteristics are fully controlled (e.g., wavelength, polarization, incidence angle, etc.) and can be adjusted according to the desired application.

Active sensors are divided into two groups: imaging and non-imaging sensors. RADAR sensors belong to the group of most commonly used active imaging microwave sensors. The term RADAR is an acronym for RAdio Detection And Ranging. Radio stands for the microwaves and range is another term for distance. Radar sensors were originally developed and used by the military. Nowadays, radar sensors are widely used in civil applications too, such as environmental monitoring. To the group of non-imaging microwave instruments belong altimeters, which collect distance information (e.g., sea surface height), and scatterometers, which acquire information about object properties (e.g., wind speed).

Principles of Imaging Radar
Imaging radar systems include several components: a transmitter, a receiver, an antenna and a recorder. The transmitter is used to generate the microwave signal and transmit the energy to the antenna, from where it is emitted towards the Earth's surface. The receiver accepts the backscattered signal as received by the antenna, and filters and amplifies it as required for recording. The recorder then stores the received signal. Imaging radar acquires an image in which each pixel contains a digital number according to the strength of the backscattered energy that is received from the ground. The energy received from each transmitted radar pulse can be expressed in terms of the physical parameters and illumination geometry using the so-called radar equation:

Pr = (Pt × G² × λ² × σ) / ((4π)³ × R⁴)

where Pr is the received energy, Pt is the transmitted energy, G is the antenna gain, λ is the wavelength, σ is the radar cross section, which is a function of the object characteristics and the size of the illuminated area, and R is the range from the sensor to the object. From this equation it can be seen that there are three main factors that influence the strength of the backscattered received energy:
 radar system properties, i.e., wavelength, antenna and transmitted power;
 radar imaging geometry, which defines the size of the illuminated area and is a function of beam-width, incidence angle and range;
 object characteristics in relation to the radar signal, i.e., surface roughness and composition, and terrain topography and orientation.
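A small sketch evaluating the radar equation above; all parameter values are invented for illustration and do not describe any particular sensor.

```python
import math

def received_power(pt, gain, wavelength, sigma, r):
    """Radar equation for a monostatic radar
    (the same antenna is used for transmitting and receiving)."""
    return (pt * gain**2 * wavelength**2 * sigma) / ((4 * math.pi)**3 * r**4)

# Illustrative values only: a C-band system (wavelength ~5.6 cm) observing
# a 1 m^2 target from a range of 850 km.
pt = 5000.0          # transmitted power (W)
gain = 10 ** 3.5     # antenna gain (dimensionless, roughly 35 dB)
wavelength = 0.056   # m
sigma = 1.0          # radar cross section (m^2)
r = 850e3            # range (m)

print(received_power(pt, gain, wavelength, sigma, r))
# The received power is extremely small; the R^4 term in the denominator
# is why range dominates the design of radar systems.
```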
What exactly does a radar system measure?
To interpret radar images correctly, it is important to understand what a radar sensor detects. The transmitter creates microwave signals, i.e., pulses of microwaves at regular intervals (the Pulse Repetition Frequency, PRF), that are bundled by the antenna into a beam. This beam travels through the atmosphere, illuminates a portion of the Earth's surface, is backscattered and passes through the atmosphere again to reach the antenna, where the signal intensity is received. From the time interval the signal needs to travel twice the distance between object and antenna, and knowing the speed of light, the distance (range) between sensor and object can be derived. To create an image, the return signal of a single pulse is sampled and these samples are stored in an image line. With the movement of the sensor, emitting pulses, a two-dimensional image is created (each pulse defines one line). The radar sensor therefore measures distances and detects backscattered signal intensities.

Commonly used imaging radar bands
Similarly to optical remote sensing, radar sensors operate with different bands. For better identification, a standard has been established that defines various wavelength ranges using letters to distinguish among the various bands (Fig. 26). In the descriptions of different radar missions you will recognise the different wavelengths used if you see the letters. The European ERS mission and the Canadian Radarsat, for example, use C-band radar. Just like multispectral bands, different radar bands provide information about different object characteristics.

Fig. 26 Microwave spectrum and band identification by letters

Applications of radar
There are many useful applications of radar images. Radar data provide information complementary to visible and infrared remote sensing data. In the case of forestry, radar images can be used to obtain information about forest canopy, biomass and different forest types. Radar images also allow the differentiation of different land cover types such as urban areas, agricultural fields, water bodies, et cetera. In agricultural crop identification, the use of radar images acquired using different polarizations (mainly airborne) is quite effective. It is crucial for agricultural applications to acquire data at a certain point in time (season) to obtain the necessary parameters. This is possible because radar can operate independently of weather or daylight conditions. In geology and geomorphology, the fact that radar provides information about surface texture and roughness plays an important role in lineament detection and geological mapping. Other successful applications of radar include hydrological modelling and soil moisture estimation, based on the sensitivity of microwaves to moisture. The interaction of microwaves with ocean surfaces and ice provides useful data for oceanography and ice monitoring. Operational systems use data from the European SAR system ERS-2 and the Canadian Radarsat programme. In this framework the data are also used for oil slick monitoring and environmental protection. Looking at SAR interferometry, there are plenty of interesting examples in the field of natural disaster monitoring and assessment, i.e. earthquakes, volcano eruptions, flooding, et cetera.
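Returning to the ranging principle described earlier in this section (the range follows from the two-way travel time and the speed of light), a minimal sketch; the travel time used is an illustrative assumption.

```python
# Range from two-way travel time: the pulse travels to the object and back,
# so the one-way range is c * t / 2.
C = 3.0e8                     # speed of light (m/s)

def slant_range(two_way_time_s):
    """Distance between antenna and object from the two-way travel time."""
    return C * two_way_time_s / 2.0

# Illustrative value: an echo received 5.6 milliseconds after transmission
# corresponds to an object roughly 840 km away (of the order of a spaceborne SAR).
print(slant_range(5.6e-3))    # -> 840000.0 m
```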
Remote sensing below the ground surface
The foregoing methods have relied on the electromagnetic spectrum in or near the wavelengths of visible light and are largely confined in their application to the investigation of reflections and emissions from the Earth's surface. To probe more deeply into the ground, a range of methods that have their origin in the physical or chemical properties of the buried rocks themselves may be employed. These methods are often called 'geophysical methods'. While the methods of data collection may, of necessity, differ from those employed in other types of remote sensing, the presentation of geophysical data nowadays uses much of the technology originally developed for remote sensing data sensu stricto. To maximize our ability to probe into the depth dimension, the integrated interpretation of all available data is to be recommended; much is to be gained by a closer integration of remote sensing and geophysics, while appreciating the important differences in the origins of the 'images' obtained.

Summary
Geophysical methods therefore provide a wide range of possible methods of imaging the subsurface. Some are used routinely, others only for special applications. All are potentially useful to the alert geoscientist. Gravity and magnetic anomaly mapping has been carried out for almost 50 years. While most countries have national programmes, achievements to date are somewhat variable from country to country. The data are primarily useful for geological reconnaissance at scales from 1:250,000 to 1:1,000,000. Gamma-ray spectrometry, flown simultaneously with aeromagnetic surveys, has joined the airborne geophysical programmes supporting geological mapping in the past decade. All three methods are therefore used primarily by national geological surveys to support basic geoscience mapping, alongside conventional field and photo-geology, and to set the regional scene for dedicated mineral and oil exploration. It is normal that the results are published at a nominal cost for the benefit of all potential users.

Geophysical surveys for mineral exploration are applied in those more limited areas (typically at scales of 1:50,000 to 1:10,000) selected as being promising for closer (and more expensive!) examination. Typically this involves the use of geophysical methods (such as EM and IP) on the ground. Once accurately located in position (x, y) and depth, the most promising anomalies can be tested further by drilling.

Groundwater exploration has historically relied on electrical sounding and profiling, but has been supplemented in some cases by EM profiling and sounding and shallow seismic surveys. Regrettably, poor funding usually dictates that such surveys are less thorough and systematic than is the case in mineral exploration, despite the fact that drilling (especially the drilling of non-productive boreholes!) is such an expensive item.

Oil exploration relies almost entirely on detailed seismic surveys, once their locations have been selected on the basis of all available geological and regional geophysical data. The surveys are carried out by highly specialised contractors, up to date with the latest technology in this complex and sophisticated industry.
