LBRES2101 - Smart technologies in environmental engineering
Prof. Sébastien Lambot, Claire Lathuy
2023-2024

Table of contents

1 General introduction
2 Geophysical methods
2.1 Definition of permittivity
2.2 Ground-penetrating radar (GPR)
2.2.1 GPR principles
2.2.2 GPR image processing
2.2.3 Radar systems and antennas
2.2.4 Governing equations
2.2.5 Soil moisture determination
2.3 Electromagnetic induction (EMI)
2.3.1 EMI principles
2.3.2 The soil electrical conductivity
2.3.3 The soil magnetic susceptibility
2.3.4 EMI data processing
2.3.5 Applications
2.4 Time domain reflectometry (TDR)
2.4.1 TDR principles
2.4.2 TDR signal processing
2.5 Electrical resistivity tomography (ERT)
2.5.1 ERT principles
2.5.2 ERT data processing
2.5.3 Applications
2.6 Seismics
2.6.1 Seismic principles
2.6.2 Seismic waves
2.6.3 Applications
2.7 Computing methods in geophysics
2.7.1 Introduction
2.7.2 Inverse modeling
2.7.3 Artificial Neural Networks (ANN)
3 Photogrammetry
3.1 Introduction
3.2 Camera basics
3.3 Stereophotogrammetry
3.4 Interior Orientation
3.5 Exterior Orientation
3.5.1 Relative orientation
3.5.2 Absolute orientation
3.5.3 Exterior orientation - summary
3.6 Aero-triangulation
3.7 Orthophoto
4 UAVs for environmental monitoring
4.1 Introduction
4.2 Unmanned Aerial System (UAS)
4.2.1 The Unmanned Aerial Vehicle (UAV)
4.2.2 The Ground Control Station (GCS)
4.3 Mission Planning
4.4 Belgian and EU Airspace and Regulations
4.5 Common Data Types
4.6 Sensor information
4.6.1 UAV Multispectral Camera
4.6.2 UAV Thermal IR Camera
4.6.3 UAV LiDAR
4.7 Product Examples
4.7.1 UAV in Precision Agriculture
4.7.2 UAV in Forestry
4.7.3 UAV for Rock Glacier Tracking
4.8 Future Perspectives
5 Instruments and methods for topographic surveying
5.1 Angle measurement
5.2 Horizontal levelling
5.3 Distance measurement
5.4 Determination of point coordinates
5.5 Implantation techniques
5.6 Total station

1 General introduction

Sustainable and optimal management of soil and water resources is required to preserve our environment and to sustain increasing societal demands (e.g., food). Soil properties govern infiltration and runoff, evaporation, energy exchanges with the atmosphere, plant growth (food and energy), contamination, etc. Uses of freshwater: 70% for agriculture, 20% for industry, and 10% for domestic uses. Soil faces pressures due to climate change, such as salinization, sealing, pollutant contamination, and compaction. Characterizing these properties and their dynamics is therefore essential for managing soil and water resources.

The measurement method changes with the scale. At a small scale, soil samples are used to characterize the physical and chemical properties at individual points (with large variability between them). There are also handheld sensors, geophysics at the field scale (which is the management scale, of agricultural interest), close-range sensing, and remote sensing. The tools used vary with the scale. To manage the system, observation is the first step, with the characterization (destructive or not) and monitoring of the environment. An understanding of the processes is then needed for modeling. Finally, management strategies are selected.

2 Geophysical methods

2.1 Definition of permittivity

Dielectric techniques are based on the fact that water is a strongly polarizable molecule.
Indeed, at the molecular level, the distribution of electrons around the water molecule is asymmetric: water molecules constitute what are called permanent dipoles. In the absence of any external electric field, the dipoles have no specific orientation. When placed in an electric field E, however, the dipoles tend to align with the field and the material polarizes. The polarization field is proportional to the strength of the imposed electric field and to a material parameter that measures the capacity for polarization: the permittivity of the medium. Given the polar nature of water, this permittivity is quite high (the relative dielectric permittivity of water is about 80, while that of air is ≈ 1). The permittivity depends not only on the properties of the material but also on the frequency of the applied electric field. The orientation of the dipoles, and therefore the polarization, in an electric field is not instantaneous but is characterized by some inertia. If we apply a rapidly alternating electric field, the dipoles fail to keep up with the fast fluctuations of the field; at this point, the molecules absorb energy. This dependence of the polarization mechanism on the frequency of the alternating electric field can be described using a frequency representation of the law of polarization, and therefore a complex description of the permittivity. The dielectric constant of soils can be determined through the impact that the dielectric properties of a medium have on the propagation of electromagnetic waves in that medium. The dielectric permittivity influences the speed of the propagated electromagnetic waves, and these speeds are measurable with modern technologies. Dielectric permittivity also influences the reflection coefficient at an interface, and hence the amplitude of the reflected field.
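As a numeric sketch of how permittivity sets the wave speed, and hence how a reflector depth follows from a measured two-way travel time, the following assumes the standard relation v = c/√εr (formalized in the next section); all values are illustrative.

```python
# Wave velocity in a medium from its relative permittivity (v = c / sqrt(eps_r)),
# and conversion of a two-way travel time into a reflector depth (d = v * t / 2).
# Illustrative values only.

C = 299_792_458.0  # speed of light in free space [m/s]

def wave_velocity(eps_r: float) -> float:
    """Propagation velocity [m/s] in a medium of relative permittivity eps_r."""
    return C / eps_r ** 0.5

def depth_from_twt(t: float, eps_r: float) -> float:
    """Depth [m] of a reflector from a two-way travel time t [s]."""
    return wave_velocity(eps_r) * t / 2.0

# A wet soil (eps_r ~ 20) slows the wave to roughly c / 4.5:
v_wet = wave_velocity(20.0)       # ~6.7e7 m/s
d = depth_from_twt(30e-9, 20.0)   # a 30 ns two-way time maps to ~1 m depth
```

Note how the same travel time corresponds to very different depths in dry versus wet soil, which is why a permittivity (water-content) estimate is needed before time-to-depth conversion.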
The theory of electromagnetic waves, based on Maxwell's equations, allows us to calculate the propagation speed of an electromagnetic wave. It is given by

v = c/√εr

v = speed of the wave [m/s]
c = speed of light in free space [m/s]
εr = relative permittivity of the medium [-]

As the relationship between εr and the soil water content (SWC) is not very dependent on the soil type, a general calibration between the dielectric constant and the volumetric water content can be established and used.

2.2 Ground-penetrating radar (GPR)

2.2.1 GPR principles

Ground-penetrating radar (GPR) transmits electromagnetic waves into the ground and records the echoes coming from electromagnetic contrasts. It is one of the first EM tools to have been developed. The radar transmits EM waves into the ground via a transmitting antenna Tx. Each material is characterized by a dielectric permittivity ε (which influences the wave velocity), an electric conductivity σ (which influences the wave attenuation), and a magnetic permeability μ. When the wave encounters a contrasting material, part of it is reflected and captured by a receiving antenna Rx. The wave is fully reflected if the material is metallic. At EM contrasts, reflection, transmission, and refraction occur.

One important characteristic of GPR is its ultra-wideband operation: a GPR simultaneously uses a very wide range of frequencies (wavelengths on the order of 10 cm-10 m) to obtain a high-resolution image. Applications of GPR: civil engineering, archaeology (detecting wrecks), glaciology (thickness of melting ice), geology, environment, and agriculture.

The waves propagate vertically but also at different angles (giving rise to reflection hyperbolas). When a lateral wave reaches a pipe, part of it is reflected back to the receiving antenna as an echo; the traveled distance is longer and the amplitude smaller. As we get closer to the object, the travel time is reduced, as well as the attenuation. At the apex, directly above the target, the signal has the strongest amplitude because the losses are smallest.
When we move away from the object again, the wave comes back later and is attenuated again. The shape drawn by the reflection is called a reflection hyperbola. To determine whether the hyperbola corresponds to a pipe or a stone, we use 3D imaging to obtain images of the soil at different depths.

Propagation time: t = 2√(d² + (y − y0)²)/v, with d the depth of the object and y0 its lateral position
→ Hyperbola: t²/t0² − (y − y0)²/(v t0/2)² = 1, with t0 = 2d/v
Propagation velocity: v = c/√εr

If the hyperbola is narrow, the permittivity is larger: the velocity is smaller and the propagation time longer.

2.2.2 GPR image processing

Waves are more attenuated with a larger propagation distance, due to scattering (small interfering objects), the decrease of wave energy with distance, and the 3D spreading of the waves. The first correction to implement removes the effects of wave attenuation on the signal. The EM signal amplitude indeed gets weaker as it goes deeper into the soil, through both spherical spreading and (di-)electric losses. Without any correction, the signal is attenuated and rapidly becomes undetectable. To amplify the signal, the traces are multiplied by a gain function g(t).

Temporal and spatial frequency filters are also used to decrease noise or to enhance the signal. These filters use the Fourier transform (FFT) and the inverse Fourier transform (IFFT). The Fourier transform decomposes a waveform into a sum of sinusoids of different frequencies and amplitudes.

Figure 1 – FFT of a sine wave. Figure 2 – FFT of a radar pulse.

In both representations, there is a maximum in the frequency domain. A band-pass filter keeps the information within a certain band; it can be used to remove noise. The zero frequency represents what is constant in the soil. The first frequency peak is the target. At 0.15 m depth, there are metal bars in the concrete, which explains the small peak.
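The zero-spatial-frequency (constant) component can be suppressed by subtracting the mean trace from the B-scan, which is the usual average background removal; a minimal numpy sketch on synthetic data (all values illustrative):

```python
# Average background removal: subtract the mean trace (the zero spatial
# frequency) from every trace of a B-scan, so that horizontal, constant
# reflectors — including antenna ringing — are suppressed and local
# targets stand out.
import numpy as np

def background_removal(bscan: np.ndarray) -> np.ndarray:
    """bscan: 2-D array (time samples x traces). Returns the B-scan with
    the mean trace subtracted from each trace."""
    mean_trace = bscan.mean(axis=1, keepdims=True)
    return bscan - mean_trace

# Synthetic example: a constant horizontal band plus one local anomaly.
data = np.zeros((4, 5))
data[2, :] = 1.0    # constant reflector present in every trace
data[1, 3] = 0.5    # local target in a single trace
clean = background_removal(data)  # the constant band vanishes, target remains
```

The local target survives (slightly biased by its own contribution to the mean), while the constant reflector is removed exactly; this is why the method works best when targets are laterally localized.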
Figure 3 – Band-pass filters with respect to time (t → f) and space (x → k)

Average background removal aims to remove everything that is constant (i.e., removing the zero spatial frequency). Afterward, it is easier to detect local targets. This removes constant reflectors, including antenna reflections that are independent of the medium.

2.2.3 Radar systems and antennas

There are two types of GPR systems. The time-domain GPR (impulse GPR) is prevalent on the market.
— A time-domain pulse is transmitted and the reflected energy is received as a function of time.
— Range information is based on the travel-time principle.
— The associated bandwidth is determined by the pulse width.
— The duration of the pulse is kept as short as possible (a monocycle) to obtain a good depth resolution. The shorter the pulse duration in the time domain, the larger the bandwidth in the frequency domain, and hence the finer the range resolution.

Frequency-domain GPR
— This GPR is slower than a pulse radar but has a larger dynamic range.
— At each frequency, a continuous wave is transmitted.
— The frequency synthesizer steps through a range of frequencies in equal steps (a sweep).
— For each frequency, the amplitude and phase of the received signal are compared with the transmitted signal, with which it is mixed.
— The frequency data can be transformed into the time domain by an inverse Fourier transformation.
— The result is known as a synthesized pulse.

There are three major antenna types for GPR. The larger the wavelength, the lower the frequency and the deeper the penetration depth.

Dipole antenna
— The dipole length is half the wavelength.
— Wavelength: λ = vT = v/f, with v = c in free space (c = 3 × 10⁸ m/s)
— Limitations: narrow bandwidth; used for low-frequency applications (telecommunications)

Bow-tie antenna
— Larger bandwidth than the dipole → better resolution in GPR images
— The most commonly used antenna in GPR applications
— It is worth noting that the impedance of these antennas is set close to that of the average ground, to transmit a maximum of energy into the ground. The larger the difference between the antenna impedance and the ground impedance, the larger the reflection at the soil surface and the smaller the energy transmitted into the ground.

TEM horn antenna
— High directivity
— Used off the ground
— Usually air-coupled

Figure 4 – Dipole antenna. Figure 5 – Bow-tie antenna. Figure 6 – TEM horn antenna.

Antenna arrays combine a large number of different antennas (receivers or transmitters). In such a series of antennas, it is possible to use only a part of the whole array.

Dynamic range = ratio of the largest receivable signal to the minimum detectable signal.

DR = 20 log10(Vmax/Vmin) [dB]

— Vmax (Volts) = maximum voltage that can be received; it must not overload the radar front end.
— Vmin (Volts) must be above the receiver noise level and have a minimum signal-to-noise ratio (SNR) to be detected.
— With most radar systems, an SNR of at least 60 dB is reached. It is usually expressed in decibels (dB) for a specified bandwidth in Hertz.

Attenuation loss in a material:

La = 8.686 × 2 × R × α [dB], with the attenuation constant α = 2πf √(μ0 μr ε0 εr / 2) × √(√(1 + tan²δ) − 1)

f = frequency (Hz)
tanδ = loss tangent of the material
εr = relative permittivity of the material
ε0 = absolute permittivity of free space
μr = relative magnetic permeability of the material
μ0 = absolute magnetic permeability of free space
R = range to the target (here 1.0 m)

LdB = 10 log10(A1²/A0²) = 20 log10(A1/A0)

Range resolution = ability of a radar to resolve 2 closely spaced targets.
— For a GPR return, 2 targets separated in time can be distinguished if the envelopes of their respective transient returns are clearly separated.
— The half-width (-3 dB in voltage) or half-power (-6 dB in power) point of the return signal is used as the reference point of clear separation.
— Rres = 1.39c/(2B√εr), where c = speed of light, B = bandwidth, and εr = dielectric constant
— The larger the dielectric permittivity, the better the resolution.

The choice of frequency is a tradeoff between penetration depth and resolution, because the greater the depth, the worse the resolution.

2.2.4 Governing equations

Electromagnetic wave propagation is described by Maxwell's equations:

−∇ × H + (σ + jωε)E = −J
∇ × E + jωμH = −K

with ε = dielectric permittivity, σ = electric conductivity, μ = magnetic permeability.

If we take the 1D solution (only Ex and Hy ≠ 0), two equations are obtained:

∂z Hy + (σ + jωε)Ex = 0
∂z Ex + jωμ Hy = 0

They can be combined to give the Helmholtz equation:

∂²Ex/∂z² − γ²Ex = 0, with γ² = jωμ(σ + jωε) ⇔ Ex = A e^(−γz) + B e^(+γz)

The propagation constant is complex-valued: γ = α + jβ.
The real part α is the attenuation constant: α = ω √(με/2) × (√(1 + tan²δ) − 1)^(1/2)
The imaginary part β is the phase constant: β = ω √(με/2) × (√(1 + tan²δ) + 1)^(1/2)
where tanδ = σ/(ωε).

Wave velocity: v = λ/T = ω/k = ω/β ≈ c/√εr
λ = wavelength, k = wavenumber, c = 1/√(μ0 ε0), εr = ε/ε0

At an interface between 2 materials, the incident electric field Ei is separated into a scattered part Es and a transmitted part Et:

R = Es/Ei = (γ1 − γ2)/(γ1 + γ2)
T = Et/Ei = 1 + R = 2γ1/(γ1 + γ2)

2.2.5 Soil moisture determination

Ultra-wideband microwave radars can also be used for estimating the electromagnetic properties of a medium.
Indeed, the dielectric permittivity directly depends on soil moisture, and the electrical conductivity σ depends on moisture, clay content, salinity, etc.

Applications
— Digital soil mapping
— Road inspection
— Wood characterization
— Concrete evaluation
— Non-destructive testing of materials

Link between soil physicochemical properties and electromagnetic properties:
— Empirical Topp's equation: θ = −5.3 × 10⁻² + 2.92 × 10⁻² εr − 5.5 × 10⁻⁴ εr² + 4.3 × 10⁻⁶ εr³
— Dielectric mixing models, e.g., the Complex Refractive Index Model (CRIM): θ = (εb^α − (1 − n)εs^α − n εa^α)/(εw^α − εa^α), with εb, εs, εa, εw the permittivities of the bulk soil, solid phase, air, and water, n the porosity, and α a shape exponent (often 0.5)

There are different straight-ray models for soil permittivity estimation.

Ground-wave propagation time between antennas
The transmitter and receiver are in 2 different boxes. There is an air wave due to direct transmission through the air between the source and the receiver; its slope corresponds to the speed of light. The ground wave has a slope that corresponds to the speed of the wave in the ground: v = x/t and εr = (c/v)².

Surface reflection coefficient
— The soil is assumed to be a homogeneous half-space.
— The electric conductivity is assumed to be negligible.
— The soil magnetic permeability is μ0 (free space).
— Plane-wave approximation.
— The method requires a measurement above a metal sheet (PEC), with the antenna at exactly the same height.
— R = (1 − √εr)/(1 + √εr) ⇔ εr = ((1 − R)/(1 + R))²
— R = −Es/Es,PEC, since Es,PEC = RPEC Ei with RPEC = −1

Single offset measurement
— The depth d of the interface should be known.
— Time-lapse measurements permit monitoring changes in water content.
— vsoil = 2√(d² + (0.5a)²)/trw
— εr = (c/vsoil)²

Reflection hyperbola
The velocity is determined from the convexity of the reflection hyperbola. It requires the presence of a scattering object.
vsoil = (√((x − 0.5a)² + d²) + √((x + 0.5a)² + d²))/trw,x, with 2 unknowns (vsoil, d)
a = distance between the 2 antennas
d = depth of the object

Common midpoint method (CMP)
Theoretical travel times as a function of the offset between the antennas: t1 = √(x1² + 4d²)/v, with v = c/√εr

Borehole radar and transmission tomography
This technique requires 2 different boreholes. Wave transmission requires the boreholes to be no more than several meters apart: if the distance were too large, the wave would be too strongly attenuated.

(Figures: 1D profiling; 2D tomography.)

After performing the GPR survey, the electrical conductivity distribution is reconstructed. There is a difference between the model and reality.

2D tomographic inversion using the ray approximation (optics)
Knowing the path length and the slowness in each cell, we can calculate the total propagation time: t = Σ(i=1..segments) si li, where s = slowness = inverse of the velocity and li = path length in cell i. If measurements are performed at different depths, there is a chance to retrieve the unknowns.

Linear system (objective function to minimize): ||Gm − d||² + λ||Dm||²
G = matrix of path lengths
m = slowness vector (Gm is like s·l, but over all cells)
d = observed travel times
λ||Dm||² = regularization term

As the solution is inherently not unique, the objective function is regularized, here using a smoothness constraint (D is the second-order spatial derivative).

As straight-ray methods are uncertain and not always robust, we can instead estimate the medium electromagnetic properties from the radar signal using full-wave inverse modeling.

Formulation of the inverse problem
Objective function: a least-squares problem (difference between measurements and model): min φ(b) = Σ(i=1..N) |Si* − Si(b)|², with 2 unknowns: antenna height and relative permittivity.

Well-posedness of the inverse problem
— The solution must be unique
— The solution must be stable with respect to measurement and modeling errors
— The optimization algorithm must be able to find the solution

It is difficult to evaluate the permittivity because of the presence of local minima in the objective function.
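Returning to the ray-based tomography objective ||Gm − d||² + λ||Dm||²: its minimizer can be obtained from the normal equations (GᵀG + λDᵀD)m = Gᵀd. A toy numpy sketch with a hypothetical 3-cell model (all values illustrative):

```python
# Regularized linear least squares for ray-based travel-time tomography:
# minimize ||G m - d||^2 + lam * ||D m||^2, solved via the normal equations
# (G^T G + lam * D^T D) m = G^T d.
import numpy as np

def regularized_lsq(G, d, D, lam):
    """Return the slowness vector m minimizing ||Gm - d||^2 + lam ||Dm||^2."""
    A = G.T @ G + lam * (D.T @ D)
    return np.linalg.solve(A, G.T @ d)

# Toy problem: 3 cells crossed by 3 rays; D = second-order difference operator.
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])          # path length of each ray in each cell [m]
D = np.array([[1.0, -2.0, 1.0]])        # smoothness (2nd spatial derivative)
m_true = np.array([0.01, 0.012, 0.01])  # true slownesses [s/m]
d = G @ m_true                          # synthetic (noise-free) travel times [s]
m = regularized_lsq(G, d, D, lam=1e-8)  # recovered slownesses ~ m_true
```

With noise-free data and a tiny λ the true model is recovered almost exactly; with real data, λ trades data fit against smoothness of the reconstruction.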
A local minimum is not necessarily the global minimum. GPR is subject to strict regulations, because strong waves cannot be emitted for health reasons. Also, drone-borne GPR cannot be operated too high above the ground, because of the wave attenuation during propagation in the air.

2.3 Electromagnetic induction (EMI)

2.3.1 EMI principles

As with GPR, electromagnetic induction (EMI) transmits an electromagnetic field into the ground and records the field coming back. Yet the operating frequency is much lower than for GPR: EMI operates in the 1 kHz-100 kHz range (very low frequency (VLF) - low frequency (LF)). At these frequencies, induction phenomena dominate and the recorded field is related to the soil electrical conductivity and magnetic permeability; the effect of permittivity is negligible.

EMI involves the generation of an alternating magnetic field by an electric current in a transmitting coil. When this coil is brought close to a conductive material, such as a metal object, the changing magnetic field induces eddy currents in the material. These eddy currents, governed by Faraday's law of electromagnetic induction, create a secondary magnetic field that opposes the original field. The receiving coil detects this secondary magnetic field, and the characteristics of the signal provide information about the electrical conductivity σ of the material. In a metal detector, the detection system analyzes this signal and triggers an alert to indicate the presence of a metal object. During measurements, we should not wear metallic objects because they interfere with the field.

Horizontal or vertical loops provide different depth sensitivities. A vertical dipole has a peak of sensitivity at about 0.5 m. For shallower sensitivity, horizontal loops are used.

2.3.2 The soil electrical conductivity

What determines the soil electrical conductivity σ (S/m)?
— Texture, and mainly clay content
— Water content
— Salinity (nutrients, salts, pH)
— Organic matter
— Structure and density
— Temperature

As the soil electrical conductivity depends simultaneously on several variables, it cannot be linked directly to one or the other, but site-specific calibrations permit correlating the electrical conductivity with these variables. Soil electrical conductivity is mainly useful to delineate homogeneous management zones and to smartly define soil sampling positions. The soil's electrical conductivity is among the most useful and easily obtained spatial soil properties that influence crop productivity.

Soil electrical conductivity model
Rhoades' relationship: σ = σw θ τ(θ) + σs, with the tortuosity factor τ(θ) = aθ + b
σw = electrical conductivity of the soil water
θ = volumetric water content
σs = electrical conductivity of the solid phase; it is usually related to clay content as σs = c·%clay + d
a, b, c, d = soil-specific parameters

2.3.3 The soil magnetic susceptibility

Magnetic susceptibility χm [-] is also measured by EMI and indicates the degree of magnetization of a material in response to an applied magnetic field. It is related to the relative magnetic permeability as μr = 1 + χm [-] (the permeability itself is μ = μ0 μr [H/m]). The magnetic susceptibility of soil depends on the shape, size, and concentration of magnetic minerals, such as maghemite, magnetite, and titanomagnetites. It is therefore useful to distinguish different pedogenic environments.
Processes affecting magnetic susceptibility:
— Concentration in the original sediment
— Magnetic mineral neoformation in soils (microbially induced, ultrafine, pedogenic magnetite/maghemite)
— Magnetic mineral dissolution (iron-reducing bacteria)
— Concentration or dilution by additions or weathering of other minerals
— Soil firing (can form maghemite) or soil mixing

2.3.4 EMI data processing

Regarding data processing, several approaches have been developed to infer the medium electrical conductivity from EMI measurements. A first, simplified approach, presented by McNeill (1980), assumes a linear relationship between the apparent electrical conductivity and the ratio of the quadrature components of the secondary and primary magnetic fields. This approach provides reliable estimates for low values of the induction number, defined as the ratio between the transmitter–receiver coil spacing and the skin depth. Furthermore, it assumes uniform subsurface electrical conductivity in the volume of influence and, in particular, in the normal direction; measurements therefore correspond to an apparent electrical conductivity, i.e., a depth-weighted average of the electrical conductivity (André and Lambot, 2014).

McNeill's equation: σ = 4/(ωμ0 ρ²) × Hs/Hp
where
ω = angular frequency
μ0 = magnetic permeability of free space
ρ = distance between the coils
Hs = quadrature component (quad-phase) of the secondary electromagnetic field
Hp = quadrature component (quad-phase) of the primary electromagnetic field

Magnetic susceptibility is obtained from the in-phase components of the electromagnetic field. This model is applied after performing calibration measurements with the EMI system. Numerous other approaches are based on the modeling and inversion of solutions of 2-D or 3-D Maxwell's equations, governing the propagation of EM fields in a medium. These approaches are expected to provide better reconstruction results compared with the simplified solutions.
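McNeill's low-induction-number formula can be evaluated directly; a minimal sketch with illustrative instrument values (frequency, coil spacing, and field ratio are hypothetical):

```python
# McNeill's low-induction-number approximation: the apparent electrical
# conductivity from the quadrature ratio Hs/Hp of the secondary to primary
# magnetic field, sigma_a = 4 / (omega * mu0 * rho^2) * (Hs / Hp).
import math

MU0 = 4.0e-7 * math.pi  # magnetic permeability of free space [H/m]

def mcneill_sigma(f, rho, hs_over_hp):
    """Apparent conductivity [S/m] at frequency f [Hz], coil spacing rho [m]."""
    omega = 2.0 * math.pi * f
    return 4.0 / (omega * MU0 * rho**2) * hs_over_hp

# e.g. a 10 kHz instrument with 1 m coil spacing and Hs/Hp = 6e-4:
sigma_a = mcneill_sigma(f=10e3, rho=1.0, hs_over_hp=6e-4)  # ~0.03 S/m
```

Note the quadratic dependence on the coil spacing ρ: doubling the spacing with the same field ratio quarters the inferred conductivity, which is why spacing must be known precisely.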
Nevertheless, commercial EMI instruments are based on the McNeill model and a calibration protocol to directly output the electrical conductivity and magnetic susceptibility.

2.3.5 Applications

Cheval Blanc vineyard: where the conductivity is lower, the NDVI is lower (vegetation grows less).
Mapping ice thickness: knowing the conductivity of ice, it is possible to evaluate its thickness. The lower the measured conductivity, the thicker the ice.

2.4 Time domain reflectometry (TDR)

2.4.1 TDR principles

Time domain reflectometry (TDR) involves sending short electrical pulses down transmission lines, analyzing the reflections, and using the time delay of these reflections to determine the distance and nature of discontinuities. In the case of soil moisture measurement, TDR operates by propagating microwave electromagnetic energy through a transmission line embedded in the soil. The transmitted energy reflects off various electromagnetic interfaces, including the termination point of the probe. The reflected signal is subsequently received by an oscilloscope for further analysis.

— Probe end → full reflection (r = 1)
— Dielectric permittivity ε → wave velocity
— Electric conductivity σ → wave attenuation (the larger σ, the lower the signal)
— Magnetic permeability μ

2.4.2 TDR signal processing

Soil electrical conductivity — method of Giese and Tiemann:

Rtot = Zc (1 + ρ∞)/(1 − ρ∞), with ρ∞ = (Vmax − V0)/(V0 − Vref)

Rtot = total resistance (Ω)
Zc = cable tester impedance (50 Ω)
ρ∞ = reflection coefficient at infinite time

σdc = 1/(Rs g) (S/m), with Rs the sample resistance (Ω) and g the geometry factor of the probe (m)

Full-wave equation (as for GPR) — we assume a homogeneous soil.

2.5 Electrical resistivity tomography (ERT)

2.5.1 ERT principles

Electrical resistivity tomography (ERT) involves injecting electrical current into the ground, measuring the resulting voltages, and using the data to create a detailed image of subsurface resistivity variations.
This non-invasive geophysical method provides valuable insights into the composition and structure of the subsurface. ERT achieves this by providing a spatial distribution of subsurface electrical resistivity through numerous resistance measurements taken from electrodes positioned either on the ground surface or within boreholes, following a predefined geometric pattern.

Four electrodes are used to minimize the effect of contact resistance: a current is forced through two electrodes and the potential difference is measured on the other two.
— To avoid contact problems at the probes, the current is injected through one electrode pair and the potential difference is measured on another.
— To avoid polarization effects, a slowly alternating current is used (e.g., 0.5-2 Hz).
— A switched square wave is the most common waveform.
This technique can also be used in boreholes.

2.5.2 ERT data processing

Resistivity of a homogeneous material: ρ = R A/L [Ω·m], and ρ = 1/σ.
Ohm's law: R = V/I, with V = potential (Volts) and I = current (Ampere).
In a homogeneous and isotropic half-space, the current density J [A/m²] is J = I/(2πr²), where 2πr² is the area of the hemispherical surface of radius r.
V = R I = ρI/(2πr)
In practice, different electrode configurations are used.

2.5.3 Applications

Where the conductivity is large (resistivity low), seawater is intruding into the continent. The bedrock is highly resistive and can be traced at depth.

2.6 Seismics

2.6.1 Seismic principles

Elastic waves (shaking horizontally, vertically, and transversely) are produced in the ground and the resulting echoes are recorded, providing information on the nature and extent of the underlying strata in terms of mechanical properties. Several receivers record reflections from the different ground materials. The frequency range used is very wide.

2.6.2 Seismic waves

Body waves

P-wave (compression wave)
— At Earth's surface, P waves travel at somewhere between 5-8 km/s.
Deeper within the Earth, where pressures are higher and the material is typically denser, these waves can travel at up to 13 km/s.
— P waves can travel through solids, liquids, and gases.
— P-wave velocity is affected by water saturation.

S-wave (shear wave)
— In general, S waves are only 60% as fast as P waves. So, along Earth's surface, they move at speeds between 3-5 km/s.
— S waves cannot travel through liquids or gases.
— S-wave velocity is relatively insensitive to water saturation.

Surface waves
— Love wave
— Rayleigh wave

The seismic wave equation (1D)
Seismic wave propagation can be described by two physical laws: Hooke's law (an object subjected to a force will be deformed: S = E du/dx) and Newton's second law (F = ma).

F = ma = ρ dx dA (d²u/dt²)
F = [S(x + dx) − S(x)] dA = (dS/dx) dx dA = E (d²u/dx²) dx dA
⇒ E (d²u/dx²) dx dA = ρ (d²u/dt²) dx dA ⇒ d²u/dt² = (E/ρ)(d²u/dx²)

S = E du/dx, with E = Young's modulus and du/dx = relative change of size of the material.
It can be shown that 1/v² = ρ/E ⇔ v = √(E/ρ).

Seismic velocities
In general, the velocity decreases when the porosity increases. For a given porosity, velocities increase with increasing saturation. For instance, with a porosity of 60%, the velocity in a dry rock corresponds to the air velocity (330 m/s); in a saturated rock, it is the water velocity (1500 m/s).

Wyllie's law estimates the porosity of a saturated rock: 1/Vr = φ/Vf + (1 − φ)/Vma
φ = porosity (%)
Vr = velocity measured in the rock (m/s)
Vf = propagation velocity in the fluid traveling through the rock (m/s)
Vma = propagation velocity in the matrix (m/s)

Seismic wave attenuation
Attenuation results from geometrical spreading (spherical divergence, 1/R) and mechanical losses (energy dissipation into heat). Mechanical losses increase with frequency. The absorption constant α is related to the quality factor Q as 1/Q = αλ/π, and Q-values differ between P- and S-waves.
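The velocity-modulus relation and Wyllie's time-average law above can be evaluated directly; a small sketch, with Wyllie's law inverted for porosity and all numeric values illustrative:

```python
# Seismic velocity v = sqrt(E / rho) and Wyllie's time-average law,
# 1/Vr = phi/Vf + (1 - phi)/Vma, inverted here to estimate porosity phi.
import math

def seismic_velocity(E, rho):
    """Velocity [m/s] from Young's modulus E [Pa] and density rho [kg/m^3]."""
    return math.sqrt(E / rho)

def wyllie_porosity(Vr, Vf, Vma):
    """Porosity [-] of a saturated rock from the measured velocity Vr,
    the pore-fluid velocity Vf, and the matrix velocity Vma (all m/s)."""
    return (1.0 / Vr - 1.0 / Vma) / (1.0 / Vf - 1.0 / Vma)

v = seismic_velocity(E=9e9, rho=2500.0)                   # ~1.9 km/s
phi = wyllie_porosity(Vr=3000.0, Vf=1500.0, Vma=5500.0)   # water-saturated rock
```

Sanity checks follow from the limits of Wyllie's law: φ → 0 when Vr → Vma (pure matrix) and φ → 1 when Vr → Vf (pure fluid).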
2.6.3 Applications
— Investigation of a river bed
— Investigation of sedimentary basins: with a seismic device, we can see much deeper (larger wavelengths) than with GPR, but with a worse resolution.
— Fault detection
— Tree trunk inspection

2.7 Computing methods in geophysics
2.7.1 Introduction
Computing methods retrieve properties of interest from geophysical data (physical properties, classes, structures, etc.). There are two types of methods:
— Inverse modeling, based on an explicit, physical or empirical, mathematical model
— Artificial Neural Networks (ANN), based on a neuron-like mathematical architecture (empirical) without specifying a particular mathematical model (implicit).

2.7.2 Inverse modeling
Inverse modeling is used to solve Maxwell's equations in GPR measurements. The inverse problem is formulated as a least-squares problem (objective function). The solution must be unique and stable (well-posedness) and the optimization algorithm must be able to find the solution (local vs. global search).

Well-posedness of the inverse problem
— Identifiability: one and only one parameter set leads to a given system response
— Uniqueness: the system response leads to one and only one parameter set
— Stability: the solution is not very sensitive to measurement and modeling errors
— Depends on:
— Quantity of information contained in the system response
— Model sensitivity to the parameters
— Parameter correlations

Parameter uncertainty
Variance-covariance matrix: C = (eᵀe)/(n − p) · H⁻¹, with the Hessian matrix H_ij = ∂²ϕ(b)/(∂b_i ∂b_j), n the number of observations and p the number of parameters.
Confidence intervals: b_i − t_{1−α/2} √C_ii ≤ b_i ≤ b_i + t_{1−α/2} √C_ii
Correlation matrix: A_ij = C_ij / (√C_ii √C_jj)

Model sensitivity with respect to the parameters
S_ij = b_j J_ij, with J_ij = ∂e_i/∂b_j and ∂b_j = 0.01 × b_j

2.7.3 Artificial Neural Networks (ANN)
ANN are based on training examples: the network learns from those examples by automatically inferring rules to recognize patterns.
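The parameter-uncertainty quantities of §2.7.2 can be sketched numerically. This is a minimal example with a hypothetical exponential forward model; the covariance uses the standard linearized (Gauss-Newton) approximation of the Hessian, and the Jacobian is built with the 0.01·b_j perturbation mentioned in the notes:

```python
import numpy as np

def model(b, x):
    # Hypothetical forward model: exponential decay y = b0 * exp(-b1 * x)
    return b[0] * np.exp(-b[1] * x)

def jacobian(b, x, rel_step=0.01):
    # Finite-difference Jacobian of the model response, with db_j = 0.01 * b_j
    J = np.zeros((x.size, b.size))
    y0 = model(b, x)
    for j in range(b.size):
        bp = b.copy()
        bp[j] += rel_step * b[j]
        J[:, j] = (model(bp, x) - y0) / (rel_step * b[j])
    return J

# Synthetic data from known parameters plus noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 50)
b_true = np.array([2.0, 0.7])
y = model(b_true, x) + 0.01 * rng.standard_normal(x.size)

# Assume the least-squares fit already converged near b_true (optimizer omitted)
b = b_true.copy()
e = y - model(b, x)                      # residuals
n, p = x.size, b.size
J = jacobian(b, x)

s2 = (e @ e) / (n - p)                   # residual variance e'e / (n - p)
C = s2 * np.linalg.inv(J.T @ J)          # variance-covariance matrix (Gauss-Newton approx.)
std = np.sqrt(np.diag(C))                # parameter standard errors
A = C / np.outer(std, std)               # correlation matrix A_ij = C_ij / sqrt(C_ii C_jj)
S = J * b                                # sensitivity S_ij = b_j * J_ij
```

Confidence intervals then follow as b_i ± t_{1−α/2}·√C_ii, with the t-value taken for n − p degrees of freedom.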
Basic perceptron
Small changes in inputs and weights lead to small changes in outputs, except for the step function. The goal is to find weights w and biases b so that the output a from the network approximates y(x) for all training inputs x. Objective function to minimize:
C(w, b) = 1/(2n) Σ_x ||y(x) − a||²
A convolutional neural network (CNN) is a deep learning algorithm that takes an input image, assigns importance (learnable weights and biases) to various aspects/objects in the image, and is able to differentiate one from the other. It reduces the image into a form that is easier to process, without losing the features that are critical for a good prediction. The pooling layer is responsible for reducing the spatial size of the convolved feature:
— Max pooling returns the maximum value from the portion of the image covered by the kernel.
— Average pooling returns the average of all the values from the portion of the image covered by the kernel.
Limitations
— Massive training data
— Computing resources
— Training time
— Black box
ANN applications in geophysics
— Waveform recognition
— Picking wave arrival times
— Geological pattern recognition
— Detection of layer boundaries
— GPR hyperbola detection
— Inverse modeling of geophysical data

3 Photogrammetry
3.1 Introduction
Photogrammetry = the science of studying and defining precisely the shapes, dimensions and position in space of any object, essentially by means of measurements (distance, angle, surface, volume, altitude, coordinates, etc.) made from one or more photographs of that object.
Stereoscopy = the set of techniques implemented to reproduce a perception of relief from two plane images.
Photogrammetry exploits the parallax obtained between images from different viewpoints to reconstruct a 3D model of reality.
Parallax = change in the apparent position of an object as seen from two different positions. Multiple observations from different directions allow for estimating the 3D location of points via triangulation.
With γ = 180° − α − β and the law of sines BC/sin α = AC/sin β = AB/sin γ, we obtain AC = AB·sin β/sin γ and BC = AB·sin α/sin γ. The distance d is then d = RC = AC·sin α.
Benefits
— Contact-free sensing
— Relatively easy to acquire a large number of measurements
— Flexible resolution
— 3D sensing
— More than just geometry: RGB values are also provided for semantic interpretation
— Data can be interpreted by humans
Limitations
— Sensitive to operational conditions (e.g., light, camera calibration, lens distortion, featureless surfaces, picture quality)
— Occlusions and visibility constraints
— Requires control points for accurate absolute reconstruction
— Significant processing resources are needed
Photogrammetry workflow
1. Data Acquisition
— Capture images using a camera.
— Ensure sufficient image overlap and coverage.
— Consider ground control points for georeferencing.
2. Image Preprocessing
— Lens distortion correction.
— Radiometric and atmospheric corrections.
— Georeferencing of images using ground control points (GCP).
3. Image Orientation
— Determine interior orientation parameters (e.g., camera calibration).
— Establish exterior orientation parameters (e.g., camera position and orientation).
— Aerial triangulation for refining exterior orientation.
4. Feature Matching
— Identify and match corresponding points in multiple images.
— Establish tie points or ground control points for precise alignment.
5. Point Cloud Generation
— Triangulate matched points to create a 3D point cloud.
— Use dense matching techniques for detailed point cloud generation.
6. Surface Modeling
— Create a digital surface model (DSM) or digital terrain model (DTM) from the point cloud.
— Perform filtering and interpolation as needed.
7. Orthophoto Generation
— Orthorectify the images to remove perspective distortions.
— Generate orthophotos (georeferenced, corrected, and scaled images).
8. 3D Reconstruction
— Create a 3D mesh or model from the point cloud and surface model.
— Texture mapping to apply imagery to the 3D model for realistic visualization.

3.2 Camera basics
There are many cameras, for industrial or consumer uses. Their spectral range spans from the IR (1 µm) to the UV (100 nm). There are also different platforms depending on the altitude and the photo coverage:
— UAV/helicopter sensors are used at low altitude for a small photo coverage.
— Airborne sensors are used at higher altitude for a larger spatial extent.
— Spaceborne sensors are used in space for a very large spatial extent.
An airborne observation system is composed of a chamber, an emulsion/sensor, and a platform that carries it all. An analog photogrammetric chamber is composed of three things:
— Filter
— Optical system, composed of a diaphragm, a shutter, and the lenses; the perspective/optical center is at the center of the lenses;
— Focal plane, where the detector is located (i.e., where the information, digital or analog, is stored).
In the case of an analog chamber, a vacuum is created under the film to keep it taut on the focal plane. As the system must be extremely precise, it is very important to press the film against the bottom of the chamber to avoid small deformations of the film that could lead to errors in the stored information. We also want to avoid the thermal effects of expansion/contraction due to the temperature difference between the aircraft at altitude and the ground.
2 types of digital cameras according to the type of acquisition
— Acquisition by snapshot: images cover a zone with overlaps.
— Acquisition by lines: a triple coverage when acquiring lines, with 3 strips: one backward, one forward and one below. In this way, all points are seen from three views. This is very interesting to obtain information on objects with slopes, for example, but the processing time is longer because three different types of deformations have to be managed at the same time.
Focal length f = theoretical optical distance between the perspective center, i.e., the point where light rays converge, and the focal plane (sensor).
Long focal length (narrow angle of view)
— Long focal length = large zoom and small field of view
— 300 to 400 mm (up to 600 mm)
— Mountain, city, photo interpretation, orthophoto
Short focal length (wide angle of view)
— Short focal length = low zoom and large field of view
— < 150 mm
— Aero-triangulation, reconnaissance flights, digital elevation models
Principal point = point on the photograph located at the foot of the perpendicular to the photograph passing through the perspective center.
Nadir point = point on the photograph which corresponds to the ground nadir, i.e., the point at which a vertical line (plumb line) from the perspective center to the ground nadir intersects the photograph.
Coordinate systems: the x, y coordinate plane is parallel to the photograph and the positive x-axis points toward the flight direction.
Scale: it is not constant and varies with the ground distance (relief, curvature of the earth, altitude variations of the aircraft). For an orthophoto, however, the scale is constant.
scale = photo distance / ground distance = f/H′
Height of a vertical object from only one image: h = (d × H)/r, with d the relief displacement of the object top, r the radial distance from the nadir point to the displaced top, and H the flying height above the object base.
The projection differs from the ground observation and is a function of the camera. An aerial photo has a central projection while a scanner has a projection around the nadir line.

3.3 Stereophotogrammetry
Stereophotogrammetry is used to combine two or more overlapping images. The technique is triangulation. The end lap is the overlap of successive photos along a flight strip and the side lap is the overlap of adjacent flight strips (e.g., 60-80% end lap, 20-30% side lap). The relative displacement of an object from the center of the image depends on its vertical distance from the camera.
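The photo-scale and relief-displacement relations of §3.2 can be sketched with hypothetical numbers:

```python
def photo_scale(f_m, flying_height_m):
    """Photo scale = f / H' (focal length over flying height above ground)."""
    return f_m / flying_height_m

def object_height(d, r, H):
    """Height of a vertical object from a single photo: h = d * H / r,
    with d the relief displacement of the top, r the radial distance from
    the nadir point to the displaced top, and H the flying height above the base."""
    return d * H / r

# Hypothetical example: f = 150 mm camera flying 1500 m above ground
scale = photo_scale(0.150, 1500.0)        # 1:10000
# A tower imaged with d = 2 mm displacement at r = 80 mm from the nadir point
h = object_height(0.002, 0.080, 1500.0)   # 37.5 m
```

Note that all photo-plane quantities (f, d, r) are in the same units (meters here), so the formulas stay dimensionally consistent.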
Linear parallax = apparent displacement of the position of an object with respect to a reference point or system, when viewed from a different position of observation. The differential parallax is the difference between the stereoscopic parallax at the top and at the base of the object. The top moves faster because it is closer to the camera and the foot moves slower because it is further away. With a = top and b = foot, the parallax difference is dP = Pa − Pb.
Parallax measurements are made on stereopairs of vertical aerial photographs which have equal focal lengths and equal camera heights above the datum. The parallax difference between two (non-corresponding) image points is the value used to determine the difference in elevation between the imaged ground points. Among the many formulas used to calculate ground point elevation differences from parallax measurements, one can be used when the air bases b1 and b2 of the two images are similar.

Reconstruct a 3D spatial model
From photographs, considering that the pose (pose = position [X, Y, Z] and attitude [ω, ϕ, κ]) of the camera with respect to the world is unknown (for each photograph). Orientation operations are used to determine these unknowns thanks to the principle of collinearity.
— Interior orientation (2D transformation) = relationship between a measuring system and the photo-coordinate system.
— Exterior orientation of a stereopair = relationship between the photo-coordinate system and the object coordinate system:
— Relative orientation (collinearity equations and coplanarity condition)
— Absolute orientation (7-parameter transformation)

3.4 Interior Orientation
Interior orientation refers to the parameters and information that describe the geometric characteristics and calibration of the camera. It includes parameters like the focal length of the lens, principal point offsets, lens distortion corrections, and other camera-specific characteristics.
First, there are preliminary steps to recreate the conditions of a central projection: correction for radial distortion (optical system), atmospheric refraction, and Earth curvature.
Correction for radial distortion
— Central projection is an ideal representation that does not generally correspond to physical reality.
— The optics used are thick lenses that are often asymmetrical (objectives made up of various types of lenses to correct aberrations).
— Consequently, the reference axis is the RPA (calibrated principal ray) rather than the theoretical optical axis, the image plane is not strictly perpendicular to the optical axis, and the calibrated focal length differs from the optical focal length.
— The camera requires a calibration certificate: with a calibrated focal length (c), with the x, y coordinates of the principal point in the measuring system, and with the parameters k1, k2, k3 of the radial optical distortion:
∆ρ = ρ(k1ρ² + k2ρ⁴ + k3ρ⁶), with ρ the radial distance.
Correction for atmospheric refraction
— Function of air pressure P, air temperature T and air humidity H
— It varies along the light ray through the different layers of the atmosphere (with different densities)
— Function of wavelength
— The light ray is curved by atmospheric refraction (positive radial distortion)
— The radial displacement from the ideal perspective geometry can be computed by: ∆r_ref = k(r + r³/c²). It is based on a model atmosphere, with r the radial distance, c the focal distance, Z or h the ground elevation (km), and Z0 or H the flying height (km).
Correction for earth curvature
— Related to the definition of coordinate systems
— Only needed if the control points (elevations) are not in a cartesian coordinate system
— Correction with opposite sign to the refraction correction
— If we approximate the datum by a sphere of radius R = 6372.2 km, then the radial displacement can be computed by: ∆r_earth = r³(H − h)/(2Rc²)

3.5 Exterior Orientation
Exterior orientation refers to the parameters and information that describe the position and orientation of the camera in 3D space. This includes the camera's position (X, Y, Z coordinates) and orientation (roll, pitch, yaw angles) in relation to the ground or objects being imaged.
Collinearity
An object point, its image and the perspective center are located on a single line. Collinearity equations mathematically represent the general relationship between the photo coordinates and the object coordinates.
Vector from the perspective center to the image a of point A, in the image coordinate system, with xp, yp the coordinates of the principal point:
v⃗i = (xa − xp, ya − yp, −c)ᵀ
Vector from the perspective center to A, in the object coordinate system, with X0, Y0, Z0 the coordinates of the perspective center:
V⃗0 = (Xa − X0, Ya − Y0, Za − Z0)ᵀ
To mathematically represent the general relationship between the photo and object coordinates:
v⃗i = (1/e) Rᵀ(ω, ϕ, κ) V⃗0
i.e., with (Rᵀ)_ij = r_ji:
xa − xp = (1/e)[r11(Xa − X0) + r21(Ya − Y0) + r31(Za − Z0)] (1)
ya − yp = (1/e)[r12(Xa − X0) + r22(Ya − Y0) + r32(Za − Z0)] (2)
−c = (1/e)[r13(Xa − X0) + r23(Ya − Y0) + r33(Za − Z0)] (3)
By dividing the first by the third and the second by the third equation, the scale factor 1/e is eliminated, leading to the following collinearity equations:
xa = xp − c [r11(Xa − X0) + r21(Ya − Y0) + r31(Za − Z0)] / [r13(Xa − X0) + r23(Ya − Y0) + r33(Za − Z0)]
ya = yp − c [r12(Xa − X0) + r22(Ya − Y0) + r32(Za − Z0)] / [r13(Xa − X0) + r23(Ya − Y0) + r33(Za − Z0)]
There is a single point xa, ya on the photo for any object point in space. But we want to obtain the object coordinates from the photo coordinates. Inverting the relation (V⃗0 = e·R v⃗i) and eliminating e in the same way gives:
Xa = X0 + (Za − Z0) [r11(xa − xp) + r12(ya − yp) − r13 c] / [r31(xa − xp) + r32(ya − yp) − r33 c]
Ya = Y0 + (Za − Z0) [r21(xa − xp) + r22(ya − yp) − r23 c] / [r31(xa − xp) + r32(ya − yp) − r33 c]
For each point xa, ya on the photo there corresponds an infinite number of object points in space. Every measured point leads to 2 equations but also adds three further unknowns, namely the coordinates of the object point (Xa, Ya, Za). The problem cannot be solved with only one photograph, unless we have information about Za.
The six parameters X0, Y0, Z0, ω, ϕ, κ are the unknown elements of exterior orientation, i.e., the camera position determined by the location of its perspective center and by its attitude, expressed by three independent angles. The image coordinates xa, ya are normally known (measured).
xa, ya = f(X0, Y0, Z0, ω, ϕ, κ, Xa, Ya, Za) → 3 control points (in object coordinates) are measured (6 equations) to solve for the 6 parameters of exterior orientation.
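The forward collinearity equations can be sketched as a small projection function. This is a minimal example; the rotation order and sign conventions below are one common photogrammetric choice and vary between textbooks:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R = R_kappa @ R_phi @ R_omega (one common convention; others exist)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity_project(X, X0, R, c, xp=0.0, yp=0.0):
    """Project object point X into photo coordinates via the collinearity equations:
    x = xp - c * u_x / u_z and y = yp - c * u_y / u_z, with u = R^T (X - X0)."""
    u = R.T @ (np.asarray(X, float) - np.asarray(X0, float))
    x = xp - c * u[0] / u[2]
    y = yp - c * u[1] / u[2]
    return x, y

# Sanity check: for a vertical photo (R = I) taken 1000 m above the ground,
# the point directly below the perspective center projects to the principal point.
R = rotation_matrix(0.0, 0.0, 0.0)
x, y = collinearity_project([0.0, 0.0, 0.0], [0.0, 0.0, 1000.0], R, c=0.15)
```

For this vertical case, a ground point 100 m from the nadir maps to 15 mm in the photo plane, consistent with the scale f/H = 0.15/1000.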
The parameters of the interior orientation are expected to be known, i.e., the position of the principal point xp, yp and the calibrated focal length c in photo coordinates. Additionally, 3 parameters for radial distortion and 3 parameters for tangential distortion can be added.
The application of single photographs in photogrammetry is limited because they cannot be used for reconstructing the object space. Even though the exterior orientation elements may be known, it will not be possible to determine ground points unless the scale factor of every bundle ray is known.
Stereopair: 12 independent parameters: XO1, YO1, ZO1, ω1, ϕ1, κ1 + XO2, YO2, ZO2, ω2, ϕ2, κ2.
The exterior orientation of a stereopair is done by first orienting one photograph with respect to the second (the relative orientation), and then orienting this pair of photos with respect to the object coordinate system on the field (the absolute orientation).
3.5.1 Relative orientation
To establish the relative orientation, we need to find five homologous points on the two photographs. These are referred to as corresponding points. The rotation of the second camera with respect to the first one gives us:
— The differences in the angles of view between the two shots (dω, dϕ, dκ)
— The direction of the line b connecting the two centers of projection (2 parameters: by, bz).
At this stage, the distance b separating the two images is still unknown. Those parameters dω, dϕ, dκ, by, bz are five of the twelve parameters needed to solve the equation system. The result of the relative orientation is the so-called Photogrammetric Model.
It is an intermediate system that allows the transition from the photo-coordinate system to the object coordinate system; the photogrammetric model reproduces the geometry that existed between the two photographs at the time of shooting, but at an arbitrary scale, position and orientation, specific to the defined model system.
To identify homologous/corresponding points from one image to another, we can use epipolar geometry. When two cameras view a 3D scene from two distinct positions, there are a number of geometric relations between the 3D points and their projections onto the 2D images that lead to constraints between the image points. For a better understanding, it is worth referring to Figure 7: using a distortion-free lens, (i) the projection centers, (ii) the observed point, (iii) the epipolar lines and (iv) the image points all lie in the epipolar plane. This simplifies the task of predicting the location of one specific point in the other image. In other words, the corresponding point (a2) lies on the epipolar line: this reduces the search space from 2D (i.e., the whole image) to 1D (one line).
Figure 7 – Epipolar geometry. The epipolar axis (B) is the line through the two projection centers L1 and L2. The epipolar plane depends on the projection centers and the point (A). The epipolar lines are the intersection of the epipolar plane with the image planes.
3.5.2 Absolute orientation
The relative orientation helped to retrieve the first 5 (out of 12) parameters necessary for the exterior orientation. The absolute orientation now aims to place the image coordinate system in relation to the object coordinate system. For this we need 7 additional parameters (to reach 12):
— XU, YU, ZU: 3 translations, object coordinates of the image frame origin
— e: 1 scaling factor
— ω, ϕ, κ: 3 rotation parameters to switch from the image to the object coordinate system
In conclusion, we need at least 3 control points in 3D (for 7 parameters).
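The 7-parameter transformation of the absolute orientation (3 translations, 1 scale, 3 rotations) can be sketched as a similarity transform applied to model coordinates. A minimal example with hypothetical values (the rotation is written directly as a matrix; in practice the 7 parameters are estimated by least squares from at least 3 control points):

```python
import numpy as np

def similarity_transform(points, t, e, R):
    """Apply the 7-parameter transformation X = t + e * R @ x to model points
    (t: 3 translations, e: scale factor, R: 3x3 rotation matrix)."""
    return t + e * (R @ np.asarray(points, float).T).T

# Hypothetical example: scale the photogrammetric model by 2,
# rotate 90 degrees about Z and translate into object coordinates.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])        # 90 deg rotation about the Z axis
t = np.array([100.0, 200.0, 50.0])
model_pts = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
object_pts = similarity_transform(model_pts, t, 2.0, R)
```

Each control point contributes 3 equations, so 3 points give 9 equations for the 7 unknowns, leaving redundancy for a least-squares check.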
3.5.3 Exterior orientation - summary
What do we need to calculate a 3D model of the scene?
— 3 known 3D coordinates (control points)
— or a camera pose (position + heading) and the baseline
— or the baseline only, for a 3D model without a reference frame
3.6 Aero-triangulation
Aero-triangulation = task of estimating the 3D location of points as well as the camera parameters using aerial images. It is the overall restitution of a block of images, limiting the number of control points thanks to the end laps (ρ = 60%...90%) and side laps (q = 20%...80%).
Bundle (block) adjustment is the default technique for addressing this problem.
— Least squares approach to estimating the camera poses and the location of 3D points in the environment.
— Exterior orientation of multiple images and observed points (image coord. → object coord.)
— Key idea:
— Start with an initial guess
— Project the estimated 3D points into the estimated camera image
— Compare locations of the projected 3D points with the measured (2D) ones
— Adjust to minimize the error in the images
— Requires solving a large system of equations (requires numerical tricks)
— Only a small number of control points is needed (typically full CPs at the boundaries)
(D)GPS + IMU + Camera
— Combination with GPS/IMU is standard today ("CPs in the projection centers")
— With D-GPS/IMU, we obtain a 5-10 cm accuracy for the projection centers
— Roll, pitch, and yaw information from the IMU
— This information is added as additional observations in the bundle adjustment
Summary
— Photogrammetric measurements lead to precise estimates at large scales
— Aero-triangulation results in estimates with σXY ≈ 2.5 cm
— CPs at the borders and HCPs along the strips are needed for the camera
— Combination with GPS/IMU is standard today
— Bundle adjustment: gold standard approach
— Fully automated software solutions available
3.7 Orthophoto
Orthophoto = image of the object surface in orthogonal parallel projection.
All the rays from the image plane to the object are roughly parallel. It is an image at a constant scale in a map coordinate system, which allows us to measure distances that correspond to distances in the real world. It is generated from one or several images of the surface of interest, but we also need:
— Information about the geometry (3D) of the scene: a Digital Surface Model (DSM) (LiDAR, dense image matching, etc.)
— Information about where the image has been taken from (exterior orientation)
— Camera parameters (interior orientation)
2 main error sources
— Strong relief leads to occlusions (= regions that are visible in the orthophoto but are not observable in the original image) → use of multiple images to obtain an orthophoto without holes
— Height errors in the surface model are mapped to planimetric errors in the orthophoto. Points are displaced in the radial direction w.r.t. the nadir. → Vertical DSM errors lead to a radial displacement
An orthophoto is called a true orthophoto if the parallax of vertical objects is also corrected.
Applications of photogrammetry
— Environmental monitoring
— Elevation models
— Maps
— Orthophotos
— Architecture
— 3D city models

4 UAVs for environmental monitoring
4.1 Introduction
Drones work at the management scale. They are used for commercial and industrial purposes: shipping, parcel delivery, mapping, insurance claims, weather forecasting, land surveying, etc. There are many names, but the most widely used is UAS (Unmanned Aircraft System). The three Ds of drone development and ideology: Dull, Dirty, Dangerous.
Why UAVs?
— Satellites (100 cm resolution): lower resolution, scheduled acquisitions, blocked by clouds, limited coverage in some areas.
— Planes (25 cm): middle resolution, expensive, blocked by clouds, inefficient for small areas.
— UAS (1 cm): high resolution with fewer corrections needed, on demand, can fly under clouds, efficient for small areas.
4.2 Unmanned Aerial System (UAS)
The drone is controlled by a ground control station (GCS) and sends information back to the station.
4.2.1 The Unmanned Aerial Vehicle (UAV)
Drones come in all shapes and sizes (from 10⁻¹ to 10⁶ grams). Each type has its advantages and disadvantages:
— Multirotor. Pros: ease of use, suited to confined areas, accessibility, hover flight. Cons: short flight times, small payload.
— Helicopter (single rotor). Pros: VTOL and hover, increased endurance, heavier payload. Cons: more dangerous, harder to fly, expensive.
— Fixed wing. Pros: long endurance, fast flight speed, feasible pricing. Cons: needs larger landing areas, harder to fly, expensive, no hover.
— VTOL (Vertical Take-off and Landing) / Hybrid. Pros: combines rotor and wing advantages. Cons: not the best at forward or hovering flight, still in development, expensive.
There are several configurations of multirotors (+, X, hexagon, octagon, etc.) with rotors turning clockwise (CW) or counterclockwise (CCW).
The 4 forces of flight are thrust/drag and lift/weight. There is more pressure below the wings and less above, which is why the aircraft glides. The outer part of a propeller blade is narrower than the inner part because it moves faster, which is compensated by the blade thickness.
Fixed-wing aircraft and multirotors use different methods of control. Multirotors use differential thrust and torque: to turn (right or left motors), to move forward (rear motors spin faster), and to rotate on themselves (diagonal motors spin faster). Traditional fixed-wing aircraft change their wing chord relative to the wind. Propeller speed depends on the wind, which sometimes blows in a different direction than the propellers turn.
Roll is the rotation about the x-axis, pitch about the y-axis, and yaw the rotation about the vertical (z) axis: the drone turns on itself.
UAV components
— Control receiver: inputs from the RC transmitter
— Power Distribution Board (PDB): distributes power to motors and other components
— Flight controller/autopilot: contains inertial measurement units (IMU) and calculates interactions between parameters, sensors, and motors
— GPS (not shown in photo): for positioning; also contains a magnetometer for heading
— Electronic speed controllers: regulate motor power, which controls motor speed (connected to the distribution board and autopilot)
— FPV/downlink transmitter: receives mission packets and sends telemetry and sensor data
When you fly a drone with a camera, it is like seeing through a tube: you cannot see around it. Sensors are now developed to make drones more aware of their surroundings. Sensor weight influences motor power and maximum flight time. This is specific to the platform and further affected by placement (stability) and aerodynamics. An additional factor appears when the sensor is powered from the UAV battery, depending on its power consumption.
Emerging methods: radar, Laser Obstacle Avoidance and Monitoring (LOAM), motion detection, electro-optical (EO), infrared, sonar or ultrasonic sensors, transponders.
Sensors: RGB, modified RGB, multispectral, LiDAR, thermal, hyperspectral.
4.2.2 The Ground Control Station (GCS)
A GCS is a combination of hardware and software for communication, information, planning, and control to and from the drone. Sizes vary widely (it can be a room, a computer, or, more commonly, a phone), and so do equipment and complexity. Communication distances vary from 100 m (Bluetooth) to global (satellite). The most common is remote control at about 2 km with frequency-hopping spread spectrum (2.4 GHz, 5.8 GHz, 900 MHz, 400 MHz).
4.3 Mission Planning
Common parameters are path (e.g., survey grid), speed, height above ground, heading, and sensor forward & side overlap. Some parameters are sensor- and project-specific. Flight profiles depend on the project scope.
Several templates are prepared in most software packages: single grid/hatch (the most common) for orthomosaics; double grid/crosshatch (better for 3D information); orbit (3D objects). These flight profiles are usually already programmed.
Mission planning software
— Automates complex parameters
— Uses a graphical and geospatial domain
— Has typical Graphical User Interfaces (GUI):
— Reference base map
— Drone location & flight tracking
— Waypoints
— Drone telemetry or health information
— Commercial: integrated with the most popular UAV platforms and most intuitive
— Drone Deploy
— Pix4D Capture
— Drone Harmony
— UgCS
— DJI GS Pro
— Open source: most possibilities but biggest learning curve
— Mission Planner/APM
— QGroundControl
— UAV- and company-specific: usually comes with training from the manufacturer
When a flight is planned, it is important to know the surrounding environment: changes in terrain, trees, structures (towers, power lines), and airspace (airports, restricted areas, geo-zones). Scouting with Google Earth or in person is essential before planning the mission.
Ground Sampling Distance
GSD is defined as GSD = HP/f, with H = height above ground (m), P = sensor pixel size (m), and f = focal length (m). The height of the drone changes the size of the ground pixel: the higher the drone, the larger the pixel. The sensor pixel size is: pixel size (m) = sensor width (m) / definition (number of pixels across the sensor).
Image overlap is required for photogrammetry and computer vision, and the desired overlap needs to be planned. A forward overlap of at least 70% and a side overlap of at least 60% are generally suggested. Side overlap affects project efficiency (duration) the most; you can be generous with the forward overlap. Terrain changes must be taken into account when planning the overlap if the flight does not use terrain altitude correction.
Other considerations
Don't forget to take flight speed v, altitude, and shutter speed t into account to avoid blurred images.
Indeed, if you fly at a low altitude, you may get blurred images, depending on speed too. Blur parameter:
b = vt/GSD ≤ 1.5
To minimize blur, the distance traveled by the platform during exposure should be less than 1.5 times the GSD. Shutter speeds range from 1/250 s to 1/1000 s, with slower speeds increasing the likelihood of damaging motion blur.
Georeferencing
GCPs are spread evenly across the study area and their location is measured with a differential GPS. Their size (chosen according to the GSD) and color are important for visibility from the air. It is time-consuming, but it is a trusted method: the GPS-marked locations are referenced in the images in post-processing. For positioning, the UAV GPS is the fastest way but less precise (1 m) than dGPS-surveyed GCPs (1 cm). It is possible to use a GPS base station (stationary) to collect satellite information correcting the UAS GPS (rover) and increase the georeferencing accuracy (1 cm). The base station must be installed for at least one hour. It can be faster and safer than placing GCPs (ground control points). (RTK (Real-Time Kinematic), PPK (Post-Processed Kinematic).)
Weather
— High winds
— Visibility
— Rain
— Turbulent air
— Thermals
— Wind shear (storm downdrafts)
— Kp index: global geomagnetic activity
There is an app (UAV Forecast) that tells you whether conditions are right for flying or not.
In project planning, take account of:
— Area to cover
— Desired accuracy or GSD
— Flight height and path (regulations and terrain)
— Overlap and side lap (sensor type, resolution, mission type, time)
— Weather and regulations
4.4 Belgian and EU Airspace and Regulations
There are regulatory aspects to drone flights, such as not flying over airports. Since 2020, there has been a harmonization of regulations; before, all countries had different rules.
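Returning to the mission-planning parameters of §4.3, the GSD formula and the motion-blur rule (distance flown during exposure below 1.5 × GSD) can be sketched with hypothetical sensor values:

```python
def gsd(height_m, pixel_size_m, focal_length_m):
    """Ground sampling distance GSD = H * P / f."""
    return height_m * pixel_size_m / focal_length_m

def blur_ok(speed_mps, shutter_s, gsd_m, factor=1.5):
    """Motion-blur check: the distance flown during exposure (v * t)
    should stay below 1.5 x GSD."""
    return speed_mps * shutter_s <= factor * gsd_m

# Hypothetical example: 120 m flight height, 3 um pixels, 8 mm focal length
g = gsd(120.0, 3.0e-6, 8.0e-3)      # 0.045 m, i.e. 4.5 cm per pixel
ok = blur_ok(10.0, 1 / 1000, g)     # 10 m/s with a 1/1000 s shutter
```

With these numbers, a 1/1000 s shutter is safe at 10 m/s, whereas a 1/100 s shutter would travel 10 cm during exposure and fail the check.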
Belgian airspace and regulations
— Royal Decree (April 10, 2016)
— New Royal Decree (December 31, 2020)
— Executing/complementary to the EU regulation (where a choice is left to member states)
— Competent authority: Belgian Civil Aviation Authority (AACB)
European airspace and regulations
— New European regulation on drones (December 31, 2020)
— Implementing act (IA): operation and registration
— Delegated act (DA): CE marking, technical requirements, etc.
EASA sets out the rules by country. There are 3 categories, whose main characteristic is the increase in risk and the consequent need for authorization: open, specific and certified. For the open category, the legislation is based on drone weight: under 900 g drones can fly over people, under 4 kg they can fly near people, and under 25 kg they must fly away from people. You need a certificate/authorization/declaration and a test to fly a drone, and there are flight restriction zones.
4.5 Common Data Types
Common output options for UAV photogrammetry software
— 3D point cloud
— 3D mesh
— Digital Elevation Models (DEM or DTM/DSM) (2D, but considered 2.5D due to the height attribute)
— Orthomosaics (2D)
3D point clouds and 3D meshes
Point clouds are points with X, Y, Z coordinates, typically with additional attributes such as color or point intensity. They keep details and are not interpolated (better for a survey). Point clouds can be very accurate, but there may be missing points, whereas meshes fill in the gaps but are less accurate and miss details. Point clouds are commonly interpolated into solid 3D elements or surface models (meshes and DEMs for hydrological models or even 3D printing). Errors are harder to quantify from the photogrammetry process with meshes and DEMs.
A DEM has height values and can be visualized in 3D, but it is not true 3D; a 3D mesh is real 3D. The DEM is used because it is easier to process, but it carries less information.
To do photogrammetry of buildings, the drone has to be at an angle to avoid the effects of backlighting and shadows. The drone must pass over and then offset at an angle of 60°. The same applies to a line of trees : as the angle increases, more information is obtained on the height of the trees.

4.6 Sensor information
4.6.1 UAV Multispectral Camera
UAV multispectral cameras work in the 250 nm to 15000 nm wavelength range. We use this UAV type to calculate vegetation indices.

Normalized Difference Vegetation Index (NDVI) = generic index used for leaf coverage and plant health.
NDVI = (NIR − RED) / (NIR + RED)

Normalized Difference Red Edge (NDRE) = index sensitive to the chlorophyll content of leaves against soil background effects.
NDRE = (NIR − REDEDGE) / (NIR + REDEDGE)

4.6.2 UAV Thermal IR Camera
When a material gets hot, it radiates, showing its intensity at each wavelength. When a plant is water-stressed, it is not transpiring and radiates more heat.

Thermal cameras have low resolution and therefore need higher altitudes to increase the texture available for tie points in the photogrammetry process. Thermal calibration targets are needed to obtain absolute temperature values. The sensor is affected by camera temperature changes, which often correlate with flight direction and wind. With thermal sensors on drones, we look at the temperature of the ground according to the time of day, which gives us an idea of the properties of the ground, such as its moisture. Thermal sensors are quite difficult to use. To retrieve the temperature, we place a reference standard on the ground and compare it directly with the image data.
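The two vegetation indices above are simple band ratios and can be computed pixel-wise. A minimal sketch with NumPy (the reflectance values in the example are illustrative, not from the course):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index = (NIR - RED) / (NIR + RED)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalized Difference Red Edge = (NIR - RE) / (NIR + RE)."""
    nir, red_edge = np.asarray(nir, float), np.asarray(red_edge, float)
    return (nir - red_edge) / (nir + red_edge)

# Healthy vegetation reflects strongly in NIR and absorbs red light,
# so the first pixel (high NIR, low red) scores much higher than the second
print(ndvi(nir=[0.50, 0.30], red=[0.08, 0.20]))
```

Both indices lie in [−1, 1] ; dense green canopy pushes NDVI toward 1, bare soil and dead vegetation toward 0.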
Calibration methods
Small UAS thermal imagery issues
— Vignetting from non-uniform cooling and temperatures around the lens, and during non-uniformity corrections (NUC)
— Sensor temperature noise
— Wind effects on sensor temperature fluctuations
— Ground references needed for accurate assessment and corrections
Presented solutions
— Externally heated shutter
— Sensor covering to contain sensor heat and shield it from wind
— Slower flights planned to minimize the impact of wind
— Construction of ground thermal reference targets

4.6.3 UAV LiDAR
Light Detection and Ranging (LiDAR) = method for measuring distances by illuminating the target with laser light and measuring the return times of the reflections with a sensor.

There are several considerations to bear in mind ; decisions should be based on manufacturer specifications.
— Airspeed v (m/s) : faster = lower point density
— Altitude h (m) : higher = lower density and larger swath
— Scan angle θ (°) : lower = less overlap = lower point density ; higher = less precision and accuracy
— Pulse repetition frequency PRF (Hz) : higher = more points ; lower = stronger signal
— Surveyed area rate A = 2vh·tan(θ) (m²/s)
— Mean point density d = PRF / A (points/m²)

Error sources
— Increased scan angle and distance
— Weaker signal
— Increased beam divergence
— Longer atmospheric path, experiencing more refraction
— Signal bouncing
— Increased interpolation error

LiDAR intensity is recorded as the return strength of a laser beam and depends on signal bandwidth, surface reflectivity, angle of incidence and range. This intensity, compared with the NDVI, indicates water stress : when NDVI is low (i.e. vegetation is dead), LiDAR intensity is low. However, beyond a scan angle of 30°, the signal becomes mere noise and is useless. For facades, a double grid (high point density) and a wide angle provide the best coverage in urban environments.
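The two planning formulas above combine directly into a density estimate. A minimal sketch (the function name and the flight parameters in the example are illustrative assumptions):

```python
import math

def lidar_density(v, h, theta_deg, prf):
    """Mean ground point density for a scanning LiDAR.

    v         : airspeed (m/s)
    h         : altitude above ground (m)
    theta_deg : scan (half) angle theta (degrees)
    prf       : pulse repetition frequency (Hz)

    Surveyed-area rate A = 2*v*h*tan(theta)  (m^2/s)
    Mean point density d = PRF / A           (points/m^2)
    """
    area_rate = 2 * v * h * math.tan(math.radians(theta_deg))
    return prf / area_rate

# Example : 5 m/s at 50 m AGL, 30 deg scan angle, 240 kHz PRF
print(round(lidar_density(5, 50, 30, 240_000), 1))  # ~831 points/m^2
```

The trade-offs in the list above are visible in the formula: doubling v or h halves d, while a larger θ spreads the same PRF over a wider swath.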
LiDAR Project Planning
— High scan angle : urban reconstruction
— Medium scan angle : trees or vineyards for volume
— Low scan angle : accurate crop height

LiDAR VS Photogrammetry
LiDAR (active sensor) :
— More accurate in the Z axis (nadir configuration)
— Can provide information underneath the vegetation canopy
— Not affected by shadows
— Attributes : intensity, echo, scan angle, time
— Max altitude = 50 m
Optical (passive sensor) :
— More accurate in X and Y
— Cheaper and more accessible sensors
— More software available
— Attributes : RGB texture ; features can be verified visually
— Max altitude = 120 m

Limitations
LiDAR :
— Lower operating flight altitudes (less efficient data collection)
— More expensive and heavier sensors
— Less available software
— Requires in-house software, computing hardware and a specialist who can properly render and correct the raw data (no cloud services like photogrammetry)
Optical :
— Moving objects
— Shiny, transparent or reflective surfaces
— Poor-texture or texture-less surfaces

At present, there is a wide range of manufacturer-specific software for extracting and creating LAS (LASer) files from LiDAR on-board computers with the appropriate calibrations. Several post-processing options exist for LiDAR products : CloudCompare (open source), LAStools (commercial), LiDAR360 (commercial).

Photogrammetry software comes in 3 common variants : open source, commercial desktop software and commercial cloud processing.
— OpenDroneMap (ODM) : common open-source software, but with a steeper learning curve and fewer features.
— Pix4D : common commercial desktop version with moderate user interaction in the processing flow. Features a cloud-computing option with less user interaction.
— Metashape : comparable to Pix4D but with lower computing speed and quality. Less expensive and includes an API for automation, unlike Pix4D.
— DroneDeploy : popular commercial cloud-computing software with little user interaction and few options, but the easiest to use, with lots of ready-to-use tools.
4.7 Product Examples
4.7.1 UAV in Precision Agriculture
Applications
— Early detection of weeds : RGB and hyperspectral
— Nutrient status assessment : multispectral + satellite
— Pathogen detection : hyperspectral + thermal
— Water stress detection : thermal, with NDVI comparison
— Yield prediction : multispectral + satellite
Using several sensors combines the advantages of high-resolution, on-demand UAS data, providing a more complete picture to improve management strategies. Combining LiDAR-based estimates of the Plant Area Index (PAI) with multispectral estimates of the Green Area Index (GAI) has enabled a hybrid estimation of the Brown Area Index (BAI) from their difference.

Agricultural case study : evapotranspiration
Thermal IR can be used to estimate evapotranspiration using energy-balance models such as TSEB. These estimates can be compared with traditional eddy-covariance measurements.

4.7.2 UAV in Forestry
Applications
— Overhead photographic insight into hard-to-inspect areas
— Measuring volumes quickly while providing accurate tree counts
— Mapping harvest units and conducting post-harvest waste assessments
— Inspections completed in a fraction of the time
— Assessing plant health and damage
— Viewing 3D images of forests

4.7.3 UAV for Rock Glacier Tracking
The advantages of multi-rotor perspectives (oblique and nadir) and flight maneuverability in confined spaces have enabled the creation of a well-depicted 3D model of a rock glacier in Austria. Measurements between two temporally different point clouds (2012 – 2019), a Terrestrial LiDAR Scanner (TLS) survey from 2012 and UAS photogrammetry from 2019, enable the movement of the rock glacier front to be tracked.
4.8 Future Perspectives
— Sensors : continue to get smaller, with hybrid sensor concepts
— Flight time : new battery chemistries, ground-based laser charging, autonomous charging stations
— Automation : ongoing AI refinement, detection-and-avoidance technology, and redundancies
— Increasingly open and accessible regulations

5 Instruments and methods for topographic surveying

5.1 Angle measurement
An optical theodolite is an instrument able to measure horizontal (goniometer) and vertical (eclimeter) angles.
Stationing = making the main axis vertical over a specific point :
1. Fix the tripod height
2. Coarse wedging with the optical plummet and the spherical bubble above the station point, using the setting screws
3. Fine wedging with the toric level
4. Verification by rotating the device
5. Wedging V3
6. Verification of the station point
Sensitivity = angle value of one level division

Instrumental errors
— Axis orientation errors
— Eccentricity errors (coincidence of points)

Non-vertical main axis : station setup error
— The double reversal of the telescope does not change the sign of the error ε.
— The error on φ (vertical collimation) is not eliminated but is usually negligible.
— Error ε (horizontal) : tan φ = h/d, tan α = a/h, tan ε = a/d = tan α · tan φ
— Error φ (vertical) : tan φ = h/d

Secondary axis not perpendicular to the primary axis : tilt or trunnion error
— The error ε changes sign when performing a double reversal.
— The error on φ is not eliminated but is usually negligible.
— cos α = h/r, tan φ′ = r/d = tan φ / cos α

Figure 8 – Station setup error
Figure 9 – Tilt or trunnion error

Optical axis not perpendicular to the secondary axis : horizontal collimation error
— The error ε changes sign when performing a double reversal.
— There is no error on φ : the projection of the angle on the vertical plane remains unchanged.
Error ε (horizontal) : cos φ = c/d, tan α = a/c, tan ε = a/d = tan α · cos φ

Figure 10 – Horizontal collimation error

Vertical collimation error : the error α changes sign when performing a double reversal.
Other errors : axis eccentricity faults, graduation defects,...

Horizontal angles
— Basic reading : HZ = LB − LA
— Double reversal (simultaneous half-turn of the telescope and the alidade, reducing measurement errors) : HZ = (Hz + (H′z ± 200)) / 2 (readings in gon)
— Sequence : generally, we close the horizon circle on the first direction to check that the instrument has not moved : LR = (LR1 + LR2) / 2 and L′j = Lj − LR
— Bearing angle of a direction AB = horizontal angle, measured clockwise, between the Y (north) axis and the AB direction : GAB = arctan((EB − EA) / (NB − NA))

Vertical angle
Relation between the zenithal angle Dz, the elevation angle i and the slope : Dz + i = π/2, with Dz ∈ [0 ; π] and i ∈ [−π/2 ; π/2]

5.2 Horizontal levelling
Determine the height difference between points A and B : ∆HAB = ma − mb
— Level : horizontal sighting plane
— Rod reading
The range can be estimated by stadimetry (stadia). Stadia refers to a method used in theodolites or levels that employs a stadia reticle in the telescope. The stadia reticle consists of three horizontal hairs : the middle hair is the reference, and the upper and lower hairs are used to determine the stadia intercept, i.e. the distance between the upper- and lower-hair readings on the rod.

tan(α/2) = (m1 − m2) / (2(Dh − E)) = s / (2f) ⇔ f/s = K = 1 / (2 tan(α/2)) ⇒ Dh = K(m1 − m2) + E ≈ KL

where K is the stadia interval factor (a known constant) and L is the stadia intercept observed through the telescope.
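The final stadia formula above reduces to one line of arithmetic. A minimal sketch (the function name, default values and example readings are illustrative assumptions):

```python
def stadia_distance(m1, m2, K=100.0, E=0.0):
    """Horizontal distance by stadimetry: Dh = K * (m1 - m2) + E.

    m1, m2 : upper- and lower-hair rod readings (m)
    K      : stadia interval factor f/s, commonly 100
    E      : additive instrument constant, often ~0 on modern instruments
    """
    return K * (m1 - m2) + E

# Readings of 1.500 m and 1.215 m with K = 100 give Dh = 28.5 m
print(stadia_distance(1.500, 1.215))
```

With K = 100 the rule of thumb is simply: distance = stadia intercept × 100.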
Figure 11 – Horizontal levelling
Figure 12 – Stadimetry

Error sources
Station with equal ranges
— Curvature of the earth
— Atmospheric refraction
— Correction : wrong height = actual height + D·tan(ε), with ε the perpendicularity defect (vertical collimation) ; with equal back and fore ranges, the error cancels out

Levelling path
— Larger distances
— Correction : round path : Σi ∆Hi = ∆HAB (i = 1..n)
— Verification : closed path
— Closing error : |fh| = |HB,obs − HB,known| < T∆H
— Hayford's criterion : |fg| < 3σα√n
— The error is distributed over all intermediate measurements, proportionally to (i) the number of elevation steps, (ii) the range, or (iii) the absolute elevation step

Figure 13 – Station with equal ranges
Figure 14 – Levelling path

Different types of levels
— Optical level
— Automated level : equipped with a system that compensates for the lack of calibration of the device at setup (suspended prism, suspended reticle, pendular systems, etc.). The wedging device is then a spherical spirit level.
— Electronic level : uses similar automatic compensation principles, with a CCD camera and a barcode staff → digital image processing.
— Rotating laser level : visible or infrared laser beam

5.3 Distance measurement
Spatial distance : distance-measuring devices make it possible to obtain the spatial distance between the instrument and the measured point.

Surveyor's tape or chain
— Cloth tape : fiberglass fabric coated with polyester plastic
— Precision : 5 mm / 10 m
— No calibration certificate
— Metal tape
— 10, 20, 50, 100 m
— Calibration certificate
— Corrections
— Stress correction using Hooke's law
— Temperature expansion correction using α, an expansion coefficient specific to each material : D = D0(1 + α∆T)

Parallactic measurement involves taking angular observations from two or more points to a distant target ; the resulting angles are used to calculate the horizontal distance.
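The tape temperature correction D = D0(1 + α∆T) above is a one-liner. A minimal sketch (the expansion coefficient used in the example is a typical handbook value for steel, an assumption, not from the notes):

```python
def tape_correction(d0, alpha, dT):
    """Temperature correction for a tape: D = D0 * (1 + alpha * dT).

    d0    : distance measured at the tape's calibration temperature (m)
    alpha : linear expansion coefficient of the tape material (1/K),
            e.g. ~1.15e-5 for steel (typical handbook value)
    dT    : temperature difference from the calibration temperature (K)
    """
    return d0 * (1 + alpha * dT)

# A 50 m steel tape used 15 K above its calibration temperature
# has expanded, so the true distance is slightly longer (~50.0086 m)
print(tape_correction(50.0, 1.15e-5, 15.0))
```

The correction is small (millimetres over tens of metres) but matters at the precision levels quoted for calibrated metal tapes.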
Surveyors measure the angle between the lines of sight to the target from two different points, ideally forming a baseline. The parallax caused by the change in the observer's position provides the necessary information for the distance calculation.

An auxiliary basis refers to a secondary or additional reference framework used in conjunction with the primary coordinate system. It may consist of additional control points, benchmarks, or survey stations strategically placed to enhance the accuracy and reliability of measurements across a larger area. The auxiliary basis provides additional control and reference points beyond the main survey network, aiding in the connection and adjustment of different survey segments. This helps ensure consistency and accuracy in the overall topographic survey.

Electronic distance measurement (EDM) instruments emit modulated EM waves toward a target, and the time taken for the signal to travel to the target and back is measured. Knowing the speed of the wave and the time of flight, the distance can be accurately calculated : D = c∆t / 2. EDM is faster and more accurate than traditional methods like tape measurements.

Error sources
— Errors due to the measurement system
— Atmospheric conditions
— Dispersion of the light beam
— Wave path

5.4 Determination of point coordinates
Triangulation involves measuring the angles of a triangle formed by three known points, and the lengths of one or more sides. Using trigonometry, the coordinates of an unknown point can be calculated from the angles and distances within the triangle. Triangulation is suitable for large-scale surveys where direct measurements may be challenging. It provides a method for determining point locations based on angle and distance measurements.

Bearing and distance involves measuring the angle (bearing) from a reference direction (usually north) and the distance from a known point to the target point. The coordinates are then calculated using trigonometry.
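The bearing-and-distance computation can be sketched in a few lines, together with the inverse (bearing) formula from the angle-measurement section. Function names and the example values are illustrative assumptions ; bearings are in gon (400 gon = full circle), measured clockwise from the north (Y) axis:

```python
import math

def from_bearing_distance(e0, n0, bearing_gon, dist):
    """Forward problem: target coordinates from a known station,
    a bearing G (gon) and a horizontal distance.
    E = E0 + dist*sin(G), N = N0 + dist*cos(G)."""
    a = bearing_gon * math.pi / 200.0   # gon -> radians
    return e0 + dist * math.sin(a), n0 + dist * math.cos(a)

def bearing_gon(e0, n0, e1, n1):
    """Inverse problem: G = arctan((E1-E0)/(N1-N0)), quadrant-safe
    via atan2, mapped to [0, 400) gon."""
    return math.atan2(e1 - e0, n1 - n0) * 200.0 / math.pi % 400.0

# Set out a point 141.421 m from station (100, 200) at bearing 50 gon (NE),
# then recover the bearing from the coordinates as a round-trip check
e, n = from_bearing_distance(100.0, 200.0, 50.0, 141.421)
print(round(bearing_gon(100.0, 200.0, e, n), 3))  # 50.0
```

Using atan2 instead of a bare arctan avoids the quadrant ambiguity of the ratio (E1 − E0)/(N1 − N0).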
This method is commonly used in plane surveys where linear measurements are feasible. Bearings and distances are measured successively from one point to another, building a network of interconnected points.

The polar method involves specifying the distance and direction (polar coordinates) from a known point to an unknown point. The coordinates of the unknown point are then calculated using trigonometry. This method is useful for quick measurements when the direction and distance to a point can be easily observed and recorded. It is often employed in the preliminary stages of a survey.

A polygonal path involves establishing a series of interconnected straight-line segments (the sides of a polygon) between known points. The coordinates of each point are calculated from the lengths and directions of the sides. This method is practical for surveys in relatively flat terrain where straight-line measurements are feasible. It allows for the systematic layout of survey points along the edges of a polygon.

Modern surveying often combines these methods with advanced technologies like Global Navigation Satellite Systems (GNSS) and total stations to enhance accuracy and efficiency in determining point coordinates.

5.5 Implantation techniques
Implantation techniques = operations that consist of transferring to the ground, according to the indications of a plan, the position of isolated points, axes or polygons.
Instruments
— Theodolite : for measuring angles and ensuring precision in alignment
— Level : for establishing a horizontal reference
— Tape measure : for linear measurements along baselines
— Optical square : for ensuring right angles and perpendicularity

Perpendicular to an existing alignment
— An alignment is a straight line passing through two points materialized on the ground.
— Use a tape : properties of the isosceles triangle
— Use an optical square
— Use a theodolite : from the theodolite position, mark the ground at the calculated 90-degree angle.
This point is now perpendicular to the existing alignment.

Figure 15 – With a tape
Figure 16 – With an optical square
Figure 17 – With a theodolite

Parallel to an existing alignment
— Drawing 2 perpendiculars (of the same length) to a first perpendicular
— Parallelogram : the diagonals of a parallelogram bisect each other
— Alternate-interior angles (most accurate method) : an arbitrary α value is used to set out C at distance d from AB

Figure 18 – 2 perpendiculars
Figure 19 – Parallelogram
Figure 20 – Alternate-interior angles

Secant to an existing alignment
— Use a theodolite :
— If S is accessible : we extend AB by SA, station on S, open the angle 2π − α, and construct A'.
— If S is not accessible : we station on A, open the angle 3π/2 − α, report the distance h, station on A' and open the angle π/2.
— Use a tape :
— Perpendiculars to AB
— Set out E : AE = h / cos α
— Set out F : BF = AE + d·tan α

Staking
— Position milestones on an existing alignment (with or without obstacles)
— Bypassing an obstacle with a tape and an optical square
— Bypassing an obstacle with a tape and a theodolite

5.6 Total station
A total station is an advanced surveying instrument that combines the functions of a theodolite and an electronic distance measurement (EDM) device. It is widely used in surveying and construction for measuring angles and distances with high precision. The total station integrates electronic technology to streamline data collection and improve accuracy in determining spatial positions.
— A coordinate measurement system allows surveyors to direc