Course Review PDF
BCIT
Summary
This document is a course review for a photogrammetry and mapping course, covering modules on platforms and sensors, mathematical models and orientation, stereoscopic vision, flight planning, orthophotos, image interpretation, and digital elevation models. It also touches on image enhancement, image resolutions, and concepts related to 2D, 2.5D, and 3D data products.
Full Transcript
Slide 1: Course Review
- Module 1: Introduction to Photogrammetry & Platforms
- Module 2: Models, Orientation & Stereoscopic Vision
- Module 3: Planimetric Capture & 3D Digitizing
- Module 4: Terrestrial Photogrammetry & 3D Models
- Module 5: Flight Planning
- Module 6: Photo Acquisition, Processing and ABGPS
- Module 7: Orthophotos
- Module 8: Data Visualization and Mapping
- Module 9: Image Interpretation and Analysis
- Module 10: Digital Elevation Model(s)
- Module 11: Photogrammetry Applications, Limitations & Review
- Module 12: RPAS Applications - Guest lecture Dec 3

Slide 2: We Learned
- Platforms, errors, nadir/oblique images, image scale
- Mathematical model, stereo cameras, stereovision, the stereo workstation
- Image band enhancements and combinations, flight planning
- Producing DTMs, other DTM sources, terrestrial object modeling, producing orthophotos, and other minor concepts

Slide 3: Platforms & Basics

Slide 4: Platforms
- Manned aircraft (large areas)
- RPAS/drones (small areas)
- Terrestrial (usually close range)
- The course does not deal with satellite images

Slide 5: Sensors/Frames
- Large/medium format cameras: UltraCam Eagle 4.1 (423 MP), Leica DMC-4 (506 MP), Leica CityMapper-2 (302 MP), Phase One PAS 880 (280 MP)
- Drone/close-range cameras: Phase One P5 (128 MP), DJI Zenmuse P1 (45 MP), DJI Zenmuse L2 (20 MP), Sony RX1R II (42.4 MP)

Slide 6: Sensors/Frames - Field of View
- Sensor field of view: the angle corresponding to each side of the frame; the image frame does not have to be square
- Instantaneous field of view (IFOV): the angle corresponding to a single pixel
- Each sensor/camera frame is usually rectangular

Slide 7: Sensors/Frames - Nadir and Oblique Images

Slide 8: Sensors/Frames - Waves and Surfaces Interaction

Slide 9: Sensors/Frames - Passive and Active Sensors
- Passive sensors (our subject for this course): do not emit/send energy; example: optical/infrared
- Active sensors (not the subject of this course): emit/send energy; examples: LiDAR, RADAR

Slide 10: Distortions in Photos (1/8) - Geometric Distortions
- Lens distortion (use calibration)
- Terrain/relief displacement (use a digital terrain model)
- Camera optical axis / tip and tilt (use a model and calibration)
- Atmospheric refraction (use a model)
- Earth curvature (use a model)
- Image motion (use a motion compensator)

Slide 11: Resolutions - Spectral Resolution
- Ability to detect different spectral bands: black and white (panchromatic), RGB, multispectral (RGB/NIR), hyperspectral, hybrid imaging/LiDAR

Slide 12: Resolutions - Radiometric Resolution
- Ability to detect small differences in energy, expressed in bits: 8 bits -> 2^8 -> 256 colour levels; 16 bits -> 2^16 -> 65,536 colour levels

Slide 13: Resolutions - Spatial and Temporal Resolution
- Spatial resolution: ability to detect small ground details (from 2 cm and up)
- Temporal resolution: frequency of acquisition over the same area; usually related to satellites

Slide 14: Stereo Pairs, Overlap and Geometry
- Overlap: front lap and side lap
- Typical aerial overlap 60/30 (varies); drones 80/80 (varies)

Slide 15: Photo Frame and Scale
- Ground pixel size varies as a function of topography
- Average scale = focal length / height above ground, e.g. 90 mm / 1800 m = 1/20,000
- At 1:20,000, 1 mm on the image = 20 m on the ground
- Our unit is the pixel (say 4 microns): 1 pixel = 8 cm on the ground (a short calculation sketch follows Slide 16)
- Reference: #1

Slide 16: Photo Frame and Scale - Photo Frame
- Principal point: the geometric centre of the photo, at the foot of the perpendicular from the centre of the lens
- Fiducial centre: the intersection of the fiducial mark lines
- Nadir point: the intersection of the plumb line through the lens (perspective) centre with the image plane
- In a perfect scenario, the three points are the same
- Reference: #2
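The scale arithmetic on Slide 15 can be checked with a few lines of code. The sketch below is only an illustration (the function names are mine, not course material); it simply re-uses the slide's numbers: a 90 mm lens, 1800 m above ground, and 4-micron pixels.

```python
# Minimal sketch of the Slide 15 arithmetic: average photo scale and ground pixel size.
# Assumes a vertical (nadir) photo over fairly flat terrain.

def average_scale(focal_length_m: float, flying_height_m: float) -> float:
    """Average photo scale as a ratio (e.g. 1/20000 -> 5e-5)."""
    return focal_length_m / flying_height_m

def ground_pixel_size_m(pixel_pitch_m: float, scale: float) -> float:
    """Ground size of one pixel: pixel pitch divided by the scale."""
    return pixel_pitch_m / scale

if __name__ == "__main__":
    scale = average_scale(focal_length_m=0.090, flying_height_m=1800.0)   # 90 mm lens, 1800 m AGL
    print(f"Average scale = 1:{1/scale:.0f}")                             # 1:20000
    print(f"1 mm on the image = {0.001/scale:.1f} m on the ground")       # 20 m
    gsd = ground_pixel_size_m(pixel_pitch_m=4e-6, scale=scale)            # 4 micron pixels
    print(f"1 pixel = {gsd*100:.0f} cm on the ground")                    # 8 cm
```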
Slide 17: Data Types & Presentations
- Data pyramids: mainly to address data size and generalization; display raster data at different resolutions by resampling at predefined levels; help visualize large data sets
- Data tiles: divide the final photo product into predefined squares stored as separate small files; easy to upload, download, and visualize (example: City of Chilliwack)

Slide 18: Data Types & Presentations - Compression
- Compression effect: lossy (compressed beyond recovery) vs lossless (compressed but recoverable)
- Well-known formats: GeoTIFF, MrSID (multispectral, lossless compression, proprietary), ECW (multispectral, lossless compression, proprietary), ESRI ASCII Grid (.asc), and specialized formats for close-range photogrammetry models

Slide 19: Mathematical Models

Slide 20: Relief Displacement
- On a map, the top and bottom of a building should be the same point; on the image they are displaced
- Relief displacement: d_a = r * h_A / H, where r is the radial distance on the image, h_A the object height, and H the flying height
- To correct relief displacement you need the height h_A; conversely, an object's height can be recovered from its relief displacement: h_A = H * d_a / r_a
- (A short sketch of these formulas, together with the parallax formulas from Slide 23, follows Slide 26)

Slide 21: Relief Displacement
- Relief displacement increases with the radial distance

Slide 22: Parallax
- Parallax = d_a1 - d_a2: the apparent displacement of an object as seen from two different points (i.e. different perspective centres)

Slide 23: Parallax
- Parallax: P_a = d_a1 - d_a2
- h_A = H - (B * f) / P_a, equivalently P_a = (B * f) / (H - h_A)
- X_A = B * x_a / P_a and Y_A = B * y_a / P_a
- A stereo model (parallax) requires two images
- Parallax increases with object height (i.e. closer to the camera)
- Parallax allows calculating object height, given the flying height, focal length, and base distance

Slide 24: Fundamental Models
- The backbone of photogrammetry: the collinearity model and the coplanarity model
- The models establish relationships between object space and image space
- Rotation angles: omega (ω), rotation around the photo x axis; phi (φ), rotation around the photo y axis; kappa (κ), rotation around the photo z axis

Slide 25: Fundamental Models - Collinearity
- Collinearity condition: the exposure centre, the image point, and the object point lie on a straight line
- x_a = x_o - f * [m11(X_A - X_O) + m12(Y_A - Y_O) + m13(Z_A - Z_O)] / [m31(X_A - X_O) + m32(Y_A - Y_O) + m33(Z_A - Z_O)]
- y_a = y_o - f * [m21(X_A - X_O) + m22(Y_A - Y_O) + m23(Z_A - Z_O)] / [m31(X_A - X_O) + m32(Y_A - Y_O) + m33(Z_A - Z_O)]
- One point gives two equations
- Re-arranging the same equations:
  X_A = X_O + (Z_A - Z_O) * [m11(x_a - x_o) + m21(y_a - y_o) + m31(-f)] / [m13(x_a - x_o) + m23(y_a - y_o) + m33(-f)]
  Y_A = Y_O + (Z_A - Z_O) * [m12(x_a - x_o) + m22(y_a - y_o) + m32(-f)] / [m13(x_a - x_o) + m23(y_a - y_o) + m33(-f)]
- Applications: getting the EOP (X, Y, Z, ω, φ, κ) using a minimum of three known ground control points, i.e. space resection (a code sketch of these equations also follows Slide 26)

Slide 26: Fundamental Models - Coplanarity
- Coplanarity condition: the two exposure centres and the object point lie on the same plane; this occurs when the stereo pair images are correctly oriented relative to each other
- B · (R1 × R2) = 0, where B is the air base vector and R1, R2 are the rays from the two exposure centres to the object point
- Applications: getting object space coordinates (X_A, Y_A, Z_A); image matching, relative orientation, bundle adjustment
- We can derive 3D coordinates from a stereo pair (epipolar geometry)
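The relief-displacement and parallax relations on Slides 20 and 23 are easy to exercise numerically. The sketch below is a minimal illustration, assuming a vertical photo and the symbols as defined on those slides; the flying height, air base, and measurement values are invented purely as an example.

```python
# Sketch of the Slide 20 and Slide 22-23 formulas. Symbols follow the slides:
# H = flying height above datum, h = object height, r = radial image distance,
# d = relief displacement, B = air base, f = focal length, Pa = x-parallax.
# The numeric values in the example are made up purely to exercise the formulas.

def relief_displacement(r, h, H):
    """d = r * h / H  (Slide 20)."""
    return r * h / H

def height_from_displacement(d, r, H):
    """h = H * d / r  (Slide 20, rearranged)."""
    return H * d / r

def height_from_parallax(Pa, B, f, H):
    """h_A = H - B*f/Pa  (Slide 23)."""
    return H - (B * f) / Pa

def ground_xy_from_parallax(Pa, B, xa, ya):
    """X_A = B*xa/Pa, Y_A = B*ya/Pa  (Slide 23)."""
    return B * xa / Pa, B * ya / Pa

if __name__ == "__main__":
    H, B, f = 1800.0, 720.0, 0.090                    # metres (invented example)
    # A 45 m tall building imaged 60 mm from the principal point:
    d = relief_displacement(r=0.060, h=45.0, H=H)
    print(f"relief displacement  = {d*1000:.2f} mm")
    print(f"recovered height     = {height_from_displacement(d, 0.060, H):.1f} m")
    # A point measured with 37 mm of x-parallax:
    Pa = 0.037
    print(f"height from parallax = {height_from_parallax(Pa, B, f, H):.1f} m")
    print("ground X, Y = %.1f m, %.1f m" % ground_xy_from_parallax(Pa, B, xa=0.012, ya=0.008))
```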
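Slide 25's collinearity equations can likewise be sketched in code. The following assumes the common omega-phi-kappa rotation convention (so the m11...m33 element layout matches the equations above); the camera pose and the ground point used in the example are invented for illustration only.

```python
# Sketch of the collinearity equations from Slide 25: project a ground point (XA, YA, ZA)
# into image coordinates (xa, ya) given the exterior orientation (XO, YO, ZO, omega, phi, kappa)
# and the focal length f. All numbers in the example are invented.
import math

def rotation_matrix(omega, phi, kappa):
    """3x3 rotation matrix M = R(kappa) @ R(phi) @ R(omega), angles in radians."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck,  co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [sp,       -so * cp,                co * cp],
    ]

def collinearity(XA, YA, ZA, XO, YO, ZO, omega, phi, kappa, f, xo=0.0, yo=0.0):
    """Image coordinates of ground point A for a camera with the given EOP."""
    m = rotation_matrix(omega, phi, kappa)
    dX, dY, dZ = XA - XO, YA - YO, ZA - ZO
    denom = m[2][0] * dX + m[2][1] * dY + m[2][2] * dZ
    xa = xo - f * (m[0][0] * dX + m[0][1] * dY + m[0][2] * dZ) / denom
    ya = yo - f * (m[1][0] * dX + m[1][1] * dY + m[1][2] * dZ) / denom
    return xa, ya

if __name__ == "__main__":
    # Near-vertical photo taken 1800 m above datum, point offset 100 m east and 50 m north:
    xa, ya = collinearity(XA=100.0, YA=50.0, ZA=10.0,
                          XO=0.0, YO=0.0, ZO=1800.0,
                          omega=math.radians(0.5), phi=math.radians(-0.3),
                          kappa=math.radians(1.0), f=0.090)
    print(f"xa = {xa*1000:.3f} mm, ya = {ya*1000:.3f} mm")
```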
Slide 27: Interior, Relative, and Absolute Orientations
- Interior orientation: the relation between the sensor and image coordinate systems (x, y of the principal point, focal length)
- Relative orientation: aligning one photo with respect to another to correct any misalignment; helps with 3D visualization (digital or analog); needs six common (pass) points between the images
- Absolute orientation: orienting a photo with respect to a ground coordinate system; most modern digital cameras have GPS/IMU for position and orientation
- RTK (real-time kinematic) requires a base station; PPK (post-processed kinematic) requires a base station; PPP (precise point positioning) needs no base station
- Ground control may also be used to provide absolute orientation

Slide 28: Stereoscopic Vision

Slide 29: Stereoscopic Vision
- Inter-ocular distance (~7 cm) vs air base: human eyes and 3D object perception; stereo pairs with proper overlap
- The stereo model is based on the perspective view covered a moment ago

Slide 30: Stereoscopic Vision - RO: Misaligned vs Aligned Stereo Models

Slide 31: Stereoscopic Vision - RO: Von Gruber Points
- Otto von Gruber found that if alignment can be achieved at six points in a stereo model, all light rays in both images will intersect
- An inertial measurement unit (IMU) continuously monitors and records aircraft roll, pitch, and yaw, thus solving for relative orientation

Slide 32: Stereoscopic Vision
- Pocket stereoscope: arrange two overlapping photos with respect to each other; a basic form of modern VR gear
- Anaglyph: two photos, each filtered to complementary bands (red and cyan), slightly shifted and combined; put on your anaglyph glasses (a small anaglyph sketch follows Slide 35)

Slide 33: Stereoscopic Vision - Softcopy/Digital Photogrammetry Workstation
- Uses digital images; fully computerized, with a stereo vision system

Slide 34: Stereo Workstation
- Combines standard computer hardware with a specialized video card, software, transmitter, viewing glasses, and input device
- Capable of creating all photogrammetry-related products, including 2D and 3D planimetric maps, topographic maps, digital elevation models (DEMs), orthophoto maps, perspective views including fly-throughs, and analysis, storage, and output

Slide 35: Stereo Workstation (DAT/EM Summit stereo viewer)
- DAT/EM Summit: visualizes the stereo model, computes all the geometry, allows 3D digitizing, maintains the positions of the aerial photographs, and reflects what is drawn in the CADD
- DAT/EM Keypad: issues commands through the CADD program to Summit; sets the level, symbology, and type of tool used to digitize different features (e.g. manhole cover size, light post height, curb with offsets)
- MicroStation: the CADD program used; very similar to ACAD, with more capacity for large files; auto-saves after every line is completed, so there is no loss of data and no need for recovery mode
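As a small illustration of the anaglyph idea on Slide 32, the sketch below merges the red band of a left photo with the green and blue (cyan) bands of a right photo using Pillow. The file names are placeholders, and a real stereo pair would need to be properly aligned (relative orientation) before this simple channel merge gives a clean 3D effect.

```python
# Minimal red-cyan anaglyph sketch (Slide 32): keep the red band of the left photo and the
# green/blue bands of the right photo, then merge them into one RGB image.
# "left.jpg" / "right.jpg" are placeholder file names; the pair should already be aligned.
from PIL import Image

def make_anaglyph(left_path: str, right_path: str, out_path: str) -> None:
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")
    right = right.resize(left.size)          # crude size match; real pairs need proper alignment
    r, _, _ = left.split()                   # red channel from the left image
    _, g, b = right.split()                  # green and blue (cyan) channels from the right image
    Image.merge("RGB", (r, g, b)).save(out_path)

if __name__ == "__main__":
    make_anaglyph("left.jpg", "right.jpg", "anaglyph.jpg")
```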
Slide 36: Photogrammetry and Mapping

Slide 37: Large and Medium Scale Compilation
- Large scale compilation: curbs, crosswalks, sidewalks, bike lanes, parking lots; individual houses, detailed roofs, property lines (e.g. a BCIT campus map); individual trees and shrubs, garden beds (e.g. a detailed map of a botanical garden)
- Medium scale compilation: bus routes, highways, emergency response routes, train lines; entire neighbourhoods, industrial vs commercial vs residential zoning; forest inventory maps, distinct biomes, forestry cut blocks

Slide 38: Terrestrial Photogrammetry

Slide 39: Cameras, Properties
- Fixed focal length; a wide-angle lens is better than a narrow-angle lens
- Fixed focus distance: disable autofocus and focus at infinity or at a fixed value
- Capture images in the highest-quality format (RAW)
- A large sensor is better than a smaller sensor
- Geotagged images

Slide 40: Controls for Terrestrial Photogrammetry
- Control points: a minimum of four control points per side; targets or unique features
- Scale: define one vertical and one horizontal distance per side; scale bars, targets, or unique features

Slide 41: Base-to-Distance Ratio
- The International Committee of Architectural Photogrammetry recommends ratios between 1:4 and 1:15

Slide 42: BIM & Digital Twin
- BIM: a collaborative design-and-build process that visualizes the physical and functional aspects of a building; a detailed model that provides a central point of reference throughout the creation of a building, with updates at key project stages; the main focus is design and construction, not ongoing operational management and building optimisation
- Digital twin: a virtual 1:1 replica of physical entities that behaves in the same way as its real-world counterpart by using real-time data and physics-based simulations; incorporates factors outside the building (weather, security, occupancy, etc.); can extend beyond a single building to entire communities

Slide 43: Flight Planning

Slide 44: Flight Planning
- Design of the flight path, flying height, and speed
- Cover the entire area of interest, meet the purpose/specifications, and keep the cost efficient

Slide 45: Optimum Flying Height
- Factors affecting flying height: map/orthophoto scale, contour factor and interval, GSD, pixel size, image resolution, topography (mountainous areas), air traffic control restrictions, and accuracy

Slide 46: Pixel Size - Orthorectification Factor
- During orthorectification, the resampling process (geometric and spectral) causes the pixel size to increase by a factor of 1.2, which must be accounted for when planning
- E.g. to achieve a GSD of 5 cm/pixel, divide it by 1.2; therefore the acquisition GSD cannot be larger than 4.17 cm/pixel

Slide 47: OFH - Contour Factor and Interval
- Contour factors are empirical rather than statistical, selected based on experience or manufacturer recommendations; values typically range from 1800 to 2400 (dimensionless)
- Used to determine flying height when combined with a set contour interval: Cf = H / Ci, so H = Ci * Cf (Cf = contour factor or C-factor; Ci = contour interval in m; H = flying height in m)
- Example: a client specifies 3 cm contours and we use a C-factor of 2000, so H = 0.03 m * 2000 = 60 m (a short planning sketch appears at the end of this transcript)

Slide 48: OFH - GSD
- GSD = ground sampling distance: the distance between two consecutive pixel centres measured on the ground; e.g. a 5 cm/pixel GSD means that each pixel is 5 cm linearly on the ground (we could also say 0.2 pixels/cm)
- Flying higher increases the GSD, decreases the spatial resolution, and decreases the visible detail
- The GSD is also the smallest size of object that can be discerned in the image; e.g. a 5 cm/pixel GSD means that any object
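The planning arithmetic on Slides 46 and 47 reduces to two one-line formulas; the sketch below reproduces the slides' worked numbers (3 cm contours with a C-factor of 2000, and a 5 cm target orthophoto GSD). The function names and helper structure are mine, not part of the course material.

```python
# Sketch of the Slide 46-47 planning arithmetic:
# - flying height from a contour interval and an empirical C-factor (H = Ci * Cf), and
# - the acquisition GSD needed to still meet a target orthophoto GSD after the ~1.2x
#   pixel-size growth caused by orthorectification resampling.

def flying_height_m(contour_interval_m: float, c_factor: float) -> float:
    """H = Ci * Cf (C-factor is dimensionless, typically 1800-2400)."""
    return contour_interval_m * c_factor

def acquisition_gsd_cm(target_ortho_gsd_cm: float, ortho_factor: float = 1.2) -> float:
    """GSD to fly so that the orthophoto still meets the target after resampling."""
    return target_ortho_gsd_cm / ortho_factor

if __name__ == "__main__":
    print(f"H for 3 cm contours, C-factor 2000: {flying_height_m(0.03, 2000):.0f} m")     # 60 m
    print(f"Acquisition GSD for a 5 cm ortho:   {acquisition_gsd_cm(5.0):.2f} cm/pixel")  # 4.17
```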