DVG504_SBA307_HT24_F1_Fundamentals.pdf


Full Transcript


Data Visualization Fundamentals (DVG504, SBA307)
Stefan Seipel, Professor, HT2024

Outline
- Introduction into visualization
- Visualization (image synthesis vs. image analysis)
- Delineation InfoVis/SciVis/GeoVis
- Visualization workflow
- Visual mapping (Data -> Graphics): requires understanding of data and perceptual limits
- Graphical models (representation): geometric 2D/3D representations -> points, lines, triangles, strips, vectors; implicit models 2D/3D -> discrete raster images, volumes
- Graphics methods for image formation (rendering): polygon rendering vs. volume rendering; rasterization vs. tracing; shading (culling); texture mapping

Visualization
"Forming a mental image of something": a cognitive process; does not involve computers; involves not only the visual senses.
"Putting something into visible form": a creative process; results in some visible artifact.
Objectives:
- Communication of information (emphasizing, narrating)
- Improve understanding (illustrating, interpreting, finding)
- Decision support (analyzing, extrapolation)
- Support creativity (inspiration)

Visualization – Classical examples
William Playfair (1759-1823), Scottish engineer and political economist, is considered the founder of graphical methods of statistics. "Inventor" of the trade-balance time-series chart (1786), the line graph, the bar chart, and the pie chart.
[Figures: "Exports and Imports to Scotland" (1786); "Proportions of the Turkish Empire located in Asia, Europe, and Africa before 1789"]

Visualization (Computer Based) – Visualization in relation to other disciplines
- Visualization: synthesis of visual representations (images/videos) from non-visual data
- Computer Graphics: synthesis of images from geometric representations
- Image analysis: interpretation and extraction of data from visual material (images)
- Computer/Machine Vision: analysis and (spatial) understanding of visual information, as well as reconstruction

Visualization – Visualization and different (academic) sub-disciplines
Scientific Visualization (The Language of Computer – Dictionary and Research Guide):
"Scientific visualization is a branch of computer graphics which is concerned with the presentation of interactive or animated digital images to scientists who interpret potentially huge quantities of laboratory or simulation data or the results from sensors out in the field."
"The data is often obtained from a medical instrument or from the numerical simulation of a physical, chemical or biological process. Often scalar (pressure) or vector (velocity) fields, or both, form the input."
Information Visualization:
"The use of computer supported, interactive, visual representations of abstract data to amplify cognition." [Card et al., 1999]
"Information visualization (InfoVis) is the communication of abstract data through the use of interactive visual interfaces." [Keim et al., 2006]
"Information visualizations attempt to efficiently map data variables onto visual dimensions in order to create graphic representations." [Gee et al., 2005]
Geo-Visualization: incorporates elements from both InfoVis and SciVis.
Also MedVis, BioVis, BuildVis, ...

Scientific Visualization – Typical Examples
Nuclear, quantum, and molecular modeling; structures, fluids, and fields; advanced medical imaging.
Data is related to some spatial concepts from the beginning.
Information Visualization – Typical Examples
- Cone tree and Cam tree (G. Robertson, J. Mackinlay, S. Card, 1991)
- Tree-map
- Hyperbolic trees (Munzner 97; http://www.ischool.utexas.edu/~geisler/info/infovis/paper.html)
Data is related to abstract (non-spatial) concepts from the beginning.

Geovisualization – Some Examples
- Geospatial visualization (Reis G, univ-kl.de)
- Meteorological visualization (Courtesy: Bureau of Meteorology, Australia)
- Communication of flood risks and uncertainty (Lin N., Brandt A., Seipel S.; HiG)
- Location network based on social media data (Ma Ding and Bin Jiang; HiG)

Visualization – Computerized Visual Information Presentation (Display)
Computer Graphics is one pillar of Scientific/Information Visualization. Visualization is more than computer graphics.

Computer Graphics 101
Rendering methods are needed to create realistic or synthetic images (inside the orange box on the previous slide). What is needed to mimic photography, i.e. to render images with a computer?
- Virtual objects: 3D models, geometry, material properties
- Virtual light sources: position, color, attenuation, etc.
- Virtual camera: position, direction, lens projection
- Illumination model: algorithms that model the propagation of light and its interaction with objects in the scene

Graphical 2D/3D Models
2D/3D graphical models form the basis for all visualization (a code sketch contrasting the two kinds of representation follows after the examples below).
Explicit model representation: a graphical entity is described by explicitly stating its spatial parameters (e.g. position, size).
- Example 1: A line in 2D is defined by explicit definition of its start point s = (sx, sy) and end point e = (ex, ey); s and e likewise define exactly its length and orientation.
Implicit model representation: a graphical entity results from certain properties defined over all positions in a specific domain.
- Example 1: A line on a 2D pixel display is the set of all colored pixels forming the line.
- Example 2: All points on a circle of radius r around the origin are defined by x^2 + y^2 = r^2.

Graphical Models – Explicit and implicit models
Explicit models: lists of vertices, plus information regarding connectivity and topology.
- Points, polyline, triangles, triangle strip, quads/quad strip
Implicit discrete model representations: arrays (rasters) carrying sampled data.
- A digital 2D image, a 2D raster-based binary sphere, 3D raster-based data (here PET)
Implicit continuous functions: exact mathematical functional definition.
Note: most observed/sampled data are suited for discrete implicit representation.

Graphical Models – Explicit graphical models in GIS: examples
- Triangular Irregular Net (TIN)
- Regular quadrilateral or triangular mesh (Autodesk Infrastructure Modeler)
- 3D structural models, e.g. building models
- 3D models of abstract data, e.g. glyphs and diagrams (pencil and helix glyphs)

Graphical Models – Implicit graphical models: some examples
- Example of GIS data (in geology): a 3D elevation grid surface with transparency applied to reveal a thresholded voxel model of computed magnetic inversion (UBC format) beneath.
- Example from medicine: direct volume rendering of a skull.
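The contrast between explicit and implicit representations above can be made concrete in a few lines of code. The following is a minimal sketch and not part of the lecture material; NumPy is used, and the point values, grid size and pixel tolerance are illustrative assumptions.

```python
import numpy as np

# Explicit representation: a 2D line given by its start and end vertices.
# The two points fully determine position, length and orientation.
s = np.array([1.0, 2.0])          # start point (sx, sy)
e = np.array([8.0, 5.0])          # end point  (ex, ey)
length = np.linalg.norm(e - s)

# Implicit continuous representation: a circle as the zero set of f(x, y).
def circle_f(x, y, r=3.0):
    """f(x, y) = x^2 + y^2 - r^2; points with f(x, y) = 0 lie on the circle."""
    return x * x + y * y - r * r

# Implicit discrete representation: a raster sampling the same circle as a
# binary image, the way the colored pixels of a display store it.
ys, xs = np.mgrid[-4:4:64j, -4:4:64j]          # 64 x 64 sample grid
raster = np.abs(circle_f(xs, ys)) < 0.5        # True where a pixel is "on"

print(length, int(raster.sum()))
```

The circle thus appears twice: once as an exact continuous function and once as the discrete raster obtained by sampling that function over a pixel grid, while the line is fully described by its two explicit vertices.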
Render Techniques in Visualization
Different general categories of rendering algorithms.

Polygon Rendering – Image order rendering techniques
- The image is rendered for each image element (pixel).
- Ray-casting and ray-tracing are the most common techniques.
- RC/RT simulate the interaction of light with objects by following the path of each light ray.
- Objects can be polygonal surfaces, parametric surfaces or discrete (3D) images.

Ray-casting / ray-tracing (explicit polygonal models)
Ray-casting:
- image sampling in screen space
- establish a ray from the eye through each pixel into the scene
- a local lighting model is assumed at the surface (shading and specular highlights)
Ray-tracing: same as ray-casting, plus:
- iterative back-tracing of reflected and transmitted rays
- global lighting effects (shadows, reflections, transparency, atmospheric effects)

Polygon Rendering – Object order rendering techniques
- Vertices of polygons are projected from 3D object space onto 2D screen space.
- Pixels in the interior area of a polygon on screen are then rasterized (drawn).
- For transformations, see the slides further down.
[Figure: numbered polygon vertices are taken from object space through transformation and illumination into screen space.]

Illumination models – Illumination in Visualization & Computer Graphics
- Light is emitted in all directions from a single point in space.
- We simplify by assuming an infinitely distant point light source.
- A far distance implies parallel light rays.
- Light intensity falls off non-linearly with a 1/distance^2 relationship.

Illumination model for surface rendering
[Figure: the illumination model evaluated during polygon rasterization yields the composed color of each pixel.]

Classical Example in GeoViz
Hillshade from Digital Elevation Maps (practical -> GeoLab1); a minimal hillshade sketch follows below.
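Hillshading is a direct application of the simplified illumination model above: an infinitely distant light source with parallel rays and a cosine (Lambert-style) term between the terrain normal and the light direction. The sketch below is an illustration only, not the GeoLab1 solution; the function name, the toy DEM and the default light angles (azimuth 315 deg, altitude 45 deg) are assumptions.

```python
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Lambert-style hillshade: cosine between the terrain normal and the
    direction to an infinitely distant light source (parallel rays)."""
    az = np.radians(azimuth_deg)        # light direction, clockwise from +y ("north")
    alt = np.radians(altitude_deg)      # light elevation above the horizon
    dz_dy, dz_dx = np.gradient(dem, cellsize)    # finite-difference slopes
    slope = np.arctan(np.hypot(dz_dx, dz_dy))    # steepest-descent angle
    aspect = np.arctan2(-dz_dx, -dz_dy)          # downslope direction
    shade = (np.sin(alt) * np.cos(slope) +
             np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)              # clamp shadowed cells to 0

# Toy DEM: a single Gaussian hill on a 100 x 100 grid, lit from the northwest.
y, x = np.mgrid[0:100, 0:100]
dem = 50.0 * np.exp(-((x - 50.0) ** 2 + (y - 50.0) ** 2) / 400.0)
img = hillshade(dem)    # values in [0, 1], ready to display as a grayscale image
```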
Polygon rasterization – From 3D world to 2D screen coordinates
Visualization uses linear transformations to model a virtual camera:
a) Transformation from 3D -> 2D (projection matrix)
b) Viewing position/direction (camera pose) -> 3D translation and rotation matrices
[Figures: example of the projection process; example of camera/object orientation]

Coordinate systems
Four coordinate systems:
- Model: where the object is defined (model coordinates)
- World: 3D space where actors are positioned
- View: what is visible to the camera
- Display: (x, y) pixel locations

Coordinate transformations
Needed to model the relationship between 3D objects and their appearance on the 2D screen:
- 3D to 3D (object/camera articulation)
- 3D to 2D (projection)
- Homogeneous coordinates
- 4x4 transformation matrices
- Rotation, translation, scaling
- (Perspective) projection

Coordinate transformations – Matrix-Vector Multiplication
A transformation is represented by a matrix $M_{n,k}$ with $n = 4$ rows and $k = 4$ columns; the coordinates of a point are represented by a homogeneous vector $P_n$. The resulting matrix-vector product has $n = 4$ rows and $k = 1$ columns:

$$P' = M \cdot P = \begin{pmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \\ m_{41} & m_{42} & m_{43} & m_{44} \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} m_{11}x + m_{12}y + m_{13}z + m_{14} \\ m_{21}x + m_{22}y + m_{23}z + m_{24} \\ m_{31}x + m_{32}y + m_{33}z + m_{34} \\ m_{41}x + m_{42}y + m_{43}z + m_{44} \end{pmatrix}$$

Coordinate transformations – Basic transforms
Scale and translation:
$$S = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad T = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Rotations about the x, y and z axes:
$$R_x(\vartheta) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\vartheta & -\sin\vartheta & 0 \\ 0 & \sin\vartheta & \cos\vartheta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad R_y(\vartheta) = \begin{pmatrix} \cos\vartheta & 0 & \sin\vartheta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\vartheta & 0 & \cos\vartheta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad R_z(\vartheta) = \begin{pmatrix} \cos\vartheta & -\sin\vartheta & 0 & 0 \\ \sin\vartheta & \cos\vartheta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

Coordinate transformations – Projective Transform (3D -> 2D)
$(x, y, z, 1) \to (x_p, y_p, \text{const}, 1)$
Perspective projection matrix $P$ and a vertex $V$ in normalized homogeneous coordinates:
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1/d & 0 \end{pmatrix}, \qquad V = (x, y, z, 1)^{\mathsf T}$$
Vertex projection: $V' = P \cdot V = (x, y, z, z/d)^{\mathsf T}$
Vertex normalization: $V_N' = \left(\dfrac{x}{z/d},\; \dfrac{y}{z/d},\; d,\; 1\right)^{\mathsf T}$

Coordinate transformations – Example of projective transform (3D -> 2D)
Assuming a virtual camera model as above. [Worked example shown as a figure on the slide.]
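The homogeneous transforms and the perspective projection defined above translate almost directly into code. Below is a minimal NumPy sketch; the matrices mirror the slide definitions, while the helper names, the example point and the viewing distance d = 2 are made up for illustration.

```python
import numpy as np

def translate(tx, ty, tz):
    """4x4 translation matrix T as defined on the slide."""
    M = np.eye(4)
    M[:3, 3] = (tx, ty, tz)
    return M

def rotate_z(theta):
    """4x4 rotation about the z axis, R_z(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def perspective(d):
    """Perspective projection matrix P with the 1/d term in the last row."""
    P = np.eye(4)
    P[3, 2] = 1.0 / d
    P[3, 3] = 0.0
    return P

# A point in homogeneous coordinates (x, y, z, 1); values are arbitrary.
V = np.array([1.0, 2.0, 10.0, 1.0])

# Model/world transform: rotate, then translate (composition is right-to-left).
M = translate(0.0, 0.0, 5.0) @ rotate_z(np.radians(30.0))
V_world = M @ V

# Project and normalize: divide by the homogeneous coordinate w = z/d.
d = 2.0
V_proj = perspective(d) @ V_world        # (x, y, z, z/d)
V_screen = V_proj / V_proj[3]            # (x/(z/d), y/(z/d), d, 1)
print(V_screen)
```

Note the right-to-left order of matrix composition: the transform written closest to the vector is applied first.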
Polygon rendering (explicit polygonal models)
How are polygons finally drawn on screen? -> see the following slides.
[Figure: the object-space polygons have been transformed and illuminated (done); what remains is drawing them into the pixel raster of screen space.]

Polygon rasterization (scan conversion)
Rasterization = converting an explicit geometric representation (a graphical primitive) into a discretized raster image.
Primitives: point, line, polyline, polygon, triangle strip.
Scan conversion algorithms:
- Line drawing: DDA (digital differential analyzer), Bresenham algorithm
- Triangle filling algorithms: flood filling, scan conversion
J. E. Bresenham, "Algorithm for Computer Control of a Digital Plotter", IBM Systems J., vol. 4, no. 1, pp. 25-30, Jan. 1965.
J. D. Foley, A. Van Dam, Fundamentals of Interactive Computer Graphics, 1982.

Polygon shading (while scan-converting)

Transparent Surface Rendering
Datasets may include many nested objects. A solution is semi-transparent surface rendering, which gives a contextual view of multiple spatially co-located objects (building planning and land management). How can simple transparency be dealt with in rendering?

Transparency in Surface Rendering
Based on opacity blending as described in the seminal paper by Porter & Duff (1984)*. Opaque objects reflect, scatter, or absorb light at their surface; no light is transmitted. The level of opacity of a surface is referred to as alpha in computer graphics:
- alpha = 1.0 -> the object is fully opaque
- alpha = 0.5 -> the object is semi-transparent
- alpha = 0.0 -> the object is fully transparent
*) Porter T, Duff T (1984) Compositing Digital Images. Computer Graphics, Proc. SIGGRAPH, pp. 253-259.

Concept of alpha-compositing
Colors are mixed (blended) according to their alpha values. The color of a pixel is represented by a quadruple (R, G, B, A). Subscript S indicates the current surface to be rendered, and B indicates the color behind the object (i.e. what is presently in the frame buffer). Assuming the "over" drawing operation, each of the R, G and B channels is blended as $C_{out} = A_S \cdot C_S + (1 - A_S) \cdot C_B$.

Example: Alpha-compositing
Object 1 behind Object 2 on a black background (a short code sketch reproducing these numbers follows at the end of this transcript).
Color of object 1: R = 200, G = 100, B = 20, A = 1.0
Color of object 2: R = 100, G = 40, B = 200, A = 0.7
Color of background: R = 0, G = 0, B = 0, A = 1.0
1. Object 1 over background:
Rout = 200*1.0 + (1.0-1.0)*0 = 200
Gout = 100*1.0 + (1.0-1.0)*0 = 100
Bout = 20*1.0 + (1.0-1.0)*0 = 20
2. Object 2 over background:
Rout = 100*0.7 + (1.0-0.7)*0 = 70
Gout = 40*0.7 + (1.0-0.7)*0 = 28
Bout = 200*0.7 + (1.0-0.7)*0 = 140
3. Object 2 over Object 1:
Rout = 100*0.7 + (1.0-0.7)*200 = 70 + 60 = 130
Gout = 40*0.7 + (1.0-0.7)*100 = 28 + 30 = 58
Bout = 200*0.7 + (1.0-0.7)*20 = 140 + 6 = 146

Alpha-compositing – Known issues with transparent rendering
- Objects must be rendered in the correct order (back-to-front) for object order rendering algorithms like polygon rendering.
- This is solved by sorting all objects before rendering; object order and sorting are view-point dependent.
- Sorting must be done at least on a per-polygon level to handle self-occluding objects correctly.
- This can make transparency rendering quite slow if the models to be rendered have a high polygon count.
- In ray-casting/ray-tracing, samples are always ordered along the line of sight; therefore, correct occlusion and blending are guaranteed.

Texture mapping
Adds visual detail to explicit geometry.
General idea: explicit models with fairly big polygons lack visual information between vertices, so texture images are used to add visual detail. In visualization, texture images can represent additional information.
A 2D discrete image (e.g. an orthophoto) applied to a 2D polygon (defined in 3D space) yields a texture-mapped polygon. Applying 2D texture maps to the surface of an object is analogous to pasting a picture onto it. The location of the texture image (map) relative to the texture-mapped polygon is specified with texture coordinates.

Texture mapping, continued
Each texel (texture element) has 2D coordinates assigned to it; the texture image spans coordinates (0,0) to (1,1). Texture coordinates are assigned to each polygon vertex to establish the mapping; in the slide's figure the polygon vertices are assigned (0,0), (0.5,0), (0,0.5) and (0.5,0.5), i.e. the lower-left quarter of the texture.

Texture mapping – Textures...
- can be images of the real world (discrete)
- can represent (multi)scalar data sampled on a regular grid
- are often (but do not have to be) 2D discrete images
- can be sampled data on 3D grids, and are then referred to as volumetric textures
- can represent computed data on 2D or 3D grids, either with procedural algorithms or by evaluating continuous functions
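Returning to the alpha-compositing example above: the "over" operation is a one-liner per color channel, and the numbers from step 3 (130, 58, 146) can be reproduced directly. This is a minimal sketch; the function name and the tuple-based color handling are illustrative choices, not from the lecture.

```python
def over(src_rgb, src_alpha, dst_rgb):
    """Porter-Duff 'over': blend a source color with alpha over what is
    already in the frame buffer: C_out = A_S * C_S + (1 - A_S) * C_B."""
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))

background = (0, 0, 0)
object1 = ((200, 100, 20), 1.0)     # fully opaque
object2 = ((100, 40, 200), 0.7)     # semi-transparent

# Back-to-front: draw object 1 over the background, then object 2 over that.
buffer = over(*object1, background)   # -> (200.0, 100.0, 20.0)
buffer = over(*object2, buffer)       # -> (130.0, 58.0, 146.0)
print(buffer)
```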
