Summary

This document is about the 3D pipeline, covering topics such as the history of 3D, coordinate systems, and triangles. It explains how triangles are calculated, shaded, and used to visualize 3D information.

Full Transcript

How did we get here

In T02 - the 3D Pipeline we were introduced to how the 3D pipeline works. However, the foundations of using computers to calculate 3D environments were laid well before computers were around.

History

Euclid (c. 300 BCE) laid the foundations of the solid geometry that we still use to this day. (Math is kind of awesome like that.) Throughout the millennia, mathematicians pushed this further and further, to the point that, when we figured out how to do this on a computer, the math was already there. We just needed to be able to visualize it. And to visualize 3D information, we first need a way to store it.

Coordinates

Of course, we need a system to place solid geometry in a space. Along comes Descartes with coordinate geometry, which lets us define geometry in 3D space. But why 3? For that we have to look at the origins of coordinates in the first place. Place names are, in general, a fine way of finding the way, but for sailors, cartographers, astronomers, and others it became more and more important to have numbers to calculate with. So longitude and latitude became two numbers able to define a point on a map: Greenwich sits at longitude 0 and the equator at latitude 0.

We use the same technique to define a point on any flat surface, like a graph. Descartes used X and Y as the letters for the horizontal axis and the vertical axis respectively. Two coordinates means two dimensions, giving us 2D. When talking about 3D, however, we need a third coordinate. In world coordinates this corresponds to the height of an object, but in most applications it is known as the depth, adding to the height (Y) and width (X).

Euclidean space - common names:

Cartesian coordinate   Common name   Map equivalent
X                      Width         Longitude
Y                      Height        Latitude
Z                      Depth         Height

Now we have a way to store the location data of the vertices that make up our 3D meshes.

Fun read: Analytic geometry - Wikipedia

Remember, in the previous chapters we talked about how the axes can differ per software. This stems from a simple disagreement: should screen up (Y) be the basis for computer 3D up, or should we follow the real-world counterpart where Z is up? Just bear in mind which convention your software uses.

Getting Triangular

https://www.youtube.com/watch?v=T5seU-5U0ms

Ed Catmull and Fred Parke created the first ever 3D render using (basically) the same technique that is used to this day in all polygonal 3D renders. Even so, a lot of modelling techniques and meshes consist of quads; to this day quads make it easier to predict the behavior of subdivided and animated meshes, and they give a more fluent modelling workflow. However, a computer needs a triangle to render. Why? Good question.

The things we can do with triangles

By using triangles, we have a lot of data to work with and a lot of control over how a triangle is rendered on our screen. First and foremost: how do we get a triangle to show up on screen?

Calculating a triangle

To render a triangle on our screen we need three things. First, the space with its world origin (our 0,0,0 spot). Second, our triangle: a trio of 3-dimensional points, or vertices, connected to each other by edges, creating a single triangular face between them. There is always only one way to connect three points, and it always results in a planar triangle. Third, having this triangle, we need to convert it from 3D data to a 2D screen, since our interfaces, renders, and games are shown on screens.
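As a minimal sketch of what that stored data can look like (the names and layout here are illustrative, not taken from any particular engine), a triangle is just three vertices defined relative to the world origin:

```python
# A minimal sketch of storing a triangle: illustrative names only,
# not the data layout of any particular engine.

from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

# Three vertices positioned relative to the world origin (0, 0, 0).
# Connecting them pairwise with edges gives exactly one planar face.
triangle = (
    Vec3(-1.0, 0.0, 0.0),
    Vec3( 1.0, 0.0, 0.0),
    Vec3( 0.0, 1.5, 0.0),
)
```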
Converting from 3D to 2D is done by the vertex shader in real-time renderers (path tracing instead traces rays through the pixels), which (amongst other things) uses some fancy math. The idea is older than you might think: a certain Albrecht Dürer conducted experiments on how to precisely get a 3D image onto a 2D surface. Nowadays we use computerized vector math to apply one of two projection techniques, perspective or orthographic, where perspective emulates what we see in real life (a small code sketch of both follows at the end of this section).

The near clip plane and far clip plane are the start and end points of what the camera renders (to avoid having to check things all the way to infinity). This results in the often-observed camera clipping, where objects too close to or too far from the camera disappear. Now we have a triangle on screen.

Normal

A nice (and essential) thing about triangles is that a triangle is always planar, or more simply put, flat. If we add any other point to the face, we can't guarantee the face stays planar, but 3 points in 3 dimensions can never create a shape that locally exists in more than 2 dimensions. Because the face is planar, we can take the cross product of two of its edge vectors to find the normal vector, or simply normal, of the face, which stands perpendicular to the flat face.

https://sketchfab.com/3d-models/triangle-normal-8fa0a24df0b745fc82a3e06e6c502ab4

So the normal is a vector that shows us in which direction the face is pointing. We can now, for instance, take the direction from the triangle towards a light in the scene and calculate the angle between that light direction and the normal.

https://sketchfab.com/3d-models/triangle-normal-light-75ad6ab4c4cb4d63b8658341cf6339f0

If we now say "make this triangle brighter as the angle gets smaller", shading starts to happen. Let's say we create an ico sphere in our 3D scene: we can visualize the normal direction of each face, and if we do this with every triangle in our view we can draw the result on our screen. By combining the normal and the light direction we can see how bright each triangle should be. This technique evolved into what we call Lambertian reflectance, which describes the light that reflects from a surface independently of the viewing angle (sketched in code below).

Normal interpolation

Having geometry show up on our 2D screens is nice and all, but our geometry looks kind of digital. We would have to increase the polycount drastically before our mesh looks smooth; even with more than 60k triangles we can still see the individual faces on a sphere. But instead of increasing the polycount, what if we could adjust the way our normals get calculated? Well, good news: we can!

Gouraud Shading

Gouraud shading was the first technique where we basically shade a single triangle by taking the relative average of the vertex normals. Yes, vertices also get a normal: the average of the neighboring face normals. You can see that vertex normals give different results depending on how many faces they are connected to, since they take the average of those face normals (in cyan). Now, if during rendering we interpolate a gradient between these vertex normals, we get a smoothly shaded result. However, the faceted edge effect (though very typical of an old-fashioned style of games) is not desired in a realistic setting.
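Returning to the projection step above: here is a minimal sketch of both projection techniques, assuming a camera at the world origin looking down the +Z axis (conventions differ per engine, and real renderers use full 4x4 projection matrices):

```python
# Sketch of the two projection techniques. Assumes the camera sits at the
# world origin looking down +Z; real renderers use 4x4 projection matrices.

NEAR_CLIP, FAR_CLIP = 0.1, 1000.0  # illustrative clip plane distances

def project_perspective(x, y, z, focal_length=1.0):
    # Perspective divide: the farther away a point is (larger z),
    # the closer it lands to the screen centre, like in real life.
    return (focal_length * x / z, focal_length * y / z)

def project_orthographic(x, y, z):
    # Orthographic projection drops the depth axis entirely,
    # so size on screen does not change with distance.
    return (x, y)

def in_view(z):
    # Points outside the near/far clip planes are not rendered,
    # which is what causes the familiar camera clipping.
    return NEAR_CLIP <= z <= FAR_CLIP

# The same offset at twice the distance projects at half the size:
print(project_perspective(1.0, 1.0, 2.0))  # (0.5, 0.5)
print(project_perspective(1.0, 1.0, 4.0))  # (0.25, 0.25)
```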
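And for the Normal section above, a sketch of the two vector operations it describes: the cross product of two edges to get the face normal, and the dot product against the light direction for Lambertian brightness (plain tuples stand in for a proper vector type):

```python
# Sketch: face normal via a cross product, Lambertian brightness via a
# dot product. Plain tuples stand in for a proper vector type.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def normalize(v):
    length = dot(v, v) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def face_normal(v0, v1, v2):
    # The cross product of two edge vectors is perpendicular to the
    # (always planar) face; winding order decides which way it points.
    return normalize(cross(sub(v1, v0), sub(v2, v0)))

def lambert(normal, light_dir):
    # cos(angle) between unit vectors: a smaller angle between the
    # normal and the light direction means a brighter triangle.
    return max(0.0, dot(normal, light_dir))

n = face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(n)                            # (0.0, 0.0, 1.0)
print(lambert(n, (0.0, 0.0, 1.0)))  # 1.0: facing the light head-on
```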
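A sketch of the two ingredients of Gouraud shading as described above: averaging face normals into vertex normals, then blending the brightness computed at each vertex across the face (the barycentric weights here are a stand-in for the rasterizer's built-in interpolation):

```python
# Sketch of Gouraud shading: vertex normals are the average of the
# neighbouring face normals, and per-vertex brightness is interpolated
# across the triangle. Barycentric weights (u, v, 1-u-v) stand in for
# the rasterizer's built-in interpolation.

def normalize(v):
    length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def vertex_normal(neighbour_face_normals):
    # Average the normals of every face that shares this vertex.
    sx = sum(n[0] for n in neighbour_face_normals)
    sy = sum(n[1] for n in neighbour_face_normals)
    sz = sum(n[2] for n in neighbour_face_normals)
    return normalize((sx, sy, sz))

def gouraud(brightness0, brightness1, brightness2, u, v):
    # Brightness is computed once per vertex (e.g. with lambert()
    # above) and then simply blended across the face.
    return u * brightness0 + v * brightness1 + (1.0 - u - v) * brightness2
```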
A game that used Gouraud shading: https://www.youtube.com/watch?v=tP5dy_7588E

The technique stuck around for a while (Gouraud Shading Games - Giant Bomb), and even though good research was being done on making it look better, one other interpolation technique gave us much better results.

Phong shading

Simply interpolating a gradient between vertex normals wasn't quite enough, so what if we interpolated the vertex normals themselves? We get Phong shading: a technique where even a very low-poly object can appear to have a smooth surface. In essence, we just check how close we are to each vertex of the triangle and take the relative average of those normals (a sketch follows at the end of this section).

We can change the way our renderer shades this in most 3D software by enabling smooth shading, or by marking edges as sharp or smooth. In Blender specifically, we enable smooth shading and mark our sharp edges so the interpolation does not cross those edges. This is an artist-friendly approach, since we can now define which edges are smooth and which are sharp.

Normal Direction

Be aware that the normal points to the outside of the face, and one of the reasons we keep geometry manifold is to get a correct calculation of our shading. If our normals point in the wrong direction (mostly to the inside of an object), the calculations the shader makes may be wrong and the results may not be what we want. A weird artifact can appear on the sphere once we set the shading to smooth. In most software we can overlay the face direction and see inverted faces highlighted, and with the normal direction overlay we can indeed see the normal pointing inwards. The operation to fix this is called flipping normals.
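A sketch of the Phong idea described above: instead of blending per-vertex brightness, blend the vertex normals themselves and light each pixel with the blended normal (again using barycentric weights for how close the pixel is to each vertex):

```python
# Sketch of Phong shading: interpolate the vertex normals themselves,
# then compute lighting per pixel from the blended normal. (u, v, 1-u-v)
# are barycentric weights: how close the pixel is to each vertex.

def normalize(v):
    length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def phong_normal(n0, n1, n2, u, v):
    w = 1.0 - u - v
    blended = (u * n0[0] + v * n1[0] + w * n2[0],
               u * n0[1] + v * n1[1] + w * n2[1],
               u * n0[2] + v * n1[2] + w * n2[2])
    # Renormalize: blending unit vectors shortens them, and the
    # lighting math expects a unit-length normal.
    return normalize(blended)
```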
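Finally, a sketch of one simple way to detect an inward-pointing normal. This heuristic assumes a closed, roughly convex mesh like the sphere in the example; real 3D packages use more robust checks:

```python
# Sketch of detecting and fixing an inward-pointing normal. Heuristic:
# on a closed, roughly convex mesh, the normal should point away from
# the object's centre. Real 3D packages use more robust checks.

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def fixed_normal(face_center, normal, object_center):
    outward = (face_center[0] - object_center[0],
               face_center[1] - object_center[1],
               face_center[2] - object_center[2])
    if dot(normal, outward) < 0.0:
        # The normal points towards the inside: flip it.
        return (-normal[0], -normal[1], -normal[2])
    return normal
```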
