JavaScript Tutorial PDF
Summary
This document provides a beginner-friendly overview of JavaScript, covering fundamental concepts such as variables, data types, strings, arrays, and basic control structures, along with asynchronous programming, the browser object model and DOM, colour theory, and the HTML5 canvas API. Short code examples illustrate most topics; the material is best suited for learners already familiar with basic computer science principles.
Full Transcript
Motivation
- HTML & CSS alone produce only static web content; they cannot create dynamic content that changes based on user input or activity.
- Web browsers include an engine for generating dynamic content using JavaScript.

What can be done with JavaScript?
- Interact with the HTML and CSS content of a document and respond to events.
- Run, in the browser, methods and functions from the standard JavaScript APIs defined in W3C standards.
- Use numerous APIs in addition to the DOM and selector APIs: multimedia, drawing, animating, geolocation, webcam, etc.
- Work with remote data from a remote HTTP web server.

Deploying JavaScript in HTML
Include it in an HTML file, like:
- External script: a <script> element whose src attribute points to an external .js file, which could be located on a web server.
- Embedded script: a <script> section embedded in the HTML.
- The Chrome browser includes a REPL (read-evaluate-print loop) console for evaluating JS code.
- A .js file can also run in a runtime environment, such as Node.js.
- Use the <noscript> tag to handle situations when scripts have been disabled or a certain browser does not support them.

JavaScript – Overview & Definitions
- Derived from the ECMAScript standard.
- Originally designed to run on Netscape Navigator.
- Not related to Java.
- A JavaScript interpreter embedded in web browsers adds dynamic behaviour to static web content.
- Scripts can run when an HTML document loads, accompany a form to process input as it is entered, be triggered by events that affect a document, or dynamically create multimedia resources.

JavaScript types can be divided into two categories: primitive types and object types.

JavaScript Primitive Types
- Number: 5, 1.24, 1.1e5, +Infinity and −Infinity (the representable magnitudes run from roughly 5×10^−324 up to about 1.8×10^308), NaN (Number('abc') returns NaN) -> stored using floating-point notation.
- String: 'Hello'
- Boolean: true, false
- Null: null
- Undefined: undefined
boolean, if, delete, var, function, etc. are reserved words of the JavaScript language. There are no behaviours or methods associated with primitive data types.
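The primitive types and the typeof operator above can be explored directly in a console; a short runnable sketch:

```javascript
// Primitive values and what typeof reports for them
const num = 1.1e5;            // numbers are 64-bit floats
const str = 'Hello';
const bool = true;
const nothing = null;
let notAssigned;              // declared but never assigned

console.log(typeof num);         // "number"
console.log(typeof str);         // "string"
console.log(typeof bool);        // "boolean"
console.log(typeof notAssigned); // "undefined"
console.log(typeof nothing);     // "object"  (a well-known historical quirk)

// Number('abc') cannot be parsed, so it yields NaN
console.log(Number('abc'));               // NaN
console.log(Number.isNaN(Number('abc'))); // true
```

Note that typeof null reporting "object" is a long-standing quirk of the language, not an indication that null is an object type.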
Some primitive data types have wrapper objects.

JavaScript Variables & Constants
The first character of a variable name can only be "$", "_", "a" to "z", or "A" to "Z".
count = 42;   // a variable can hold a number, a string, or a boolean
valleyQuote = "the text";
precision = 0.42;
condition = true;
const DEN_MAX = 5;
- A variable has two scopes: 1) global and 2) block scope.
- The var keyword is used to declare a local variable or object. If the var keyword is omitted, a global variable is declared.
- The typeof operator is useful for determining the type of a variable depending on its value.
- The keyword new is used to create instances of wrapper objects.

JavaScript Strings
- JS strings are series of 16-bit unsigned integers, each integer representing a character.
- 'I am a teacher' vs "I'm a teacher"
- Escape sequences use a backslash: '\n \t \\'
- JS strings are immutable => any manipulation results in a new string.
phoneNumber = "(040) 745-789044";
name = "Ion Popescu"; // the same as: name = 'Ion Popescu';
firstName = 'Ion';
lastName = 'Popescu';
firstName + ' ' + lastName;            // 'Ion Popescu'
fullName = firstName.concat(' ', lastName);

String Methods
opinion = 'Beyonce is not the best singer in the world';
// searches for 'is', returns its position or -1 if not found
isIndex = opinion.indexOf('is');           // returns 8
// extract substring from 0 (inclusive) to isIndex + 2 (exclusive)
part1 = opinion.substring(0, isIndex + 2); // "Beyonce is"
part2 = opinion.substring(14);             // " the best singer in the world"
fact = part1 + part2;                      // And now we are good.
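The string calls above can be verified end to end; note that every method returns a new string, since strings are immutable:

```javascript
// Strings are immutable: every method returns a new string
const opinion = 'Beyonce is not the best singer in the world';

const isIndex = opinion.indexOf('is');            // position of 'is' -> 8
const part1 = opinion.substring(0, isIndex + 2);  // "Beyonce is"
const part2 = opinion.substring(14);              // " the best singer in the world"
const fact = part1 + part2;

console.log(fact); // "Beyonce is the best singer in the world"

// concat() behaves like +, also producing a new string
const fullName = 'Ion'.concat(' ', 'Popescu');
console.log(fullName); // "Ion Popescu"
console.log(opinion);  // the original string is unchanged
```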
JavaScript Booleans
Any value can be used as a Boolean in JS (when used with logical operators):
- Falsy values: null, undefined, 0, NaN, '' (the empty string)
- Truthy values: 'true' (any non-empty string), 6 (any non-zero number), ...

JavaScript Null & Undefined
- Null: a value that can be assigned to variables to represent "no value".
occupation = null;
occupation;   // null
- Undefined: the variable was declared but no value has been assigned.
var salary;
salary;       // undefined
- An integer literal starting with 0 (zero) is considered in JavaScript an octal value.

Arrays
Declaration of an array can use:
- An array literal – creates the array and initializes the elements:
numbers = [1, 2, 3];
moreNumbers = ["four", "five", "six", "seven"];
mya = ['car', 14, false];   // values can be of any type
- An array constructor – the new keyword:
numArray = new Array(1, 2, 3);
numbers[0];       // 1
numbers.length;   // 3
numbers[5];       // undefined
numbers.push(4);  // adds 4 to the end; returns the new length, 4
numbers.pop();    // removes 4 and returns it
Arrays are JavaScript objects!

Array Indices
- Use a zero-based indexing scheme.
- Arrays grow or shrink dynamically by adding and removing elements => the length property is one greater than the largest integer index in the array.
- Writing an array value by its index, arrayVar[index], will:
  - add an element at that index, if index >= arrayVar.length;
  - create a mapping from the index to the element, if index < arrayVar.length.

Error objects -> their properties contain information about the error:
- message – a descriptive message
- name – the type of error: TypeError, RangeError, URIError, ...
Custom error objects can be created:
throw new Error("Only positive values are permitted");

Conditionals and Loops
if (condition) {
  // do something
} else {
  // something else
}
When comparing mixed values such as '5' and 5, JS first of all tries to convert strings to numbers.

Adding a property or method to an object affects the current working instance only, without affecting the static object prototype. To add a new function to the template for the object, modify the prototype of the object.
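A runnable sketch of the array behaviour and the falsy values listed above:

```javascript
// Array literals, push/pop, and the length property
const numbers = [1, 2, 3];
console.log(numbers[0]);      // 1
console.log(numbers.length);  // 3
console.log(numbers[5]);      // undefined (no such index)

numbers.push(4);              // append; length becomes 4
console.log(numbers.pop());   // 4 (removed and returned)

// Writing past the end grows the array: length = largest index + 1
numbers[5] = 'six';
console.log(numbers.length);  // 6

// Boolean() shows which values are falsy
const falsy = [null, undefined, 0, NaN, ''].map(Boolean);
console.log(falsy);           // [false, false, false, false, false]
console.log(Boolean('true'), Boolean(6)); // true true
```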
Debugging JavaScript
- Write messages to the devtools console of your browser.
- Inside the web page – using the addXToThePage() function.

Asynchronous JavaScript (1)
- JavaScript programs in a web browser are typically event-driven.
- JavaScript-based servers typically wait for client requests to arrive over the network before they do anything.
- There are no features of the core language that are themselves asynchronous, BUT JavaScript provides powerful features for working with asynchronous code: promises, async, await, and for/await.

Asynchronous JavaScript – Common Methods
The duality between synchronous and asynchronous behaviour is a fundamental concept in a single-threaded event-loop model such as JavaScript's.
I. Using callbacks
II. Using promises
III. Using async and await
IV. Using asynchronous iteration

I. Asynchronous Programming with Callbacks
- Derived from a programming paradigm known as functional programming.
- Widely used to add interactivity to HTML documents.
- A callback function in JavaScript is a function passed to another function and executed inside the function it was passed to. That other function then invokes ("calls back") your function when some condition is met or some (asynchronous) event occurs.
- The callback function notifies you of the condition or event, and may receive arguments that provide additional details.
Common forms of callback-based asynchronous programming:
1. Events – event-driven JavaScript programs register callback functions for specified types of events in specified contexts.
2. Timers
3. Network events – JavaScript running in the browser can fetch data from a web server (XMLHttpRequest class).

Events
Client-side JavaScript is almost universally event-driven => rather than running some kind of predetermined computation, programs typically wait for the user to do something and then respond to the user's actions.
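A minimal sketch of the callback pattern just described. loadConfig is a hypothetical function invented for illustration, following the common error-first callback convention:

```javascript
// A function that accepts a callback and "calls back" when done.
// Here the work is synchronous for clarity; with timers or network
// requests the callback would fire later, asynchronously.
function loadConfig(name, callback) {
  if (name !== 'app') {
    // error-first convention: the first argument reports failure
    callback(new Error('unknown config: ' + name), null);
    return;
  }
  callback(null, { name: 'app', debug: false });
}

let result;
loadConfig('app', function (err, config) {
  if (err) throw err;
  result = config;   // the callback receives the details as arguments
});
console.log(result.name); // "app"

// Timer-based callbacks use the same idea, but fire later:
setTimeout(function () {
  console.log('one second elapsed');
}, 1000);
```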
The web browser generates an event when the user presses a key on the keyboard, moves the mouse, clicks a mouse button, or touches a touchscreen device. Event-driven JavaScript programs register callback functions for specified types of events in specified contexts, and the web browser invokes those functions whenever the specified events occur. These callback functions are called event handlers or event listeners, and they are registered with addEventListener().

JAVASCRIPT IMPLEMENTATIONS
Basic DOM Model for Browsers

THE WINDOW OBJECT
- Window object properties are available in the global scope => browser APIs use it as an access point.
- The Window object's properties drastically diverge between browsers due to differing vendor implementations.
- The global scope of Window => all variables and functions declared globally (using var) become properties and methods of the Window object.
- innerWidth and innerHeight indicate the size of the page viewport inside the browser window (minus borders and toolbars).
- outerWidth and outerHeight return the dimensions of the browser window itself.
- The browser window can be resized using the resizeTo(x, y) and resizeBy(x, y) methods.
- To navigate to a particular URL and open a new browser window:
window.open("http://www.ase.ro/", "topFrame");

THE LOCATION OBJECT
Location provides information about the document that is currently loaded in the window, as well as general navigation functionality. The location object is unique: it is a property of both window and document; both window.location and document.location point to the same object.

PROPERTY NAME – DESCRIPTION
- location.hash – The URL hash (the pound sign followed by zero or more characters), or an empty string if the URL doesn't have a hash.
- location.host – The name of the server and the port number, if present.
- location.hostname – The name of the server without the port number.
- location.href – The full URL of the currently loaded page. The toString() method of location returns this value.
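The addEventListener() registration API can be sketched outside the DOM as well: the standard EventTarget interface (available in browsers and in recent Node.js versions) uses the same methods that DOM elements expose:

```javascript
// addEventListener / removeEventListener on a plain EventTarget.
// In the browser the target would be a DOM element or window;
// the registration API is identical.
const target = new EventTarget();

let clicks = 0;
function onClick(event) {
  clicks += 1;             // the handler receives an event object
  console.log(event.type); // "click"
}

target.addEventListener('click', onClick);
target.dispatchEvent(new Event('click'));
target.dispatchEvent(new Event('click'));
console.log(clicks); // 2

// Removing requires the same arguments that were used when adding
target.removeEventListener('click', onClick);
target.dispatchEvent(new Event('click'));
console.log(clicks); // still 2
```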
- location.pathname – The directory and/or filename of the URL.
- location.port – The port of the request, if specified in the URL. If a URL does not contain a port, this property returns an empty string.
- location.protocol – The protocol used by the page. Typically "http:" or "https:".
- location.search – The query string of the URL. It returns a string beginning with a question mark.
- location.username – The username specified before the domain name.
- location.password – The password specified before the domain name.
- location.origin – The origin of the URL. Read only.

THE NAVIGATOR OBJECT
The navigator object's properties offer insight into system capabilities.
PROPERTY/METHOD – DESCRIPTION
- battery – Returns a BatteryManager object to interact with the Battery Status API.
- connection – Returns a NetworkInformation object to interact with the Network Information API.
- cookieEnabled – Indicates whether cookies are enabled.
- credentials – A CredentialsContainer to interact with the Credentials Management API.
- deviceMemory – The amount of device memory in gigabytes.
- doNotTrack – The user's do-not-track preference.
- geolocation – A Geolocation object to interact with the Geolocation API.
- getVRDisplays() – Returns an array of every VRDisplay instance available.
- getUserMedia() – Returns the stream associated with the available media device hardware.
- languages – An array of all the browser's preferred languages.
- locks – A LockManager object to interact with the Web Locks API.
- mediaCapabilities – A MediaCapabilities object to interact with the Media Capabilities API.
- mediaDevices – The available media devices.
- maxTouchPoints – The maximum number of supported touch points for the device's touchscreen.
- onLine – Indicates whether the browser is connected to the Internet.
- oscpu – The operating system and/or CPU on which the browser is running.
- permissions – A Permissions object to interact with the Permissions API.
- platform – The system platform on which the browser is running.
- plugins – Array of plug-ins installed on the browser.
  (In Internet Explorer only, this is an array of all <embed> elements on the page.)
- registerProtocolHandler() – Registers a website as a handler for a particular protocol.
- requestMediaKeySystemAccess() – Returns a Promise which resolves to a MediaKeySystemAccess object.
- sendBeacon() – Asynchronously transmits a small payload.

THE SCREEN OBJECT
It provides details about the client's display outside the browser window.
PROPERTY – DESCRIPTION
- availHeight – The pixel height of the screen minus system elements such as the Windows taskbar (read only).
- availLeft – The first pixel from the left that is not taken up by system elements (read only).
- availTop – The first pixel from the top that is not taken up by system elements (read only).
- availWidth – The pixel width of the screen minus system elements (read only).
- colorDepth – The number of bits used to represent colours; for most systems, 32 (read only).
- height – The pixel height of the screen.
- left – The pixel distance of the current screen's left side.
- pixelDepth – The bit depth of the screen (read only).
- top – The pixel distance of the current screen's top.
- width – The pixel width of the screen.
- orientation – Returns the screen orientation as specified in the Screen Orientation API.

HTML as a Tree: DOM
- Document Object Model (DOM) – a structured tree representation of a web page.
- The HTML of every web page is turned into a DOM representation by the browser.
- The DOM provides the way to programmatically access the HTML structure in JavaScript.
- The root DOM object is accessed through the document object; all JavaScript DOM objects are generated from the tree.
- Each element within the page is referred to as a node.
- A node includes references to its child nodes (childNodes / children).
- A node includes references to its parent node (parentNode) and the next node (nextSibling).
- Nodes include methods to manage child nodes: appendChild(node), removeChild(node), ...
- Nodes have specific methods and properties depending on the related HTML tag.

What does the DOM look like? (HTML and its corresponding DOM tree.)

Retrieving a DOM Node
Nested objects are accessed using dot notation.
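The location properties tabulated above mirror the fields of the standard URL class (available in browsers and Node.js), so they can be explored with a plain URL string; the example URL below is invented for illustration:

```javascript
// The WHATWG URL class exposes the same components as window.location.
const loc = new URL('https://user:pass@www.ase.ro:8080/dir/page.html?q=js#top');

console.log(loc.hash);     // "#top"
console.log(loc.host);     // "www.ase.ro:8080"
console.log(loc.hostname); // "www.ase.ro"
console.log(loc.pathname); // "/dir/page.html"
console.log(loc.port);     // "8080"
console.log(loc.protocol); // "https:"
console.log(loc.search);   // "?q=js"
console.log(loc.username); // "user"
console.log(loc.password); // "pass"
console.log(loc.origin);   // "https://www.ase.ro:8080"
console.log(loc.href);     // the full URL, same as loc.toString()
```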
To access the first image in a document, using an index:
document.images[0]
By element name: document.getElementsByTagName(tagName) -> retrieves any element or elements you specify as an argument:
paragraphs = document.getElementsByTagName("p");
for (i = 0; i < paragraphs.length; i++) {
  // do something
}
By id attribute value: document.getElementById()
By class attribute value: document.getElementsByClassName()
By selector: querySelectorAll(cssSelector) allows accessing nodes of the DOM based on a CSS style selector:
buttons = document.querySelectorAll("button");
buttons[0].innerText;

Accessing a DOM Node's Attributes
Accessing attributes:
- elem.attributeName – refer to an individual attribute by name
- elem.attributes – the attribute collection
- elem.hasAttribute(name) – check whether the attribute is present
- elem.getAttribute(name) – get the attribute value
- elem.setAttribute(name, value) – set the attribute value
- elem.removeAttribute(name) – remove the attribute
Accessing CSS attributes:
- elem.style.cssAttributeName
- elem.className – the class string as used in the HTML document
- elem.classList – an object that allows accessing the class list (add / remove / toggle / contains methods)

Accessing a DOM Node's Attributes – Examples
getAttribute(attributeName) -> gets the value of an attribute attached to an element node:
img = document.getElementsByTagName("img")[0];
img.getAttribute("src");
setAttribute(attributeToChange, newValue):
img.setAttribute("alt", "New photo");
innerHTML – for accessing and changing the text and markup inside an element:
h2 = document.getElementsByTagName("h2")[0];
h2.innerHTML = "Accommodation";
Note: innerHTML content is re-parsed on every assignment and is therefore slower. There is no validation of innerHTML content, so it is easy to insert rogue code into the document and make the web page unstable.
Modify or remove a CSS style on an element by using the style property.
It works similarly to applying a style with the inline style attribute:
document.querySelector("h2").style.color = "green";

Adding and Removing Elements
Create a new element:
el = document.createElement(htmlTagName) -> the element remains floating in JavaScript until we add it to the document:
li = document.createElement("li");
li.innerHTML = "New element";
Adding nodes:
- To a parent node: parent.appendChild(childNode);
ul = document.querySelector("ul");
ul.appendChild(li);
- To insert an element before another element: parentNode.insertBefore(insertedNode, node2);
- To replace one node with another: parentNode.replaceChild(newChild, replacedNode);
- To remove a node (not only elements) or an entire branch from the document tree: parent.removeChild(nodeToRemove);

DOM Events & Event Flow
- Events are actions performed either by the user or by the browser.
- An event handler (event listener) is a function that is called in response to an event. Event handlers have names beginning with "on".
- The DOM event flow has three phases:
  1. the event capturing phase – provides the opportunity to intercept events;
  2. at the target;
  3. the event bubbling phase – allows a final response to the event.
DOM Events
- Accessing the source element – this
- Event parameters – the event object:
  - General: target, type, preventDefault()
  - Keyboard: key, keyCode, altKey, ctrlKey, shiftKey
  - Mouse: pageX, pageY, button, altKey, ctrlKey, shiftKey
- Events:
  - General: load (window), DOMContentLoaded (document)
  - Keyboard: keydown, keypress, keyup
  - Mouse: mouseenter, mouseleave, mousemove, mouseup, mousedown, click, dblclick

DOM Event Handlers
Assigning event handlers can be accomplished in a number of different ways:
- By HTML event handlers: assigned using an HTML attribute with the name of the event handler …
- By node properties: assign a function to an event handler property:
btn = document.getElementById("myBtn");
btn.onclick = function() { console.log("Clicked"); };
- By addEventListener() and removeEventListener() -> multiple event handlers can be added. Arguments:
  - the event name to handle,
  - the event handler function, and
  - a Boolean value indicating whether to call the event handler during the capture phase (true) or during the bubble phase (false).
document.getElementById("test").addEventListener("click", function() { console.log("The message"); });
Event handlers added via addEventListener() can be removed only by using removeEventListener() and passing in the same arguments as were used when the handler was added:
element.removeEventListener(type, handlerFunction);

Intrinsic Events in Web Apps
Scripts may be assigned to a number of elements through intrinsic event attributes: onload, onclick, onmouseover, onfocus, onkeyup, onsubmit, onselect, and so on.

Properties of Light
"Light" = a narrow frequency band of the electromagnetic spectrum:
- Red: 3.8×10^14 Hz
- Violet: 7.9×10^14 Hz
A ray of light contains many different waves with individual frequencies. The associated distribution of wavelength intensities per wavelength is referred to as the spectrum of a given ray or light source.
Colour Fundamentals
- Luminance = a measure of the light strength that is actually perceived by the human eye.
- Hue = the nuance: red, blue, green, ...
- Saturation = the intensity or purity of a hue.
- Brightness = a subjective, psychological measure of perceived intensity; the relative degree of black or white mixed with a given hue. Humans only perceive relative brightness.

Colour Properties
Psychological colour characteristics:
- Dominant frequency (hue, colour)
- Brightness = total light energy
- Purity (saturation): how close a light appears to be to a pure spectral colour, such as red
- Chromaticity: used to refer collectively to the two properties describing colour characteristics: purity and dominant frequency

Colours can be produced by a light source (natural or artificial) or by chemical pigments.

Intuitive colour concepts: shades, tints and tones in a scene can be produced by mixing colour pigments (hues) with white and black pigments:
- Shades – add black pigment to the pure colour; the more black pigment, the darker the shade.
- Tints – add white pigment to the original colour, making it lighter as more white is added.
- Tones – produced by adding both black and white pigments.

Colour Systems
- Additive model (RGB) – light-based model – used by emissive devices: TV sets, monitors, projectors, lighted displays, photo cameras, scanners.
- Subtractive model (CMYK) – pigment-based model – used for prints (paper-based) and
stained glass.

Complementary Colours
(The slide's colour-wheel figure reduces to the following facts.)
- In additive mixing, yellow = red + green; when blue and yellow light are added together, they produce white light.
- Pairs of complementary colours: blue and yellow; green and magenta; red and cyan.

Colour Models
- A colour model is a method for explaining the properties or behaviour of colour within some particular context.
- Combine the light from two or more sources with different dominant frequencies and vary the intensity of light to generate a range of additional colours.
- Primary colours: 3 primaries are sufficient for most purposes; they are the hues that we choose for the sources.
- The colour gamut is the set of all colours that we can produce from the primary colours.
- A complementary colour pair is two primary colours that produce white: red and cyan, green and magenta, blue and yellow.
- The range of colours that can be produced by a specific method = colour space. A colour space is an abstract mathematical model that describes how colours can be represented/produced.

The RGB Colour Model
Basic theory of the RGB colour model: the tristimulus theory of vision. It states that human eyes perceive colour through the stimulation of three visual pigments in the cones of the retina: red, green and blue. The model can be represented by the unit cube defined on the R, G and B axes.

RGB Colour Model (cont.)
- Uses the 3 primary colours (red, green, blue) to generate the colours displayed on a monitor.
- By adjusting the intensity of each primary component, all colours from the visible light spectrum can be generated.
- E.g. colours with 8 bits/channel use values in the range 0–255 for each pixel => 256³ = 16,777,216 colours. These are the possible colour combinations of any pixel in an eight-bit-per-channel graphic (a 24-bit display).
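The 8-bits-per-channel claim above is simple arithmetic that can be checked directly:

```javascript
// 8 bits per channel -> 256 levels for each of R, G and B.
const levels = 2 ** 8;                    // 256
const colours = levels ** 3;              // every (R, G, B) combination
console.log(colours);                     // 16777216
console.log(colours === 256 * 256 * 256); // true
```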
CMYK Colour Model
- The CMYK (Cyan, Magenta, Yellow, blacK) colour model (process colour) is a subtractive colour model used in colour printing; it describes the printing process itself.
- Colour models for hard-copy devices, such as printers, produce a colour picture by coating paper with colour pigments; the colour patterns on the paper are obtained by reflected light, which is a subtractive process.
- The CMY parameters: a subtractive colour model can be formed with the primary colours cyan, magenta and yellow. The unit-cube representation for the CMY model has white at the origin.
- Printing images supposes transforming the image from the RGB model to the CMYK model.

CMYK Colour Model (cont.)
BlacK was added to this model for the following reasons:
- Black ink is the cheapest of the four (C, M, Y, K) inks.
- The time required for drying the ink is reduced.
- The text printed on paper must be legible.
Colour intensity is indicated as a percentage (0–100%).
- Spot colours (fluorescent, lacquered colours).
- Derived colour model CcMmYyK – used by inkjet printers (photos).

CMYK Colour Model (cont.)
Transformation between the RGB and CMY colour spaces (channel values in [0, 1]):
From RGB to CMY:
  C = 1 − R
  M = 1 − G
  Y = 1 − B
From CMY to RGB:
  R = 1 − C
  G = 1 − M
  B = 1 − Y

HSB Colour Model
- The HSB (Hue, Saturation, Brightness) model is also known as the HSV (Hue, Saturation, Value) model.
- Together with HSL (Hue, Saturation, Lightness), these are the most common cylindrical-coordinate representations of colours.
- The two representations rearrange the geometry of RGB in an attempt to be more intuitive and perceptually relevant than the Cartesian (cube) representation.

HSB Colour Model (cont.)
Describes three fundamental characteristics of colours:
- Hue = the colour reflected or transmitted through an object. It is measured as the colour's location on the colour wheel (0°–360°).
- Saturation = the intensity or colour purity.
It is the ratio of gray to the hue, measured as a percentage; the values are in the range 0% (gray) to 100% (fully saturated).
- Brightness = the lightness or darkness of the colour, measured as a percentage from 0% (black) to 100% (white).

HSB Colour Model
- Hue is the most obvious characteristic of a colour.
- Saturation is the purity of a colour: high-chroma colours look rich and full, low-chroma colours look dull and grayish. Sometimes chroma is called saturation.
- Brightness is the lightness or darkness of a colour. Sometimes light colours are called tints, and dark colours are called shades.

HSB Colour Model (cont.)
- Interfaces for selecting colours often use a colour model based on intuitive concepts rather than a set of primary colours.
- The model is derived by relating the HSB parameters to directions in the RGB cube.
- A colour hexagon is obtained by viewing the RGB cube along the diagonal from the white vertex to the origin.

Transformation RGB -> HSB
To move from RGB space to HSB space: can we use a matrix? No, the transformation is non-linear.
With min = the minimum of R, G, B and max = the maximum of R, G, B (channels in [0, 1]):

h = 0                                if max = min
h = 60·(g − b)/(max − min) + 0       if max = r and g ≥ b
h = 60·(g − b)/(max − min) + 360     if max = r and g < b
h = 60·(b − r)/(max − min) + 120     if max = g
h = 60·(r − g)/(max − min) + 240     if max = b

s = 0 if max = 0, otherwise (max − min)/max
v = max

HLS Colour Model
- Another model based on intuitive colour parameters.
- The colour space has a double-cone representation.
- It uses hue (H), lightness (L) and saturation (S) as parameters.

YIQ and Related Colour Models
- YIQ is the NTSC colour encoding for forming the composite video signal.
- YIQ parameters: Y = luminance, calculated from the RGB components: Y = 0.299·R + 0.587·G + 0.114·B
- Chromaticity information (hue and purity) is incorporated in the I and Q parameters, respectively, calculated by subtracting the luminance from the red and blue components of the colour.
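The piecewise RGB-to-HSB formula above translates directly into code; a sketch assuming channel values in [0, 1]:

```javascript
// RGB (each channel in [0, 1]) -> HSB/HSV, following the piecewise
// formula above. Returns h in degrees, s and v in [0, 1].
function rgbToHsb(r, g, b) {
  const max = Math.max(r, g, b);
  const min = Math.min(r, g, b);
  const d = max - min;

  let h;
  if (d === 0) h = 0;  // achromatic: hue is undefined, use 0
  // the "+360 when g < b" branch is folded into one modulo step
  else if (max === r) h = (60 * (g - b) / d + 360) % 360;
  else if (max === g) h = 60 * (b - r) / d + 120;
  else h = 60 * (r - g) / d + 240;

  const s = max === 0 ? 0 : d / max;
  return { h, s, v: max };
}

console.log(rgbToHsb(1, 0, 0));       // { h: 0, s: 1, v: 1 }    pure red
console.log(rgbToHsb(0, 1, 0));       // { h: 120, s: 1, v: 1 }  pure green
console.log(rgbToHsb(0, 0, 1));       // { h: 240, s: 1, v: 1 }  pure blue
console.log(rgbToHsb(0.5, 0.5, 0.5)); // { h: 0, s: 0, v: 0.5 }  mid gray
```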
I = R − Y
Q = B − Y
Luminance (brightness) is separated from colour because we perceive brightness ranges better than colour.

YIQ and Related Colour Models (cont.)
Transformation between the RGB and YIQ colour spaces.
From RGB to YIQ:
  [Y]   [ 0.299   0.587   0.114]   [R]
  [I] = [ 0.701  −0.587  −0.114] · [G]
  [Q]   [−0.299  −0.587   0.886]   [B]
From YIQ to RGB (obtained from the inverse of the RGB-to-YIQ matrix):
  [R]   [1   1        0     ]   [Y]
  [G] = [1  −0.509  −0.194  ] · [I]
  [B]   [1   0        1     ]   [Q]

Lab Colour Model
- The Lab model uses human perception of colours to generate them.
- It contains numerical values that describe all the nuances that can be recognized by the human eye:
  L = lightness (0–100); a = the green–red component (+127 to −128); b = the blue–yellow component (+127 to −128).
- It is a device-independent model: it describes how the colour looks, not how the colour is produced.

Grayscale Colour Model
- The grayscale model generates gray tones.
- An image created using 8 bits/pixel may contain 256 (8-bit gray) tones; each pixel has a brightness value between 0 (black) and 255 (white).
- The values can also be measured as percentages of black ink intensity (0% = white, 100% = black).

Comparison: RGB, CMY, YIQ, HSB, HSL, CMYK.

Colour Selection and Applications
A graphical package provides colour capabilities in a way that aids users in making colour selections. For example, it contains sliders and colour wheels for the RGB components instead of numerical values.
Colour application guidelines:
- Displaying a blue pattern next to a red pattern can cause eye fatigue => prevent this by separating the colours or by using colours from one half or less of the colour hexagon in the HSV model.
- A smaller number of colours produces a better-looking display.
- Tints and shades tend to blend better than pure hues.
- Gray, or the complement of one of the foreground colours, is usually best for the background.
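The YIQ matrices above reduce to a few scalar equations; a sketch (channels in [0, 1]) that also checks the round trip:

```javascript
// RGB <-> YIQ using the matrices above (channel values in [0, 1]).
function rgbToYiq(r, g, b) {
  const y = 0.299 * r + 0.587 * g + 0.114 * b;
  return { y, i: r - y, q: b - y };  // I = R - Y, Q = B - Y
}

function yiqToRgb(y, i, q) {
  return {
    r: y + i,                        // from the first row of the inverse
    g: y - 0.509 * i - 0.194 * q,    // second row
    b: y + q,                        // third row
  };
}

const { y, i, q } = rgbToYiq(0.2, 0.5, 0.8);
const back = yiqToRgb(y, i, q);
console.log(back); // approximately { r: 0.2, g: 0.5, b: 0.8 } (round trip)
```

The round trip for G is only approximate because the inverse-matrix entries 0.509 and 0.194 are rounded to three decimals, as in the slides.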
COLOUR PROPERTY
CSS syntax allows you to configure colours in a variety of ways:
- colour name: p { color: red; }
- hexadecimal colour value: p { color: #FF0000; }
- hexadecimal shorthand colour value (web-safe colours): p { color: #F00; }
- decimal colour value (RGB triplet): p { color: rgb(255,0,0); }
- HSL (Hue, Saturation, and Lightness), since CSS3: p { color: hsl(0, 100%, 50%); } http://www.w3.org/TR/css3-color/#hsl-color
https://meyerweb.com/eric/css/colors/ – configuring colour values using different notations
https://color.adobe.com/create – colour selection wheel

Colour Wheel
- A colour wheel = a visual representation of colours, arranged according to their chromatic relationships.
- Primary colours: colours at their basic essence; those colours that cannot be created by mixing others.
- Secondary colours: those colours achieved by a mixture of two primaries.
- Tertiary colours: those colours achieved by a mixture of primary and secondary hues.
- https://color.adobe.com/create

Colour Conversion
When does it occur?
- when displaying an image on a monitor or a projector,
- when capturing an image with a scanner, a digital camera or a camcorder,
- when printing the image.
It is achieved with some quality loss. It is done by a colour management system, which uses colour profiles; a profile is a mathematical description of the colour space of a device.

What is a <canvas>?
- Canvas is one of the "Flash killer" features of HTML5.
- The canvas element provides scripts with a resolution-dependent bitmap canvas, which can be used for rendering graphs, game graphics, or other visual images on the fly.
- It is fully interactive; every object drawn on canvas can be animated at 60 fps.
- It allows adding audio and video with special effects, and displaying a webcam stream.
- All major browsers support canvas, though the support and implementation differ from browser to browser.
- The canvas JavaScript drawing API supports different kinds of shapes: lines, rectangles, ellipses, arcs, curves, text, images.
Some drawing styles need to be specified that affect the way shapes are drawn (colour, line width, shadows, etc.). An alpha channel for drawing in transparent mode is also supported, as well as many advanced drawing modes and global filters (blur, etc.).

Drawing principles
- HTML5 uses a graphic context for all main operations (shape, text, or image).
- The current values (there are default values) of the different properties of the graphic context are taken into account; some are relevant only for certain kinds of shapes or drawing modes.
Drawing modes:
- Immediate mode – executing a call to a drawing method means immediately drawing on the canvas. Methods like strokeRect, strokeText, fillRect, fillText and drawImage are used for drawing shapes, text or images directly.
- Path/buffered mode – fill a buffer, then execute all buffered orders at once to enable optimization and parallelism. First send drawing orders to the graphics processor, where they are stored in a buffer; then call methods to draw the whole buffer at once. There are also methods to erase the buffer's content.
- Path drawing mode allows parallelism: with the buffered mode, the graphics processing unit (GPU) of the graphics card will be able to parallelize the computations (modern graphics cards can execute hundreds or thousands of operations in parallel).

Drawing with the Canvas Element
The canvas element is configured with the Canvas 2D Context API (http://www.w3.org/TR/2dcontext2).
Create a canvas HTML element, e.g. <canvas id="MyCanvas">Content</canvas>.
Managing canvas with JavaScript – getting the graphical context reference:
// getting the HTMLCanvasElement from the DOM
canvas = document.getElementById('MyCanvas');
w = canvas.width, h = canvas.height;
// getting the graphical context (a CanvasRenderingContext2D object)
ctx = canvas.getContext('2d');
The context includes methods for drawing.

Summary of path-mode principles
- Call drawing methods that work in path mode, for example call ctx.rect(...) instead of ctx.strokeRect(...) or ctx.fillRect(...)
- Call ctx.stroke() or ctx.fill() to draw the buffer's contents.
- The buffer is never emptied automatically: two consecutive calls to ctx.stroke() will draw the buffer's contents twice!
- It is possible to empty the buffer by calling ctx.beginPath().
- Path drawing is faster than immediate drawing (parallelization is possible).

Dynamically Processing Shapes
Direct drawing – rectangles:
- fillRect(x, y, width, height) – draws a filled rectangle
- strokeRect(x, y, width, height) – draws an outline around the rectangle
- clearRect(x, y, width, height) – wipes the surface
Drawing using paths:
- beginPath() – opens a drawing path
- drawing and moving functions
- closePath() (optional) – closes up the path by drawing a line back to the starting point
- fill() – fills the path, and/or stroke() – draws the lines

Drawing and Moving Functions
Movement:
- moveTo(x, y) – changes the current position
Drawing:
- lineTo(x, y) – draws a line from the current position to the specified point
- rect(x, y, width, height) – draws a rectangle
- arc(x, y, radius, startAngle, endAngle, anticlockwise) – adds an arc
- arcTo(xctrl, yctrl, xPos, yPos, radius)
- Value in radians = value in degrees * (π/180).
- quadraticCurveTo(cp1x, cp1y, epx, epy) – adds a quadratic curve (one control point) -> used to create custom shapes
- bezierCurveTo(cp1x, cp1y, cp2x, cp2y, epx, epy) – adds a Bézier curve (two control points)

Drawing Text
- fillText(string, x, y) – draws filled text -> colour specified by the fillStyle property
- strokeText(string, x, y) – draws outline text -> line colour indicated by the strokeStyle property
- font property – contains the size of the text and one or more fonts
To achieve a 3D (shadow) effect – used for both text and shapes:
- shadowOffsetY – sets or returns the vertical distance of the shadow from the shape
- shadowOffsetX – sets or returns the horizontal distance of the shadow from the shape
- shadowBlur – sets or returns the blur level for shadows
- shadowColor – sets or returns the colour to use for shadows
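Since the arc methods above take radians, the degree conversion is worth a tiny helper (toRadians is a name invented here for illustration, not a canvas API):

```javascript
// Canvas arc angles are radians; degrees convert as value * (Math.PI / 180).
function toRadians(degrees) {
  return degrees * (Math.PI / 180);
}

console.log(toRadians(180)); // 3.14159... (π)
console.log(toRadians(90));  // 1.57079... (π/2)

// Typical use with the path API (requires a canvas 2D context `ctx`):
// ctx.beginPath();
// ctx.arc(100, 75, 50, 0, toRadians(360));  // a full circle
// ctx.stroke();
```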
Canvas Attributes
Attributes:
fillStyle and strokeStyle = "colour" – change the colour used for drawing:
ctx.fillStyle = "#FFA500";
ctx.strokeStyle = "rgba(255,165,0,1)";
lineWidth = size – defines the thickness of the line
lineCap – defines the cap style: butt (default), round and square.
lineJoin – allows joining two lines with three different effects: miter (default), round, bevel
font = "font specification" – sets font features (e.g.: "bold 18px Arial")
textAlign – text position relative to the x coordinate (left, right or center)
textBaseline – position of the text relative to the y coordinate (top, hanging, middle, alphabetic or bottom)
Saving and restoring context attributes using the stack: save() and restore()

Drawing Bézier Curves
JS provides support for drawing Bézier curves:
quadratic (one control point)
cubic (two control points)
Formulae, with t ∈ [0, 1]:
linear: P = (1 − t)P1 + tP2
quadratic: P = (1 − t)²P1 + 2t(1 − t)P2 + t²P3
Computing Bézier curves using de Casteljau's algorithm:
Consider the points P0, P1 and P2
Construct the segments P0P1 and P1P2
For t from 0 to 1:
On each of P0P1 and P1P2, determine the point at a distance proportional to t from the beginning of the segment
A new segment is constructed from the two points obtained, and the process is repeated; the point at parameter t on the final segment lies on the curve

2D Transformations - Canvas
Basic transformation operations: translation, rotation, scaling:
translate(x, y) – translates the coordinate system by the specified number of pixels
rotate(angle) – rotates the coordinate system by the specified angle (in radians)
scale(x, y) – scales the coordinate system by the specified factors
Every transformation method affects all elements drawn after it has been called. This is because they all act directly on the 2D rendering context, not on the shape you're drawing.

2D Transformations
▪ Translation
▪ translate(x, y)
▪ changes the origin.
▪ x indicates the horizontal distance to move, and y indicates how far to move the grid vertically.
▪ Rotation
▪ rotate(angle)
▪ Rotates the canvas clockwise around the current origin by angle radians.
▪ The rotation center point is always the canvas origin, unless it has been changed using the translate() method.
▪ Note: angles are in radians.
▪ To convert from degrees: radians = (Math.PI/180)*degrees.
▪ rectangle 1: rotated around the canvas origin
▪ rectangle 2: rotated around the center of the rectangle itself with the help of the translate() method.
▪ Scaling
▪ scale(x, y)
▪ Scales the canvas units by x horizontally and by y vertically.
▪ Both parameters are real numbers. Values smaller than 1.0 reduce the unit size and values above 1.0 increase the unit size. Values of 1.0 leave the units the same size.
▪ Using negative numbers you can do axis mirroring.

2D Rendering Context Transformation Matrix
A new 2D rendering context contains a fresh transformation matrix, called the identity matrix.
To manipulate the transformation matrix of the 2D rendering context:
To compose a transformation (by multiplication) – multiplies the existing transformation matrix by the supplied values, with a cumulative effect:
| x' |   | a c e |   | x |
| y' | = | b d f | * | y |
| 1  |   | 0 0 1 |   | 1 |
i.e. x' = a*x + c*y + e and y' = b*x + d*y + f:
context.transform(a,b,c,d,e,f)
To replace the actual transformation: setTransform(a,b,c,d,e,f)
resetTransform() – to return to the standard coordinate system.

Drawing Images
Motivation: to perform 2D rendering context methods and transformations on an image that wasn't originally created in canvas.
Image source:
an HTMLImageElement object – created using the Image() constructor / an <img> element / an image provided through a URL
another canvas element – referred to by the CanvasImageSource type
a video element
Importing images into a canvas = a two-step process:
1. Get a reference to an image source – using getImageData / create an image object using new Image().
2. Draw the image on the canvas using the drawImage() function.
Note: For security reasons getImageData() requires a web server to function.
putImageData() – paints data from a given ImageData object onto the canvas, where ImageData = the underlying pixel data of an area of a canvas object, retrieved from a canvas using the getImageData() method.
Note: this method is not affected by the canvas transformation matrix.

ImageData Interface
▪ It is created using the creator methods on the CanvasRenderingContext2D object associated with a canvas:
▪ CanvasRenderingContext2D.createImageData(): creates a new, blank ImageData object with the specified dimensions. All of the pixels in the new object are transparent black.
▪ CanvasRenderingContext2D.getImageData(sx, sy, sw, sh): returns an ImageData object representing the underlying pixel data for the area of the canvas denoted by the rectangle which starts at (sx, sy) and has width sw and height sh.
▪ It can also be used to set a part of the canvas by using:
▪ void ctx.putImageData(imagedata, dx, dy): paints data from the given ImageData object onto the bitmap at the given coordinates.

ImageData Interface – the data array stores one RGBA quadruple per pixel, e.g. for four pixels:
[255,0,0,255, 0,0,0,255, 255,255,255,255, 203,53,148,255]

Drawing, Resizing and Cropping Images
drawImage() function:
Drawing an image without scaling: drawImage(image, x, y)
Drawing & resizing an image: drawImage(image, x, y, width, height) => the quality of the scaled image may be noticeably reduced
Drawing & cropping an image: drawImage(image, sx, sy, sw, sh, dx, dy, dw, dh)

Accessing Pixel Values
getImageData(x, y, w, h) – extracts an area of the image as an ImageData object.
▪ iterating over all the pixels in a canvas:
imageData = context.getImageData(0, 0, canvas.width, canvas.height);
for (y = 0; y < canvas.height; y++) {
  for (x = 0; x < canvas.width; x++) {
    i = (y * canvas.width * 4) + x * 4;
    red = imageData.data[i]; // [0..255]
    green = imageData.data[i+1]; // [0..255]
    blue = imageData.data[i+2]; // [0..255]
    transparency = imageData.data[i+3]; // [0..255]
  }
}

Interactivity Through Events

Animation is the art of creating the illusion of motion through the use of computer technology.
2D Animation = figures are created or edited on the computer using 2D bitmap graphics, or created and edited using 2D vector graphics.
3D Animation = figures are digitally modeled and manipulated by an animator. The animator usually starts by creating a 3D polygon mesh to manipulate.
Uses and advantages of animation:
Surprise or wonder
Portrayal of wondrous creatures, action by inanimate objects
Combine humans with inhuman characters
Portray real places, events, etc. that cannot be filmed (efficiently)
Present content that would be considered inappropriate for live actors to engage in
Provide a viewpoint that cannot effectively be presented otherwise
Allow for enhanced interaction with the video
Real-time interactivity
Save money – but only for limited animation
Gain increased control over the video
Animation techniques:
Keyframe animation
Motion capture animation
Procedural animation
Colour change
Show/Hide/Toggle

Animation Techniques | Keyframe Animation
A keyframe is a drawing (image) of a key moment in an animation sequence, where the motion is at its extreme; in-betweens fill the gaps between keyframes.
In computer animation, animators set up parameter values for keyframes; software interpolates parameter values between keyframes for the in-betweens.
Different interpolation methods create different timing:
Linear interpolation
Spline interpolation

Animation Techniques | Motion capture animation
What is motion capture? Sampling and recording motion of humans, animals, and inanimate objects as 3D data.
Data can be applied to 3D computer models.
Faster to produce animation than keyframing (if an established production pipeline exists).
Secondary motions and all the subtle motions are captured, providing more realism.

Animation Techniques | Procedural animation
Motion is generated by a procedure, a set of rules.
The animator specifies rules and initial conditions and runs a simulation.
Provides more realism in natural phenomena than keyframing.
Frees animators from generating complex objects and keyframing a large number of objects.

The "12 Laws" or 12 basic principles of animation are a set of rules to adhere to for consistent and beautiful animation. First outlined by Ollie Johnston, the directing animator of Pinocchio, and Frank Thomas of Snow White and the Seven Dwarfs fame, animation studios the world over look back to these tenets from the golden age of cartoons.
The most important principle is "squash and stretch", the purpose of which is to give a sense of weight and flexibility to drawn objects. Living flesh distorts during motion. Exaggerated deformations will emphasize motion and impact. Although objects deform like rubber, they must maintain volume while being squashed and stretched. A bouncing ball will squash or elongate on impact and stretch vertically as it leaves the point of impact.
Anticipation is used to prepare the audience for an action, and to make the action appear more realistic. By exaggerating this action, the animator can guide the viewer's eyes. The formula for most animations is anticipation, action, and reaction.
Staging is the clear presentation of an idea. The purpose of staging is to direct the audience's attention, and make it clear what is of greatest importance in a scene. The animator can use the camera viewpoint, the framing of the shot, and the position of the characters to create a feeling or strengthen understanding.
Straight Ahead animation means drawing the frames in sequence. This leads to spontaneous motion.
It works well with abstract animation and fluids.
Pose To Pose is the more often used animation technique. It requires the animator to create strong poses (keyframes) first and then add the in-between frames.
Follow Through – when the main body of the character stops, all other parts continue to catch up to the main mass. Follow Through is the action that follows the main action. It is the opposite of anticipation. When a baseball bat hits the baseball, it does not stop abruptly. A boxer does not freeze at the moment a punch lands.
Overlapping action means that all elements do not stop at the same time. A good example of overlapping action is the movement of an animal's tail.
Slow In and Slow Out – also known as ease in and ease out. Slow-ins and slow-outs soften the action, making it more lifelike. Most motion starts slowly, accelerates, and then slows again before stopping. Gravity has an effect on slow in / slow out. When a ball bounces, it increases in speed as it gets closer to the ground. It decreases in speed at the top of the arc.
Almost all natural motion, with few exceptions (such as the animation of a mechanical device), follows an arc or slightly circular path. Arcs give animation more natural action and better flow.
Secondary action adds to and enriches the main action. Secondary actions are also minor actions that occur due to a major action. Most people blink their eyes when they turn their head. Facial expressions are secondary actions. Secondary action adds more dimension to the character animation, supplementing and/or reinforcing the main action.
Timing refers to the number of drawings or frames for a given action, which translates to the speed of the action on film. Timing can imply weight. Light objects have less resistance and move much quicker than heavy objects. Actors work with their timing to get the maximum impact from their lines. Speed can imply emotion. A fast walk may mean happiness and a slow walk may mean depression.
An animator must determine how many frames are needed for a given movement. A stopwatch or video reference can be helpful.
Exaggeration is used to increase the readability of emotions and actions. Animation is not a subtle medium. Individual exaggerated poses may look silly as stills but add dramatic impact when viewed for a split second. Animators should use exaggeration to increase understanding of feeling, but be careful not to over-exaggerate everything.
Solid Drawing: to get maximum feeling from the audience, animated characters must be drawn or modeled precisely. Proper drawing and modeling can reveal a character's weight, personality, and emotion. Proper drawing and modeling are needed to give the character proper depth and balance. When creating animated characters, it is a good idea not to add too much detail.
Appeal in a cartoon character corresponds to what would be called charisma in an actor. Animated characters need to have a unique personality and a wide range of emotions (happy, excited, fearful, embarrassed, angry, scared, etc.). Character flaws are actually a good thing: audiences can be sympathetic to characters that have a flaw or two. Complex personalities and moral/ethical dilemmas add to character appeal.

CSS Animation
@keyframes
animation-name
animation-duration
animation-delay
animation-iteration-count
animation-direction
animation-timing-function
animation-fill-mode

Animation
Some older browsers need specific prefixes (-webkit-) to understand the animation properties.
Can be implemented by changing HTML/CSS properties.
Using setInterval – an example:
let start = Date.now(); // remember start time
let timer = setInterval(function() {
  // how much time passed from the start?
  let timePassed = Date.now() - start;
  if (timePassed >= 2000) {
    clearInterval(timer); // finish the animation after 2 seconds
    return;
  }
  // draw the animation at the moment timePassed
  draw(timePassed);
}, 20);

// as timePassed goes from 0 to 2000,
// left gets values from 0px to 400px
function draw(timePassed) {
  AnimatedObject.style.left = timePassed / 5 + 'px';
}

Instead of:
setInterval(function() {
  animate1();
  animate2();
  animate3();
}, 20)
an animation loop using requestAnimationFrame groups together several independent animations to make the redraw easier, load the CPU less, and look smoother.
The requestAnimationFrame API targets 60 frames per second animation in canvases.
The syntax:
let requestId = requestAnimationFrame(callback);
The callback function runs when the browser is preparing a repaint. (Usually that's very soon, but the exact time depends on the browser.)
The callback gets one argument – the time passed from the beginning of the page load, in milliseconds.
If we make changes to elements in the callback, they will be grouped together with other requestAnimationFrame callbacks and with CSS animations. So there will be one geometry recalculation and repaint instead of many.
requestId can be used to cancel the call: cancelAnimationFrame(requestId);

A general animation function based on requestAnimationFrame:
function animate({timing, draw, duration}) {
  let start = performance.now();
  requestAnimationFrame(function animate(time) {
    // timeFraction goes from 0 to 1
    let timeFraction = (time - start) / duration;
    if (timeFraction > 1) timeFraction = 1;
    // calculate the current animation state
    let progress = timing(timeFraction);
    draw(progress); // draw it
    if (timeFraction < 1) {
      requestAnimationFrame(animate);
    }
  });
}
Where:
duration – the total animation time (in ms).
timing – the function to calculate animation progress. Gets a time fraction from 0 to 1.
draw – the function to draw the animation.
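The timing argument is a pure function, so it can be sketched and exercised without a browser; the easing-function names below are illustrative, not part of any API:

```javascript
// A few timing (easing) functions: each maps a time fraction
// in [0, 1] to an animation progress in [0, 1].
const linear = (t) => t;
const quad = (t) => t * t;                    // ease-in (slow start)
const easeOut = (t) => 1 - (1 - t) * (1 - t); // ease-out (slow finish)

// Compute the progress values a draw() callback would receive,
// given a duration in ms and a fixed sampling step.
function progressSamples(timing, duration, stepMs) {
  const samples = [];
  for (let time = 0; time <= duration; time += stepMs) {
    let timeFraction = time / duration;
    if (timeFraction > 1) timeFraction = 1;
    samples.push(timing(timeFraction));
  }
  return samples;
}
```

With timing = quad the motion starts slowly and accelerates, which is the "slow in" principle from the animation section above expressed as code.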
Asynchronous JavaScript (2)
JavaScript programs in a web browser are typically event-driven.
JavaScript-based servers typically wait for client requests to arrive over the network before they do anything.
There are no features of the core language that are themselves asynchronous, BUT JavaScript provides powerful features for working with asynchronous code: promises, async, await, and for/await.

Asynchronous JavaScript – Common Methods
I. Using Callbacks
II. Using Promises
III. Using async and await
IV. Using Asynchronous Iteration

I. Asynchronous Programming with Callbacks
Derived from a programming paradigm known as functional programming.
Widely used to add interactivity to HTML documents.
Callback functions in JavaScript: a function passed to another function, and executed inside the function it was passed to. That other function then invokes ("calls back") your function when some condition is met or some (asynchronous) event occurs.
The callback function:
notifies you of the condition or event,
may include arguments that provide additional details.
Common forms of callback-based asynchronous programming:
1. Events – event-driven JavaScript programs register callback functions for specified types of events in specified contexts.
2. Timers
3. Network events – JavaScript running in the browser can fetch data from a web server: the XMLHttpRequest class.

Events and Event Handlers
A listener determines whether an element should act on a particular event.
A handler is a function that is called when the event occurs.
Assigning event handlers can be accomplished in a number of different ways:
By HTML event handler attributes: assigned using an HTML attribute with the name of the event handler.
Using node properties: assign a function to an event handler property:
btn = document.getElementById("myBtn");
btn.onclick = function() { console.log("Clicked"); };
By event handler methods: addEventListener() and removeEventListener() -> multiple event handlers can be added.
Arguments:
- the event name to handle,
- the event handler function, and
- a Boolean value indicating whether to call the event handler during the capture phase (true) or during the bubble phase (false).
document.getElementById("test").addEventListener("click", function() { console.log("The message"); });
Event handlers added via addEventListener() can be removed only by using removeEventListener() and passing in the same arguments as were used when the handler was added:
element.removeEventListener(type, function);

I.2 - JavaScript Timing Events / Timers
Used to run some code after a certain amount of time has elapsed.
The window object allows execution of code at specified time intervals. These time intervals are called timing events.
The two key methods are:
myTimer1 = setTimeout(function, milliseconds[, param1, param2, ...]) – runs the function once, after milliseconds have elapsed.
myTimer2 = setInterval(function, milliseconds[, param1, param2, ...]) – runs the function repeatedly.
To stop the invocations:
clearTimeout(myTimer1) – stops running the function specified in setTimeout().
clearInterval(myTimer2) – stops running the function specified in the setInterval() method.

Image Characteristics
Hue = the colour reflected by an object or transmitted through it.
Saturation = "the purity" of a colour; the gray level of the colour.
Brightness.
Contrast = the difference between the darkest colours and the lightest colours.
Histogram:
Shows how the pixels are distributed in an image, as a function of the number of pixels at each level of colour intensity.
It provides information about the proportion of shadows (left) to midtones (middle) and highlights (right).
It provides an overview of the image tonality.

Image Types
1. Raster image
2. Vector image

Raster Image (Bitmap Image)
A raster image represents the image as a matrix of dots, called pixels (picture elements).
A pixel is the smallest element of resolution on a computer screen (screen resolution).
It is acquired from external sources.
Bitmap images are resolution-dependent and generate large file sizes.
A raster image is stored in the image file as a sequence of bits -> the colour code for each point.
Bitmap image advantages:
The bitmap can be more photorealistic.
We can set the color of every individual pixel in the image.
Disadvantages:
Bitmaps are memory intensive, and the higher the resolution, the larger the file size.
When an image is enlarged, the individual colored squares become visible and the illusion of a smooth image is lost to the viewer. Changing the visualization scale => loss in image quality. To reduce this effect and to improve the resulting image, interpolation algorithms are used.
The file size depends on the image size and on the colour depth.

Bitmap Image
The image must be mapped to the pixel matrix; problem: jagged edges.

Bitmap Image Quality
Three factors:
Image Size – refers to the height and width of the image, measured in inches, centimeters, pixels, or any other unit of measure. If the image size is measured in dots or pixels, then you know exactly how much image data exists, because a 300 pixel by 500 pixel image contains 150,000 pixels no matter how many pixels you designate per inch.
Color Depth (bit depth) – refers to the number of bits used to describe the color of a single pixel. It determines the number of colors that can be displayed at one time (it is measured in bits per pixel).
Resolution – refers to the number of pixels per inch in the image. It also refers to the sharpness and clarity of an image.
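Since uncompressed file size is determined by the pixel dimensions and the colour depth, the calculation can be sketched as a one-line formula (the function name is illustrative):

```javascript
// Uncompressed bitmap size in bytes:
// width * height pixels, each stored with `bitDepth` bits;
// divide by 8 to convert bits to bytes.
function bitmapFileSizeBytes(width, height, bitDepth) {
  return (width * height * bitDepth) / 8;
}

// The 300 x 500 pixel image above, stored at 24-bit colour:
const rgbSize = bitmapFileSizeBytes(300, 500, 24); // 450000 bytes
```

Doubling the bit depth doubles the file size, which is why colour depth is listed alongside image size as a factor in bitmap quality and storage cost.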
Overview
Images
Pixel Filters
Neighborhood Filters
Dithering

Image as a Function
We can think of an image as a function f: ℝ² → ℝ, where f(x, y) gives the intensity at position (x, y). Realistically, we expect the image only to be defined over a rectangle, with a finite range: f: [a,b] x [c,d] → [0,1].
A color image is just three functions pasted together. We can write this as a "vector-valued" function:
f(x, y) = ( r(x, y), g(x, y), b(x, y) )

Image Processing
Define a new image g in terms of an existing image f. We can transform either the domain or the range of f.
Range transformation: what kinds of operations can this perform? Each pixel value x is mapped to a new value, for example:
Original: x | Darken: x - 128 | Lower contrast: x / 2 | Nonlinear lower contrast: ((x / 255.0) ^ 0.33) * 255.0
Invert: 255 - x | Lighten: x + 128 | Raise contrast: x * 2 | Nonlinear raise contrast: ((x / 255.0) ^ 2) * 255.0
Some operations preserve the range but change the domain of f. What kinds of operations can this perform? Still other operations operate on both the domain and the range of f.

Convolution filters – generic filters for image processing. Convolution filters can be used for blurring, sharpening, embossing, edge detection, etc.
A filter is generated by doing a convolution between a kernel and an image, e.g. the kernel:
 0.2  0.1 -1.0
 0.3  0.0  0.9
 0.1  0.3 -1.0
The filter takes values from around the pixel of interest. Kernels are typically 3x3 matrices. For instance: to maintain the brightness of the image, the sum of the matrix values should be one.
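The range (point) transformations listed above can be sketched as per-value functions, with the results clamped back into [0, 255]; the helper names are illustrative:

```javascript
// Clamp a value into the valid 8-bit pixel range.
const clamp = (v) => Math.min(255, Math.max(0, v));

// Point operations on a single channel value x in [0, 255].
const darken = (x) => clamp(x - 128);
const lighten = (x) => clamp(x + 128);
const invert = (x) => 255 - x;
const lowerContrast = (x) => clamp(x / 2);
const raiseContrast = (x) => clamp(x * 2);

// Apply a point operation to every value of a flat pixel array
// (e.g. one channel extracted from an ImageData.data array).
function mapPixels(data, op) {
  return data.map(op);
}
```

Because each output value depends only on the corresponding input value, these operations transform the range of f while leaving its domain untouched.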
Properties of convolution:
Commutative: a ∗ b = b ∗ a
Associative: (a ∗ b) ∗ c = a ∗ (b ∗ c)
Cascade system: passing f through h1 and then h2 (f → h1 → h2 → g) is equivalent to a single convolution: g = f ∗ (h1 ∗ h2) = f ∗ (h2 ∗ h1)

Filters – Applying a convolution filter
f – original image, g – filter, f ∗ g = output

Filters
▪ Color Filter
▪ Red filter: r' = r; g' = 0; b' = 0
▪ Negative
▪ r' = 255 – r; g' = 255 – g; b' = 255 – b
▪ Grayscale
▪ r' = g' = b' = (r + g + b) / 3
▪ r' = g' = b' = 0.299 * r + 0.587 * g + 0.114 * b – weighted method (luminosity)
Filters (2)
▪ Brightness
▪ r' = r + value; if (r' > 255) r' = 255 else if (r' < 0) r' = 0;
▪ g' = g + value; if (g' > 255) g' = 255 else if (g' < 0) g' = 0;
▪ b' = b + value; if (b' > 255) b' = 255 else if (b' < 0) b' = 0;
▪ Thresholding
▪ v = (0.2126*r + 0.7152*g + 0.0722*b >= threshold) ? 255 : 0; r' = g' = b' = v
▪ Sepia
▪ r' = 0.393r + 0.769g + 0.189b
g' = 0.349r + 0.686g + 0.168b
b' = 0.272r + 0.534g + 0.131b
▪ More examples:
▪ https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Pixel_manipulation_with_canvas

Convolution filters
▪ Allow implementing generic filters for image processing: blurring, sharpening, embossing, edge detection.
▪ Determine the new value for a pixel based on the values of nearby pixels in the original image.
▪ Example: blur filter
[ 1/9, 1/9, 1/9,
  1/9, 1/9, 1/9,
  1/9, 1/9, 1/9 ]
▪ Note: to maintain the brightness of the image, the sum of the matrix values should be one.
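A minimal sketch of applying a 3x3 kernel such as the blur filter above to a single-channel image; border pixels are copied unchanged to keep the sketch short, and the names are illustrative:

```javascript
// Apply a 3x3 kernel to a grayscale image stored as a 2D array.
// Border pixels are copied through unchanged for brevity.
function convolve3x3(image, kernel) {
  const h = image.length, w = image[0].length;
  const out = image.map((row) => row.slice()); // copies the borders
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      let sum = 0;
      for (let ky = -1; ky <= 1; ky++) {
        for (let kx = -1; kx <= 1; kx++) {
          sum += image[y + ky][x + kx] * kernel[ky + 1][kx + 1];
        }
      }
      out[y][x] = sum;
    }
  }
  return out;
}

const blur = [
  [1 / 9, 1 / 9, 1 / 9],
  [1 / 9, 1 / 9, 1 / 9],
  [1 / 9, 1 / 9, 1 / 9],
];
```

On a uniform region the blur kernel leaves values unchanged, because its entries sum to one — exactly the brightness-preserving property noted above.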
Convolution filter examples (3x3 kernels):
Emboss:
 -2 -1  0
 -1  1  1
  0  1  2
Sharpen:
  0 -1  0
 -1  5 -1
  0 -1  0
Edge detection – Laplacian:
 -1 -1 -1
 -1  8 -1
 -1 -1 -1
Edge detection – Sobel (horizontal):
 -1  0  1
 -2  0  2
 -1  0  1
Gaussian blur (divided by 16):
  1  2  1
  2  4  2
  1  2  1

Common Digital Image File Formats
BMP – Microsoft bitmap (.bmp) – used in Microsoft Windows – known as device independent bitmap (DIB)
TIFF – Tagged Image File Format (.tif) – used for faxing images (amongst other things)
JFIF/JPEG – Joint Photographic Experts Group (.jpg) – useful for storing photographic images
GIF – Graphics Interchange Format (.gif) – used a lot on web sites
PNG – Portable Network Graphics (.png) – web graphics file format
ICO – icon resource file
WebP – designed by Google. It is used for both lossless and lossy compression. By reducing image file size, it speeds up web page loading.
BPG – a new image format created to replace the JPEG format to cope with quality and file size issues. Its file size is much smaller than JPEG's, with a high compression ratio. It can be displayed in most web browsers (via a JavaScript decoder). It supports all color spaces and lossless compression. It also supports various metadata.
JPS – a JPS file is actually a JPEG file, used for stereoscopic images. A stereo image has two copies of the same image arranged side by side, with slight variations in lighting or perspective. This image format enables viewers to see a 3D effect from 2D photos in one of three ways.

Image File Formats Comparison

Calculate Digital Image File Size
Example 1 – A full screen graphic resolution (640 x 480 pixels) at 8-bit color will yield the following file size:
(640 x 480 x 8) / 8 = 307200 bytes
Example 2 – A full screen graphic resolution (320 x 240 pixels) with 16-bit colors will yield the following file size:
(320 x 240 x 16) / 8 = 153600 bytes

An Introduction
▪ Compression – the process of coding that will effectively reduce the total number of bits needed to represent certain information.
A General Data Compression Scheme
▪ There are two major categories of compression algorithms:
▪ Lossless compression
▪ Lossy compression
▪ If the compression and decompression processes induce no information loss, then the compression scheme is lossless; otherwise, it is lossy.
▪ Compression algorithms used in multimedia are usually asymmetrical – the compression process takes longer than the decompression one.

Lossy vs. Lossless Compression

Image File Compression
Lossless Compression
Lossless compression algorithms reduce image file size without losing image quality. They do not compress to as small a file as a lossy compression algorithm does.
Lossy Compression
A lossy compression method is one where compressing data and then decompressing it retrieves data that is different from the original, but is close enough to be useful in some way. Lossy compression is most commonly used to compress multimedia resources.

Compression ratio
Compression ratio – measures the effectiveness of a compression algorithm:
compression ratio = B0 / B1
B0 – number of bits before compression
B1 – number of bits after compression

Lossless Compression
▪ Lossless compression uses an efficient encoding to reduce the file size, while preserving all of the original data. When the file is decompressed it will be identical to the original file.
▪ Examples of commonly used lossless compression algorithms:
▪ Run-Length Encoding (RLE),
▪ Shannon-Fano algorithm,
▪ Huffman coding,
▪ Lempel–Ziv–Welch (LZW),
▪ Arithmetic coding.

Lossy Compression
▪ Lossy compression reduces the size of the original file and some data is lost. Lossy compression is not an option for text files.
▪ It exploits the limits of human perception – it is often possible to maintain high-quality images or sounds with less data than was originally present.
▪ Examples of common lossy compression algorithms:
▪ JPEG – images;
▪ MPEG – sound and video.
▪ MP3 compression – analyzes the sound file and discards data that is not critical for high-quality playback. It removes frequencies above the range of human hearing.

Run-Length Coding
Memoryless source: an information source that is independently distributed; the value of the current symbol does not depend on the values of the previously appeared symbols.
Instead of assuming a memoryless source, Run-Length Coding (RLC) exploits memory present in the information source.
Rationale for RLC: if the information source has the property that symbols tend to form continuous groups, then such a symbol and the length of the group can be coded.
▪ One of the simpler strategies to achieve lossless compression.
▪ Can be used to compress bitmapped image files. Bitmapped images can easily become very large because each pixel is represented with a series of bits that provide information about its color. RLE generates a code to "flag" the beginning of a run of pixels of the same color. That color information is then recorded just once for the run. In effect, RLE tells the computer to repeat a color for a given number of adjacent pixels rather than repeating the same information for each pixel over and over. The RLE compressed file will be smaller, but it will retain all the original image data – it is "lossless".

Variable-Length Coding (VLC)
Shannon-Fano Algorithm – a top-down approach:
1. Sort the symbols according to the frequency count of their occurrences.
2. Recursively divide the symbols into two parts, each with approximately the same number of counts, until all parts contain only one symbol.

An example: coding of "HELLO"
Frequency count of the symbols in "HELLO":
Symbol: H E L O
Count:  1 1 2 1

Coding tree for HELLO by Shannon-Fano.
Result of performing Shannon-Fano on HELLO:
Symbol | Count | log2(1/pi) | Code | # of bits used
L      | 2     | 1.32       | 0    | 2
H      | 1     | 2.32       | 10   | 2
E      | 1     | 2.32       | 110  | 3
O      | 1     | 2.32       | 111  | 3
TOTAL # of bits: 10

Another coding tree for HELLO by Shannon-Fano.
Result of performing Shannon-Fano on HELLO (alternative tree):
Symbol | Count | log2(1/pi) | Code | # of bits used
L      | 2     | 1.32       | 00   | 4
H      | 1     | 2.32       | 01   | 2
E      | 1     | 2.32       | 10   | 2
O      | 1     | 2.32       | 11   | 2
TOTAL # of bits: 10

Huffman Coding
ALGORITHM – Huffman Coding: a bottom-up approach
1. Initialization: put all symbols on a list sorted according to their frequency counts.
2. Repeat until the list has only one symbol left:
1) From the list pick two symbols with the lowest frequency counts. Form a Huffman subtree that has these two symbols as child nodes and create a parent node.
2) Assign the sum of the children's frequency counts to the parent and insert it into the list such that the order is maintained.
3) Delete the children from the list.
3. Assign a codeword for each leaf based on the path from the root.

Coding tree for "HELLO" using the Huffman algorithm:
New symbols P1, P2, P3 are created to refer to the parent nodes in the Huffman coding tree. The contents of the list are illustrated below:
After initialization: L H E O
After iteration (a): L P1 H
After iteration (b): L P2
After iteration (c): P3

Properties of Huffman Coding
1. Unique prefix property: no Huffman code is a prefix of any other Huffman code – this precludes any ambiguity in decoding.
2. Optimality: minimum redundancy code – proved optimal for a given data model (i.e., a given, accurate probability distribution):
The two least frequent symbols will have the same length for their Huffman codes, differing only at the last bit.
Symbols that occur more frequently will have shorter Huffman codes than symbols that occur less frequently.
The average code length l for an information source S is strictly less than the entropy η plus one: l < η + 1.

Extended Huffman Coding
Motivation: all codewords in Huffman coding have integer bit lengths. This is wasteful when pi is very large and hence log2(1/pi) is close to 0.
Why not group several symbols together and assign a single codeword to the group as a whole?
Extended alphabet: for alphabet S = {s1, s2,...
, sn}, if k symbols are grouped together, then the extended alphabet S(k) consists of all length-k groups of symbols; the size of the new alphabet S(k) is n^k.

Dictionary-based Coding – Lempel–Ziv–Welch (LZW)
LZW uses fixed-length codewords to represent variable-length strings of symbols/characters that commonly occur together, e.g., words in English text.
The LZW encoder and decoder build up the same dictionary dynamically while receiving the data.
LZW places longer and longer repeated entries into a dictionary, and then emits the code for an element, rather than the string itself, if the element has already been placed in the dictionary.
In real applications, the code length l is kept in the range [l0, lmax]. The dictionary initially has a size of 2^l0. When it is filled up, the code length is increased by 1; this is allowed to repeat until l = lmax. When lmax is reached and the dictionary is filled up, it needs to be flushed (as in Unix compress), or to have the LRU (least recently used) entries removed.

Example: LZW compression for the string "ABABBABCABABBA"
Let's start with a very simple dictionary (also referred to as a "string table"), initially containing only 3 characters, with codes as follows:
Code | String
1    | A
2    | B
3    | C
Now if the input string is "ABABBABCABABBA", the LZW compression algorithm works as follows (S = current string, C = next character):
S   | C   | Output | Code | String
A   | B   | 1      | 4    | AB
B   | A   | 2      | 5    | BA
A   | B   |        |      |
AB  | B   | 4      | 6    | ABB
B   | A   |        |      |
BA  | B   | 5      | 7    | BAB
B   | C   | 2      | 8    | BC
C   | A   | 3      | 9    | CA
A   | B   |        |      |
AB  | A   | 4      | 10   | ABA
A   | B   |        |      |
AB  | B   |        |      |
ABB | A   | 6      | 11   | ABBA
A   | EOF | 1      |      |
The output codes are: 1 2 4 5 2 3 4 6 1. Instead of sending 14 characters, only 9 codes need to be sent (compression ratio = 14/9 = 1.56).

Arithmetic Coding
Arithmetic coding is a more modern coding method that usually outperforms Huffman coding.
Huffman coding assigns each symbol a codeword which has an integral bit length. Arithmetic coding can treat the whole message as one unit.
A message is represented by a half-open interval [a, b) where a and b are real numbers between 0 and 1. Initially, the interval is [0, 1).
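The interval narrowing can be sketched directly from per-symbol probability ranges. The sketch below assumes the terminator symbol $ occupies the last range [0.9, 1.0) — that assignment, like the function and variable names, is an assumption made here for illustration:

```javascript
// Per-symbol cumulative probability ranges, as half-open
// intervals [low, high) within [0, 1).
const ranges = {
  A: [0.0, 0.2], B: [0.2, 0.3], C: [0.3, 0.5], D: [0.5, 0.55],
  E: [0.55, 0.85], F: [0.85, 0.9],
  $: [0.9, 1.0], // terminator; range assumed for this sketch
};

// Narrow the interval [low, high) once per symbol of the message,
// recording each step so the shrinking ranges can be inspected.
function encodeRanges(message, ranges) {
  let low = 0, high = 1;
  const steps = [];
  for (const sym of message) {
    const span = high - low;
    const [rLow, rHigh] = ranges[sym];
    high = low + span * rHigh; // uses the old low and span
    low = low + span * rLow;
    steps.push({ sym, low, high, range: high - low });
  }
  return steps;
}
```

Running encodeRanges("CAEE$", ranges) reproduces the shrinking intervals of the worked "CAEE$" example: any number inside the final interval identifies the whole message.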
When the message becomes longer, the length of the interval shortens and the number of bits needed to represent the interval increases.
Example: Encoding in Arithmetic Coding

Symbol  Probability  Range
A       0.2          [0, 0.2)
B       0.1          [0.2, 0.3)
C       0.2          [0.3, 0.5)
D       0.05         [0.5, 0.55)
E       0.3          [0.55, 0.85)
F       0.05         [0.85, 0.9)
$       0.1          [0.9, 1.0)

(a) Probability distribution of symbols ($ is the terminator; its range [0.9, 1.0) is the one used in the last encoding step below).
Arithmetic Coding: Encode Symbols "CAEE$"
Graphical display of shrinking ranges.
Example: Encoding in Arithmetic Coding

Symbol  Low      High     Range
        0        1.0      1.0
C       0.3      0.5      0.2
A       0.30     0.34     0.04
E       0.322    0.334    0.012
E       0.3286   0.3322   0.0036
$       0.33184  0.33220  0.00036

(c) New low, high, and range generated.
Arithmetic Coding: Encode Symbols "CAEE$"
JPEG Compression
▪ A lossy data compression algorithm.
▪ The degree of compression can be adjusted, allowing a tradeoff between storage size and image quality.
▪ The raster that results from decompression is not guaranteed to be exactly the same as the original.
▪ Some of the data is actually discarded => the algorithm tries to prioritize so that the least perceptually significant data is discarded first.
▪ JPEG can achieve a compression ratio ranging from typically around 10:1 for high-quality to 100:1 for low-quality images.
▪ The JPEG algorithm is well-suited for:
▪ scenes where colors change continuously, without sharp edges;
▪ real-world scenes captured with cameras, which often compress well with JPEG;
▪ computer-generated line drawings and other art, which will become blurred at sharp edges.
Image constructed with increasing JPEG compression from left to right.
JPEG Compression
▪ JPEG Encoding Steps:
1. Colour space transformation
2. Downsampling
3. Block splitting
4. Transformation
5. Quantization
6. Encoding
JPEG Compression - Step 1 - Colour space transformation
▪ The original RGB data is converted to Y′CBCR, consisting of one luma component (Y′), representing brightness, and two chroma components (CB and CR), representing color.
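Returning to the arithmetic-coding example: the low/high table for "CAEE$" is generated by repeatedly narrowing the interval. Below is a minimal sketch of just that narrowing step; the bit-level output and renormalisation used by practical arithmetic coders are omitted.

```javascript
// Interval-narrowing step of arithmetic coding, using the probability
// ranges from the table above (with "$" as the terminator symbol).
const ranges = {
  A: [0.0, 0.2], B: [0.2, 0.3], C: [0.3, 0.5], D: [0.5, 0.55],
  E: [0.55, 0.85], F: [0.85, 0.9], $: [0.9, 1.0],
};

function encodeInterval(message) {
  let low = 0, high = 1;
  for (const sym of message) {
    const range = high - low;
    const [symLow, symHigh] = ranges[sym];
    high = low + range * symHigh;  // shrink the interval to the
    low = low + range * symLow;    // sub-range assigned to sym
  }
  return [low, high];  // any number in [low, high) identifies the message
}

const [low, high] = encodeInterval("CAEE$");
// low ≈ 0.33184, high ≈ 0.33220, matching the last row of table (c)
```

The shorter the final interval, the more bits are needed to name a number inside it, which is exactly the tradeoff the text describes.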
▪ The compression is more efficient because the brightness information, which is more important to the eventual perceptual quality of the image, is confined to a single channel. This more closely corresponds to the perception of color in the human visual system.
JPEG Compression – Step 2 – Downsampling
JPEG Compression – Step 2 – Downsampling (cont.)
▪ The ratios at which the downsampling is ordinarily done for JPEG images are:
▪ 4:4:4 - no downsampling,
▪ 4:2:2 - reduction by a factor of 2 in the horizontal direction,
▪ 4:2:0 - reduction by a factor of 2 in both the horizontal and vertical directions (most common).
▪ For the rest of the compression process, Y′, CB and CR are processed separately and in a very similar manner.
JPEG Compression – Step 3 - Block splitting
▪ Each channel is split into 8×8 blocks.
▪ Depending on chroma subsampling, this yields Minimum Coded Unit (MCU) blocks of size 8×8 (4:4:4 – no subsampling), 16×8 (4:2:2), or most commonly 16×16 (4:2:0).
JPEG Compression – Step 4 - Discrete cosine transform
▪ Each channel must be split into blocks of size 8×8 pixels.
DCT Formula: F(u, v) = (1/4) C(u) C(v) Σx=0..7 Σy=0..7 f(x, y) cos[(2x+1)uπ/16] cos[(2y+1)vπ/16], where C(k) = 1/√2 for k = 0 and C(k) = 1 otherwise.
JPEG Compression – Step 4 - Discrete cosine transform (cont.)
▪ Before computing the DCT of the 8×8 block, its values are shifted from a positive range to one centered on zero.
▪ We subtract 128 from each pixel intensity in each block. This step centers the intensities about the value 0, and it is done to simplify the mathematics of the transformation and quantization steps.
▪ Loosely speaking, the DCT tends to push most of the high-intensity information (larger values) in the 8×8 block to the upper left-hand corner of the coefficient matrix C, with the remaining values in C taking on relatively small values.
JPEG Compression – Step 5 - Quantization
▪ The human eye is not good at distinguishing high-frequency variations in an image.
▪ After the image has been transformed using the DCT, it is quantized so that negligible information about colours and high-frequency variations can be thrown away.
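The level shift and DCT from Step 4, whose output is what quantization then thins out, can be sketched directly. This is a deliberately naive O(n^4) implementation for illustration; real encoders use fast factored DCTs, and the 128 level shift assumes 8-bit samples.

```javascript
// 2-D forward DCT (type II) of one 8x8 block, with the level shift
// described above (subtracting 128 to centre 8-bit samples on zero).
function dct8x8(block) {
  const N = 8, out = [];
  const c = (k) => (k === 0 ? Math.SQRT1_2 : 1); // normalisation factor C(k)
  for (let u = 0; u < N; u++) {
    out.push([]);
    for (let v = 0; v < N; v++) {
      let sum = 0;
      for (let x = 0; x < N; x++)
        for (let y = 0; y < N; y++)
          sum += (block[x][y] - 128) *            // level shift
                 Math.cos(((2 * x + 1) * u * Math.PI) / 16) *
                 Math.cos(((2 * y + 1) * v * Math.PI) / 16);
      out[u].push((c(u) * c(v) * sum) / 4);
    }
  }
  return out;
}

// A perfectly flat block has all its energy in the DC coefficient F(0,0):
const flat = Array.from({ length: 8 }, () => Array(8).fill(160));
const F = dct8x8(flat);
// F[0][0] === (160 - 128) * 8 = 256; every AC coefficient is ~0
```

The flat-block check illustrates the "energy compaction" claim in the text: smooth content concentrates in the upper-left (low-frequency) coefficients, which is what makes the subsequent quantization step so effective.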
▪ Quantization is used to reduce the number of bits per sample.
▪ There are two types of quantization:
▪ Uniform Quantization
▪ Non-Uniform Quantization
JPEG Compression – Step 6 - Encoding
The zigzag scan is used to map the 8×8 matrix to a 1×64 vector. Zigzag scanning is used:
to group the low-frequency coefficients at the top of the vector,
to group the high-frequency coefficients at the bottom,
to expose the large number of zeros in the quantized matrix as long runs that can be removed.
It involves arranging the image components in a "zig-zag" order, employing RLE encoding that groups similar frequencies together, inserting length-coded zeros, and then using Huffman coding on what is left.
Steps:
1. Vectoring
2. Run Length Encoding (RLE) coding
3. Huffman coding
JPEG Compression – Step 6 – Encoding (cont.)
Vectoring - differential pulse code modulation (DPCM) is applied to the DC component. DC components are large and varied, but they are usually close to the value of the previous block. DPCM encodes the difference between the DC component of the current block and that of the previous block.
JPEG Compression – Step 6 – Encoding (cont.)
Run Length Encoding (RLE) is applied to the AC components. This is done because the AC components contain a lot of zeros. It encodes pairs of (skip, value), in which skip is the number of zeros preceding a non-zero component and value is the actual value of that non-zero component. DC components are coded using Huffman coding.
JPEG Modes
JPEG supports several different modes:
Sequential Mode
Progressive Mode
Hierarchical Mode
Lossless Mode
The default mode is "sequential": the image is encoded in a single scan (left-to-right, top-to-bottom).
Progressive JPEG: the image is encoded in multiple scans. This produces a quick, roughly decoded image when transmission time is long.
(Figure: sequential vs. progressive decoding.)
Progressive JPEG (cont.)
We'll examine the following algorithms:
(1) Progressive spectral selection algorithm
(2) Progressive successive approximation algorithm
(3) Hybrid progressive algorithm
Progressive JPEG (cont.)
(1) Progressive spectral selection algorithm:
Group DCT coefficients into several spectral bands.
Send low-frequency DCT coefficients first.
Send higher-frequency DCT coefficients next.
Example
Progressive JPEG (cont.)
(2) Progressive successive approximation algorithm:
Send all DCT coefficients, but with lower precision.
Refine DCT coefficients in later scans.
Example: progressively decoded image after 0.9s, 1.6s, 3.6s, and 7.0s.
Progressive JPEG (cont.)
(3) Hybrid (combined) progressive algorithm:
Combines spectral selection and successive approximation.
Hierarchical JPEG
Hierarchical mode encodes the image at different resolutions. The image is transmitted in multiple passes, with increased resolution at each pass.
Hierarchical JPEG (cont.)
(Figure: N/4 × N/4, N/2 × N/2, and N × N resolutions.)
Vector Versus Raster Images
Raster (bitmap) images are made of pixels (the smallest single element in a display device). Vector images are mathematical calculations from one point to another that form geometrical shapes.
When you take a photograph using a digital camera or scan an image, you are creating a raster (bitmap) graphic. Vector graphics are scalable, i.e. when you resize them, they do not lose quality.
Scaling / Zoom
VECTOR GRAPHICS | ADVANTAGES
Scaled up without losing quality => because paths can be mathematically resized, vector graphics can be scaled up or down without losing any picture clarity.
Small file size - the list of drawing commands takes up much less file space than a bitmapped version of the same graphic.
Smooth scaling - vector images are enlarged by changing the parameters of their component shapes. The new image can then be accurately redrawn at the larger size without the distortions typical of enlarged bitmapped graphics.
They are ideal for logo designs, as they can be printed without quality loss, both very small on business cards and very large on billboard posters.
VECTOR GRAPHICS | DISADVANTAGES
It is not the best format for photographs or photo-like elements with blends of colour.
VECTOR GRAPHICS | Common File Formats
SVG (Scalable Vector Graphics)
XML-based vector image format for two-dimensional graphics
Provides support for interactivity and animation
DXF (Drawing Exchange Format)
CAD data file format developed by Autodesk for enabling data interoperability between AutoCAD and other programs
EPS (Encapsulated PostScript)
Created by Adobe Systems for representing vector graphics
Uses a computer language called PostScript
SHP (Shapefile)
Popular geospatial vector data format for geographic information system (GIS) software
Developed and regulated by Esri as a (mostly) open specification for data interoperability among Esri and other GIS software products
2D Graphics in XML
Introduction - S: Scalable
Scalable Vector Graphics is an XML grammar for stylable graphics that can be used as an XML namespace.
Scalable graphics allow for uniform dynamic pixel sizing.
Graphic content can be stand-alone, referenced, or included inside other SVG graphics, allowing a complex illustration to be built in parts, possibly by several different people.
Scalable graphics allow for different display resolutions, i.e. content magnification to aid people with low vision.
Scalable Vector Graphics and the Web
Scalable means the technology can grow to include a large number of users, files and/or applications and still be efficient and effective.
Introduction - V: Vector
Vector graphics have geometric objects like lines and curves, giving greater flexibility than raster-only formats (JPG, PNG) that store information for each and every pixel of the graphic.
In general, vector formats can include raster images as well as geometric objects, and combine them with vector information.
Introduction - G: Graphics
Most existing XML grammars represent textual information or raw data and typically provide only rudimentary graphical capabilities, often less capable than the HTML 'img' element.
SVG provides a rich, structured description of vector and mixed vector/raster graphics, allowing it to be used in a stand-alone fashion or as an XML namespace with other grammars.
SVG | Introduction
So, what is SVG?
SVG is a web format that allows content developers to create two-dimensional graphics in a standard way, by using an XML grammar.
Why do we use SVG?
Size matters - file size, resizing
It's still XML
Versatile
Static graphics rendering
Self-contained applications
Server-based applications
Intuitive
Specified as shapes, not pixels - so graphics scale better.
Each SVG object is an HTML DOM element, so you can change its attributes.
SVG | Language
SVG has four different DTDs:
Original version of SVG - 1.0
Full version - SVG 1.1
Basic version - SVGB
Tiny version - SVGT
Current version - SVG2 - work in progress
SVG documents are required to have a root element - the svg element.
SVG content
Three fundamental types of graphical objects can be used within SVG drawings:
Primitive vector shapes (lines, circles, squares, etc.)
Vector text - text rendered in a mathematical font such as TrueType fonts; this is done using cascading style sheet attributes.
External bitmap images
SVG | Elements
Element - the element type name can be thought of as the tag name.
SVG | Attributes
Attribute: Description
angle: Angles are specified in one of two ways - as the value of a property in a stylesheet, or as an SVG attribute.
anything: A basic type that is a sequence of zero or more characters in the production for a character as defined in XML 1.0.
color: A CSS2-compatible specification for a color in the sRGB color space.
coordinate: A length in the user coordinate system that is the given distance from the origin of the user coordinate system along the relevant axis.
frequency: Used with aural properties; units such as Hz or kHz.
FuncIRI: Functional notation for an IRI.
icccolor: An ICC color specification, given by a name which references a 'color-profile' element and one or more color component values.
integer: Specified as an optional sign character followed by one or more digits.
IRI: An Internationalized Resource Identifier.
length: A distance measurement, given as a number along with a unit.
SVG | Attributes
list-of-Ts: A list that consists of a separated sequence of values.
name: A string name.
number: Real numbers.
paint: The values for the properties "fill" and "stroke" are specifications of the type of paint to use when filling or stroking a given graphics element.
percentage: A percentage, specified as a number followed by %.
time: A time value, followed by a time unit identifier.
transform-list: Used to specify a list of coordinate system transformations.
XML-Name: An XML name.
SVG | Data Types
Data Type: Description
Number: SVGT and SVGB support fixed-point numbers with a range of -32,767.9999 to +32,767.9999, or the scientific-notation equivalent.
Length: User units are supported, with the exception that the 'width' and 'height' attributes on the outermost 'svg' element can specify values in any of the following CSS units: in, cm, mm, pt, pc, and %. SVGB supports lengths in user coordinate space and in CSS units.
Coordinate: SVGT and SVGB support the coordinate data type, represented with a value.
List of XXX (where XXX represents a value of some type): SVGT and SVGB support the list specification.
Angle: SVGT only supports angles specified with no CSS unit identifiers (which means all angles are in degrees). SVGB supports angles with CSS unit identifiers.
Color: SVGT and SVGB support the CSS2-compatible specification for a color in the sRGB color space, and system colors. Additionally, SVGB and SVGT support the 16 original color keywords from XHTML and do not support X11 colors.
SVGB also allows optional support of ICC color profiles.
Paint: SVGB supports paint specification for fill and stroke operations, as well as linear and radial gradients. SVGT does not support the more general notion of paint specification and thus only supports solid-color fills and strokes.
Percentage: SVGB supports percentages. SVGT does not support percentage values except for the 'width' and 'height' attributes on the outermost 'svg' element.
Transform List: SVGB and SVGT support transform lists.
URI: SVGB and SVGT support the URI data type.
Frequency: SVGB and SVGT do not support frequency values.
Time: SVGB and SVGT support time values, with unit identifiers (ms, s).
SVG | Elements
Line: <line>
Rectangle: <rect>
Ellipse: <ellipse>
Polygon: <polygon>
Polyline: <polyline>
Text: <text>content</text>
Grouping elements: <g> ... </g>
Defining groups: <defs> ... </defs>
Reusing groups: <use>
SVG | Interaction with CSS & JavaScript
Interaction with SVG using CSS - link: https://developer.mozilla.org/en-US/docs/Web/Guide/CSS/Getting_started/SVG_and_CSS
Interaction with SVG using JavaScript / jQuery - similar to the approach used for HTML elements. Particularity: when creating an element, we need to use the SVG namespace:
document.createElementNS("http://www.w3.org/2000/svg", "TAG_SVG")
Further reading: https://developer.mozilla.org/en-US/docs/Web/SVG
Setting a Responsive Web Design
Responsive design - changing the layout of your web app as a function of the device's browser dimensions. It involves HTML, CSS, and media queries.
Framework:
1. Set the viewport: inside the HTML head element, for ALL the pages of your app.

Name of attribute  Value       Description of attribute
autoplay           autoplay    Specifies that the video will start playing as soon as it is ready.
loop               loop        Specifies that the video will start over again every time it finishes playing.
muted              muted       Specifies that the audio output of the video should be muted.
preload            auto, none  Specifies if and how the author thinks the video metadata should be loaded when the page loads.

HTML5 Video How-To
Adding cross-browser support - needed because not all formats are compatible with all browsers.
Multiple event handlers can be added with addEventListener(). Arguments:
- the event name to handle,
- the event handler function, and
- a Boolean value indicating whether to call the event handler during the capture phase (true) or during the bubble phase (false).
document.getElementById("test").addEventListener("click", function() { console.log("The message"); });
Event handlers added via addEventListener() can be removed only by using removeEventListener() and passing in the same arguments as were used when the handler was added:
element.removeEventListener(type, function);
I.2 - JavaScript Timing Events / Timers
Timers are used to run some code after a certain amount of time has elapsed. The window object allows execution of code at specified time intervals. These time intervals are called timing events.
The two key timing methods to use with JavaScript are:
myTimer1 = setTimeout(function, milliseconds[, param1, param2, ...]) - runs the function once, after the given number of milliseconds has elapsed.
myTimer2 = setInterval(func
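A minimal sketch of the timer methods described above. It only exercises what the text states: scheduling a one-shot and a repeating callback, and cancelling each via the handle the scheduling call returns.

```javascript
// setTimeout runs its callback once after the delay; setInterval runs
// it repeatedly until cleared. Both return a handle for cancellation.
const once = setTimeout(() => console.log("runs once after 1s"), 1000);
const repeated = setInterval(() => console.log("runs every 500ms"), 500);

// clearTimeout / clearInterval cancel pending timers via their handles,
// so neither callback above ever fires.
clearTimeout(once);
clearInterval(repeated);
```

The same four functions exist on the browser's window object and in Node.js, although the handle types differ (a number in browsers, a Timeout object in Node).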