Fundamental Concepts of Video (DCS2353 DEB1253)
Summary
This document provides an overview of fundamental video concepts in multimedia. It discusses video formats, video properties, and the video production process. The document also deals with video camera angles and movements and describes how video works.
Full Transcript
MULTIMEDIA PRINCIPLES (DCS2353 DEB1253)
UNIT 04: FUNDAMENTAL CONCEPTS (VIDEO)

LEARNING OUTCOMES
Students should be able to:
Understand the basic concepts of video in multimedia.
Describe the elements of video in multimedia.
Apply best-practice methods of recording video for a multimedia project.

CONTENTS
Introduction
How video works
Video format standards
Video properties
Video editing
Video production process
Camera angles
Camera movement
Camera settings and configuration

VIDEO: INTRODUCTION
Video is a program, movie, or other visual media product featuring moving images, with or without audio, that is recorded and saved digitally or on analog devices for a certain duration of time. Video can also be projected "live," which involves broadcasting elements. Video is usually accompanied by sound, as in a television picture, and over its duration it can be measured in terms such as frames per second and bitrate.

VIDEO: INTRODUCTION
Video is captured when light reflected from an object passes through a video camera lens; that light is converted into an electronic signal by a special sensor called a charge-coupled device (CCD). Top-quality broadcast cameras and even camcorders may have as many as three CCDs (one each for red, green, and blue) to enhance the resolution of the camera and the quality of the image. Digital video is useful in multimedia applications for showing real-life objects. Video places the highest performance demand on computer memory, and on bandwidth if placed on the Internet. Digital video files can be stored like any other files on a computer while the quality of the video is maintained, and they can be transferred within a computer network.
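To make the memory and bandwidth demand mentioned above concrete, here is a minimal back-of-the-envelope sketch in Python. The 1920 x 1080 resolution, 24-bit RGB color depth, and 30 fps figures are assumed example values, not numbers taken from the slides:

```python
# Rough estimate of the data rate of UNCOMPRESSED digital video.
# Assumed example values: 1920x1080 pixels, 24-bit RGB color, 30 frames per second.
width, height = 1920, 1080          # pixels per frame
bytes_per_pixel = 3                 # 8 bits each for red, green, and blue
fps = 30                            # frames per second

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_second = bytes_per_frame * fps

print(f"One frame:  {bytes_per_frame / 1e6:.1f} MB")
print(f"One second: {bytes_per_second / 1e6:.1f} MB  (~{bytes_per_second * 8 / 1e6:.0f} Mbit/s)")
print(f"One minute: {bytes_per_second * 60 / 1e9:.1f} GB")
```

Roughly one and a half gigabits per second for uncompressed Full HD is exactly why digital video is almost always compressed before it is stored or transmitted.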
VIDEO: INTRODUCTION
Since the first silent film flickered to life, people have been fascinated with "motion" pictures. To this day, motion video is the element of multimedia that can draw gasps from a crowd at a trade show or firmly hold a student's interest in a computer-based learning project. Digital video is the most engaging of multimedia venues, and it is a powerful tool for bringing computer users closer to the real world. It is also an excellent method for delivering multimedia to an audience raised on television. With video elements in your project, you can effectively present your messages and reinforce your story, and viewers tend to retain more of what they see. But take care! Video that is not well thought out or well produced can degrade your presentation.

VIDEO: HISTORY (ANALOG)
In the early days of television broadcasting, programs were often produced and transmitted simultaneously. During a "live" production, the audience would see the on-screen performance as it was carried out in real time, most likely in a remote television studio far away. Recording technologies such as videotape, which have long been associated with television and video production, were not invented until long after television was established as a viable commercial enterprise. Prior to 1956, motion picture film was the only recording medium available for capturing and storing televised images. Using a device called a kinescope, a 16mm or 35mm motion picture camera was set up to record electronic images directly from the surface of a television monitor. This pioneering method of video transcoding was used to generate a photographic archive of scanned television images in real time.

VIDEO: HISTORY
Transcoding is the process of converting content from one format or medium to another. Thus, in the early days of broadcasting, the only way to make a recording of a television signal was to transcode it from its native form, as a scanned projection on the surface of a cathode-ray tube, to an optical reproduction of light on photographic film. While the filmed image represented aspects of the original form, transcoding also resulted in the creation of something entirely different. For example, while a film could be shown to a local audience gathered in a theater using a mechanical projection system, it could not be transmitted electronically to a remote television audience. For a motion picture to be broadcast electronically, it had to undergo reverse transcoding, using a device called a telecine. As a companion to the kinescope, the telecine used a television camera to electronically capture and encode photographic frames of a film as they were projected. The telecine also gave broadcasters a tool for …

VIDEO: HISTORY
Transcoding is not without its problems. In the previous example, it does not take a great deal of imagination to realize that a 16mm reproduction of a scanned television image is going to look noticeably inferior to the original transmission in its native form, and in fact this was often the case. With the kinescope, a mechanical shutter on the film camera was used to convert the native frame rate of U.S. television signals (30 fps) to the native frame rate of film (24 fps), resulting in the permanent loss of six frames of visual information every second.
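Purely as an arithmetic illustration of the frame-rate conversion described above (the kinescope did this optically with a shutter, not by manipulating digital frames), dropping one frame out of every five turns a 30 fps signal into a 24 fps one:

```python
# Illustrative only: convert a 30 fps sequence to 24 fps by dropping
# one frame out of every five (30 * 4/5 = 24), i.e. six frames per second.
source_fps, target_fps = 30, 24
one_second_of_frames = list(range(source_fps))   # frames 0..29

kept = [frame for i, frame in enumerate(one_second_of_frames) if i % 5 != 4]
dropped = source_fps - len(kept)

print(len(kept), "frames kept,", dropped, "frames dropped each second")  # 24 kept, 6 dropped
```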
VIDEO: HISTORY
A camcorder is a self-contained portable electronic device whose primary function is video capture and recording. It is typically equipped with an articulating screen mounted on the left side, a strap on the right side to make it easier to hold, a hot-swappable battery facing the user, hot-swappable recording media, and an internally contained quiet optical zoom lens. The earliest camcorders were tape-based, recording analog signals onto videotape cassettes. In 2006, digital recording became the norm, with tape replaced by storage media such as mini-HD, microDVD, internal flash memory, and SD cards. More recent devices capable of recording video are camera phones and digital cameras primarily intended for still pictures, whereas dedicated camcorders are often equipped with more functions and interfaces than these more common cameras.

VIDEO: HISTORY
The ARRI Alexa, announced in April 2010, was the first camera released in its product family. The Alexa's CMOS Super 35mm sensor is rated at 2.8K and ISO 800. That sensitivity allows the camera to see a full seven stops of overexposure and another seven stops of underexposure. To take advantage of this, ARRI offers both industry-standard Rec. 709 HD video output and a Log C mode that captures the entire range of the chip's sensitivity, allowing for an extreme range of color-correction options in post-production.

VIDEO: HOW VIDEO WORKS
When light reflected from an object passes through a video camera lens, that light is converted into an electronic signal by a special sensor called a charge-coupled device (CCD). Top-quality broadcast cameras and even camcorders may have as many as three CCDs (one each for red, green, and blue) to enhance the resolution of the camera and the quality of the image. It is important to understand the difference between analog and digital video. Analog video has a resolution measured in the number of horizontal scan lines (a consequence of early cathode-ray-tube cameras), but each of those lines represents continuous measurements of color and brightness along the horizontal axis, in a linear signal that is analogous to an audio signal. Digital video signals consist of a discrete color and brightness (RGB) value for each pixel. Digitizing analog video involves reading the analog signal and breaking it into discrete values.

CAMERA SENSOR: CCD VS CMOS
Once considered the gold standard for performance in machine vision, CCD (charge-coupled device) sensors are being discontinued in favor of modern CMOS (complementary metal-oxide-semiconductor) imaging sensors in many applications. Both CCD and CMOS image sensors convert light into electrons by capturing light photons with thousands, or millions, of light-capturing wells called photosites. When an image is being taken, the photosites are uncovered to collect photons and store them as an electrical charge. The next step is to quantify the accumulated charge of each photosite in the image. Here is where the technologies start to differ: in a CCD device, the charge is transported across the chip and read at one corner of the array, and an analog-to-digital converter turns each photosite's charge into a digital value.

CAMERA SENSOR: CCD VS CMOS
In most CMOS devices, on the other hand, there are several transistors at each photosite that amplify and move the charge using more traditional wires. This makes the sensor more flexible for different applications, because each photosite can be read individually. CCD sensors create high-quality, low-noise images, while CMOS sensors are usually more susceptible to noise. Because each photosite on a CMOS sensor has several transistors located next to it, the light sensitivity of a CMOS chip tends to be lower, as many of the photons hit the transistors instead of the photosite.

CAMERA SENSOR: CCD VS CMOS
CMOS sensors can be manufactured on most standard silicon production lines, so they are inexpensive to produce compared to CCD sensors. Overall, CMOS sensors are much less expensive to manufacture than CCD sensors and are rapidly improving in performance, but CCD sensors may still be required for some demanding applications.
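Whichever sensor is used, the end result of "digitizing" is the same kind of data: a continuous brightness signal is sampled at discrete positions and each sample is quantized to a fixed number of bits. The sketch below illustrates the idea with NumPy; the sine wave standing in for an analog scan line and the 720-sample width are assumptions made purely for illustration:

```python
import numpy as np

# Digitizing an "analog" scan line: sample a continuous brightness signal at
# discrete pixel positions, then quantize each sample to 8 bits (0-255).
# The sine wave is only a stand-in for real scene brightness.
samples_per_line = 720                                       # horizontal samples (pixels)
x = np.linspace(0.0, 1.0, samples_per_line)
analog_brightness = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * x)    # continuous values in 0.0 .. 1.0

digital_line = np.round(analog_brightness * 255).astype(np.uint8)

print(digital_line[:10])   # the first ten discrete pixel values of the digitized line
```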
VIDEO: TYPES OF VIDEO CABLE

VIDEO FORMAT STANDARDS
Many countries use different video format standards to broadcast and display video. There are three major video format standards used around the world:
1. NTSC
2. PAL
3. SECAM
Most countries use TV standards that are incompatible with those of other countries. For example, a video recording made in Germany could not be played back on an American-standard VCR or shown on American TV.
NTSC is an abbreviation for National Television Standards Committee, named for the group that originally developed the black-and-white and, subsequently, color television system used in the United States, Japan, and many other countries. An NTSC picture is made up of 525 interlaced lines and is displayed at a rate of 30 frames per second.

VIDEO FORMAT STANDARDS
PAL is an abbreviation for Phase Alternate Line. This is the video format standard used in many European countries. A PAL picture is made up of 625 interlaced lines and is displayed at a rate of 25 frames per second.
SECAM is an abbreviation for Sequential Color and Memory. This video format is used in France, the countries of the former USSR, and a number of other countries. Like PAL, a SECAM picture is made up of 625 interlaced lines and is displayed at a rate of 25 frames per second. However, the way SECAM processes the color signal differs from PAL.

VIDEO PROPERTIES
Video can be classified by several properties. Information about these properties is important for planning the next steps, such as making the video compatible and suitable for various situations. The main video properties are:
1. Video resolution
2. Video frame rate
3. Video bitrate
4. Video codec
5. Video format

VIDEO PROPERTIES: VIDEO RESOLUTION
720 resolution (HD). This is the lowest resolution still considered HDTV and is often called simply "HD." Most videos are shot in at least 1080, but 720 (1280 x 720 pixels) can be an acceptable resolution for smaller web content. However, now that most computer screens are HD, best practice is to aim for a resolution higher than 720 for web use and streaming.
1080 resolution (Full HD). Often referred to as "Full HD," 1080 (1920 x 1080 pixels) has become the industry standard for crisp HD digital video that does not take up excessive storage space. It is also a common screen resolution for smartphones.
2K resolution or QHD (quad high definition). The next steps up are QHD (2560 x 1440 pixels) and 2K resolution (2048 x 1080 pixels). These formats provide more room for image edits, larger displays, and reframing without loss of quality.

VIDEO PROPERTIES: VIDEO RESOLUTION
4K resolution (Ultra HD). Called 4K and often marketed as UHD (ultra-high-definition television), this resolution is technically 3840 x 2160 pixels. It looks much like 2K to most viewers but gives filmmakers more room to zoom in and edit. "Resolutions of 2K and 4K are really for theatrical viewing or intense coloring or graphics," explains video editor and director Margaret Kurniawan. "And there's not enough noticeable difference between 4K and 2K, unless you wanted to cut in closer or edit colors. So, it matters in post, but it doesn't matter much when someone's viewing it."
8K resolution. Videographers rarely need to shoot in 8K (7680 x 4320 pixels), but this extremely high-resolution option leaves the most room for creating striking effects or zooming into a faraway shot without pixelation. "There are two main reasons to film in 8K," says Leonard. "One is visual effects, because it's more pixel information for things like green screens or rotoscoping. And the other is reframing. You can reframe to a …"
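To put the resolution ladder above into perspective, the following sketch compares total pixel counts for the resolutions quoted in this section (the dimensions are the ones given above):

```python
# Pixel counts for the resolutions discussed above, compared against 720p.
resolutions = {
    "720p (HD)":       (1280, 720),
    "1080p (Full HD)": (1920, 1080),
    "2K":              (2048, 1080),
    "QHD":             (2560, 1440),
    "4K (UHD)":        (3840, 2160),
    "8K":              (7680, 4320),
}

base = 1280 * 720
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name:16s} {w}x{h}  {pixels / 1e6:5.1f} Mpixels  ({pixels / base:4.1f}x 720p)")
```

Note that 4K UHD carries exactly four times the pixels of Full HD, and 8K four times the pixels of 4K, which is one reason storage and bandwidth demands climb so quickly at higher resolutions.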
VIDEO PROPERTIES: VIDEO ASPECT RATIOS
A video's aspect ratio is an important creative choice that can affect the feel of the footage, but it is also a key technical consideration that affects how and where the footage can be displayed.

VIDEO PROPERTIES: VIDEO FRAME RATE
Frames per second (FPS) is the rate at which back-to-back images, called frames, appear on a display and form moving imagery. The video content we consume daily is not actually moving; it consists of still images played one after the other. If a video is shot at 24 fps, 24 individual frames are played back every second. Frame rates vary across mediums depending on many other factors.

VIDEO PROPERTIES: VIDEO FRAME RATE
30 and 60 are the most common frame rates available on smartphones for regular social media posts. Shooting at a higher FPS delivers smooth footage and gives you the ability to slow it down in a video editor without jitter or shake. This is used mostly for slow-motion video. Anything at 60 fps or above can be used for slow motion, and some phones have a built-in slo-mo mode that shoots at …

VIDEO PROPERTIES: VIDEO BITRATE
Video bitrate defines how much video data is transferred in a given time. A sufficiently high bitrate is one of the most crucial factors in the quality of a video: together with an adequate bitrate, high resolution and frame rate contribute to a good-looking video; without it, they have little effect on the way it looks. Bitrate is measured in bits per second (bps). While kilobits per second is fine for measuring audio files, it is not a suitable unit for video; because the volume of footage is large, video bitrates are usually quoted in megabits per second (Mbps).

VIDEO PROPERTIES: VIDEO BITRATE
A higher bitrate results in better quality and larger file sizes, so the two should be kept in balance. The larger the file, the more buffering issues it may cause, since most viewers' connections and devices cannot handle an extensive stream quickly. Conversely, a lower bitrate results in worse quality and a less professional look for the streamed video.
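Because bitrate is simply data per unit time, approximate file size follows directly from bitrate and duration. A minimal sketch; the 8 Mbps figure is an assumed example value, not a recommendation from the slides:

```python
# File size = bitrate x duration. Bitrate in megabits per second (Mbps),
# duration in seconds; divide by 8 to convert bits to bytes.
bitrate_mbps = 8            # assumed example bitrate for a streamed 1080p video
duration_s = 10 * 60        # a ten-minute video

size_megabytes = bitrate_mbps * duration_s / 8
print(f"{size_megabytes:.0f} MB")        # 8 Mbit/s for 10 minutes = 600 MB
```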
VIDEO PROPERTIES: VIDEO CODING FORMAT (CODEC)
A video codec is software or hardware that compresses and decompresses digital video. In the context of video compression, codec is a portmanteau of encoder and decoder; a device that only compresses is typically called an encoder, and one that only decompresses is a decoder. A video coding format (sometimes called a video compression format) is a content representation format for the storage or transmission of digital video content (such as in a data file or bitstream). It typically uses a standardized video compression algorithm, most commonly based on discrete cosine transform (DCT) coding and motion compensation. Examples of video coding formats include:
1. MPEG-2 (H.262)
2. MPEG-4 Part 10 (H.264), or AVC
3. HEVC (H.265)

VIDEO PROPERTIES: VIDEO CODING FORMAT (CODEC)
MPEG-2 (H.222/H.262) is a standard for "the generic coding of moving pictures and associated audio information." It describes a combination of lossy video compression and lossy audio data compression methods that permit the storage and transmission of movies using currently available storage media and transmission bandwidth. While MPEG-2 is not as efficient as newer standards such as H.264/AVC and H.265/HEVC, backwards compatibility with existing hardware and software means it is still widely used, for example in over-the-air digital television broadcasting and in the DVD-Video standard. Several filename extensions are used for MPEG-2 video files …

VIDEO PROPERTIES: VIDEO CODING FORMAT (CODEC)
MPEG-4 Part 10 (H.264), also referred to as Advanced Video Coding (AVC), is a video compression standard based on block-oriented, motion-compensated coding. It is by far the most widely used format for the recording, compression, and distribution of video content, used by 91% of video industry developers as of September 2019. It supports resolutions up to and including 8K UHD. The intent of the H.264/AVC project was to create a standard capable of providing good video quality at substantially lower bitrates than previous standards, without increasing the complexity of the design so much that it would be impractical or excessively expensive to implement.

VIDEO PROPERTIES: VIDEO CODING FORMAT (CODEC)
High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video compression standard designed as part of the MPEG-H project as a successor to the widely used Advanced Video Coding (AVC, H.264, or MPEG-4 Part 10). In comparison to AVC, HEVC offers 25% to 50% better data compression at the same level of video quality, or substantially improved video quality at the same bitrate. It supports resolutions up to 8192 x 4320, including 8K UHD.
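The practical effect of these coding formats can be illustrated by comparing the uncompressed 1080p30 data rate estimated earlier in this unit with typical compressed bitrates. The 8 Mbps AVC figure and the 40% HEVC saving below are assumed, illustrative values (the saving sits inside the 25-50% range quoted above):

```python
# Rough compression-ratio illustration for the codecs discussed above.
uncompressed_mbps = 1920 * 1080 * 3 * 8 * 30 / 1e6   # ~1493 Mbit/s for raw 1080p30
avc_mbps = 8.0                                        # assumed typical H.264/AVC streaming bitrate
hevc_saving = 0.40                                    # assumed saving within the 25-50% range
hevc_mbps = avc_mbps * (1 - hevc_saving)

print(f"Raw 1080p30 : {uncompressed_mbps:7.0f} Mbit/s")
print(f"H.264/AVC   : {avc_mbps:7.1f} Mbit/s  (~{uncompressed_mbps / avc_mbps:.0f}x smaller)")
print(f"H.265/HEVC  : {hevc_mbps:7.1f} Mbit/s  (~{uncompressed_mbps / hevc_mbps:.0f}x smaller)")
```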
VIDEO PROPERTIES: VIDEO FILE FORMAT (EXTENSION)
A container and a codec are the two components of any video file. A video format is the media container that stores audio, video, subtitles, and any other metadata. A codec encodes and decodes multimedia data such as audio and video. When a video is created, a video codec encodes and compresses the video while an audio codec does the same with the sound. The encoded video and audio are then synchronized and stored in a media container, the file format. To choose the best digital video format, it is essential to understand the differences between them. The most common video file formats include:
1. MP4
2. MOV
3. WMV
4. FLV
5. AVI
6. AVCHD
7. WebM

VIDEO PROPERTIES: VIDEO FILE FORMAT (EXTENSION)
MP4. MPEG-4 or MP4 is one of the earliest digital video file formats, introduced in 2001. Most digital platforms and devices support MP4. An MP4 file can store audio, video, still images, and text. Additionally, MP4 provides high-quality video while maintaining relatively small file sizes.
MOV. MOV is a popular video file format designed by Apple to support the QuickTime player. MOV files contain video, audio, subtitles, timecodes, and other media types. It is compatible across different versions of QuickTime Player, on both Mac and Windows. Since it is a very high-quality video format, MOV files take up significantly more space on a computer.
WMV. The WMV video format was designed by Microsoft and is widely used in Windows media players. The WMV format provides small file sizes with better compression than MP4, which is why it is popular for online video streaming. However, it is not compatible with …

VIDEO PROPERTIES: VIDEO FILE FORMAT (EXTENSION)
FLV. FLV is a file format used by Adobe Flash Player. It is one of the most popular and versatile video formats, supported by most video platforms and browsers. The FLV format is a good choice for online video streaming platforms such as YouTube. FLV files have a relatively small file size, which makes them easy to download. The main drawback is that the format is not compatible with many mobile devices, such as iPhones.
AVI. The AVI file format was introduced in 1992 by Microsoft and is still widely used today. The AVI format uses less compression than other video formats such as MPEG or MOV, which results in very large files, approximately 2-3 GB per minute of video. This can be a problem for users with limited storage space.
AVCHD (Advanced Video Coding High Definition). AVCHD is a format used for HD video playback and digital recording. This video format was designed by Panasonic and Sony.

VIDEO PROPERTIES: VIDEO FILE FORMAT (EXTENSION)
WebM. First introduced by Google in 2010, WebM is an open-source video format developed with the current and future state of the Internet in mind. WebM is intended for use with HTML5. The video codecs of WebM require very little computing power to compress and decompress the files. The aim of this design is to enable online video streaming on almost any device, such as tablets, desktops, smartphones, or smart TVs.
MKV. The MKV file format incorporates audio, video, and subtitles in a single file. The MKV format was developed to be future-proof, meaning that the video files will remain usable as standards change. MKV containers support almost any video and audio format, which makes the format highly flexible.
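The container-versus-codec distinction is easiest to see in a remux, where the existing video and audio streams are copied unchanged into a different container: the codecs stay the same, only the file format changes. A minimal sketch, assuming the ffmpeg command-line tool is installed and that a file named input.mov exists (both are assumptions for illustration):

```python
import subprocess

# Remux: copy the existing video and audio streams ("-c copy", no re-encoding)
# from a QuickTime (.mov) container into an MP4 container.
# The codecs inside the file do not change; only the container does.
subprocess.run(
    ["ffmpeg", "-i", "input.mov", "-c", "copy", "output.mp4"],
    check=True,
)
```

If the target container cannot hold the source's codecs, a re-encode with a chosen video and audio codec would be needed instead of a plain stream copy.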
VIDEO EDITING
Video editing is the manipulation and arrangement of video shots. It is used to structure and present all video information, including films and television shows, video advertisements, and video essays. Video editing has been dramatically democratized in recent years by editing software available for personal computers. Editing video can be difficult and tedious, so several technologies have been produced to aid people in this task. Examples of common video editing software:
Adobe Premiere Pro
DaVinci Resolve
Final Cut Pro
Wondershare Filmora
Canva
Several elements can be included in the video editing process, involving special effects such as titles, transitions, chroma key, color grading, and filters.

VIDEO EDITING: LINEAR AND NON-LINEAR EDITING
Linear editing is the process of editing video projects in a predetermined sequence from start to finish. The editor starts editing the project at the beginning and finishes at the end, with everything staying in order. Linear editing was the most common form of video editing before digital editing software became readily available: film rolls had to be cut and spliced together to form the final project. Non-linear editing (NLE) is an editing process that enables the editor to make changes to a video project without regard to the linear timeline. In other words, the editor can work on whichever clip he or she wants, in any order; it does not matter whether it lands at the beginning, middle, or end of the project. Non-linear editing gives creators much more freedom throughout the editing process. On certain types of production, a linear editing system may be …

SPECIAL EFFECTS: TITLE
Titles have a big impact on the way videos get discovered and watched. There is a lot of video content on the Internet already, and more is uploaded all the time. Good titles make it easier to stand out from the crowd. After all, not all the videos on the Internet are well made, let alone well titled; publishing high-quality videos with good titles puts a creator in a smaller pool of competitors.

SPECIAL EFFECTS: TITLE
A good title is:
Succinct. If you write a long, rambling title, your video might get ignored. People skim more than they read on the Internet, and you have just seconds (literally) to grab a viewer's attention.
On-topic. A good title tells viewers what they can expect from a video. It is clear, direct, and honest. People are busy, and they need a reason to watch your video, so using an unclear or ambiguous title is a bad idea.
Unique. Producing a title that is descriptive, brief, and unique can be a tall order, but a good video title avoids being completely generic.
Keyword-focused. Keywords are not just important for your website; they also matter for video titles.
Intriguing. An effective title drives people to click on it. It offers a solution to a problem, touches on a viewer's emotions, or promises a juicy secret. People won't pay much …

SPECIAL EFFECTS: TRANSITION
Video transitions are a post-production technique used in film or video editing to connect one shot to another. Often when a filmmaker wants to join two shots together, they use a basic cut, where the first image is instantly replaced by the next.
Keep them consistent: nothing looks more amateurish than using a different transition for every scene.
Keep them subtle: unless you are after a specific effect, it is best to use transitions sparingly. Remember, most of the time directors use basic cuts between scenes; transitions are usually only used when they serve a storytelling purpose (and good design should go unnoticed).
Keep the meaning of each transition type in mind: it is best to bear in mind what different transitions symbolize and use them only where appropriate. For example, a fade to black is a dramatic transition that signifies completion.

SPECIAL EFFECTS: TRANSITION (EXAMPLES)

SPECIAL EFFECTS: CHROMA KEY
Chroma key compositing, or chroma keying, is a visual-effects and post-production technique for compositing (layering) two images or video streams together based on color hues (chroma range). The technique has been used in many fields to remove a background from the subject of a photo or video, particularly in the newscasting, motion picture, and video game industries. A color range in the foreground footage is made transparent, allowing separately filmed background footage or a static image to be inserted into the scene. The chroma keying technique is commonly used in video production and post-production.

SPECIAL EFFECTS: CHROMA KEY
This technique is also referred to as color keying, color-separation overlay (CSO), or by various terms for specific color-related variants such as green screen or blue screen. Chroma keying can be done with backgrounds of any color that is uniform and distinct, but green and blue backgrounds are more commonly used because they differ most distinctly in hue from human skin colors. No part of the subject being filmed or photographed may duplicate the color used as the backing, or that part may be erroneously identified as part of the backing.
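As a rough illustration of the keying idea described above, the sketch below builds a transparency mask from pixels in which green strongly dominates, then replaces those pixels with the background image. Real keyers work in more perceptual color spaces, soften the matte edges, and suppress green spill; the hard threshold and the synthetic frames here are assumptions for illustration only:

```python
import numpy as np

def chroma_key(foreground: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Composite two HxWx3 uint8 RGB frames by keying out green-screen pixels."""
    fg = foreground.astype(np.int16)           # avoid uint8 overflow in the comparisons
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]

    # Treat a pixel as backing if green clearly dominates both red and blue.
    is_backing = (g > 100) & (g > r + 40) & (g > b + 40)

    out = foreground.copy()
    out[is_backing] = background[is_backing]   # backing pixels show the background plate
    return out

# Usage with synthetic frames (stand-ins for real footage):
h, w = 720, 1280
foreground = np.zeros((h, w, 3), dtype=np.uint8)
foreground[...] = (0, 200, 0)                  # an all-green backing
foreground[200:500, 400:900] = (180, 120, 90)  # a non-green "subject" in the middle
background = np.zeros((h, w, 3), dtype=np.uint8)
background[...] = (20, 40, 120)                # a plain blue background plate

composite = chroma_key(foreground, background)
```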
SPECIAL EFFECTS: COLOR PALETTE – COMPLEMENTARY COLOR SCHEME
Two colors on opposite sides of the color wheel make a complementary pair. This is by far the most used pairing. A common example is orange and blue, or teal. This pairs a warm color with a cool color and produces a high-contrast, vibrant result. Saturation must be managed, but a complementary pair is often quite naturally pleasing to the eye. Orange and blue can often be associated with conflict in action, internal or external; often an internal conflict within a character is reflected in the color choices of his or her external environment. (The hue relationships behind this and the other schemes below are sketched in code after the color-grading discussion.)
Example: Amélie (2001).

SPECIAL EFFECTS: COLOR PALETTE – ANALOGOUS COLOR SCHEME
Analogous colors sit next to each other on the color wheel. They match well and can create overall harmony in a color palette. Because the palette is made up of either warmer or cooler colors, it does not have the contrast and tension of complementary colors. Analogous colors are easy to take advantage of in landscapes and exteriors, as they are often found in nature. Often one color is chosen to dominate, a second to support, and a third, along with blacks, whites, and grey tones, to accent.
Examples: American Hustle (2013); Alice Through the Looking Glass (2016).

SPECIAL EFFECTS: COLOR PALETTE – TRIADIC COLOR SCHEME
Triadic colors are three colors arranged evenly around the color wheel. One should be dominant, the others used for accent. They give a vibrant feel even if the hues are quite unsaturated. Triadic is one of the least common color schemes in film and, although difficult, can be quite striking.
Example: Jean-Luc Godard's 1964 "Pierrot le Fou" makes use of a triadic color scheme of red, blue, and green.

SPECIAL EFFECTS: COLOR PALETTE – SPLIT-COMPLEMENTARY COLOR SCHEME
A split-complementary color scheme is very similar to a complementary scheme, but instead of using the direct opposite of the base color, it uses the two colors next to the opposite. It has the same high contrast but less tension than a complementary pair.
Example: a split-complementary scheme of red, green, and teal in a scene from the Coen Brothers' "Burn After Reading."

SPECIAL EFFECTS: COLOR PALETTE – TETRADIC COLOR SCHEME
Tetradic colors consist of four colors arranged into two complementary pairs. The result is a full palette with many possible variations. As with most of these color harmonies, one color is usually dominant.
Example: the colorful party scene in "Mamma Mia!" is a tetradic choice of colors, creating a well-balanced and harmonious palette in a scene that could otherwise have …

SPECIAL EFFECTS: COLOR CORRECTION AND COLOR GRADING
Color correction is a technical process that occurs during a film's post-production phase. Film colorists use editing software to adjust the color, contrast, and exposure of film footage so that it appears natural and unprocessed, the way the human eye experiences it in real life. Working closely with a film's director and cinematographer, the colorist ensures that the completed film looks precisely as intended. Color grading is the process of stylizing the color scheme of your footage by "painting" on top of what you have established through color correction. After the colorist completes color correction, they can begin grading the footage. During color grading, colorists use editing software to stylize the footage, emphasizing the visual tone and …

SPECIAL EFFECTS: COLOR CORRECTION AND COLOR GRADING (EXAMPLES)
(Example stills showing color-grading looks for horror, drama, and action.)
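The color-wheel relationships described in the palette section above can be expressed as simple hue offsets. A minimal sketch using Python's standard colorsys module; the 30-degree base hue (roughly orange) is an arbitrary assumed example:

```python
import colorsys

def scheme_hues(base_hue_deg: float) -> dict:
    """Hue angles (in degrees) for the color schemes discussed above."""
    h = base_hue_deg % 360
    return {
        "complementary":       [h, (h + 180) % 360],
        "analogous":           [(h - 30) % 360, h, (h + 30) % 360],
        "triadic":             [h, (h + 120) % 360, (h + 240) % 360],
        "split-complementary": [h, (h + 150) % 360, (h + 210) % 360],
        "tetradic":            [h, (h + 90) % 360, (h + 180) % 360, (h + 270) % 360],
    }

base = 30  # assumed base hue in degrees: roughly orange
for name, hues in scheme_hues(base).items():
    # Convert each hue to an RGB swatch (full saturation and value) for preview.
    swatches = [colorsys.hsv_to_rgb(hue / 360, 1.0, 1.0) for hue in hues]
    rounded = [tuple(round(c, 2) for c in s) for s in swatches]
    print(f"{name:20s} hues {hues} -> RGB {rounded}")
```

With a base hue of 30 degrees, the complementary hue comes out at 210 degrees, a blue/teal, which is the familiar orange-and-teal pairing mentioned above.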
SPECIAL EFFECTS: VIDEO FILTERS
A video filter is a software component that performs some operation on a multimedia stream. Multiple filters can be used in a chain, known as a filter graph, in which each filter receives input from its upstream filter, processes the input, and outputs the processed video to its downstream filter. (A minimal sketch of such a chain appears after the content-sequence overview below.)

VIDEO CONTENT SEQUENCE: BEST PRACTICE
Most videos, especially in the marketing industry, consist of several parts. The video should be organized into the following parts:
1. Establishing shot
2. Content (A-roll and B-roll)
3. Call to action

VIDEO CONTENT SEQUENCE: ESTABLISHING SHOT
In filmmaking and television, an establishing shot lets the audience know the setting for the scene they are about to watch. An establishing shot usually lasts only a few seconds and sends a clear message that a new scene is starting. Showing a recognizable landmark tells the audience where the story or the next scene is set. An establishing shot can also give the audience supporting details they might not otherwise have known about a setting, or introduce a concept or overall theme. For example, in the Star Wars films, establishing shots reveal what different planets look like, what futuristic cities look like, and the different craft on which people travel through space.

VIDEO CONTENT SEQUENCE: ESTABLISHING SHOT
An establishing shot is usually a wide shot (also called a long shot), an extreme long shot, or an aerial shot that shows a lot of the setting for context; for example, an establishing shot taken by a drone for Peninsula.

VIDEO CONTENT SEQUENCE: A-ROLL AND B-ROLL
In video production, A-roll is the primary footage of a project's main subject, while B-roll shots are supplemental footage. B-roll provides filmmakers with flexibility in the editing process and is often spliced together with A-roll footage to bolster the story, create dramatic tension, or further illustrate a point. Stories that rely entirely on A-roll footage might feel off-balance; therefore, shooting B-roll is important.

VIDEO CONTENT SEQUENCE: A-ROLL
A good way to think of A-roll is as the media that "tells" the story, such as an interview or a news segment. It is the primary audio and video, often consisting of one or more people discussing a topic or relating a narrative. A-roll is the driving media in most documentaries, news broadcasts, talk shows, and reality shows.

VIDEO CONTENT SEQUENCE: B-ROLL
B-roll is supplemental footage used to visually support the A-roll; think of it as the video that "shows" the story. If the A-roll narrative talks about residences, the B-roll might show a house. It just needs to complement, and if possible confirm, the story told by the A-roll. Using B-roll footage helps break up the monotony of a common A-roll interview shot, making the whole piece much more engaging.

VIDEO CONTENT SEQUENCE: CALL TO ACTION
One of the last (and most important) steps to include in any video is the call to action (CTA). This is where you close the deal, so to speak, and prompt the viewer to do something; it is the virtual handshake with the viewer. A call to action consists of contact information through which the viewer can ask further questions, and it is the mark of effective, well-made marketing video. Contact information can be a phone number, a website, or social media accounts.
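Returning to the filter-graph idea introduced just before the content-sequence overview: each filter takes a frame from its upstream neighbor, processes it, and hands the result downstream. A minimal sketch with two toy filters (brightness lift and horizontal flip); the frame size and the filters themselves are assumptions for illustration only:

```python
import numpy as np

# Two toy filters: each takes a frame (HxWx3 uint8 array) and returns a new frame.
def brighten(frame: np.ndarray) -> np.ndarray:
    return np.clip(frame.astype(np.int16) + 40, 0, 255).astype(np.uint8)

def flip_horizontal(frame: np.ndarray) -> np.ndarray:
    return frame[:, ::-1]

def run_filter_graph(frame: np.ndarray, filters) -> np.ndarray:
    """Pass the frame through each filter in turn: upstream output feeds downstream input."""
    for f in filters:
        frame = f(frame)
    return frame

frame = np.zeros((720, 1280, 3), dtype=np.uint8)            # a stand-in black frame
processed = run_filter_graph(frame, [brighten, flip_horizontal])
```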
CAMERA ANGLE: 11 COMMON CAMERA ANGLES
Wide Shot: usually captured with a wider-angle lens to show more of the surroundings, and most frequently used to establish a location. A wide shot can also represent loneliness or a feeling of being removed from the action.
Long Shot: also used to establish the location, but with greater emphasis on the subject filling the frame.

CAMERA ANGLE: 11 COMMON CAMERA ANGLES
Medium Shot: framed from the waist up. A medium shot is used to focus on what the subject is doing and helps the viewer feel closer to the subject; it represents the distance of a normal conversation.
Tight Close-Up Shot: used mainly for dialogue or to show the detailed expression of the subject. It helps the viewer register what is being said.

CAMERA ANGLE: 11 COMMON CAMERA ANGLES
Detail / Extreme Close-Up Shot: emphasizes a detail of the subject. This shot is a great way to create mystery and depth in the story and helps draw the viewer's attention.
Low Angle Shot: used to make the subject appear larger and more alive; it portrays power and dominance and can also convey wonder and majesty.

CAMERA ANGLE: 11 COMMON CAMERA ANGLES
High Angle Shot: portrays the subject as weak, fearful, smaller, or vulnerable.
Dutch Angle Shot: conveys uneasiness, as if something is not quite right; it is used to signal that something is wrong.

CAMERA ANGLE: 11 COMMON CAMERA ANGLES
Over-the-Shoulder Shot: gives the viewer the perspective of the person being spoken to.
Point-of-View (POV) Shot: shows what the character is looking at. It presents the perspective of the character and helps the viewer understand the character's state of mind.

CAMERA ANGLE: 11 COMMON CAMERA ANGLES
Cutaway Shot: an interruption of a continuously filmed scene by inserting a view of something completely different. It acts as a transition to a new scene or shows what is happening alongside the main story at the same time.

CAMERA MOVEMENT: 6 COMMON CAMERA MOVEMENTS

THANK YOU! ANY QUESTIONS?