Mansoura University Multimedia (IS411P) Lecture Notes PDF

Summary

These lecture notes from Mansoura University's Information Systems Department cover the fundamentals of digital audio and video. The document introduces amplitude, frequency, analog-to-digital conversion (ADC), sound formats, analog video standards, timecode, and compression.

Full Transcript


Mansoura University, Faculty of Computers & Information Science
Information Systems Department, Academic Year 2024-2025
Multimedia (IS411P), Lecture No. 02, 15/10/2024

Digital Audio

- Amplitude: determines the volume of the sound. Volume is measured in decibels (dB).
- Period: the time between the formation of two consecutive crests (peaks). It is measured in seconds.
- Frequency (pitch): the number of peaks that occur in one second, i.e. the number of cycles (vibrations) per second. The unit of frequency is the hertz (Hz).
- Bandwidth (BW): the difference between the highest and the lowest frequency contained in a signal.
- Wavelength (λ): the distance from the midpoint of one crest to the midpoint of the next crest.

- The human ear can perceive frequencies from 20 Hz to 20 kHz.
- Humans are most sensitive to sounds in the range of 2-4 kHz.

Velocity of Sound

- The velocity of sound may be found directly by measuring the time required for the waves to travel a measured distance.
- The velocity varies greatly with the medium through which the sound travels.

Doppler Effect

- Sound waves are compressions and rarefactions of air.
- When the object making the sound is moving toward you, the frequency goes up because the waves are pushed more tightly together.
- The opposite happens when the object moves away from you: the pitch goes down. This is called the Doppler effect.

Harmonics

- Few objects produce sound of a single frequency.
- The sounds we hear from vibrating objects are complex in the sense that they contain many different frequencies.
- The harmonic series is a series of frequencies that are whole-number multiples of a fundamental frequency.
(Figure: a complex sound wave.)

Basic Characteristics of an Audio Signal

- Audio is caused by a disturbance in air pressure that reaches the human eardrum.
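The harmonic series mentioned above can be sketched in a few lines of Python. The 440 Hz fundamental is an illustrative choice, not a value from the notes:

```python
def harmonic_series(fundamental_hz, n):
    """Return the first n harmonics of a fundamental frequency.

    The harmonics are the whole-number multiples of the fundamental,
    including the fundamental itself (the first harmonic).
    """
    return [fundamental_hz * k for k in range(1, n + 1)]

# First five harmonics of a 440 Hz tone
print(harmonic_series(440, 5))  # [440, 880, 1320, 1760, 2200]
```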
- The frequency of audible sound ranges from 20 to 20,000 Hz.
- Another parameter used to measure sound is amplitude. The dynamic range of human hearing is very large: the lower limit is the threshold of audibility and the upper limit is the threshold of pain. It is expressed in decibels (dB).

Note that:
- A sound wave is continuous in both time and amplitude.
- It changes all the time, and its amplitude can take any value within the audible range.

Digital Representation of Audio

- The continuous audio waveform is converted by a microphone into a continuous electrical (analog) signal of the same shape.
- For computers to process and communicate an audio signal, the analog signal must be converted into a digital signal.

Three stages are involved in ADC:
1. Sampling
2. Quantization
3. Coding

Sampling: the process of converting continuous time into discrete values.

Note that:
- The time axis is divided into fixed intervals. The instantaneous value of the analog signal is read at the beginning of each time interval.
- The time interval is determined by a clock pulse. The frequency of the clock is called the sampling rate (or sampling frequency). The sampled value is held constant for the next time interval; the circuit that does this is called a sample-and-hold circuit.
- Each sample is still analog in amplitude: it may take any value in a continuous range. But it is discrete in time: within each interval, the sample has only one value.

Quantization: the process of converting continuous sample values into discrete values.

Note that:
- In the quantization process we divide the signal range into a fixed number of intervals. Each interval is of the same size and is assigned a number.
- Each sample falls in one of the intervals and is assigned that interval's number.
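The sampling and quantization stages above can be sketched as follows. This is a minimal illustration, not an implementation from the notes: a sine wave is read at fixed time intervals (sampling), and each reading is assigned the number of the equal-sized interval it falls in (quantization). The function name and parameters are mine.

```python
import math

def sample_and_quantize(freq_hz, sample_rate_hz, duration_s, levels):
    """Sample a sine wave and quantize each sample to one of `levels` intervals."""
    n_samples = int(sample_rate_hz * duration_s)
    quantized = []
    for n in range(n_samples):
        t = n / sample_rate_hz                       # sampling: discrete time instants
        value = math.sin(2 * math.pi * freq_hz * t)  # instantaneous amplitude in [-1, 1]
        # quantization: divide the range [-1, 1] into `levels` equal intervals
        # and assign the sample the number of the interval it falls in
        index = int((value + 1) / 2 * levels)
        quantized.append(min(index, levels - 1))     # clamp the value = +1 edge case
    return quantized

# A 1 Hz sine sampled 8 times per second, quantized to 8 levels (3 bits)
print(sample_and_quantize(freq_hz=1, sample_rate_hz=8, duration_s=1, levels=8))
```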
- The size of a quantization interval is called the quantization step.

Coding: the process of representing quantized values digitally. For example, eight quantization levels can be coded using 3 bits in the binary system.

Note that: if the sampling rate and the number of quantization levels are high enough, the digitized signal will be a close representation of the original analog signal.

Digital-to-Analog Converter (DAC)

- A DAC is used to reconstruct the original analog signal from the digital data.
- Each quantized value is held for a time period equal to the sampling interval, producing a series of step signals.
- These step signals are then passed through a low-pass filter to reconstruct an approximation of the original signal.

In the ADC process, the most important issues are how to choose the sampling rate and the number of quantization levels for different analog signals and different applications.

- Sampling rate: depends on the maximum frequency of the analog signal to be converted.
- Number of quantization levels: determines the amplitude fidelity of the digital signal relative to the original analog signal.
- Fidelity: the closeness of the recorded version to the original sound. It depends on the number of bits per sample and the sampling rate.
- Quantization error (quantization noise): the maximum difference between the quantized sample values and the corresponding analog signal values within the quantization step.

The number of quantization levels determines how many bits are required to represent each sample. Their relationship is:

    b = log2(Q)    and    Q = 2^b

where b is the number of bits and Q is the number of quantization levels.

Signal-to-Noise Ratio (SNR)

- SNR measures the quality of the signal in decibels.
- It is defined as:

    SNR = 20 log10(S/N)

where S is the maximum signal amplitude and N is the quantization noise.

Assuming q is the quantization step, then N = q and S = 2^b * q.
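The bits-versus-levels relationship and the SNR formula above can be checked numerically. A minimal sketch; the function names are mine:

```python
import math

def bits_for_levels(q):
    """b = log2(Q): bits needed to code Q quantization levels."""
    return math.log2(q)

def snr_db(bits):
    """SNR = 20 * log10(2^b), which works out to roughly 6 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(bits_for_levels(8))    # 3.0  -> eight levels need 3 bits
print(round(snr_db(16), 1))  # 96.3 -> 16-bit samples give about 96 dB
```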
    SNR = 20 log10(2^b * q / q) = 20 log10(2^b) = 20b log10(2) ≈ 6b dB

This equation indicates that each extra bit used to represent samples increases the SNR by about 6 dB.

Nyquist Theorem

- According to the Nyquist theorem, a minimum of two samples per cycle is necessary to represent a given sound wave.
- Thus, to represent a sound with a frequency of 440 Hz, it is necessary to sample that sound at a minimum rate of 880 samples per second.
- Sampling rate = 2 x highest frequency.

Aliasing

- Aliasing is a serious problem in all systems that use a sampling mechanism: when the signal to be sampled has frequency components higher than half the sampling rate, a distortion known as "aliasing" occurs, and it cannot be removed by post-processing the digitized audio signal.
- Therefore, frequencies above half the sampling rate are filtered out prior to sampling to remove any aliasing effects.
- f_alias = f_sampling - f_true, for f_true < f_sampling < 2 x f_true.
- Example: a 6 kHz analog signal sampled with an 8 kHz clock produces a series of sample values whose reconstructed signal contains an alias at 8 - 6 = 2 kHz.

Sound Formats

- Stereo recordings are made by recording on two channels, and are lifelike and realistic.
- Mono sounds are less realistic, but they have a smaller file size.
- Stereo sounds require twice the space of mono recordings.

To calculate the storage space required, the following formulas are used:

- Mono recording: file size = sampling rate x duration in seconds x (bits per sample / 8) x 1
- Stereo recording: file size = sampling rate x duration in seconds x (bits per sample / 8) x 2

Quality of Sound

- The bandwidth of a telephone conversation is 3,300 Hz (the frequency ranges from 200 to 3,500 Hz).
- For CD-ROMs, the sampling rate is typically 44 kHz for each channel (left and right).
- CD-ROMs are becoming important media for multimedia applications. (Why?)
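The aliasing formula and the file-size formulas above translate directly into code. A small sketch with names of my own choosing:

```python
def alias_freq(f_true, f_sampling):
    """f_alias = f_sampling - f_true, valid for f_true < f_sampling < 2 * f_true."""
    assert f_true < f_sampling < 2 * f_true, "formula only valid in this range"
    return f_sampling - f_true

def file_size_bytes(sample_rate, duration_s, bits_per_sample, channels):
    """Storage formula from the notes: channels=1 for mono, 2 for stereo."""
    return sample_rate * duration_s * (bits_per_sample / 8) * channels

print(alias_freq(6000, 8000))             # 2000 -> the 6 kHz / 8 kHz example
print(file_size_bytes(44100, 60, 16, 2))  # one minute of 44.1 kHz 16-bit stereo
```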
Exercise: In a CD player, the sampling rate is 44.1 kHz and samples are quantized using a 16-bit quantizer. The resulting number of bits for a piece of music with a duration of 1 minute is equal to .......................

Criteria for Selecting a Particular Audio Quality

Higher-quality audio is always associated with higher storage and access time, so the choice of quality is purely application dependent.

- Compression means reducing the physical size of data so that it occupies less storage space and memory.
- Compressed files are easier to transfer.
- There are two types of compression:
  - Lossless compression
  - Lossy compression

Use of Audio in Multimedia

You can use sound in a multimedia project in two ways:
- Content sound: provides information to audiences, such as dialog in movies or theater.
- Ambient sound: such as background music and sound effects.

Video

- Video is a medium of communication that delivers more information per second than any other element of multimedia.
- The DVD (Digital Video Disk) makes it possible to distribute large videos, much in the same way the Compact Disc made the move from analog sound to digital sound easy.

Analog and Digital Video

- Only analog video is used as a broadcast medium: video is broadcast in analog format, even though specific movies may be in digital format prior to broadcasting.
- The three worldwide standards for broadcasting analog video are NTSC, PAL, and SECAM.

Analog Video Standards

- NTSC (National Television System Committee): displays 30 frames per second; each frame can contain 16 million colors; each full-screen frame is composed of 525 lines.
- PAL (Phase Alternation by Line): displays 25 frames per second; each full-screen frame is composed of 625 lines.
- SECAM (Sequentiel Couleur Avec Memoire): displays 25 frames per second; each full-screen frame is composed of 625 lines.

Digital Video

- There is a distinct trend towards digital video, even in consumer electronics.
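The exercise above can be worked through with the file-size formula. This sketch assumes a single (mono) channel, since the exercise does not state the number of channels:

```python
# CD-player exercise: bits produced by one minute of 44.1 kHz, 16-bit audio
sampling_rate = 44_100   # samples per second
bits_per_sample = 16
duration_s = 60

total_bits = sampling_rate * bits_per_sample * duration_s
print(total_bits)  # 42336000 bits for one channel (double this for stereo)
```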
- Digital video is easy to access and easy to edit.
- Editing video involves removing frames, inserting frames, mixing audio with video, and so on.

Timecode

- Timecode is a unit for measuring the duration of a video clip.
- It can also be used as the address of a frame.
- The timecode standardized by SMPTE (Society of Motion Picture and Television Engineers) is in the form hrs:mins:secs:frames, e.g. 00:02:31:15.

Digitizing Analog Video: Video Capturing

- A video capture card accepts a video input from an input device.
- The audio has to be sampled through a separate cable that attaches to the sound card.
- The software supplied with the video card synchronizes the two channels of audio and video.
- You should ensure that your video capture card supports capturing at the rate of 30 fps; otherwise, frame dropping occurs.

Keyframes

- A keyframe is a complete image frame, unlike other frames, which only store the changes between frames.
- Essentially, keyframes serve as reference points, and the rest of the video relies on them to reconstruct the full picture.
- The keyframe interval refers to how often a keyframe appears in the video stream.

Compression

- Compression is the process of restructuring data to reduce the file size.
- During video capture, the video file is compressed; as a compressed video file is played, it is decompressed.
- Several compression/decompression (codec) algorithms are available for compressing digital video.
- An important feature of codecs is whether they are asymmetric or symmetric.

Factors Affecting Compression

- The choice of frames per second (fps).
- The number of keyframes used.
- The data rate specified for playback.

File Formats for Video

- After a digital video is edited, we need to save it in a particular file format.
- On the PC, the AVI format is used; on the Macintosh, the QuickTime format is used.
- Both use similar strategies for compression and decompression of video information.
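Since a timecode can serve as a frame address, the SMPTE hrs:mins:secs:frames format described above maps to an absolute frame number. A hypothetical helper (the function name is mine):

```python
def timecode_to_frames(timecode, fps):
    """Convert an SMPTE 'hh:mm:ss:ff' timecode to an absolute frame number."""
    hrs, mins, secs, frames = (int(part) for part in timecode.split(":"))
    return (hrs * 3600 + mins * 60 + secs) * fps + frames

# The example timecode from the notes, assuming 30 fps
print(timecode_to_frames("00:02:31:15", 30))  # 4545
```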
- Many conversion programs are available to convert AVI and QuickTime to other standards and vice versa.

Video on the Internet

- The technology that makes video on the Internet possible is streaming video.
- Streaming video: a term applied to the compression and buffering techniques that allow real-time video to be transmitted over the Internet, enabling a user to watch the video as it downloads.
- Video quality over the Internet depends mainly on the following factors:
  - Available bandwidth
  - Sound intensity/frequency
  - The difference in information between two successive video frames

Surround Video

- Surround video allows real-time navigation and photorealistic visuals in Web pages by adding seamless 360-degree panoramic images.
- You can turn the image around in your Web page and interact with it from every angle.
- This can be very handy for displaying products, as the client can zoom in on any point.

Quiz 1: Suppose the SMPTE timecode of a video clip is 00:01:30:12 and the clip has 15 frames per second. What is the total number of frames in this clip?
No. of frames = 15 x (60 + 30) + 12 = 1362 frames.

Quiz 2: Suppose we are digitizing a video that is 640 x 480 pixels, with 24-bit color depth, at 30 fps. How much space will one second of this video take if stored uncompressed on the hard disk?
Size of one second of video = (640 x 480 x 24/8) x 30 / 1024^2 = 26.37 MB.

Quiz 3: Suppose the fps is 30 and the keyframe interval is 5. How many keyframes appear in one second?
No. of keyframes = 30 / 5 = 6 keyframes.
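The three quiz answers above can be re-computed directly:

```python
# Quiz 1: timecode 00:01:30:12 at 15 fps
frames = 15 * (1 * 60 + 30) + 12
print(frames)             # 1362

# Quiz 2: one uncompressed second of 640x480, 24-bit, 30 fps video, in MB
size_mb = (640 * 480 * 24 / 8) * 30 / 1024**2
print(round(size_mb, 2))  # 26.37

# Quiz 3: keyframes per second with 30 fps and a keyframe interval of 5
keyframes = 30 // 5
print(keyframes)          # 6
```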
