Computer Basics Quiz
5 Questions
1 View

Questions and Answers

What is the primary function of a computer's memory?

  • To permanently store data
  • To temporarily store data and instructions (correct)
  • To display images on the screen
  • To process mathematical calculations

Which of the following is an example of an input device?

  • Monitor
  • Keyboard (correct)
  • Speaker
  • Printer

What does CPU stand for?

  • Central Processing Unit (correct)
  • Computer Peripheral Unit
  • Control Program Utility
  • Common Performance Upgrade

What type of software is an operating system?

  • System software (correct)

Which of these is a unit of data storage capacity?

  • Byte (correct)

Study Notes

Biostatistics for Premedical Students

Chapter 1: Introduction

  • Statistics involves collecting, classifying, summarizing, and analyzing data to make scientific inferences.
  • Descriptive Statistics include techniques to organize, describe, and summarize data using summary measures and graphs.
  • Descriptive statistics are a way of summarizing and communicating the key aspects of data, such as demographics, and describe a dataset or population.
  • Inferential Statistics involve investigating questions, models, and hypotheses relating to a study by analyzing data from one or more samples to draw conclusions about the related parameters in the population.
  • Biostatistics applies the principles of statistics to problems in biology, human biology, medicine, and public health.
  • Statistics is a branch of applied mathematics with roots in probability theory and is fundamental to all observational sciences.
  • The raw data of any statistical inquiry consist of a number of observations acquired through a process of measurement.
  • Quantitative data contains numerical information regarding numbers or amounts (e.g., heights, weights, time, number of items).
  • Qualitative data contains information regarding attributes like the presence or absence of characteristics of individuals or objects.
  • A variable is a characteristic that takes different values for different persons, objects, times, or places.
  • Examples of variables include height, disease diagnosis, eye color, temperature, and the number of patients.
  • Random variables have observed values that are not determined by a fixed rule or system but arise from a chance process of measurement.

Chapter 2: Scales of Measurement

  • Measurement assigns numbers to objects/events based on a set of rules.
  • On a nominal scale, the numbers assigned only identify a quality and distinguish distinct categories.
  • A nominal variable (qualitative or categorical) cannot be quantified and is used only to distinguish between distinct categories.
  • Ordinal variables’ values can be ordered or ranked and allow measurements to be ranked in terms of quantity represented by the variable.
  • Ordinal variables do not show "how much less?" or "how much more?"; the ordinal scale is also qualitative.
  • Interval variables’ values can be regarded as points on a continuous number line; the scale is quantitative.
  • The difference between any two interval-scale values provides a meaningful numerical measure.
  • The interval scale’s zero point does not indicate a total absence of the quantity being measured (e.g., temperature in Celsius).
  • Differences can be determined numerically on an interval scale, but ratios are not meaningful because the zero point is arbitrary.
  • A ratio variable’s values can be regarded as points on a discrete or continuous number line; the scale is quantitative.
  • On a ratio scale, the difference between any two values provides a numerical measure, and ratios may also be determined.
  • A ratio scale differs from an interval scale in that it has a true zero indicating a total absence of the quantity being measured.
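The four scales above can be made concrete with a small classification sketch. The example variables below are hypothetical illustrations chosen to match the chapter's definitions (temperature in Celsius has an arbitrary zero, so it is interval; weight has a true zero, so it is ratio):

```python
# Hypothetical example variables classified by measurement scale,
# following the definitions above.
scales = {
    "eye color": "nominal",                              # categories only, no order
    "pain severity (mild/moderate/severe)": "ordinal",   # ordered, gaps not meaningful
    "temperature in Celsius": "interval",                # equal intervals, arbitrary zero
    "body weight in kg": "ratio",                        # true zero: 0 kg means total absence
}

# Interval and ratio scales are the quantitative ones; nominal and
# ordinal are qualitative.
quantitative = {v for v, s in scales.items() if s in ("interval", "ratio")}
```

This is only a lookup table, not an algorithm; its point is that the scale determines which operations (ordering, differences, ratios) are meaningful for a variable.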

Chapter 3: Summarizing Data

  • When measurements of a variable are taken on the entities of a sample, the result is an unordered set of values; the data must be ordered and organized to present useful information.
  • Data presentation can be done using tables (tabular) or graphically.
  • Tabular presentation presents statistical data concisely and comprehensibly; it is of two types: frequency distribution and categorical distribution.
  • A frequency table is used to summarize a set of observations for the variable under study, arranged according to magnitudes (individually or in ranges), useful for quantitative data.
  • A cumulative frequency distribution shows the total number of observations with values "less than" (or "more than") a given value/class, obtained by successively adding (or subtracting) the frequencies of the values/classes in order.
  • A categorical distribution is used to summarize a record of observations on an attribute, i.e., qualitative data.
  • Graphical presentation illustrates statistical data to elucidate the main data features, suggest methods to analyze data, and explain conclusions.
  • A bar chart has bars of equal width with heights (or lengths) proportional to the frequencies of the corresponding periods or categories.
  • Bar charts illustrate quantitative and qualitative data and can be drawn vertically or horizontally, with the bars usually separated by gaps of equal width.
  • Histograms illustrate continuous frequency distributions (quantitative data) by erecting vertical rectangles whose bases equal the class intervals and whose heights are proportional to the frequency of each class.
  • A frequency polygon is used with quantitative data (discrete or continuous) to plot a frequency distribution graphically.
  • Frequency polygons are obtained by plotting frequencies on the vertical axis against the class midpoints on the horizontal axis, and joining the resulting points by straight line segments.
  • A frequency polygon can also be drawn on a histogram by joining the midpoints of the tops of adjacent rectangles with straight lines.
  • A cumulative frequency polygon (ogive) is a graph representing the cumulative frequency distribution; there are two types: the less than ogive and the more than ogive.
  • A less than ogive is constructed by plotting the less-than cumulative frequencies on the vertical axis against the corresponding upper class boundaries on the horizontal axis, and joining the resulting points by a smooth freehand curve.
  • A more than ogive is constructed by plotting the more-than cumulative frequencies on the vertical axis against the corresponding lower class boundaries on the horizontal axis, and joining the resulting points by a smooth freehand curve.
  • A pie chart is a graph representing a categorical distribution, drawn as a circle divided into segments.
  • A pie chart expresses each category’s frequency as a percentage of the total data; each percentage is converted to an angle (in degrees) in proportion to 360°, and the circle is divided accordingly.
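As a rough illustration of the tabular ideas above, a frequency table and its less-than cumulative counterpart can be built by counting values and successively adding the frequencies. The sample of visit counts below is hypothetical, not from the source notes:

```python
from collections import Counter

# Hypothetical sample: number of clinic visits for 12 patients
visits = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 5]

# Frequency table: each distinct value with its count, in order of magnitude
freq = dict(sorted(Counter(visits).items()))

# "Less than or equal" cumulative frequencies, built by successive addition
cum = {}
running = 0
for value, count in freq.items():
    running += count
    cum[value] = running

print(freq)  # {1: 1, 2: 2, 3: 3, 4: 2, 5: 4}
print(cum)   # {1: 1, 2: 3, 3: 6, 4: 8, 5: 12}
```

The final cumulative entry always equals the sample size, which is a quick consistency check on any hand-built frequency table.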

Chapter 4: Measures of Central Tendency (Location)

  • Data summarization can happen with tabulation, graphical representations and numerical measures.
  • Numerical measures describe the location of a data set and its variation or dispersion.
  • These measures are considered representative or typical values of the data.
  • The commonly used measures are arithmetic mean, median, mode, and weighted mean.
  • The arithmetic mean is the average of the numbers in a data set.
  • Arithmetic Mean for ungrouped data is the sum of values divided by the number of observations.
  • For grouped data from a frequency table with k class intervals, midpoints m_1, …, m_k, and corresponding class frequencies f_1, …, f_k, the mean is x̄ = (Σ f_i m_i) / (Σ f_i).
  • The mean is simple to calculate, suitable for interval and ratio data, and does not ignore any information.
  • There is only one mean for any given data set.
  • The mean can be distorted by extreme values.
  • As its name suggests, the median is the point that splits the ordered data into two groups of equal size, provided the data have been ordered numerically.
  • For ungrouped data, if the number of observations n is odd, the median is the value exactly in the middle of the ordered observations; if n is even, the median is the value halfway between the two middle values, i.e., their mean.
  • For grouped data from a frequency table with k class intervals, the median is found by calculating the median order n/2, locating the median class, and interpolating within it: median = L + (h/f)((n/2) − F), where L is the true lower limit of the median class, h its width, f its frequency, and F the cumulative frequency of the classes preceding it.
  • The median is simple to determine and can be used with quantitative or ordinal qualitative data.
  • It marks the center point of the data set and can be read easily from a "less/more than" ogive graph.
  • The median is not affected by extreme values, but because it ignores most of the data it can be a weak representative measure.
  • The mode is the value that occurs most often in the data set; a data set may have more than one mode.
  • A data set with only one value occurring with the greatest frequency is said to be unimodal.
  • If a data set has two values that occur with the same greatest frequency, both values are considered to be a mode and the data set is said to be bimodal.
  • If a data set has more than 2 values that have the same greatest frequency, each value is used as a mode, and the data set is said to be multimodal.
  • When no data value occurs more than once, or all data values occur with equal frequency, the data set is said to have no mode.
  • For grouped quantitative data, mode = L + h(f_m − f_p) / ((f_m − f_p) + (f_m − f_s)), where L is the true lower limit of the modal class, h the width of the modal class, f_m the frequency of the modal class, f_p the frequency of the class preceding it, and f_s the frequency of the class succeeding it.
  • The mode works well with nominal data and can be estimated from a histogram.
  • It is less commonly used than the mean or median: it may not exist or may not be unique, and it is affected severely by chance fluctuations in the frequencies.
  • The weighted mean gives each quantity being averaged its proper degree of importance by assigning it a weight; the weighted values are summed and divided by the sum of the weights.
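A short sketch of the central-tendency measures above, using Python's standard statistics module. The ungrouped sample and the grouped-data figures (modal class 20-30) are hypothetical, chosen only for illustration:

```python
from statistics import mean, median, mode

# Hypothetical ungrouped sample, already ordered, n = 6 (even)
data = [2, 4, 4, 6, 9, 11]

avg = mean(data)     # sum 36 divided by 6 observations
mid = median(data)   # even n: halfway between the two middle values, 4 and 6
top = mode(data)     # 4 occurs most often, so this set is unimodal

# Grouped-data mode for a hypothetical frequency table whose modal class
# is 20-30: L = true lower limit, h = class width, f_m/f_p/f_s = frequencies
# of the modal, preceding, and succeeding classes.
L, h = 20, 10
f_m, f_p, f_s = 12, 8, 6
grouped_mode = L + h * (f_m - f_p) / ((f_m - f_p) + (f_m - f_s))
```

Note how the grouped formula pulls the mode away from L toward whichever neighboring class is more frequent: here f_p > f_s, so the mode lands in the lower part of the modal class.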

Chapter 5: Measures of Dispersion (variation)

  • Dispersion numerically measures the extent of variation of the data around a central value.
  • A dispersion measure indicates how much variability is present in a data set: if all values are identical the dispersion is 0; otherwise dispersion is present. Dispersion is small when the values are close to each other and large when they are far apart.
  • Range is a positional measure that computes the spread as the difference between the largest and smallest observed values; it is denoted by R: R = highest value − lowest value.
  • For grouped data, the range is the upper limit of the last class minus the lower limit of the first class.
  • Because the range takes only two values into account, it is a weak measure of variation.
  • Variance uses the mean as a reference point: it is small when the values cluster close to the mean and large when they are spread widely around it.
  • For an ungrouped sample x_1, x_2, …, x_n with mean x̄, the sample variance is s² = Σ(x_i − x̄)² / (n − 1).
  • For grouped data with class midpoints m_i and frequencies f_i, the variance is s² = Σ f_i(m_i − x̄)² / (Σ f_i − 1).
  • The standard deviation is the square root of the variance: s = √(Σ(x_i − x̄)² / (n − 1)) for ungrouped data, and s = √(Σ f_i(m_i − x̄)² / (Σ f_i − 1)) for grouped data.
  • Both the variance and the standard deviation are non-negative; a value of 0 indicates that all the data values are the same.
  • The coefficient of variation measures relative variation, allowing comparison of variability between variables measured in different units, or of values of a single variable across groups; it does not depend on the units of measurement.
  • CV = (s / x̄) × 100%.
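The dispersion measures above can be checked with the standard statistics module, which uses the same n − 1 denominator for the sample variance. The sample below is hypothetical:

```python
from statistics import mean, variance, stdev

# Hypothetical ungrouped sample: n = 4, mean = 5
data = [4, 8, 6, 2]

data_range = max(data) - min(data)   # R = highest - lowest

s2 = variance(data)          # sample variance: sum of squared deviations / (n - 1)
s = stdev(data)              # standard deviation = square root of the variance
cv = s / mean(data) * 100    # coefficient of variation, as a percentage
```

Here the squared deviations are 1, 9, 1, and 9, so s² = 20/3 ≈ 6.67; the CV comes out near 52%, a unit-free figure that could be compared against a sample measured on a completely different scale.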

Chapter 6: Basic Concepts of Probability

  • Probability is the study of chance events; it supports statistical inference by measuring uncertainty.
  • A random experiment is a process of observing or measuring a phenomenon whose outcome is uncertain.
  • A sample space is the set of all possible outcomes of a random experiment; it may be discrete or continuous.
  • Events include the impossible event and the certain event; a simple event consists of a single sample point, while a compound event consists of more than one.
  • Two events are independent if the occurrence of one does not affect the probability of the other; otherwise they are dependent.
  • Mutually exclusive events are events for which the occurrence of one precludes the occurrence of the other; mutually exclusive events are necessarily dependent.
  • When the outcomes all have the same chance of occurring they are equally likely; otherwise they are unequally likely.
  • The probability of an event E, written P(E), is a real number between 0 and 1. If E contains no outcomes (the impossible event), P(E) = 0; if E contains all possible outcomes (the certain event), P(E) = 1.

  • The empirical probability of an event E is P(E) = (number of trials in which E occurs) / (total number of trials).
  • Under the axiomatic approach, the addition rule for the union of two events is P(A ∪ B) = P(A) + P(B) − P(A ∩ B); the term P(A ∩ B) accounts for outcomes in which both events occur, i.e., where the events intersect.
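The addition rule above can be verified directly on a small equally-likely sample space. The single fair die and the two events below are a standard textbook illustration, not from the source notes:

```python
from fractions import Fraction

# Sample space for one roll of a fair die: 6 equally likely outcomes
space = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # event: the roll is even
B = {4, 5, 6}   # event: the roll is greater than 3

def prob(event):
    """Classical probability: favorable outcomes over total outcomes."""
    return Fraction(len(event & space), len(space))

# Addition rule: P(A or B) = P(A) + P(B) - P(A and B).
# A and B share outcomes {4, 6}, so P(A ∩ B) must be subtracted
# to avoid counting those outcomes twice.
lhs = prob(A | B)
rhs = prob(A) + prob(B) - prob(A & B)
```

Using Fraction keeps the probabilities exact, so the two sides of the rule can be compared with equality rather than a floating-point tolerance.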

Description

Test your computer knowledge with this basic computer quiz. Questions cover memory, input devices, CPU, operating systems, and data storage.
