Software Quality Attributes

Questions and Answers

Which of the following best describes what quality encompasses in software systems?

  • Only how the system performs under ideal conditions.
  • Primarily the user interface and user experience.
  • Both what the system does and how it does it. (correct)
  • Only what the system does.

Which of the following exemplifies a desired property of a system, characterized as a quality attribute?

  • The system’s ability to consistently offer correct functionality even under unforeseen conditions. (correct)
  • The programming language used.
  • The number of lines of code in the project.
  • The development team's communication style.

In the context of software quality attributes, what is the primary focus of 'security'?

  • Optimizing the system's speed and performance.
  • Making sure the system scales to more users.
  • Protecting information from unauthorized access while providing service to authorized users. (correct)
  • Ensuring the system is easy to use.

When discussing software 'availability' as a quality attribute, what is the key focus?

  • The ability to carry out a task when needed, minimizing downtime and recovering from failures. (correct)

What does the quality attribute 'interoperability' primarily address?

  • The ability of the system to exchange information and provide functionality to other systems. (correct)

How does redundancy primarily affect system qualities?

  • It improves availability but can lessen security. (correct)

What is the primary goal of establishing 'correctness', 'reliability', 'safety', and 'robustness' in software development?

  • To provide evidence that the system is dependable. (correct)

What is the main focus of software 'safety'?

  • The ability to avoid hazards and prevent undesirable situations. (correct)

What is the relationship between correctness and reliability in software systems?

  • Reliability is a statistical approximation of correctness. (correct)

What does software 'robustness' primarily focus on?

  • Handling unforeseen issues and performing graceful degradation of services when assumptions are violated. (correct)

What does a Rate of Occurrence of Fault (ROCOF) of 0.02 mean?

  • Two failures per 100 time units. (correct)

Which scenario best illustrates the application of Probability of Failure on Demand (POFOD)?

  • A safety-critical system, like a nuclear reactor, assesses the likelihood of failure upon a specific request. (correct)

How is 'availability' typically measured as a reliability metric?

  • By the ratio of uptime to total time observed. (correct)

Why are hardware-based metrics often unsuitable for assessing software reliability?

  • Software failures are always design failures. (correct)

What does the term 'Mean Time Between Failures (MTBF)' represent?

  • The average time a system operates before a failure occurs. (correct)

Why is it crucial to understand the nature of failures when aiming to improve software availability?

  • To better prevent, tolerate, remove, or forecast failures. (correct)

Consider a scenario where a software system experiences 6 failures during 144 hours of operation. What is the Rate of Occurrence of Fault (ROCOF)?

  • 0.04 (correct)

A software system is advertised with a ROCOF of 0.001 failures per hour, and it takes 3 hours to get the system up again after a failure. Approximately how many failures will occur per year?

  • 8.76 (correct)

You're analyzing a software system and find that after 7 full days of operation, 972 requests were made, but the product failed 64 times (37 crashes, 27 bad output). Assuming an average restart time of 2 minutes after each failure, what is the ROCOF?

  • 0.38/hour (correct)

For the same system, after 7 full days of operation, 972 requests were made, but the product failed 64 times (37 crashes, 27 bad output) and an average of 2 minutes to restart after each failure. What is the POFOD?

  • 0.066 (correct)

For the same system, after 7 full days of operation, 972 requests were made, but the product failed 64 times (37 crashes, 27 bad outputs), with an average of 2 minutes to restart after each failure. What is the approximate availability?

  • 99.3% (correct)

Given that after 7 full days of operation, 972 requests were made and the product failed 64 times (37 crashes, 27 bad outputs), with an average of 2 minutes to restart after each failure: can we calculate the MTBF?

  • No, we need timestamps. (correct)

Given that after 7 full days of operation, 972 requests were made and the product failed 64 times (37 crashes, 27 bad outputs), with an average of 2 minutes to restart after each failure, and targets of availability of at least 99%, POFOD less than 0.1, and ROCOF less than 2 failures per 8 hours: is the product ready to ship?

  • No. Availability and POFOD meet the targets, but the ROCOF is too high. (correct)

Which phrase accurately describes 'performance' in the context of software quality attributes?

  • The ability to meet timing requirements. (correct)

What is 'latency' when measuring performance in software systems?

  • The time between the arrival of a stimulus and the system's response to it. (correct)

What does 'response jitter' measure in the context of performance measurement?

  • The allowable variation in latency. (correct)

If a system's throughput increases, what typically happens to the response time for individual transactions?

  • It increases. (correct)

Consider a system where, with 10 concurrent users, a request takes 2 seconds. How should response time be expected to change with 100 users?

  • The request would take more than 2 seconds. (correct)

Under what condition might throughput goals conflict with response time goals?

  • In situations where achieving higher throughput leads to increased response time for individual users. (correct)

What is 'Scalability'?

  • Ability to process an increasing number of requests. (correct)

What is 'Vertical Scalability'?

  • Adding more resources to a physical unit. (correct)

Which response measure is used to assess scalability?

  • Changes to performance. (correct)

Which quality attribute is the ability to protect data and information from unauthorized access, while still providing access to people and systems that are authorized?

  • Security (correct)

Which of the following is NOT a component of the CIA triad?

  • Authenticity (correct)

In the context of security, what is the meaning of 'nonrepudiation'?

  • Guarantees that the sender cannot deny sending, and the recipient cannot deny receiving. (correct)

On which areas of software does achieving security rely?

  • All of the above. (correct)

Why is security described as risk management?

  • Balance risks against the cost of guarding against them. (correct)

What does assessing security measure?

  • A measure of the system's ability to protect data from unauthorized access while still providing service to authorized users. (correct)

Which response to an attack can be assessed by appropriate metrics?

  • Time to stop an attack. (correct)

What is the primary focus of dependability in the context of software characteristics?

  • Ensuring correctness, reliability, safety, and robustness. (correct)

Flashcards

Quality Attributes

Attributes describing desired qualities of a system, prioritized by developers to meet thresholds.

Dependability

The ability to consistently offer correct functionality, even under unforeseen or unsafe conditions.

Performance

Meeting timing requirements; responding quickly to events.

Security

Protecting information from unauthorized access while providing authorized access.

Scalability

The ability to process a greater number of concurrent requests.

Availability

Ability to carry out a task when needed, minimizing downtime and recovering from failures.

Modifiability

The ability to enhance software by fixing issues, adding features, and adapting to new environments.

Testability

Ability to easily identify faults in a system; the probability a fault results in a visible failure.

Interoperability

Ability to exchange information and functionality with other systems.

Usability

Ability to enable users to perform tasks and provide support; ease of use and satisfaction.

Correctness

A program that is always consistent with its specification.

Reliability

The likelihood of correct behavior over some period of observed behavior.

Safety

The ability to avoid hazards.

Robustness

Software that 'gracefully' fails.

Availability Metric

Can the software carry out a task when needed?

POFOD (Probability of Failure on Demand)

Likelihood that a request will result in a failure.

ROCOF (Rate of Occurrence of Fault)

Frequency of occurrence of unexpected behavior

MTBF (Mean Time Between Failures)

Average length of time between observed failures.

Latency

Time between stimulus and system response.

Response Jitter

Allowable variation in latency to maintain quality.

Throughput

Number of transactions the system processes in a unit of time.

Horizontal Scalability

Adding more resources to logical units.

Vertical Scalability

Adding more resources to a physical unit.

Confidentiality

Data and services protected from unauthorized access.

Integrity

Data and services not subject to unauthorized manipulation.

Availability (Security)

The system will be available for legitimate use.

Study Notes

Software Quality

  • High-quality software is the goal, but the definition of quality can vary
  • Quality considers what a system does and how it performs
  • Key aspects of system quality include:
    • Speed
    • Security
    • Availability
    • Scalability
  • Evaluating quality can be subjective and difficult to quantify

Quality Attributes

  • Quality attributes are desired properties of a system
  • Developers prioritize these attributes when designing systems, and must meet specific thresholds
  • Dependability is a relevant measure for assessing software quality
  • Dependability means consistently providing correct functionality, even under unforeseen or unsafe conditions
  • Performance refers to the ability to meet timing requirements, enabling quick response to events
  • Security involves protecting information from unauthorized access while allowing authorized users to access services
  • Scalability is the capacity to handle more concurrent requests
  • Availability means the ability to perform tasks when needed, minimizing downtime and ensuring recovery from failures
  • Modifiability is the ability to enhance software through fixes, features, and adaptation
  • Testability refers to easily identifying faults and the probability of a fault leading to a visible failure
  • Interoperability enables the exchange of information and functionality with other systems
  • Usability is the degree to which users can easily perform tasks and obtain support
  • Usability also measures the system's feature accessibility, adaptability, and user satisfaction
  • Other quality attributes include:
    • Resilience
    • Supportability
    • Portability
    • Development efficiency
    • Time to deliver
    • Tool support
    • Geographic distribution
  • These qualities can conflict with each other
  • Fewer subsystems can improve performance but may hurt modifiability
  • Redundant data can improve availability but may lessen security
  • Localizing safety-critical features can ensure safety but degrades performance
  • It is important to decide what is important, and set a threshold on when it is "good enough"

Dependability

  • Providing evidence the system is dependable is key before software release
  • Dependability encompasses four key characteristics:
    • Correctness
    • Reliability
    • Safety
    • Robustness

Correctness

  • A program is correct if it consistently aligns with its specification
  • Correctness depends on the quality and detail of system requirements
  • With a weak specification, correctness is easy to demonstrate
  • With a detailed specification, proving correctness can be difficult
  • Achieving provable correctness is rare

Reliability

  • Reliability is a statistical approximation of correctness
  • Reliability refers to the likelihood of correct behavior over a period of observed behavior
  • The period of observed behavior can be characterized as a time period, or number of system executions
  • Reliability is measured relative to a specific usage profile and how different types of users interact with the system

Dependence on Specifications

  • Success depends on the strength of the specification for correctness and reliability
    • It can be hard to meaningfully prove anything when the spec is very strong
  • Failure severity is not considered in strong specs
    • However, some failures may be worse than others
  • Safety revolves around a restricted specification
  • Robustness focuses on everything not specified

Safety

  • Safety is the ability to avoid hazards
    • A hazard is a defined undesirable situation or a serious problem
  • Safety relies on a hazard specification
  • Key points of this reliance include:
    • Defining the hazard
    • Avoiding it in the software
    • Providing evidence that it is indeed avoided
  • Safety proofs are often possible, as they are only concerned with hazards

Robustness

  • Software may fail if the assumptions of its design are violated even if that software is correct
  • How it fails matters
  • Software that gracefully fails is considered robust
  • Software should be designed to counteract unforeseen issues or gracefully degrade services
  • It is important to look at how a program could fail and how to handle those situations
  • Robustness cannot be proved, but should be a goal

Measuring Dependability

  • It is important to establish what makes the system dependable enough to release
  • Correctness is hard to prove conclusively
  • Robustness and safety are important, but do not demonstrate functional correctness
  • Reliability becomes the basis for arguing dependability
  • Reliability can be measured and demonstrated through testing

Measuring Reliability

  • Reliability means a probability of failure-free operation for a specified time in a specified environment for a given purpose
  • It depends on the system and type of user
  • Reliability also considers how well users think the system provides services they require
  • Improved when faults in the most frequently used parts of the software are removed
    • Removing X% of faults does not automatically equate to X% improvement in reliability
    • For example, removing 60% of faults in one study only led to 3% improvement
  • Removing faults with serious consequences is the top priority in regard to reliability
  • Reliability is measurable, with requirements that can be specified

How to Measure Reliability

  • Hardware metrics often don't suit software
  • Hardware metrics are based on component failures and the need for replacement
  • In hardware, the design is assumed to be correct
  • Software failures are always design failures
  • The system is often available even when a failure has occurred
  • Metrics consider:
    • Failure rates
    • Uptime
    • Time between failures

Metric 1: Availability

  • It means the software carries out a required task when needed
  • It encompasses reliability and repair, including correct behaviour and error recovery
  • It provides the ability to mask or repair faults so cumulative outages don’t exceed a required value over a time interval
  • It is both a measure of reliability and an independent quality attribute
  • Availability is measured as uptime divided by total time observed
  • This metric takes repair and restart time into account
  • It doesn’t consider incorrect computations and is only for crashes/freezing
  • For example (downtime per day):
    • 0.9 availability = down for 144 minutes a day
    • 0.99 availability = down for 14.4 minutes a day
    • 0.999 availability = down for 86.4 seconds a day
    • 0.9999 availability = down for 8.64 seconds a day
  • Improving availability requires understanding the nature of any failures that arise
  • Failures can be:
    • Prevented
    • Tolerated
    • Removed
    • Forecasted
  • It is important to ascertain how failures are detected, their frequency, and what happens when one occurs
  • Consider the system downtime, safety, preventability, and required failure notifications
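
The downtime figures above follow directly from the definition (availability = uptime / total time observed). A minimal sketch in Python; the helper name is my own:

```python
def downtime_per_day_seconds(availability: float) -> float:
    """Seconds of downtime per 24-hour day implied by a given availability."""
    return (1.0 - availability) * 24 * 60 * 60

# 0.9 availability -> 144 minutes of downtime a day
print(round(downtime_per_day_seconds(0.9) / 60))   # 144
# 0.999 availability -> 86.4 seconds a day
print(round(downtime_per_day_seconds(0.999), 1))   # 86.4
```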

Availability Considerations

  • Time to repair measures the time until a failure is no longer observable
    • This can be hard to define, as seen with Stuxnet, which caused problems for months
  • Software can remain partially available easier than hardware
  • If code containing a fault executes but the system recovers, no failure occurred

Metric 2: Probability of Failure on Demand (POFOD)

  • The likelihood that a request causes a failure when it is made
  • Calculated as failures divided by requests over the observed period
    • POFOD = 0.001 means 1 out of 1000 requests fail
  • Important in situations where failure is serious
    • Independent of request frequency
    • A 1/1000 failure rate could be risky or acceptable, depending on how many requests occur over the system's lifetime and the consequence of each failure
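
POFOD is simply failed requests over total requests; a one-line sketch (the function name is illustrative):

```python
def pofod(failures: int, requests: int) -> float:
    """Probability of Failure on Demand: failures / requests over the observed period."""
    return failures / requests

# 1 failing request out of 1000 -> POFOD = 0.001
print(pofod(1, 1000))  # 0.001
```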

Metric 3: Rate of Occurrence of Fault (ROCOF)

  • Frequency of unexpected behavior occurrences
  • Calculated as the number of failures divided by total time observed
    • A ROCOF of 0.02 means 2 failures per 100 time units
    • Given as "N failures per M seconds/minutes/hours"
  • Used when requests are made on a regular basis (like a shop)
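
ROCOF has the same shape but divides by observed time rather than request count; again, an illustrative sketch:

```python
def rocof(failures: int, total_time: float) -> float:
    """Rate of Occurrence of Fault: failures / total time observed."""
    return failures / total_time

# 2 failures over 100 time units -> ROCOF = 0.02
print(rocof(2, 100))  # 0.02
```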

Metric 4: Mean Time Between Failures (MTBF)

  • Average time between observed failures
  • It only counts time when the system is operating
  • It requires timestamps for each failure and for when the system resumed service
  • Used for systems with long user sessions, where crashes cause major issues because frequently saving state to guard against them consumes disk, CPU, and memory

Probabilistic Availability

  • An alternative definition: availability is the probability that the system delivers service within required bounds over a specified time interval
  • Availability = MTBF / (MTBF + MTTR) metrics are used for this definition
    • MTBF is the mean time between failures
    • MTTR is the mean time to repair
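
This definition can be computed directly; the MTBF/MTTR figures below are made up for illustration:

```python
def probabilistic_availability(mtbf: float, mttr: float) -> float:
    """Availability = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

# Hypothetical system: fails every 990 hours on average, takes 10 hours to repair
print(probabilistic_availability(990, 10))  # 0.99
```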

Reliability Metrics

  • Availability: (uptime) / (total time observed)
  • POFOD: (failures/ requests over period)
  • ROCOF: (failures/ total time observed)
  • MTBF: Average time between observed failures
  • MTTR: Average time to recover from failure

Reliability Examples

  • Provide software with 10000 requests
    • If 35 requests return the wrong result and 5 requests crash, then the POFOD is 40 / 10000 = 0.004
  • Running software for 144 hours, where 6 million requests lead to failures on six different requests
    • The ROCOF is 6/144 = 1/24 ≈ 0.04, while the POFOD is 6 / 6,000,000 = 10⁻⁶
  • With a ROCOF = 0.001 failures per hour
    • It takes 3 hours (on average) to get the system up after a failure
    • Failures per year = 0.001 * 8760 ≈ 8.76 failures
    • 8.76 * 3 ≈ 26.28 hours of downtime per year
    • Availability = (8760 − 26.28) / 8760 ≈ 0.997
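
The 0.001-failures-per-hour example can be checked with a few lines of Python:

```python
HOURS_PER_YEAR = 24 * 365        # 8760
rocof = 0.001                    # failures per hour
repair_hours = 3                 # average time to restore service after a failure

failures_per_year = rocof * HOURS_PER_YEAR          # ~8.76 failures
downtime_hours = failures_per_year * repair_hours   # ~26.28 hours per year
availability = (HOURS_PER_YEAR - downtime_hours) / HOURS_PER_YEAR

print(round(failures_per_year, 2))  # 8.76
print(round(downtime_hours, 2))     # 26.28
print(round(availability, 3))       # 0.997
```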

Additional Examples of Reliability

  • Targets: an availability of at least 99%, a POFOD of less than 0.1, and a ROCOF of less than 2 failures per 8 hours:
    • After 7 full days, 972 requests were made
    • The product failed 64 times during this period: 37 crashes and 27 bad outputs
    • The average time to restart after each failure is 2 minutes
    • ROCOF = 64 failures / 168 hours ≈ 0.38/hour ≈ 3.05 per 8-hour work day
    • POFOD = 64/972 ≈ 0.066
    • Availability: only the 37 crashes incur downtime, so the system is down for 37 * 2 = 74 minutes out of 168 hours
      • 74 / 10080 minutes ≈ 0.7% of the time, giving availability ≈ 99.3%
  • MTBF can't be calculated: there are no timestamps for the failures, only the total time the system was down
  • The product is not ready to ship. Availability and POFOD meet the targets, but the ROCOF is too high
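
The whole shipping-readiness example can be reproduced in a short script (variable names are mine; only crashes are assumed to incur restart downtime, as in the notes):

```python
requests, failures, crashes = 972, 64, 37
hours, restart_minutes = 7 * 24, 2            # 168 hours of operation

rocof_per_hour = failures / hours             # ~0.38 -> ~3.05 per 8 hours, misses the < 2 target
pofod = failures / requests                   # ~0.066 -> meets the < 0.1 target
downtime_minutes = crashes * restart_minutes  # 74 minutes; bad outputs don't stop the system
availability = 1 - downtime_minutes / (hours * 60)  # ~0.993 -> meets the >= 99% target

print(round(rocof_per_hour, 2))  # 0.38
print(round(pofod, 3))           # 0.066
print(round(availability, 3))    # 0.993
```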

Reliability Economics

  • It may be cheaper to accept unreliability and pay for failure costs
  • It depends on social and political factors
  • A reputation for unreliability may hurt more than the cost of improving reliability
  • The cost of failure depends on the risks of failure
    • Health risks or equipment-failure risks require high reliability to prevent them occurring
    • Minor annoyances can be tolerated to a degree

Performance

  • Ability to meet timing requirements
  • Key aspects of performance include:
    • Characterizing the pattern of input events and responses
    • Requests served per minute
    • Variation in output time
  • It is a driving factor in software design
  • Often comes at the expense of other quality attributes
  • All systems have it as a requirement

Performance Measurements

  • Latency measures the time between the arrival of a stimulus and the system’s response to that
  • Response Jitter measures the allowable variation in latency
  • Throughput: Usually number of transactions the system can process in a unit of time
  • Deadlines in processing mark points where processing must have reached a particular stage
  • Number of events not processed: a measure of how many events can't be processed because the system was too busy
  • Time it takes to complete an interaction
  • How quickly the system responds to routine tasks

Measurements of Latency

  • Turnaround time is the time it takes to complete larger tasks
    • Ascertain if tasks can be completed in available time
    • Impact on system while running
    • If partial results can be produced
  • Response time is non-deterministic
    • This can be acceptable if the non-determinism can be controlled
  • Jitter defines how much variation is allowed and places boundaries on when a task can be completed
    • If the boundary is violated, quality is compromised

Metrics of Throughput

  • Throughput denotes the workload a system can handle in a time period
  • The shorter the processing time, the higher the throughput
  • As load (and therefore throughput) increases, response time for individual transactions tends to increase
    • Requests with 10 users will take 2s
    • Requests with 100 users will take 4s

Measuring Deadlines

  • Some tasks must take place as scheduled
  • If those times are missed, the system will fail, like when fuel must ignite in a car engine while its cylinder is in position
  • Deadlines can be used to place boundaries on when events must complete

Measuring Missed Events

  • If the system is busy, the input may be ignored
  • Track the number of input events that are ignored because the system is too slow to respond
  • Set an upper bound on how many missed events occur

Scalability

  • Ability to process an increasing number of requests; often assessed as part of performance
  • Horizontal scalability is also known as "scaling out"
    • Adding more resources to logical units, such as adding another server to a cluster
    • It can also mean "elasticity": adding or removing VMs from a pool
  • Vertical scalability is also known as "scaling up"
    • Adding more resources to a physical unit, such as memory to a computer
  • Scalability measures the effectiveness of additional resource utilization
    • Additional resources should improve performance without great difficulty
    • Adding them should not disrupt operations
  • To achieve scalability, the system must be designed to scale (i.e., designed for concurrency)

Assessing Scalability

  • Assessing scalability directly measures impact of adding or removing resources
  • Response measures reflect change in performance and also in availability
  • Additional data:
    • How the load is assigned to existing and new resources

Security

  • Ability to protect data and information from unauthorized access, while still providing access to people and systems that are authorized
  • Protect software from attacks or attempts to deny service to legitimate users
  • Processes allow owners of resources to control access
    • Actors are systems or users of the operation
    • Resources are sensitive elements, operations, and data of the system
    • Policies define legitimate access to resources
      • Enforced by security mechanisms used by actors

Security Characterization (CIA)

  • Confidentiality ensures data and services are protected from unauthorized access
    • e.g., hackers cannot access tax information in systems secured by confidentiality rules
  • Integrity ensures data and services are not subject to unauthorized manipulation
    • Data and services cannot be tampered with
  • Availability ensures systems are available for legitimate use
    • Attacks, like a DDoS attack, should not prevent a legitimate user's purchase

Supporting CIA

  • Authentication verifies identities of all parties
  • Nonrepudiation guarantees that the sender of a message cannot deny sending it, and that the recipient cannot deny receiving it
  • Authorization grants privileges according to tasks

Security Approaches

  • Security relies on:
    • Detecting attacks
    • Resisting attacks
    • Reacting to attacks
    • Recovering from attacks
  • Objects which may be protected
    • Data at rest
    • Data in transit
    • Computational processes

Security in relation to Risk Management

  • System security is not simply a binary of secure/not secure
  • All systems will be compromised at some stage
  • Always attempt to prevent potential attacks, reduce attack damage, and speed up recovery
  • Set realistic expectations for these attributes

Assessing Security

  • Measures the system's ability to protect data from unauthorized access while still providing service to authorized users
  • It assesses the overall system's response to an attack
  • The stimuli are attacks or attempted policy violations by external systems
  • Responses include audits, logging, reporting, and analysis
  • There are no universal metrics for measuring security
