COMP2120 Lecture Notes PDF


Summary

These lecture notes cover topics in software engineering, introducing various software development approaches like project-based and product-based development. They also delve into agile methodologies, specifically Scrum, and discuss key concepts such as sprints, product backlogs, and user stories. The notes emphasize the importance of clear and complete requirements for software projects.


COMP2120 LECTURE NOTES
lec 1: 23/7/24
software products! —> generic software systems that provide functionality, useful to a range of consumers
software product engineering methods? —> like engn methods lols
project based soft engn
○ requirements owned by external client
○ software developed by contractor implements functionality to meet requirements
○ these requirements may change throughout the project
○ you have things you need to get done —> designs to fulfill this
product based
○ starting point is the business opportunity, more specific
○ idea of getting from a concept or idea to a final design and implementation
software product line
comparable soft dev
○ product dev = no external customer that generates requirements for you
INTRO TO SCRUMS
some terminology…
○ scrum - daily team meeting where progress is reviewed and work to be done that day is discussed
○ sprint - short period (2-4 weeks) when product increment is developed
○ product - softw prod developed by scrum team
○ scrummaster - team coach
○ potentially shippable prod increment - output of sprint; should be high enough quality to be deployed for customer use
○ velocity - estimate of how much work a team can do in a sprint
○ product owner - team member responsible for ID prod features and attributes
○ product backlog - to do list of items i.e. bugs, features, improvements that the Scrum team have not completed
○ dev team - small self-organising team of 5-8 ppl responsible for dev product
scrum elements
○ prod backlog: items on list are PBIs (prod backlog items); may include prod features, user requests, engn improvements etc; should always be prioritised so that items to be implemented first are at top of list
○ sprint backlog
product backlog item states (PBI States)
a) ready for consideration
○ high lv ideas and feature descriptions considered for inclusion in product
○ tentative - may radically change/not be included
b) ready for refinement
○ team has agreed this is an important item that should be implemented
○ reasonably clear definition of what is required
○ work is needed to understand and refine
c) ready for implementation
○ PBI has enough detail for team to estimate effort involved and implement item
○ dependencies on other items have been identified
lec 2: 30/7/24
timeboxed sprints
○ benefits: tangible output - demonstrable progress, work planning, problem discovery
○ sprint planning: agreed sprint goal - focus on software functionality, support, etc; decide on list of items from product backlog; sprint backlog - output of planning process, breakdown of PBIs to show what is involved
○ sprint goals: function/support/performance & reliability
scrums
○ short daily meeting - short and focused
agile activities
○ test automation - executable tests to be run at any time
○ continuous integration - new/changed components must be integrated into existing system
code completeness
○ reviewed - understandable, appropriate comments, refactored etc
○ unit tested
○ integrated
○ integration tested
○ accepted
sprint reviews
○ review meeting
user understanding
○ try to understand users —> surveys, interviews, etc
from personas to features
feature description
○ input, activation, actions (how input data is processed), output
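(Not from the lecture: a toy sketch to make the feature description fields and a prioritised backlog concrete. The class and field names are my own; a real team would keep this in a tracker, not in code.)

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: the four-part feature description from the notes
// (input, activation, actions, output) plus a priority-ordered backlog of PBIs.
public class BacklogSketch {

    record FeatureDescription(String input, String activation, String actions, String output) {}

    enum PbiState { READY_FOR_CONSIDERATION, READY_FOR_REFINEMENT, READY_FOR_IMPLEMENTATION }

    // Lower priority number = implement first.
    record ProductBacklogItem(String title, int priority, PbiState state, FeatureDescription feature) {}

    public static void main(String[] args) {
        List<ProductBacklogItem> backlog = new ArrayList<>();
        backlog.add(new ProductBacklogItem("Print document", 2, PbiState.READY_FOR_REFINEMENT,
                new FeatureDescription("open document", "user selects Print",
                        "render pages to printer format", "printed pages")));
        backlog.add(new ProductBacklogItem("New document", 1, PbiState.READY_FOR_IMPLEMENTATION,
                new FeatureDescription("none", "user selects New",
                        "create empty document from default template", "blank editable document")));

        // Keep the backlog prioritised: items to be implemented first sit at the top.
        backlog.sort(Comparator.comparingInt(ProductBacklogItem::priority));
        backlog.forEach(pbi -> System.out.println(pbi.priority() + " " + pbi.title() + " [" + pbi.state() + "]"));
    }
}
```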
personas
○ imagined users - create character portraits of people you think will use product
○ investigate corner cases of everyone who uses app
○ short and easy to read
○ describes bg and why they want to use product
○ personalisation, job related, education, relevance
○ benefits: empathise with users, check ideas to make sure features are all relevant, make sure no unwarranted assumptions
○ deriving personas: based on understanding of potential users, jobs, bg, aspirations; abstract essential info; proto-personas — developed on the basis of limited info
scenarios
○ narrative describing how users might use your system
writing scenarios
○ from user's perspective, based on identified personas/real users
○ general, not including implementation info
○ coverage of all potential user roles
software features
○ fragment of functionality e.g. print feature, new doc feature
○ before programming product, aim to create list of features to be included
○ starting point for prod design and dev
user stories
○ scenarios are high lv stories of system use; describe sequence of interactions with the system, dont include details of these interactions
○ user stories are finer grain, detailed narratives that are structured
features
○ should be independent, coherent, and relevant
○ knowledge needed for feature design: user/product/domain/tech knowledge
○ tradeoffs: simplicity vs functionality, familiarity vs novelty, automation vs control
○ feature creep? when new features are added in response to user requests, without considering if they are useful; too many = hard to use
○ descriptions: description, constraints, comments
L3: Requirements
✶ Self-organising teams:
◈ Coordinates work of team members - discussing tasks and reaching consensus
◈ Limits involvement of engineers in external interactions with management & customers
◈ Makes own decisions on schedule, deliverables etc
◈ Self contained bubble basically
▸ External interaction delegated to only ScrumMaster and Product Owner
▸ Idea is that team should work on soft dev w/o distraction
✶ Team composition:
◈ Ideal = 5-8 people, with diff skills, diff levels of experience; diverse but still able to communicate informally
◈ Self organising = cohesive team that adapts to change
◈ Coordination!
▸ Scrum developers assumed teams would be co-located, and could meet daily
▸ this is super unrealistic, most teams arent full time workers who can meet daily
✶ Project management:
◈ Need to appoint someone to take on project management - Scrum devs did not envisage the ScrumMaster doing this
◈ Project management responsibilities:
▸ Reporting - budget, schedule, risks, problems, progress
▸ People - vacations, absences, work quality, hiring
▸ Admin - finance, compliance, liaison
✶ Software products
◈ Three factors driving design of soft prods: 1. Business and consumer needs not met by current prods 2. Dissatisfaction with existing prods 3. Changes in tech make new prods possible
✶ Requirements
◈ What the system will do!
◈ Fred Brooks:
▸ hardest part of building a system is deciding WHAT to build - no part of the conceptual work is as hard as establishing the detailed technical requirements
◈ Incomplete requirements = largest issue in projects
✶ Requirements engineering???
◈ Knowledge acquisition - how to capture relevant detail
◈ Knowledge representation - how to express these system details
◈ Requirements say what the system will do, not HOW it will do it - defines desired behaviour
✶ Functional requirements & implementation bias
◈ Domain knowledge and assumptions outline existing behaviours that remain unchanged by system
◈ Gaps in requirements
▸ Unshared actions e.g. theft cannot be controlled
▸ Future requirements
◈ Avoiding implementation bias…
▸ Indicative mood = as-is environment
▸ Optative mood = environment with machine (to-be)
◈ Types of requirements: 1. Functional requirements: specify what the system should do i.e.
inputs/outputs/interface/response ** important that requirements need to be complete, consistent, and precise (ccp) 2. Non functional (quality) requirements: quality of system performance, dont specify what system does but how well it performs, should be testable ◈ Expressing quality requirements: ▸ Informal goals: broad intentions (e.g. ease of use) that guide design ▸ Verifiable non-functional reqs: specific, measurable crit ensuring reqs are met e.g. error rates ✶ Functional requirements & implementation bias ◈ Typical steps: ▸ Identify stakeholders of interest and target info to be gathered ▸ Understand domain ▸ Discover real needs by interviewing stakeholder - record, transcribe, report ec ▸ Check validity of report with interviewee ▸ Explore alternatives to address needs ◈ Life hack - just keep asking the interviewee what is a problem for them!!! Try to understand them ◈ Capturing vs synthesising ▸ Engineerings need to be faithful to stakeholder expectations and needs ✶ Interview advice!!! ◈ Get basic info about person before e.g. role, responsibilities ◈ Review questions ◈ Begin concretely with specific questions, work through a scenario/prototype e.g. ◈ Be open minded ◈ Contrast with current system/alternatives, explore conflicts, priorities ◈ Plan for follow up questions ◈ Identifyuright interviewee sample for full issue coverage ◈ Make them feel comfy, keep control, but appear trustworthy ▸ Help this entire section is SO funny because its literally just. Being a normal empathetic human but nooooo comp students need to be coached through being a normal person ✶ prototypes , mockups, stories ◈ High (detailed) vs low (simple) fidelity mock ups ◈ Storyboards illustrate scenarios (typical sequence in system that meets an objective) ▸ Different types e.g. pos vs neg, normal vs abnormal ✶ Types of inconsistency ◈ Terminology clash ▸ same concept is named differently ▸ e.g. borrower vs patron in library = same user ◈ Designation clash ▸ same term used for different concepts ▸ e.g. ‘user’ is ambiguous ◈ Structure clash ▸ same concept rep in diff formats/structures ▸ e.g. latest return date = Friday, or friday at 5:30pm in another ◈ Strong conflict ▸ statements cannot coexist w/o violating each other ▸ e.g. the letter is completely confidential vs i should know what is in the letter ◈ Weak conflict (divergence) ▸ Statements that only conflict under certain conditions ▸ E.g. patrons shall return borrowed book within x weeks vs patrons shall keep borrowed book as long as needed only a conflict if x < needed ✶ Handling inconsistencies ◈ Terminology, designation, structure clashes ▸ Build shared glossary can help standardise terms and structures ◈ weak/strong conflicts ▸ Negotiations to resolve, usually because of different objectives among stakeholders ✶ Requirements traceability, prioritization, risks ◈ Traceability: keeping connections bw reqs is essential to track dependencies ◈ Prioritisation: constraints e.g. cost, time means need to prioritise using strategies like MoSCoW ◈ risks: any uncertain factor that could mean not meeting an objective e.g. system glitches ✶ Risk assessment ◈ Risks consist of: ▸ Failure likelihood ▸ Neg impact ▸ Root causes e.g. 
underlying weak links in tech (causal agent) ◈ Risk calculation = likelihood x impact ✶ Models for risk management ◈ Swiss cheese model ◈ Risk assessment matrix ✶ ◈ Decide model ◈ ◈ Ooda loop ✶ A summary… ◈ Soliciting requirements through doc analysis, interviews, user observation ◈ Conflict resolution using prototypes to explore ◈ Documentation e.g. user stories are central ✶ Configuration management ◈ Process for managing changes in system to ensure consistency and coordination ▸ Goal to support integration ◈ Includes several activities.. ◈ Version management: ▸ Track diff versions of software components ▸ Coordinate work across devs, providing means to manage simultaneous changes ◈ System integration: ▸ Defines which version of components used ▸ Often = automated builds systems that compile and link ◈ Problem tracking: ▸ Allows users to report bugs/issues ▸ Enables devs to assign and track responsibility Pretty sure all of this is extra content but idgaf ✶ Design and implementation ◈ Involve creating executable system based on reqs ◈ Stages are often interwoven ◈ Key aspects: ▸ Software design: creative phase focused on decomposing system into components and defining interaction ▸ Implementation: realises design by programming components ✶ build/buy decisions ◈ Possible to by COTS (commercial, off the shelf) systems that can be adapted to meet user requirements - faster and cheaper than dev new system ◈ Configuration capability of COTS to fulfil reqs ✶ Object oriented design (OOD) process ◈ Various system models dev to structure system effectively ◈ Process stages: 1. Define system context and modes of use 2. System architecture - high level structure for system 3. Identify principal system objects 4. Develop design models 5. Specify object interfaces ◈ System context and interactions - essential for defining boundaries ◈ Boundary setting - prevent scope creep and focuses on features critical to systems main functionality L4: METRICS ✶ Understanding large systems ◈ You cannot understand the entire system….. ◈ Goal is to develop/test working model/set of working hypotheses about how some part of a system works ▸ Working model = understanding of pieces of system and way they connect and interact ◈ Prior knowledge is useful i.e. frameworks, architectural patterns, design patterns ◈ Software… ▸ constantly changes → so is easy to change! ▸ Is a big redundant mess → there’s always something to copy as starting point ✶ Entry points ◈ Triggers to make code run!!! ◈ Locally installed programs: ▸ run cmd (commands in terminal), OS launch, i/o events etc may be entry triggers ▸ Exists in binaries (machine code) ◈ Local applications in dev: ▸ build + run commands , test, deploy (e.g. docker) ▸ Source code in repo (+ dependencies) ◈ Web apps server-side: ▸ browser sends HTTP request (GET/POST) initiating code exec on server ▸ Code runs remotely (only observe outputs) ◈ Web apps client-side: ▸ browser runs JavaScript ▸ Source code downloaded and run locally ✶ Side note on build systems….. ◈ Basically same across all languages/platforms (make, maven, gradle etc) ◈ Goals: source code + dependencies + config → runnables ◈ Commons themes: ▸ Dependency management (repo, versions, etc) ▸ Config management ▸ Runnables (start, stop?, test) ▸ Almost always have debug mode and help (‘-h’ or similar) ▸ Almost always have one or more “build” directories (= not part of source repo) ◈ Can running code be… probed, understood, edited?? 
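(My own tiny illustration of that last question, not lecture material: the simplest probe is a print statement at an entry point; build it, run it, read the output, compare with what you expected.)

```java
// Hypothetical sketch: the smallest kind of probe. Run the program from its
// entry point, print an observation, and compare it with your working hypothesis.
public class ProbeSketch {

    // Code under investigation (made up for illustration).
    static int discount(int priceCents, boolean member) {
        return member ? priceCents * 90 / 100 : priceCents;
    }

    // Entry point: the trigger that makes this code run when launched locally.
    public static void main(String[] args) {
        int result = discount(999, true);
        // Probe: checks the hypothesis "members get 10% off, rounded down".
        System.out.println("discount(999, member=true) = " + result + " (expected 899)");
    }
}
```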
✶ Beware of cognitive biases…
◈ Anchoring
◈ Confirmation/expectation/plan continuation/pro-innovation bias
◈ Congruence bias (tendency to test hypotheses exclusively through direct testing, instead of testing possible alt hypotheses)
◈ Conservatism (belief revision)
◈ Curse of knowledge
◈ Default/overconfidence bias
◈ Recency illusion
✶ Static/dynamic info gathering
◈ Basic needs:
▸ Code/file search and navigation
▸ Code editing (probes)
▸ Execution of code, tests
▸ Observation of output
◈ Many tools…. e.g. decent IDE, debugger, test frameworks, coverage reports, Google
◈ Use a LEGITIMATE IDE for static info gathering
▸ VSCode, IntelliJ IDEA
◈ Consider documentation/tutorials
▸ Great for discovering entry points
▸ Can teach you about gen structure, architecture
◈ Dynamic info gathering
▸ Change = useful primitive to inform mental models about a software system
▸ Systems almost always provide some kind of starting point
▸ Put simply: ✧ Build it, run it, change it, run it again!!
◈ Probes for observation & control
▸ Allow devs to observe and sometimes lightly control code exec
▸ Simple probes: logging statements (printf), debug logging and breakpoints
▸ Advanced tools: sophisticated debugging tools i.e. stepping through code, remote debugging
✶ Sanity checks & hypotheses
◈ Before diving into debugging/changes, perform basic validation: 1. Build & run verification (confirm as expected) 2. Consistency check (no unintended versions) 3. Visible change - make an externally observable change
◈ These checks ground you in the system's state, validate consistency
◈ Starting points for changes…
▸ Modify existing tests
▸ Create new tests
▸ Observe persistent changes
✶ Maintainability index
◈ Calculates an index value between 0 and 100 that represents the relative ease of maintaining the code
◈ Higher value = better maintainability
◈ Color coded rating = quickly identify trouble spots (green = 20-100 - good maintainability, yellow = 10-19, red = 0-9)
◈ Thoughts…
▸ Metric seems attractive
▸ Easy to compute
▸ Often matches intuition
▸ Parameters do seem almost arbitrary, calibrated in a single small study
▸ All metrics related to size - why not just measure lines of code????
▸ Og 1992 programs quite different from modern Java etc
✶ Autonomous vehicle safety
◈ How can we judge AV software quality e.g. safety
◈ Test coverage
▸ Amount of code executed during testing - statement, line, branch coverage etc
▸ 75% branch coverage = ¾ of if-else outcomes executed
◈ Model accuracy
▸ Train machine learning models on labelled data
▸ Compute accuracy on separate labelled test set - 90% accuracy implies object recognition is right for 90% of test inputs
◈ Failure rate
▸ Frequency of crashes/fatalities
◈ Mileage?
◈ size/age of codebases
✶ Measurement for decision making
◈ What is measurement?
▸ Empirical, objective assignment of numbers according to a rule derived from model or theory, to attributes of objects/events with intent of describing them
◈ Software quality metrics
▸ Function whose inputs are software data, output = single numerical value interpreted as the degree to which software possesses a given attribute that affects its quality
▸ Metrics proposed for many quality attributes, may define own metrics
◈ Code complexity via lines of code = easy to measure
◈ Normalising lines of code
▸ Ignoring comments, empty lines, lines with < 2 characters
◈ Halstead volume?
▸ (total number of operators + operands) × log2(number of distinct operators + operands)
▸ Approx. size of elements and vocab
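(For reference, the usual forms of these metrics; the maintainability index coefficients below are the widely cited Visual Studio rescaling of the original 1992 formula, so treat the exact constants as tool-dependent.)

```latex
% Halstead volume: N = total operators + operands, \eta = distinct operators + operands
V = N \log_2 \eta

% Cyclomatic complexity of a control-flow graph with e edges, n nodes, p connected components
CC = e - n + 2p

% Maintainability index (Visual Studio's 0--100 rescaling)
MI = \max\left(0,\ \tfrac{100}{171}\bigl(171 - 5.2\ln V - 0.23\,CC - 16.2\ln \mathrm{LOC}\bigr)\right)
```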
◈ Cyclomatic complexity:
▸ based on a control flow graph, measures linearly independent paths through program
▸ Approx no of decisions (≈ number of decision points + 1; e.g. a method with one if and one while has complexity 3)
◈ Object-oriented metrics (CK metrics)
▸ No of methods/class
▸ Depth of inheritance tree
▸ No of child classes
▸ Coupling bw object classes
▸ Calls to methods in unrelated classes
◈ What software qualities do we care about?
▸ Scalability, installability, security, maintainability, extensibility, functionality, documentation, performance, availability, ease of use, consistency, portability
◈ What process qualities do we care about?
▸ On time release, dev speed, meeting efficiency, process conformance, time spent on rework, reliability, fairness, time/costs/actions/resources
◈ EVERYTHING measurable
▸ If x = something we care about, x must be detectable in some amount
◈ Benchmark-based metrics
▸ Monitor many projects/modules, get typical values for metrics and report deviations
◈ Questions to consider…
▸ What properties do we care about, and how do we measure them
▸ What is being measured, and to what degree does it capture what we care about/limitations
▸ How to incorporate into process
▸ Potentially neg side effects/incentives
◈ Streetlight effect
▸ Known observational bias - people tend to look for something only where it's easiest to do so → you'll look for your keys under streetlights if you drop them at night
◈ WHAT COULD GO WRONG
▸ Bad stats: basic misunderstanding of measurement theory
▸ Bad decisions: incorrect use of data
▸ Bad incentives: disregard for humans; cultural change of taking measurements may affect people
◈ Measurement scales - type of data being measured; scale dictates what sorts of analysis/arithmetic are legitimate
▸ nominal/categorical: names/categories, no implied order
▸ Ordinal scale: maps to ordered set, no info about magnitude
▸ Interval scale: has order and magnitude, but no true zero
▸ Ratio scale: interval plus a true zero (zero represents ABSENCE of the quantity)
✶ Understanding your data
◈ For causation
▸ Provide a theory (from domain knowledge)
▸ Show correlation
▸ Demonstrate ability to predict new cases
◈ Spurious correlations
◈ Confounding variables
▸ E.g. coffee vs smoking vs cancer
◈ Measurement validity
▸ Construct validity - are we measuring what we intend to measure
▸ internal/external validity
◈ Measurement reliability
▸ Extent to which measurements yield similar results when applied multiple times
▸ Goal is to reduce uncertainty and increase consistency
◈ McNamara fallacy
▸ Measure whatever can be easily measured & disregard the rest
▸ Presuming that which cannot be measured easily is not important, and does not exist
▸ Presumption that you can't model something unless you know all constants to high accuracy, so you omit it
◈ Defect density - known bugs/lines of code
▸ System spoilage = time to fix post-release defects/total system dev time
▸ Post release vs pre release
◈ Measuring usability
▸ Automated measures on code repos, use or collect process data, instrument program, surveys, interviews, controlled experiments, expert judgment, stat analysis of sample
✶ Metrics and incentives
◈ Goodhart's law: when a measure becomes a target, it ceases to be a good measure
◈ Productivity metrics
▸ Lines of code/day
▸ function/object/application points per month
▸ Bugs fixed??? Milestones reached???
◈ Incentivising productivity - what happens when dev bonuses are just based on these metrics e.g.
lines of code, amount of doc, reported bugs etc ▸ Can encourage cheating, short term thinking, extinguish intrinsic motivation etc ◈ Most software metrics are controversial!!! ▸ Only plausibility arguments, not rigorously validated ▸ Cyclomatic complexity refuted but still used/… ▸ Similar to measuring intelligence in terms of weight of brain ▸ Avoid claims about human factors e.g. readability unless validate ◈ Some strats…. ▸ Metrics tracked using tools and processes ▸ Expert assessment ▸ Mining software repos, defect databases ▸ Benchmarking for performance ◈ Factors in successful measurement program ▸ Set solid objectives and plan ▸ Make measurement part of process ▸ Gain thorough understanding of measurement ▸ Focus on cultural issues ▸ Create safe environment to collect and report data, and predisposition to change ▸ Complementary suite of measures ◈ Read on: kaner’s questions when choosing a metric ✶ Goals, signals, metrics ◈ Measuring engineering productivity… is it even worth measuring??? ◈ Google uses GSM (Goals, sigs, metrics) framework to guide metrics creation ▸ Goal: desired end result ▸ Signal: how do you know you have achieved end result ▸ Metric: proxy for signal ◈ Goals: capturing tradeoffs ▸ Quality of code - ▸ Attention from engineers ▸ Intellectual complexity ▸ Tempo and velocity ▸ Satisfaction of engineers ◈ I DONT UNDERSTAND THIS SECTION → READABILITY WHAT ◈ Idea that engineerings write better quality code due to making it more readable, and learning more about it = high intellectual complexity, they can complete work faster due to readability, and see beenftis and are more satisfied??? ◈ L5: INSPECTION ✶ Reviews and inspections ◈ Overview: ▸ Structured group activities where parts of software system (including documentation) are examined to identify issues to improve quality ▸ May result in formal ‘sign off’, which allows project to move to next dev stage (approved by management) ▸ Different types of reviews: ✧ Inspections for defect removal (product), reviews for progress assessment (product, process), quality review (product, standards) ◈ Quality reviews: ▸ Detailed examination of software components and documentation (e.g. code, designs, test plans) by group ▸ Objective = all elements meet standards and ready for next dev stage ▸ Signed off by management if approved ◈ Phases in review process 1. Prereview: planning and preparation e.g. selecting materials 2. Review meeting: doc or code author guides team in walkthrough to id issues 3. Post-review: address issues, create action items document findings ◈ Distributed reviews ▸ Trad reviews assume team meets face to face ▸ Distributed teams = remote review methods i.e. 
shared docs ◈ Program inspections ▸ Specific type of peer review focused on identifying anomalies and defects in source code w/o executing program/implementation ▸ Can be applied to requirements/design docs, effective for early error detecting ◈ Inspection checklists ▸ Error checklists tailored to common issues, including typical errors: ✧ Initialisation issues ✧ Constant naming conventions ✧ Loop termination problems ✧ Array bounds check ▸ Data faults ✧ Variables initialised, constant named, upper bound, strings, buffer overflow ▸ Control faults ✧ Loop termination, condition correct, cases accounts, break included ▸ i/o faults ✧ Input and output assigned, unexpected inputs ▸ Interface faults ✧ No of parameters, right order, shared memory structure model ▸ Storage management faults ✧ Link structure, dynamic storage space, is space explicitly deallocated ▸ Exception management faults ✧ All possible error conditions?? ◈ Software reviews ▸ Process or meeting during product is presented to others for comment/approval ◈ Objectives (for soft review) ▸ To detect errors in program logic/structure, inconsistencies bw artifacts ▸ Programming should be a public process ▸ Make sure intention of artifact is clear ▸ Very design meets reqs ▸ Ensure software dev in uniform manner, adhere to standards ◈ Walkthroughs ▸ STATIC ANALYSIS technique - designer/programmer leads members of dev team through segment of docs or code ✧ Participants ask questions, comments, about errors ▸ Three roles: ✧ Author: author of materials ✧ Moderator: handles admin e.g. schedule ✧ Recorder - writes down comments ◈ Formal inspections.. ▸ STATIC ANALYSIS technique - relies on visual examination of products ▸ Group of devs meet to formally review 🙁 ▸ Most effective approach to find bugs - 60-90% ▸ Expensive and labor intensive ▸ Team & roles (typically 4-5 ppl) ✧ Author - different to walkthrough, they dont step through the work - the ‘reader’ does that ✧ inspector(s) - raises questions, objective and constructive, EVERYONE EXCEPT AUTHOR can be an inspector ✧ Reader - leads inspection team through software, paces inspection ✧ Scribe/recorder - describes issues, makes report ✧ Moderator - chooses inspection team, admin, protocols, controlling interactions ▸ Why arent they common? ✧ Devs dont believe reviews are worth their time ✧ Ego problems….. ✧ Boring lol ✶ Quality management in agile ◈ Overview: ▸ Inherently informal…. Relies on a quality focused culture ▸ Each team member rakes responsibility for quality ◈ Shared best practices: 1. Check before check in - self organised peer review before integrating into main build 2. Never break the build - verify changes work 3. 
Fix problems on sight - dont defer to OG author ◈ Reviews in agile methods ▸ typical ly informal: ✧ Scrum: hold sprint reviews after each iteration to discuss quality issues, potential improvements ✧ Extreme programming (XP): pair programming to ensure real time code review ◈ Pair programming ▸ Two people work closely to write together ▸ Driver: types code, responsible for immediate tactical issues ▸ Navigator: observes and strategises, looks for larger scale issues and long term ▸ This collaboration leads to: ✧ Deeper code understanding - both devs have intimate knowledge ✧ Bug identification ◈ Benefits of pair programming ▸ Higher quality with minimal time increase ▸ Enhanced morale and confidence ▸ Improved teamwork and learning ◈ Challenges in pair programming ▸ Misunderstandings reinforced in pairs ▸ Reluctance to slow down - teams may hesitate to id errors because dont want to slow down progress ▸ Close relationships may affect critique ◈ Agile quality management in large systems ▸ Challenges in large scale/long lifecycle projects… ▸ External customer requirements - large clients may demand formal docs and progress reports ✧ Agile is too informal ▸ Distributed teams ✧ Informal communication is impractical ▸ Long lifetime systems ✧ Lack of docs may hinder NEW team members ✶ Modern code reviews ◈ Reasons for code reviews ▸ Finding defects (high/low level issues, requirements, design, security, performance etc) ▸ Code improvement e.g. readability, formatting, naming, coding standards ▸ Identifying alt solutions ▸ Knowledge transfer ▸ Team awareness and transparency to double check changes, announces changes to team, general awareness ▸ Shared code ownership - shared understanding of larger codebase, openness towards critique ◈ Style guides/rules @ google ▸ Rules are STRICT MANDATORY LAWS ✧ Encourage good and discourage bad ▸ Overarching principles = optimise for reader, be consistent, avoid error prone constructs, practicalities ▸ Separate style guides ✧ Avoid dangers, enforce best practices, ensure consistency ▸ Code review ✧ Polite, professional ✧ Small changes and good change description ✧ Reviewers to a minimum ✧ Automate where possible ◈ Mismatch of expectations and outcomes ▸ Low quality of code reviews ✧ Reviewers look for easy errors, miss serious ones ▸ Understanding is main challenge - reason for a change, context, etc ▸ No quality assurance on outcome ◈ Code review at google ▸ Introduced to “force developers to write code other developers could understand” ▸ 3 found benefits: ✧ Checking consistency of style and design ✧ Ensuring adequate tests ✧ Improving security - no dev can commit random code w/o oversight ◈ Personal review checklist ▸ Are all req traceable to spec. User need ▸ Any req that ar impossible to implement ▸ Could reqs be understood/implemented by independent group ▸ Security reqs specified? ▸ Glossary for term defs? ✶ Pair and mob programming ◈ Benefits of pair programming ▸ Knowledge sharing ▸ Reflection ▸ Code review on the go ▸ Focus ▸ Keeping WIP low ▸ Collective code ownership ◈ Mob programming ▸ All brilliant people working on same thing at same time in same space on same computer????? The fuck ▸ Not about getting MOST out of your team , but getting the BEST out of your team ▸ Personal/solo programming - both your best and worst make it into the code, but pair means both your bests are in the code, mob = best of your ENTIRE TEAM ✶ Running a meeting ◈ good questions! 
▸ Keep log of qs and as ▸ Try find answers first ▸ Keep mental model of who knows what ◈ Rules of running a meeting ▸ set agenda, start and end on time end with an action plan ▸ Give everyone a role - establish ground rules, decision/consensus? ▸ Control meeting, not the conversation ▸ Make everyone contribute, and make meetings essential ✶ Making code reliable??? NOT EXAMINABLE BITCH LETS GOOOOOO L6: INSPECTION ✶ 10X Engineers ◈ Concept is that these ‘ninja’ or ‘rockstar’ engineers are significantly more productive than others ◈ Super good but also lowkey terrible, not correlated with experience ✶ Teams/team issues ◈ Groups are necessary in software dev due to: ▸ Division of labor ▸ Division of expertise ◈ Teams can face various issues…. ◈ Social loafing: ▸ People tend to put in less effort when working in a group ▸ Diffusion of responsibility, lack of motivation, individual effort is dispensable ◈ Groupthink ▸ Groups may prioritise minimising conflict over exploring alts ▸ Lead to poor decisions ◈ multiple/conflicting goals ▸ Team incentives clash w individual ◈ Process costs ▸ Communication overhead increases with team siz, ▸ Mythical man month - adding people to late software proj often delays it further ◈ Diversity!!!!!!!! ▸ gender diversity = better problem solving ▸ Cultural diversity process losses through task conflict, but gaines through increased creativity BRUHHHH ◈ Unconscious bias ▸ Hinder effectiveness of diverse teams, organisations should prioritise raising awareness, setting explicitly diversity goals ▸ ✶ Modern team structures ◈ Several team structures emerged to address challenges of software development ◈ Brooks surgical teams ▸ Model emphasis a hierarchical structure ▸ Chief programmer leads team of specialists ✧ E.g. copilot, administrator, editor, secretaries, clerk, toolsmith, tester etc ◈ Microsoft small team practices ▸ Small federated teams with overlapping functional specialists ▸ Vision statement and milestones ▸ Feature selection ▸ modular ◈ Agile practices e.g. scrum ▸ Advocate for self managing teams with clearly defined roles e.g. scrummaster, product owner ◈ Conway’s law ▸ Any organisation that designs a system will produce a design whose structure is a copy of the organisations communication structure ▸ Highlights the importance of aligning team structure with code structure ◈ Case study: broderbund ◈ Commitment & accountability ▸ Conflict is useful to expose all views ▸ Assign responsibility, record decisions and commitments ◈ Causes of conflict: ▸ Conflicting resources, styles, perceptions, goals, pressures, roles, personal values, unpredicatable policies ✶ Virtual teams ◈ Spotify squads, github, jazz, cscw = tech assisted collab ◈ Clear communication = crucial for virtual teams, involves leveraging tools like issue trackers, online communication to reduce overhead and increase reliability ◈ Spotify squad: ▸ Way to organise teams into small, autonomous units that focus on specific areas of a product ▸ Each squad has unique mission guiding work they do, agile coach for support, and product owner for guidance ✶ General guidelines ◈ Hints for team functioning…. ▸ Trust! diversity! ▸ Reduce bureaucracy ▸ Physical coalition and time for interaction ▸ Avoid competition, have peer coaching ▸ time for quality assurance and realistic deadlines ▸ Elitism????????? 
◈ Elitism case study: the black team
▸ Legendary IBM team in the 60s, group of talented testers
▸ Formed team personality and energy, cultivated image of destroyers, all started to dress in black
◈ Troubleshooting teams
▸ Cynicism = warning sign
▸ Get to know each other
✶ Dev turnover & motivation
◈ High turnover rate, exceeding 20% annually
◈ High turnover is expensive…. hiring overhead, lost productivity
◈ Causes…
▸ Disposability, loyalty is silly, passing-through mentality
◈ Addressing dev motivation is critical to mitigating turnover:
▸ Growth and challenge ✧ Opportunities for learning
▸ Autonomy and purpose ✧ Empower devs to take ownership, and meaningful work
▸ recognition/rewards ✧ Acknowledging accomplishments
▸ Work-life balance ✧ Healthy work hours
✶ Growth & challenge
◈ Theories:
▸ Maslow's hierarchy of needs
▸ Herzberg motivation and hygiene factors ✧ Addressing dissatisfaction does not lead to satisfaction ✧ Eliminate dissatisfaction, then create conditions for satisfaction
▸ Daniel Pink, Drive
◈ Causes for dissatisfaction
▸ Respect for supervisor, fun, learning, working conditions, policies and admin, ethics, compensation
◈ Addressing these
▸ Respecting others, out of office play, celebration of accomplishments, leading by example, explore new tech, good working conditions, fire stupid fucks, free food, FLEXIBILITY
▸ ESTABLISH CULTURE, worthy goals, protect staff from organisational distractions, lots of communication, toys! Modern tech and hardware, praise loudly and specifically, celebrate success
◈ Incentivising overtime????
✶ Documentation
◈ Chapter 10 of SE @ Google
◈ What qualifies as doc?
▸ Any supplementary text that an engineer needs to write to do their job, inc comments
◈ Why do we need it
▸ Helps formulate an API
▸ Roadmap for maintenance and history
▸ Makes code look professional and attractive
▸ Fewer questions
◈ Documentation is like code, should:
▸ Have internal policies/rules and be placed under source control
▸ Clear ownership, undergo reviews for changes and have issues tracked
▸ Be periodically evaluated and measured
◈ Understand target audience - what is their experience, domain knowledge and purpose
◈ Different types of docs
▸ Reference docs
▸ Design
▸ Tutorials
L7: DEVOPS
✶ What is devops
◈ Software development approach that emphasises collaboration bw development and operations teams
◈ Aims to bring together dev and ops teams, which are traditionally separate within software orgs
◈ Offers several benefits, including:
▸ Increased velocity - faster release of prods and apps
▸ Increased quality - successful delivery of features and products
◈ Goals of devops…
▸ Technological - dev automated process to move code from development to release
▸ Cultural - build cohesive, multidisciplinary teams where devs are first responders when prod issues arise ✧ Instills sense of ownership throughout software lifecycle
◈ How to achieve devops….?
▸ CI - constant checking as code is checked/pushed into repo ✧ Verify build process ✧ Verify unit tests ✧ Build artifacts ▸ CD - moving build from test → stage → prod environments ✧ Environments always differ ✧ Gate code ◈ CI: ▸ commit/check in code frequently (can squash later) ▸ Commits build on previous commits ▸ Autotmated feedback and testing on commits ▸ Artifact creation ▸ Ensure code, supporting infrastructure, doc are all versioned together ◈ CD: ▸ ARTIFACTS AUTO SHIPPED INTO TEST, STAGE, PROD ENVIRONMENTS ▸ PREVENTS MANUAL DEPLOYMENT AND MANUAL STEPS, EARLY DETECT OF PROBLEM ▸ CAN be tied to a manual promotion technique to advance through environments ▸ Multi stage deployment w auto rollback on failure detection ◈ Software support ▸ Traditionally, separate teams responsible for software dev, release, and support ▸ Devteam passed over final version of software to release, they built test and prepared release doc, then third team = customer support ▸ Inevitably delays and overhead in trad support model ▸ Devops developed to speed up release and support processes ◈ Widespread adoption factors ▸ Agile soft eng reduced dev time, but trad release = bottleneck bw dev and deployment ▸ Amazon reengineered software and introduced approach in which service was dev and supported by same team → widely publicised ▸ Became possible to release software as a service, didnt need ot b physical media ◈ Devops principles ▸ Everyone is responsible for everything ▸ Everything that can be automated should be automated ▸ Measure first, change later ◈ Benefits of devops ▸ Faster deployment - communication delays are reduced ▸ Reduced risk - increment of functionality in each release is SMALL ▸ Faster repair - devops team work together - dont need to figure out who was responsible ▸ More productive teams - they happier ◈ what to practice for DevOps ▸ Infrastructure as code ✧ Use code to create required resources e.g. 
cloud services
✧ Embrace immutable infrastructure by replacing instances
✧ Everything is in code, checked in, versioned
▸ Observability (monitoring, logging, tracing, metrics)
✧ Gain insights into application performance in production
◈ Devops measurement - four types
▸ Process measurement - data about dev, testing, deployment processes
▸ Service - software's performance, reliability, and customer acceptability
▸ Usage - how do customers use prod
▸ Business success - how does prod contribute to overall business success
◈ Automating measurement
▸ Devops principle of automating should be applied to software measurement
▸ Should instrument software to collect data about itself
✶ CI: continuous integration
◈ By using devops with automated support, dramatically reduce time & costs for integration, delivery, and deployment
◈ Aspects of DevOps automation
▸ CI - each time dev commits change to master branch, executable version of system is built and tested
▸ Continuous delivery - simulation of prod's op environ is created, and executable software version is tested
▸ Continuous deployment - new release of system is made available to users every time a change is made to the master branch of the software
▸ Infrastructure as code - machine readable models of infrastructure on which product executes are used by config management tools to build the software's exec platform
◈ System integration
▸ Process of gathering all elements required in a working system, moving them into right directories, putting them together to create an operational system
▸ Typical activities:
✧ Installing DB software, setting up database with appropriate schema
✧ Loading test data into DB
✧ Compiling files, and linking them with libraries/components
✧ Checking external services are operational, deleting old config files and moving config files to correct locations
✧ Running set of system tests to check integration has been successful
◈ Definition
▸ CI simply = integrated version of system is created and tested every time a change is pushed to system's shared repo
▸ After pushing, repo sends message to integration server to build a new version of product
▸ Advantage:
✧ Compared to less freq integration, faster to find and fix bugs
✧ Make small change and some system tests fail → problem almost certainly lies in new code
✧ Focus on this code to find bug
◈ Breaking the build
▸ Devs have to make sure they dont break the build during CI → pushing code to proj repo which makes system tests fail
▸ Priority should be to discover and fix the problem
▸ Mitigation: integrate twice approach → test on your own computer first
◈ System building
▸ CI is only effective if the integration process is fast and devs do not have to wait for results
▸ Some build activities e.g. compiling, populating DB are slow → essential to have automated builds that minimise time
▸ Fast system builds = incremental building
◈ Dependencies
▸ Running system tests depends on executable object code (for both program and system tests)
▸ These depend on the source code for the system and tests being compiled
▸ Automated build uses specification of dependencies to work out what needs to be done
◈ CI/CD pipeline overview
▸ Edit code → run tests → merge code → code deployed
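(Not from the lecture: a toy example of the "run tests" stage, the kind of JUnit 5 unit test a CI server would run automatically on every push; the class under test is made up.)

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical example: a unit test run automatically on every commit;
// if it fails, the build is marked broken and the change is not merged.
class PriceCalculatorTest {

    // Toy class under test (would normally live in the main source tree).
    static class PriceCalculator {
        int withDiscount(int priceCents, int percent) {
            if (percent < 0 || percent > 100) throw new IllegalArgumentException("bad percent");
            return priceCents * (100 - percent) / 100;
        }
    }

    @Test
    void appliesPercentageDiscount() {
        assertEquals(900, new PriceCalculator().withDiscount(1000, 10));
    }

    @Test
    void rejectsInvalidPercentage() {
        assertThrows(IllegalArgumentException.class,
                () -> new PriceCalculator().withDiscount(1000, 150));
    }
}
```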
◈ CI facts….
▸ Helps us catch bugs earlier
▸ Makes us less worried about breaking builds
▸ Lets us spend less time debugging
✶ CD: Continuous delivery & deployment
◈ Overview
▸ Real environment in which software runs = different from your dev system → environment bugs may be revealed that didnt show up in test environ (CI)
▸ Continuous delivery: ✧ After system changes, changed system is ready for customer delivery
▸ Have to test in production environment to make sure that environmental factors do not cause system failures or slow down performance
◈ Deployment pipeline
▸ After initial integration testing, staged test environ created ✧ Replica of actual prod environment
▸ System acceptance tests (inc functionality, load and performance tests) run to check software works ✧ If all pass, changed software is installed on prod servers
▸ To deploy system, momentarily stop all new requests, leave older version to protect
▸ Once completed, switch to new version
◈ Benefits of CD
▸ Reduced costs ✧ Invest in auto deployment pipeline - expensive upfront but makes the money back long term ✧ Manual deployment is time consuming and error prone
▸ Faster problem solving ✧ Issues only affect small part of system - will be more obvious what the source of the problem is ✧ Harder to find problem if you bundle many changes into a release
▸ Faster customer feedback ✧ Faster feature deployment = use feedback to id improvements
▸ A/B testing and canary deployments ✧ Option if you have large customer base - using several deployment servers ✧ Basically let some people use older, and others use newer versions to measure and assess how new features are used (e.g. instagram's post directly to profile feature)
✶ Continuous deployment strategies
◈ Nightly build
▸ Automated build done once a day!
▸ benefits ✧ Minimises integration risk ✧ Reduces risk of low quality ✧ Easier defect diagnosis ✧ Improves morale
◈ Ring deployment (microsoft)
▸ Commits progressively rolled out to different groups called rings, with deflighting options in case of issues
✧ Each ring = user group, with progressively MORE users
✧ Team < dogfood < beta < many < all
▸ Windows 10 insiders program, and windows edge browser's insiders program, to gather feedback and id issues before full release
◈ rapid release (mozilla)
▸ Pushes code to diff release channels regularly, moving updates through staged release ✧ Allows for early testing and adaptation before reaching stable
▸ Stages include nightly builds, alpha (more stable than nightly), beta, release candidate
▸ Mozilla's firefox employs this approach
◈ Big bang deployments
▸ All users receive new version at same time in single large deployment
▸ Requires rigorous testing, but allows for rapid issue response and extensive bug fixes
▸ E.g. facebook!
◈ Instagram dark launches
▸ Introduce code into production w/o immediately launching to users
▸ Teams gather performance metrics, run tests, only showing features when confident of readiness
▸ E.g.
instagram pushes changes in small increments to assess performance
◈ Facebook process (until 2016) - quasi continuous release
▸ Release is cut sunday 6pm
▸ Stabilize until tuesday, canaries, release
▸ Cherry pick: push 3 times a day from wed-fri
▸ Blends rapid deployment with controlled batch releases
◈ Rolling deployments
▸ Updates parts of an app incrementally, replacing old instances with new
▸ Minimises downtime, helps isolate potential issues by gradual replacement, allows for quick rollback
◈ red/black (blue/green) deployments
▸ Deploy new version of app alongside current ✧ Once new version is verified, traffic redirected from old to new
▸ Provides quick, seamless rollback if problems arise
▸ E.g. microservices architectures transition traffic bw two nearly identical environments
◈ Canary deployments
▸ New version released to small subset of users first
▸ Gradual rollout = real world testing on small scale
▸ Effective at catching issues early in controlled manner
◈ Feature flags (a small sketch follows after the infrastructure-as-code notes below)
▸ Allows devs to enable/disable features at runtime w/o deploying new code
▸ Flags control who has access to specific features, making them useful for a/b testing, staged rollouts, and quick rollbacks
▸ Helpful in agile environments, when teams want to test features with specific user groups/specific conditions
✶ Infrastructure as code
◈ Way to address issue where we cant track software installed on each machine (all have diff configs and diff software packages)
◈ Instead of manually updating software on company's servers, process = automated using model of infrastructure written in a machine-processable language
◈ CM (config management) tools e.g. Puppet auto install software and services on servers
◈ Benefits
▸ Solves two key problems of continuous deployment:
✧ Your testing environ must be exactly the same as deployment - if you change deployment, these changes must be mirrored in testing
✧ When you change a service, you have to roll that change out to all your servers quickly and reliably → if a bug in the change affects the system, you have to seamlessly roll back
▸ Business benefits: ✧ Lower costs of system management ✧ Lower risk of unexpected problems
◈ Characteristics of IAC
▸ Visibility - infrastructure is a stand alone model that is understood by whole DevOps team
▸ Reproducibility - installation tasks will always run in same sequence so same environment is created ✧ Dont need to rely on humans to remember
▸ Reliability - automating process avoids mistakes in managing complex infrastructure
▸ Recovery - can be versioned and stored, you can easily revert to older versions
◈ Containers
▸ Provide stand alone exec environment running on top of operating system e.g. Linux
▸ Docker = container → valuable tool for implementing IAC ✧ Dockerfiles = definition of infrastructure, specify required software and configs
▸ Makes it simple to create identical exec environments ✧ Build an image for execution, and you can run an app as a test or operational system - no distinction ✧ When you update software, you rerun image creation to create new image
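(The feature-flag sketch referenced above: my own toy illustration, not lecture code; real products would read flag values from a config/flag service rather than a hard-coded map.)

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: toggling behaviour at runtime without deploying new code.
public class FeatureFlagSketch {

    // In a real system these values would come from a remote config/flag service,
    // so they can be flipped per user group for A/B tests or staged rollouts.
    private static final Map<String, Boolean> FLAGS = new ConcurrentHashMap<>(
            Map.of("new-checkout-flow", false, "dark-mode", true));

    static boolean isEnabled(String flag) {
        return FLAGS.getOrDefault(flag, false);
    }

    public static void main(String[] args) {
        if (isEnabled("new-checkout-flow")) {
            System.out.println("Showing the new checkout flow");
        } else {
            System.out.println("Showing the existing checkout flow"); // quick rollback = flip the flag
        }
    }
}
```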
✶ Monitoring
◈ What is observability?
▸ Our ability to know and discover what is going on in our systems as a dev
▸ Adding telemetry to systems to measure change and track workflow
◈ Observability dashboard
▸ What is happening now
▸ What does normal behavior look like
▸ What does it look like when something's gone wrong
▸ Can I correlate events to changes in actual graphs
L8: MICROSERVICES
✶ Monolithic vs service oriented
◈ Monolithic:
▸ all features in singular codebase – simplified, tightly coupled
▸ Potential vulnerabilities of monolithic architecture… facebook outage oct 2021 ✧ Failures at a single point within a monolithic system = cascading failure
▸ MVC pattern ✧ User uses controller, which manipulates a model, which updates the view that the user sees
▸ Consequences of monolithic arch, by aspect:
✧ scalability - limited to scaling whole app rather than individual features
✧ reliability - vulnerable to system wide failures if one mod crashes
✧ performance - potential bottlenecks due to shared resources
✧ development - slows down due to large complex codebase
✧ maintainability - challenging as app grows
✧ evolution - hard to modernise or adopt new tech
✧ testability - time consuming
✧ ownership - difficult; changes by one team impact others
✧ data consistency - strong! all share centralised database
COME BACK ON THIS IN LECTURE
◈ Service oriented architecture
▸ Decomposes app into independently managed services
▸ Allows for modularity, independent scaling, fault isolation, streamlined dev and maintenance
◈ Evolution of web browsers from monolithic to SOA
▸ Broader trend to modularity
✶ Microservices
◈ What are they…
▸ Architectural pattern that arranges application as a collection of loosely coupled, fine-grained services
▸ Small scale, stateless services that have a single responsibility ✧ Easy to replace ✧ Fine grained, one function per service ✧ Composable ✧ Easy to develop, test, and understand ✧ Fast restart, fault isolation ✧ Highly observable
▸ Combined to create apps
▸ Completely independent with their own DB and UI management
▸ Good for cloud based soft prods that are adaptable, scalable, and resilient
◈ Software services
▸ Software component that can be accessed from remote computers over internet
▸ Given input, service produces output w/o side effects
▸ Do not maintain internal state → state info is stored in DB or maintained by requestor
▸ When req is made, state info may be included as part of request ✧ Updated state info returned as part of service result
▸ No local state - services can be dynamically reallocated from one virtual server to another
◈ Modern web services
▸ Modern SOA uses simpler, lighter weight service interaction protocols with lower overheads and thus faster execution
◈ Consequences on factors…
▸ Scalability ✧ MSA puts each element of functionality into a separate service, and scales by distributing across servers ✧ Monolithic puts all functionality into a single unit, and replicates EVERYTHING when scaling
▸ Team organisation ✧ Conway's law (design of systems ←→ organisation's communication structure) ✧ Teams organised around individual services/sets of services → allows for independent ownership and autonomy ✧ Cross functional teams - rapid development ✧ End-to-end responsibility for entire lifecycle of their microservice
▸ Data management/consistency ✧ Mono = single database ✧ Distributed data management ✧ DB per service - supports autonomy of each service ✧ Event driven communication and distributed transactions
▸ Deployment and evolution ✧ Mono = multiple modules in same process, micro = modules running in different processes ✧ CD and rapid evolution of individual services w/o impacting
entire system
✧ Each service = independently deployable
✧ Rolling update, canary releases, backward compatibility
◈ Microservices overhead
▸ In LESS complicated systems, extra baggage to manage microservices = reduced productivity ✧ But… as the system becomes more complex, productivity of monoliths falls rapidly
◈ Microservice challenges
▸ Complexity of distributed systems ✧ Network latency, faults, inconsistencies ✧ Testing challenges
▸ Resource overhead, RPCs - requires thoughtful design
▸ Shifting complexities to network and operational complexity
▸ Frequently adopted by breaking down monolithic apps
◈ Serverless (functions as a service)
▸ Instead of writing minimal services, write just functions
▸ No state - relies on cloud storage/services
▸ More ways things can fail, state is expensive
✶ Microservice design example
◈ Example: system authentication
▸ User registration, authentication using UID/password, two factor authentication, user info management e.g. change password, reset password
▸ Each of these features = implemented as separate service with central shared database to hold authentication info
▸ Features are TOO LARGE to be microservices → need functional breakdown
▸ E.g. user registration: ✧ Set up new login id ✧ Set up new password ✧ Set up password recovery info ✧ Set up two factor authentication ✧ Confirm registration
◈ Characteristics of microservices
▸ Self contained - no external dependencies
▸ Lightweight - communication overhead low
▸ Implementation independent - can be implemented using diff programming languages and diff tech
▸ Independently deployable
▸ Business oriented - implements business needs, rather than just a technical service
◈ Microservice communication
▸ Exchanging messages bw services e.g. admin info, req, data required to deliver
▸ Services return response ✧ Authentication service may send message to login service that includes name input by user
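(Not lecture code: a toy sketch of the "set up new login id" function above as a tiny, stateless HTTP service, using the JDK's built-in HttpServer; the port, path and response are made up.)

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: a tiny, stateless "set up new login id" service.
// It holds no local state; a real version would write to the service's own DB.
public class RegistrationService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/register/login-id", exchange -> {
            byte[] body = "login id accepted".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // the other registration functions would be separate services
    }
}
```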
◈ Microservice characteristics
▸ High cohesion (measure of no of relationships that parts of a component have with each other) ✧ All parts needed to deliver the component's functionality are included in the component
▸ Low coupling (measure of no of relationships one component has with other components in system) ✧ Low coupling = components have minimal relationships w other comps
▸ Each microservice should have a single responsibility i.e. should do one thing and do it well ✧ Difficult to define…
◈ Microservices architecture
▸ Architectural style of implementing logical software arch
▸ Addresses two problems w monolithic apps: ✧ Whole system needs to be rebuilt when changes are made – suppppeeeerrrr slow ✧ Demand on system increases = system needs to be scaled, even if demand is localised to small no of system components
◈ Benefits of MS architecture
▸ Self contained & run in separate processes
▸ In cloud based systems, each microservice may be deployed in its own container ✧ Microservice can be stopped/restarted w/o affecting other parts of the system
▸ If demand on service increases, service replicas can be quickly created + deployed
◈ Microservice architecture design questions
▸ What are the microservices
▸ How should they communicate w each other
▸ How should service failure be detected/reported/managed
▸ How should they be coordinated
▸ How should data be distributed and shared
◈ Decomposition guidelines
▸ Balance fine grain functionality and system performance ✧ Single function services - changes are limited to fewer services but increase service communication, slowing down the system
▸ Follow common closure principle ✧ Elements of a system that are likely to be changed at the same time should be located within the same service
▸ Associate services with business capabilities
▸ Design services so that they only have access to the data that they need
◈ service communications
▸ Need to establish a standard for communication
▸ Make key decisions for standards ✧ Synchronous vs asynchronous ✧ Directly or via broker middleware ✧ Protocols?
◈ Synchronous vs asynchronous (a small async/broker sketch follows after the inconsistency notes below)
▸ Synchronous: A waits for B ✧ Service A issues request to B, then suspends processing whilst B processes
▸ Asynchronous: A and B execute concurrently ✧ A issues request which is queued for processing by B, and A continues processing without waiting ✧ B completes the earlier request and queues results to be retrieved by A
◈ Direct vs indirect service communication
▸ Direct: A & B send messages to each other ✧ Requires interacting services to know each other's address ✧ Send reqs directly to these addresses
▸ Indirect: A & B communicate through a message broker ✧ Involves naming the service required and sending req to message broker (message bus) ✧ Broker responsible for finding service to fulfil request
◈ Microservice data design
▸ Isolate data within each system service with as little data sharing as possible
▸ If unavoidable, should design MS so most sharing is read only, minimal services can update data
▸ If services are replicated - must include mechanism that can keep DB copies used by replica services consistent
◈ Inconsistency management
▸ ACID (atomicity, consistency, isolation, durability) transaction bundles a set of data updates into a single unit → all updates completed or none are ✧ ACID transactions are impractical in MSA
▸ DB used by different MS/MS replicas neednt be consistent ALL the time
▸ Dependent data inconsistency ✧ actions/failures of one service can cause data managed by another to be inconsistent
▸ Replica inconsistency ✧ Several replicas of same service executing concurrently ✧ All have DB copy, and each updates own copy of service data ✧ Need to make these DBs eventually consistent
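(The async/broker sketch referenced above: my own toy illustration, not lecture code; an in-process queue stands in for the message broker, which in a real system would be a separate message bus.)

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: asynchronous, broker-mediated communication.
// Service A queues a request and keeps working; service B consumes it later.
public class AsyncMessagingSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> broker = new LinkedBlockingQueue<>();

        // Service B: consumes requests from the broker whenever they arrive.
        Thread serviceB = new Thread(() -> {
            try {
                while (true) {
                    String request = broker.take();
                    System.out.println("B processed: " + request);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        serviceB.setDaemon(true);
        serviceB.start();

        // Service A: issues a request and continues without waiting for B.
        broker.put("check credentials for user 42");
        System.out.println("A continues processing without waiting");
        Thread.sleep(200); // give the daemon consumer a moment before the demo exits
    }
}
```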
◈ Eventual consistency
  ▸ The system guarantees that the DBs will eventually be consistent
  ▸ E.g. maintain a transaction log
    ✧ When a DB change is made, it is recorded on a pending-updates log
    ✧ Other service instances look at the log, update their own DB, and indicate that they have made the change
◈ Service coordination
  ▸ Workflow → an interaction where operations have to be carried out in a specific order
  ▸ E.g. the authentication workflow for UID/password authentication shows the steps
◈ Failure types in a microservices system
  ▸ Internal service failure
    ✧ Conditions detected by the service - reported in an error message
    ✧ E.g. a service takes a URL as input and discovers an invalid link
  ▸ External service failure
    ✧ External cause affecting availability
    ✧ May cause the service to become unresponsive
  ▸ Service performance failure
    ✧ Performance degrades to an unacceptable level
    ✧ May be due to heavy load or an internal problem
    ✧ External service monitoring is used to detect performance failures
◈ Timeouts and circuit breakers
  ▸ Timeout - a counter associated with service requests, which starts running when the request is made
    ✧ Once the counter reaches a certain value e.g. 10 secs, the calling service assumes the request has failed
    ✧ Problem: everything is delayed by the timeout value, so the system slows down
  ▸ Circuit breaker - like an electrical circuit breaker
    ✧ Immediately denies access to a failed service w/o delays
✶ RESTful services
◈ Overview
  ▸ REST (representational state transfer) architectural style = the idea of transferring representations of digital resources from a server to a client
    ✧ Resource = a chunk of data, e.g. credit card details
    ✧ Accessed through a unique URI
  ▸ Fundamental approach used in the web → the resource is a page displayed in the user's browser
◈ RESTful principles
  ▸ Use HTTP verbs
    ✧ GET, PUT, POST, DELETE must be used to access operations
  ▸ Stateless services
    ✧ Services never maintain internal state
    ✧ Microservices are STATELESS
  ▸ URI addressable
    ✧ Every resource must have a unique URI, with a hierarchical structure
  ▸ Use XML or JSON
    ✧ Resources should be represented in these forms
◈ RESTful service operations
  ▸ Create (HTTP POST)
    ✧ Creates a resource with the given URI
  ▸ Read (HTTP GET)
    ✧ Reads and returns the value
  ▸ Update (HTTP PUT)
    ✧ Modifies an existing resource
  ▸ Delete (HTTP DELETE)
    ✧ Makes the resource inaccessible
◈ EXAMPLE: road info system
  ▸ System maintains info about road incidents, accessed by URL
  ▸ Users can query the system; when implemented as a RESTful service, we need to design the resources so incidents are organised hierarchically
    ✧ Maybe by road ID, location, incident number, so it shows in the URI e.g. https://trafficinfo.net/incidents/A90/stonehaven/north/1
  ▸ E.g. incident ID: A90N17061714391
    ✧ The ID includes date, time, location, direction etc
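A minimal sketch of a client reading the incident resource above with HTTP GET, using the JDK's java.net.http client. The URI comes from the example; the rest of the code is illustrative and assumes the service returns a JSON body.

    // Read a single incident resource with HTTP GET (RESTful "Read" operation).
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class IncidentClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://trafficinfo.net/incidents/A90/stonehaven/north/1"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode()); // 200 if the incident exists
            System.out.println(response.body());       // JSON (or XML) representation
        }
    }

POST, PUT and DELETE requests to the same hierarchical URI scheme would cover the remaining create/update/delete operations listed below.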
◈ Service operations
  ▸ Retrieve (GET)
  ▸ Add (POST)
  ▸ Update (PUT)
  ▸ Delete (DELETE)
◈ Service deployment
  ▸ Once a system has been developed and delivered, it has to be deployed on servers, then monitored and updated
  ▸ Deployment is more complicated when there are many microservices
  ▸ No standard deployment config for all services, since each service dev team decides its language, DB, etc
  ▸ Now normal for microservice dev teams to be responsible for deployment and management too
  ▸ CD means that as soon as a change is made, the modified service is redeployed
◈ Deployment automation
  ▸ CD depends on automation, so that as soon as a change is committed, automated activities test the software
  ▸ If it passes the tests, it enters another automated pipeline that packages and deploys the software
✶ Machine learning microservices
◈ Typical ML pipeline
  ▸ Static:
    ✧ Get labelled data (collect, clean, and label)
    ✧ Identify and extract features (feature engineering)
    ✧ Split data into training and eval sets
    ✧ Learn model from training data (model training)
    ✧ Evaluate model on eval data (model eval)
    ✧ Repeat, revising features
  ▸ With production data:
    ✧ Evaluate model on production data; monitor (model monitoring)
    ✧ Select production data for retraining (model training + eval)
    ✧ Update model regularly (model deployment)
◈ Feature engineering
  ▸ Identify parameters of interest that the model may learn on
  ▸ Convert data into a useful form
  ▸ Normalise data
  ▸ Include context
  ▸ Remove misleading things
◈ Feature extraction
  ▸ In OCR/translation → character boundaries, line segments for each character, GPS location of the phone (to identify the likely source language)
  ▸ In surge prediction → location/time of past surges, events, number of people, typical demand curves, weather
◈ Data cleaning and learning
  ▸ Remove outliers, normalise data, handle missing values
  ▸ Then build the predictor best describing the outcome for the observed features
◈ ML model tradeoffs
  ▸ Accuracy
  ▸ Capabilities e.g. classification, recommendation, clustering
  ▸ Amount of training data needed
  ▸ Inference/learning latency (incremental learning)
  ▸ Model size
◈ Typical designs
  ▸ Static intelligence in the product
    ✧ Difficult to update
    ✧ Good execution latency
    ✧ Cheap, offline operation
    ✧ No telemetry to evaluate and improve
  ▸ Client-side intelligence
    ✧ Updates are slow, costly, out of sync
    ✧ Complexity in clients
    ✧ Offline operation, low execution latency
  ▸ Server-centric intelligence
    ✧ Latency in model execution (remote)
    ✧ Easy to update and experiment
    ✧ Operation cost, and no offline operation
  ▸ Back-end cached intelligence
    ✧ Precomputed common results
    ✧ Fast execution, partial offline operation
    ✧ Saves bandwidth, complicated updates
  ▸ Hybrid models???
◈ Reactive systems
  ▸ Responsive, consistent, high performance
  ▸ Resilient - maintain responsiveness in the face of failure
  ▸ Elastic - scale with varying loads
◈ Common design strategies
  ▸ Message driven, lazy computation, functional programming (see the small sketch after this section)
    ✧ Asynchronous, message-passing style
    ✧ Reduce bottlenecks
    ✧ Save resources and improve efficiency - e.g. waiting to load large datasets until explicitly required
    ✧ Reduce side effects
  ▸ Replication, containment, supervision
    ✧ Replicate and coordinate isolated components e.g. with containers
    ✧ Isolating each component in its own environment reduces dependencies and conflicts, and monitoring them makes sure they are operating as expected
  ▸ Data streams, infinite data, immutable facts
    ✧ Streaming tech, data lakes
    ✧ Continuous data streams e.g. social media feeds
    ✧ Models can process historical data and maintain consistent records, essential for …
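A small sketch of the lazy-computation idea mentioned under the common design strategies: work is only performed when a result is explicitly required. The method names and data are illustrative placeholders, not part of the lecture material.

    // Lazy computation sketch: the expensive step runs only for the elements actually needed,
    // and the large dataset is only loaded if it is ever requested.
    import java.util.List;
    import java.util.function.Supplier;
    import java.util.stream.Stream;

    public class LazyComputationSketch {
        public static void main(String[] args) {
            // Java streams are lazy: only the five elements the terminal operation asks for
            // pass through the (potentially expensive) mapping step.
            List<Integer> firstFive = Stream.iterate(1, n -> n + 1)   // conceptually infinite
                    .map(LazyComputationSketch::expensiveStep)
                    .limit(5)
                    .toList();
            System.out.println(firstFive);

            // Lazy loading: nothing is read until dataset.get() is actually called.
            Supplier<List<String>> dataset = LazyComputationSketch::loadLargeDataset;
            System.out.println("Dataset declared but not yet loaded: " + (dataset != null));
        }

        static int expensiveStep(int n) { return n * n; }

        static List<String> loadLargeDataset() {
            // Placeholder standing in for reading a large file or remote data source.
            return List.of("record1", "record2");
        }
    }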
◈ Models can break, mistakes will happen
  ▸ ML components detect patterns from data - both real and spurious (fake)
    ✧ Predictions are often accurate, but mistakes are always possible - and not always explainable
  ▸ System/model outages - file corrupt? Model tested?
  ▸ Model errors and degradation - data drift, feedback loops
    ✧ Backup strategy? Undoable? Non-technical compensation?
◈ Mitigating mistakes
  ▸ Investing in ML - MORE TRAINING DATA, BETTER DATA AND FEATURES
  ▸ Less forceful experience - prompt rather than automate decisions
  ▸ Adjust learning parameters, more frequent updates and manual adjustments
  ▸ Guard rails - heuristics and output constraints
  ▸ Override errors by hardcoding specific results
◈ Telemetry
  ▸ Purpose
    ✧ Monitor operation, measure success (accuracy) and improve models over time
  ▸ Challenges
    ✧ Too much data and hard to measure
    ✧ Rare events are important but hard to capture
    ✧ Cost… significant investment!
    ✧ Privacy - abstracting data

L9: TESTING
✶ Types of testing
◈ What is testing…
  ▸ Execution of code on sample inputs in a controlled environment
  ▸ Goals:
    ✧ Validate requirements
    ✧ Reveal failures and bugs
    ✧ Assess quality
    ✧ Verify contracts, clarify specification, documentation etc
◈ Software testing
  ▸ Execute the program using data simulating user input
  ▸ Observe behaviour - you can infer the program will behave correctly if the inputs are representative of the larger set
  ▸ If behaviour doesn't match expectations → program bugs
    ✧ Programming errors
    ✧ Understanding errors
◈ Types of testing
  ▸ Functional - functionality of the overall system, discover as many bugs as possible
    ✧ Large set of program tests
    ✧ Unit → feature → system → release testing
  ▸ User - test usability by end users, show that features help users do what they want to do, and that users understand them
  ▸ Performance and load - software works quickly and can handle the expected load; response and processing times are acceptable
  ▸ Security - maintains integrity and protects user info
◈ Unit testing
  ▸ Principles
    ✧ Aims to test individual program units in isolation
    ✧ General assumption: if a program unit functions correctly for a set of inputs with common characteristics, it will function similarly for a broader set
    ✧ Crucial to identify sets of inputs (EQUIVALENCE PARTITIONS) expected to be handled similarly by the code
    ✧ Equivalence partitions should encompass both valid and invalid inputs
  ▸ JUnit - popular unit testing framework for Java
    ✧ User friendly, strong tool support (Maven, Gradle)
    ✧ Serves as a design mechanism
  ▸ Basic test elements
    ✧ Input and output
    ✧ Environment, harness (mechanism that initiates execution of the test case), and oracle - a way to determine test success
◈ Characteristics of good unit tests
  ▸ Small, fast, deterministic, allowing devs to execute them often
  ▸ Easy to write during code dev
  ▸ High test coverage
  ▸ Easy to understand failure reasons due to simplicity and focus
  ▸ Serve as documentation and examples
  ▸ Ideally 80% of tests should be unit tests
◈ Avoiding brittle tests
  ▸ Unchanging tests - tests should ideally remain unchanged over time
  ▸ Testing through public APIs: tests rely on public interfaces for interaction
  ▸ Testing state, not interactions: focus should be on system state after an action, not specific interactions within the code
◈ Writing clear tests
  ▸ Complete and concise
  ▸ Test behaviours, not methods
  ▸ No logic in tests - avoid complex logic within tests
  ▸ Clear failure messages
◈ Blackbox/specification-based testing
  ▸ Test cases designed based on equivalence classes derived from specifications
  ▸ Assumption: if a test passes for one value in an equivalence class, it will pass for all values in that class
  ▸ Systematically derived from the specification
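To illustrate equivalence-partition-based unit testing with JUnit 5, here is a small sketch. PasswordValidator and its rule (minimum eight characters) are hypothetical, not part of the course material; each test exercises one partition through the public API and has a clear failure message.

    // One test per equivalence partition: too short, acceptable length, null.
    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class PasswordValidatorTest {

        @Test
        void rejectsPasswordsShorterThanEightCharacters() {
            assertFalse(PasswordValidator.isValidPassword("abc123"),
                    "passwords under 8 characters should be rejected");
        }

        @Test
        void acceptsPasswordsOfAcceptableLength() {
            assertTrue(PasswordValidator.isValidPassword("correct-horse-battery"),
                    "a reasonably long password should be accepted");
        }

        @Test
        void rejectsNullInput() {
            assertFalse(PasswordValidator.isValidPassword(null),
                    "null is an invalid-input partition, not an error");
        }
    }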
◈ Boundary value testing
  ▸ Focuses on testing cases at the boundaries of the equivalence classes defined in the specifications
  ▸ Involves testing with inputs just inside, just outside, and exactly at the boundary
◈ Unit testing guidelines
  ▸ Test edge cases
  ▸ Force errors
  ▸ Fill buffers (test with inputs causing buffer overflows)
  ▸ Repeat yourself…
  ▸ Overflow and underflow (really small or large numbers)
  ▸ Don't forget null and zero
  ▸ Keep count (track element counts in lists)
  ▸ One is different (test sequences with only a single element)
◈ Test design principles
  ▸ Use only public APIs
  ▸ Clearly distinguish between inputs, config, execution, and the oracle
  ▸ Keep tests simple, avoid complex control flow
  ▸ Design tests that don't require frequent changes
◈ Anti-patterns in testing (a flaky-test sketch follows the security testing notes)
  ▸ Snoopy oracles
    ✧ Relying on internal implementation details instead of observable behaviour
  ▸ Brittle tests
    ✧ Overfitting tests to specific behaviour rather than general principles
  ▸ Slow tests
    ✧ Take too long to run
  ▸ Flaky tests
    ✧ Produce inconsistent pass/fail results for the same input e.g. random inputs, timing issues
◈ Feature testing
  ▸ Purpose
    ✧ Involves evaluating individual features within a software system
    ✧ Ensures the functionality of the feature is implemented correctly + meets user needs
  ▸ Types of feature tests:
    ✧ Interaction tests: ensure proper interaction between the units in the feature
    ✧ Usefulness tests: verify whether the feature meets user needs
◈ E.g. sign in with Google feature
  ▸ User stories describe the desired functionality
  ▸ Specific tests designed based on user stories to verify feature aspects e.g.
    ✧ Correct display/functionality of the login screen
    ✧ Handling of incorrect credentials
    ✧ Info sharing with Google
    ✧ Email opt-in choice
◈ System and release testing
  ▸ System testing:
    ✧ Focus on testing the entire system as a whole rather than individual features
    ✧ Goals include…
      ↪ Discovering unexpected interactions between features
      ↪ Ensuring they work together effectively
      ↪ Validating system operation
      ↪ Testing quality attributes e.g. security
  ▸ Scenario-based testing:
    ✧ Systematic approach to testing - starting with scenarios describing possible system uses
    ✧ Used to identify end-to-end pathways
      ↪ E.g. user inputs departure airport and chooses to see all flights. User quits.
  ▸ Release testing
    ✧ Specifically for a system being prepared for release to customers
    ✧ Key differences from general system testing:
      ↪ Takes place in the real operational environment, with more complex and less reliable user data
      ↪ Goal is to assess whether the system is suitable for release, not just to find bugs
      ↪ Minor issues may be ignored if they have negligible impact
◈ Security testing
  ▸ Purpose:
    ✧ Aims to identify vulnerabilities in the system that attackers could exploit
    ✧ Objective → demonstrate the system's ability to resist attacks, malware injection, and compromise of user data
  ▸ Risk-based security testing
    ✧ Identify common security risks, develop tests specifically targeted at these
    ✧ Automated tools scan the system for vulnerabilities
    ✧ Tests based on identified risks
    ✧ Manual inspection of system behaviour and files
  ▸ Risk analysis
    ✧ Once a security risk is identified, it is analysed to determine how it could be exploited
    ✧ E.g. unauthorised access risks assessed by considering factors like weak passwords, no two-factor authentication, social engineering attacks
    ✧ Tests developed to check for these vulnerabilities
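As promised above, here is a sketch of the flaky-test anti-pattern and a deterministic rewrite. The Session class and its timing rule are assumptions made up for the example; the point is that controlling time explicitly removes the dependence on scheduling and wall-clock behaviour.

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.time.Instant;
    import org.junit.jupiter.api.Test;

    class SessionTimeoutTest {

        // FLAKY: under load the sleep can overshoot, so this sometimes fails for the same code.
        @Test
        void flakyVersion() throws InterruptedException {
            Session session = new Session(Instant.now());
            Thread.sleep(30);
            assertTrue(session.ageMillis(Instant.now()) < 50);
        }

        // DETERMINISTIC: the test supplies the timestamps, so the outcome never varies.
        @Test
        void deterministicVersion() {
            Instant start = Instant.parse("2024-01-01T00:00:00Z");
            Session session = new Session(start);
            assertTrue(session.ageMillis(start.plusMillis(30)) >= 30);
        }

        // Minimal class under test, included so the sketch is self-contained.
        static class Session {
            private final Instant createdAt;
            Session(Instant createdAt) { this.createdAt = createdAt; }
            long ageMillis(Instant now) { return now.toEpochMilli() - createdAt.toEpochMilli(); }
        }
    }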
◈ White box/structural testing
  ▸ Aims to design test cases that execute various program elements e.g. functions, statements
  ▸ Underlying principle = unexecuted code cannot be tested for bugs
◈ COVERAGE != COMPLETENESS
◈ Mutation testing
  ▸ Nice idea, several limitations
    ✧ Equivalent mutations
    ✧ Needs pretty complete test oracles
    ✧ Expensive to run
  ▸ Test oracles????
    ✧ Obvious in some apps e.g. sort(), but more challenging in others e.g. encrypt(), UI-based tests
    ✧ Lack of good oracles can limit scalability
◈ Property-based testing (a sketch appears after the automated feature testing notes)
  ▸ Intends to validate invariants that are always true of the computed result
    ✧ E.g. assert output.size() == input.size()
    ✧ rev(rev(list)).equals(list)
  ▸ Can easily scale testing to very large data sets without hard-coding expected outputs completely
◈ Differential testing
  ▸ Two implementations of the same specification; output should match on all inputs
◈ Regression testing
  ▸ Differential testing through time or versions
  ▸ Assuming V1 and V2 don't add a new feature or fix a bug, then f(x) in V1 should give the same result as f(x) in V2
✶ Test automation
◈ Test-driven development (TDD)
  ▸ Agile dev technique → tests are written before the code they are designed to test
  ▸ Claims:
    ✧ Encourages testable design
    ✧ Focus on interfaces
    ✧ Reduces unnecessary code
    ✧ Improves product/test suite quality and overall productivity
  ▸ Common bar for contributions:
    ✧ Chromium
      ↪ Changes require corresponding tests
    ✧ Docker conventions
      ↪ Fork the repo and make changes in a feature branch of the fork
    ✧ Firefox
      ↪ Automated tests by default
◈ Stages of TDD
  ▸ Identify a partial implementation
    ✧ Break down functionality into mini units
  ▸ Write mini unit tests
  ▸ Write a code stub that will fail the test
  ▸ Run all existing automated tests
    ✧ All previous tests pass; the incomplete code should fail
  ▸ Implement code that should cause the failing test to pass
  ▸ Rerun all automated tests
  ▸ Refactor code if necessary
◈ Benefits of TDD
  ▸ SYSTEMATIC APPROACH, TESTS CLEARLY LINKED TO SECTIONS OF CODE
    ✧ Confident that tests cover all code
  ▸ Test = written specification
  ▸ Debugging simplified
    ✧ Immediately link a program failure to the last increment of code added
  ▸ TDD → simpler code
◈ Regression testing
  ▸ Usual model:
    ✧ Introduce regression tests for bug fixes
    ✧ Compare results as code evolves
    ✧ Benefits
      ↪ Ensure bug fixes remain in place
      ↪ Reduce reliance on specifications
◈ Testing levels…
  ▸ Unit testing
    ✧ Are functions implemented correctly?
    ✧ No complex environment setup needed
  ▸ Integration testing
    ✧ Do components interact correctly?
  ▸ System testing
    ✧ Validating the whole system end-to-end
    ✧ Selecting items from menus, screen selections, inputting keyboard info
    ✧ Look for interactions between features that may cause problems
    ✧ Manual system testing is boring…
  ▸ Testing in production
    ✧ Real data, but risky…
◈ Automated tests
  ▸ Structured into three parts:
    ✧ Arrange
      ↪ Set up the system
      ↪ Define parameters
    ✧ Action
      ↪ Call the unit being tested
    ✧ Assert
      ↪ What should happen
◈ Automated feature testing
  ▸ Users access features through the GUI
  ▸ But… GUI testing is expensive to automate - best to design the system so features can be accessed through an API, not just through the user interface
  ▸ Feature tests can then access features through the API
    ✧ Can reimplement the GUI without changing the functional software components
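As flagged above, here is a plain-Java sketch of property-based testing using the rev(rev(list)).equals(list) invariant from the notes. A real project would more likely use a dedicated framework; the generator and trial count here are arbitrary illustrative choices.

    // Generate many random lists and check the invariant; no expected outputs are hard-coded.
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Random;

    public class ReversePropertySketch {
        public static void main(String[] args) {
            Random random = new Random(42); // fixed seed keeps the run deterministic
            for (int trial = 0; trial < 1_000; trial++) {
                List<Integer> input = randomList(random);

                List<Integer> twiceReversed = new ArrayList<>(input);
                Collections.reverse(twiceReversed);
                Collections.reverse(twiceReversed);

                // The property: reversing twice must give back the original list.
                if (!twiceReversed.equals(input)) {
                    throw new AssertionError("Property violated for input: " + input);
                }
            }
            System.out.println("Property held for 1,000 random lists");
        }

        static List<Integer> randomList(Random random) {
            int size = random.nextInt(20); // includes the empty-list edge case
            List<Integer> list = new ArrayList<>();
            for (int i = 0; i < size; i++) list.add(random.nextInt());
            return list;
        }
    }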
✶ Limitations of testing
◈ Value of testing
  ▸ Ensure software meets requirements and is correct
  ▸ Prevent bugs/quality degradation/unexpected behaviour
  ▸ Increased confidence in change
  ▸ Increase code maintainability - tests are documentation and a check on software design
◈ Sommerville's reasons for not using TDD
  ▸ Discourages radical program change
  ▸ Focused on tests rather than the problem you are trying to solve
  ▸ Spend too much time thinking about implementation details rather than the programming problem
  ▸ Hard to write 'bad data' tests
    ✧ A lot of problems involve messy and incomplete data - unpredictable
◈ Limitations of testing…
  ▸ What can we not easily test for???
    ✧ Can't prove correctness or other quality attributes
      ↪ Only measures a sample
    ✧ Only as good as the tests you write
    ✧ Does not validate requirements
  ▸ Only shows the presence of bugs - not their absence
  ▸ No formal assurances
  ▸ Writing tests is time consuming and hard
  ▸ Executing them is expensive
◈ Code coverage
  ▸ Beware of coverage chasing
    ✧ Numbers are deceptive: 100% coverage != exhaustively tested
    ✧ Coverage is not strongly correlated with effectiveness
    ✧ Good low bar
  ▸ Coverage != outcome
✶ Fuzzing
◈ Concept
  ▸ Testing technique where you provide invalid or random data as inputs
  ▸ Used to find security vulnerabilities
◈ Automatic oracles: sanitisers
  ▸ Tools that help detect memory errors during fuzzing
  ▸ AddressSanitizer (ASAN): detects various memory errors
  ▸ LeakSanitizer: detects memory leaks
  ▸ ThreadSanitizer (TSAN): data races in multithreaded code
  ▸ UndefinedBehaviorSanitizer (UBSAN): detects undefined behaviour
◈ Fuzzing strengths & limitations
  ▸ Strengths
    ✧ Easy and cheap to generate random inputs
    ✧ Effective at uncovering security vulnerabilities and crashes
  ▸ Limitations
    ✧ Random inputs are often not meaningful - difficult to trigger specific program behaviour
    ✧ May take a long time to find bugs
◈ Types of fuzzing:
  ▸ Mutation based
    ✧ Start with a valid input (seed) and apply mutations to create variations
    ✧ Can involve bit flips, insertions, setting values to special cases
  ▸ Coverage-guided fuzzing
    ✧ Uses feedback from code coverage to guide mutation
    ✧ Favours mutations leading to new code paths
    ✧ American fuzzy lop (AFL) is a popular coverage-guided fuzzer
◈ Applications of fuzzing
  ▸ ClusterFuzz: large-scale Google fuzzing infrastructure used to test Chrome
  ▸ Unit testing: generate random data for specific data types
  ▸ Generators and mutators: tools used to generate random data and mutate existing data
✶ Performance testing
◈ Goals:
  ▸ Identify performance bugs e.g.
    ✧ Unexpected degradation for specific inputs
    ✧ Decline over time
    ✧ Discrepancies across versions
◈ Challenges
  ▸ Establishing a performance oracle (a naive sketch follows)
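A naive sketch of a performance oracle: time a batch of calls and fail if it exceeds a budget. The 50 ms budget and workUnderTest are arbitrary illustrative assumptions; real performance testing needs careful warm-up, repetition, and statistical treatment of the results.

    import java.util.concurrent.TimeUnit;

    public class PerformanceSmokeTest {
        public static void main(String[] args) {
            // Warm up so JIT compilation doesn't dominate the measurement.
            for (int i = 0; i < 10_000; i++) workUnderTest();

            long start = System.nanoTime();
            for (int i = 0; i < 1_000; i++) workUnderTest();
            long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);

            // Crude "performance oracle": fail if the batch takes longer than the chosen budget.
            if (elapsedMs > 50) {
                throw new AssertionError("Performance regression: took " + elapsedMs + " ms");
            }
            System.out.println("Batch completed in " + elapsedMs + " ms");
        }

        static void workUnderTest() {
            // Placeholder for the operation whose latency is being checked.
            Math.sqrt(12345.678);
        }
    }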
