Testing → unit, integration, end-to-end, smoke, regression

Tests are split into categories based on their complexity and their role in the development process:
- Unit tests: exercise individual functions in isolation, mocking any dependencies required (e.g. network requests)
  ○ Usually quick to write; their only goal is to ensure each function satisfies its contract
  ○ Any bugs caught by unit tests are usually easier to pinpoint and address
- Integration tests: exercise groups of components to ensure correct interaction between individual units
  ○ Involve much larger portions of the CUT (code under test), so they can be prone to false positives, and the bugs they detect can be harder to narrow down
- End-to-end tests (i.e. acceptance tests): validate the entire customer experience; the most useful form of testing for quality issues (e.g. performance, usability), since these are difficult to test in isolation
  ○ Fantastic for ensuring the system behaves correctly, but very difficult to use when narrowing down bugs
- A|B tests: used in a production environment to determine whether version A or version B of a program performs better at runtime
  ○ Not used to test correctness; instead used to support business decisions
- Smoke/canary tests: a subset of the test suite designed to run very quickly, be highly reliable, and have high effectiveness
  ○ Attempt to reveal faults as quickly as possible when changes are made to the CUT
  ○ Make it possible to avoid running integration or end-to-end tests on code that is known to be faulty or unimplemented
- Regression tests: a set of tests run to ensure that changes to the code do not fail previously-passing tests (i.e. the code doesn't regress)
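To make the unit-test category concrete, here is a minimal sketch of a unit test that mocks a network dependency. The `PriceClient` interface and `getDiscountedPrice` function are hypothetical names invented for illustration; the test uses mocha with Node's built-in assert, since mocha is one of the testing tools named later in these notes.

```typescript
import assert from "node:assert";
import { describe, it } from "mocha";

// Hypothetical dependency: in production this would issue a network request.
interface PriceClient {
  fetchPrice(sku: string): Promise<number>;
}

// Unit under test: applies a discount to a fetched price.
async function getDiscountedPrice(client: PriceClient, sku: string, pct: number): Promise<number> {
  const base = await client.fetchPrice(sku);
  return base * (1 - pct);
}

describe("getDiscountedPrice", () => {
  it("applies the discount to the fetched price", async () => {
    // Mock the dependency so no real network request is made.
    const mockClient: PriceClient = { fetchPrice: async () => 100 };
    const price = await getDiscountedPrice(mockClient, "sku-1", 0.25);
    assert.strictEqual(price, 75); // contract: 25% off 100 is 75
  });
});
```

Because the network is mocked, a failure here points directly at `getDiscountedPrice` itself, which is exactly why unit-test bugs are easier to pinpoint.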
Process → waterfall, agile, scrum

Definition: a structured set of activities for developing a software system → who, what, when, and how to attain goals
○ Describes the product + stakeholders
○ Adopted early to avoid software project risks
ROLE: increase transparency about the software development process so that all stakeholders know what is required of them and what they can expect from others

Risks teams face + mitigation strategies
Software project risks:
○ User: resistance to change; conflicts between users; negative attitudes
○ Requirements: changing, inadequately identified, unclear, incorrect
○ Project complexity: new tech, immature tech, first use of tech
○ Planning & control: poor process; inadequate estimation; poor planning; unclear milestones; inexperienced PM; ineffective communication
○ Team: lack of experience, training, specialized skills, or experience working as a team
○ Organizational environment: change of management during the project; unstable org; ongoing restructuring
Mitigations:
○ Processes are essential de-risking mechanisms to help everyone on a team work together effectively
○ Teams use processes to provide clarity about how decisions get made + how the software will evolve

Process phases → GOAL: clear steps, produce tangible items, allow review of work, specify actions to perform next
Requirements elicitation → Architectural design → Detailed design/SPEC → Implementation → Integration → Testing → Deployment → Maintenance

Waterfall: each project phase flows into the next one, with explicit stakeholder sign-off before the next phase begins
Key features:
○ Sequential phases; clear + specific handoffs between stakeholders
○ Exit criteria → before a phase can be considered complete, an explicit set of exit criteria must be validated
Waterfall phases:
- Requirements (18 months)
  ○ Soliciting customer feedback; creating high-fidelity mockups; validating reqs
- Design (12 months)
  ○ Components, classes, methods, fields
  ○ Deriving a high-level architectural description + all design info + documentation
- Implementation (18 months)
  ○ Constructing system components that match the design and perform as specified in the reqs
- Verification (12 months)
  ○ Takes the SPEC + makes a test plan to verify implementation correctness
  ○ Ensures the overall system performs as specified in the reqs
- Maintenance (15+ years after v1)
  ○ Evolves the system over time
  ○ Keeps the system going amid OS/security/device changes
Waterfall shortcomings:
- Reqs are often imperfectly understood
- Overall value cannot be validated until the process has run to completion
- Resistant to change + hard to revisit earlier phases
- Struggles to adapt to changes in business needs/priorities during the project
Waterfall is well-suited for long-term projects where the objectives are clearly outlined from the beginning

Spiral → tries to increase the responsiveness of waterfall-based processes
- Revisits each system phase on each iteration
- Phases: Planning (gather + analyze reqs), Risk Analysis, Engineering (the system is built + validated), Evaluation (the system is validated externally with customers to inform future iterations)
Compared to Waterfall, Spiral…
- PROS: greater sensitivity to changes in reqs; keeps customers involved → avoids situations where the team builds the wrong software
- CONS: the overhead of performing effective risk analysis + review can be overwhelming; large delays in customer feedback

AGILE
- Arose in response to concerns that dev teams weren't able to rapidly + flexibly produce software systems
- Automation made it possible to develop + release software in smaller, more frequent iterations
- Used by teams who need to frequently deploy their systems (e.g. continuous deployment techniques)
Agile Manifesto:
○ Individuals + interactions over processes + tools
○ Working software over comprehensive documentation
○ Customer collaboration over contract negotiation
○ Responding to change over following a plan
GOAL: decrease the amount of time devs spend building the wrong thing + let the engineering team try out different design alternatives to see what works best for clients
Agile process: TDD
- Emergent design: build a minimum viable product → identify duplication + introduce abstractions to remove the duplication
- Refactor code; an architectural spike must come first
Agile benefits: increased customer interaction; focus on experimentation; the system is always in a buildable state; more flexibility

Extreme Programming (XP) → a type of agile
- The system is always buildable → testable + always ready for deployment
- Aims to improve software quality and responsiveness to changing customer requirements
- Devs should be willing to start small + adapt + refactor systems
XP principles:
- Communication: with stakeholders; helps projects stay on track + schedule decisions
- Simplicity: focus on the simplest solution so engineers can validate their work with customers before tackling more expensive solutions
- Feedback: from tests, customers, and teams → more knowledge to ensure the most locally correct decisions are being made
- Courage: be willing to discard failed experiments (not sunk cost); an opportunity to learn + improve the system
- Respect: don't break the build; focus on long-term understandability

Spikes: short, intense activities preceding development iterations; provide insight for decision-making
- Architectural spike: when the product is being devised, decide on high- and medium-level architectures
  ○ E.g. tech stack, collaborators, subcomponents of major components (client, server)
- User Interface spike: before any UI development, decide on look + feel, UI framework, and a plan for UI expansion
  ○ Lightweight prototyping + setting the stage for design from a user POV

TDD
- Ensures automated testing is always a key part of development
- Important b/c agile relies on quickly refactoring code
- Not often used in practice, as devs have a hard time writing tests before source code
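As a hedged illustration of the test-first workflow TDD describes, the hypothetical `slugify` function below exists only because its test was written first and failed; the names and steps are invented for this sketch.

```typescript
import assert from "node:assert";
import { describe, it } from "mocha";

// Step 1 (red): write the test first; it fails until slugify is implemented.
describe("slugify", () => {
  it("lowercases and hyphenates a title", () => {
    assert.strictEqual(slugify("Hello TDD World"), "hello-tdd-world");
  });
});

// Step 2 (green): write the simplest implementation that passes the test.
function slugify(title: string): string {
  return title.trim().toLowerCase().split(/\s+/).join("-");
}

// Step 3 (refactor): clean up duplication while the test keeps the behaviour honest.
```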
SCRUM: an incremental, iteration-based methodology that breaks work into fixed-length sprints (typically 1–4 weeks)
- At the end of each sprint, the code should be shippable
- Scrum teams work in a series of sprints → no new reqs during a sprint
- The dev team commits to implementing the work items (user stories) in one sprint
- A scrum/agile board is used to track progress
- Unfinished work items are moved back to the product backlog

SCRUM ROLES
- Product Owner: defines the features of the product; prioritizes features according to market value; adjusts features + priorities in every iteration, as needed
- Scrum Lead: facilitates the scrum process; resolves problems; shields the team from external interferences; NOT a manager
- Team: self-organizing, self-managing, cross-functional; devs, designers, managers, clients; 7 (+/-) people

SCRUM ARTIFACTS
- Product Backlog: prioritized list of product backlog items (PBIs); PBIs specify a customer-centric feature (user story form); effort estimated by the Team, priority estimated by the Product Owner
- Sprint Backlog: the list of user stories negotiated by the team + product owner from the Product Backlog for one sprint; negotiated PBIs are broken down into specific tasks
- Burndown chart: total remaining team task hours within one sprint

SCRUM CEREMONIES
Sprint planning - occurs before the sprint starts
○ What user stories will be included in the sprint? Work items are pulled from the product backlog
○ The product owner presents the highest-priority user stories from the product backlog
○ After priorities are specified, the tech team estimates how long each task will take
○ The team decides which tasks to include in the sprint + which to defer to future sprints
Standup meeting ~ 15 min (daily)
○ Held at the same time + place each day, usually in the morning; each member answers:
  - What did you complete yesterday?
  - What will you work on today?
  - What is blocking your progress?
○ An effective way for the Scrum Master to track team progress
○ Only team members involved in the technical challenges of the system speak in the scrum meeting (others may attend)
○ Lets the team be aware of what everyone is working on, to align work for the upcoming day and minimize teammates being blocked by each other
○ Not a problem-solving session; not designed to blame whoever is behind schedule
Sprint review ~ less than 2 hrs (everyone is invited), at the end of the sprint
○ After each sprint, the team holds a review meeting to demonstrate their feature to the product owner
○ This is possible b/c the output of each sprint is supposed to be a potentially shippable product
○ The team evaluates how much progress has been made on the sprint backlog
Retrospective, at the end of the sprint
○ Reflect + identify activities the team should start/stop/continue doing
○ Make sure the process is providing value for the team + find opportunities to improve + refine the activities the team is performing
Note: review + retrospective can be one meeting or two separate meetings
Strength of Scrum (relative to prior methodologies): drives teams to deliver small quanta of software → GOAL = deliver after every sprint
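A minimal sketch of the data behind a burndown chart, assuming a hypothetical task list with per-task remaining hours; real scrum tooling derives this automatically, so this is only meant to show what the chart plots.

```typescript
// Hypothetical task shape: remaining estimated hours, updated daily.
interface Task { name: string; remainingHours: number; }

// A burndown chart plots the sum of remaining task hours for each day of the sprint.
function remainingWork(tasks: Task[]): number {
  return tasks.reduce((total, t) => total + t.remainingHours, 0);
}

// Example: one data point per day; the series should trend toward 0
// by the end of the sprint if the team is on schedule.
const day3: Task[] = [
  { name: "parser", remainingHours: 4 },
  { name: "db setup", remainingHours: 6 },
];
console.log(remainingWork(day3)); // 10 hours left on day 3
```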
KANBAN: a pull-based software process that emphasizes continuous delivery + flexibility
- No sprints; a continuous process without a sprint backlog
- Heavy use of agile boards to visualize + track work as it happens
- Pull system
  ○ Each column on the board has a work-in-progress (WIP) limit related to the team's capacity
  ○ "Keep WIP under control" → reduces the chances of the team being overworked or work not getting pushed to completion
  ○ Each task is only moved off a column when it is completed
- Columns
  ○ Backlog (all available work) ← ToDos
  ○ Doing (the work that is currently being performed)
  ○ Review (work that is ready for sharing + review by the team)
  ○ Done (work that has been reviewed + completed)
  ○ Note: some teams combine Doing + Review
- Kanban arose from teams who felt agile was helpful for productivity + quality, but felt constrained by the rigid-feeling aspects of scrum, with its explicit schedules, planning, and ceremonies
KANBAN vs Scrum
- No specific release dates → releasing is the team's decision
- No sprint planning or reviews, but daily standups are still included
- Used for more rapid releases than Scrum
- Less rigid ceremonies, emphasizing the continuous development of features
  ○ Any time an item moves to the Done part of the agile board, a Kanban-based team may ship the feature, without having to wait for a sprint to end
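To make the pull system concrete, here is a minimal sketch of a board that refuses to pull a task into a column that is at its WIP limit. The column names match the notes above; the specific limits and task names are assumptions for illustration.

```typescript
// Assumed columns and per-column WIP limits (Backlog is intentionally unlimited).
type Column = "backlog" | "doing" | "review" | "done";
const wipLimits: Partial<Record<Column, number>> = { doing: 3, review: 2 };

const board: Record<Column, string[]> = {
  backlog: ["task-a", "task-b", "task-c", "task-d"],
  doing: [],
  review: [],
  done: [],
};

// Pull a task into the next column only if that column is under its WIP limit.
function pull(task: string, from: Column, to: Column): boolean {
  const limit = wipLimits[to];
  if (limit !== undefined && board[to].length >= limit) {
    return false; // column is full: finish existing work before pulling more
  }
  board[from] = board[from].filter((t) => t !== task);
  board[to].push(task);
  return true;
}

pull("task-a", "backlog", "doing"); // true: "doing" has capacity
```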
_____________________________________________________________________________________________
Specification/Requirements (User Stories)

User stories: lightweight descriptors of features used to specify software development tasks
○ Used to discuss features, how they can be validated, and their costs
○ An easy way to increase cohesion between product owners + engineers
Format ROLE-GOAL-BENEFIT (RGB): "As a <role>, I want <goal> so that <benefit>"
○ NOT RGB → "implement…", "define the database schema", "automate the algorithm", "refactor code to make it more readable" ← these are engineering tasks

Definition of Done (acceptance criteria) → a specific description of how the story can be validated by both the developer + product owner to ensure it is completed correctly ← this is the solution to the RGB
○ Helps the dev create the correct feature + avoid working on extra functionality the stakeholder might not need
○ The DoD is user level; these are your contracts with your clients; a client-oriented solution domain → should not mention code
○ The DoD derives test suites
○ Important so that all stakeholders understand the ways the story will be evaluated
○ Brings concerns to the forefront before development starts + encourages features to be built in a verifiable way

USER STORY EXAMPLE
RGB: As a shopper, I want to be able to buy something and then see it in my purchased list so that I can spend money on the site
Definition of Done: The user clicks the buy button, the item appears in their purchased list, it is shipped to their home, and the user sees the money deducted from their account

BAD USER STORY + DoD EXAMPLE
RGB: As a buyer, when I'm told that I'm not approved for purchase by the system, I want to be able to click "request approval" (solution domain!) and then receive confirmation that the approval request has been sent. (NO BENEFIT! How valuable is this?)
Definition of Done: The user sees "not approved" and clicks "request approval", and this triggers a React function which makes the user's ID appear in the list of approval requests, and an email is sent back to the user saying that their request is in processing ← "React function" is code, which a DoD should never mention
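Since the notes say the DoD derives test suites, here is a hedged sketch of an acceptance-style test derived from the shopper story's DoD. The in-memory `shop` object is a hypothetical stand-in; a real acceptance test would drive the actual UI (e.g. with a browser-automation tool), and the shipping clause is omitted since it isn't observable from the UI alone.

```typescript
import assert from "node:assert";
import { describe, it } from "mocha";

// Hypothetical in-memory stand-in for the real shop interface.
const shop = {
  balance: 100,
  purchases: [] as string[],
  async buy(item: string) { this.purchases.push(item); this.balance -= 25; },
  async purchasedList() { return this.purchases; },
  async accountBalance() { return this.balance; },
};

// Each assertion maps to a clause of the story's Definition of Done.
describe("DoD: shopper can buy and see the purchase", () => {
  it("buying an item updates purchases and deducts money", async () => {
    const before = await shop.accountBalance();
    await shop.buy("blue-widget");                                   // user clicks buy
    assert.ok((await shop.purchasedList()).includes("blue-widget")); // appears in purchased list
    assert.ok((await shop.accountBalance()) < before);               // money is deducted
  });
});
```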
Engineering tasks: useful for the dev team to keep track of how a feature interacts with the system or its subsystems
○ NOT from a user's POV; e.g. finish the parser, investigate a JSON library, set up the database, write mocks for testing
○ Based on these tasks, the devs estimate how much time the story will take
Estimating story points
○ A story point corresponds to an hour of dev work
○ Traditionally an estimate was made by a dev guessing (based on experience) how long a story would take them
○ Estimates are moving towards an ontology (e.g. using classifications rather than opaque experience); this is key, especially for young teams or new domains
○ Important to ensure teams can complete work; used to see if the team is ahead of/behind schedule; greater responsiveness + team awareness
○ The larger the story, the worse we will be at estimating it

Task → User story → Epics → Themes
- Themes group epics + describe an even higher-level objective
  ○ E.g. introduce tracking enhancements to our cycling app
- Epics group together related user stories → delivered over multiple sprints; group stories that share an overall goal
  ○ E.g. enhance the cycling app's GPS tracking functionality
- User stories provide a useful middle ground → good for individual features/fixes
  ○ A discrete product function that produces new value for the customer
  ○ E.g. As a cyclist, I want to track my rides in Google Maps and get directions simultaneously
- Task: a specific piece of technical work needed to complete a user story
  ○ E.g. enable an in-app alert for the new Google Maps integration

USER STORY EXAMPLE
RGB: As a prof, I want to create a repo for a 310 team so they can start working on the project
DoD:
- A single command will take params + complete the task
- Success should be programmatically verifiable
- Unit tests check error handling for org, team name, and members
- Integration tests ensure compatibility with the GitHub API
- Script-based tests will be provided for the command-line aspects of the feature
Engineering notes:
- MUST integrate with the existing GitHubManager
- Any constants that would need to be changed must be stored in config.js
- The API will be used by a user interface in the future; keep this in mind when designing the API
Estimate: 1.4 units

User stories relate the problem domain to the solution domain to capture client expectations
- Problem domain: RGB + DoD are written here (from the user's POV) → never written in the solution domain
- Solution domain: an engineering-forward view into how the story will be implemented
Agile approaches depend on user stories → if they are ill-defined or incomplete, the story will be built poorly + any time estimates are meaningless

INVEST → user story checklist
- Independent: user stories should not depend on the implementations of other user stories
  ○ Self-contained; can be reordered + implemented in any order
  ○ This does not mean unsequenced → user stories still need to build upon one another
  ○ Within a sprint, user stories should not be dependent on each other/blocked by one another
- Negotiable: there needs to be dialogue between the customer and the technical team to determine which user stories are selected for completion
  ○ Clients have to fully understand + critique how a feature will work
  ○ Clearly written; time estimates are present so a customer can decide whether the amount of time spent is worth it
- Valuable to users/customers: clear about how the story adds meaningful value to the product
  ○ Explicit discussion surrounding a user story's value to a customer → monetary or refactoring
  ○ Refactoring needs to be put into a story that has client value (or put into a Spike)
  ○ Technical debt (relevant to agile) ← e.g. refactoring
    - Design choices made in the interest of time/budget rather than for technical design justifications
    - Accrues over time + requires broad system restructuring to decrease the debt
    - Needs to be allocated on a different "budget" or put into an engineering task + revealed to the customer so they can negotiate whether it is valuable to them
    - Some technical-debt-related user stories can exist ("user" as a subsystem): "As the payment system, I need refactoring of the underlying code to facilitate the change to the costing algorithm"
- Estimable: user stories should be written precisely enough that a dev can estimate how long they will take (Fibonacci numbers, in terms of hrs; see the sketch after this list) → huge value to devs b/c projects can run overtime if requirements are vague
  ○ User stories make reasoning about tasks + maintaining schedules tractable ← if a user story can't be estimated, it won't work in practice
- Small: user stories should take between half a day and half an iteration (e.g. 1 week)
  ○ Split up longer stories so that everything can get completed during an iteration
  ○ Some teams strive to get size down to a single day → over-granulating can lead to undesired dependencies between stories
  ○ More opportunity for stakeholder engagement + incremental change
- Testable: this is how you know you're done + you've built what you said you were going to build
  ○ The DoD verifies the dev team has completed what the customer was looking for
  ○ Some things are not testable (e.g. "make the user happy")
  ○ Testability is directly related to having products that are easier to validate; it has little impact on forcing a process to be followed
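As a small illustration of the Fibonacci-based estimates mentioned under Estimable, the sketch below snaps a raw hour guess to the nearest Fibonacci bucket. The bucket set is an assumption, since teams pick their own scales; coarse buckets reflect the note that the larger the story, the worse we are at estimating it.

```typescript
// Common Fibonacci estimation buckets (in hours); teams choose their own scale.
const buckets = [1, 2, 3, 5, 8, 13, 21];

// Snap a raw estimate to the nearest bucket.
function toStoryPoints(rawHours: number): number {
  return buckets.reduce((best, b) =>
    Math.abs(b - rawHours) < Math.abs(best - rawHours) ? b : best
  );
}

console.log(toStoryPoints(6));  // 5
console.log(toStoryPoints(16)); // 13
```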
Requirements
Functional requirements: specify WHAT the system should do
- Requirements describe what a system is to do, but not how it is to be done
- Functional req: "a list should be sortable in O(n log n) time"
- "A list will be sorted using quicksort" ← NOT a req (it prescribes the how)
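To illustrate the what-vs-how split, here is a sketch of a contract-style test for the sortable-list requirement: it checks the observable outcome (sorted output, same elements) without caring which algorithm is used. `sortList` is a hypothetical name with a stand-in implementation; the O(n log n) bound itself would normally be argued by analysis or benchmarking rather than a unit test.

```typescript
import assert from "node:assert";
import { describe, it } from "mocha";

// Stand-in implementation: the requirement allows ANY O(n log n) sort,
// so the test must not depend on which one is chosen.
const sortList = (xs: number[]): number[] => [...xs].sort((a, b) => a - b);

describe("requirement: a list should be sortable", () => {
  it("returns the same elements in non-decreasing order", () => {
    const input = [5, 3, 8, 1, 3];
    const out = sortList(input);
    for (let i = 1; i < out.length; i++) {
      assert.ok(out[i - 1] <= out[i]); // WHAT: output is sorted
    }
    // ...and is a permutation of the input (nothing lost or invented).
    assert.deepStrictEqual(
      [...out].sort((a, b) => a - b),
      [...input].sort((a, b) => a - b)
    );
  });
});
```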
Quality attributes: properties that the product must have; usually described using adjectives; often strongly impact system success
- Often in conflict with one another:
  ○ Complexity vs performance
  ○ Usability vs scalability
- E.g. security (privacy, confidentiality), reliability (durability, recoverability), performance (scalability, capacity), legal (compliance, regulatory), usability (learnability, accessibility), others (affordability, debuggability, evolvability)
- Vary in who they are actually important to (e.g. the customer cares more about usability + performance, while the development team cares more about reliability)

OTHER REQS
- Design constraints: legal or field-specific standards
  ○ Conway's law → organizational structure is reflected in code structure (e.g. separate UI + Model teams)
- Environmental constraints → software systems don't exist in isolation
  ○ Ensure the new system will work with existing systems
  ○ Mandate the execution environment (OS, libraries, services) + operational expectations (expected input, performance constraints, disaster recovery)
- Preferences → rank reqs, should they come into conflict

Requirement engineering lifecycle
- Elicitation: the process by which reqs are gathered → sources = clients, users, observation, videos, docs, interviews
  ○ Ethical concerns are evaluated here
- Validation: have we elicited + documented the right reqs? Are they consistent?

Requirement size alignment
- Large reqs (comprehensive reqs/formal specs)
  ○ Needed where coding, compiling, and testing are expensive (e.g. aircraft, medical devices) → large docs, written formally, for situations where life is at risk
  ○ Systems applied to high-risk problems → specifying reqs is important
  ○ Problems: difficult for a client to play out the behaviour based on the description b/c it's too in-depth
- Medium reqs (use cases): arose when coding, compiling, and shipping became cheaper, but were NOT free
  ○ Problems: formal descriptions are difficult for clients to follow; contribute to the mismatch between client expectations + what the dev builds; intermingle the client + solution domains; are interconnected → refer to one another; weave together multiple stakeholders (requestor, buyer, vendor)
- Small reqs (user stories): arose when coding, testing, and shipping became essentially free → humans are the most expensive resource
  ○ PROS: clear link between the problem + solution domains, thanks to the DoD + succinct descriptions; decreased risk → the DoD makes sure we are building the right thing, and small increments mean failure is detected quicker; no hierarchy the way prior req styles had; a good way to distribute work across the team
- Pictorial reqs
  ○ Use case diagrams show the packaging + decomposition of use cases, not their content
  ○ Each ellipse is a use case → only top-level services should be shown, not internal behaviour
  ○ Actors can be in other systems (and can appear in other use case diagrams)
  ○ Not good enough by themselves → each use case must be individually documented

Reqs + Specs communicate the same thing, but to different audiences
Specifications: describe WHAT to do (NOT HOW) ← written for the engineering team (can vary in formalism)
- Connect customer + engineer; ensure parts of the implementation work together; define correctness of the implementation
- A system is worthless if it solves the wrong problem → a good SPEC is essential for a project to be successful
Writing SPECS is hard
- Challenge: translating informal (vague) reqs into the SE domain
  ○ Hard b/c of the gap between the abstraction of natural language + the precision required by an implementation
- From the technical view, specs should be…
  ○ Complete - if not, they can create misunderstanding
  ○ Consistent - if not, they make it hard to understand the right behaviour
  ○ Precise - if not, they further complicate understanding the intended behaviour
  ○ Concise - too much text provides space for imprecision/inconsistency
- Management view: balance what you want to have + what you can have in a system
  ○ Challenge: include all stakeholders, make decisions smoothly/rapidly, satisfy as many constraints as possible
_____________________________________________________________________________________________
Automation

Managing build + release tasks has historically taken a large amount of coordination → time-consuming + error-prone
- The faster these tasks are done, the quicker a developer gets feedback
Goals of automation
- Repeatability → the process of building software should not vary between different versions (excluding small improvements); people are less involved
- Reliability → the build process should not be subject to non-deterministic failures
- Reversibility → quickly revert out of any change
Automation ← all mature teams use automation to improve their process
- De-risks development through continual evaluation
- Facilitates rapid problem identification
- Eases rollback; past states are well understood
- Decreases resistance to release (individual steps well understood, possibility of rollback known)
- Increases team trust
- Small investments in automation almost always pay dividends in future time savings
What can be automated? (a fail-fast sketch follows this list)
- Source/version control - git, hg
- Dependency management - ant, nvm, npm, yarn
- Build/compilation tools - make, ant, gulp
- Analysis tools → code review to ensure changes do not introduce obvious errors or violate project/team coding guidelines
  ○ Lint, security checkers, checkstyle
- Testing tools → JUnit, NUnit, mocha
- (These first 5 steps = continuous integration)
- Test runners → test suites must run in a repeatable way; tests run on common infrastructure to increase the consistency between test runs for all devs → Jenkins, Bamboo, TravisCI
- Deployment → software must be deployed once it's built; deployment should be automated to reduce the chance of human error
Each step has a feedback loop back to the code change → includes changing metadata (build numbers, commit messages) + automation scripts
We can automate all the steps except the original code change
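As a minimal sketch of the feedback loop these steps form, the hypothetical runner below executes each automated stage in order and stops at the first failure, so a developer hears about a lint error before any expensive test or deploy step runs. The stage commands assume conventional npm scripts; real projects wire these stages into CI configuration instead.

```typescript
import { execSync } from "node:child_process";

// Hypothetical stage commands; real projects define these in CI config.
const stages: [string, string][] = [
  ["install", "npm ci"],
  ["lint", "npm run lint"],
  ["test", "npm test"],
  ["build", "npm run build"],
];

// Run stages in order, stopping at the first failure (fail fast = quick feedback).
for (const [name, cmd] of stages) {
  try {
    execSync(cmd, { stdio: "inherit" });
    console.log(`stage ${name}: ok`);
  } catch {
    console.error(`stage ${name}: FAILED - fix and re-run`);
    process.exit(1);
  }
}
```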
Continuous Integration (CI)
- Refers to the build + unit testing stages of the software release process: every committed revision triggers an automated build + test
- Works with programmers to ensure that code is high quality
- Automatically adds dependencies, checks for build issues, runs test suites
- Facilitates branch development
  ○ Devs can work in parallel to add functionality to different parts of the application
  ○ There is always the risk of merge conflicts or other conflicting changes (e.g. bug introduction)
Continuous delivery/deployment: changes that are pushed to the remote repo are also distributed to clients automatically (e.g. Chrome updates) → automated updates on client machines
- The main difference between continuous delivery and continuous deployment is whether the release is fully automatic, or whether there is a manual step where someone has to "push a button"
Continuous release pushes for automated, or very frequent, updates (on the scale of weeks or days) → gives devs a lot of feedback to work with
- Hosted software (SaaS: software as a service) can even be updated without any customers knowing

Semantic Versioning: MAJOR.MINOR.PATCH
○ Only integers, no leading zeros (unless the number is 0) → e.g. 1.0.2 is valid; 1.02.2 is invalid
○ MAJOR: incompatible changes in the API that may affect any client of the package; clients may fail to compile
○ MINOR: new features have been added, but these should not affect the compilation of existing clients; can introduce new APIs
○ PATCH → should never affect any clients; used to distribute bug fixes
○ When MAJOR is 0, treat the package as a prerelease: any version change = a potentially breaking change
○ Build metadata can be appended like 2.12.7+20251123, e.g. for documentation changes
Upgrading
○ Can always safely upgrade MINOR + PATCH
○ MAJOR upgrades are often not backward compatible, so they may be more risky
Downgrading
○ Downgrading PATCH is usually safe, as is MINOR (if you aren't relying on any newly-introduced APIs)
○ Downgrading MAJOR carries significant risk
○ Generally want to avoid downgrading MAJOR or MINOR versions
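A minimal sketch of parsing and comparing semantic versions under the MAJOR.MINOR.PATCH precedence rules above; build metadata after `+` is ignored for comparison. Real projects would typically use an existing semver library rather than hand-rolling this.

```typescript
interface SemVer { major: number; minor: number; patch: number; }

// Parse "MAJOR.MINOR.PATCH", ignoring any "+metadata" suffix.
function parse(version: string): SemVer {
  const [core] = version.split("+");
  const [major, minor, patch] = core.split(".").map(Number);
  return { major, minor, patch };
}

// Negative if a < b, zero if equal, positive if a > b (precedence order).
function compare(a: SemVer, b: SemVer): number {
  return a.major - b.major || a.minor - b.minor || a.patch - b.patch;
}

const current = parse("1.4.2");
const candidate = parse("2.0.0");
if (compare(candidate, current) > 0 && candidate.major > current.major) {
  console.log("MAJOR upgrade: review breaking changes before updating");
}
```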
Feature flags: enable DevOps engineers to dynamically turn different features on + off for different users
- Canary deployment: a new feature can be trialed on a small, targeted subset of users to gather data + ensure that it works in practice
- A|B testing: different versions of the same feature are simultaneously deployed so that the impact of the new version can be compared against the other version
Chaos tools: an extension of the automated tooling
- Intentionally stress-test software services (e.g. take them offline) to see how the teams + platform would recover
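A sketch of how a feature flag might gate a canary rollout, assuming a simple in-memory flag table and a deterministic hash so each user consistently lands in or out of the canary group; real systems use a flag service rather than this toy, and the flag name and percentage are invented for illustration.

```typescript
// Hypothetical flag table: each flag is rolled out to a percentage of users.
const flags: Record<string, { enabled: boolean; rolloutPct: number }> = {
  "new-checkout": { enabled: true, rolloutPct: 5 }, // 5% canary
};

// Deterministic hash so the same user always gets the same decision.
function bucket(userId: string): number {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) % 100;
  return h;
}

function isEnabled(flag: string, userId: string): boolean {
  const f = flags[flag];
  return !!f && f.enabled && bucket(userId) < f.rolloutPct;
}

// Callers branch on the flag; turning the feature off requires no redeploy.
if (isEnabled("new-checkout", "user-42")) {
  console.log("render the canary checkout flow");
} else {
  console.log("render the existing checkout flow");
}
```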
_____________________________________________________________________________________________
Ethics → IP

Dealing with data → often analyzed or sold to 3rd parties
- Anonymization is used to avoid reputational harm to the collecting org + harm to those whose data is collected → simple to perform, but has many shortcomings
Algorithmic bias
- Systematic + repeatable errors in a computer system that create 'unfair' outcomes (e.g. privileging one category over another in ways different from the intended function of the algorithm)
  ○ E.g. assigning prices, making shipping decisions, approving loans + mortgages, admitting people to schools, making parole decisions

Intellectual Property (IP): tangible expressions of intellectual/creative pursuits (e.g. inventions, designs, creative works) are treated in legal + social spheres as property, with all its attendant implications (e.g. ownership, use, economic transactions)
- Governed by copyright, trademark, patent, and trade secret law
- As an IP USER, I need to honour creators' rights + have limited rights to use existing work
- As an IP CREATOR, I have the right to profit + must honour the rights of other creators when I use their work
- As an EMPLOYEE, I have a responsibility to comply with IP laws + protect trade secrets

IP protection
- Trademark: differentiates a product/org in the marketplace
  ○ Registered on a country-by-country basis + costs a few $1000 per country to register
  ○ Can be renewed indefinitely
- Copyright: a mechanism to protect creative work (the representation of the work, not the idea)
  ○ E.g. protects source code instead of the function of the code; other people can write code that does the same thing
  ○ Lasts until 50 years after the death of the author
  ○ APIs are considered uncopyrightable
  ○ Copyright exceptions for software → backup copies are allowed; reverse engineering is allowed for the purpose of detecting security vulnerabilities, conducting encryption research, or improving interoperability between programs
  ○ Standard conventions, restricted implementations + obvious implementations CANNOT be protected with copyright
- Trade secrets: protect info using confidentiality agreements → no limit to how long trade secrets can survive
  ○ Releasing the compiled version of a program is not considered revealing the source code
- Patents: state-conferred (exclusive) rights to an inventor for their invention, for profit, for a given amount of time → 20 years of protection
  ○ Hard to claim that software is an invention, because an invention must be novel, non-obvious, and useful
  ○ Software = abstract idea → need to demonstrate it's unique vs all previous related software

Licenses: used with the other options to capitalize on IP
- Restrictive/Copyleft: generally require the same rights for derivative work, e.g. GPL
- Weak copyleft: enables portions of a system to be released under non-copyleft licenses
  ○ Specific exceptions for linking to libraries, e.g. LGPL
- Permissive: require only attribution, allowing non-derivative portions to remain proprietary, e.g. MPL/MIT/BSD
- Read: https://fossa.com/blog/all-about-copyleft-licenses
  ○ Copyleft → GPLv3 → programs can be used, shared, and modified, but all derivations must be licensed under GPLv3
    - E.g. modifying a GPLv3 library requires your application to also be GPLv3
  ○ Weak copyleft → LGPLv3 → similar to GPLv3, but only the marked libraries must be shared under LGPLv3; more restrictive licenses can be used elsewhere
  ○ MPLv2 → only modified files need to be shared (rather than the whole library)
  ○ CC-BY → a family of specialized licenses that creators can pick and choose from to fit their needs

IP protection protects the right to:
- Produce, reproduce, and imitate the original work
- Sell, rent, license, or distribute copies of the original work
- Allow others to use the work in derivative works in new ways
Creators can sue to stop infringers, for compensation of lost profits, and for earnings made by infringers

Tradeoffs between individual needs + business goals
- Balancing creator + public rights
  ○ Creator: time-bounded exclusive rights
  ○ Public: restricted rights for use of existing work in new creations
  ○ Some access is necessary for innovation
- Derivative works
  ○ IP policies strive NOT to inhibit future innovation
  ○ Allow improvements of existing inventions, new works that include portions of existing work, and innovative interpretations of existing work
