Quiz Topics Study Guide

Summary

This document explores the trustworthiness of AI systems and the "black box" problem. It introduces Von Eschenbach's two senses of trust and his case for focusing on socio-technical systems rather than AI systems themselves, and it examines Cappelen, Goldstein, and Hawthorne's three-premise argument that AI threatens humanity, along with the four "survival stories" that would block that argument.

Full Transcript

The Black Box Problem

With increasing automation of routine decisions, coupled with the more intricate and complex information architecture operating this automation, concerns are growing about the trustworthiness of these systems. AI systems that use deep learning operate in ways that are partially or wholly opaque to us: observers can witness the inputs and outputs of these complex, non-linear processes, but not the inner workings. More specifically, the Black Box problem revolves around trust: how can we trust AI systems when their mechanisms are opaque?

Von Eschenbach's Two Views of Trust

Von Eschenbach distinguishes two senses of trustworthiness:

1. In the first sense, we talk about trustworthiness when an individual is deliberating whether or not to trust another. Here one judges whether another person is trustworthy in the sense of able-to-be-trusted. The trustee's motivational states, interests, character, past performance, competency, and other personal characteristics all factor into the trustor's judgment. Trustworthiness in this sense is the judgment that trusting a person is fitting or appropriate given the circumstances.

2. We also talk about trustworthiness in the sense of the trustee's responsiveness to the trust placed in her. Having been entrusted provides the trustee with reasons or motivations to be responsive in the appropriate way. In these cases, the trustee is making herself trustworthy by being responsive to trust.

On this account, A trusts B to do X only if A judges B to be trustworthy, where "trustworthy" means that A has good reason to believe that B is competent in doing X. But everyday transactions with others, especially those about whom we know very little, can be cases of reliance rather than trust: the barista at a local coffee shop is competent in fulfilling orders accurately and quickly. Competence is therefore a necessary condition for trust but not a sufficient one, since cases of non-trusting reliance also satisfy it.

Why Von Eschenbach Holds That We Should Focus on Socio-Technical Systems Rather Than AIs Themselves

A socio-technical system treats technology as more than "a collection of devices intermediating between their designers on one hand and their users on the other"; technology is understood as a hybrid of the technical and the social. Treating technology in this way is not only more faithful to the phenomenology of using technology in everyday life, it also avoids the philosophical problems associated with attributing moral properties to artifacts and with limiting trust strictly to persons.

Deep learning (DL) systems used to diagnose and prognosticate the presence and progression of disease, such as cancer, operate within a socio-technical system that includes the doctor, patient, technicians, hospital administrators, and health insurance companies, as well as AI designers, operators, and AI tools. Though each has a different role, level of understanding of AI, and potential for "opening" the black box, all share the common goal and interest of treating disease in the most efficient and effective manner possible. Trust with respect to technology, therefore, can only be understood in reference to the system as a whole, and each agent's trustworthiness will be judged relative to differences in roles, interests, and expertise.
This also resolves two worries:
1. We are holding AI to too high a standard.
2. Efforts to make AI transparent confuse explaining results with explaining processes.

Cappelen, Goldstein, and Hawthorne's Three-Premise Argument for AI Threat

1. AI systems will become extremely powerful.
2. If AI systems become extremely powerful, they will destroy humanity.
3. Therefore, AI systems will destroy humanity.

The argument's logical form is sketched after the survival stories below.

The Four Ways the Argument Could Fail

Two ways premise 1 could be false:
- A Technical Plateau occurs: scientific barriers in technology development prevent AI from becoming extremely powerful.
- A Cultural Plateau occurs: humanity bans AI research, stopping AI from becoming extremely powerful.

Two ways premise 2 could be false:
- Alignment occurs: AI systems do not destroy humanity because their goals prevent them from doing so, or at least because their goals are indifferent to whether humanity survives or dies.
- Oversight occurs: we implement a safeguard that reliably detects an AI becoming too powerful and lets us shut it down.

The Worries About Each Individual Survival Story

- Technical Plateau: One thought in its favour is that superintelligence is impossible or would be incoherent. But this faces three challenges:
  (i) recursive self-improvement has the potential to produce superintelligent AI systems;
  (ii) even without superintelligence, AI systems with roughly human-level intelligence could pose an existential threat to humanity; and
  (iii) there are good reasons to think that such AI systems will soon be developed.
- Cultural Plateau: Raises three issues:
  (i) it would be difficult for humanity to agree that AI is an existential threat;
  (ii) even then, many individual actors have powerful incentives to continue making AI systems; and
  (iii) banning AI development is a collective action problem, because the relevant actors are engaged in a race.
- Alignment: Faces four challenges:
  (i) AI systems will tend to develop intrinsic goals that conflict with human goals;
  (ii) competition over scarce resources and human attempts to control AI will give AIs instrumental reasons to enter into conflict with humanity;
  (iii) selection pressure pushes against indifferent AI; and
  (iv) existing alignment techniques are uninspiring.
- Oversight: Faces three challenges:
  (i) long-term perfect oversight requires a very low risk of failure, but our best safety tools pass through fallible bottlenecks;
  (ii) even if more intelligent AI systems help make AI systems safer, we should expect fluctuations in the relative rates of increase in danger versus increase in safety; and
  (iii) perfectly safe oversight faces the challenge that there is no equilibrium for safe systems.
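
Because the three-premise argument is an instance of modus ponens, it is logically valid; the only way to resist the conclusion is to deny a premise, which is exactly what the four survival stories do. Below is a minimal Lean 4 sketch of that structure (the proposition names Powerful and Doom are illustrative placeholders, not taken from the source):

```lean
-- P1 (p1): AI systems will become extremely powerful.
-- P2 (p2): if AI systems become extremely powerful, they will destroy humanity.
-- The conclusion Doom follows by modus ponens (applying p2 to p1).
example (Powerful Doom : Prop)
    (p1 : Powerful) (p2 : Powerful → Doom) : Doom :=
  p2 p1
```

The two plateaus (technical and cultural) deny p1, while alignment and oversight deny p2; blocking either premise blocks the proof.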
