What Is Science? (And Why Does This Matter?)

Summary

This book chapter explores the concept of scientific knowledge and its various applications. It examines issues such as the scientific basis of climate change, what counts as permissible scientific evidence, and the public perception of science.

Full Transcript


2 What Is Science? (And Why Does This Matter?)

Willem Halffman

2.1 Trust Me, I’m a Scientist

What is science? The question may sound academic, the kind of question only professional philosophers of science would worry about. Not so. In fact, it is a very practical question, and its answer can be of enormous consequence. Knowledge that can claim to be ‘scientific’ generally carries more weight and that, in turn, can affect how people make some rather important decisions. Here are some examples:

– What knowledge is scientifically sound enough to justify expensive climate policies? Climate sceptics have challenged what counts as ‘sufficiently sound evidence’ for anthropogenic climate change (Lomborg, 2001), and whether the Intergovernmental Panel on Climate Change should be considered ‘political’ rather than ‘scientific’ – and not always for the noblest reasons (Oreskes and Conway, 2010). See also the case on climate science in this volume (Hulme, Chapter 3).

– What is permissible scientific evidence in a court of law? Is the forensic identification of handwriting sufficiently ‘scientific’ to be allowed as expert evidence in court? Or is it more akin to graphology, the dubious pseudo-science that infers emotions and mental states from handwriting? Legal challenges have effectively required judges to demarcate ‘science’ from ‘merely experiential knowledge’ of handwriting. Historically, the verdict on handwriting expertise has been by no means straightforward (Mnookin, 2001; Solomon and Hackett, 1996).

https://doi.org/10.1017/9781316162514.002 Published online by Cambridge University Press

– Should creationism be taught in public schools? Traditional religious objections against evolutionary biology relied on theology: Darwinism contradicted religious texts, which were presented as the higher authority.
Modern creationists try to mobilise the authority of science by claiming that theories of intelligent design have scientific credence, or, inversely, by challenging scientific evidence in evolutionary biology. Their attempts to replace Darwinism with creationism in school curricula have challenged what counts as ‘science’ (Gieryn, Bevins, and Zehr, 1985), or have presented creationism as ‘science also’.

Establishing exactly which knowledge can rightfully claim to be ‘scientific’, and hence receive the extra cognitive authority that comes with it, is surprisingly difficult. As an (environmental) scientist with the best of intentions, you may want to claim that the world should believe you because your knowledge is ‘scientific’, but in practice things are not so simple – even if your knowledge really is quite solid. In this chapter, we will describe attempts to resolve this problem through various ‘gold standards’ to distinguish solid, scientific knowledge from all the rest. We will show that there are no simple solutions. ‘Trust me, I’m a scientist,’ or ‘we used the scientific method’ are not going to convince people. This is a problem that cannot be resolved with simple analytic distinctions such as ‘scientific’, ‘specialist’, or ‘expert’. In the practice of environmental expertise, facts and values are intricately connected in ways that defy a simple separation into distinct roles for scientists and non-scientists. After we clear this conceptual dead end in this chapter, the rest of the book will suggest more fruitful ways forward by describing the processes and institutions that either challenge or build the trust in knowledge.

This chapter is probably the most conceptual in the book. It tries to address some deep-rooted assumptions about science and knowledge, in order to make way for alternative conceptions and experiences.
The chapter builds on many concrete examples, but in case you find it still too abstract, we recommend you proceed with the rest of the book and return to this one later. Although this conceptual problem logically precedes the rest of the book, reversing the order may be more instructive for some.

2.2 The Reputation of Science and Its Uses

Even with good arguments, it can be very difficult to come to an agreement on what exactly may count as scientific. If we momentarily suspend our urge to decide who is right and who is wrong, then we can at least establish that whether something is ‘proper science’ can be of grave importance, potentially raising a lot of agitation. Is this vaccine really safe, and is contrary anecdotal evidence ‘not scientific’? Should we take the knowledge of local people into consideration for regional planning decisions, or stick with science-based knowledge only? Can patients participate in an advisory body on disease management, or only scientific experts? Should we stop public funding for a field of research because it is not universally considered ‘properly scientific’?

If you want knowledge to be part of school curricula, or of (legal) decisions, political deliberation, or public research funding, it seems to help a lot if you can claim that this knowledge is ‘scientific’. In most societies, science tends to have a considerable reputation. In spite of populism, science is still more trusted to establish factual truth than other institutions: if we want to know what is safe or healthy, what the state of our environment is, or what our options are for a sustainable future, we generally trust scientists over companies, governments, or social movements. Science has cognitive authority: if science says it is true, then it must be so – or at least highly likely, to most people.
The label ‘scientific’ acts as a certification for the reliability of knowledge. Even though science’s cognitive authority holds in general, there are important qualifications. First, not everybody trusts science to the same degree. Scientists and more highly educated people tend to trust science more than the general public, the level of trust varies considerably between countries, and there seems to have been some erosion of trust over time (De Jonge, 2016; Gauchat, 2011). Second, there are good reasons not to trust science blindly. A deeper understanding of science may also lead to criticism of some controversial techno-scientific endeavours, such as genetic modification or animal experimentation. The history of eugenics, abuse in human experiments, and stories of scientific fraud remind us that our trust in science should not be too unconditional. Science was never perfect.

The cognitive authority of science is symbolised in heroic stories that we tell our children and students. We tell tales of how Galileo Galilei was almost burnt at the stake for a better model of the solar system, or of the immeasurable genius of Newton and Einstein. We even keep Einstein’s brain in a jar, because some people thought it might reveal the secret of his brilliance (Abraham, 2004). If you grew up in France, then you will have learnt to celebrate Louis Pasteur for conquering superstition and bringing us vaccines (Geison, 1995). In environmental science, you probably know of the scorn and ridicule that befell Darwin for his theory of evolution (Desmond and Moore, 2009), and of Rachel Carson’s courageous struggle to show how organochlorine pesticides threatened our birds (Carson, 1962).[1] We honoured these people with statues in parks and market squares, wrote their biographies, and put their faces on postage stamps or bank notes.
Modern culture celebrates science (even if it does not always fund it accordingly). Let us be clear about this: the cognitive authority of science is well earned. Science has an impressive record of achievement. Just think about our increased life expectancy (at least in the richer parts of the world), the increased productivity of agriculture, our environmental awareness, the knowledge of our distant past, and of our place in the cosmos. The authors of this book would not turn to quackery and superstition if they got ill, but to tested medicine. We want proper, certified science to assess the quality of our drinking water, and we trust our lives to the wonders of engineering and aerodynamics whenever we step into an airplane. Even though we may point out many complications, the cognitive authority of science is not achieved through empty rituals, but is based on the accumulated improvements to the human condition of the past, using carefully honed methods and institutions.

Even though we may be critical at times, daily life in a technological society would become impossible without some level of delegation – that is, trust in scientists’ assessments of matters we cannot figure out for ourselves. For the safety of our drinking water, food, trains, or pills, we rely on regulatory standards that are at least partly based on scientific knowledge. If we had the talent and the time to study for ourselves the toxicology, food science, railroad engineering, or pharmacology involved, we might theoretically be able to verify the solidity of the knowledge involved. In practice, citizens, policy makers, companies, and judges rely on delegation: we hesitatingly trust the specialists who assess such safety issues, and expect that the science involved will be impartial, sound, and considerate.
Thus, identifying some knowledge as ‘scientific’ is one way to manage the division of labour in a complex society and delegate the detailed assessment of knowledge claims to science-based experts.

Non-scientists also try to rely on science’s cognitive authority to make claims about what is going on in the world, or about what should be done. They try to strengthen their statements by claiming that they are ‘scientific’. They may claim it is ‘scientifically proven’ that certain foods are healthier than others, or that there is ‘scientific proof’ for climate change. Or you may claim your company’s product was ‘scientifically proven’ to be superior to a competitor’s. As a policy maker, a consultant, or a company you might be able to use science’s reputation in support of your activity or decision. It is not just scientists who claim superior credibility, but also those who use scientists’ knowledge.

The cognitive authority of science can also rub off on your activities. To do that, you would have to be ‘just like science’: you would need grounds on which to claim that what you are doing is part of, or similar to, the scientific endeavour. For example, you could argue that, just like science, your profession is taught at a university, with academic reflection, resulting in an academic degree that is recognised beyond your local private college.

[1] Often, the heroes in the stories are physicists and male, reflecting older historic biases. The stories also over-expose the positive aspects of such heroes. After all, Galileo recanted at the last moment, part-time alchemist Newton obscured the contributions of his great competitor Robert Hooke, Pasteur won his battles at least as much through clever theatrics as through brilliant experimentation, and Darwin’s fear of religious backlash made him delay publication for more than a decade.
You might want to point out that what you are doing is based on recognised methods, certified by peers, and widely accepted; that you use a laboratory, mathematics, falsifiable theories, or some of the other things that people recognise as characteristically scientific. If you can claim that a practice, a measuring technique, a device, a statement, a person, or an organisation is ‘scientific’, then you implicitly mobilise the impressive legacy of science in support of your activities.

However, there is also a catch. Many people have tried to appeal to the authority of science in the name of various causes, but on shaky grounds. For example, throughout the nineteenth century, phrenologists claimed they could deduce mental capacities from the shape of human skulls or bodies, using dubious biometry to justify inequality, sexism, racism, and even slavery (Desmond and Moore, 2009; Gould, 1981). In recent times, governments and companies have appealed to scientific arguments to downplay the hazards of chemical pollution, nuclear installations, medicines, or food contamination, such as in the case of ‘mad cow disease’, often to avoid costly safety measures or inconvenient upheaval. Similarly, we would not want to allow any quack doctor who is out for a quick profit to claim scientific credentials. Appealing to the cognitive authority of science does not mean you automatically deserve to shelter under its umbrella.

There is also the opposite problem: if you reserve cognitive authority too strictly for only the ‘hardest’ of sciences, then you may discard a lot of valuable knowledge (see Chapter 8, this volume). Indigenous peoples of the Amazon may not read the Journal of Pharmacology, but they can point pharmacologists to promising substances through their intimate knowledge of forest plants (Minnis, 2000).
Patients with chronic diseases, such as diabetes or HIV/AIDS, may use their knowledge of daily disease realities to inform better care plans, even if their knowledge is not based on double-blind clinical trials, the somewhat overrated ‘gold standard’ of medical research (Epstein, 1996). Field biology enthusiasts, such as experienced birders or lifetime bryologists, may gather extensive biodiversity knowledge that we could never afford through professional ecologists (Ellis and Waterton, 2004; Lawrence and Turnhout, 2010). Including too much under the umbrella of science’s cognitive authority may present the risk of admitting nonsense, but, clearly, allocating cognitive authority too strictly risks discarding a lot of valuable knowledge.

It would be convenient if there were a clear criterion, an infallible yardstick that would allow us to separate true science from quackery, reliable knowledge from dubious superstition and pseudo-science, reserving the accolade of science’s cognitive authority only for what truly deserves it. Scientists seem to recognise good science when they see it, so surely there must be some logic to their assessment? Can’t we devise a practical definition of science, or at least of reliable knowledge, and then use that to decide who and what is ‘in’ or ‘out’?

2.3 Science as a Particular Way of Reasoning

It is surprisingly hard to come up with a clear and universal definition that covers everything that you would commonly call ‘science’.
It is easy enough to list recognisable characteristics: science tends to perform empirical studies and accumulate facts, gathered through experiments or at least using well-established methods of systematic observation or registration, integrated through general theoretical notions and understanding, which are then tested against further facts, in a quest for general laws and regularities, all in a spirit of healthy scepticism towards tradition and accepted belief. With such a list of characteristics, science appears as a particular way of gathering and accumulating knowledge that seems more systematic and objective than ordinary forms of knowledge. We could argue about the details, but this is roughly the descriptive definition of science that scientists are taught during their training.

However, if we try out our list of characteristics on specific forms of scientific knowing, then it quickly becomes hard to make all of them fit. For example, we may think of experiments as the epitome of scientific research, but some sciences are not very experimental at all, such as astronomy or sociology. Large parts of climate science work with computer models rather than experiments or systematic empirical observation, as does much of economics. We could follow the Anglo-Saxon usage of the word ‘science’ and restrict its meaning to the natural sciences only, as opposed to the ‘social sciences’ (in contrast to the tradition of Continental Europe, where ‘science’ generally includes all academic knowledge). Even if we were to reserve the accolade of ‘scientific’ for the natural sciences only (which seems to discard rather a lot of valuable knowledge), this still does not mean that all ‘science’ is experimental. In fact, some highly respected sciences, such as mathematics, are not very empirical at all.
While some sciences are occupied with testing generalisable laws, fields such as taxonomy deal with nomenclature and classification instead. Even a concept such as ‘objectivity’ is more problematic than you might expect. It might mean ‘without value judgement’, or ‘without any human intervention’, or ‘checked and agreed by many people’, or ‘accurately representing the world’ – all of which become quite complicated in any concrete example of research (Daston and Galison, 2007). In some sciences, ‘objectivity’ is treated with a lot of circumspection. For example, anthropologists recognise that their presence in the communities they study, or their own inescapable cultural biases, affect their analyses. To say that mathematics, anthropology, taxonomy, and astronomy are ‘not really sciences’ would not only be counterintuitive, it would also deny these obviously valuable forms of knowledge the cultural legitimacy and support provided by the label ‘science’ (or ‘social science’, if you insist).

Inversely, some forms of knowledge are empirical and theoretically integrated, but would not be seen as science by most people, such as astrology. Astrology uses astronomical tables and relies on the mathematics involved, consists of theories about how planets influence people, and involves established methods for predicting the future, such as a birth horoscope. It learns and improves (at least by its own standards) – for example, it incorporated the planets beyond Jupiter as they were discovered. Although we do not want to suggest that astrology is a science, it is important to realise that even philosophers of science disagree about exactly why it is not a proper science. In addition, some forms of knowledge operate somewhere in the grey zone, such as herbal medicine or acupuncture, with debated evidence and controversy about what counts as reliable evidence.
Advocates and practitioners of ‘alternative’ sciences may engage in fierce and bitter debates with mainstream scientists that can last for decades. Some of these debates result in eventual rejection, as in the case of phrenology: the study of how skull shapes express personality eventually became ‘unscientific’. In other cases, the ‘unscientific’ outsiders are eventually accepted into the fold, as in the case of plate tectonics, a field of study that took decades to achieve recognition. Some disputes never quite seem to get fully resolved, as with homeopathy, which has had a precarious position on the fringes of science for decades.

Philosophers of science call this the problem of demarcation: what criterion could we use to demarcate true, reliable scientific knowledge from other forms of knowing? A famous attempt, still quoted often by scientists today, was made by the philosopher of science Karl Popper: science works with falsifiable hypotheses. Rather than looking for facts that can confirm theories about the world (‘verification’), the true scientist should articulate claims about the world in such a way that they are open to challenges and then try to refute these claims. For example, we should not assume that all swans are white and then find in every observed white swan a confirmation of our belief, but rather look for black swans and try to refute our tentative theory that all swans are white. Until the day we actually find black swans,[2] we can cautiously assume that all swans are white, but we must accept that this is only a preliminary certainty – that is, the best we can do at that point in time. Popper used this criterion to distinguish science from pseudo-sciences such as astrology or occultism, which always seemed to find a way to reinterpret observations to confirm their beliefs.
More than just a philosophy of science, Popper also wanted to challenge over-commitment to ideological beliefs. With his principle of refutation, he distanced himself from Marxism, which in his opinion failed to accept observations that ran contrary to its expectations. After Marx had pointed out that capitalism undermines its own existence and therefore must lead to revolution, his followers were forced to find ad hoc explanations for why capitalism continued to survive its ‘internal contradictions’. Tellingly, Popper’s theory of science was first published in German in 1934, under the looming threat of Nazism, and republished in English during the Cold War in 1959 (Popper, 1934/1959). It became part of his defence of an ‘open’ and liberal society, against totalitarianism (Popper, 1942/1966).

While Popper’s demarcation through the refutation principle is immensely influential and still quoted often, philosophers of science have raised quite a few objections to it, especially when confronted with the actual practice of research. For example, when a refuting observation occurs, it is not immediately clear what should be rejected: is the theory wrong, or did the experiment fail? Was the observation an outlier, or was the observer incompetent? Because we have no absolute way to determine the cause of refutations – whether they result from error of measurement, the experiment, or the observer – it is impossible to answer these questions with certainty. This means that the application of Popper’s principle will run into inevitable difficulties in practice.

How to handle refutation is further compounded by the problem of interpretation. What do we do if we find a black bird that looks like a swan? Do we say that it is a swan and refute the theory that all swans are white, or do we revise the definition of a swan to exclude black ones? (While this latter option may seem silly, black and white swans are in fact considered two different species.) Or do we just accept that the matter remains unresolved? Thomas Kuhn famously observed that scientists regularly allow anomalies to accumulate, rather than refute well-supported theories that have proven useful. It is only when sufficient evidence accumulates that completely new approaches (‘paradigms’) will be considered, typically by a new generation of scientists who are not so committed to the old ways, launching a ‘scientific revolution’ (Kuhn, 1962/1970). Examples of such scientific revolutions are the shift from Newtonian to relativistic mechanics, and the rise of evolutionary theories in biology. Popper’s aim of falsifying hypotheses is hence a principle that only seems to work in the context of stable theoretical assumptions, defying the idea that refutation can effectively challenge fundamental worldviews, as it was intended to.

Philosophers have observed that, in practice, scientists use a rich mix of epistemological principles in their research, sometimes aiming to refute hypotheses, sometimes generating new theories through observed confirmation, or even by purely deductive reasoning. If we ask them which philosophy of science they use, they answer with an eclectic mix of principles from opposing schools in the philosophy of science (Leydesdorff, 1980). This does not mean that ‘anything goes’: just because there is not one universal method in science, it does not mean that there are no valid methods at all (Feyerabend, 1975). But it does mean that we have to understand and appreciate that research fields have differing, evolving standards of what constitutes valid and solid research.

[2] Black swans (Cygnus atratus) actually exist, originally from Australia, but also as escaped park populations.
Hence, it is still possible to criticise some knowledge for being unfounded, or to question research in terms of its own or even neighbouring standards (Fagan, 2010). For example, paleo-climatologists may have meaningful objections to global circulation climate modelling on the basis of completely different principles for how to generate reliable knowledge: empirical observation of traces left by long-term climate change, versus computer models based on physical laws and current measurements (Shackley and Wynne, 1996). Or we may criticise toxicologists if they do not stick to their own risk assessment methods to judge the hazards of chemical substances. Once standards have been established for how to assess pesticide hazards, it becomes possible to check whether actual assessments live up to them (see Van Zwanenberg and Millstone, 2000). There is no need to give up knowledge standards completely, but such standards tend to be much more specific than a universal ‘scientific method’.

Popper was not the only one to search for the ultimate demarcation criterion. The demarcation problem was a key concern for an entire generation of philosophers of science, but none ever came up with a solution that remained universally convincing (Callender, 2014, 46). The idea of a universal ‘gold standard’, ‘the scientific method’, or a definitive criterion to distinguish all scientific from non-scientific reasoning, is a theoretical project that never seems to work in practice. To determine which chemicals are dangerous to the environment, how to assess the effectiveness of pharmaceuticals, or what contributes most to climate change, we still need to go through lengthy deliberations over what counts as sufficiently reliable knowledge. While refutation is one useful principle for doing so, it does not definitively resolve the problems of citizens, policy makers, and judges who have to distinguish valid from questionable knowledge in order to decide on appropriate courses of action, especially under conditions of polarised conflict or expert disagreement.

2.4 Science as a Particular Way to Organise Knowledge Creation

We could try to define science not as a particular style of reasoning or generating knowledge, but as a particular way to organise the generation of knowledge. Maybe there is something unique about the social structure of science, rather than its cognitive structure. This shifts the attention away from how scientists think to how they work, cooperate, and deliberate together. Thus, science could be seen as a scholarly activity, relating new research to a previously accumulated body of knowledge, but in a unique format. This knowledge is openly shared among a community of researchers, open to common scrutiny and debate, and organised via peer review and a particular system of scientific publishing and communication. We then understand science as a unique set of organisations (laboratories, journals, conferences), social practices (peer review, open discussion), and shared values. For example, researchers like to present science as disinterested, neutral, unbiased in political or cultural quarrels, based on facts, objective, and universally true, irrespective of country, race, gender, belief, political convictions, or other particularistic categories. Even if the methods of science are diverse and complex, can these shared attitudes and organisations nevertheless distinguish it from other activities in society?

2.4.1 Unique Norms and Values?

Sociologist Robert Merton argued that science is set apart because of its unique combination of four core values.
According to Merton, the norms of science dictate that knowledge should be openly available and Communally shared, and should not be tainted by particular characteristics of the scientist (gender, nationality, age, race, etc.), but be Universal. The production of knowledge should be done in a spirit of Disinterestedness, where private advantages should not interfere with the quest for new knowledge. Last, knowledge should be subject to systematic and critical scrutiny by peers through Organised Scepticism: not just random criticism of every detail and assumption, but a reasonable and collective questioning of scientific findings, as can be found in peer review. He expressed this ‘ethos of science’ with the acronym CUDOS, a pun on kudos and the importance of recognition in the reward structure of science. Only science would have this unique combination of norms, providing a social rather than cognitive demarcation criterion. Merton originally formulated the normative structure of science in 1942, with concerns similar to Popper’s about the totalitarian repression of science under Nazi and Stalinist regimes (Merton, 1973).

Many scientists would agree that the CUDOS values stand for important norms, and invoke them to call each other to order. A recent reappraisal of CUDOS has shown these values to still be relevant, especially in a critique of a science that is increasingly operated as a business (Radder, 2010).
For even if these norms may be invoked and recognised by scientists as valuable ideals, the reality of research is often very different: knowledge is held back until publication priorities can be assured; rivalries between laboratories, personalities, and even countries lead to particularistic research choices and unfair opportunities; research investments are steered by commercial considerations or military advantages that require secretive knowledge rather than communal sharing; and careers are built on commitments to theoretical premises or approaches rather than sceptical self-doubt. (Such research commitments and identities are woven into how scientists are trained and rewarded, and are no less particularistic than commitments to economic or political interests.) Furthermore, governments increasingly expect scientists to affiliate with companies or government agencies to guarantee practical benefits from research. If we removed all interests from science until we had truly ‘disinterested’ research, there would not be a lot of research left. Our list of values may express an inspiring and acutely relevant ideal, but it is not a very useful yardstick to distinguish true science from more questionable knowledge creation.

This has led to the objection that daily research practice is often guided by more dishonourable ‘counter-norms’: science that is secretive instead of communal, particularistic instead of universal, interested instead of disinterested, and dogmatic instead of open to scrutiny and organised scepticism (Mitroff, 1974; Ziman, 1994).[3] There is even some evidence that the increased commodification of science is becoming accepted by scientists as the new state of affairs, altering the ideals of science (Macfarlane and Cheng, 2008).
In any case, whether or not the traditional ideals of science still carry weight, they do not provide a solution to the problem of what we should trust as 'true science', even if they might provide some guidance as to the kind of research we could question. Evidently, supporting beautiful ideals does not automatically produce reliable scientists.

2.4.2 Peer Review

What if we take a procedural perspective on the demarcation problem? What if 'science' is simply what scientific organisations do? For example, 'scientific' is all that is published in peer-reviewed journals, in which anonymous reviewers review papers prior to publication. The quality checks of peer review increasingly include grant proposals, and even career advancement. In fact, just as with Popper's refutations, scientists and their organisations sometimes refer to peer review as the ultimate criterion to distinguish the reliable from the unreliable. To underline its importance, they sometimes refer to its origins in seventeenth-century academies of science that fostered the illustrious Scientific Revolution. There is a firm belief that peer review will ultimately weed out faulty and unreliable science: by submitting research to the anonymous, 'blind' evaluation of knowledgeable colleagues, scientists will be able to assess the quality of research without being confused by irrelevant social properties of the knowledge source, such as reputations or nationalities. However, peer review is a varied practice, and not even unique to research. Some forms are used outside of science in the assessment of professional work, among nurses or teachers. The double-blind review system of the humanities (where names of authors are removed from submitted papers) is uncommon in some natural sciences, particularly in small, highly specialised communities where anonymity is an illusion anyway. In fact, in some areas of physics quick online access seems to be more important than pre-publication peer review.
In any case, peer review originated not in quality assurance, but in censorship: review was to prevent the publication of seditious or subversive material that could undermine the authority of the king or the established social order (Biagioli, 2002). As a system of quality guarantee, it only became common in the natural sciences after World War II, and in the rest of the sciences even more recently. This would mean we would have to exclude Newton, Darwin, and even Einstein from science, as their work was published long before peer review became a standard practice. As a quality guarantee, peer review has come under a lot of criticism. Peer review has a conservative bias, making it hard to publish ideas that go against dominant beliefs. In fact, several Nobel prizes have been awarded to work that was originally turned down by the peer review system (McDonald, 2016). Novel or interdisciplinary work can suffer from the fact that relevant and competent peers may be hard to identify. There are also indications that peer review favours established scientists, can be biased and even abused, and is subject to negligence and self-interest (Biagioli, 2002). Experiments where faulty papers were submitted for peer review revealed less than desirable levels of scrutiny, especially among the growing number of fringe, open-access journals (Bohannon, 2013).

³ Expressed by John Ziman as the PLACE counter-norms (Ziman, 1994): 'post-academic' science is Proprietary (concerned with owning and earning from knowledge), Local (e.g. in search of regional advantages), Authoritarian (as in large hierarchical research programmes), Commissioned (by states or companies), and Expert (advising with certainties rather than cautious doubt).
In fact, scientists themselves often appear unimpressed by a publication's approval through peer review in general, and rely instead on characteristics such as the likelihood of claims in light of prior knowledge, or the reputation of a journal, the author, or the author's institute (Collins, 2014). Without questioning the immense value of some peer-review practices, it seems highly problematic to find in peer review a simple gold standard establishing what we should and should not take to be solid and reliable scientific knowledge.

2.4.3 Laboratories and Centres of Calculation

French anthropologist and philosopher Bruno Latour and his colleagues have highlighted the crucial position of laboratories in science (or similar places where knowledge is created, such as field stations and botanical gardens), not as a way to demarcate science, but to describe the particular processes by which research operates. The laboratory is the place where scientists collect samples and measurements (even from places as far away as the tails of comets), and subject these bits of collected nature to trials and tests to register their behaviour. The registrations collected by scientists produce an enormous universe of carefully noted 'inscriptions' and calculations, ordered and re-ordered into data repositories and publication libraries. This perspective focuses on the remarkable processes of accumulation in science, showing the dazzling networks that connect scientists to the world, rather than separate them from it (Latour, 1987). The great feat of scientists is to re-enact the tamed nature of the laboratory, of the controlled experiment or model, out in the world again: to make 'tamed' nature repeat in the wild what it did in the laboratory.
This turns the laboratory into a powerful instrument to change the world in which we live – as, for example, when scientists use tamed viruses to produce vaccines (Latour, 1983, 1993; Latour and Woolgar, 1979). If we use the word laboratory in a wider sense (as is more common in France, where you can have a 'laboratory' of sociology without test tubes or microscopes), we can see how such 'centres of calculation' operate throughout the history of science and its diversity. Researchers collect survey results, exotic plants in botanical gardens, or naval explorers' sketches of coastlines for accumulation in colonial atlases (Callon, 1986; Jardine, Secord, and Spary, 1996; Law, 1987). Research becomes a particular way to gather, control, and accumulate knowledge. This view of science stresses the practical work of scientists, rather than lofty theory or interpretation, a perspective that scientists sometimes see as alien or even offensive. For many scientists, it misses the deeper meaning of their work, and the systematic reasoning and argumentation it involves. Latour himself has insisted that social scientists and philosophers should not try to demarcate science. In trying to describe science (rather than perform it), we should never draw boundaries, because we run the risk of overstating the rigidity of distinctions that would hide the close connections between science and society. Hence, we should strive never to make boundaries harder than the actors would (Latour, 1987). In more recent work, he has stressed that science uses particular modes of operation, different from law or religion, but insists that these modes are complex and cannot be reduced to simple demarcation criteria (Latour, 2013a, 2013b).

2.5 The Diversity of the Sciences

2.5.1 The Family of Sciences

Attempts to distinguish science from other forms of knowledge aim not only to find a hard boundary, but also to gather all of science under one umbrella: they insist on the unity of the sciences.
Such a project may make sense in historic circumstances that threaten free inquiry, such as religious intolerance, totalitarian regimes, or waves of mass superstition, but there are important disadvantages too. A universal criterion for what is and is not science may threaten the wealth of cognitive diversity in the sciences. It entails the danger that one way of doing research becomes the gold standard, imposing awkward or even inappropriate criteria on other sciences. Throughout the twentieth century, experimental physics would count as an epistemological ideal, the ultimate 'hard science', and many other fields of research abandoned older styles to shape themselves in its image. These days biomedical research has become a powerful template, even though its approaches may sit awkwardly with field research, anthropology, or information sciences. In addition, a strict demarcation runs the risk of discarding too much knowledge, knowledge that is practical or true, but not experimental and peer reviewed, such as patients' knowledge of how to live with the daily realities of a chronic disease. We will return to the latter problem in Chapter 8, on science and lay knowledge. If we look at the daily operation of science more closely (scientists at work in their laboratories, climate modellers behind their computers, social scientists interviewing people), we can observe how diverse the practices of these researchers truly are.
Perhaps it is not so strange that we have such a hard time finding simple demarcations for knowledge practices that are actually so varied: calculating, modelling, theorising, counting, experimenting, interpreting; in diverse places such as observatories, laboratories, museums, botanical gardens, observation stations, space stations; producing various outputs including reports, research papers, technologies, public statements, and lectures. Some researchers use extremely expensive equipment, others rely on libraries, databases, or just their desktop computer. There is something peculiar about the idea that there is a shared criterion in all of these ways of producing knowledge. One way to describe such a criterion metaphorically is as a search for the largest common denominator: the most meaningful characteristic we can find that all the sciences have in common. This approach to the problem of demarcation is an analytic one: we try to identify essential characteristics in the diversity of the sciences. In itself, there is nothing wrong with subjecting the sciences to such an analysis. However, a problem arises when we inversely start to use this essence as a criterion to judge who and what really belongs, and who and what does not. The project then acquires a nasty, fundamentalist character: only those who have the pure trait can claim to be true scientists, while the others will have to be inferior mongrels, expelled from universities, or tolerated but with a lesser status. Turning an essential, analytic definition of the sciences into a demarcation criterion then becomes a brutal essentialism used to expel some forms of knowledge. Ultimately, attempts to find some pure essence that all the sciences have in common (hence the term 'essentialism') have failed to provide a definitive and universal answer, even though some are still looking.
Perhaps it is more accurate to think of the sciences as an extended and somewhat rambunctious family. This family shares some resemblances and a number of more or less common characteristics,⁴ such as empirical work, theoretical thinking, systematic testing, and peer review. However, there are some eccentric uncles who lack some such features. There are also some distant nephews and cousins who married into the family, but are not yet fully accepted by some of the grandmothers. There is also a lot of fuss about who actually rules the roost and who represents the purest family traits. Ultimately, this family is a rowdy, fuzzy bunch, with ever-contested membership. The search for demarcation is a search for a clear criterion for who belongs in the family, who is allowed to share the inheritance. Favouring some characteristics over others runs the risk of losing some valued members, while a happy-go-lucky open-door policy means some shady suitors could take off with the family heirlooms. The family of the sciences seems too complex, extended, and dynamic to resolve its membership and proper customs once and for all. However, to simply state that 'it's complex' is too easy and defeatist. Even though it is complex, there are ways to identify different branches in the family. Rather than one homogeneous 'science', we can try to identify more specific family branches to see how they establish the validity of knowledge claims.

2.5.2 Styles in Science

In an attempt to celebrate, rather than condemn, science's diversity, some historians of science have suggested a series of different scientific styles.
A style is an encompassing way to express different approaches to research, involving much more than just different methodologies: also what scientists see as ideals for knowledge (a model, a system, a law, a solution, a cure), or what they consider valid ways to establish solid, reliable knowledge (deduction, modelling, experimentation, field trials). Specific disciplines tend to have a preference for certain styles. In medicine, randomised controlled trials are often considered the gold standard for research, but other fields put more trust in computer models, or insist on laboratory experiments, theoretical deduction, or systematic observation. In some cases this is the result of practical impossibilities, such as experiments in astronomy. In policy sciences, semi-controlled field experiments once held the promise of a hard standard to test the effects of policies or other interventions, but they failed to deliver the solid conclusions they promised and are now used sparingly. Economists, on the other hand, put high credence in models. We can also see different styles at work in how knowledge is organised: field biologists who look for new species and try to systematise them in the nomenclature of taxonomy do something quite different from ecologists who try to model how predator–prey relations affect the size of populations over time, or from molecular biologists trying to identify the processes of gene expression, even though their findings may be relevant for each other.

⁴ The family resemblance is a metaphor used famously by Wittgenstein: even though not all members of a family may have the same nose, there may be a set of features most of them share, so that family members are still recognisably similar (Wittgenstein, 1953).
The precise list and number of styles distinguished varies between historians of science (Kwa, 2011; Pickstone, 2000), but our point here is more to show how styles can help to identify diversity. For example, the taxonomical style is rooted in the work of biologists who travelled along with colonial expeditions, collecting plants and animals from exotic places and bringing them back to botanical gardens, where they could be systematised and grown, and then redistributed for horticulture or commercial agriculture. The experimental style, in contrast, does not rely on collecting in botanical gardens or aiming to systematise species, but on laboratories, where the experiment is the crucial way to know nature. Its ideal is not so much systematisation, but to find general laws and regularities. In a technological style, the objective is to develop working apparatuses or procedures, whereby 'working' is more important than precise and exact understanding of underlying laws, which could come later. Over time, styles have fallen in and out of favour. For example, at least up until the mid-nineteenth century, research in biology predominantly followed the taxonomical style: Linnaeus, Buffon, and even Darwin were fierce collectors and systematisers. Now, this style of biological research is no longer considered very 'hot' and is overshadowed by experimental research, especially in molecular biology. Many of the painstakingly gathered specimen collections have been discontinued and in most academic biology curricula, taxonomical knowledge is no longer a priority. There is a risk in such trends, of course, as old knowledge may suddenly become relevant again, for example as biologists learn to use the great collections to study climate change or ecological restoration (Bowker, 2000).

2.5.3 Disciplines

Another way to characterise the science family is as a collection of disciplines: biology, physics, geology, sociology, economics, etc.
Disciplines are divisions and departments of science that structure research, with many originating in the nineteenth century. Disciplines organised the knowledge creation process in specialised publications (disciplinary journals), scientific societies and their conferences, handbooks, and degrees. The punitive connotation of the word 'discipline' is not coincidental: a discipline defines the masters students should emulate and the founders of the field, prescribes what methods and theories should be known and can be used, and how to apply them correctly. During education in a discipline, students go through strong socialisation processes. Respected and often charismatic professors and their handbooks infuse their students with a strong ethic of what is and is not proper research, of what is 'the scientific method'. Disciplines may even prescribe correct language and formulations, sometimes in radically divergent ways. For example, whereas physics will require the researcher to be completely absent from the narrative of the scientific publication by using the passive voice ('an experiment was conducted' rather than 'we conducted an experiment'), in anthropology the invisibility of the researchers in the narrative is generally considered a serious faux pas. Whereas the physicist will respect mathematisation as a convincing way to confirm theoretical arguments, the anthropologist is taught to question the assumptions and interventions needed to count anything in the first place (Verran, 2001). Hence, the criteria for what is and is not proper science differ between disciplines. The disciplines have lost some of their status as the main building blocks of science. Their rigid separation of knowledge into distinct departments and separate communities has led to the accusation that they have thereby prevented scientific breakthroughs.
Advocates of interdisciplinarity pointed to the crucial discoveries in genetics as molecular chemists entered biology, or the rise of cognitive sciences from the cooperation between disciplines such as psychology, linguistics, and biological neuroscience. Another important argument for interdisciplinarity was that disciplines tended to turn inward, to focus on research problems defined by the discipline, at the expense of questions posed by problems in society. To really contribute to problems presented by urbanisation, pollution, climate change, or health, cooperation of different disciplines would be necessary (also see Chapter 7). Nevertheless, disciplines are still relevant categorisations for many researchers and important frameworks for organising research. For example, most universities still offer disciplinary degrees, with separate departments or faculties for 'biology' or 'chemistry', and there are still disciplinary societies, journals, and handbooks. Yet there are now more networks and communities working in various interdisciplinary fields, such as environmental studies, climate sciences, and science studies. In fact, differing disciplinary notions of what constitutes an interesting question or an acceptable way to answer it are some of the key challenges of interdisciplinary cooperation. For the same reasons, it can be hard to establish what constitutes reliable knowledge or proper ways to do research in interdisciplinary fields, which can lead to extensive debates and disagreement.

2.5.4 Fields and Specialties

Most scientists identify with a 'field' of research, with a community of researchers they consider peers, a set of research problems considered promising, a set of methods and theories (possibly in competition) to address these problems.
Their relevant environment is not all of science, but their particular 'field'. It is the field that defines methodological standards, provides counter-arguments, and identifies pertinent questions. Scientists meet their field at specialised conferences, as they publish in specific journals, where they referee each other's papers. The idea that the research field defines the relevant environment for the validation of knowledge is a principle in US law, and has even been entertained by philosophers of science. Nevertheless, it is not easy to distinctly map out fields of research. One can use citation relations between researchers or between journals: the more often they cite each other, the more they should be related. Thus, the frequency of citations can be used as a measure of distance. However, these maps are highly stylised representations of the structure of science and should not be seen as realistic 'maps' of the country of science. What counts as the relevant field varies strongly with the position taken in research: depending on your specific research, the field environment may look very different. In addition, this does not resolve the problem for public policy of how to establish the proper set of scientists to provide or assess knowledge, as different fields may have relevant knowledge for the issue at stake. This has been a hot topic of debate in the discussion around the attempt to demarcate who has relevant expertise. Since scientific research tends to be performed and judged in highly specialised communities, Collins and Evans (2002, 2007) have argued that we should maintain a distinction between insiders and outsiders to such communities.
Outsiders may still have relevant experiential knowledge or other expertise that can contribute to the specialised knowledge of insiders, and may to some extent even understand this knowledge, but Collins and Evans argue that only insider specialists have the genuine ability to assess the detailed intricacies of a research field. Thus, they tried to return to the project of identifying who can meaningfully assess the truthfulness of scientific knowledge, attempting to find demarcation criteria not for science in general, but for communities of pertinent experts. Their particular concern was that the growing attention given to local or experiential knowledge (people who know about their forest, patients about their disease) or 'lay expertise' could lead to the idea that everyone's knowledge counts for the same, with a resultant loss of any demarcation at all. Their defence of insider expert knowledge has raised a storm of criticism (Jasanoff, 2003a; Rip, 2003; Wynne, 2003). One strong counter-argument is that reference to specialist knowledge does not really allow citizens, policy makers, or even other scientists to establish which specialists have the legitimate claim over a specific topic, especially if there are competing expert groups. The pre-eminence of specialist knowledge may be a strong argument in the context of Collins' favourite example, the study and detection of gravitational waves, where there is a stable topic of research defined by a long-standing group of specialised physicists (Collins, 1975, 2014). However, the suggestion that we must turn to the specialists falls short if we look for answers about the economy, nature conservation, or the environment. Not only are there competing fields of expertise on such topics, but for many such topics there is no clear definition of what the problem is. Should we protect rare species or habitats?
Nature with or without people? Complex issues such as climate change can be interpreted as an economic problem, a problem of global equity, a matter of planetary engineering, a problem of adequate prediction, of attitudes towards the environment, of sheer irresolvable conflicts of interest between nation-states, and so on. With each of these interpretations or 'frames', different experts may claim to have pertinent knowledge. Collins and Evans put aside this framing as a different problem, but it is hard to see how this side of the coin can be separated from reliance on well-demarcated specialties. We will return to the problem of framing, as well as to lay expertise, in Chapters 3 and 8, respectively. There is a more serious challenge to the idea that a community of experts can always dependably guarantee the quality of its knowledge, providing solid knowledge that outsiders can rely on. Even communities of experts may develop blind spots or fall victim to collective bias. There are examples of expert communities that have relied on assumptions that remained untested for decades – and even longer. For example, the idea that all human fingerprints are unique goes back to a conjecture by biometrician Francis Galton in the nineteenth century. Throughout the twentieth century, experts developed principles of fingerprint identification and comparison, using rules of thumb that grew out of practice and experience. Fingerprinting acquired such a solid reputation (in the law, but also in popular culture) that when DNA analysis was introduced it was often called 'DNA fingerprinting', suggesting it was equally reliable. Legal challenges of partial prints and ambivalent evidence, as well as careful reconstruction of the development of fingerprint expertise, have shown that at least some of the assumptions commonly shared by all fingerprint experts deserved closer examination (Cole, 1998).
Experts and scientists are not immune to groupthink or collective bias, and the fresh but forceful look of an outsider, or of a competing field of expertise, can identify weaknesses. In spite of an understandable urge to accord some reverence to specialised experts, such examples underline the importance of knowledge diversity and critical reflection by outsiders.

2.6 So How Do We Proceed?

Clearly, claiming your knowledge is true because it is scientific knowledge is problematic – not because scientific research is not valuable or a solid basis for acquiring knowledge, but because there is no unequivocal criterion to establish what constitutes 'scientific' knowledge. Even on the more detailed level of disciplines or specialties, attempts to draw a clear demarcation of what constitutes relevant expertise run into problems. Depending on how environmental problems are defined, different forms of (scientific) knowledge may become relevant. Science is diverse, and some parts of science have radically different assumptions and approaches – leading many in science studies to prefer to talk of 'the sciences' rather than a singular 'science'. This sometimes triggers fierce responses, especially from people with a strong belief in reductionism or a shared scientific ideal, as the diversity of the sciences may be misread as relativism. However, a degree of scepticism towards the cognitive superiority of science-based expertise does not automatically mean that anything is true. Diversity does not mean that there is no rigour in scientific reasoning, or that no meaningful debate between diverse sciences is possible. It does suggest, however, that a degree of modesty is appropriate in the public presentation of science-based expertise (Jasanoff, 2003b).
In practice, scientists, policy makers, citizens, companies, NGOs, journalists, and similar actors in society have invested a lot of work in settling what we will accept as reasonable, science-informed principles to establish what counts as reliable knowledge. Over time, this results in institutionalised boundaries: anchored in rules, protocols, devices and apparatuses, communities of experimenters that share conceptions and skills for 'correct' experiments, as taught at universities and explained in handbooks. For example, the environmental toxicity of chemicals is established through relatively standard procedures, involving highly protocolled toxicity tests with model water organisms, assessments of the expected distribution of chemicals through the environment, and ways of comparing expected exposure to expected toxicity to assess potential problems. For high-volume chemicals or chemicals of particular concern, additional assessments are required. In the context of regulatory toxicity testing, for the specific purpose of regulating which chemicals to allow onto the market, these protocols define what counts as trustworthy knowledge. Such procedures may be challenged: occasionally, researchers or actors may challenge whether these procedures are appropriate, 'scientific', fair, reasonable, or reliable (Chapman, 2007). However, most of the time such arrangements, resulting from long and fiercely debated negotiations, establish what can be considered sufficiently reliable knowledge in a given context.
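The 'comparing expected exposure to expected toxicity' step can be made concrete with a small sketch. This is not a protocol from the chapter: the risk-quotient form below (a predicted exposure concentration divided by a no-effect concentration derived from a test result and a safety factor) mirrors a common regulatory convention, and every number, threshold, and parameter name here is invented for illustration.

```python
# Illustrative sketch only: all values and names are hypothetical.

def risk_quotient(pec_ug_per_l, noec_ug_per_l, assessment_factor=100):
    """Compare a predicted environmental concentration (PEC) with a
    predicted no-effect concentration (PNEC = NOEC / assessment factor).
    A quotient of 1.0 or more flags a potential problem."""
    pnec = noec_ug_per_l / assessment_factor
    return pec_ug_per_l / pnec

# A hypothetical chemical: low predicted exposure, fairly high NOEC
# from a standardised test with a model water organism.
rq = risk_quotient(pec_ug_per_l=0.5, noec_ug_per_l=200.0)
print(rq)  # 0.25: below 1.0, so no potential problem is flagged
```

The point of the sketch is how much the protocol settles in advance: which test organism supplies the NOEC, how large the assessment factor is, and where the decision threshold lies are all fixed by negotiated convention, not by the arithmetic itself.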
From this perspective, the procedures and principles our societies have developed to certify knowledge are much richer than relatively simple universal principles: standardised tests, accredited professionals, peer review, certified equipment, counter-expertise and second opinions, expert committee deliberations, public consultation, right-to-know procedures, transparency, and so forth. None of these procedures is a guarantee in itself, and even their combined use may not always resolve things, but to replace them all with an appeal to 'the scientific method' or 'the norms of science' would sell short the rich resources our societies have developed to assess knowledge. It is this wealth of practices and institutions that organise and adjudicate what counts as reliable knowledge in the context of environmental policy making that this book addresses. Now that we have set aside our inherited tendency to fix this problem with a few simple rules of thumb (refutation, CUDOS, or insider knowledge), we can look at the actual wealth of practices and principles that have developed in the thick of environmental policy making, in the midst of controversies, fierce social debate, but also great successes. By now, there is a wealth of social science research into environmental policy making, the role of knowledge in policy, and the issues and tensions they give rise to. Our challenge is to find patterns in these debates and in the institutional arrangements that have developed. We must also articulate principles informed by both practice and research that can provide meaningful handles for environmental professionals to develop and use expert knowledge productively, modestly, and with integrity.

References

Abraham, C. (2004). Possessing Genius: The Bizarre Odyssey of Einstein's Brain. London: Icon.
Biagioli, M. (2002). From Book Censorship to Academic Peer Review. Emergences, 12(1), 11–45.
Bohannon, J. (2013). Who's Afraid of Peer Review? Science, 342(6154), 60–65.
doi:10.1126/science.342.6154.60
Bowker, G. (2000). Biodiversity Datadiversity. Social Studies of Science, 30(5), 643–683.
Callender, C. (2014). Philosophy of Science and Metaphysics. In S. French and J. Saatsi, eds., The Bloomsbury Companion to the Philosophy of Science (pp. 33–54). London: Bloomsbury Academic.
Callon, M. (1986). Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St. Brieuc Bay. In J. Law, ed., Power, Action and Belief: A New Sociology of Knowledge? (Vol. 32, pp. 196–229). London: Routledge and Kegan Paul.
Carson, R. (1962). Silent Spring. Boston: Houghton Mifflin.
Chapman, A. (2007). Democratizing Technology: Risk, Responsibility and the Regulation of Chemicals. London: Earthscan.
Cole, S. A. (1998). Witnessing Identification: Latent Fingerprinting Evidence and Expert Knowledge. Social Studies of Science, 28(5), 687–713.
Collins, H. M. (1975). The Seven Sexes: A Study in the Sociology of a Phenomenon, or the Replication of Experiments in Physics. Sociology, 9(2), 205–224.
Collins, H. M. (2014). Rejecting Knowledge Claims Inside and Outside Science. Social Studies of Science, 44(5), 722–735.
Collins, H. M., and Evans, R. (2002). The Third Wave of Science Studies: Studies of Expertise and Experience. Social Studies of Science, 32(2), 235–296.
Collins, H. M., and Evans, R. (2007). Rethinking Expertise. Chicago: University of Chicago Press.
Daston, L. J., and Galison, P. (2007). Objectivity. Cambridge, MA: MIT Press.
De Jonge, J. (2016). Trust in Science in the Netherlands 2015. Den Haag: Rathenau Instituut. Retrieved from www.rathenau.nl.
Desmond, A., and Moore, J. (2009). Darwin's Sacred Cause: Race, Slavery and the Quest for Human Origins. Boston: Houghton Mifflin Harcourt.
Ellis, R., and Waterton, C. (2004).
Environmental Citizenship in the Making: The Participation of Volunteer Naturalists in UK Biological Recording and Biodiversity Policy. Science and Public Policy, 31(2), 95–105. Epstein, S. (1996). Impure Science: AIDS, Activism, and the Politics of Knowledge. Berkeley: University of California Press. Fagan, Melinda B. (2010). Social Construction Revisited: Epistemology and Scientific Practice. Philosophy of Science, 77(1), 92–116. doi:10.1086/650210 Feyerabend, P. (1975). Against Method: Outline of an Anarchistic Theory of Knowledge. London: New Left Books. Gauchat, G. (2011). The Cultural Authority of Science: Public Trust and Acceptance of Organized Science. Public Understanding of Science, 20(6), 751–770. doi:10.1177/ 0963662510365246 Geison, G. L. (1995). The Private Science of Louis Pasteur. Princeton: Princeton University Press. Gieryn, T., Bevins, G. M., and Zehr, S. C. (1985). Professionalisation of American Scientists: Public Science in the Creation/evolution Trials. American Sociological Review, 50, 392–409. Gould, S. J. (1981). The Mismeasurement of Man. New York: Norton. Jardine, N., Secord, J. A., and Spary, E. C., eds. (1996). Cultures of Natural History. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781316162514.002 Published online by Cambridge University Press 34 Halffman Jasanoff, S. (2003a). Breaking the Waves in Science Studies: Comment on H. M. Collins and Robert Evans, The Third Wave of Science Studies. Social Studies of Science, 33(3), 389–400. Jasanoff, S. (2003b). Technologies of Humility: Citizen Participation in Governing Science. Minerva, 41(3), 223–244. Kuhn, T. S. (1962/1970). The Structure of Scientific Revolutions. Chicago: University of Chicago Press. Kwa, C. (2011). Styles of Knowing. Pittsburgh: University of Pittsburgh Press. Latour, B. (1983). Give Me a Laboratory and I Will Raise the World. In K. D. Knorr- Cetina and M. Mulkay, eds., Science Observed: Perspectives on the Social Study of Science. 
Beverly Hills: SAGE Publications. Latour, B. (1987). Science in Action: How to Follow Scientists and Engineers through Society. Cambridge: Harvard University Press. Latour, B. (1993). The Pasteurization of France. Cambridge: Harvard University Press. Latour, B. (2013a). Biography of an Inquiry: On a Book about Modes of Existence. Social Studies of Science, 43(2), 287–301. Latour, B. (2013b). An Inquiry Into Modes of Existence. Cambridge: Harvard University Press. Latour, B., and Woolgar, S. (1979). Laboratory Life: The Construction of Scientific Facts. Beverly Hills: SAGE Publications. Law, J. (1987). Technology and Heterogeneous Engineering: The Case of the Portuguese Expansion. In W. Bijker, T. P. Hughes, and T. J. Pinch, eds., The Social Construction of Technical Systems: New Directions in the Sociology and History of Technology (pp. 111–134). Cambridge, MA: MIT Press. Lawrence, A., and Turnhout, E. (2010). Personal Meaning in the Public Sphere: The Standardisation and Rationalisation of Biodiversity Data in the UK and the Netherlands. Journal of Rural Studies, 30, 1–8. Leydesdorff, L., ed. (1980). Philips en de wetenschap. Amsterdam: SUA. Lomborg, B. (2001). The Skeptical Environmentalist. Cambridge: Cambridge University Press. Macfarlane, B., and Cheng, M. (2008). Communism, Universalism and Disinterestedness: Re-examining Contemporary Support among Academics for Merton’s Scientific Norms. Journal of Academic Ethics, 6(1), 67–78. doi:10.1007/ s10805-008–9055-y McDonald, F. (2016). 8 Scientific Papers That Were Rejected Before Going on to Win a Nobel Prize. ScienceAlert (16 August). Retrieved from www.sciencealert.com /these-8-papers-were-rejected-before-going-on-to-win-the-nobel-prize Merton, R. K. (1973 ). The Normative Structure of Science. In R. K. Merton and N. W. Storer, eds., The Sociology of Science: Theoretical and Empirical Investigations (pp. 267–278). Chicago: University of Chicago Press. Minnis, P. E. (2000). Ethnobotany: A Reader. 
Norman: University of Oklahoma Press. Mitroff, I. I. (1974). Norms and Counter-Norms in a Select Group of the Apollo Moon Scientists: A Case Study of the Ambivalence of Scientists. American Sociological Review, 39(4), 579–595. Mnookin, J. L. (2001). Scripting Expertise: The History of Handwriting Identification Evidence and the Judicial Construction of Reliability. Virginia Law Review, 87(8), 1723–1845. https://doi.org/10.1017/9781316162514.002 Published online by Cambridge University Press What Is Science? (And Why Does This Matter?) 35 Oreskes, N., and Conway, E. M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming: New York: Bloomsbury Press. Pickstone, J. V. (2000). Ways of Knowing: A New History of Science, Technology and Medicine. Manchester: Manchester University Press. Popper, K. (1934/1959). The Logic of Scientific Discovery. London: Hutchison. Popper, K. (1942/1966). The Open Society and its Enemies (revised fifth edition edn.). London: Routledge and Kegan Paul. Radder, H. (2010). Mertonian Values, Scientific Norms, and the Commodification of Academic Research. In H. Radder, ed., The Commodification of Academic Research (pp. 231–258). Pittsburgh: University of Pittsburgh Press. Rip, A. (2003). Constructing Expertise: In a Third Wave of Science Studies? Social Studies of Science, 33(3), 419–434. Shackley, S., and Wynne, B. (1996). Representing Uncertainty in Global Climate Change Science Policy: Boundary-Ordering Devices and Authority. Science, Technology, and Human Values, 21(3), 275–302. Solomon, S. M., and Hackett, E. J. (1996). Setting Boundaries between Science and Law: Lessons from Daubert v. Merrell Dow Pharmaceuticals, Inc. Science, Technology, and Human Values, 21(2), 131–156. Van Zwanenberg, P., and Millstone, E. (2000). Beyond Skeptical Relativism: Evaluating the Social Constructions of Expert Risk Assessments. Science, Technology & Human Values, 25(3), 259–282. Verran, H. 
(2001). Science and an African Logic. Chicago: University of Chicago Press. Wittgenstein, L. (1953). Philosophical Investigations. Oxford: Blackwell. Wynne, B. (2003). Seasick on the Third Wave? Subverting the Hegemony of Propositionalism: Response to Collins & Evans (2002). Social Studies of Science, 33(3), 401–417. Ziman, J. (1994). Prometheus Bound: Science in a Dynamic Steady State. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781316162514.002 Published online by Cambridge University Press
