Animal Models in Biomedical Research

Document Details

Uploaded by AdorableSardonyx

2015

Kirk J. Maurer and Fred W. Quimby

Tags

Animal Models Biomedical Research Vertebrate Animals Scientific Discovery

Summary

This chapter from Laboratory Animal Medicine, Third Edition, provides a general description of animal models used in biomedical research. It highlights historical advancements and recent progress, focusing on vertebrate animals. Keywords: Animal Models, Biomedical Research, Scientific Discovery.

Full Transcript

CHAPTER 34

Animal Models in Biomedical Research

Kirk J. Maurer, DVM, PhD, ACLAM(a) and Fred W. Quimby, VMD, PhD, ACLAM(b)

(a) Center for Comparative Medicine and Research, Dartmouth College, Lebanon, NH, USA
(b) Rockefeller University, New Durham, NH, USA

Laboratory Animal Medicine, Third Edition. DOI: http://dx.doi.org/10.1016/B978-0-12-409527-4.00034-1. © 2015 Elsevier Inc. All rights reserved.

OUTLINE

I. Introduction
II. What Is an Animal Model?
   A. Types of Models
   B. Principles of Model Selection and the 'Ideal' Animal Model
III. Nature of Research
   A. Hypothesis Testing and Serendipity
   B. Breakthroughs in Technology: Paradigm Shifts
IV. History of Animal Use in Biomedical Research
   A. Early History
   B. From Pasteur to the Genomic Era
   C. Genomics, Comparative Genomics, and the Microbiome
   D. Animals as Recipients of Animal Research
   E. Perspectives on the Present State of Animal Research
References

I. INTRODUCTION

Animal models have contributed to scientific discovery throughout the millennia. This chapter focuses on a general description of animal models and highlights the contribution of animals to scientific discovery, drawing on historical advancements as well as key recent advances. Although, in general, animal models can span the phylogenetic tree, the chapter presented herein focuses its attention on the use of vertebrate animals in research.

II. WHAT IS AN ANIMAL MODEL?

A. Types of Models

1. Introduction

There are many types of models used in biomedical research, e.g., in vitro assays, computer simulations, mathematical models, and animal models. Vertebrate animals represent only a fraction of the models used, yet they have been responsible for critical research advancements (National Research Council, 1985). Invertebrate models have made a profound impact in the areas of neurobiology, genetics, and development and include the nematode Caenorhabditis elegans, protozoa, cockroaches, sea urchins, the fruit fly Drosophila melanogaster, Aplysia, and squid, among others. The sea urchin, for instance, contributed to the discovery of meiosis, the events associated with fertilization, the discovery of cell sorting by differential adhesion, basic control of cell cycling, and cytokinesis (National Research Council, 1985). Similar lists can be prepared for insects, squid, and other marine invertebrates. These invertebrate models have been previously reviewed and are not the major focus of this discussion (National Research Council, 1985; Woodhead, 1989; Huber et al., 1990; Jasny and Koshland, 1990).

A model serves as a surrogate and is not necessarily identical to the subject being modeled (National Research Council, 1998; Scarpelli, 1997). In this chapter it is assumed that the human biological system is the subject being modeled; however, many of the advances made through studies of animal models have been applicable to animals other than humans. Conceptually, animals may model analogous processes (relating one structure or process to another) or homologous processes (reflecting counterpart genetic sequences). Prior to the rapid advancement of genomic sequencing and genomic manipulation, many animal models were selected as phenotypic analogs of human processes and conditions insofar as the human condition appeared similar, even though the underlying genetic homology may or may not have been identical.
To date, the primary driver of homologous modeling has been the genetically manipulated mouse. The rapid advancement of genetic manipulation in other species will likely result in an expansion of homologous modeling in those species as well. Another useful concept in modeling concerns one-to-one modeling versus many-to-many modeling. In one-to-one modeling, a model is sought that generally demonstrates a phenotype similar to that which is being modeled. Examples of one-to-one modeling include many infectious diseases and both spontaneous and induced monogenic diseases. Many-to-many modeling results from analysis of a process in an organism or organisms where each component feature of that process is evaluated at several hierarchical levels, e.g., system, organ, tissue, cell, and subcellular levels (National Research Council, 1985; Office of Technology Assessment, 1986). The understanding that many of the most common diseases, such as cancer and obesity, are complex, often polygenic, and shaped by multiple interacting environmental influences has made the use of many-to-many modeling more common. The advent of high-throughput techniques such as sequencing, transcriptomics, and proteomics has facilitated this process. Often, many-to-many modeling requires the use of multiple model systems, including computational modeling, in vitro modeling, in vivo modeling, and population-based studies in humans. Importantly, each has its own relative strengths and weaknesses, and each may be used to continuously refine a hypothesis or group of hypotheses. In this context it is important to note that, despite the many different factors modifying the evolutionary history of humans and other animals, comparative genomics demonstrates that there remains an impressive degree of genetic conservation between commonly utilized research species and humans.

2. Spontaneous and Induced Animal Models

Animal models can be classified as spontaneous or induced. Spontaneous models may be represented by normal animals with phenotypes similar to those of humans or by abnormal members of a species that arise through spontaneous mutation(s). In contrast, animals submitted to surgical, genetic, chemical, or other manipulation resulting in an alteration to their normal physiologic state are induced models. The single largest category of induced models is that which arises through intentional genetic manipulation. Occasionally, investigators will refer to another category of animal model, the so-called negative model. This is an animal that fails to develop or is protected from developing a particular phenotype. Some of the best-characterized spontaneous models are those with naturally occurring mutations that lead to disorders similar to those in man.
Among the best-known spontaneous models are the Gunn rat (hereditary hyperbilirubinemia), piebald lethal and lethal spotting strains of mice (aganglionic megacolon), the nonobese diabetic mouse and BB Wistar rat (type 1 diabetes mellitus), New Zealand Black and New Zealand White mice and their hybrids (autoimmune disease), nude mice (DiGeorge syndrome), SCID mice (severe combined immunodeficiency), Watanabe rabbits (hypercholesterolemia), Brattleboro rats (neurogenic diabetes insipidus), obese chickens (autoimmune thyroiditis), spontaneously hypertensive rats (SHR; primary hypertension), dogs and mice with Duchenne X-linked muscular dystrophy, dogs with hemophilia A and B, swine with hyper-low-density lipoproteinemia and malignant hyperthermia, mink with Chediak–Higashi syndrome, cats with achalasia, gerbils with epilepsy, cattle with ichthyosis congenita and hyperkeratosis, and sheep with Dubin–Johnson syndrome (Andrews et al., 1979). A significant limitation of spontaneous models is that their development or occurrence is quite often unpredictable and often relies upon chance.

A unique and compelling subset of spontaneous animal models are veterinary clinical patients as models of disease treatment. Recently, new cancer treatments, including novel chemotherapeutics, have been evaluated in dogs that presented with spontaneous tumors (Peterson et al., 2010). These models are intriguing because they have the potential to directly benefit the animals in which the drugs and treatments are being tested and to eventually benefit human patients. Further, unlike other commonly utilized species (i.e., inbred mice), the spontaneous tumors that develop in these veterinary patients may more faithfully recapitulate the heterogeneity of human tumors.

Induced models have been used to unravel some of the most important concepts in physiology and medicine. Surgical models contributed greatly to the understanding of brain plasticity (Florence et al., 1998; Jones and Pons, 1998; Merzenich, 1998); organ transplantation; coronary bypass surgery; balloon angioplasty; replacement of heart valves; development of cardiac pacemakers; the discovery of insulin; fluid therapy and other treatments for shock, liver failure, and gallstones; and surgical resection of the intestines, including the technique of colostomy (Council on Scientific Affairs, 1989; Bay et al., 1995; Quimby, 1994a, 1995). Additional useful models have been induced by diet or administration of drugs or chemicals. Alloxan and streptozotocin have been used to study insulin-dependent diabetes because, when injected, these chemicals selectively destroy the beta cells of the islets of Langerhans (Golob et al., 1970; Sisson and Plotz, 1967). More recently, chemical mutagenesis approaches have been utilized as a tool to conduct forward mutagenesis screens in mice and zebrafish (Becker et al., 2006). Diet-induced models have been responsible for the discovery of most vitamins and the necessity for trace minerals as nutrients, as well as for exploration of the pathogenesis of many diseases. Observations of chickens with beriberi (thiamin deficiency) resulted in a cure for humans (and animals) and led to the discovery of vitamins (Eijkman, 1965).
In fact, dietary manipulations in the chicken alone have contributed to our knowledge of rickets (Kwan et al., 1989), vitamin A deficiency (Bang et al., 1972), vitamin B6 deficiency (Masse et al., 1989), zinc deficiency (O'Dell et al., 1990), Friedreich's ataxia (van Gelder and Belanger, 1988), fetal alcohol syndrome (Means et al., 1988), and atherosclerosis (Kottke and Subbiah, 1978; Kritchevsky, 1974).

Often, complex induced models are created by combining multiple experimental manipulations. An excellent example is the humanized severe combined immunodeficient (hu-SCID) mouse, where a natural mutation in the RAG1 gene prevents T- or B-cell antigen receptor rearrangements, resulting in a severe combined immunodeficiency. When this mouse is injected with human lymphocytes or stem cells, it adopts the immune system of humans (Carballido et al., 2000). Injection of this reconstituted mouse with HIV-1 virus leads to viral propagation and a small-animal model for the assessment of anti-HIV drugs (Mosier, 1996). More recently, the NSG (NOD, SCID, IL2 receptor knockout) mouse has been utilized to create 'humanized' mice. Specifically, these already immunodeficient mice are exposed to myeloablative irradiation and then reconstituted with human stem cells, generally hCD34+ human hematopoietic stem cells. These mice can then be utilized to study a variety of human infectious and immunological diseases (Zhang et al., 2010).

A promising new class of models has been developed for personalized cancer treatment. In general, these rely on transplanting tissue biopsies from patients with tumors into a variety of immunodeficient mice and testing various treatment modalities to determine which treatment might be most efficacious for that patient's specific tumor (Hidalgo et al., 2011). The results of these studies are promising insofar as drug susceptibility identified in orthotopically transplanted mice seems to faithfully reflect the susceptibility noted in the human patients (Hidalgo et al., 2011).

Another recent induced model involves manipulation of the host microbiota, reflecting the understanding of the critical role that the microbiota plays in a variety of diseases (Ukhanova et al., 2012; Garrett et al., 2010; Gordon, 2005; Backhed et al., 2005; Hooper et al., 2002). The mouse, and in particular the gnotobiotic mouse, has largely been responsible for our understanding of this process. In a series of elegant studies it was demonstrated that the microbiome of mice is critical for the development of diet-induced obesity and that this phenotype can be altered by altering the microbiome (Turnbaugh et al., 2008; Backhed et al., 2007; Turnbaugh et al., 2006). Likewise, other studies have demonstrated that the microbiome plays roles in a variety of diverse diseases (Fremont-Rahl et al., 2013; Greer et al., 2013; Hansen et al., 2013; Karlsson et al., 2013; Kostic et al., 2013; Mathis and Benoist, 2012; Schwabe and Jobin, 2013). One thing that differs between these induced models and many others described within this chapter is that the inducing factor is not an exogenous source (e.g., toxin, pathogen, surgery) but something entirely indigenous to the animal. The ability of the microbiome to impact a broad range of host processes invariably leads to the realization that the ability to replicate certain phenotypes or studies may itself be impacted by the microbiome.
That is to say, study variability at different institutions, and perhaps even among different rooms and mouse colonies at the same institution, may be impacted by unknown variations in the microbiome of the research subject.

Animals have, of course, served as models of toxicology and drug safety for a very long time. The use of animal models in drug safety, pharmacology, and toxicity testing is an important component of preclinical studies involving these products. Traditional rodent models for toxicology studies include the B6C3F1 hybrid mouse and the Fischer 344 (F344) rat, which have been characterized extensively to describe their spontaneous histological lesions as reference points in toxicological studies (Ward et al., 1979; Goodman et al., 1979). Larger animal models, including dogs, various nonhuman primates, and swine, have also been utilized in toxicological and drug safety research (Gad, 2007).

3. Genetically Manipulated Animals

Due to the close homology of the mouse genome to the human genome, direct manipulation of the mouse genome has produced a great number of animal models that robustly mimic the intended human disease phenotype. The techniques utilized to create knockout and transgenic mice are covered in more detail in other chapters in this text and are extensively detailed in other texts; however, we will briefly describe some historical and common methods used to generate these induced models here (Behringer et al., 2013; Pluck and Klasen, 2009).

a. Chemical Mutagenesis

When the drug N-nitroso-N-ethylurea (ENU) is injected into male mice, single base pair mutations are created in the germ cells. By breeding progeny and backcrossing mice, homozygotes for the mutated allele are obtained. Genes in mouse embryonic stem cells (ESCs) can also be mutated by use of ENU. Many useful models of human disease have been so created in mice, including models for phenylketonuria (mutated phenylalanine hydroxylase gene), α-thalassemia (α-globin), β-thalassemia (β-globin), osteopetrosis (carbonic anhydrase II), glucose-6-phosphate deficiency, tetrahydrobiopterin-deficient hyperphenylalaninemia (GTP-cyclohydrolase I), Duchenne muscular dystrophy (dystrophin), triose-phosphate isomerase deficiency, adenomatous intestinal polyposis coli, hypersarcosinemia (sarcosine dehydrogenase), erythropoietic protoporphyria (ferrochelatase), and glutathionuria (γ-glutamyltranspeptidase) (Herweijer et al., 1997). Zebrafish, Danio rerio, have been used extensively for studies in development because their embryos are transparent, each clutch contains 50–100 embryos, and the fish are amenable to large-scale mutagenesis using compounds like ENU (Driever and Fishman, 1996). Distinct genes have different mutability rates; however, ENU is reported to induce genetic mutations at an average induction rate of 1 in 1000. This estimate serves as the basis for large-scale genomic screens (Nusslein-Volhard, 1994).
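As a rough illustration of why that figure supports large-scale screens, the sketch below (not from the chapter) assumes the quoted 1-in-1000 value is a per-locus, per-genome mutation rate and that hits in different genomes are independent; it then estimates how many hits in a single gene of interest a screen of a given size should expect, and the chance of recovering none at all.

```python
# Back-of-the-envelope sketch of ENU screen scale.
# Assumption (for illustration only): ~1 mutation per 1000 mutagenized
# genomes at any given locus, with independent hits across genomes.
import math

rate = 1.0 / 1000  # assumed per-locus, per-genome induction rate

for n_genomes in (500, 1000, 3000):
    expected_hits = n_genomes * rate
    p_none = math.exp(-expected_hits)  # Poisson approximation for zero hits
    print(f"{n_genomes} genomes screened: ~{expected_hits:.1f} expected hits "
          f"in a chosen gene, {p_none:.0%} chance of missing it entirely")
```

Under these assumptions, on the order of a few thousand mutagenized genomes must be screened before any particular locus is hit with high probability, which is consistent with the scale of the screens described above.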
A limitation of ENU mutagenesis is that mutations occur relatively randomly. Once a phenotype is established, chromosomal mapping and large-scale sequencing were necessary to determine which gene had been mutated to produce the phenotype. Until very recently, large-scale rapid automated sequencing was for the most part unavailable, or available only as a very limited and expensive resource. More recently, the reduced cost and increased availability of automated high-throughput sequencing have made sequencing and data analysis of mice and zebrafish created by these forward genetic screens far more practical.

b. Irradiation

Irradiation has been used as a germline mutagen dating back to the early 1920s. X-rays have been shown to cause small chromosomal deletions in mouse spermatogonia, postmeiotic germ cells, and oocytes (Takahashi et al., 1994). Examples of radiation-induced models in wide use include the beige mouse (bg), dominant cataract (Cat-2t), and cleidocranial dysplasia (Roths et al., 1999). Because X-rays often produce large deletions, this technique has significant practical limitations due to the low recovery rate of mutant mice.

c. Transgenics and Targeted Mutations

The first method of transgenic manipulation was described by Gordon and Ruddle (1981) and involved direct insertion of cloned genetic material into the pronucleus of a fertilized mouse egg. This method is relatively straightforward but is limited in that the site of integration is fairly random. Around the same time, mouse ESC lines were first produced and maintained in culture (Martin, 1981). This discovery allowed investigators to insert genes by targeted homologous recombination, using vectors that contain selectable markers. As a result, ESCs with a targeted mutation can be microinjected into a developing mouse embryo, which is then implanted into pseudopregnant recipient mothers. Offspring are chimeras because they contain cells of both cultured ESC origin and embryo origin. With molecular screening and appropriate matings, founders with germline transmission of the mutation are eventually produced.

These technologies can be utilized to create gain-of-function and loss-of-function mutants and combinations of the two. Through the use of tissue-specific promoters or enhancers, and of receptors whose transcription can be controlled by exogenous drugs and chemicals, gene expression can be altered so that it occurs in a tissue-specific or temporally regulated fashion. A common example is created by flanking the gene of interest with loxP sequences (so-called 'floxed' genes/mice), which are targets for bacteriophage Cre recombinase. Crossing floxed mice to mice expressing Cre recombinase under the control of a desired promoter results in tissue-specific exon excision and ablation of gene function (Gordon, 1997; Nagy and Rossant, 1996). Alternatively, Cre can be exogenously provided by viral or other vectors, which can be administered to specific anatomic sites (brain, lungs, liver, etc.) by direct injection or through viral tissue targeting (van der Neut, 1997). Insertion of a tetracycline-responsive element into this system allows genes to be turned off or on in response to tetracycline administration (Utomo et al., 1999). Another example is a fusion between Cre, under the control of a tissue-specific promoter, and a mutated steroid ligand-binding domain; Cre activity is then induced by administering tamoxifen or other similar synthetic steroids to the mice (Schwenk et al., 1998). The use of the Cre-lox system has proven so valuable to research that any number of Cre-expressing strains of mice, under the control of a variety of tissue-specific promoters or chemical mediators, can be purchased from commercial vendors and bred directly to 'floxed' mice.
Examples include Cre driven from the albumin promoter (liver), the actin promoter (muscle), and the CD8 promoter (T lymphocytes). A complete list of available strains can be found at The Jackson Laboratory Cre repository website (http://jaxmice.jax.org/list/xprs_creRT1801.html).

Another modification of the standard microinjection method for producing transgenic mice is one in which large multilocus segments of human DNA are transferred into the mouse pronucleus in the form of yeast artificial chromosomes (YACs). The entire β-globin multigene locus (248 kb) was cloned into yeast, and once integrated, this locus could be mutated at precise points by homologous recombination. After transferring YACs and mutated YACs into mice, the full developmental expression of the epsilon, gamma, beta, and delta genes was observed, since the YAC also contained the human locus control region that interacts with structural genes to ensure that the correct globin is produced at the proper time and place during development (Peterson et al., 1998; Porcu et al., 1997). These YAC transgenic mice are free of the restrictions inherent in single-gene cloned DNA, e.g., the genomic organization is not disrupted around the structural gene; thus, higher levels of transcription and developmental regulation of gene expression can be studied.

The mouse embryo has proven relatively easy to genetically manipulate when compared to some other mammalian genomes. If one examines the relative availability of other genetically engineered mammals, it is evident that these techniques have not always readily translated to other species. The limiting factor of these techniques is the fairly low frequency of recombination events. This low frequency, 1 in 10^4–10^7 for targeted mutation, necessitates the use of large numbers of implantable embryos or the use of ESCs and selectable markers, techniques which may not be readily available for all species (Templeton et al., 1997). Recently, however, several technologies have allowed for more efficient creation of non-mouse transgenic/knockout animals.

The first of these techniques is lentivirus-mediated transgenic manipulation; briefly, this involves the use of modified lentiviral vectors to deliver exogenous nucleic acid. This technique takes advantage of the natural ability of lentiviruses to integrate into the host genome. There are several advantages to this in comparison to standard pronuclear injection for the creation of transgenic animals. First, the process is not technically demanding; additionally, when compared to standard pronuclear injection, the proportion of progeny bearing the transgene is very high (Park, 2007). The ability to create a higher percentage of progeny expressing the transgene has allowed this technique to be used in a variety of mammalian and avian species because far fewer embryos are needed (Park, 2007).

Another technique is the use of zinc-finger nucleases (ZFNs) (Le Provost et al., 2010; Carroll, 2011a,b). These enzymes are targetable recombinases that can be used to induce homologous recombination or removal of a portion of the genome. ZFNs contain separate DNA-binding and cleavage domains and, unlike standard knockout techniques, rely upon the normal double-strand break repair process of the host (Carroll, 2011a,b). ZFN DNA binding is governed by a distinct three-nucleotide sequence; however, not all nucleotide sequences and their corresponding ZFNs have been identified (Wei et al., 2013). An advantage of ZFNs is that they are far more efficient when compared to standard strategies, and efficiencies of around 10% are reported (Carroll, 2011a,b).
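To give a feel for why this difference in efficiency matters in practice, the following back-of-the-envelope sketch (an illustration, not a protocol from the chapter) treats the figures cited above as per-attempt success probabilities and asks how many zygotes or embryos would be needed to recover at least one correctly targeted founder; the true rates vary widely by locus, species, and laboratory, so the numbers are placeholders only.

```python
# Rough comparison of embryo numbers implied by the cited success rates,
# assuming independent attempts with a fixed per-attempt probability.
import math

def embryos_needed(rate_per_attempt, confidence=0.95):
    """Smallest n such that P(at least one success in n attempts) >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - rate_per_attempt))

for label, rate in [("conventional targeting, 1 in 10^5", 1e-5),
                    ("conventional targeting, 1 in 10^7", 1e-7),
                    ("ZFN-assisted, ~10%", 0.10)]:
    print(f"{label}: ~{embryos_needed(rate):,} attempts for a 95% chance")
```

At conventional targeting frequencies the required numbers are only workable with cultured ESCs and selectable markers, whereas at ZFN-level efficiencies direct manipulation of zygotes becomes feasible, which is the practical advantage discussed next.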
This high efficiency means the manipulation can be conducted without the use of ESCs and can be done directly in the zygote or embryo (Carroll, 2011a,b). To date, a variety of genetically modified species have been created using this technique, and because ZFN technology is still in its relative infancy, even broader application is quite probable.

The TALEN (transcription activator-like effector nuclease) technique is similar to the ZFN technique insofar as it relies upon a nuclease. TALEN, however, utilizes TAL (transcription activator-like) effector elements (TALEs). These elements are produced by bacteria and used to modulate transcription of the host genome in a way that benefits the pathogen by binding to host DNA and activating transcription (Doyle et al., 2013). TALEs have the advantage of being able to be engineered to bind targeted areas of the genome. The DNA-binding specificity of TALEs was demonstrated to be mediated by two amino acids in the TALE protein that correspond directly with the nucleic acid sequence of the target site (Moscou and Bogdanove, 2009). This is a distinct advantage over ZFN technology because it means that synthetic TALEs can be generated with direct specificity. By further coupling a TALE to a nuclease (TALEN), DNA cleavage can be generated in a site-specific fashion (Bogdanove and Voytas, 2011). To date, this technique has proven highly efficient in organisms as diverse as plants and large agricultural animals (Wei et al., 2013).

The final, and most recent, technique is called CRISPR. In bacteria, a CRISPR (clustered regularly interspaced short palindromic repeat) locus is a DNA sequence containing direct nucleic acid repeats with intervening nucleic acid regions called spacers (Wei et al., 2013). Bacteria can incorporate foreign nucleic acids into CRISPR regions of their own genome (Bhaya et al., 2011; Ishino et al., 1987). The bacteria then transcribe this region of DNA, and with the help of CRISPR-associated (Cas) nucleases, the small transcribed RNA molecules target invading foreign nucleic acid for cleavage (Bhaya et al., 2011). In this manner, bacteria effectively utilize invading viral genomes against themselves in a primitive adaptive immune response. The transcribed CRISPR RNA (crRNA) targets the nucleic acid of invading pathogens by direct Watson–Crick base pairing, and Cas proteins cleave the invader (Reeks et al., 2013).

Genetic manipulation in eukaryotes can therefore be accomplished by generating crRNA molecules against specific genomic regions. The technique is in its infancy, but efficiencies of over 90% have been reported in mice, and the technique has been used in other species as well (Pennisi, 2013; Sung et al., 2014). Indeed, the highly effective nature of this technique has led many to speculate that it could be used to treat or reverse human genetic conditions, but enthusiasm may need to be tempered by findings of a high degree of off-target modification in human cells (Schwank et al., 2013; Fu et al., 2013). Regardless of the long-term capability of the technique to treat human diseases, its high efficiency likely means that it will be used in a greater array of vertebrates in the future.
B. Principles of Model Selection and the 'Ideal' Animal Model

Various authors have attempted to define the 'ideal' animal model. Features such as (a) similarity to the process being mimicked, (b) ease of handling, (c) ability to produce large litters, (d) economy of maintenance, (e) ability to sample blood and tissues sequentially in the same individual, (f) defined genetic composition, and (g) defined disease status are commonly mentioned (Dodds and Abelseth, 1980; Leader and Padgett, 1980). Perhaps the most important single feature of the model is how closely it resembles the original human condition or process. Shapiro uses the term 'validation' for the formal testing of the hypothesis that significant similarities exist between the model and the modeled (Shapiro, 1998). He argues that to be valid, the animal model should be productive of new insights into, and effective treatments for, the human condition being modeled.

The National Research Council (NRC) recommended criteria for models to be financially supported (National Research Council, 1998). Among the criteria listed were that the model (1) is appropriate for its intended use(s) (a specific disease model faithfully mimics the human disease, and a model system is appropriate for the human system being modeled); (2) can be developed, maintained, and provided at reasonable cost in relation to the perceived or potential scientific values that will accrue from it; (3) is of value for more than one limited kind of research; (4) is reproducible and reliable, so results can be confirmed; and (5) is reasonably available and accessible. These seem to be prudent criteria to follow when a funding organization seeks the greatest benefit within the confines of a finite budget and when an investigator is seeking the best model to utilize. These recommendations also fulfill most of the criteria of an 'ideal' model.

III. NATURE OF RESEARCH

A. Hypothesis Testing and Serendipity

1. The Progressive and Winding Route to Discovery

Francis Bacon (1620) proposed a process of scientific discovery based on a collection of observations, followed by a systematic evaluation of these observations in an effort to demonstrate their truthfulness. Bacon's requirement for elimination of all inessential conditions (those not always associated with the phenomenon under study) was, in the end, unachievable, and the process of choosing facts was found to depend on individual judgment. However, Bacon did set the tenets for what would become the method of hypothesis testing. Arguably, the foundation for sorting fact from fiction in scientific investigations is hypothesis testing (a particularly weak aspect of Bacon's philosophy). Although it is never possible to directly prove a hypothesis by experimentation, only to disprove one (or more) alternative (null) hypotheses, history has documented the steady (although sometimes slow) progress toward understanding the scientific world. Additionally, observations made during the testing of one hypothesis have often led investigators in an altogether different direction. One may argue, and rightfully so, that hypothesis testing is an inefficient mechanism for discovery; however, this paradigm of generating a hypothesis based on known facts and designing experiments to disprove the hypothesis generally produces meaningful and reproducible results.

Coronary bypass surgery serves as an example of just how long, and often how circuitous, the road to medical discovery can be.
The earliest studies that contributed to the first successful bypass surgery in the 1970s go back to 1628, when Harvey described the circulation of frogs and reptiles; then in 1667, Hooke hypothesized (and later demonstrated) that pulmonary blood, flowing through lungs distended with air, could maintain the life of animals. These early observations had no impact on medicine until centuries later, primarily because other technologies necessary for successful application of extracorporeal oxygenation in humans, including antisepsis, anticoagulants, blood groups, anesthesia, etc., had not yet been discovered. Dogs played a critical role during this process of discovery, and between 1700 and 1970 contributed knowledge on the differential pressures in the heart; measurements of cardiac output, cardiopulmonary function, and pulmonary capillary pressure; and the development of heart chamber catheterization techniques, heart–lung pumps, angiography, indirect revascularization, direct autografts, saphenous vein grafts, balloon catheters, and floating catheters (Comroe and Dripps, 1974).

While examining the history behind the 10 most important clinical advances in cardiopulmonary medicine and surgery, Comroe and Dripps (1976) selected 529 key articles (articles that had an important effect on the direction of research) in order to determine how these critical discoveries came about. They found that 41% of these articles reported work that had no relation to the disease it later helped to prevent, treat, or alleviate. This phenomenon probably contributes to the observation that few basic science discoveries, including those conducted using animals, are cited in seminal papers describing a clinical breakthrough. The idea that major clinical breakthroughs required a long history of basic science discoveries, often involving animals and often conducted by individuals who were unaware of the ultimate application of this knowledge, continues to be true today. Rodolfo Llinás, after reflecting on 47 years of research aimed at elucidating the nature of neurotransmission, much of it accomplished using the giant axons of squid, states: "In the end, our complete understanding of this process (synaptic transmission) will manifest itself not as a simple insight, but rather as an ungainly reconstruction of parallel events more numerous than elegant" (Llinás et al., 1999).

Both Bacon and Mill, who followed him, believed it was the responsibility of scientists to find the "necessary and sufficient conditions" that describe phenomena. That exhaustive lists of circumstances had to be examined in the search for what was necessary and sufficient never concerned these philosophers (Bacon, 1990; Mill, 1974). Both saw virtue in this process. The scientific method practiced today evolved from the principles of Bacon and Mill and was refined by the middle of the 19th century. The method provides principles and procedures to be used in the pursuit of knowledge and incorporates the recognition of a problem with the accumulation of data through observation and experimentation (empiricism) and the formulation and testing of hypotheses (Poincare, 1905). The method attempts to exclude the imposition of individual values, unsubstantiated generalizations, and deferments to higher authority as mechanisms for seeking the truth. It also subscribes to basing hypotheses only on the facts at hand and then rigorously testing hypotheses under various conditions.
Hypotheses that appear to be true today may be, and indeed frequently are, disproved or modified in the future as new conditions are imposed upon them and new technologies are employed in the collection of data. Although great discoveries in biology and medicine have depended on the application of these principles, progress is still often slow. As hypotheses are proven incorrect, either slightly or greatly, alternative hypotheses are sought and tested. Unexpected experimental results require careful consideration, and often the reasoned explanation of these data contributes information critical for the formulation of an alternative hypothesis.

In the mid-1970s, a series of breeding experiments was conducted to test the hypothesis that systemic lupus erythematosus (SLE) resulted from a mutation passed between individuals through simple Mendelian inheritance. Dogs that spontaneously developed SLE were bred and their progeny tested (Lewis and Schwartz, 1971). Surprisingly, no offspring in three generations of inbreeding developed SLE, but over half the offspring developed other autoimmune diseases, including lymphocytic thyroiditis, Sjögren's syndrome, rheumatoid arthritis, and juvenile (type 1) diabetes (Quimby et al., 1979). After careful reexamination of the data, it was hypothesized that multiple, independently segregating genes were involved in the predisposition to autoimmunity and, furthermore, that certain genes (class 1) would affect a key component of the immune system common to several autoimmune disorders, with other genes (class 2) acting to modify the expression of class 1 genes, producing a variety of different phenotypes (autoimmune disease syndromes) (Quimby and Schwartz, 1980). Data collected over the next 15 years, using techniques unavailable in the 1970s, have generally upheld this hypothesis and elucidated genetic mechanisms unimaginable at the time (Datta, 2000).

2. Taking Advantage of Unexpected Findings

Serendipity also contributes to important discoveries. In 1889, a laboratory assistant noticed a large number of flies swarming about the urine of a depancreatized dog and brought it to the attention of von Mering and Minkowski. Minkowski discovered, on analysis, that the urine contained high concentrations of sugar. This chance observation helped von Mering and Minkowski discover that the pancreas had multiple functions, one being to regulate blood glucose (Comroe, 1977).

In the late 1800s, Christiaan Eijkman was sent to the Dutch Indies to study the cause of beriberi, a severe polyneuritis affecting residents of Java. While conducting studies, Eijkman noticed that chickens housed near the laboratory developed a similar disease. He tried and failed to transfer the illness from sick to healthy birds; however, shortly thereafter the disorder in chickens spontaneously cleared. Eijkman questioned a laboratory keeper about food provided to the chickens and discovered that, for economy, the attendant had previously switched from the regular chicken feed to boiled polished rice, which he obtained from the hospital kitchen. Several months later the practice of providing boiled rice to the chickens was discontinued, which correlated with disease recovery in the birds. This chance observation led Eijkman to conduct feed trials demonstrating that a factor missing in polished rice caused beriberi and that the disease could be cured by eating unpolished rice.
These studies led to the discovery of the vitamin thiamin and were the first to show that disease could be caused by the absence of something rather than the presence of something, e.g., bacteria or toxins (Eijkman, 1965). These examples reinforce the necessity of making careful observations, investigating unexpected findings, and designing careful follow-up experiments. Eijkman was awarded the Nobel Prize in Medicine in 1929, and Banting and Macleod received the Nobel Prize in 1923 for their discovery of insulin, made possible by the previous observations of von Mering and Minkowski (Leader and Stark, 1987). The issue of serendipity involving laboratory animals in biomedical research was extensively reviewed in a dedicated issue of the ILAR Journal (vol. 46(4), 2005).

B. Breakthroughs in Technology: Paradigm Shifts

In The Structure of Scientific Revolutions, Kuhn makes a case for scientific communities sharing certain paradigms (Kuhn, 1970). Scientific communities consist of practitioners of a scientific specialty who share similar educations, literatures, communications, and techniques and, as a result, frequently have similar viewpoints, goals, and a relative unanimity of judgment. Kuhn believes that science is not an objective progression toward the truth but rather a series of peaceful interludes, heavily influenced by the paradigms (call them theories) shared by the members of a scientific community and interrupted, on occasion, by intellectually violent revolutions that are associated with great gains in new knowledge. Revolutions are a change involving a certain sort of reconstruction of group (community) commitments. They usually are preceded by a crisis (from within or outside the community) experienced by the community that undergoes revolution. Kuhn explains that scientific communities share a disciplinary matrix composed of symbolic generalizations (expressions displayed without question or dissent by group members), beliefs in particular models, shared values, and exemplars (those concrete problem solutions that all students of the community learn during their training). This disciplinary matrix is what provides the glue that keeps members of the community thinking (problem solving) alike. However, it is also what prevents members from taking high-stakes chances and proposing new rules that counter prevailing opinion. Precisely when two members of a community disagree on a theory or principle, because they realize that the paradigm no longer provides a sufficient basis for proof, is the debate likely to continue in the form it inevitably takes during scientific revolutions. What happens during revolutions is that the similarity sets established by exemplars and other components of the disciplinary matrix can no longer neatly classify objects into similar groups. An example is the grouping of sun, moon, Mars, and Earth before and after Copernicus, where a convert to the new astronomy must now say (on seeing the moon), "I once took the moon to be a planet, but I was mistaken." As a result of the revolution, scientists with a new paradigm see differently than they did in the past and apply different rules, tools, and techniques to solve problems in the future (Kuhn, 1970).

In the previous edition of this chapter, the author predicted that we were on the verge of just such a paradigm shift in biology: that the integration of computation into biology would transform or revolutionize the biological field. This prediction has indeed been prescient.
We are now in just such a revolution in biology, one driven largely by advances in computation. In the past, biological studies were often limited by the ability of the observer to assimilate and analyze data; that is to say, studies were confined because individuals were able to analyze only a finite, relatively small amount of information. Now, large data sets can be analyzed by computer programs that can be run on virtually even the simplest home computer. There are numerous examples of this in the literature today, including the widespread use of RNA/cDNA microarrays, high-throughput sequencing, metabolomics, and high-throughput drug screening (Kim et al., 2013; Carrico et al., 2013; O'Brien et al., 2012; Rodriguez and Gutierrez-de-Teran, 2013; Henson et al., 2012; Tian et al., 2012; Garcia-Reyero and Perkins, 2011; Koyuturk, 2010; Laird, 2010; Yen et al., 2009; Dalby, 2007; Kleppe et al., 2006; Zhang and Zhang, 2006; Hennig, 2004; Capecchi et al., 2004; Kim, 2002; Varfolomeev et al., 2002; Gerlai, 2002). It is indeed amazing to consider that merely decades ago it took teams of scientists and a massive input of federal funding to complete the human genome project, whereas today an individual consumer can have their personal genome sequenced and analyzed (Li-Pook-Than and Snyder, 2013). Of course, there are other contributors to this biological revolution aside from computational analysis; specifically, automation, microfluidics, and engineering processes have all contributed to this rapid progression.

Aside from analyzing large data sets, computation allows for the generation of de novo predictions. That is, computer programs can be and are being used to generate new hypotheses. Examples of this type of work are the prediction of three-dimensional protein structure, protein function, and protein–protein interaction based upon amino acid sequence and other input data (Patronov and Doytchinova, 2013; Demel et al., 2008; Breitling et al., 2008; Schrattenholz and Soskic, 2008; Zhao et al., 2008; Ecker et al., 2008). Indeed, there are numerous commercially available software packages capable of doing this. An offshoot of this is novel protein design, or computationally assisted protein design and modification: using programs to generate modified proteins or chemicals with specific user-input characteristics (e.g., altered receptor binding) or capabilities (Patronov and Doytchinova, 2013; Mak et al., 2013; Shublaq et al., 2013).

Indeed, the appearance of computational biology has made completely new methods of analyzing data for patterns or trends necessary. That is to say, the classical statistical analyses used for decades by biologists often fail to capture the complexity of the large data sets now being analyzed. For example, if one were to analyze the transcriptional profile of cells or a mouse under several treatment paradigms and then merely determine which genes were expressed at statistically significant levels under those conditions, there would invariably be a relatively large and somewhat unmanageable data set. The question then becomes: what do these changes tell us about the system being analyzed, and are the individual gene changes truly meaningful? Other analyses may therefore be more beneficial in understanding the system.

Various algorithms may be used to classify expression patterns; two commonly used approaches are unsupervised learning and supervised learning schemes (Allison et al., 2006). Unsupervised learning is relatively free of user input, whereas supervised learning relies on user input to guide the algorithm. Unsupervised learning has the advantage of being free of user-input bias but may also fail to capture differences which result from sample variability as opposed to true population differences (Allison et al., 2006).
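The contrast between gene-by-gene significance testing and the learning approaches just described can be illustrated with a small, self-contained sketch. The example below is not from the chapter: it uses a synthetic expression matrix, assumes NumPy, SciPy, and scikit-learn are available, and all gene counts, effect sizes, and sample labels are invented purely for illustration.

```python
# Minimal sketch: per-gene testing with multiple-testing correction,
# then unsupervised vs. supervised analysis of the same (synthetic) data.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_genes, n_per_group = 5000, 10
control = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
treated = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
treated[:50] += 2.0                     # 50 genes truly shifted by "treatment"

# 1. Gene-by-gene hypothesis testing: thousands of t-tests yield many nominal
#    hits by chance alone, so control the false discovery rate (Benjamini-
#    Hochberg) before interpreting the list.
_, p = stats.ttest_ind(control, treated, axis=1)
order = np.argsort(p)
q = p[order] * n_genes / (np.arange(n_genes) + 1)   # raw BH-adjusted values
q = np.minimum.accumulate(q[::-1])[::-1]            # enforce monotonicity
print("genes passing FDR < 0.05:", int((q < 0.05).sum()))

# 2. Unsupervised learning: cluster the samples with no labels supplied and
#    ask whether the grouping recovers the treatment structure on its own.
samples = np.hstack([control, treated]).T           # samples x genes
print("k-means labels:", KMeans(n_clusters=2, n_init=10).fit_predict(samples))

# 3. Supervised learning: the investigator supplies the labels and a
#    classifier is trained; informative, but user input can also bias results.
labels = np.array([0] * n_per_group + [1] * n_per_group)
clf = LogisticRegression(max_iter=1000).fit(samples, labels)
print("training accuracy:", clf.score(samples, labels))
```

The point of the sketch is the division of labor: the corrected per-gene tests say which individual genes change, the unsupervised clustering asks whether the samples group themselves without labels, and the supervised classifier uses labels supplied by the investigator, which is also where user-input bias can enter.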
There are other techniques in their scientific infancy today, including RNA sequencing, which will undoubtedly provide even greater data sets (Wang et al., 2009). RNA sequencing provides transcriptional profile data similar to microarrays but offers greater discovery power on several levels. First, the expression levels appear to be more precise compared to microarray data (Wang et al., 2009). Additionally, RNA sequencing does not rely upon the known or presumed coding sequence of the organism being examined (Wang et al., 2009); that is, microarrays utilize 'known' cDNA targets, and therefore novel or altered transcripts will be missed (Wang et al., 2009). An additional benefit is that relatively large quantities of RNA are required for microarray analysis, whereas smaller amounts are needed for RNA sequencing (Wang et al., 2009). There remain some challenges with RNA sequencing, including the necessity of assembling the transcriptome, a step which is not necessary with arrays; however, it would appear that the advantages of RNA sequencing will soon make it the gold standard in transcriptional profiling.

Another relatively new technique is ChIP-seq (chromatin immunoprecipitation sequencing). This is a high-throughput method to determine DNA–protein interactions in which cells are fixed to crosslink proteins to the chromosome and then immunoprecipitated with antibodies specific to the protein of interest (Landt et al., 2012). If the protein has bound to any chromosomal regions, this DNA will be co-immunoprecipitated as well. High-throughput DNA sequencing is then conducted to determine the genomic regions being bound by the protein (Landt et al., 2012). The obvious implication is that the protein is interacting in some fashion with the DNA sequence obtained. Data analysis, of course, relies upon computational assembly and annotation of the sequence and identification of the chromosomal region or gene with which it is associated.

It appears, therefore, that this is truly a transformative time in the biological sciences. Automation and radical advancements in computation make even the most complex data sets capable of being analyzed. Perhaps unexpected, but nevertheless true, is the current finding, based on DNA sequence analysis and comparisons between animals and man, that predictions can be made in a comprehensive, unbiased, hypothesis-free manner (Lander, 2011).

IV. HISTORY OF ANIMAL USE IN BIOMEDICAL RESEARCH

A. Early History

Humans have a history of close interaction with animals that extends back over 20,000 years (with the domestication of poultry in China) and includes the domestication of buffalo, cattle, sheep, and dogs between 6,000 and 10,000 years ago. The earliest written records of animal experimentation date to 2000 BC, when Babylonians and Assyrians documented surgery and medications for humans and animals.
True scientific inquiry began in the intellectually liberal climate of ancient Greece, where the teachings of Aristotle, Plato, and Hippocrates symbolized a move to understand natural phenomena without resorting to mysticism or demonology. In this environment, philosophy was conceived and wisdom was admired. Early animal experimentation was conducted in 304 BC by the anatomist Erasistratos, who demonstrated the relationship between food intake and weight gain in birds. In the second century AD, the physician Galen used a variety of animals to show that arteries contained blood and not air, as believed by his contemporaries. During this period, physicians carried out careful anatomic dissections and, on the basis of the comparative anatomy of animals and humans, accumulated a remarkable list of achievements, including a description of embryonic development; the establishment of the importance of the umbilical cord for fetal survival; and the recognition of the relationship between the optic nerves, which arise from the eyes, and the brain. The Greeks, and later the Romans, developed schools of higher learning (including medical schools), created museums, and documented their findings in libraries. Physicians from this period recognized that fever aided the healing process, recognized the inherited nature of certain disorders and classified them, and practiced intubation to prevent suffocation and ligation and excision for the treatment of hemorrhoids.

This brief period of scientific inquiry in Europe gave way to the Middle Ages, a 1200-year period characterized by war, religious persecution, and unsavory politics. From the Middle Ages until the Renaissance, the writings of ancient Greece and Rome remained the final word on science and medicine. Medical education was revived in 10th-century Salerno, Italy, but because of a prohibition on human dissection that lasted into the 13th century, animals were substituted for humans as models in the instruction of anatomy. Because no investigations took place, virtually no new discoveries in medicine were made. Imagine how handicapped these medieval physicians must have been. They still did not know that the filling of the lungs with air was necessary for life, that the body was composed of many cells organized into tissues, that blood circulated and the heart served as its pump, and that blood traverses from arteries to veins in tissues via capillaries; these facts were revealed by Hooke in 1667, Swammerdam in 1667, Van Leeuwenhoek in 1680, Harvey in 1628, Malpighi in 1687, and Pecquet in 1651, respectively, each using animals to demonstrate these basic principles. In part, this return to the process of scientific discovery was built on the foundations established by Francis Bacon, foundations based on collecting facts, developing hypotheses, and attempting to disprove them via experimentation.

The pace of biomedical research increased during the 1700s as Priestley discovered that the life-promoting constituent of air was oxygen. Scientists such as Von Haller, Spallanzani, Trembly, and Stevens, each using animals, discovered the relationship between nerve impulses and muscle contraction, recorded cell division, and associated the process of digestion with the secretions of the stomach.
Hales made the first recording of blood pressure in a horse in 1733, Crawford measured the metabolic heat of an animal using water calorimetry in 1788, and Beddoes successfully performed pneumotherapy in animals in 1795 (although it was not until 1917 that Haldane would introduce modern oxygen therapy for humans). By 1815, Laennec had perfected the stethoscope, using animals. Despite these dramatic gains in medical knowledge, physicians were still not aware of the germ theory of disease (and of course could not avoid, prevent, or treat infections) (Quimby, 1994a).

B. From Pasteur to the Genomic Era

In the 1860s, the French scientist Louis Pasteur discovered that microscopic particles, which he called vibrions (i.e., bacteria), were a cause of a fatal disease in silkworms. When he eliminated the vibrions, silkworms grew free of disease: the first demonstration of the germ theory of disease. In 1877, Pasteur turned his attention to two animal diseases, anthrax in sheep and cholera in chickens. In each disease he isolated the causative agent, reduced its virulence by exposure to high temperature, and showed that on injection the attenuated organism imparted protection against the disease. Pasteur referred to this process as vaccination (from the Latin vacca, 'cow') in homage to the English surgeon Edward Jenner, who had discovered that injection of matter from cowpox lesions into humans protected them against smallpox. Pasteur went on to develop the first vaccine against rabies, in which the virus was attenuated by passage through rabbits. This vaccine was shown to impart protection in dogs and later in humans.

Pasteur's work with microscopic organisms as agents of disease quickly led to two other important discoveries. Joseph Lister, having read of Pasteur's discovery, hypothesized that these microorganisms were responsible for wound infections. He impregnated cloth with an antiseptic of carbolic acid and showed that, when used as a wound dressing, the antiseptic prevented infection and gangrene. This led to the generalized use of antiseptics before surgery and the sterilization of surgical instruments. In 1876, Robert Koch demonstrated a technique for growing bacteria outside of an animal (in vitro) in pure culture. This would reduce the number of animals required to conduct research on infectious agents, and it allowed Koch to establish postulates for definitively associating a specific agent with a specific disease. Using these postulates, Koch discovered the cause of tuberculosis, Mycobacterium tuberculosis, and he developed tuberculin, used to identify infected animals and people. Between 1840 and 1850, Long and Morton demonstrated the usefulness of ether as a general anesthetic, first in animals and later in humans.

1. Contributions to Inheritance

The second half of the 19th century began a new era in biology and medicine. In addition to such medical developments as vaccination, testing for tuberculosis, anesthesia, and blood transfusion, each of which depended on animal experimentation, two other events changed the direction of biological science forever. In 1859, the English naturalist Charles Darwin published On the Origin of Species, in which he hypothesized that all life evolves by selection of traits that give one species an advantage over others. Around the same time, the Austrian monk Gregor Mendel used peas to demonstrate that specific traits are inherited in a predictable fashion.
Nearly half a century later, the English biologist William Bateson reached the same conclusion by selectively breeding chickens and reported his results as Mendel's work was becoming generally recognized. Mendel proposed two laws of heredity: first, that two different hereditary characters, after being combined in one generation, will again segregate in the next; and second, that hereditary characteristics assort in new daughter cells independently (Sourkes, 1966). Unfortunately, Bateson's investigations with chickens did not always give the numerical results expected of two independent pairs of characters. This led Sutton and Boveri, at the turn of the century, to conclude that the threadlike intracellular structures seen duplicating and separating into daughter cells carried the hereditary characters. Later, Thomas Hunt Morgan, using cytogenetics and selective breeding in fruit flies, clearly demonstrated the phenomenon of genetic linkage (Morgan, 1928). Others went on to verify these observations in plants and animals.

During the first half of the 20th century, revelations concerning the discovery of nucleic acids by Kossel, using salmon sperm and human leukocytes (Sourkes, 1966); the structure of nucleotides by P. A. Levene; and the structure of DNA by Watson, Crick, Wilkins, and Franklin depended on advances in chemistry and X-ray crystallography (Watson and Crick, 1953). In fact, it was the application of X-ray diffraction techniques that finally allowed scientists to deduce the double-helical structure of DNA. When Watson and Crick saw Franklin's photographs, it galvanized them into action; by building models of the nucleotides and hypothesizing the points for hydrogen bonding between purines and pyrimidines, they quickly assembled the three-dimensional structure of DNA. Their insight into how the diffraction pattern correlated with helical symmetry allowed for a practical solution to a very complex and, until then, elusive problem. They reinforced the meaning of the term 'great science,' as expressed by Lisa Jardine: "Great science depends on remaining grounded in the real" (Jardine, 1999).

There were 50 years between the isolation of 'nuclein' in leukocytes by Kossel and the discovery of the double-helical structure of DNA, which recognized that the pattern of purine and pyrimidine coupling contained the code for heritability. Likewise, there were 50 years between the hypothesis by Garrod in 1902 that family members with alkaptonuria had inherited a deficiency in a particular enzyme that metabolizes homogentisic acid and Beadle and Tatum's proof, using Neurospora, that X-ray-induced genetic mutations indeed affected the production of specific enzymes (Lederberg and Tatum, 1953). To a certain extent, these latter studies depended on the demonstration that bacteria (and other lower organisms) in fact contain genetic information that controls protein synthesis in a manner similar to that in eukaryotes (Lwoff, 1953). This breakthrough provided the fuel for the revolution in molecular genetics, which included the biological synthesis of deoxyribonucleic acid (Kornberg et al., 1959) and the genetic regulation of protein synthesis (Jacob and Monod, 1961). It is worth noting that the initial observations on heredity predated these initial molecular genetic discoveries by approximately a century. That rate of scientific discovery would be considered glacial in the context of the current pace of research, but it was instrumental nevertheless.
For their achievements in genetics and molecular biology, the following scientists won the Nobel Prize: Thomas Morgan; Albrecht Kossel; George Beadle, Edward Tatum, and Joshua Lederberg; James Watson, Francis Crick, and Maurice Wilkins; Andre Lwoff, Francois Jacob, and Jacques Monod; and Severo Ochoa and Arthur Kornberg.

2. Progress in the Field of Immunology

a. Origins

There may be no field that better illustrates the contribution of vertebrate animal research to biomedical discovery than immunology. The section herein uses immunology as an illustrative example of the contributions of animal models to scientific discovery. The concept of adaptive immunity, developing protection after exposure to an infectious agent or poison, dates back to at least 430 BC, when Thucydides writes of the plague of Athens: "Yet it was with those who had recovered from the disease that the sick and dying found most compassion. These knew what it was from experience and had now no fear themselves; for the same man was never attacked twice—never at least fatally" (Thucydides, 1934). Despite this early recognition, the association of disease with infectious agents was missing. During the 1200s, the Black Death in Europe and the East was attributed to a conjunction of Mars, Saturn, and Jupiter; and later, in the fifteenth century, the appearance of syphilis in Europe was attributed to another conjunction of the same planets (Silverstein, 1989). It was not until the end of the nineteenth century that studies using animals allowed investigators such as Pasteur, Koch, Ehrlich, von Behring, and Metchnikoff to demonstrate the phenomenon of acquired immunity.
