Yökdil 2019 Sonbahar FEN Kaynakları PDF
Summary
This document compiles the English reading passages used as source texts for the YÖKDİL 2019 Autumn (Sonbahar) science exam. The opening passage describes the life and inventions of Rudolf Diesel, focusing on his development of the diesel engine: his early interest in technology, his experiments, the challenges he faced, and the engine's eventual success and impact on industry. The remaining passages cover a range of science topics, from forest operations and animal senses to the Sun, air pollution, and biodiversity.
Full Transcript
In Search of the Perfect Engine

Around 125 years ago, Rudolf Diesel revolutionised industry and the transport sector when he invented his famous diesel engine. But failed investments and accusations of theft weighed heavily on the German engineer, who would eventually disappear under mysterious circumstances.

The three men are impatient. They are standing in front of an impressive engine, which is almost 3 m tall. According to their calculations, the machinery should be able to generate an output power of 5 HP, and after many days of fine-tuning, chief engineer Rudolf Diesel and his two assistants are ready to start the engine on 10 August 1893. Rudolf Diesel steps up to the engine and turns the fuel cock, which allows the light lamp oil to flow into the engine. But at the very moment it is ignited, the pressure rips the engine apart. Diesel and his colleagues are knocked down as sharp metal chips fly through the air. Luckily, the three men are not killed in the violent explosion, and despite the accident, Diesel does not give up his quest. Now he knows that his theories about how to ignite fuel without a spark work. He is convinced that by using pressure, he can create an engine that is more powerful than any other. Over the course of the next few weeks, the experiments continue night and day, and when they are completed, Diesel writes a note to himself: “Even with this imperfect engine, it has been proven that the process can indeed be carried out.”

House was close to exploding

Rudolf Christian Karl Diesel was born on 18 March 1858 in Paris to German parents, and even at an early age, he was interested in technology and enjoyed disassembling his family’s cuckoo clock to study all the springs, toothed wheels, and dials. On other occasions, his curiosity took things a bit far, as when Rudolf experimented with the gas fittings of his home and almost made it explode. In school, Rudolf’s gifts were evident to the teachers, particularly in natural science. When not in school, the boy spent much of his time seeking out many of the engineering inventions offered by Paris back then. Engines particularly caught Rudolf Diesel's attention, and when his parents took him to the world's fair in 1867, the gifted schoolboy was in his element. He saw Nicolaus August Otto’s petrol engine, which was considered the first alternative to the steam engine. Rudolf also began to visit Paris’ technical museums, where he analysed steam and gas engines for hours. The curious boy found one thing particularly interesting: the world’s oldest self-propelled vehicle – a three-wheeled steam tractor from 1769. The 12-year-old Rudolf Diesel very carefully drew a sketch of the tractor in his notebook. The idea of revolutionising world transport had already taken root.

Piston made Diesel think

However, Rudolf Diesel did not have the opportunity to continue his studies in Paris. In 1870, the Franco-Prussian War began, and as German immigrants, the Diesel family was suddenly caught in the crossfire between the two nations. In September 1870, the family boarded one of the many refugee trains to the city of Rouen and took a crammed ship to England. Diesel did not remain in London very long. After only two months, his parents decided to send their son to Germany to live with his father’s cousin and learn German. In his new school, Rudolf Diesel immediately became one of the best pupils, and he knew what he wanted: to become an engineer and invent the perfect engine.
In October 1873, he began his mechanical engineering studies in Augsburg. During his studies, Diesel became fascinated with the modern version of a “fire piston”. The instrument resembles a bicycle pump made of glass, and when the piston is forced into the cylinder, a tinder at the bottom glows. The process shows that the temperature of air rises sharply when it is compressed. The young engineering student was deeply fascinated, pondering for a long time how the discovery could be used in practice. After two years of studies, Rudolf Diesel was enrolled at the Technical University of Munich, and at 21, he completed his studies with the highest grades ever in the history of the school.

Painful headache

Immediately after his exams, Diesel landed a job in Paris, where he was to research cooling methods in a factory. The young engineer worked hard with refrigerators, pumps, and boilers, developing an engine fuelled by ammonia. In September 1881, the talented engineer took out his first patent for a machine that could produce crystal clear ice. However, not everything was a bed of roses. Diesel had begun to suffer from intense headaches and financial problems – both of which continued for the rest of his life. In spite of the difficulties, Diesel experimented intensely with his ammonia engine. His intention was that overheated ammonia gas would one day replace the steam of a steam engine. In theory, that would allow a much higher pressure in the engine and a correspondingly better use of the thermal energy. However, ammonia is hard to control, and Diesel was constantly experimenting his way through unknown territory. Diesel began to work on a combustion engine that used liquid fuel. In 1893, he published a text about his new ideas, and in February 1893, he patented a “new, rational heat engine”. The diesel engine was in sight.

Challenged outdated theories

Rudolf Diesel wanted to design a compact engine that made as much use of the fuel as possible, but unfortunately, he found it hard to convert his theories into concrete reality. Experiments with some of the basic theories he had learned at university convinced the inventor that the theories were outdated. New ones were required. “Both theory and practice have already made me consider overheating steam, and the small engine I built produced surprising evidence of the advantages of overheating,” Diesel wrote about the development of his inventions. “It became clear that the scientific material we have about the behaviour of steam is insufficient for processing the problem any further.” So, Diesel tried to think along new lines, and by pursuing his ideas, he developed a ground-breaking theory that involved increasing the temperature of an engine very rapidly by means of very high pressure. However, Diesel had difficulties carrying his ideas into effect. Investors did not believe that an engine could handle such pressure, and these affluent men therefore did not trust Diesel’s project. Over and over again, he was turned down or received no reply. Finally, he made contact with the Augsburg engine works, and in February 1893, the parties signed a contract. The construction of the world’s first diesel engine could begin.

Professor was impressed

The first experiment ended in an explosion in August 1893, but after minor adjustments, Rudolf Diesel was convinced that he was finally on the right track. About five months after the first test, the engine ran for the first time for an entire minute.
But the victory was not in the bag yet, as the engine could still not run evenly and regularly, and Diesel became ever more depressed due to the prolonged efforts and his poor personal financial situation. In spite of the problems, the inventor worked hard, and in May 1895, the engine functioned well and generated no less than 23 HP. The final experiments were completed in January 1897. Rudolf Diesel had developed an engine that was twice as powerful as all other combustion engines and no less than four times more powerful than the steam engines of the time. In order to boost his engine, Diesel had a professor from the University of Munich carry out the final experiments. The professor was impressed and named Diesel’s invention the world's most efficient engine and the engine of the future.

The money came pouring in

Almost overnight, Rudolf Diesel and his invention became world famous. Finally, the money started pouring in, and Diesel and his family moved to a luxury flat in Munich, but personally, Diesel was falling apart. The engineer still suffered severe headaches and depressions, and in spite of the considerable income, his personal finances remained a mess. Although his invention was acknowledged, competitors accused Diesel of having stolen the idea for the engine, resulting in countless lawsuits that Diesel had to struggle with for the rest of his life. However, the diesel engine was already a success and in high demand. The sale of production licenses went really well, and soon, the first diesel engine formed part of commercial production at a German factory, in which it was used to make matches. In October 1898, Diesel decided to go to a health resort to get over his depression. The fresh air immediately improved his condition, but upon his return to Munich, Diesel made several disastrous estate deals that cost him a major portion of his fortune. He spent some of the remaining money on a large villa filled with superfluous luxury – such as an indoor biking corridor for his kids.

10 million wasted

And not even the major success could remedy Rudolf Diesel’s fragile mental state. He began to see himself as a man who had lost to depression. Diesel was still accused of having plagiarised the diesel engine, and his financial problems were so disastrous that his entire fortune was disappearing before his eyes. In 1913, Rudolf Diesel was ruined. He had lost more than 10 million Reichsmarks over the past 20 years due to failed investments. Depression, insomnia, eternal headaches, and finally severe arthritis made all his waking hours a nightmare. Moreover, the prospects of a world war caused the pacifist Diesel even more worries. The inventor had imagined that his engine should be a gift to mankind, but now he feared that it would be used to cause death and destruction in a war. (He was right.) In August 1913, Diesel agreed to go to England, where a new diesel engine factory was to be built. He was hoping that the trip would take his mind off his worries, at least for a short while. What happened next was, in hindsight, a warning: Diesel bought his wife Martha a fancy overnight bag as a goodbye gift and insisted that she could not open it until one week later.

Family: It was suicide

Before his departure, Diesel made several appointments with English business connections, and on 29 September 1913, he boarded the passenger liner Dresden in the Belgian port of Antwerp. After dinner, Rudolf Diesel went to bed.
At 10 PM, he asked the crew to wake him up at 6.15 the next morning, but when the steward knocked on his door, Diesel's cabin was empty. The bed was untouched, and the crew found Diesel’s hat and carefully folded jacket on the quarterdeck. In spite of a massive search, Rudolf Diesel was nowhere to be found. Ten days later, a towboat near the Dutch coast found a dead body in the water. The sailors removed the body’s ID papers before throwing it back into the water. It turned out to be Rudolf Christian Karl Diesel. When Martha Diesel opened the bag, she found 20,000 marks in cash and a statement of empty bank accounts and an overwhelming debt. The family of Rudolf Diesel had no doubts that the inventor of the diesel engine had killed himself.

What Is The Difference Between Diesel And Petrol?

In some ways, diesel and petrol engines are constructed the same. Both are designed to convert the chemical energy of fuel into mechanical energy, which can power the wheels of a car. The conversion takes place via a series of small explosions. The biggest difference between the two engines is to be found in the way in which the explosions take place. In a petrol engine, fuel is mixed with air, compressed, and ignited by a sparking plug – after which the explosions take place. A diesel engine has no sparking plug. The air is compressed before it is mixed with a different type of fuel (also called diesel). As air is heated by compression, the fuel will auto-ignite.
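The following worked equation is an added illustration, not part of the original passage; the intake temperature, compression ratio, and heat-capacity ratio are assumed typical values. Treating the cylinder air as an ideal gas compressed adiabatically, its temperature rises as

T_2 = T_1 \left(\frac{V_1}{V_2}\right)^{\gamma - 1} \approx 300\,\mathrm{K} \times 18^{\,0.4} \approx 950\,\mathrm{K} \;(\approx 680\,^{\circ}\mathrm{C}),

taking T_1 ≈ 300 K, a compression ratio of about 18:1, and γ ≈ 1.4 for air. The result lies far above the auto-ignition temperature of diesel fuel (roughly 210–260 °C), which is why no sparking plug is needed.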
Forest Operations in the Tropics

In recent years considerable efforts have been made in introducing improved forest harvesting practices to tropical forests to support sustainable forest management. However, only a small proportion of forests in the tropics is actually being managed on a sustainable basis. Environmental groups and, increasingly, the general public have called for refined harvesting systems and techniques so as to utilize forest resources wisely, thereby maintaining biodiversity and keeping forest stands intact in order to provide forest goods and services for the present as well as for future generations. The application of reduced impact logging (RIL) systems and techniques seems to have gained increasing importance in meeting environmental challenges and providing economic and social benefits.

In tropical countries, harvesting operations are fundamentally different from those applied in temperate zones. Stand densities in temperate forests are considerably higher than in the tropics, generating much higher potentials for commercial timber volumes per hectare. There is less diversity of tree species in temperate zones, and the utilization of commercial tree species permits simplified forest operations. Due to the application of both selective and clear-cutting as well as the higher standing timber volumes per hectare, harvesting densities in general greatly exceed those in the tropics. Directional felling is easier to carry out in temperate zones due to the usually smaller-sized trees and crowns, and soil conditions are better for skidding operations. In spite of varying seasons, climatic conditions allow the appropriate implementation of harvesting operations throughout the whole year. In the tropics, forest operations are generally more complex to organize and implement than in temperate zones. Natural forests in the tropics are characterized by a higher abundance of different species with many diverse sizes of timber and ample stand densities with only a few species of commercial interest. The harvestable wood volumes per hectare vary considerably depending on the occurrence of commercial tree species. …

Asian Sea Bass (Lates calcarifer)

A commercially important fish species farmed in several Asian countries and northern Australia. They belong to the family Centropomidae and are commonly known as barramundi, bhetki or giant sea perch. Asian sea bass are widely distributed in tropical and subtropical areas of the Indo-Pacific region, inhabiting a wide variety of freshwater, brackish and marine habitats, including streams, lakes, estuaries and coastal waters, but spawning only in inshore marine waters. These predatory fish feed initially on small crustaceans and later switch to fish. Juvenile sea bass may be cannibalistic.

Treehoppers

If there were a competition for the world’s weirdest insect, treehoppers would have a clear shot at first place. See one for the first time and you’re sure to wonder, What are those strange protrusions sprouting from its body? Many treehoppers flaunt outlandish outcroppings, such as the helicopter-like orbs of Bocydium sp. Others play it coy, mimicking thorns, leaves, or insect droppings. Still others impersonate ants or wasps. Forty-plus named species, as well as another 700 or so awaiting scientific description, resemble drops of rainwater. As their common name suggests, these tiny insects—none are longer than a dime is wide—live on trees and plants worldwide, with nearly half of the 3,200 or so described species inhabiting the New World tropics. One leaf in the Ecuadorian rainforest where this story was photographed could easily harbor more treehopper species than are found in all of Europe. Treehoppers are members of a huge and varied order of insects known as the Hemiptera, which includes leafhoppers and cicadas. Like others of their kind, they’re equipped with mouthparts for piercing plant stems and slurping the juices inside. A bit like mosquitoes, they have two interlocking, needle-like feeding tubes, one for siphoning fluids, the other for secreting saliva that prevents the juices from coagulating. Because they’re often content to feast on one plant’s bounty their entire life, most treehoppers pose little threat to economically important crops (though they may spread at least one botanical disease). Partly for this reason, treehoppers haven’t been studied as extensively as their close relatives. This lack of scientific attention has left significant gaps in our knowledge of these bugs, including the purpose of their mystifying body modifications. Unlike most insect mothers, which desert their eggs soon after laying them, many treehopper mothers remain present and vigilant, guarding their offspring until the nymphs grow up and fly away. When predators such as stinkbugs approach, the nearest nymph sounds the alarm by swinging its body and producing a vibrational “chirp.” Siblings pick up the vibe and join in, amplifying the signal. Springing into action, the mother confronts the invader, furiously buzzing her wings or punching with her club-shaped back legs. Sometimes treehoppers get help from ants and other insects that provide protection in exchange for honeydew, a sweet liquid treehoppers secrete as a product of constantly drawing plant sap. Collecting treehoppers that have ant allies can be painful: “You’ll get dozens of stings on your hands,” says Chris Dietrich, curator of insects at the Illinois Natural History Survey. But the astonishing variety of these bizarre bugs makes for endless surprises.
“When you work with insects,” McKamey says, “it’s like Christmas every day.” Known for their devoted parental care, treehopper mothers of the species Alchisme tridentata watch over their progeny until the young hoppers are old enough to fly away. The nymphs have barbs and bright red and yellow accents, probably warning that they’re unpalatable.

Mobile Game Industry Adopting New Strategy

2018’s biggest gaming titles revealed evolutions in the OS mobile gaming market. From subscriptions and intellectual property to ad revenue and cross-platform promotions, take a closer look at the changing gaming landscape, and see what the future holds for the industry. Whether you consider yourself a hardcore Fortnite player or you just enjoy a game of Candy Crush from time to time, it’s hard to deny that 2018 has been an incredible year in the world of mobile gaming. This year, more than 2.3 billion of us have tapped, shaken, and flicked our smartphones, and 46% of us - 1.1 billion - have spent money on premium add-ons. That’s not all - for the first time ever, we’ve helped mobile gaming overtake desktop and console gaming, with tablet and smartphone gaming increasing more than 25% and adding an eye-watering $70.3 billion to the coffers of big names and independents. Although smartphones have been arguably the most popular gaming platform for the past couple of years, they haven’t been able to compete with the likes of the Xbox, PlayStation or desktop computers. Until now, that is. In 2018, many news outlets reported on the fact that it makes more sense to buy an iPhone XS for $1,000 than it does to buy a $1,000 MacBook, as the former offers more power and functionality and is more likely to be used to its potential. In 2018, the computers in our pockets are just as powerful - if not more powerful - than the computers we use in the gaming room, and developers are cashing in on the ever-closing technological gap. Games that five years ago would only have been available on PC are now being released to our smartphones, which is making the industry more exciting than ever before. Fortnite, for example, is arguably the biggest gaming hit of the year - in May of 2018 alone, a month after launching on iOS, developers generated a record-breaking $318 million, and end-of-year turnover is set to be in the hundreds of millions, all thanks to a single game. …

Problems With E-Mail Advertising

E-mail advertising really can work. E-mail advertising is getting a good reputation these days, as people realize that it can be affordable and effective, but not all e-mail advertising is such a great idea. There are four main problems to watch out for:

1. Classified Ad E-Mails Don’t Work

Some newsletters sell classified ads. You buy a few lines and your ad runs along with scores of others. These ads almost never work. Few people read these ads—in fact many of these newsletters probably aren’t read at all, having been subscribed to in order to enter a drawing. Even if they are read, people tend to quickly scroll past the classifieds. If you can find a newsletter with cheap classified ads, go ahead and try it. Create a doorway page to track incoming visits and see how many people hit that page. Probably very few. Remember, the best way to place an ad in a newsletter is in the editorial content, separated from any other ads the newsletter may be carrying. The better the content and the fewer ads in the newsletter, the better your ad is likely to work.
2. Ads Sent Solo to Opt-In Lists Don’t Work

E-mail message ads sent by themselves to opt-in lists probably won’t work well. If the message carries nothing but an ad, people will read the subject line and, unless it’s a really good subject line, just delete the message. Remember, people are flooded with e-mail, so they’re using the Delete key a lot. …

Temperament

Temperament can generally be defined as a behavioral or emotional trait that differs across individuals, appears early in life, is relatively stable over the life span, and is, at least to some degree, influenced by biology. This broad definition of temperament is generally agreed upon by most psychologists, but the devil is in the details. Most of the many questions about temperament can be summarized into two broad themes: what the structure of temperament is and how biology is related to this structure. As noted above, one of the things that temperament researchers agree upon is that temperament reflects individual differences in the way individuals interact with the environment. Accordingly, temperament has become an important factor in predicting other behavioral outcomes. Temperament, broadly defined, is among the oldest concepts in psychology. Indeed, the general manner in which temperament is defined by behavioral scientists today differs very little from the way the ancient Greeks talked about the essential nature of a person. Galen, a Greek physician of the second century A.D. whose works were considered authoritative for many centuries, used the four humors to identify nine basic temperament types, five of which were the result of balanced relationships among the four humors and four of which were derived from the dominance of one humor over the others (Kagan 1994). If one replaces the humors with genes and neurochemicals, the approach to temperament taken by Galen seems little different from that of contemporary temperament researchers. One of the primary questions facing temperament researchers is how to identify a temperament trait; what exactly are the dimensions of temperament? One of the issues that makes answering this question so challenging is that temperament is not a kind of behavior, like aggression, prosocial behavior, risk-taking, or other categories of behaviors. Rather, temperament is a quality of behaving. Alexander Thomas and Stella Chess suggested that temperament is understood to be the “how of behavior” (1977). Similarly, Strelau (1987) defines temperament as the stylistic aspects of behavior rather than the content of behavior. Thus, temperament is typically not so much observed as it is inferred from abstract descriptions of the qualitative aspects of an individual’s behavior. As a result, the identification of a temperament trait is sensitive to the qualitative descriptions used to describe it. Perhaps the importance of this kind of description accounts for Strelau’s (1997) identification of nearly eighty different terms used to refer to temperament characteristics. A primary emphasis in temperament research has therefore been to seek a convergence of descriptive terms. For example, how different are arousal and sensory threshold, approach/withdrawal and behavioral inhibition, and emotionality and quality of mood? …

30 Renaissance Treasures

The Renaissance produced some of the most iconic buildings, most innovative inventions and most memorable pieces of art. Here are 30 of the greatest.
…

Renaissance Clock

The word 'Renaissance' usually conjures up images of fine art and breathtaking architecture, but many great conveniences were also invented during this period, such as the printing press and the mariner's astrolabe. Arguably the most significant invention to come from this time, though, and one we still use on a daily basis, is the clock. Before the advent of the clock, time was kept by sundials, which, although accurate, largely depended on the weather, and later by water clocks, which required constant monitoring and were therefore impractical for casual timekeeping. Although the first example of what is considered a clock was built in England in 1283, the need to make clocks smaller led to the creation of the spring-driven clock in the 15th century, which allowed timekeeping to take place within the home and gave Renaissance artisans the opportunity to unleash their artistic, not to mention technical, talents.

The Printing Press

The invention of the printing press by the German Johannes Gutenberg was one of the most influential events of the second millennium, revolutionising the way in which people conceived and described the world they live in. Gutenberg's invention was based on existing screw presses and adapted existing technologies to perfect the printing process. By devising a hand mould, the precise and quick composition of movable metal type was made possible in large quantities, which was a key element in the overall profitability of the enterprise. The arrival of the printing press had a profound effect on Renaissance Europe and introduced an era of mass communication in many different languages. It meant that information could be easily circulated and the power of political and religious authorities could be challenged by the masses. Within a few decades, printing presses were becoming widespread in cities throughout Europe, and by 1500, they had produced more than 20 million volumes.

The Prince

The Prince is a political treatise by the Italian diplomat and political theorist Niccolo Machiavelli, and it sent shock waves through Europe when it was published in 1532 by detailing ruthless tactics for gaining absolute power through the abandonment of conventional morality. Written in Italian rather than Latin, which was considered innovative at the time, The Prince describes how princes can justify their aims, such as glory and survival, by immoral means and, as such, Machiavelli came to be regarded as something of an agent of evil, and his name has become synonymous with unscrupulous cunning and deception. Though relatively short, The Prince is the most famous of Machiavelli’s works and is considered one of the first works of modern political philosophy. Its chapters cover topics such as generosity versus parsimony and cruelty versus mercy, and the treatise had a profound impact on political leaders throughout the modern West, where the advent of the printing press meant it was catapulted into political consciousness. …

Adapting To Climate Change

We have to combat climate change if we are to reduce the risk of catastrophic events in the future. Yet whatever we do, some change is inevitable. Even if we stopped all greenhouse gas emissions tomorrow, the average global temperature would still keep rising for the next 30 years, mainly because of the gradual release of heat stored by the oceans. The rising temperatures are bound to raise sea levels. They may also cause more droughts and floods, and create problems for agriculture and wildlife.
So we must prepare for these changes, while working hard to stop the problem from getting worse.

Tidal defenses

Many great cities are built on low-lying coasts that are vulnerable to rising sea levels. Most of these cities already have sea defenses, but they will need extra protection against extra-high tides and storm surges. Some barriers have already been built. One of the largest, the Thames Barrier, was built in 1974–1982 to protect London, England, from storm surges that were already seen as a threat. The ten giant steel gates were used quite rarely until 1990, but as sea levels creep up, they have been closed a lot more often.

Natural barriers

Developing nations often cannot afford to build coastal defenses, but the sea naturally creates barriers to storm waves in the form of shoals, salt marshes and, in warmer regions, mangrove swamps like this one in Florida. Many of these natural barriers have been destroyed by poorly planned coastal development. By preventing this, even poor countries can protect their coasts from flooding.

Wildlife reserves

Wildlife is already suffering from the massive destruction of habitats all over the world, and the extra stress of changing climates will drive many species into extinction. By creating wildlife reserves, we can make life easier for plants and animals, and also preserve the ecosystems that help resist serious climate change.

Battling the deserts

People living on the fringes of expanding deserts can stop the sand from taking over by stabilizing sand dunes with palm fronds, as here in Morocco, or by planting drought-resistant grasses and shrubs. They can also stop dry grassland from turning into desert by preventing overgrazing by farm animals.

New farm crops

Scientists at the International Rice Research Institute in the Philippines are developing new rice plants that can grow well in drier, warmer climates. These may help stop rice yields from dwindling as temperatures rise. Away from the tropics, farmers may switch to food crops such as corn that are more suited to hotter, drier summers—although predicting exactly which crops will do well in a constantly changing climate may not be easy.

Health and Longevity

The health of a country’s population is often monitored using two statistical indicators: life expectancy at birth and the under-5 mortality rate. These indicators are also often cited among broader measures of a population’s quality of life because they indirectly reflect many aspects of people’s welfare, including their levels of income and nutrition, the quality of their environment, and their access to health care, safe water, and sanitation. Life expectancy at birth indicates the number of years a newborn baby would live if the health conditions prevailing at the time of its birth were to stay the same throughout its life. This indicator does not predict how long a baby will actually live, but rather reflects the overall health conditions characteristic of this particular country in this particular year. The under-5 mortality rate indicates the number of newborn babies who are likely to die before reaching age 5, per 1,000 live births. Because infants and children are most vulnerable to malnutrition and poor hygienic living conditions, they account for the largest portion of deaths in most developing countries. Therefore, decreasing under-5 mortality is usually seen as the most effective way of increasing life expectancy at birth in the developing world.
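As an added worked illustration (not part of the original passage), the "per 1,000 live births" unit can be converted directly into a share of newborns: an under-5 mortality rate of 79 per 1,000, the figure cited below for low- and middle-income countries in 1998, corresponds to 79 / 1,000 = 0.079, meaning that under prevailing conditions roughly 7.9 per cent of newborns would not live to see their fifth birthday.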
Global Trends

During the second half of the 20th century health conditions around the world improved more than in all previous human history. Average life expectancy at birth in low- and middle-income countries increased from 40 years in 1950 to 65 years in 1998. Over the same period the average under-5 mortality rate for this group of countries fell from 280 to 79 per 1,000. But these achievements are still considerably below those in high-income countries, where average life expectancy at birth is 78 years and the average under-5 mortality rate is 6 per 1,000. Throughout the 20th century, national indicators of life expectancy were closely associated with GNP per capita. If you compare Figure 8.1 (Life expectancy at birth, 1998) with Figure 2.1 (GNP per capita, 1999), you will find that in general the higher a country’s income per capita, the higher is its life expectancy—although this relationship does not explain all the differences among regions and countries. (See Data Tables 1 and 3 for country-specific data.) The two other factors believed to be the most important for increasing national and regional life expectancies are improvements in medical technology (with some countries clearly making better use of it than others) and the development of and better access to public health services (particularly clean water, sanitation, and food safety control). Education, especially of girls and women, makes a big difference too, because wives and mothers who are knowledgeable about healthier lifestyles play a crucial role in reducing risks to their families’ health. These other factors help explain how most developing countries are catching up with developed countries in terms of people’s health even though they are generally not catching up in terms of per capita income. Progress in medical technology, public health services, and education allows countries to realize “more health” for a given income than before. For example, in 1900 life expectancy in the United States was about 49 years and income per capita was more than $4,000. In today’s Sub-Saharan Africa life expectancy is about 50 years even though GNP per capita is still less than $500. In general, for nearly all countries, life expectancy at birth continued to grow in recent years. In developing countries this growth was largely due to much lower under-5 mortality. Better control of communicable diseases that are particularly dangerous for children, such as diarrhea and worm infections, accounts for most of the gains. In many countries higher per capita incomes also contributed to better nutrition and housing for most families. Governments of developing countries have invested in improving public health measures (safe drinking water, sanitation, mass immunizations), training medical personnel, and building clinics and hospitals. But much remains to be done. …

The Discovery Of The Elements

At the beginning of the seventeenth century, 13 elements were known. Nine—carbon, sulfur, iron, copper, silver, gold, tin, lead, and mercury—had been discovered in ancient times. Four more—arsenic, antimony, bismuth, and zinc—were discovered between around 1250 and 1500. It is not by chance that 11 of the 13 are metals. Some of them have relatively low melting points and were undoubtedly first produced when fires were laid on surface ores. Fires built by preliterate peoples in modern times have often produced small quantities of metals.
A rich vein of silver was discovered in this manner by an Indian sheepherder in seventeenth-century Peru, who built a fire at nightfall and found the next morning that the stone under the ashes was covered with silver. Other metals, such as iron, have relatively high melting points. But iron can be smelted in fairly primitive furnaces, and it was known in Neolithic times. Iron was known long before bronze—an alloy of copper and tin. But it initially wasn’t used for weapons. Unalloyed iron isn’t as strong as bronze, and it won’t hold a sharp edge. In Homer’s Iliad, for example, the heroes have bronze armor and use bronze swords. Nevertheless, iron was considered valuable. Achilles awards a lump of iron as a prize to the individual who can throw it the farthest. The two non-metals, carbon and sulfur, have probably been known for as long as human beings have known how to make fire. Carbon in the form of charcoal is a byproduct of fire and was used to make drawings on the walls of caves. Sulfur is found near volcanoes in the form of brimstone. It, too, was used in very early times. For example, after slaughtering Penelope’s suitors, Odysseus fumigates his house by burning sulfur. …

Clearly, little progress could be made in chemistry until chemists gained a better understanding of the materials they worked with. Unfortunately, little progress was being made. In 1650 no new elements had been discovered for 150 years. The concept of a chemical element, in the modern sense of the term, was unknown. And when Robert Boyle defined “chemical element” in 1661, he pronounced the idea to be “laboriously uselesse.” …

Senses of Dogs

Dogs are very alert to their surroundings and highly responsive to sensory information. They look and listen to interpret their surroundings, just as we do. Although we see things with greater clarity—except at night, when canine vision is an advantage—dogs hear much more and possess a superbly developed sense of smell. A dog’s nose is his best asset, and he relies on it to provide him with a detailed account of the world.

Sight

Although dogs cannot see the range of color that humans can, they do see some colors. This limited range is because a dog has only two types of color-responsive cells (dichromatic vision) in the retina—the light-sensitive layer at the back of the eye—instead of three (trichromatic vision) as humans have. The canine world is viewed in shades of gray, blue, and yellow, without red, orange, or green—in much the same way as a person with red-green color-blindness. Dogs do, however, have excellent long-distance vision. They are particularly quick to pick up movement and can even detect lameness, a useful adaptation in a predatory animal seeking an easy kill. Canines see best in the low light of dawn and dusk—prime times for hunting in the wild. With less acute close vision, a dog relies more on scent, or on touch through his sensitive whiskers, to investigate nearby objects.

Hearing

Puppies are born deaf, but as dogs mature they develop a sense of hearing that is about four times as acute as ours. They can hear sounds too low or too high in pitch to be audible to humans and are also good at detecting the direction the sounds come from. Breeds with erect ears—the best design for funneling sound—usually have sharper hearing than those with drop or pendant ears. A dog’s ears are also highly mobile and frequently used to communicate with others: slightly pulled back to signal friendship; dropped or flattened in fear or submission; or raised in aggression.
Smell

Dogs take in most information through their noses, receiving complex messages from odors that are undetectable to humans. Sampling a smell can tell a dog about the readiness of a bitch for mating, the age, sex, and condition of a prey animal, and possibly the mood of his owner. Even more remarkable, dogs can detect and interpret who or what has crossed their path before, which is why they are so good at tracking. With training, dogs can be taught to sniff out drugs and even detect disease. The area of a dog’s brain that interprets scent messages is estimated to be about 40 times larger than ours. Although scenting ability depends to some extent on the size of the dog and the shape of his muzzle, the average canine nose has somewhere in the region of 200 million scent receptors, compared to about 5 million in humans.

Earth as a Planet: Surface and Interior

1. Introduction: the Earth as a Guide to Other Planets

The surface of the Earth is perhaps the most geochemically diverse and dynamic among the planetary surfaces of our solar system. Uniquely, it is the only one with liquid water oceans under a stable atmosphere, and—as far as we now know—it is the only surface in our solar system that has given rise to life. The Earth’s surface is a dynamic union of its solid crust, its atmosphere, its hydrosphere, and its biosphere, all having acted in concert to produce a constantly renewing and changing symphony of form (Figure 1). The unifying theme of the Earth’s surficial system is water—in liquid, vapor, and solid phases—which transfers and dissipates solar, mechanical, chemical, and biological energy throughout global land and submarine landscapes. The surface is a window to the interior processes of the Earth, as well as the putty that atmospheric processes continually shape. It is also the Earth’s interface with extraterrestrial processes and, as such, has regularly borne the scars of impacts by meteors, comets, and asteroids, and will continue to do so. Our solar system has a variety of terrestrial planets and satellites in various hydrologic states with radically differing hydrologic histories. Some appear totally desiccated, such as the Moon, Mercury, and Venus. In some places, water is very abundant now at the surface, as on the Earth, the Jovian Galilean satellite Europa (solid at the surface and possibly liquid underneath), the Saturnian satellite Enceladus (possibly erupting water vapor into space through an icy surface), and Titan, Saturn’s largest moon (where a 94 K surface temperature makes water ice at least as hard as granite). In other places, such as Mars and Ganymede, it appears that water may have been very abundant in liquid form on the surface in the distant past. Also, in the case of Mars, water may yet be abundant in solid and/or liquid form in the subsurface today. Thus, for understanding the geological (and, where applicable, biological) processes and environmental histories of terrestrial planets and satellites within our solar system, it is crucial to explore the geomorphology of surface and submarine landforms and the nature and history of the land–water interface where it existed. Such an approach and the “lessons learned” from this solar system will also be key in future reconnaissance of extrasolar planets. …

5. Seismic Sources

Even though the field of seismology can be divided into studies of seismic sources (earthquakes, explosions) and of the Earth’s structure, they are not fully separable.
To obtain information on an earthquake, we must know what happened to the waves along the path between the source and the receiver, and this requires knowledge of the elastic and anelastic Earth structure. The reverse is also true: in studying the Earth's structure, we need information about the earthquake: at least its location in space and time, but sometimes also the model of forces acting at the epicenter. Most earthquakes can be described as a process of release of shear stress on a fault plane. Sometimes the stress release can take place on a curved surface or involve multiple fault planes; the radiation of seismic waves is more complex in these cases. Also, explosions, such as those associated with nuclear tests, have a distinctly different mechanism and generate P and S waves in different proportions, which is the basis for distinguishing them from earthquakes. …

Pets Have Allergies Too!

Animals can have reactions to pollen and other environmental allergens. Here's what you can do. You may not be the only one in your home suffering from seasonal allergies. Animals can also have reactions to pollen and other environmental allergens, though they have different symptoms than we do and should be treated differently as well. Here are some tips to help them feel better.

Allergies make pets itchy

Seasonal allergies create skin inflammation in pets, says vet and animal dermatology specialist Dr Andrew Rosenberg. A dog may scoot her rump, lick her paws or groin, or shake a lot. Cats may overgroom or develop skin crusts, he says.

Watch out for scratching

Too much scratching can lead to infection, says vet Dr Alexandra Gould. Over time, dogs can develop hair loss, thickened skin, hyperpigmentation, hot spots and ear irritation. If cats often overgroom, it can lead to hair loss, especially on their sides or bellies, she says. In both species, excess scratching can cause yeast and bacteria on the skin to multiply, setting off infections. Make an appointment with the vet if you notice these signs of itchiness.

What a vet can do

Studies show that antihistamines help only about 10 per cent of dog allergy cases, Dr Rosenberg says, and they aren't much more effective in cats. Instead, pets' allergies are treated with prescription anti-itch medications, medicated shampoos and, in severe cases, steroids, depending on the individual. Dr Gould also recommends allergy testing and desensitisation for pets with environmental allergies. A vet will test for specific allergen sensitivities and routinely expose pets to them via either a daily serum under the tongue or a shot every one to two weeks. “The goal is that, over three to 12 months, the animal's immune system stops reacting as strongly and requires fewer medications,” Dr Gould explains.

Help pets at home

In addition to prescription treatment, itchy pets can get relief from cool baths once a week or so. “Bathing can rinse off allergens before they are absorbed into the skin, causing an allergic reaction,” Dr Rosenberg says. Check with your vet first to rule out a secondary infection and ask for shampoo recommendations.

The Sun

The Sun is the hottest, largest, and most massive object in the solar system. Its incandescent surface bathes its family of planets in light, and its immense gravitational force choreographs their orbits. The Sun is a typical star, little different from billions of others in our galaxy, the Milky Way. It dominates everything around it, accounting for 99.8 percent of the solar system’s mass.
Compared with any of its planets, the Sun is immense. Earth would fit inside the Sun over one million times; even the biggest planet, Jupiter, is a thousandth of the Sun’s volume. Yet the Sun is by no means the biggest star; VY Canis Majoris, known as a hypergiant, could hold almost 3 billion Suns. Our star will not be around forever. Now approximately halfway through its life, in about 5 billion years it will turn into a red giant, swelling and surging out toward the planets. Mercury and Venus will be vaporized. The Earth may experience a similar fate, but even if our planet is not engulfed, it will become a sweltering furnace under the intense glare of a closer Sun. Eventually, the Sun will shake itself apart and puff its outer layers into space, leaving behind a ghostly cloud called a planetary nebula. Energy traveling from the Sun’s core takes 100,000 years to reach the surface and appear as light.

Sun structure

It may seem like an unchanging yellow ball in the sky, but the Sun is incredibly dynamic. A giant nuclear fusion reactor, it floods the solar system with its brilliant energy. The Sun has no solid surface—it is made of gas, mostly hydrogen. Intense heat and pressure split the gas atoms into charged particles, forming an electrified state of matter known as plasma. Inside the Sun, density and temperature rise steadily toward the core, where the pressure is more than 100 billion times greater than atmospheric pressure on Earth’s surface. In this extreme environment, unique in the solar system, nuclear fusion occurs. Hydrogen nuclei are fused together to form helium nuclei, and a fraction of their mass is lost as energy, which percolates slowly to the Sun’s outer layers and then floods out into the blackness of space, eventually reaching Earth as light and warmth.

Core

Making up the inner fifth of the Sun, the core is where nuclear fusion creates 99 percent of the Sun’s energy. The center of the core, where hydrogen has been fused, is mostly helium. The temperature in the core is 27 million ºF (15 million ºC).
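As an added back-of-the-envelope illustration (not part of the original passage), the mass-to-energy conversion can be made concrete with Einstein's relation. When four hydrogen nuclei (protons) fuse into one helium nucleus, about 0.7 per cent of their combined mass disappears, and

E = \Delta m \, c^{2} \approx (4.7 \times 10^{-29}\,\mathrm{kg}) \times (3 \times 10^{8}\,\mathrm{m/s})^{2} \approx 4 \times 10^{-12}\,\mathrm{J} \;(\text{about } 26\ \mathrm{MeV})

is released per helium nucleus formed. Repeated across the roughly 600 million tonnes of hydrogen the Sun fuses every second, this is the energy that eventually reaches Earth as light and warmth.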
Meteorology

Meteorology has been defined as the science of the earth’s atmosphere; it deals with the continuously occurring global changes and the daily variations in the conditions of the air and their effects on the earth. The analysis and prediction of weather is probably the most important aspect of meteorology. Variations in weather are caused by the uneven heating of the earth’s surface; the result is that the atmosphere is in a constant state of imbalance. As a result, the weather elements that vary are temperature, humidity, visibility, clouds, kind and amount of precipitation, atmospheric pressure, and winds. Heat from the sun and the gravitational pull of the sun and moon, combined with the earth’s rotation, keep the atmosphere in constant motion. Changes in temperature and pressure are important parameters. A change in pressure usually means that a change in the weather is approaching; rising pressure indicates fair weather, falling pressure a storm. Rising temperature usually indicates that winds are approaching from the south, and dropping temperature that they are coming from the north. In the Northern Hemisphere rising clouds signal that the weather is clearing, and clouds that get thicker and lower usually forecast precipitation. People learn these characteristics by observing the weather patterns where they live, and they usually can predict the daily weather—as ancient people probably did. …

Since weather plays such an important role in our daily lives, everyone is interested in forecasts: about conditions at sea, flood warnings, hail damage to crops, driving conditions, and deciding whether or not to take an umbrella to work. Although there were many attempts in the past to predict the weather, it was not until the seventeenth century, when the thermometer and barometer were perfected, that accurate measurements could begin. But people were not able to communicate their observations quickly over long distances until the telegraph was invented and first used in 1849 by Joseph Henry of the Smithsonian Institution to make weather maps. “Everybody complains about the weather, but no one does anything about it” is a common statement. But people have tried to control the weather. Orchard owners and farmers use smudge pots to prevent frosts from killing their crops. People seed supercooled clouds with dry ice pellets or silver iodide dust to produce rain during drought. Seeding has been used to prevent rain, to control flooding, and to disperse fog. In reality, however, it is difficult to tell what the net effects of these attempts are, because it is difficult to compare what would have happened if there had been no seeding.

Air Pollution

The release into the air of gases or aerosols in amounts that may cause injury to living organisms. Certain pollutants can harm humans. Pollution is not a new phenomenon. In the Middle Ages, London air was so badly polluted by smoke from coal fires that in 1273 Edward I passed a law banning coal burning in an attempt to curb smoke emissions. In 1306 a Londoner was tried and executed for breaking this law. Despite this, pollution was not checked, and on one occasion in 1578 Elizabeth I refused to enter London because there was so much smoke in the air. Smoke killed vegetation, ruined clothes, and the acid in it corroded buildings. Coal burning remained the most serious source of pollution until modern times. It caused the Meuse Valley incident in 1930, a severe pollution episode at Donora, Pennsylvania, in 1948, and the London smog incidents a few years later. These led to the introduction of legislation in many countries to reduce smoke emissions. Certain products of combustion increase the acidity of precipitation, and some acids can be deposited on surfaces directly, from dry air. This causes the type of pollution known as acid rain, which was first reported in 1852. The burning of those types of coal and oil that contain sulfur is discouraged in order to reduce the problems caused by acid deposition and acid rain. …

Other pollutants are not directly poisonous to any living organism and until recently were not considered to be pollutants at all. Their effects are subtle. Carbon dioxide is produced whenever a carbon-based fuel is burned, because combustion is the oxidation of carbon to carbon dioxide with the release of heat energy. Carbon dioxide is a natural constituent of the atmosphere, but its increasing concentration, which is believed to be due to the burning of fossil fuels, is suspected of causing global warming. Methane, released when bacteria break down organic material, is also harmless in itself, but it is implicated in undesired change as a greenhouse gas. Our understanding of air pollution has increased rapidly as scientists have learned more about the chemistry of the atmosphere. At the same time, steps have been taken in many countries to reduce pollution.
The air over the industrial cities of North America and the European Union is much cleaner now than it was half a century ago. Today the task facing the global community is to promote and encourage the economic and industrial development of the less-developed countries without reducing the quality of the air their people breathe. …

What Is A Planet?

There have been many attempts to define the term “planet” over the centuries, but to date there is still no universally agreed-upon scientific definition of the term. Generally speaking, however, a planet usually refers to an object that is not a star (that is, has no nuclear fusion going on in its core); that moves in orbit around a star; and that is mostly round because its own gravitational pull has shaped it into, more or less, a sphere. What are the general characteristics of the planets in our solar system? All the planets in our solar system, by the current scientific classification system, must satisfy three basic criteria:

1. A planet must be in hydrostatic equilibrium—a balance between the inward pull of gravity and the outward push of the supporting structure. Objects in this kind of equilibrium are almost always spherical or very close to it.

2. A planet’s primary orbit must be around the Sun. That means objects like the Moon, Titan, or Ganymede are not planets, even though they are round due to hydrostatic equilibrium, because their primary orbit is around a planet.

3. A planet must have cleared out other, smaller objects in its orbital path, and thus must be by far the largest object in its orbital neighborhood. This means that Pluto is not a planet, even though it meets the other two criteria; there are thousands of Plutinos in the orbital path of Pluto, and it crosses the orbit of Neptune, which is a much larger and more massive object.

The eight objects in our solar system that meet all three criteria are Neptune, Uranus, Saturn, Jupiter, Mars, Earth, Venus, and Mercury.

What Is Biodiversity?

Biodiversity, an abbreviation of the phrase biological diversity, is a complex topic, covering many aspects of biological variation. In popular usage, the word biodiversity generally refers to all the individuals and species living in a particular area. If we consider this area at its largest scale—the entire world—then biodiversity can be summarized as “life on earth.” However, scientists use a broader definition of biodiversity, designed to include not just the organisms themselves but also the interactions between them, and their interactions with the abiotic (nonliving) aspects of their environment. Multiple definitions, emphasizing one aspect or another of this biological variation, can be found throughout the scientific and lay literature (see Gaston, 1996, Table 1.1). For the purposes of this essay, biodiversity is defined as the variety of life on earth at all its levels, from genes to biogeographic regions, and the ecological and evolutionary processes that sustain it. …

A short history of the study of biodiversity

The term biodiversity (as the contracted form of biological diversity) was first used at a planning meeting of the National Forum on BioDiversity (Wilson and Peters, 1988). The word now frequently appears in current newspaper articles and other mass media and has focused public awareness in some countries on the importance of conservation.
A poll of U.S. residents in 2002 showed that biodiversity is “not just for scientists anymore”; 30 percent had heard of biological diversity, compared with only 19 percent in 1996 (Biodiversity Project, 2002). However, many who have heard of the term still do not understand what it means. Part of the confusion is that the term biodiversity applies to different aspects of biological variation and, therefore, has become a catchphrase that has multiple meanings. Even though the term biodiversity is relatively new, philosophers and scientists have studied aspects of biodiversity for thousands of years. Aristotle (384–322 B.C.) was the earliest Western philosopher who attempted to place biodiversity in some formal order or classification. He analyzed variation in the appearance and biology of organisms, and searched for similar patterns by which to group organisms together. This is the science of taxonomy, an essential tool for describing the biological diversity of organisms. Traditionally, biologists described the diversity of organisms by comparing their anatomy and physiology. Since the 1960s, biologists have developed increasingly sophisticated techniques to study biological variation at the cellular and molecular levels. Scientists now examine chromosomes and genes with more precision, gathering more details about the extent of genetic variation between individuals, populations, and species. Today, scientists who study population dynamics in biodiversity still turn to studies undertaken by scientists more than two centuries ago. Malthus (1798) provided one of the earliest theories of population dynamics. Subsequent work through the nineteenth and twentieth centuries expanded these initial concepts. Lotka (1925) and Volterra (1926) developed theories of population ecology by studying population growth relative to competition and predation. Also during the twentieth century, biologists such as Fisher, Wright, and Haldane developed theories of population genetics. Their theories were based on a synthesis of the early work of Darwin and Mendel on natural selection and the inheritance of morphological characteristics. The diverse aspects of population ecology and population genetics are combined in the overall subject of population biology. …

Energy Use and the Environment

As noted earlier, our society derives the majority of its energy from the energy changes associated with burning fossil fuels. Fossil fuels have traditionally been regarded as convenient sources of energy due to their abundance and portability and because they undergo combustion reactions that have large negative enthalpies of reaction (the reactions are highly exothermic). However, the burning of fossil fuels also has some serious environmental impacts.

Environmental problems associated with fossil fuel use

One of the main problems associated with the burning of fossil fuels is that even though they are abundant in Earth’s crust, they are also finite. Fossil fuels originate from ancient plant and animal life and are a nonrenewable energy source — once they are all burned, they cannot be replenished. At current rates of consumption, oil and natural gas supplies will be depleted in 50 to 100 years. While there is enough coal to last much longer, it is a dirtier fuel (it produces more pollution) and, because it is a solid, is less convenient (more difficult to transport and use) than petroleum and natural gas. The other major problems associated with fossil fuel use stem from the products of combustion.
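As an added illustration (not from the original text), a representative combustion reaction for natural gas, with a typical standard enthalpy value, is

\mathrm{CH_4 + 2\,O_2 \rightarrow CO_2 + 2\,H_2O}, \qquad \Delta H^{\circ} \approx -890\ \mathrm{kJ\ per\ mole\ of\ methane},

showing both the strongly exothermic character of fossil fuel combustion and the carbon dioxide and water products discussed below.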
The chemical equations shown for fossil fuel combustion all produce carbon dioxide and water. However, these equations represent the reactions under ideal conditions and do not account for impurities in the fuel, side reactions, and incomplete combustion. When these are taken into account, we can identify three major environ- mental problems associated with the emissions of fossil fuel combustion: air pollution, acid rain, and global climate change. Digital Libraries Digital libraries are organized collections of information resources and associated tools for creating, archiving, sharing, searching, and using information that can be accessed electronically. Digital libraries differ from traditional libraries in that they exist in the ‘‘cyber world’’ of computers and the Internet rather than in the ‘‘brick and mortar world’’ of physical buildings. Digital libraries can store any type of information resource (often referred to as documents or objects) as long as the resource can be represented electronically. Examples include hypertexts, archival images, computer simulations, digital video, and, most uniquely, real-time scientific data such as temperature readings from remote meteorological instruments connected to the Internet. The digitization of resources enables easy and rapid access to, as well as manipulation of, digital library content. The content of a digital library object (such as a hypertext of George Orwells novel, 1984) includes both the data inherent in the nature of the object (for example, the text of 1984) and metadata that describe various aspects of the data (such as creator, owner, reproduction rights, and version). Both data and metadata may also include links or relationships to other data or metadata that may be internal or external to any specific digital library (for instance, the text of 1984 might include links to comments by readers derived from a literary listserv or study notes provided by teachers using the novel in their classes). The concepts of organization and selection separate digital libraries from the Internet as a whole. Whereas information on the Internet is chaotic and expanding faster than either humans or existing technologies can trace accurately, the information in a digital 29 library has been organized in some manner to provide the resource collection, cataloging, and service functions of a traditional library. In addition, the resources in digital libraries have gone through some sort of formal selection process based on clear criteria, such as including only resources that come from original materials or authoritative sources. Digital libraries are thus an effort to address the problem of information overload often associated with the Internet. Kakwa Mountains and Hills A mountain is generally defined as a feature of the Earth’s surface that rises high above the base and has generally steep slopes and a relatively small summit area. Mountains rarely occur as isolated individuals. Instead, they are usually found in roughly circular groups or massifs or in elongated ranges. As a general rule, mountains represent portions of the Earth’s crust that have been raised above their surroundings by up warping, folding or buckling, and have been deepened or carved by streams or glaciers into their present surface form. Hills on the other hand, are land forms characterized by roughness and strong relief. However, the distinction between hills and mountains is usually one of relative size or height but the terms are loosely and inconsistently used. 
The Kakwa landscape is characterized by numerous broken lines of hills and mountains. The mountains are generally convex in shape, giving the impression of being volcanic. Apparently, the surfaces have arisen due to exfoliation and the peeling off of scales or layers of the gneiss (the coarse-grained metamorphic rock of quartz, feldspar and mica) due to erosion. Such mountains and hills are known to geologists as inselbergs. The most prominent of these hills are the Bala Hills located in the Yei County. The highest mountain in the land is Gumbiri, which rises to slightly over 1,500 meters above sea level.
Not-So-Natural Disasters
Major natural disasters have always happened. Storms, hurricanes, floods, and droughts are all part of the planet's natural weather and climate system. In the future, however, humanity is going to be facing more and more intense versions of these phenomena — and they're going to be anything but natural disasters. Civilization — or more properly, the greenhouse gases (refer to Chapter 2) that civilization pumps into the atmosphere — will bring them on. Earth could be facing more droughts, hurricanes, and forest fires, heavier rainfall, rising sea levels, and major heat waves. The excess carbon dioxide that people put into the air might even disrupt the carbon cycle and turn the planet's life-support system into a vicious cycle. Don't panic, though — you don't need to rush out and build the ark just yet. But this chapter does offer you some very good reasons why civilization needs to start lowering its emissions to cool off global warming.
Stormy Weather: More Intense Storms and Hurricanes
You may have heard about stronger storms and hurricanes as an effect of global warming, either on the news or from watching Al Gore's documentary An Inconvenient Truth. Global warming is heating up our oceans. In fact, the IPCC reports that oceans have absorbed about 80 percent of the heat from global warming. Hurricanes are now occurring at higher latitudes of the northern hemisphere, in places such as Canada, because of these warmer ocean temperatures, particularly at the surface. Historically, colder ocean surface temperatures in the north slowed down hurricanes, turning them into powerful, but nowhere near as destructive, tropical storms. Now, however, the water's warmer temperatures don't impede storms. In fact, warming up surface water is like revving the hurricane's engine. The most recent science shows that storm and hurricane intensity has grown around the world since 1970. This rising intensity is linked to rising ocean surface temperatures. But some scientists have challenged these data because they're not in line with climate models; in fact, some climate models predict that storms and hurricanes are about to become less intense. Despite this disagreement, it is better to be safe than sorry when so much is on the line. Protecting humanity means reducing greenhouse gas emissions immediately as well as better preparing for storms by building better protections and improving our response to natural disaster emergencies.
Exposure To Space Radiation Not A Problem So Far
Space exploration is a risky business. As well as the physical dangers, radiation – from the sun and cosmic rays – is thought to put astronauts at a higher risk of getting cancer and heart disease in later life. But so far there is no sign space travellers are dying early from these conditions.
“We haven’t ruled it out, but we looked for a signal and we didn’t see it,” says Robert Reynolds of Mortality Research & Consulting. Not enough space-goers have died from these conditions to just be able to compare their age of death with that of other groups. Instead, Reynolds’s team used a statistical technique on survival figures for 301 US astronauts and 117 Soviet and Russian cosmonauts. A total of 89 have died to date. Three-quarters of cosmonaut deaths were due to cancer or heart disease, but only half of the astronaut deaths were. This is principally because there have been more fatal accidents in the US space programme, such as the Challenger shuttle disaster. Down here on Earth, getting heart disease doesn’t make you more or less likely to also get cancer – the two conditions develop relatively independently of each other. But if radiation exposure were causing a surge in both conditions among people who have been to space, then the higher rate of death from one illness may hide a higher rate of the other. This is because anyone who dies from heart disease can’t also die from cancer. Reynolds’s team plotted the space-goers’ deaths over time as survival curves – which show the rate at which a particular group is dying – for each disease, and found no sign of this dampening effect (Scientific Reports, doi.org/c72t). However, that doesn’t rule out radiation giving space-goers a higher rate of one condition but not the other – for instance, if it caused cancer but not heart disease. Radiation would hit future Mars visitors for longer, says Reynolds, so it could still affect their health. 32 Ecological Restoration Ecological restoration (hereafter restoration) is ‘‘the process of assisting the recovery of an ecosystem that has been damaged, degraded or destroyed’’ (Society for Ecological Restoration Science & Policy Working Group). Restoration ecology and ecological restoration are terms often interchanged: The former is the scientific practice that is contained within the broader embrace of the latter, which incorporates both science and many varieties of technological and political practice. Restoration refers to an array of salutary human interventions in ecological processes, including the elimination of weedy species that choke out diverse native assemblies, prevention of harmful activities (such as excess nutrient loads), rejuvenation of soil conditions that foster vigorous plant communities, reestablishment of extirpated species, and rebuilt webs of social participation that foster ecologically rich and productive ecosystems. The metaphor of healing is often used to describe what restorationists do. However not everyone regards restoration as a fully positive practice. Some view it as a technological response to ecological damage, while others worry that restoration deflects attention from avoiding harm in the first place. There is also concern that restored ecosystems may be simply pale imitations of nature, and that ecosystems are always more complicated than those seeking to restore them can truly understand. Restoration practice is driven by the tension between a technological approach to restoration—technological restoration— and a participatory, humble, culturally aware approach, or what this author terms ‘‘focal restoration.’’ The furious debates among practicing restorationists regarding these issues and others provide particular perspectives on relations between science, technology, and ethics. 
Moreover, conceptual clarity offers practitioners a guide to pitfalls and opportunities for good restoration.
Concept and Origins
Restoration is practiced in all regions of the world, although what counts as restoration varies according to cultural perspective and socioeconomic condition. This has complicated the creation of a precise definition of this relatively new field, especially because international conversation and cooperative projects have become more common in the early twenty-first century. In North America, the aim is typically to restore an ecosystem to its predisturbance condition under the presumption that reversion to a pristine, original state is the ideal end point. In Europe and other regions, long and continuous human occupation has resulted in landscapes that present a distinctively cultural benchmark. In many regions of the southern hemisphere, and especially in areas where poverty and civil disruption prevail, the focus is on restoration of productive landscapes that support both ecological and cultural ideals.
Timekeeping
The history of timekeeping, at least by mechanical means, is very much the history of scientific instrument making. Although scientists may have conceived the instruments they needed for astronomical observation, a separate trade of craftsmen grew up with the necessary skills in brass and iron working, in grinding optical lenses, in dividing and gear-cutting, and in many other operations. It is impossible to say whether it was a scientist or a craftsman who was the first to calculate the taper required in the walls of an Egyptian water-clock to ensure a constant rate of flow of the water through the hole at the bottom as the head of water diminished. But water-clocks, together with candle-clocks and sandglasses, were the first time-measuring devices which could be used in the absence of the sun, so necessary for the obelisk, the shadow stick and the sundial. Once calibrated against a sun timepiece, they could be used to tell the time independently. On the other hand, portable sundials to be carried in the pocket became possible once the compass needle was available, and by then the water-clock had been refined to the state that the escape hole was fashioned from a gemstone to overcome the problem of wear, much as later mechanical clockmakers used jewelled bearings and pallet stones in their escapements. The sand hourglass had one advantage over the water-clock: it did not freeze up in a cold climate. On the other hand, it was subject to moisture absorption until the glassmaker's art became able to seal the hourglasses. Great care was taken to dry the sand before sealing it in the glass. Candle clocks were restricted to the wealthy, owing to their continual cost.
Mechanical clocks, in the West, were made at first for monasteries and other religious houses where prayers had to be said at set hours of the day and night. At first, though weight-driven, they were relatively small alarms to wake the person whose job it was to sound the bell which would summon the monks to prayer. Larger monastic clocks, which sounded a bell that all should hear, still had neither dials nor hands. They originated in the early years of the fourteenth century. When municipal clocks began to be set up for the benefit of the whole population, the same custom prevailed, for the illiterate people would largely be unable to read the numbers on a dial but would easily recognize and count the number of strokes sounded on a bell.
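As an aside on the water-clock taper mentioned in the Timekeeping passage above, here is a minimal sketch of the calculation, assuming idealized Torricelli outflow through a small hole of area a and neglecting viscosity; strictly, what the taper holds constant is the rate at which the water level, and hence the indicated time, falls:
\[
v = \sqrt{2gh}, \qquad \pi\, r(h)^{2}\,\left|\frac{dh}{dt}\right| = a\,\sqrt{2gh}
\;\;\Longrightarrow\;\; r(h) \propto h^{1/4} \;\text{ for constant } \left|\frac{dh}{dt}\right|,
\]
where h is the head of water above the hole and r(h) is the inner radius of the vessel at that height. The walls must therefore flare outward as the fourth root of the height, which is the taper the passage refers to.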
… How a Wasp Turns Cockroaches into Zombies A special chemical blend injected into the brains of cockroaches makes them pawns in the jewel wasp’s control—and perfect live food for its offspring. I don't know if cockroaches dream, but i imagine that if they do, jewel wasps feature prominently in their nightmares. These small, solitary tropical wasps are of little concern to us humans; after all, they don't manipulate our minds so that they can serve us up as willing, living meals to their newborns, as they do to unsuspecting cockroaches. It's the stuff of horror movies, quite literally; the jewel wasp and similar species inspired the chest-bursting horrors in the Alien franchise. The story is simple, if grotesque: the female wasp controls the minds of the cockroaches she feeds to her offspring, taking away their sense of fear or will to escape their fate. But unlike what we see on the big screen, it's not some incurable virus that turns a once healthy cockroach into a mindless zombie—it's venom. Not just any venom, either: a specific venom that acts like a drug, targeting the cockroach's brain. Brains, at their core, are just neurons, whether we're talking human brains or insect brains. There are potentially millions of venom compounds that can turn neurons on or off. So it should come as no surprise that some venoms target the carefully protected central nervous system, including our brains. Some leap their way over physiological 35 hurdles, from remote injection locations around the body and past the blood-brain barrier, to enter their victims' minds. Others are directly injected into the brain, as in the case of the jewel wasp and its zombie cockroach host. Jewel wasps are a beautiful if terrifying example of how neurotoxic venoms can do much more than paralyze. The wasp, which is often just a fraction of the size of her victim, begins her attack from above, swooping down and grabbing the roach with her mouth as she aims her “stinger”—a modified egg-laying body part called an ovipositor—at the middle of the body, the thorax, in between the first pair of legs. The quick jab takes only a few seconds, and venom compounds work fast, paralyzing the cockroach temporarily so the wasp can aim her next sting with more accuracy. With her long stinger, she targets her mind-altering venom into two areas of the ganglia, the insect equivalent of a brain. The wasp's stinger is so well tuned to its victim that it can sense where it is inside the cockroach's dome to inject venom directly into subsections of its brain. The stinger is capable of feeling around in the roach's head, relying on mechanical and chemical cues to find its way past the ganglionic sheath (the insect's version of a blood-brain barrier) and inject venom exactly where it needs to go. The two areas of the roach brain that she targets are very important to her; scientists have artificially clipped them from cockroaches to see how the wasp reacts, and when they are removed, the wasp tries to find them, taking a long time with her stinger embedded in search of the missing brain regions. Then the mind control begins. First the victim grooms itself, of all things; as soon as the roach's front legs recover from the transient paralysis induced by the sting to the body, it begins a fastidious grooming routine that takes about half an hour. 
Scientists have shown that this behavior is specific to the venom, as piercing the head, generally stressing the cockroach, or contact with the wasp without stinging activity did not elicit the same hygienic urge. This sudden need for cleanliness can also be induced by a flood of dopamine in the cockroach's brain, so we think that the dopaminelike compound in the venom may be the cause of this germophobic behavior. Whether the grooming itself is a beneficial feature of the venom or a side effect is debated. Some believe that the behavior ensures a clean, fungus- and microbe-free meal for the vulnerable baby 36 wasp; others think it may merely distract the cockroach for some time as the wasp prepares the cockroach's tomb. Dopamine is one of those intriguing chemicals found in the brains of a broad spectrum of animal life, from insects all the way to humans, and its effects are vital in all these species. In our heads, it's a part of a mental “reward system”; floods of dopamine are triggered by pleasurable things. Because it makes us feel good, dopamine can be wonderful, but it is also linked to addictive behaviors and the “highs” we feel from illicit substances like cocaine. It's impossible for us to know if a cockroach also feels a rush of insect euphoria when its brain floods with dopamine—but I prefer to think it does. (It just seems too gruesome for the animal to receive no joy from the terrible end it is about to meet.) While the cockroach cleans, the wasp leaves her victim in search of a suitable location. She needs a dark burrow where she can leave her child and the zombie-roach offering, and it takes a little time to find and prepare the right place. When she returns about 30 minutes later, the venom's effects have taken over—the cockroach has lost all will to flee. In principle, this state is temporary: if you separate an envenomated roach from its would-be assassin before the larva can hatch and feed and pupate, the zombification wears off within a week. Unfortunately for the envenomated cockroach, that's simply too long. Before its brain has a chance to return to normal, the young wasp has already had its fill and killed its host. … Venus: Atmosphere It is generally believed that the Sun, the planets, and their atmospheres condensed, about 4.6 billion years ago, from a “primitive solar nebula.” The presumed composition of the nebula was that of the Sun, mostly hydrogen and helium with a small sprinkling of heavier elements. It is these impurities that must have condensed into dust and ice particles and accreted to form the planets. Evidently, the Jovian planets were also able to retain a substantial amount of the gas as well, but the terrestrial planets and many satellites must have been made from the solids. 37 … Many of the differences between the atmospheres of Earth and Venus can be traced to the near-total lack of water on Venus. These dry conditions have been attributed to the effects of a runaway greenhouse followed by massive escape of hydrogen. A runaway greenhouse might have occurred on Venus because it receives about twice as much solar energy as the Earth. If Venus started with a water inventory similar to that of the Earth, the enhanced heating would have evaporated additional water into the atmosphere. Because water vapor is an effective greenhouse agent, it would trap some of the thermal radiation emitted by the surface and deeper atmosphere, producing an enhanced greenhouse warming and raising the humidity still higher. 
This feedback may have continued until the oceans were gone and the atmosphere contained several hundred bars of steam. (This pressure would depend on the actual amount of water on primitive Venus.) Water vapor would probably be the major atmospheric constituent, extending to high altitudes where it would be efficiently dissociated into hydrogen and oxygen by ultraviolet sunlight. Rapid escape of hydrogen would ensue, accompanied by a much smaller escape of the heavier deuterium and oxygen. The oxygen would react with iron in the crust, and also with any hydrocarbons that might have been present. Although such a scenario is reasonable, it cannot be proved to have occurred. The enhanced D/H ratio certainly points in this general direction, but it could have been produced from a much smaller endowment of water (as little as 1%) than is in the Earth’s oceans. … Black Holes The most massive objects in the universe exert the most gravity. However, the strength of a gravitational field near any given object also depends on the size of the object. The smaller the object, the stronger the field. The ultimate combination of large mass and small size is the black hole. What is a black hole? 38 One definition of a black hole is an object whose escape velocity equals or exceeds the speed of light. The idea was first proposed in the 1700s, when scientists hypothesized that Newton’s law of universal gravitation allowed for the possibility of stars that were so small and massive that particles of light could not escape. Thus, the star would be black. Can anything escape a black hole? According to British physicist Stephen Hawking (1942–), energy can slowly leak out of a black hole. This leakage, called Hawking radiation, occurs because the event horizon (boundary) of a black hole is not a perfectly smooth surface, but “shimmers” at a subatomic level due to quantum mechanical effects. At these quantum mechanical scales, space can be thought of as being filled with so-called virtual particles, which cannot be detected themselves but can be observed by their effects on other objects. Virtual particles come in two “halves,” and if a virtual particle is produced just inside the event horizon, there is a tiny chance that one “half” might fall deeper into the black hole, while the other “half” would tunnel through the shimmering event horizon and leak back into the universe. What does Hawking radiation do to black holes? Hawking radiation is a very, very slow process. A black hole with the mass of the Sun, for example, would take many trillions of years—far longer than the current age of the universe—before its Hawking radiation had any significant impact on its size or mass. Given enough time, though, the energy that leaks through a black hole’s event horizon becomes appreciable. Since matter and energy can directly convert from one to another, the black hole’s mass will decrease a corresponding amount. According to theoretical calculations, a black hole having the mass of Mount Everest—which, by the way, would have an event horizon smaller than an atomic nucleus—would take about 10 to 20 billion years to lose all its energy, and thus matter, back into the universe due to Hawking radiation. In the final instant, when the last bit of matter is lost, the black hole will vanish in a violent explosion that may release a huge blast of high-energy gamma rays. Perhaps astronomers may someday observe just such a phenomenon and confirm the idea of Hawking radiation as a scientific theory. 
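As a short aside on the escape-velocity definition of a black hole given above, the standard Newtonian back-of-the-envelope argument (a sketch, not part of the original passage) runs as follows:
\[
v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}}, \qquad v_{\mathrm{esc}} = c \;\;\Longrightarrow\;\; r_{s} = \frac{2GM}{c^{2}},
\]
so an object of mass M becomes "black" if all of its mass lies within the radius r_s. For one solar mass (about 2 × 10^30 kg) this gives r_s of roughly 3 km, and the expression happens to agree with the Schwarzschild radius obtained from general relativity; it is essentially the reasoning behind the eighteenth-century proposal mentioned above.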
39 Artificial Intelligence Artificial intelligence (AI) is a scientific field whose goal is to understand intelligent thought processes and behavior and to develop methods for building computer systems that act as if they are “thinking” and can learn from themselves. Although the study of intelligence is the subject of other disciplines such as philosophy, physiology, psychology, and neuroscience, people in those disciplines have begun to work with computational scientists to build intelligent machines. The computers offer a vehicle for testing theories of intelligence, which in turn enable further exploration and understanding of the concept of intelligence. The growing information needs of the electronic age require sophisticated mechanisms for information processing. As Richard Forsyth and Roy Rada (1986) point out, AI can enhance information processing applications by enabling the computer systems to store and represent knowledge, to apply that knowledge in problem solving through reasoning mechanisms, and finally to acquire new knowledge through learning. Stuart Russell and Peter Norvig (1995) have identified the following four approaches to the goals of AI: (1) computer systems that act like humans, (2) programs that simulate the human mind, (3) knowledge representation and mechanistic reasoning, and (4) intelligent or rational agent design. The first two approaches focus on studying humans and how they solve problems, while the latter two approaches focus on studying real- world problems and developing rational solutions regardless of how a human would solve the same problems. Programming a computer to act like a human is a difficult task and requires that the computer system be able to understand and process commands in natural language, store knowledge, retrieve and process that knowledge in order to derive conclusions and make decisions, learn to adapt to new situations, perceive objects through computer vision, and have robotic capabilities to move and manipulate objects. Although this approach was inspired by the Turing Test, most programs have been developed with the goal of enabling computers to interact with humans in a natural way rather than passing the Turing Test. Some researchers focus instead on developing programs that simulate the way in which the human mind works on problem-solving tasks. The first attempt to imitate human 40 thinking was the Logic Theorist and the General Problem Solver programs developed by Allen Newell and Herbert Simon. Their main interest was in simulating human thinking rather than solving problems correctly. Cognitive science is the interdisciplinary field that studies the human mind and intelligence. The basic premise of cognitive science is that the mind uses representations that are similar to computer data structures and computational procedures that are similar to computer algorithms that operate on those structures. Other researchers focus on developing programs that use logical notation to represent a problem and use formal reasoning to solve a problem. This is called the “logicist approach” to developing intelligent systems. Such programs require huge computational resources to create vast knowledge bases and to perform complex reasoning algorithms. Researchers continue to debate whether this strategy will lead to computer problem solving at the level of human intelligence. Still other researchers focus on the development of “intelligent agents” within computer systems. Russell and Norvig (1995, p. 
31) define these agents as "anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors." The goal for computer scientists working in this area is to create agents that incorporate information about the users and the use of their systems into the agents' operations.
…
Conclusion
The success of any computer system depends on its being integrated into the workflow of those who are to use it and on its meeting of user needs. A major future direction for AI concerns the integration of AI with other systems (e.g., database management, real-time control, or user interface management) in order to make those systems more usable and adaptive to changes in user behavior and in the environment where they operate.
Europa
Europa, one of the moons of Jupiter, appears to be an airless, ice-shrouded world. However, theoretical calculations suggest that under the ice surface of Europa there may be a layer of liquid water sustained by tidal heating as Europa orbits Jupiter. Imaging from the Galileo spacecraft showed features in the ice consistent with a subsurface ocean, and its magnetometer indicated the presence of a global layer of slightly salty liquid water. The surface of Europa is crisscrossed by streaks that are slightly darker than the rest of the icy surface. If there is an ocean beneath a relatively thin ice layer, these streaks may represent cracks where the water has come to the surface. There are many ecosystems on Earth that thrive and grow in water that is continuously covered by ice; these are found in both the Arctic and Antarctic regions. In addition to the polar oceans where sea ice diatoms perform photosynthesis under the ice cover, there are perennially ice-covered lakes on the Antarctic continent in which microbial mats based on photosynthesis are found in the water beneath a 4-m ice cover. The light penetrating these thick ice covers is minimal, about 1% of the incident light. Using these Earth-based systems as a guide, it is possible that sunlight penetrating through the cracks (the observed streaks) in the ice of Europa could support a transient photosynthetic community. Alternatively, if there are hydrothermal sites on the bottom of the Europan ocean, it may be possible that chemosynthetic life could survive there — by analogy to life at hydrothermal vent sites at the bottom of the Earth's oceans. The biochemistry of hydrothermal sites on Earth does depend on O2 produced at the Earth's surface. On Europa, a chemical scheme like that suggested for subsurface life on Mars would be appropriate (H2 + CO2). The main problem with life on Europa is the question of its origin. Lacking a complete theory for the origin of life, and lacking any laboratory synthesis of life, we must base our understanding of the origin of life on other planets on analogy with the Earth. It has been suggested that hydrothermal vents may have been the site for the origin of life on Earth, and if this is the case, it improves the prospects for life in a putative ocean on Europa. However, the early Earth contained many environments other than hydrothermal vents, such as surface hot springs, volcanoes, lake and ocean shores, tidal pools, and salt flats. If any of these environments were the locale for the origin of the first life on Earth, the case for an origin on Europa is weakened considerably.
Vertical Structure of the Atmosphere
Earth may differ in many ways from the other planets, but not in the basic structure of its atmosphere (Fig. 1).
Planetary exploration has revealed that essentially every atmosphere starts at the bottom with a troposphere, where temperature decreases with height at a nearly constant rate up to a level called the tropopause, and then has a stratosphere, where temperature usually increases with height or, in the case of Venus and Mars, decreases much less quickly than in the troposphere. It is interesting to note that atmospheres are warm both at their bottoms and their tops, but do not get arbitrarily cold in their interiors. For example, on Jupiter and Saturn there is significant methane gas throughout their atmospheres, but nowhere does it get cold enough for methane clouds to form, whereas in the much colder atmospheres of Uranus and Neptune, methane clouds do form. Details vary in the middle-atmosphere regions from one planet to another, where photochemistry is important, but each atmosphere is topped off by a high-temperature, low-density thermosphere that is sensitive to solar activity and an exobase, the official top of an atmosphere, where molecules float off into space when they achieve escape velocity. How Was Natural Gas Formed? The main ingredient in natural gas is methane, a gas (or compound) composed of one carbon atom and four hydrogen atoms. Millions of years ago, the remains of plants and animals (diatoms) decayed and built up in thick layers. This decayed matter from plants and animals is called organic material—it was once alive. Over time, the sand and silt changed to rock, covered the organic material, and trapped it beneath the rock. Pressure and heat changed some of this organic material into coal, some into oil (petroleum), and some into natural gas—tiny bubbles of odorless gas. In some places, gas escapes from small gaps in the rocks into the air; then, if there is enough activation energy from lightning or a fire, it burns. When people first saw the 43 flames, they experimented with them and learned they could use them for heat and light. How do we get natural gas? The search for natural gas begins with geologists, who study the structure and processes of the Earth. They locate the types of rock that are likely to contain gas and oil deposits. Today, geologists’ tools include seismic surveys that are used to find the right places to drill wells. Seismic surveys use echoes from a vibration source at the Earth’s surface (usually a vibrating pad under a truck built for this purpose) to collect information about the rocks beneath. Sometimes it is necessary to use small amounts of dynamite to provide the vibration that is needed. Scientists and engineers explore a chosen area by studying rock samples from the earth and taking measurements. If the site seems promising, drilling begins. Some of these areas are on land but many are offshore, deep in the ocean. Once the gas is found, it flows up through the well to the surface of the ground and into large pipelines. Some of the gases that are produced along with methane, such as butane and propane (also known as “by-products”), are separated and cleaned at a gas-processing plant. The byproducts, once removed, are used in a number of ways. For example, propane can be used for cooking on gas grills. Dry natural gas is also known as consumer-grade natural gas. In addition to natural gas production, the US gas supply is augmented by imports, withdrawals from storage, and supplemental gaseous fuels. Most of the natural gas consumed in the United States is produced in the United States. 
Some is imported from Canada and shipped to the United States in pipelines. A small amount of natural gas is shipped to the United States as liquefied natural gas (LNG). We can also use machines called "digesters" that turn today's organic material (plants, animal wastes, etc.) into natural gas. This process replaces waiting for millions of years for the gas to form naturally.
Biotechnology
The term "biotechnology" was coined in 1919 by Hungarian scientist Karl Ereky to mean "any product produced from raw materials with the aid of living organisms." In its broadest sense, biotechnology dates from ancient times. Approximately 6000 B.C.E., the Sumerians and Babylonians discovered the use of yeast in making beer. About 4000 B.C.E., the Egyptians employed yeast to make bread and the Chinese used bacteria to make yogurt. The modern sense of biotechnology dates from the mid-1970s, when molecular biologists developed techniques to isolate, ide