Appendix B Development Tools

Problem Statement

Although many proprietary packages are available, open source packages are so abundant, giving their users unprecedented flexibility and freedom of choice, that most users can find an application that exactly meets their needs. However, it is also this pool of choices that drives individuals to hold a wide range of preferences and biases. As a result, open source software (and software in general) is flooded with religious wars over programming languages, editors, licenses, etc. Some are worthwhile debates; others are meaningless flamewars. Throughout Appendix B and the following appendix, Appendix C, we intend to summarize a few essential tools (from our point of view) that may help readers learn the aspects and how-to of (1) development tools, covering programming, debugging, and maintaining, and (2) network experiment tools, covering name-addressing, perimeter-probing, traffic-monitoring, benchmarking, simulating/emulating, and finally hacking.

Section B.1 guides readers to begin the development journey with programming tools. A good first step would be writing the first piece of code with the Vi IMproved (vim) text editor, then compiling it into a binary executable with the GNU C compiler (gcc), and furthermore offloading the repetitive compiling steps to the make utility. The old 80/20 rule still applies to programming: 80% of your code comes from 20% of your efforts, leaving the other 80% of your efforts to chase whatever is buggy about your program. Therefore you will need the debugging tools discussed in Section B.2: source-level debugging with the GNU Debugger (gdb), a GUI-fashioned approach with the Data Display Debugger (ddd), and debugging the kernel itself with the remote Kernel GNU Debugger (kgdb). As the sources of contribution become more scattered and software dependencies grow more sophisticated nowadays, Section B.3 illustrates how co-developers should agree on a version control system, the Concurrent Versions System (cvs), to achieve ease of collaboration and avoid development chaos, and how the Red Hat Package Manager (rpm) gives end-users ease of installation by handling packaging details transparently.

Since the world is tightly networked, users may equip themselves with the tools introduced in Appendix C for richer interactions with others. Section C.1 discusses how name-addressing reveals the Internet's who-is-who, using host to query DNS, and the local (e.g. LAN) who-is-who, using the Address Resolution Protocol (arp) utility. Certainly, there are times the network does not work as expected; one should then employ the perimeter-probing tools discussed in Section C.2, either ping for availability or traceroute for locating any bottleneck along the route. Once troubleshooting is done, packets begin to flow and traffic is generated, so Section C.3 teaches readers how to monitor traffic: packets can be dumped with tcpdump to examine their headers/payloads in great detail, and useful network statistics and information can be collected with netstat. As performance issues gain importance, a connected network can hardly be considered workable until its benchmarks have been measured. Hence, Section C.4 introduces two benchmarking tools, one for host-to-host throughput analysis, Test TCP (ttcp), and one for Web server performance analysis, WebStone. Furthermore, it is not always practical to wire up a complete real network, especially for research purposes; in such cases one should adopt either simulation, such as the Network Simulator (ns), or emulation, such as NIST Net, both discussed in Section C.5. Finally, a few hacking methodologies, including exploit-scanning with nessus, sniffing with ethereal, and Distributed Denial-of-Service (DDoS) with TFN-style attacks, are investigated at the end of that appendix; hacking might be a controversial topic, yet it is placed here as a complement to Chapter 7.

B.1 Programming

With the advent of computers, writing a program (or programming), which instructs the computer to do a certain task, has become an art. Quite a few people not only get their work done but also program in a graceful way and with an individual style. Nevertheless, the essence of programming languages shall not be discussed here; instead, you should consult a programming language text for details.

B.1.1 Text Editor – vim

Irrespective of your choice of programming language, you will need an editor, which is a program that can be used to create and modify text files. It plays an essential role in a programmer's life, since a clumsy text editor wastes your time on merely handling it, while a handy one offloads those efforts and lets you spend more time on thinking and innovating.

What is vim

Out of numerous text editors, e.g. pico, joe, and emacs, Vi IMproved (vim) is perhaps (a personal bias) the prevailing text editor nowadays. It strikes a balance between ease of use and power, making it handier than emacs yet far more feature-rich than pico. Vim has an extensible syntax dictionary, which highlights the syntax in different colors for documents it recognizes, including C code and HTML files. Advanced users use vim to compile their code, to write macros, to use it as a file browser, and even to write a game, e.g. TicTacToe. With its popularity, a ported Windows version of vim, known as gvim, is now available.

How to vim

Before getting started with vim, one should be aware that vim operates in two phases (or modes), instead of the single mode of pico or any other ordinary text editor. Try this: start vim (type vim to edit a new file or vim filename to open an existing file) and edit for a minute. Unless you were lucky enough to hit a special character, you will find nothing on the screen. Then you try to move around by pressing the arrow keys, and the cursor stays unmoved. Things get even worse when you cannot find a way out (the solution is provided later, so just read on). These initial barriers do frustrate quite a few newbies. However, the world begins to shine as you get to know when to insert text and when to issue a command. In Normal mode (command mode), characters are treated as commands, meaning they trigger special actions, e.g. pressing h, j, k, and l moves the cursor left, down, up, and right, respectively. In Insert mode, however, characters are simply inserted as text. Obviously, editing becomes chaotic when you have a mode confusion; you should then always press <Esc> to escape from such chaos and get back to Normal mode. Most commands and keys can be concatenated into a more sophisticated operation, [#1] command/key [#2] target, where anything enclosed within brackets is optional; #1 is an optional number, e.g. 3, specifying that the command/key is to be done 3 times; the command/key is any valid vim operation, e.g. y, which yanks (copies) text; #2 is an optional number, similar to #1, specifying the number (or range) of targets affected by the command/key; and target is the text that you want to apply the command/key on, e.g. G for end of file. Although most commands are issued (simply press them) on the main screen, some colon commands (commands that start with a colon) are keyed at the very bottom of the screen. When dealing with these colon commands, you type a colon, which moves the cursor to the last line of the screen, then issue your command string and terminate it by pressing the <Enter> key, e.g. :wq saves the current file and quits. The overall operating modes of the vim text editor are illustrated in Figure B.1, while the more in-depth motion, yanking, and deleting commands are left to Table B.1.
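For instance, a few commands composed under this grammar (all standard vim bindings; the line numbers are only for illustration):

3dd      delete three lines
y2w      yank (copy) the next two words
d$       delete from the cursor to the end of the line
dG       delete from the current line to the end of the file
5j       move the cursor down five lines
:10,20y  yank lines 10 through 20 (a colon command)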

Figure B.1 Operating modes of the vim text editor

Category   Command      Effect
Motion     h, j, k, l   left, down, up, right
           w, W         forward to next word, next blank-delimited word
           e, E         forward to end of word, of blank-delimited word
           b, B         backward to beginning of word, of blank-delimited word
           (, )         sentence back, forward
           {, }         paragraph back, forward
           0, $         beginning, end of line
           1G, G        beginning, end of file
           nG or :n     line n
           fc, Fc       forward, back to character c
           H, M, L      top, middle, bottom of screen
Yanking    yy           copy current line
           :y           copy current line
           Y            copy until end of line
Deleting   dd           delete current line
           :d           delete current line
           D            delete until end of line

Table B.1 Important keys for cursor movement and text editing


B.1.2 Compiler – gcc

With a text editor at hand, one can begin to write a program. However, this program is likely to be written in a high-level language, which is not machine-understandable and hence far from machine-executable. Therefore, you need a piece of software called a compiler, which converts a program (source code) written in a high-level language (C/C++, etc.) into binary machine code (object code). Notably, the translation from source code to object code only needs to be performed once; the result is stored and used repeatedly at this almost-machine level. Since it is common to incorporate old routines already compiled, a second-stage process using a utility called a linker is often applied to the compiled code to create the final executable application. This multi-stage process of the gcc compiler is shown in Figure B.2.
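The individual stages can be observed by stopping gcc early with its standard flags (a minimal sketch; main.c is a placeholder file name):

gcc -E main.c -o main.i    # preprocess only: expand #include and macros
gcc -S main.i              # compile to assembly, producing main.s
gcc -c main.s              # assemble into the object file main.o
gcc -o prog main.o         # link objects and libraries into the executable prog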

What is gcc

The GNU C compiler (gcc), which by default expects ANSI C 1, is a well-known C compiler on most Unix systems. It was primarily written by Richard Stallman, who founded a tax-exempt charity, the Free Software Foundation (FSF), that raises funds for work on the GNU Project. With the efforts of other gcc advocates, gcc has integrated compilers for several languages (C/C++, Fortran, Java), and the name GCC has instead come to stand for the GNU Compiler Collection. However, this section of Appendix B discusses the gcc options for the C language. As illustrated in Figure B.2, gcc does a lot more than simply compile. It preprocesses the source files, … to be continued

1 ANSI C is more strongly-typed than the original C, and will likely help you catch some bugs earlier during coding.

Figure B.2 The flowchart of gcc.

How to gcc

All of the source files for your program must have names ending in ".c". A few companion files may end in ".h", meaning header files that contain various constants, macros, and data structure definitions; they are to be included in your program. Suppose you are writing a program and have decided to split it into two source files. The main source file might be called "main.c" and the other source file might be called "sub.c". Furthermore, these two source files need to share a data structure definition, which is contained in the header file "incl.h". To compile your program, you may simply type:

gcc main.c sub.c

which creates an executable program named (by default) "a.out". If you prefer, you may specify the name of the executable, e.g. "prog", as follows:

gcc -o prog main.c sub.c

As you can see, things are this simple. However, this method can be very inefficient when you are re-compiling frequently, especially if you are only making changes to one source file at a time. Instead, you should compile the program as follows:

gcc -c main.c
gcc -c sub.c
gcc -o prog main.o sub.o

The first two lines create the object files "main.o" and "sub.o", and the third line links the objects together into an executable. If you were then to make a change to the "sub.c" file, you could re-compile your program by just typing the last two lines. This all might seem a little silly for the example above, but if you had ten source files instead of two, the latter method would save you a lot of time. Actually, the entire compilation process can be automated, as you will see in the following section.
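To make the example concrete, here is one possible minimal content for the three files named above (a sketch; the struct and the function sub() are our own invention, not prescribed by the text):

/* incl.h: the shared data structure definition */
struct point { int x, y; };
void sub(struct point *p);

/* main.c */
#include <stdio.h>
#include "incl.h"
int main(void)
{
    struct point p = { 1, 2 };
    sub(&p);                        /* defined in sub.c */
    printf("(%d, %d)\n", p.x, p.y);
    return 0;
}

/* sub.c */
#include "incl.h"
void sub(struct point *p)
{
    p->x += 1;                      /* trivial manipulation of the shared struct */
    p->y += 1;
}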

B.1.3 Autocompile – make

While a successful (error-free) compilation is certainly a great joy, the iterative compilation process during the development of a program is a chore for the programmer. An executable program may be built from a group of .c files, requiring each of the .c files to be compiled with gcc into a .o file, and then the .o files to be linked together (possibly with some additional libraries) to form the executable. This process can be tedious, and one is likely to make mistakes. That is why make is worth a mention.

What is make

make is a program that provides a relatively high-level way to specify the relationships (or the steps that should be taken to automate the process) between the source files necessary to build a derived object (such as a program). With make, one reduces the likelihood of error and simplifies a programmer's life. Notably, make provides implicit rules, or shortcuts (which can be extended), for many common actions (such as how to turn .c files into .o files), so tasks such as compiling C programs, and even compiling the kernel, become simple.

How to make

To be continued… Actually, the entire compilation process can be automated by using a makefile, for example:

# Any line beginning with a `#' sign is a comment and will be
# ignored by the "make" command. To generate the executable
# program, simply type "make".
# This makefile says that prog depends on two files, main.o and
# sub.o, and that they in turn depend on their corresponding
# source files (main.c and sub.c), in addition to the common header
# file, incl.h.
# Note that each of the gcc directives MUST be preceded by a
# tab character (not a number of spaces) for make to work.

prog: main.o sub.o
	gcc -o prog main.o sub.o

main.o: incl.h main.c
	gcc -c main.c

sub.o: incl.h sub.c
	gcc -c sub.c
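With this makefile in place, a session might look as follows (a hypothetical transcript; make echoes each command it runs):

$ make                 # first build: everything is out of date
gcc -c main.c
gcc -c sub.c
gcc -o prog main.o sub.o
$ touch sub.c          # pretend sub.c was edited
$ make                 # only sub.c is recompiled before relinking
gcc -c sub.c
gcc -o prog main.o sub.o
$ make                 # nothing changed, so nothing is rebuilt
make: `prog' is up to date.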

B.2 Debugging

When writing a program, unless it is way too trivial, one must have struggled hard to locate and remove program faults from a program 2. This digging-for-bugs process is known as debugging, and the tool being used is a debugger. Generally speaking, the purpose of a debugger is to let you investigate what is going on inside a program while it is running, or what a program was doing at the moment it crashed. Since a workman must first sharpen his tools if he is to do his work well, the following sections (see Figure B.3) introduce a few useful debuggers that aid in program development.

2 Surprisingly, some beta programs are released commercially as Version 1.0, letting the innocent customers who buy them find the bugs!

Figure B.3 The roadmap of debugging: on the local host, gdb provides source-level debugging (Section B.2.1) and DDD provides a GUI on top of gdb (Section B.2.2); for kernel debugging, gdb on the local host connects over an RS-232 line to kgdb in the Linux kernel of the target host (Section B.2.3).

B.2.1 Debugger – gdb

The traditional debugger used in Linux/FreeBSD is GDB, the GNU Project debugger. It is designed to work with a variety of languages, but is primarily targeted at C and C++ developers. While GDB itself is a command-line interface, a few graphical front-ends to it exist, such as ddd. In addition, GDB can also be run over serial links for remote debugging, as in kgdb.

What is gdb

GDB can do four main kinds of things (plus other things in support of these) to help you catch bugs in the act:
1. Start your program, specifying anything that might affect its behavior.
2. Make your program stop on specified conditions.
3. Examine what has happened when your program has stopped.
4. Change things in your program, so you can experiment with correcting the effects of one bug and go on to learn about another.

How to gdb

You can read the official gdb manual at your leisure to learn all about GDB. However, a handful of commands are enough to get started with the debugger, and this section illustrates those commands. Before loading the executable into GDB, the target program should be compiled with the -g flag, i.e. gcc -g -o test test.c. Then start gdb with the command gdb test; you should see a GDB prompt following it. Next, list the program by issuing the list command, which (by default) lists the first 10 lines of the source code of the current function; subsequent calls to list display the next 10 lines, and so on. When trying to locate a bug, you could run the program once upon entering gdb and make sure the bug is reproduced. Then you can backtrace to see a stack trace, which usually reveals the trouble-maker, e.g. a suspected function call at line 20. Now you can use the list command again to identify the location of the problem, and carefully make use of setting a breakpoint and printing the variables. You should then be able to locate the bug, and finally quit gdb. By the way, GDB has a set of info pages and also built-in help, which can be accessed via the help command.
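Put together, such a session might look like the following transcript (a sketch; test.c, the line number 20, and the variable name count are all hypothetical):

$ gcc -g -o test test.c      # -g embeds debug symbols
$ gdb test
(gdb) run                    # reproduce the crash first
(gdb) backtrace              # the stack trace points at the trouble-maker
(gdb) list 20                # show the source around the suspect line
(gdb) break 20               # set a breakpoint there
(gdb) run                    # run again, stopping at the breakpoint
(gdb) print count            # inspect a variable's value
(gdb) quit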

B.2.2 GUI Debugger – ddd To be continued…

What is ddd

gdb and many others are command-line debuggers, which are not very friendly to use. Thankfully, DDD (the Data Display Debugger) is a visual front-end to all of these debuggers. Besides the existing gdb capabilities, DDD has become famous through its interactive graphical data display, where data structures are displayed as graphs.

How to ddd

To be refined… To use DDD you must compile your code with debug information included; in Unix that means including the -g option on the gcc compile command. If you have never run DDD before, you may have to tell DDD to use the gdb debugger by typing "ddd --gdb" at the command-line prompt. You only have to do this once; you can then quit DDD, and when you run it later it will automatically use the gdb debugger. Subsequently, to run DDD you type "ddd prog", where prog is the name of your program. When you do this, a window like the one in Figure B.4 pops up.


Figure B.4 Screenshot: the main window of ddd

Figure B.4 is the main DDD screen. At the center of all things is the source code. Following the usual standards for debugger GUIs, the current execution position is indicated by a green arrow; breakpoints are shown as stop signs. You can navigate around the code using the Lookup toolbar button or the Open Source dialog from the File menu. Double-clicking on a function name leads you to its definition. Using the Undo and Redo buttons, you can navigate to previous and later positions, similar to your web browser. You can set and edit breakpoints by double-clicking in the white space to the left; to step through your program or to continue execution, use the floating command tool on the right. Command-line aficionados will find a debugger console at the bottom. If you need anything else, try the Help menu for detailed instructions.

Moving the mouse pointer onto an active variable shows its value in a little pop-up. Snapshots of more complex values can be "printed" in the debugger console. To view a variable permanently, though, use the Display button. This creates a small permanent window (a display) which shows the variable name and value; these displays are updated every time the program changes its state. To access a variable's value, you must bring the program into a state where the variable is actually alive, that is, within the scope of the current execution position. Typically, you set a breakpoint within the function of interest, run the program, and display the function's variables.

To actually visualize data structures (that is, data as well as relationships), DDD lets you create new displays out of existing displays. For instance, if you have displayed a pointer variable list, you can dereference it and view the value it points to simply by double-clicking on the pointer value. This creates a new display *list, with an arrow pointing from list to *list. You can repeat this, say, with list->self, list->next, and the like, until you eventually see the entire list (Figure B.5). Each new display is automatically laid out in a fashion that supports simple visualization of lists and trees; for instance, if an element already has a predecessor, its successor will be laid out in line with these two. You can always move elements around manually simply by dragging and dropping the displays. Also, DDD lets you scroll around, lay out the structure, change values manually, or watch them change while the program runs. An Undo/Redo capability even lets you redisplay previous and later states of the program, so that you can see how your data structure evolves.
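The same displays can also be created from the debugger console with DDD's graph display command (a sketch; list is the pointer variable from the example above):

(gdb) graph display list       # display the pointer itself
(gdb) graph display *list      # display the value it points to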

Figure B.5 A list in ddd

B.2.3 Kernel Debugger – kgdb To be continued…

What is kgdb

To be refined… kGDB is a source-level debugger for the Linux kernel, which provides a mechanism to debug the kernel using gdb, the debugger introduced earlier. kGDB is a patch to the kernel (and you need to recompile the kernel once it is patched) that allows a user running gdb on a remote host to connect (during the boot process, so that debugging can begin as early as possible) to a target (over a serial RS-232 line) running the kGDB-extended kernel. Kernel developers can then "break" into the kernel of the target, set breakpoints, examine data, and perform the other relevant debugging functions one would expect. In fact, it is pretty much similar to how one would use gdb on a user-space program. Being a kernel patch, kGDB adds the following components to a kernel:
1. gdb stub - The gdb stub is the heart of the debugger. It is the part that handles requests coming from gdb on the development machine. It has control of all processors in the target machine while the kernel running on it is inside the debugger.
2. Modifications to fault handlers - The kernel gives control to the debugger when an unexpected fault occurs; a kernel which does not contain the gdb stub simply panics on unexpected faults. The modifications to the fault handlers thus allow kernel developers to analyze unexpected faults.
3. Serial communication - This component uses a serial driver in the kernel and offers an interface to the gdb stub in the kernel. It is responsible for sending and receiving data over the serial line, and also for handling the control-break requests sent by gdb.

How to kgdb To be continued…

To use kGDB, download the appropriate patch (matching your version of the Linux kernel) from the official website of the kGDB project, which has been supported by SGI.

[need to refine] The kGDB patch adds a config option named CONFIG_GDB to enable gdb debugging. To force the kernel to pause the boot process and wait for a connection from gdb, the parameter "gdb" should be passed to the kernel. This can be done by typing "gdb" after the name of the kernel on the LILO command line. The patch defaults to using ttyS1 at a baud rate of 38400; these parameters can be changed with "gdbttyS=" and "gdbbaud=" on the command line.
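For instance, a boot line might look like this (hypothetical; it assumes the kernel image is labeled linux in the LILO configuration):

LILO: linux gdb gdbttyS=0 gdbbaud=38400    # pause and wait for gdb on ttyS0 at 38400 baud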

After the kernel has booted up to the point where it is waiting for a connection from the gdb client, there are two things that need to be done from gdb: set remotebaud and target remote. You can also create your own command in .gdbinit to do this all in one step. Here is an example from my .gdbinit:

define rmt
set remotebaud 38400
target remote /dev/ttyd2
end

B.3 Maintaining To be continued…

B.3.1 Version Control – cvs To be continued…

What is cvs

To be finished… CVS is a source code control system, much like RCS. CVS is also an acronym: it stands for Concurrent Versions System. Unlike RCS, CVS is network-transparent, that is, the work area and the database area are completely separate. This easily allows for multiple work areas, whereas other version control systems require customized scripts and ugly hacks to enable multiple workspaces. This means that every developer can check out their own copy of the tree into their home directory. In addition to separating the work area from the database area, CVS is designed to handle multiple developers working on the same project at the same time. Thus, with CVS, developers do not need to obtain a lock on a source file before they can modify it. Instead, CVS keeps track of which versions a developer checked out. If another developer then makes changes to a file and commits those changes to the repository (the database that CVS maintains), the first developer will have to update their local source tree before they can commit their own changes. Fortunately, CVS automates the actual merging, making this process very simple.

Figure B.6 An OSSD process: a personal itch prompts a look for similar projects; if none exists, initiate a project, otherwise join that project; use mailing lists for announcements and bug tracking (with OpenPGP); keep the sources under CVS version control; write documents and manuals (or do little document writing); decide or vote for a license model; accept patches and modifications (by vote or dictatorship); and release an official version in the foreseeable future.

How to cvs To be continued…

Setup a repository (under sh and its derivative shells):
  CVSROOT=/usr/local/cvs (or wherever you desire)
  export CVSROOT
Starting a new project:
  cvs import -m "log message" project_name vendor_tag release_tag
  e.g. cvs import -m "my first cvs project" myproject jrandom start
Revisions:
  Checkout (co), e.g. cvs checkout myproject
  Update (up), e.g. cvs update project.c
  Commit (ci), e.g. cvs commit project.c
  Diff, e.g. cvs diff project.c
History browsing:
  Status, e.g. cvs status project.c
  Log messages, e.g. cvs log project.c
Release:
  Release (delete), e.g. cvs release -d myproject
Refer to cvs --help-commands for a list of commands.
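A typical first session, assuming the repository location above (a sketch; cvs init creates the repository if it does not yet exist, and project.c follows the examples in the list):

$ export CVSROOT=/usr/local/cvs
$ cvs init                          # one-time repository setup
$ cd myproject
$ cvs import -m "my first cvs project" myproject jrandom start
$ cd .. && cvs checkout myproject   # obtain a private work area
$ cd myproject
$ vi project.c                      # hack away
$ cvs update project.c              # merge others' commits first
$ cvs commit -m "fix a bug" project.c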

B.3.2 Package Management – rpm

To be continued…

What is rpm

To be finished… The Red Hat Package Manager (RPM) is a toolset used to build and manage software packages on UNIX systems. Distributed with Red Hat Linux and its derivatives, RPM also works on any UNIX, as it is open source. However, finding RPM packages for other flavors of UNIX, such as Solaris or IRIX, may prove difficult. Package management is rather simple in its principles, though it can be tricky in its implementation. Briefly, it means the managed installation of software, the management of installed software, and the removal of software packages from a system, all in a simplified manner. RPM arose out of the need to do this effectively, when no other meaningful solution was available. RPM uses its own binary file format, unlike some other UNIX software package managers. This can be problematic if you find yourself needing to extract one component from the package and you do not have the RPM utility handy. Luckily, a tool like Alien exists to convert RPM into other formats; through such tools it is possible to get to a file format you can manage using, say, tar or ar. The naming scheme of RPM files is itself a standardized convention: RPMs have the format (name)-(version)-(build).(platform).rpm. For example, the name cat-2.4-7.i386.rpm would mean an RPM for the utility "cat", version 2.4, build 7, for the i386 (Intel x86) platform. When the platform name is replaced by "src", it is a source RPM.

How to rpm To be continued…

rpm -ivh   install (with verbose output and hash-mark progress)
rpm -Uvh   upgrade
rpm -qa    list all installed rpms (query all), e.g. rpm -qa | grep bind-util
rpm -qf    query which rpm a given file belongs to, e.g. rpm -qf /usr/bin/
rpm -qpi   query package information, e.g. rpm -qpi /mnt/cdrom/RedHat/RPMS/*.rpm | grep -A 12 nslookup
rpm -qpl   list the files in a queried package, e.g. rpm -qpl /mnt/cdrom/RedHat/RPMS/bind-8.2.2_P3-1.i386.rpm

Appendix C Network Experiment Tools

C.1 Name-Addressing To be continued…

C.1.1 Internet’s Who-is-Who – host To be continued…

What is host To be continued…

How to host To be continued…

C.1.2 LAN’s Who-is-Who – arp To be continued…

What is arp To be continued…

How to arp To be continued…

C.2 Perimeter-Probing

C.2.1 Ping for Living – ping To be continued…

What is ping To be continued…

Figure C.1. ICMP format for echo request/reply

How to ping To be continued…

C.2.2 Find the Way – traceroute To be continued…

What is traceroute To be continued…

How to traceroute To be continued…

C.3 Traffic-Monitoring To be continued…

C.3.1 Dump Raw Data – tcpdump To be continued…

What is tcpdump To be continued…

Figure C.2. The flowchart of tcpdump: copies of in/out packets from the device drivers are timestamped and handed to BPF, whose per-process filters (filter 1, 2, 3) select packets into per-process buffers for tcpdump and other user-space processes, in parallel with normal delivery to the protocol stack in kernel space.

How to tcpdump To be continued…


C.3.2 Collect Network Statistics – netstat To be continued…

What is netstat To be continued…

How to netstat To be continued…

C.3.3 Sample Scenario To be continued…

Figure C.3. A ping's life: (1) the ping client resolves the hostname to an IP address via DNS (host); (2) the source consults its routing table (route/netstat); (3) the ICMP echo request is carried in an IP datagram; (4) the source broadcasts an ARP request for the destination's Ethernet address, and (5) the destination returns a unicast ARP reply with the corresponding Ethernet address (arp); (6) the ICMP request is delivered to (7) the ping server, while BPF (tcpdump) sees copies of the in/out packets along the way.

C.4 Benchmarking To be continued…

C.4.1 Host-to-Host Throughput – ttcp To be continued…

What is ttcp

To be continued… ttcp tests network TCP and UDP throughput. A separate use of this tool is to create network pipes for transferring user data. Cisco routers now incorporate a version of this tool, enabling you to easily evaluate network performance.


How to ttcp

To be continued…

Start the receiver (router or remote server with discard port):

ttcp -r [-options > out]    // see ttcp --help for options
[benson@sgw ttcp]# ttcp -r
ttcp-r: buflen=8192, nbuf=2048, align=16384/0, port=5001 tcp
ttcp-r: socket

Start the transmitter:

ttcp -t [-options] host [ < in ]
[benson@sgw ttcp]$ ttcp -t sgw.cis.nctu.edu.tw < test
ttcp-t: buflen=8192, nbuf=2048, align=16384/0, port=5001 tcp -> sgw.cis.nctu.w
ttcp-t: socket
ttcp-t: connect

Statistics output at receiver/transmitter:

Receiver side:
ttcp-r: accept from source_host
This is a ttcp test!
ttcp-r: 22 bytes in 0.00 real seconds = 13.55 KB/sec +++
ttcp-r: 2 I/O calls, msec/call = 0.81, calls/sec = 1261.83
ttcp-r: 0.0user 0.0sys 0:00real 0% 0i+0d 0maxrss 0+1pf 0+0csw

Transmitter side:
ttcp-t: 22 bytes in 0.00 real seconds = 120.70 KB/sec +++
ttcp-t: 1 I/O calls, msec/call = 0.18, calls/sec = 5617.98
ttcp-t: 0.0user 0.0sys 0:00real 0% 0i+0d 0maxrss 0+1pf 0+0csw

Throughput analysis
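As a sanity check on these numbers (our own arithmetic, not part of the ttcp output): the receiver reports 2 I/O calls at 0.81 msec/call, about 1.6 ms in total, and 22 bytes / 1.6 ms ≈ 13.5 KB/sec, matching the reported 13.55 KB/sec; the transmitter finishes its single 0.18 ms write sooner, hence its higher apparent 120.70 KB/sec.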

C.4.2 Web Server Performance – WebStone To be continued…

What is WebStone To be continued…


Figure C.4. The network architecture of WebStone.

How to WebStone To be continued…

C.5 Simulation and Emulation To be continued…

C.5.1 Simulate the Network – ns

To be continued… Simulating the network with the Network Simulator (ns):
Pros: cheap (less real infrastructure) and quick (ready-made modules) to assemble; large-scale (given sufficient computing resources) and reproducible (everything is coded) tests.
Cons: code must be redone for the simulation environment; the implementation and environment may differ considerably from, and poorly represent, the real ones.

What is ns

NS, which began in 1989 as a variant of the REAL (Realistic and Large) network simulator, is a collaborative simulation platform that provides common references and test suites; it simulates packet-level, discrete events at the link layer and above, under both wired and wireless network conditions. DARPA has supported NS through the VINT (Virtual InterNetwork Testbed) project since 1995, and currently through SAMAN (Simulation Augmented by Measurement and Analysis for Networks) and CONSER (Collaborative Simulation for Education and Research). A few of its powerful features include scenario generation, which creates customized simulation environments, and visualization with the aid of nam (the Network Animator). Though it is a simulator, current NS can also do some emulation on certain platforms, e.g. FreeBSD. Notably, NS is implemented in two languages, C++ (the ns core) and OTcl (the ns configuration), as a trade-off between run time and iteration time.

How to ns

To be continued… First of all, you build a network model: create (1) the ns object, (2) nodes, (3) links, and then group them into (4) a LAN (if the link is point-to-point, skip (4)). Then you build a traffic model: create (1) a connection, either TCP or UDP; generate (2) traffic, e.g. FTP over TCP or CBR over UDP; and add (3) an error model and (4) a scheduler. Finally, enable tracing and output the data in nam format for analysis, as in the sketch below.
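The following OTcl script sketches these steps for a two-node point-to-point topology with CBR-over-UDP traffic (our own minimal example in ns's configuration language; it omits the LAN and error-model steps):

# create the ns object and a nam trace file
set ns [new Simulator]
set nf [open out.nam w]
$ns namtrace-all $nf

# nodes and a point-to-point link
set n0 [$ns node]
set n1 [$ns node]
$ns duplex-link $n0 $n1 1Mb 10ms DropTail

# traffic model: CBR over UDP from n0 to n1
set udp0 [new Agent/UDP]
$ns attach-agent $n0 $udp0
set cbr0 [new Application/Traffic/CBR]
$cbr0 attach-agent $udp0
set null0 [new Agent/Null]
$ns attach-agent $n1 $null0
$ns connect $udp0 $null0

# schedule the traffic and the end of the simulation
$ns at 0.5 "$cbr0 start"
$ns at 4.5 "$cbr0 stop"
$ns at 5.0 "finish"
proc finish {} {
    global ns nf
    $ns flush-trace
    close $nf
    exec nam out.nam &    ;# visualize the trace
    exit 0
}
$ns run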

C.5.2 Emulate the Network – NIST Net

To be continued… Emulating the network with the network emulator NIST Net:
Pros: an intermediate solution between a real network and a simulated one; any degree of network conditions in a reproducible test.
Cons: scalability is bounded by real-time timers/computation, and only limited statistics are available for further analysis.

What is NIST Net

To be continued… NIST Net is a network emulator that provides simple user entry of network parameters (e.g. delay, loss, jitter), typed rather than coded by the user, for emulating a wide range of network types with a small lab setup. With NIST Net, one can impose quite a few network effects, including packet delay (fixed/variable), packet reordering (delay variance), packet loss, packet duplication, and bandwidth limitation.
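Besides the GUI shown in Figure C.5, NIST Net ships a command-line front end, cnistnet; a hypothetical invocation (the addresses are made up, and the exact flags should be checked against the usage page) might be:

cnistnet -u                                          # bring the emulator up
cnistnet -a 10.0.0.1 10.0.0.2 --delay 100 --drop 2   # 100 ms delay, 2% loss on this flow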

Figure C.5. Screenshot: the main window of NIST Net.

How to NIST Net To be continued…

C.6 Hacking To be continued…

C.6.1 Exploits Scanning – nessus To be continued…

What is nessus To be continued…


Figure C.6. The network architecture of nessus.

How to nessus To be continued…

C.6.2 Packet Sniffing – ethereal

To be continued… Sniffing is a series of processes that records, interprets, and saves for analysis all the packets being sent across the network.

What is ethereal To be continued…

Figure C.7. Screenshot: the main window of ethereal.

How to ethereal To be continued…

C.6.3 Storm of Attacks – TFN-style To be continued…

What is TFN To be continued…

How to prevent TFN To be continued…

Pitfalls and Misleading

Spoofing vs. Smurfing

Further Reading

Other Textbooks
1. P. Eyler, Networking Linux: A Practical Guide to TCP/IP, 1st edition, New Riders, 2001.
2. T. Maginnis, Sair Linux and GNU Certification, Level 1: Networking, 1st edition, John Wiley & Sons, 2001.
3. K. Fogel, Open Source Development with CVS, The Coriolis Group, 1999.
4. W. R. Stevens, UNIX Network Programming, Prentice Hall, 1998.

Online Links
1. The VIM (Vi IMproved) Home Page, http://www.vim.org/
2. The GCC home page, http://gcc.gnu.org/
3. GNU Make, http://www.gnu.org/software/make/make.html
4. GDB, http://sources.redhat.com/gdb/
5. DDD, http://www.gnu.org/manual/ddd/
6. kgdb, http://kgdb.sourceforge.net/
7. CVS Home, http://www.cvshome.org/
8. RPM, http://www.rpm.org/
9. ttcp, http://www.ccci.com/product/network_mon/tnm31/ttcp.htm
10. WebStone, http://www.mindcraft.com/webstone/
11. ns-2, http://www.isi.edu/nsnam/ns/index.html
12. NIST Net, http://snad.ncsl.nist.gov/itg/nistnet/usage.html
13. Nessus, http://www.nessus.org/
14. Ethereal, http://www.ethereal.com/
