Chapter 4: Functions and Program Structure

page 67

Deep paragraph:

Functions break large computing tasks into smaller ones, and enable people to build on what others have done instead of starting over from scratch. Appropriate functions hide details of operation from parts of the program that don't need to know about them, thus clarifying the whole, and easing the pain of making changes.
Functions are probably the most important weapon in our battle against software complexity. You'll want to learn when it's appropriate to break processing out into functions (and also when it's not), and how to set up function interfaces to best achieve the qualities mentioned above: reusability, information hiding, clarity, and maintainability.

The quoted sentences above show that a function does more than just save typing: a well-defined function can be re-used later, and eases the mental burden of thinking about a complex program by freeing us from having to worry about all of it at once. For a well-designed function, at any one time, we should have to think about only one of two things:

  1. that function's internal implementation (when we're writing or maintaining it); or
  2. a particular call to the function (when we're working with code which uses it).
But we should not have to think about the internals when we're calling it, or about the callers when we're implementing the internals. (We should perhaps think about the callers just enough to ensure that the function we're designing will be easy to call, and that we aren't accidentally setting things up so that callers will have to think about any of its internal details.)

Sometimes, we'll write a function which we only call once, just because breaking it out into a function makes things clearer and easier.

Deep sentence:

C has been designed to make functions efficient and easy to use; C programs generally consist of many small functions rather than a few big ones.
Some people worry about ``function call overhead,'' that is, the work that a computer has to do to set up and return from a function call, as opposed to simply doing the function's statements in-line. It's a risky thing to worry about, though, because as soon as you start worrying about it, you have a bit of a disincentive to use functions. If you're reluctant to use functions, your programs will probably be bigger and more complicated and harder to maintain (and perhaps, for various reasons, actually less efficient).

The authors choose not to get involved with the system-specific aspects of separate compilation, but we'll take a stab at it here. We'll cover two possibilities, depending on whether you're using a traditional command-line compiler or a newer integrated development environment (IDE) or other graphical user interface (GUI) compiler.

When using a command-line compiler, there are usually two main steps involved in building an executable program from one or more source files. First, each source file is compiled, resulting in an object file containing the machine instructions (generated by the compiler) corresponding to the code in that source file. Second, the various object files are linked together, with each other and with libraries containing code for functions which you did not write (such as printf), to produce a final, executable program.

Under Unix, the cc command can perform one or both steps. So far, we've been using extremely simple invocations of cc such as

	cc hello.c
(section 1.1, page 6). This invocation compiles a single source file, links it, and places the executable (somewhat inconveniently) in a file named a.out.

Suppose we have a program which we're trying to build from three separate source files, x.c, y.c, and z.c. We could compile all three of them, and link them together, all at once, with the command

	cc x.c y.c z.c
(see also page 70). Alternatively, we could compile them separately: the -c option to cc tells it to compile only, but not to link. Instead of building an executable, it merely creates an object file, with a name ending in .o, for each source file compiled. So the three commands
	cc -c x.c
	cc -c y.c
	cc -c z.c
would compile x.c, y.c, and z.c and create object files x.o, y.o, and z.o. Then, the three object files could be linked together using
	cc x.o y.o z.o
When the cc command is given a .o file, it knows that it does not have to compile it (it's an object file, already compiled); it just sends it through to the link process.

Here we begin to see one of the advantages of separate compilation: if we later make a change to y.c, only it will need recompiling. (At some point you may want to learn about a program called make, which keeps track of which parts need recompiling and issues the appropriate commands for you.)
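As a preview, a minimal makefile for the hypothetical three-file program above might look something like this (this is only a sketch; note that make requires the indented command lines to begin with tab characters):

```make
prog: x.o y.o z.o
	cc -o prog x.o y.o z.o

x.o: x.c
	cc -c x.c
y.o: y.c
	cc -c y.c
z.o: z.c
	cc -c z.c
```

Each rule says which files a target depends on; after a change to y.c alone, typing make would rerun only cc -c y.c and the final link.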

Above we mentioned that the second, linking step also involves pulling in library functions. Normally, the functions from the Standard C library are linked in automatically. Occasionally, you must request a library manually; one common situation under Unix is that certain math routines are in a separate math library, which is requested by using -lm on the command line. Since the libraries must typically be searched after your program's own object files are linked (so that the linker knows which library functions your program uses), any -l option must appear after the names of your files on the command line. For example, to link the object file mymath.o (previously compiled with cc -c mymath.c) together with the math library, you might use

	cc mymath.o -lm

Two final notes on the Unix cc command: if you're tired of using the nonsense name a.out for all of your programs, you can use -o to give another name to the output (executable) file:

	cc -o hello hello.c
would create an executable file named hello, not a.out. Finally, everything we've said about cc also applies to most other Unix C compilers. Many of you will be using acc (a semistandard name for a version of cc which does accept ANSI Standard C) or gcc (the FSF's GNU C Compiler, which also accepts ANSI C and is free).

There are command-line compilers for MS-DOS systems which work similarly. For example, the Microsoft C compiler comes with a CL (``compile and link'') command, which works almost the same as Unix cc. You can compile and link in one step:

	cl hello.c
or you can compile only:
	cl /c hello.c
creating an object file named hello.obj which you can link later.

The preceding has all been about command-line compilers. If you're using some kind of integrated development environment, such as Turbo C or the Microsoft Programmer's Workbench or Think C, most of the mechanical details are taken care of for you. (There's also less I can say here about these environments, because they're all different.) Typically there's a way to specify the list of files (modules) which make up your project, and a single ``build'' button which does whatever's required to build (and perhaps even execute) your program.

section 4.1: Basics of Functions

section 4.2: Functions Returning Non-Integers

section 4.3: External Variables

section 4.4: Scope Rules

section 4.5: Header Files

section 4.6: Static Variables

section 4.7: Register Variables

section 4.8: Block Structure

section 4.9: Initialization

section 4.10: Recursion

section 4.11: The C Preprocessor

section 4.11.1: File Inclusion

section 4.11.2: Macro Substitution

section 4.11.3: Conditional Inclusion


This page by Steve Summit // Copyright 1995, 1996 // mail feedback