by Ike Nassi
Since its founding, Apple Computer has been dedicated to the cause of moving the power of computers into the hands of computer users. Our most successful product to date has been the Apple(R) Macintosh(TM). Since its introduction eight years ago, Macintosh has defined the state of the art in computer ease of learning and ease of use. Macintosh, almost single-handedly, has transformed the way we use computers.
It's time to transform the way we program them, as well.
In 1984, before Macintosh, computer users were asked to make their work patterns conform to the requirements of the machine in front of them. All aspects of computing were presented in terms defined by the computer. The user was given little control over the organization of data, the order in which tasks were performed, and the manner in which work was presented. The machine presented a model which had little to do with the user's world, and the user had no choice but to conform to this model. Rather than facilitate work, the computer came between the user and the user's work. The user had to overcome the computer before getting any work done.
In 1992, the situation is still much the same for programmers. There is still a large gap between software product conception and software product realization. This gap is filled with bits, bytes, arrays, machine-level debuggers, dangling pointers, hours-long recompilations, and months-long design and redesign cycles. Programmers are still asked to present their high-level ideas of program behavior in low-level terms defined by CPU architectures of the early 1970s. Programming work flow is still dictated by the historical limitations of compilers, linkers, and debuggers. Program design in 1992 is reminiscent of typographical design in 1982: design and execution are performed by a series of specialists, resulting in an awkward, lengthy and expensive design, test, and redesign process.
Time-to-market is consistently rated very high in lists of factors affecting the success of high technology products. The end result of poor programming tools is either poor time-to-market or poor programs. Software companies are faced with a dilemma: they can take years, and do the job right, or they can cut corners and get their software into the market before their competitors. Neither solution results in a healthy software industry. Neither solution allows our products to keep up with our visions.
Software development today also has a large barrier to entry. Only companies with deep financial resources can bring a full-fledged application to market. The result is a loss of healthy competition and fresh ideas in the software field.
Finally, our computing engines are becoming more diverse. In addition to the usual dimensions of speed, memory capacity, screen resolution, interconnectedness, power consumption, and size, the underlying operating systems and toolboxes are becoming broader in scope. Specific implementations can also become more focused, requiring a higher degree of tailorability. For example, with the introduction of mobile computing, software requirements may change as locations change. Software used on mobile computers needs to be able to rapidly respond to spontaneously changing conditions in a reliable way.
Object oriented programming takes important steps towards fixing these problems. By letting programmers structure the text of their programs in terms of the problem at hand, object oriented programming narrows the gap between conception and realization. However, object oriented programming by itself is insufficient. It addresses how programs are described, but it does not address many problems of the day-to-day activity of programming. It doesn't change programming's awkward work flow, nor does it make any guarantees about robustness, nor does it relieve the programmer of many tedious bookkeeping tasks, tasks which are better performed by a computer than by a human.
Today's most popular object oriented languages are still static languages. In such languages, most of the information about objects is discarded during compilation, so programs cannot be modified without recompilation, and debugging is more likely to occur at machine level than at the level at which the program was designed. In addition, these languages encourage mixing objects with non-object oriented bits and bytes. Even objects can be treated as undifferentiated bits, leading to the possibility of protocol violations and obscure errors. Finally, there is no attempt to put the process of programming into the hands of the programmer. Generally speaking, large programs must still be written in their entirety before they can be compiled and tested.
Static object oriented languages provide only half of a solution. The other half is provided by dynamism, yielding what Apple calls object oriented dynamic languages, or OODLs. In addition to supporting the object oriented methodology, OODLs must support a number of features which guarantee that programming takes place in terms defined by the programmer, rather than in the terms of the hardware.
OODLs must support rapid creation, delivery, and subsequent modification of ambitious, reliable, and efficient software. Among the specific requirements an ideal OODL should satisfy are the following:
Automatic Memory Management
Memory management bugs are among the most common and difficult errors in static programming languages. Bugs involving dangling pointers and twice-freed objects are notoriously hard to track down.
The language run-time, and not the programmer, should be responsible for allocating storage for objects and reclaiming the storage of objects which are no longer used. There should be no explicit procedure calls for allocating or deallocating memory or objects.
In a well engineered implementation, automatic memory management should be robust and scalable. It should not create memory fragmentation, or fail in the presence of large (possibly virtual) address spaces. It should not cause seemingly arbitrary and unpredictable delays for end users.
In a true OODL, there should be no machine-level pointers, only objects. Once freed from dealing with pointers, the programmer can begin to think of objects at a higher level and the primitives become comparably richer. For example, a programmer working with collections of objects does not need to worry about memory leaks as collections expand and contract. The programmer can concentrate on the task at hand, rather than on the bookkeeping details.
Many large programming projects begin with the design of a memory management subsystem. There is no reason that this task should not be performed once, and the corresponding implementation of that design embodied in the language run-time, and thereby made available to all programmers.
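The contract described above can be illustrated in any language with automatic memory management. The following Python sketch (the `Node` class is hypothetical, invented for illustration) shows that the programmer never calls an allocator or a deallocator; the runtime reclaims an object's storage as soon as it is no longer used:

```python
import gc
import weakref

class Node:
    """A small object whose storage is managed entirely by the runtime."""
    def __init__(self, payload):
        self.payload = payload

node = Node("hello")          # allocation is implicit; no malloc, no size arithmetic
probe = weakref.ref(node)     # observe the object without keeping it alive

node = None                   # drop the last reference; no free() is ever called
gc.collect()                  # ask the collector to run (CPython also reclaims
                              # immediately via reference counting)

print(probe() is None)        # -> True: the runtime has reclaimed the object
```

Whether reclamation is immediate (reference counting) or deferred (a tracing collector) is an implementation detail; the programmer's contract is only that unreachable objects are reclaimed without explicit bookkeeping.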
Dynamic Linking / Incremental Development
Programmers should have the ability to build up their programs piece by piece, integrating preexisting pieces when possible and where available. The transition between rapid prototyping and mainstream development should be continuous rather than discrete. It should not require changing languages or tools.
This requirement affects the programming process in at least four ways:
During the initial construction of a program, classes and functions can be compiled and tested individually. This gives programmers the freedom to use a bottom-up programming style, if they so choose, obviating the need for the construction of a complex superstructure for initial testing.
During debugging, individual functions and classes can be redefined without resorting to a full recompilation of the program and perhaps without even halting the execution of the program.
Programs can be delivered in components, which can be linked together on either the development machine or the end-user machine, using built-in language features.
Program patches (i.e., field upgrades) can be distributed to end-users using built-in language features. Applying the patch should be a very low-overhead operation. There should be virtually no performance penalty for executing patched code. Because you don't need source code for the original application in order to apply the patch, traditional intellectual property issues arising from propagating source code simply need not arise.
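The second point above, redefinition without recompilation or even halting the program, can be sketched in Python, itself a dynamic language (the `Account` class and its interest rates are hypothetical, chosen only for illustration):

```python
class Account:
    """A hypothetical class, used only to demonstrate live redefinition."""
    def __init__(self, balance):
        self.balance = balance

    def interest(self):
        return self.balance // 20      # 5 percent

acct = Account(100)
print(acct.interest())                 # -> 5

# Redefine the method without recompiling, relinking, or halting execution:
def interest(self):
    return self.balance // 10          # 10 percent

Account.interest = interest            # existing instances see the new code at once

print(acct.interest())                 # -> 10
```

The same mechanism underlies debugging fixes applied to a running program and, in a suitably engineered runtime, field patches delivered to end-users.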
Self-Identifying Objects / Introspection
Operations should be checked for type safety before they are performed. If possible, this check should be performed at compile-time, otherwise it should be performed at run-time. This feature guarantees that type errors are noticed as soon as they occur, before they can propagate and cause system corruption. Because the integrity of the object model is maintained, error reporting can occur in terms of programmer objects and end-user objects, rather than in machine-level terms. In many cases, complete error recovery is possible.
The language should contain features for introspection. This means that the language run-time should have sufficient power to answer questions about itself and the objects it manages. For example, it should be possible at execution time to analyze the structure of an object, find the subclasses of a class, etc.
To facilitate type-safety and introspection, objects are self-identifying in memory. Unless all uses of an object can be analyzed at compile-time, the run-time memory for the object should contain enough information to identify its class and value.
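A few lines of Python (the `Shape` hierarchy is hypothetical, invented for illustration) suggest what self-identifying objects, run-time type checks, and introspection buy the programmer:

```python
class Shape: pass
class Circle(Shape): pass
class Square(Shape): pass

c = Circle()

# Every object carries enough run-time information to identify its own class:
print(type(c).__name__)                               # -> Circle
print(isinstance(c, Shape))                           # -> True

# The runtime can answer structural questions about itself:
print([k.__name__ for k in Shape.__subclasses__()])   # -> ['Circle', 'Square']

# A type error is noticed the moment it occurs, and is reported in terms
# of objects rather than machine-level bits:
try:
    c + 1
except TypeError as err:
    print("caught:", err)
```

In a static language most of this information would have been discarded at compile time; here it remains available for error reporting, debugging, and recovery.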
Object Oriented Programming Environment
The programming environment should present all debugging information at an object oriented level. Errors should be described in high-level terms similar to those used by the programmer in constructing the program. Inspection facilities should show a program as a collection of objects, not as a mass of undifferentiated bits. There should be tools for performance analysis and monitoring of single objects as well as collections of objects.
There should be rich libraries of components, and the means to navigate within and between them and to organize and administer them.
There should be a well-thought-out distinction between development environment and execution environment, and where they are different, the development environment should manage the communication between them in as transparent a manner as possible. This feature is missing from current OODLs, but it is essential for the delivery of OODL-based applications to end-users.
An important aspect of these features is that they are mutually supporting, forming an organic whole. The simplest implementation of each depends on the existence of the others. For example, automatic memory management relies on the ability to walk memory and identify objects. Another example is that rapid prototyping requires rapid modification of the program, and so incremental dynamic linking becomes essential. Incremental dynamic relinking utilizes automatic reclamation of storage occupied by the functions, methods, and data being replaced.
Many people in commercial industry and in the research community have recognized the problems with static languages, and some have started to build products that provide some of the features of OODLs. For example, interactive programming environments for static languages are beginning to appear. We applaud this development. However, we believe that starting with a static language requires too many compromises. If each OODL feature were to be added in isolation, the mutually supporting characteristics would be lost, creating redundancy, conflicts, and inefficiency. When instead these features are built together into the core of a programming system, they provide a simple and secure foundation for growth.
The common criticism of OODLs is that we cannot afford them. Most programmers view OODLs as slow and as memory hogs. The common wisdom is that OODLs do not make good use of machine resources. Fortunately, this view is out of date. The combination of improved OODL implementation technology and increasingly powerful hardware makes OODLs eminently practical. Every year or two our hardware gets twice as fast and has twice as much memory. Would anyone say that the quality of our software has increased at the same rate? Programming must be made easier, or the fastest hardware in the world will only give us incremental software improvements. By investing a few cycles we can enable a new generation of applications.
When Macintosh was first released, many people thought that windows, menus, and a bit-mapped display were a waste of machine resources. We see in retrospect that this was not true. What good is the power of a computer if it can't be accessed by a user?
This book describes Dylan(TM), a new object oriented dynamic language designed by Apple. Dylan is our attempt at a language which is simple, yet powerful, one which keeps programming at a high level but which can be compiled efficiently and has a relatively modest memory footprint.
Apple already has one OODL product: Macintosh Common Lisp. Dylan is intended to complement Common Lisp, not to replace it. Common Lisp is a rich environment defined by a standard and available in compatible implementations on a broad range of platforms. Dylan is lean and stripped down to a minimum feature set. At present Dylan is not available on any platform (outside Apple), but is intended to run on a wide variety of machines, including very small machines that don't have the horsepower to support a modern Common Lisp. Common Lisp is aimed primarily at the Lisp community, while Dylan is accessible to application developers unfamiliar with Lisp. Common Lisp is oriented more towards exploratory programming with delivery capability, while Dylan is oriented more towards delivery with exploratory capability.
In our research and development labs, Apple has its own implementations of Dylan, which are being produced hand in glove with the language definition as a design assurance measure. We would like to see others create additional implementations. We would be particularly interested in working with a group to create a reference implementation optimized for portability rather than performance, so that anyone could try out Dylan no matter what brand of computer they use. Because Dylan is a trademark of Apple Computer, anyone implementing the language must obtain permission to use the name. However, it is our intention that permission will be granted to anyone with a conforming implementation.
Apple Eastern Research and Technology will be the focal point for discussion of the Dylan language, its implementations, and its future evolution. Current and prospective users or implementors of Dylan, as well as programming language researchers who are interested in object oriented dynamic languages, are invited to comment on the design of the language and suggest improvements that are consistent with the goals noted above. We will also continue our past policy of driving the evolution of the language from the needs and comments of our users, who are system and application developers whose previous experience is largely with conventional languages. We hope to integrate all inputs to make Dylan the best language we can and use it to bring the benefits of OODLs to as many programmers as we can possibly reach.