I have discovered that many students are uncertain what I mean by a “topic-sentence-level outline.” It is an ordinary outline, with the following additional requirements: every lowest-level entry must be a complete sentence, and each of these sentences must be the topic sentence of one paragraph of your paper.
Therefore, when you are ready to write your first draft, all you have to do is to write the paragraph that goes with each topic sentence. Naturally, you do not have to stick to the outline exactly. In the course of writing your draft, you may find that certain topics need to be split into two paragraphs, or that two paragraphs can be combined. You may also make minor additions and deletions. The purpose of a topic-sentence-level outline is to get you to think about the content of your paper in fairly specific terms.
To give you a little bit of the flavor, here is an extract from an outline for a talk I gave in Bologna a couple of years ago:
II. Weaning Ourselves Away from Church-Turing Computability
A. Every model presupposes a frame of relevance
B. Uncovering the FoR of Church-Turing Computation
1) Historical roots
(a) They were addressing issues of formal calculability and formal provability
(b) Principal assumptions included a finite number of steps requiring finite resources
2) Definitions in terms of classes of functions
3) Discreteness assumptions
(a) Information representation
(i) Information representation is formal, finite, and definite
(ii) Tokens, texts, and types defined
(iii) Mechanical determination of types is an imprecise notion
(iv) Texts are finite in breadth and depth
(v) Assumptions about texts and schemata
(b) Information processing …
Here is the corresponding text:
II. Weaning Ourselves Away from Church-Turing Computability
It is important to keep in mind that the Turing machine is a model of computation. Like all models, it serves to facilitate describing or reasoning about some other class of phenomena of which it is a model. A model accomplishes this purpose by being similar to its object in relevant ways, but different in other, irrelevant ways, and it is these differences that make the model more tractable than the original phenomena. But how do we know what is relevant or not? Every model is suited to pose and answer certain classes of questions but not others; these classes of questions constitute what we may call the frame of relevance of the model. Although a model’s frame of relevance is often unstated and taken for granted, we must make it explicit in order to understand the range of applicability of the model and to evaluate its effectiveness within that frame. What, then, is the frame of relevance of the Turing-machine model of computation?
As we know, the Church-Turing theory of computation was developed to address issues of formal calculability and provability in axiomatic mathematics. The assumptions that underlie Church-Turing computation are reasonable in that context, but we must consider them critically before applying the model in other contexts. For example, in addressing the question of what is, in principle, effectively calculable or formally provable, it is reasonable to require only that a computation take a finite number of steps and use finite resources. As a consequence, according to this model, something is computable if it can be accomplished eventually with unlimited resources.
Another consequence of the historical roots of the Church-Turing theory of computation is its definition of computing power in terms of classes of functions. A function is computable if, given an input, we will eventually get the correct output, given unlimited resources. Of course, the theory can address questions of computation time and space, but its framework limits its applicability to asymptotic complexity, polynomial-time reducibility, and so forth. The roots of the idea that computation is a process of taking an input, calculating for a while, and producing an output can be found in the theory of effective calculability, as well as in the applications for which the first computers were used, such as ballistics calculations, code breaking, and business accounting.
The Church-Turing model of computation, like all models, makes a number of idealizing assumptions appropriate to its frame of relevance. Many of these assumptions are captured by the idea of a calculus, but a phenomenological analysis of this concept is necessary to reveal its background assumptions. Although there is not time today to discuss them in detail, it may be worthwhile to mention them briefly.
The model of information representation derives from mathematical formulas. As idealized in calculi, representations are formal, finite, and definite. We assume that tokens can be definitively discriminated from the background and from one another, and that they can be classified by a mechanical procedure into exactly one of a finite number of types. The texts, or, as we might say, data structures, are required to be finite in size, but also finite in depth; that is, as we divide a text into parts, we eventually reach its smallest, atomic constituents, which are tokens. These and other assumptions are taken for granted by the Church-Turing model of computation. Although the originators of the model discussed some of them, many people today do not recognize them as idealizing assumptions that may be inappropriate in other frames of relevance.
In case you are interested, you can see my complete outline and the first draft of the talk. Note that I have broken the above rules in a few places; for example, I have not used a complete topic sentence when I was sure what I was going to say (e.g., because it’s something I’ve said often).
You can also see an outline for a paper on which I’m currently working (so there is no draft at this time).