perm filename COMMON.MSG[COM,LSP]23 blob sn#727344 filedate 1983-10-28 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00069 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00010 00002	
C00011 00003	∂28-Jul-83  1912	FAHLMAN@CMU-CS-C.ARPA 	ERROR SIGNALLING FUNCTIONS 
C00014 00004	∂28-Jul-83  2130	@MIT-MC:Moon%SCRC-TENEX%MIT-MC@SU-DSN 	Non-evaluated error message in ASSERT    
C00016 00005	∂28-Jul-83  2155	WHOLEY@CMU-CS-C.ARPA 	Non-evaluated error message in ASSERT 
C00018 00006	∂29-Jul-83  0607	GBROWN@DEC-MARLBORO.ARPA 	ERROR SIGNALLING FUNCTIONS   
C00021 00007	∂29-Jul-83  0807	FAHLMAN@CMU-CS-C.ARPA 	Non-evaluated error message in ASSERT
C00024 00008	∂29-Jul-83  1338	@MIT-MC:MOON%SCRC-TENEX%MIT-MC@SU-DSN 	Non-evaluated error message in ASSERT    
C00026 00009	∂29-Jul-83  2131	Guy.Steele@CMU-CS-A 	ASSERT and CHECK-TYPE   
C00027 00010	∂05-Aug-83  1309	@USC-ECL,@MIT-XX:BSG@SCRC-TENEX 	File open options
C00030 00011	∂09-Aug-83  0809	@USC-ECL,@MIT-XX:BSG@SCRC-TENEX 	File opening, :TRUNCATE    
C00032 00012	∂14-Aug-83  1216	FAHLMAN@CMU-CS-C.ARPA 	Things to do
C00041 00013	∂15-Aug-83  1251	@MIT-MC:BENSON@SPA-NIMBUS 	Looping constructs
C00044 00014	∂15-Aug-83  2305	@MIT-MC:Moon%SCRC-TENEX%MIT-MC@SU-DSN 	Things to do    
C00048 00015	∂15-Aug-83  2342	FAHLMAN@CMU-CS-C.ARPA 	Things to do
C00050 00016	∂16-Aug-83  0038	@MIT-MC:Cassels@SCRC-TENEX 	Things to do
C00052 00017	∂16-Aug-83  2131	@MIT-MC:HIC@SCRC-TENEX 	Things to do    
C00055 00018	∂16-Aug-83  2324	HEDRICK@RUTGERS.ARPA 	Re: Things to do  
C00057 00019	∂17-Aug-83  0848	@MIT-MC:DLW%SCRC-TENEX%MIT-MC@SU-DSN 	Re: Things to do 
C00059 00020	∂18-Aug-83  1006	@MIT-MC:benson@SCRC-TENEX 	What to do next   
C00068 00021	∂18-Aug-83  1134	@MIT-MC:MOON@SCRC-TENEX 	subsetting
C00070 00022	∂18-Aug-83  1226	HEDRICK@RUTGERS.ARPA 	Re: subsetting    
C00071 00023	∂18-Aug-83  1224	HEDRICK@RUTGERS.ARPA 	Re: What to do next    
C00075 00024	∂18-Aug-83  1221	FAHLMAN@CMU-CS-C.ARPA 	What to do next  
C00091 00025	∂18-Aug-83  1349	@MIT-MC:MOON@SCRC-TENEX 	subsetting
C00092 00026	∂18-Aug-83  1352	@MIT-MC:benson@SCRC-TENEX 	What to do next   
C00101 00027	∂17-Sep-83  1809	GSB@MIT-ML 	implied contracts in the mapping functions?
C00104 00028	∂17-Sep-83  1821	FAHLMAN@CMU-CS-C.ARPA 	implied contracts in the mapping functions?    
C00106 00029	∂17-Sep-83  2011	@MIT-MC:Cassels%SCRC-TENEX@MIT-MC 	implied contracts in the mapping functions?  
C00109 00030	∂18-Sep-83  1624	@MIT-ML:Moon%SCRC-TENEX@MIT-MC 	implied contracts in the mapping functions?
C00111 00031	∂18-Sep-83  1732	@MIT-ML,@MIT-MC:kmp@MIT-MC 	implied contracts in the mapping functions?    
C00120 00032	∂18-Sep-83  2158	FAHLMAN@CMU-CS-C.ARPA 	implied contracts in the mapping functions?    
C00124 00033	∂19-Sep-83  0335	DDYER@USC-ISIB 	"optimizations"    
C00127 00034	∂19-Sep-83  0548	RAM@CMU-CS-C.ARPA 	Implicit contracts   
C00129 00035	∂19-Sep-83  0811	@MIT-MC:BSG%SCRC-TENEX@MIT-MC 	"optimizations"    
C00132 00036	∂19-Sep-83  1231	@MIT-MC:Moon%SCRC-TENEX@MIT-MC 	Implicit contracts
C00135 00037	∂19-Sep-83  1307	@MIT-ML:HEDRICK@RUTGERS.ARPA 	Re: implied contracts in the mapping functions?   
C00137 00038	∂19-Sep-83  1415	DDYER@USC-ISIB 	Re: "optimizations"
C00145 00039	∂21-Sep-83  1336	@MIT-MC:DLW%SCRC-TENEX@MIT-MC 	implied contracts in the mapping functions? 
C00146 00040	∂21-Sep-83  1401	KMP@MIT-MC 	definition/errors/...  
C00148 00041	∂21-Sep-83  1508	masinter.pa@PARC-MAXC.ARPA 	Portability and performance, standards and change   
C00151 00042	∂22-Sep-83  1225	@MIT-ML:DLW@SCRC-TENEX 	Re: implied contracts in the mapping functions?    
C00154 00043	∂22-Sep-83  1423	@MIT-ML:BENSON@SPA-NIMBUS 	Re: implied contracts in the mapping functions? 
C00158 00044	∂22-Sep-83  1449	HEDRICK@RUTGERS.ARPA 	behavior of mapping    
C00160 00045	∂22-Sep-83  2049	ALAN@MIT-MC 	behavior of mapping   
C00163 00046	∂27-Sep-83  1620	JonL.pa@PARC-MAXC.ARPA 	THROW, and MAP  
C00167 00047	∂27-Sep-83  1649	@MIT-MC:Moon%SCRC-TENEX@MIT-MC 	THROW, and MAP    
C00172 00048	∂27-Sep-83  1722	FAHLMAN@CMU-CS-C.ARPA 	THROW, and MAP   
C00174 00049	∂27-Sep-83  1942	JONL.PA@PARC-MAXC.ARPA 	Re: THROW, and MAP   
C00176 00050	∂28-Sep-83  0828	Guy.Steele@CMU-CS-A 	Re: THROW, and MAP 
C00180 00051	∂28-Sep-83  0829	Guy.Steele@CMU-CS-A 	THROW, again  
C00181 00052	∂28-Sep-83  1352	GSB@MIT-ML 	THROW, and MAP    
C00184 00053	∂28-Sep-83  2017	Guy.Steele@CMU-CS-A 	Burke's remarks on THROW and MAP  
C00186 00054	∂01-Oct-83  1207	RPG   	INIT-FILE-PATHNAME
C00188 00055	∂01-Oct-83  1207	RPG   	Pathnames: duh    
C00191 00056	∂01-Oct-83  1207	RPG   	Duh duh duh  
C00193 00057	∂01-Oct-83  1208	RPG   	Decompressing
C00195 00058	∂01-Oct-83  1208	RPG   	Duh duh duh  
C00197 00059	∂01-Oct-83  1208	RPG   	Decomposing  
C00200 00060	∂01-Oct-83  1208	RPG   	Random idea  
C00204 00061	∂01-Oct-83  1208	RPG   	Random idea  
C00208 00062	∂01-Oct-83  1209	RPG   	INIT-FILE-PATHNAME
C00212 00063	∂01-Oct-83  1209	RPG   	Random idea: bringing back lexprs
C00218 00064	∂01-Oct-83  1209	RPG   	Pathnames: duh    
C00225 00065	∂01-Oct-83  1209	RPG   	Pathnames: duh    
C00230 00066	∂01-Oct-83  1210	RPG   	Random idea: bringing back lexprs
C00234 00067	∂01-Oct-83  1210	RPG   	Pathnames: duh    
C00241 00068	∂01-Oct-83  1210	RPG   	Random idea: bringing back lexprs
C00247 00069	∂01-Oct-83  1206	RPG   	Pathnames: duh    
C00250 ENDMK
C⊗;
∂28-Jul-83  1912	FAHLMAN@CMU-CS-C.ARPA 	ERROR SIGNALLING FUNCTIONS 
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 28 Jul 83  19:12:13 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Thu 28 Jul 83 22:12:52-EDT
Date: Thu, 28 Jul 1983  22:12 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   GBROWN@DEC-MARLBORO.ARPA
Cc:   COMMON-LISP@SU-AI.ARPA
Subject: ERROR SIGNALLING FUNCTIONS
In-reply-to: Msg of Thu 28 Jul 83 11:32:42-EDT from GBROWN at DEC-MARLBORO.ARPA


A part of Paul's note that I agree with is that, in retrospect, I don't
think that we want to have error-signalling forms that take unevaluated
string args.  In most cases we will want to stick some constant string
in there, but there may well be cases where we want to evaluate any such
arg.  A function that goes off somewhere to find a string in the proper
natural language is just one such application.  Since the string arg in
ASSERT is used as a syntactic marker, we would have to change that
function's syntax to fix this -- the pre-string arguments would have to
be encased in a list or something like that.

We probably should think about fixing this in the second edition -- it
is not of such earth-shaking importance that we should consider
unfreezing edition 1 to put this change in.  As others have pointed out,
ASSERT is just a convenient abbreviation, not a primitive.

-- Scott

∂28-Jul-83  2130	@MIT-MC:Moon%SCRC-TENEX%MIT-MC@SU-DSN 	Non-evaluated error message in ASSERT    
Received: from MIT-MC by SU-AI with TCP/SMTP; 28 Jul 83  21:30:38 PDT
Received: from SCRC-BULLDOG by SCRC-TENEX with CHAOS; Fri 29-Jul-83 00:29:15-EDT
Date: Friday, 29 July 1983, 00:30-EDT
From: David A. Moon <Moon%SCRC-TENEX%MIT-MC@SU-DSN>
Subject: Non-evaluated error message in ASSERT
To: COMMON-LISP@SU-AI
In-reply-to: The message of 28 Jul 83 22:12-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

When I first proposed the current syntax of ASSERT, I asked if
anyone wanted to change the syntax to allow the error message to
be evaluated, and no one said yes.  I would be perfectly happy
to make that change and regard it as an erratum in the manual.

The current syntax is

	(ASSERT test-form [reference*] [format-string] [format-arg*])

The new syntax would be

	(ASSERT test-form ([reference*]) [format-string] [format-arg*])
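
Under the proposed syntax the message position becomes an ordinary evaluated
form, so something like the following would be legal (GET-MESSAGE is an
invented stand-in for a function that fetches a format string, say from a
message file in the user's natural language):

        (ASSERT (<= 0 index (1- (length table)))
                (index)
                (GET-MESSAGE 'index-out-of-range)
                index (length table))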

∂28-Jul-83  2155	WHOLEY@CMU-CS-C.ARPA 	Non-evaluated error message in ASSERT 
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 28 Jul 83  21:54:58 PDT
Received: ID <WHOLEY@CMU-CS-C.ARPA>; Fri 29 Jul 83 00:55:44-EDT
Date: Fri, 29 Jul 1983  00:55 EDT
From: Skef Wholey <Wholey@CMU-CS-C.ARPA>
To:   David A. Moon <Moon%SCRC-TENEX%MIT-MC@SU-DSN.ARPA>
Cc:   COMMON-LISP@SU-AI.ARPA
Subject: Non-evaluated error message in ASSERT
In-reply-to: Msg of 29 Jul 1983 00:30-EDT from David A. Moon <Moon%SCRC-TENEX%MIT-MC at SU-DSN>

Of course, with the current definition of ASSERT, one can still compute
format-strings by using "~?", format indirection...
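
That is, the literal string can be just "~?", with the real (computed)
control string and a list of its arguments passed as format arguments, which
are evaluated under the current syntax.  A sketch, where MESSAGE-STRING is an
invented placeholder for whatever fetches the text:

        (ASSERT (plusp count)
                count
                "~?"
                (MESSAGE-STRING 'count-not-positive) (list count))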

∂29-Jul-83  0607	GBROWN@DEC-MARLBORO.ARPA 	ERROR SIGNALLING FUNCTIONS   
Received: from DEC-MARLBORO by SU-AI with TCP/SMTP; 29 Jul 83  06:07:34 PDT
Date: Fri 29 Jul 83 09:06:44-EDT
From: GBROWN@DEC-MARLBORO.ARPA
Subject: ERROR SIGNALLING FUNCTIONS
To: COMMON-LISP@SU-AI.ARPA

Well, I certainly want to thank everyone for taking the time to read
my message and give it some thought.  I can tell that it's going to
be refreshing to work on Common Lisp in an environment where a variety of
people openly discuss the issues (I am just about to join the AI 
group at Digital).

It is clear now that I was coming from a different direction.  I tend
to implement a simple internal error reporting facility for an
application, and then spend most of my time on the user interface.
Thus my reaction to the functions was slanted in the direction of the
user.  The functions are quite good for internal error reporting, and
it's great that developers won't have to reinvent them every time.

I do suggest that we take Dave Moon up on his willingness to change the
definition of check-type and assert so that they can evaluate their
message strings.  We've had some success at Digital with message file
translation, so we might want to do that with Lisp applications.  One
can still put the messages in the source; a fancy macro can do the
dirty work.

I also suggest that we think a little about how declarations interact
with the key forms of these functions.

Any day now we'll get our Lisp Machine.  Thanks again.

- Paul C. Anagnostopoulos
-------

∂29-Jul-83  0807	FAHLMAN@CMU-CS-C.ARPA 	Non-evaluated error message in ASSERT
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 29 Jul 83  08:07:23 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Fri 29 Jul 83 11:07:58-EDT
Date: Fri, 29 Jul 1983  11:07 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   David A. Moon <Moon%SCRC-TENEX%MIT-MC@SU-DSN.ARPA>
Cc:   COMMON-LISP@SU-AI.ARPA
Subject: Non-evaluated error message in ASSERT
In-reply-to: Msg of 29 Jul 1983 00:30-EDT from David A. Moon <Moon%SCRC-TENEX%MIT-MC at SU-DSN>


If there are no objections to the proposed change to the syntax of
ASSERT, I propose that we let Guy decide whether this "erratum" can go
into the manual at this point without undue disruption of the
proofreading/editing/production process.  While changing anything that
is widely used would be unacceptable to us at this time, ASSERT is not
currently being used much in our code, and the proposed change is
trivial.  CHECK-TYPE would also be changed to eval its string, but this
change is upward-compatible, or nearly so.  Guy should give us a clear
go/no-go decision on this change as soon as possible.

I do remember Moon's query on this, and also that I didn't think very
long about its implications, given the amount of stuff that was being
dealt with just then.

-- Scott

∂29-Jul-83  1338	@MIT-MC:MOON%SCRC-TENEX%MIT-MC@SU-DSN 	Non-evaluated error message in ASSERT    
Received: from MIT-MC by SU-AI with TCP/SMTP; 29 Jul 83  13:37:48 PDT
Received: from SCRC-MENOTOMY by SCRC-TENEX with CHAOS; Fri 29-Jul-83 16:34:38-EDT
Date: Friday, 29 July 1983, 16:35-EDT
From: David A. Moon <MOON%SCRC-TENEX%MIT-MC@SU-DSN>
Subject: Non-evaluated error message in ASSERT
To: Scott E. Fahlman <Fahlman%CMU-CS-C@SU-DSN>
Cc: COMMON-LISP@SU-AI
In-reply-to: The message of 29 Jul 83 11:07-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C.ARPA>

CHECK-TYPE already does evaluate its optional third argument (error message).
If the manual doesn't say this, it's a mistake in the manual.

∂29-Jul-83  2131	Guy.Steele@CMU-CS-A 	ASSERT and CHECK-TYPE   
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 29 Jul 83  21:31:18 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 30 Jul 83 00:23:39 EDT
Date: 30 Jul 83 0027 EDT (Saturday)
From: Guy.Steele@CMU-CS-A
To: common-lisp@SU-AI
Subject: ASSERT and CHECK-TYPE

Per Moon's remark: the manual's statement that CHECK-TYPE does not evaluate
the error-message argument is an error.  CHECK-TYPE should evaluate its
optional argument form.
ASSERT should have the syntax
	ASSERT  test-form [ ( {place}* ) [ string {arg}* ] ]
--Guy
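
So, for example, the following would be legal, with the message form
evaluated at run time; FETCH-TYPE-DESCRIPTION is an invented stand-in for
something like a message-file lookup:

        (CHECK-TYPE count (integer 0)
                    (FETCH-TYPE-DESCRIPTION 'nonnegative-integer))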

∂05-Aug-83  1309	@USC-ECL,@MIT-XX:BSG@SCRC-TENEX 	File open options
Received: from USC-ECL by SU-AI with TCP/SMTP; 5 Aug 83  13:09:01 PDT
Received: from MIT-XX by USC-ECL; Fri 5 Aug 83 13:09:30-PDT
Received: from SCRC-BEAGLE by SCRC-SPANIEL with CHAOS; Fri 5-Aug-83 15:50:05-EDT
Date: Friday, 5 August 1983, 15:50-EDT
From: Bernard S. Greenberg <BSG at SCRC-TENEX>
Subject: File open options
To: Common-Lisp%su-ai at USC-ECL

The Laser Manual speaks of the :IF-EXISTS options :RENAME,
:RENAME-AND-DELETE as though they do their renaming and deleting at open
time.  It goes out of its way to say that :SUPERSEDE "destroys" the
existing file at successful close time.  The former being unreasonable,
I decided that the Lisp Machine local file system would continue to
implement all of these concepts at successful close time.  It keeps the
real file open under a funny name until then.  The only place it could
possibly screw up is a direct access, interlocked, multi-process, shared
file of a kind that we currently don't have.
I guess I'm just thinking out loud, or asking for an adjudication
on the legality of such a decision.

While we are on the subject, should anything be said about open 
':direction ':output ':if-exists ':overwrite closing in abort mode?
Should the file go away?
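
For reference, the kind of call in question; the issue raised above is
whether the renaming (and eventual deletion) of the old file happens when
this OPEN completes or only at a successful CLOSE:

        (with-open-file (out "data.text"
                             :direction :output
                             :if-exists :rename-and-delete)
          (write-line "replacement contents" out))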

∂09-Aug-83  0809	@USC-ECL,@MIT-XX:BSG@SCRC-TENEX 	File opening, :TRUNCATE    
Received: from USC-ECL by SU-AI with TCP/SMTP; 9 Aug 83  08:09:27 PDT
Received: from MIT-XX by USC-ECL; Tue 9 Aug 83 08:07:50-PDT
Received: from SCRC-BEAGLE by SCRC-SPANIEL with CHAOS; Tue 9-Aug-83 11:04:41-EDT
Date: Tuesday, 9 August 1983, 11:04-EDT
From: Bernard S. Greenberg <BSG at SCRC-TENEX>
Subject: File opening, :TRUNCATE
To: Common-Lisp%SU-AI at USC-ECL
Cc: File-protocol at SCRC-TENEX

Was it ever proposed or rejected that there be a :IF-EXISTS
:TRUNCATE, being like :OVERWRITE, except that the file content
is effectively set to empty before writing starts?  There is
need for such a thing, and it is a natural behavior on many
systems.  
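
In use, the proposed option would look like the following; note that
:TRUNCATE as an :IF-EXISTS value is only the proposal here, not something
the manual currently defines:

        (with-open-file (out "log.text"
                             :direction :output
                             :if-exists :truncate)  ; proposed: like :OVERWRITE, but empty the file first
          (write-line "first entry after truncation" out))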

The default :IF-EXISTS of :ERROR is not useful on file systems
that do not have versions (note that a version of :NEWEST
changes the default to :NEW-VERSION).   We propose that the
default :IF-EXISTS be changed to :SUPERSEDE for file systems
that do not have versions.   

Is there any reason why :IF-EXISTS is ignored in :OUTPUT/:IO
instead of generating an error?  

∂14-Aug-83  1216	FAHLMAN@CMU-CS-C.ARPA 	Things to do
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 14 Aug 83  12:16:28 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Sun 14 Aug 83 15:16:47-EDT
Date: Sun, 14 Aug 1983  15:16 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   common-lisp @ SU-AI.ARPA
Subject: Things to do


A bunch of things were put off without decisions or were patched over in
the effort to get agreement on the first edition.  Most of the people
who have been intensively involved in the language design will be tied
up for another couple of months getting their implementations up to spec
and tweaking them for performance.  However, it is perhaps not too soon
to begin thinking about what major additions/changes we want to get into
the second edition, so that those who want to make proposals can begin
preparing them and so that people can make their plans in light of what
is likely to be coming.

Here's a list of the major things that I see on the agenda for the next
year or so.  Some are yellow-pages packages, some have deep roots
and require white-pages support, and some are so pervasive that they
will probably migrate into the white pages after a probationary period
in yellow-land.  I'm sure I'm forgetting a few things that have already
been suggested.  I'm also sure that people will have some additional
proposals to make.  I am not including very minor and trivial changes
that we might want to make in the language as we gain some experience
with it.

1. Someone needs to implement the transcendental functions for complex
numbers in a portable way so that we can all use these.  The functions
should be parameterized so that they will work for all the various
floating-point precisions that implementations might offer.  The design
should be uncontroversial, since it is already specified in the manual.
I don't think we have any volunteers to do this at present.

2. We need to re-think the issue of function specs, and agree on what
should go into the white pages next time around.  Moon's earlier
proposal, or some subset of it, is probably what we want to go with.

3. At one point HIC offered to propose a minimal set of white-pages
support for efficient implementation of a portable flavor system, and to
supply the portable part.  The white-pages support would also be usable
by other object-oriented paradigms with different inheritance schemes
(that's the controversial part).  After a brief exchange of messages,
HIC got super-busy on other matters and we haven't heard much since
then.  Either HIC or someone else needs to finish this proposal, so that
we can put in the low-level support and begin playing with the portable
implementation of flavors.  Only after more Common Lisp users have had
some opportunity to play with flavors will it make sense to consider
including them (or some variation) in the white pages.  There is a lot
of interest in this out in user-land.

4. We need some sort of iteration facility more powerful than DO.  The
existing proposals are some extensively cleaned-up revision of LOOP and
Dick Waters' LETS package.  There may be some other ideas out there as
well.  Probably the best way to proceed here is for the proponents of
each style to implement their package portably for the yellow pages and
let the customers decide what they like.  If a clear favorite emerges,
it will probably be absorbed into the white pages, though this would not
preclude personal use of the other style.  None of these things requires
white-pages support -- it is just a matter of what we want to encourage
users to use, and how strongly.

5. A good, portable, user-modifiable pretty printer is needed, and if it
were done well enough I see no reason not to put the user-visible
interface in the white pages next time around.  Waters' GPRINT is one
candidate, and is being adopted as an interim pretty-printer by DEC.
The last time I looked, the code for that package was impenetrable and
the interface to it was excessively hairy, but I've heard that it has
been simplified.  Maybe this is what we want to go with.  Other options?

6. We need to work out the business of taxonomic error-handling.  Moon
has a proposal in mind, I believe.  A possible problem is that this
wants to be white-pages, so if it depends on flavors it gets tied up
with the issue of making flavors white-pages as well.

7. The Hemlock editor, a public-domain Emacs-clone written in portable
Common Lisp, is now running on the Perq and Vax implementations.  We
have a lot of additional commands and modes to implement and some tuning
to do, but that should happen fairly rapidly over the next few months.
Of course, this cannot just be moved verbatim to a new implementation
and run there, since it interacts closely with screen-management and
with the file system.  Once Hemlock is polished, it will provide a
reasonable minimum editing/top-level environment for any Common Lisp
implementation that takes the trouble to adapt it to the local system.
This should eliminate the need for hairy rubout-handlers, interlispy
top-levels, S-expression editors, and some other "environment" packages.
We plan to add some version of "info mode" at some point and to get the
Common Lisp Manual and yellow pages documents set up for tree-structured
access by this package, but that won't happen right away.

8. Someone ought to put together a reasonable package of macro-writer's
aids: functions that know which things can be evaluated multiple times
without producing side-effects, type-analysis hacks, and other such
goodies.
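
As a small, deliberately conservative example of the kind of aid meant in
item 8, here is a predicate approximating "this form can safely be evaluated
more than once"; the name and the policy are made up for illustration only:

        (defun trivially-re-evaluable-p (form)
          ;; Constants and plain variable references are assumed safe to
          ;; re-evaluate; any compound form is conservatively rejected.
          (or (constantp form)
              (symbolp form)))

And, for item 1, the flavor of a portable definition of one complex
transcendental, using exp(x+iy) = exp(x)*(cos y + i sin y); a real version
would also need to attend to floating-point precision contagion and to the
branch cuts specified in the manual:

        (defun complex-exp (z)
          (let ((x (realpart z))
                (y (imagpart z)))
            (* (exp x) (complex (cos y) (sin y)))))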

If you have items to add to this list, let me know.

-- Scott

∂15-Aug-83  1251	@MIT-MC:BENSON@SPA-NIMBUS 	Looping constructs
Received: from MIT-MC by SU-AI with TCP/SMTP; 15 Aug 83  12:50:58 PDT
Date: Monday, 15 August 1983, 12:29-PDT
From: Eric Benson <BENSON at SPA-Nimbus>
Subject: Looping constructs
To: Scott E. Fahlman <Fahlman at CMU-CS-C.ARPA>,
    common-lisp at SU-AI.ARPA
In-reply-to: The message of 14 Aug 83 12:16-PDT from Scott E. Fahlman <Fahlman at CMU-CS-C.ARPA>

Okay, I'll jump right in...

I believe that LOOP and LetS can coexist peacefully in the same world.
They deal with two different problems in very different ways.  LetS
allows APL-style processing of "sequences" (unrelated to the Common
Lisp sequence concept) with an expression-based syntax.  It is designed
so that sequences are not actually created as data structures, rather
existing conceptually as intermediate values.  Such operations as
merging, sorting and concatenation do not fit within its paradigm.
LOOP, on the other hand, is intended to extend (all) the iteration
constructs found in other languages to Lisp, with a keyword-based
syntax.  This kitchen-sink-ism and non-Lispy syntax seems to offend some
purists, but there are many iterative programs which could not be
expressed at all using LetS, and only painfully using DO, yet which are
quite amenable to treatment with LOOP.  To each his own.

(LOOP FOR EVERYONE IN COMMON-LISP DO (SEND EVERYONE :THIS-MESSAGE))

∂15-Aug-83  2305	@MIT-MC:Moon%SCRC-TENEX%MIT-MC@SU-DSN 	Things to do    
Received: from MIT-MC by SU-AI with TCP/SMTP; 15 Aug 83  23:05:47 PDT
Received: from SCRC-YAMASKA by SCRC-TENEX with CHAOS; Tue 16-Aug-83 02:01:39-EDT
Date: Tuesday, 16 August 1983, 01:56-EDT
From: David A. Moon <Moon%SCRC-TENEX%MIT-MC@SU-DSN>
Subject: Things to do
To: Scott E. Fahlman <Fahlman%CMU-CS-C@SU-DSN>
Cc: common-lisp@SU-AI
In-reply-to: The message of 14 Aug 83 15:16-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

I think we plan to contribute to this, but I at least will not have time to
get near this stuff for at least a couple of months.

3. Portable flavor system / white-pages message-passing substrate.
I think we will find one or more people at Symbolics interested in finishing
the proposal HIC made long ago and implementing it on our machines.

4. LOOP and LetS
We should have both of these available in Common Lisp, and both of them should
be reimplemented.  I've intended for almost two years now to do this for LOOP,
so you know how much to believe me when I say I will get back to this soon.
As CL implementations become available at MIT in a few months someone will
no doubt be found to fix LetS.

5. Pretty-printing
DLW is interested in rewriting Waters' GPRINT.  I'll let him speak for himself
on what the status of this is.

6. Taxonomic event handling.
I have a proposal for this, which is sufficiently abstract that it could be
implemented on something other than flavors (although clearly the right substrate
for it is flavors).  The proposal is not at all finished, but I will send it
out in a couple of months.

8. Macro-writer's aids.
I have most of this stuff, written in something pretty close to straight Common Lisp.
The one thing I don't have is type-analysis, although I expect that would be
easy to build on top of the other tools.  In a couple of months I'll see about 
making this stuff portable and available.

To add to your list:

A portable interpreted-code stepper that uses *EVALHOOK* and *APPLYHOOK*.

A portable or near-portable TRACE package.

A variety of macro memoization schemes, using *MACROEXPAND-HOOK*.
These aren't as trivial as they seem at first blush.

More general destructuring facilities (I have an unfinished proposal).

A portable defstruct implementation.

Portable floating-point read and print.
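
As a sketch of the "portable or near-portable TRACE package" item above, the
core of such a thing is just wrapping the function cell, as below.  This
ignores the bookkeeping a real TRACE needs (remembering the original
definition for UNTRACE, handling macros and special forms, indenting by call
depth, and so on):

        (defun trace-function (name)
          ;; Replace NAME's global definition with a wrapper that reports
          ;; each call and its returned values on *TRACE-OUTPUT*.
          (let ((original (symbol-function name)))
            (setf (symbol-function name)
                  #'(lambda (&rest args)
                      (format *trace-output* "~&Calling ~S with ~S~%" name args)
                      (let ((results (multiple-value-list (apply original args))))
                        (format *trace-output* "~&~S returned ~S~%" name results)
                        (values-list results))))))

Calling (trace-function 'factorial) would then report each call to FACTORIAL
as it happens.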

∂15-Aug-83  2342	FAHLMAN@CMU-CS-C.ARPA 	Things to do
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 15 Aug 83  23:42:04 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Tue 16 Aug 83 02:42:18-EDT
Date: Tue, 16 Aug 1983  02:42 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   David A. Moon <Moon%SCRC-TENEX%MIT-MC@SU-DSN.ARPA>
Cc:   common-lisp@SU-AI.ARPA
Subject: Things to do
In-reply-to: Msg of 16 Aug 1983 01:56-EDT from David A. Moon <Moon%SCRC-TENEX%MIT-MC at SU-DSN>


Dave,

Glad to hear that you folks are interested in doing a number of things
on the list.  Nobody is expecting super-quick action on any of this.  As
I said, this is just a bit of pre-planning for things to do once our
respective implementations are stable.

Of the things on your shopping list, STEP, TRACE, and DEFSTRUCT are all
working here and could be made portable easily enough.  They may not be
hairy enough (or featureful enough) for everyone's taste.

I'll have to look at our floating read and print, but I doubt that they
are easy to make portable.

-- Scott

∂16-Aug-83  0038	@MIT-MC:Cassels@SCRC-TENEX 	Things to do
Received: from MIT-MC by SU-AI with TCP/SMTP; 16 Aug 83  00:38:21 PDT
Received: from SCRC-MENOTOMY by SCRC-TENEX with CHAOS; Tue 16-Aug-83 03:35:30-EDT
Date: Tuesday, 16 August 1983, 03:35-EDT
From: Robert A. Cassels <Cassels at SCRC-TENEX>
Subject: Things to do
To: Fahlman at CMU-CS-C.ARPA, Moon%SCRC-TENEX%MIT-MC at SU-DSN.ARPA
Cc: common-lisp at SU-AI.ARPA
In-reply-to: The message of 16 Aug 83 02:42-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C.ARPA>

    Date: Tue, 16 Aug 1983  02:42 EDT
    From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>

    I'll have to look at our floating read and print, but I doubt that they
    are easy to make portable.

Ours can be made portable without too much trouble (at some loss of
efficiency).  It probably wants a few extra features.  I figured I'd
wait for Steele's, in the hope that he's figured out more elegant
solutions to the problems than I did.

∂16-Aug-83  2131	@MIT-MC:HIC@SCRC-TENEX 	Things to do    
Received: from MIT-MC by SU-AI with TCP/SMTP; 16 Aug 83  21:31:41 PDT
Received: from SCH-STYX by SCRC-TENEX with CHAOS; Wed 17-Aug-83 00:25:26-EDT
Date: Tuesday, 16 August 1983, 21:30-PDT
From: Howard I. Cannon <HIC at SCRC-TENEX>
Subject: Things to do
To: Fahlman at CMU-CS-C.ARPA
Cc: common-lisp at SU-AI.ARPA
In-reply-to: The message of 14 Aug 83 12:16-PDT from Scott E. Fahlman <Fahlman at CMU-CS-C.ARPA>

    3. At one point HIC offered to propose a minimal set of white-pages
    support for efficient implementation of a portable flavor system, and to
    supply the portable part.  The white-pages support would also be usable
    by other object-oriented paradigms with different inheritance schemes
    (that's the controversial part).  After a brief exchange of messages,
    HIC got super-busy on other matters and we haven't heard much since
    then.  Either HIC or someone else needs to finish this proposal, so that
    we can put in the low-level support and begin playing with the portable
    implementation of flavors.  Only after more Common Lisp users have had
    some opportunity to play with flavors will it make sense to consider
    including them (or some variation) in the white pages.  There is a lot
    of interest in this out in user-land.

My schedule is too unpredictable at this point to make a commitment, but
I intend to be doing some new development on Flavors this fall, and can
likely finish up a reasonable proposal within the next few months.  If
there's a 90% chance I can get something through, then I'll schedule
it...

∂16-Aug-83  2324	HEDRICK@RUTGERS.ARPA 	Re: Things to do  
Received: from RUTGERS by SU-AI with TCP/SMTP; 16 Aug 83  23:24:15 PDT
Date: 17 Aug 83 00:43:32 EDT
From: Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Subject: Re: Things to do
To: HIC%SCRC-TENEX@MIT-MC.ARPA
cc: Fahlman@CMU-CS-C.ARPA, common-lisp@SU-AI.ARPA
In-Reply-To: Message from "Howard I. Cannon <HIC at SCRC-TENEX>" of 17 Aug 83 00:36:25 EDT

An alternative might be
  movei n,2
  pushj p,@[fadr block]
where the address in the fadr block was indexed by N.  That would still
save testing and dispatching within the function itself, but would
avoid adding 4 words to the FADR block.  What do you think?
-------

∂17-Aug-83  0848	@MIT-MC:DLW%SCRC-TENEX%MIT-MC@SU-DSN 	Re: Things to do 
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Aug 83  08:48:03 PDT
Date: 17 Aug 1983 1049-EDT
From: Daniel L. Weinreb <DLW%SCRC-TENEX%MIT-MC@SU-DSN>
Subject: Re: Things to do
To: Moon%SCRC-TENEX%MIT-MC@SU-DSN
cc: Fahlman%CMU-CS-C@SU-DSN, common-lisp@SU-AI
In-Reply-To: The message of Tuesday, 16 August 1983, 01:56-EDT from David A. Moon <Moon%SCRC-TENEX%MIT-MC@SU-DSN>

I have done 90% of the work of rewriting GPRINT completely in
modern Lisp style, which also makes it perform
better.  I still have to do the "last 10%" with all that that
implies.  Converting it to Common Lisp also has to be done
but that is clearly easy.  I don't know when I'll have time to work on this
more, though.  Keep in touch.
-------

∂18-Aug-83  1006	@MIT-MC:benson@SCRC-TENEX 	What to do next   
Received: from MIT-MC by SU-AI with TCP/SMTP; 18 Aug 83  10:06:04 PDT
Date: Thursday, 18 August 1983  11:54-EDT
From: dlw at SCRC-TENEX, benson at SCRC-TENEX
Subject: What to do next
To:   fahlman at cmuc
Cc:   common-lisp at su-ai


Scott, I appreciated your summary of pending issues in Common Lisp, and
I certainly think we should proceed to work on these things.  However, I
think that the "next things to do", after we get out the initial real
Common Lisp manual, are:

(1) Create a Common Lisp Virtual Machine specification, and gather a
body of public domain Lisp code which, when loaded into a proto-Lisp
that meets the spec, produces a complete Common Lisp interpreter that
meets the full language spec.  (This doesn't address the portable
compiler problem.)

(2) Establish an official Common Lisp subset, suitable for
implementation on 16-bit microcomputers such as the 68000 and the 8088.
I understand that Gabriel is interested in 68000 implementations, and I
am trying to interest Bob Rorscharch (who implemented IQLISP, which is
an IBM PC implementation of a UCILISP subset) in converting his product
into a Common Lisp implementation.

There are a lot of problems with subsetting.  You can't leave out
multiple values, because several primitives return multiple values and
you don't want to omit all of these primitives (and you don't want to
discourage the addition of new primitives that return multiple values,
in future versions of Common Lisp).  You can't leave out packages, at
least not entirely, because keywords are essential to many functions.
And many things, if removed, would have to be replaced by something
significantly less clean.  We'd ideally like to remove things that (a)
can be removed without creating the need for an unclean simpler
substitute, and (b) aren't used by the rest of the system.  In other
words, we have to find modular chunks to break off.  And, of course,
any program that works in the subset has to work and do exactly the
same thing in full Common Lisp, unless the program has some error
(in the "it is an error" sense).  The decision as to what goes
in and what goes out should be made in light of the fact that
an implementation might be heavily into "autoloading".

Complex numbers can easily be omitted.

The requirement for all the floating point precisions can be
omitted.  Of course, Common Lisp is flexible in this regard anyway.

Rational numbers could be left out.  They aren't hard, per se, but
they're just another thing to do.  The "/" function on two integers
would have to signal an error.

Packages could be trimmed down to only be a feature that supplies
keywords; most of the package system might be removable.

Lexical scoping might possibly be removable.  You could remove support
for LABELS, FLET, and MACROLET.  You can't remove internal functions
entirely (i.e. MAPCAR of a lambda-expression can't be removed) but they
might have some restrictions on them.

Adjustable arrays could be removed.  Fill pointers could go too,
although it's not clear that it's worth it.  In the extreme, you could
only have simple arrays.  You could even remove multi-D arrays
entirely, or only 1-D and 2-D.

Several functions look like they might be big, and aren't really
required.  Some candidates: COERCE, TYPE-OF, the hard version
of DEFSETF (whatever you call it), LOOP, 

TYPEP and SUBTYPEP are hard to do, but it's hard to see how
to get rid of the typing system!  SUBTYPEP itself might go.

Multiple values would be a great thing to get rid of in the subset, but
there are the Common Lisp primitives that use multiple values.  Perhaps
we should add new primitives that return these second values only, for
the benefit of the subset, or something.

Catch, throw, and unwind-protect could be removed, although they're
sure hard to live without.

Lots of numeric stuff is non-critical:  GCD, LCM, CONJUGATE, the
exponentials and transcendentals, rationalize, byte manipulation, random
numbers.

The sequence functions are a lot of work and take a lot of room in your
machine.  It would be nice to do something about this.  Unfortunately,
simply omitting all the sequence functions takes away valuable basic
functionality such as MEMQ.  Perhaps the subset could leave out some of
the keywords, like :test and :test-not and :from-end.

Hash tables are not strictly necessary, although the system itself
is likely to want to use some kind of hash tables somewhere,
maybe not the user-visible ones.

Maybe some of the defstruct options could be omitted, though I don't
think that getting rid of defstruct entirely would be acceptable.

Some of the make-xxx-stream functions are unnecessary.

Some of the hairy reader syntax is not strictly necessary.  The circular
structure stuff and load-time evaluation are the main candidates.

The stuff to allow manipulation of readtables is not strictly necessary,
or could be partially restricted.

Some of the hairy format options could be omitted.  I won't go into
detail on this.

Some of the hairy OPEN options could go, although I'd hate to be the one
to decide which options are the non-critical ones.  Also some of the
file operations (rename, delete, attribute manipulation) could go.

The debugging tools might be optional although probably they just
get autoloaded anyway.

∂18-Aug-83  1134	@MIT-MC:MOON@SCRC-TENEX 	subsetting
Received: from MIT-MC by SU-AI with TCP/SMTP; 18 Aug 83  11:33:52 PDT
Date: Thursday, 18 August 1983  14:26-EDT
From: MOON at SCRC-TENEX
To:   dlw at SCRC-TENEX, benson at SCRC-TENEX
Cc:   common-lisp at su-ai
Subject: subsetting
In-reply-to: The message of 18 Aug 1983  11:54-EDT from dlw at SCRC-TENEX, benson at SCRC-TENEX

Most (by no means all) of the things you suggest omitting are quite inexpensive
to include in an implementation that is optimized for space rather than speed.
I don't think it is a good idea to have something called "Common Lisp" that
omits large portions of the language.  All the special forms should be included.
Two good ways to fit into a smaller machine without sacrificing functionality
are to use "autoloading" for little-used functions and to not include things
only needed at compile time in the run-time environment (they can be "autoloaded"
when needed for debugging or interpretive execution, of course).  I think both
of these categories are probably very large in Common Lisp.

∂18-Aug-83  1226	HEDRICK@RUTGERS.ARPA 	Re: subsetting    
Received: from RUTGERS by SU-AI with TCP/SMTP; 18 Aug 83  12:26:45 PDT
Date: 18 Aug 83 15:23:50 EDT
From: Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Subject: Re: subsetting
To: MOON%SCRC-TENEX@MIT-MC.ARPA
cc: dlw%SCRC-TENEX@MIT-MC.ARPA, benson%SCRC-TENEX@MIT-MC.ARPA,
    common-lisp@SU-AI.ARPA
In-Reply-To: Message from "MOON at SCRC-TENEX" of 18 Aug 83 14:26:00 EDT

Another approach is to use byte code instead of real machine code.
This can allow a significant space saving. 
-------

∂18-Aug-83  1224	HEDRICK@RUTGERS.ARPA 	Re: What to do next    
Received: from RUTGERS by SU-AI with TCP/SMTP; 18 Aug 83  12:24:32 PDT
Date: 18 Aug 83 15:20:26 EDT
From: Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Subject: Re: What to do next
To: dlw%SCRC-TENEX@MIT-MC.ARPA, benson%SCRC-TENEX@MIT-MC.ARPA
cc: fahlman@CMU-CS-C.ARPA, common-lisp@SU-AI.ARPA
In-Reply-To: Message from "dlw at SCRC-TENEX, benson at SCRC-TENEX" of 18 Aug 83 11:54:00 EDT

In general I agree with you.  However let me point out:
  1) that you can simplify MV's to the point that implementation is 
trivial.  All you have to do is require that the caller of a function
never asks for more MV's than there really are.  (In particular, that
he never asks for MV's when there aren't any.)  This handles the most
useful cases.  Certainly it handles the case of calling system functions
that return MV's.  Then you require only a couple of primitives, 
probably MV-SETQ and MV-BIND.  You certainly do not do MV-LIST (because
you can't) nor MV-CALL.  The result is that you can implement MV's
by putting the values in the AC's or in global variables.  No 
additional mechanisms are needed.  I believe this is the right thing
to have done in the first place.  (Indeed I think the whole subset
is probably going to be more useful than the real language.)
  2) we do have a compiler that is probably about as portable as
you are going to get.  We use CMU's Spice Lisp compiler.  It produces
code for a stack-oriented microcoded machine.  We transform this
into code for a register-oriented machine.  The instruction set we
use is an extended version of Utah's CMACS, the final intermediate
representation used in PSL.  It is close enough to machine code that
we produce the actual machine code in LAP.  I am sure there are
machines that we might have trouble with, but for the ones I am
used to, a couple of days playing with LAP should allow our code to
be loaded on any register-oriented machine.  The original Spice LAP
code could probably be loaded on any stack-oriented machine.
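
For reference, the common case point (1) above is aimed at, written with the
full Common Lisp names for what is abbreviated as MV-BIND there: a caller
binding exactly the values a system function is documented to return.

        (multiple-value-bind (quotient remainder)
            (floor 7 2)
          (list quotient remainder))    ; => (3 1)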
-------

∂18-Aug-83  1221	FAHLMAN@CMU-CS-C.ARPA 	What to do next  
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 18 Aug 83  12:20:52 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Thu 18 Aug 83 15:21:42-EDT
Date: Thu, 18 Aug 1983  15:21 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   dlw%SCRC-TENEX@MIT-MC.ARPA, benson%SCRC-TENEX@MIT-MC.ARPA
Cc:   common-lisp@SU-AI.ARPA
Subject: What to do next
In-reply-to: Msg of 18 Aug 1983  11:54-EDT from dlw at SCRC-TENEX, benson at SCRC-TENEX

    Date: Thursday, 18 August 1983  11:54-EDT

    (1) Create a Common Lisp Virtual Machine specification, and gather a
    body of public domain Lisp code which, when loaded into a proto-Lisp
    that meets the spec, produces a complete Common Lisp interpreter that
    meets the full language spec.  (This doesn't address the portable
    compiler problem.)

Our Spice Lisp implementation is currently serving exactly this need for
about 6 implementation efforts.  Instead of specifying a little Lisp kernel
that has to be implemented, we specify a byte-coded instruction set,
then provide Common Lisp written in Common Lisp (with some of these
byte-coded primitives sprinkled in), plus a compiler written in Common
Lisp that produces these byte codes.  All a prospective user has to
supply are either microcode for the byte-codes or a post-processor from
byte codes to their machine's native instruction set, plus the byte-code
primitives, GC, I/O, and the system interface stuff.  Of course, once
that is done, it is worth putting about a man-year of tuning into any
given implementation to make it run as well as possible.

So our SLGUTS document, along with our public-domain Lisp code and
compiler, does most of what the blue pages are supposed to do.  We are
currently polishing our system up so that it is legal, and in the
process we are cleaning up our compiler so that it will be easier to
replace the whole code-generator, if people would rather do that than
work from byte-codes.  We also plan a substantial cleanup on our
byte-code set, which is by now quite obsolete and creaky.

The next obvious step is to take all of this and turn it into a cleanly
packaged "implementor's kit".  That would reduce the amount of
hand-holding we would have to do and make the whole thing look more
portable.  I don't have much interest in taking this additional (largely
cosmetic) step, but would cooperate with anyone else who wants to do it.
Of course, if someone wants to build a better compiler/kit (ours is more
stack-oriented than is optimal on conventional architectures and does
not have all those hairy optimizations in it that real compiler hackers
love), that would be a good thing to do.

So I guess I see this as a mostly-solved problem and therefore not a
high-priority issue.

    (2) Establish an official Common Lisp subset, suitable for
    implementation on 16-bit microcomputers such as the 68000 and the 8088.

I'm a little dubious about this subsetting business.  It is true that
one could get rid of maybe half the virtual core image by judicious
trimming, though including a compiler or Hemlockish editor would push
you back up there.  Rather than worry a lot about trimming the Lisp and
about going from a big core image to autoload, maybe the time should be
spent figuring out how to fake a decent virtual memory system on these
machines.  It would be nice to have things like Format lurking out on
the floppies but virtually there, rather than gone but autoloadable.

A whole generation of people learned to hate Lisp because they tried to
use prehistoric implementations without good debugging tools,
pretty-printing editors, and the other things that make Lisp a decent
programming environment.  Let's not repeat this and expose high-school
kids to the result.

One thing we might do, if we want Common Lisp programs to be deliverable
on micros with the minimum of memory, is to work on a system that goes
over a set of compiled files and builds a Lisp core image with only
these things and the Lisp facilities that they call included.  You would
develop on your Vax or 3600, but the turnkey expert system might then
run on a 68000-based machine with minimal disk.

Anyway, to respond to your suggestions:

    Complex numbers can easily be omitted.
[ Yep.  Should have been omitted from the real thing, and may still be
if nobody works up the enthusiasm to implement them.]

    The requirement for all the floating point precisions can be
    omitted.  Of course, Common Lisp is flexible in this regard anyway.
[ Yeah.  The high-school version could leave them out altogether, and
all the transcendental functions, for a big saving. ]

    Rational numbers could be left out.  They aren't hard, per se, but
    they're just another thing to do.  The "/" function on two integers
    would have to signal an error.
[ This wouldn't save much -- GCD is all you need and that's not too big.
Maybe leave out the Float/rational conversions.]

    Packages could be trimmed down to only be a feature that supplies
    keywords; most of the package system might be removable.
[ Yeah.  Again it doesn't save much. ]

    Lexical scoping might possibly be removable.  You could remove support
    for LABELS, FLET, and MACROLET.  You can't remove internal functions
    entirely (i.e. MAPCAR of a lambda-expression can't be removed) but they
    might have some restrictions on them.
[ Lexical scoping doesn't cost much in space, but it slows down the
interpreter and adds much hair to the compiler.  If you want to save on
compiler space, let GO and RETURN be confined in the traditional way
(only to a cleanly surrounding block and not from a position that leaves
crud on the stack) and eliminate lexical closures and the FLET stuff.]

    Adjustable arrays could be removed.  Fill pointers could go too,
    although it's not clear that it's worth it.  In the extreme, you could
    only have simple arrays.  You could even remove multi-D arrays
    entirely, or only 1-D and 2-D.
[ This would save a fair amount in dispatching hair.  Not trying to
super-optimize for speed would also save a lot of space -- whether
that's a good trade depends on the machine and the applications.]

    Several functions look like they might be big, and aren't really
    required.  Some candidates: COERCE, TYPE-OF, the hard version
    of DEFSETF (whatever you call it), LOOP, 
[Yup.]

    TYPEP and SUBTYPEP are hard to do, but it's hard to see how
    to get rid of the typing system!  SUBTYPEP itself might go.
[Got to have TYPEP, but not the hairy compound types and booleans.
Subtypep is needed by the compiler but not much by user code.]

    Multiple values would be a great thing to get rid of in the subset, but
    there are the Common Lisp primitives that use multiple values.  Perhaps
    we should add new primitives that return these second values only, for
    the benefit of the subset, or something.
[If you're willing to cons when passing back multiples, or to limit the
number to, say, 3 values, it becomes easy to do.  I believe that only
our unfortunate time functions return more than a few values.]

    Catch, throw, and unwind-protect could be removed, although they're
    sure hard to live without.
[No, you've got to keep these.  They become easy to do if you don't have
to pass multiple values back on the stack.]

    Lots of numeric stuff is non-critical:  GCD, LCM, CONJUGATE, the
    exponentials and transcendentals, rationalize, byte manipulation, random
    numbers.
[Leave in hyperbolic arctan, though -- every language needs at least one
function that nobody has ever tried.]

    The sequence functions are a lot of work and take a lot of room in your
    machine.  It would be nice to do something about this.  Unfortunately,
    simply omitting all the sequence functions takes away valuable basic
    functionality such as MEMQ.  Perhaps the subset could leave out some of
    the keywords, like :test and :test-not and :from-end.
[Again, these are simple if you're willing to do everything with ELT and
FUNCALL and are not going for tenseness.  It is checking for 53 special
cases that blow things up.  If you don't have them in, people will write
DO's and their code will balloon.]

    Hash tables are not strictly necessary, although the system itself
    is likely to want to use some kind of hash tables somewhere,
    maybe not the user-visible ones.
[Again, if you want tiny but slow, an A-list can do anything a
hash-table can.]

    Maybe some of the defstruct options could be omitted, though I don't
    think that getting rid of defstruct entirely would be acceptable.

    Some of the make-xxx-stream functions are unnecessary.

    Some of the hairy reader syntax is not strictly necessary.  The circular
    structure stuff and load-time evaluation are the main candidates.

    The stuff to allow manipulation of readtables is not strictly necessary,
    or could be partially restricted.

    Some of the hairy format options could be omitted.  I won't go into
    detail on this.

    Some of the hairy OPEN options could go, although I'd hate to be the one
    to decide which options are the non-critical ones.  Also some of the
    file operations (rename, delete, attribute manipulation) could go.
[All of the above sounds OK.]

    The debugging tools might be optional although probably they just
    get autoloaded anyway.
[For an educational system on cheap machines, you don't skimp here.  For
a delivery vehicle, you don't need these.]
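
To make the remark above about ELT and FUNCALL concrete, here is the flavor
of a stripped-down sequence function for such a subset; SIMPLE-POSITION is an
invented name, and the :test/:key/:from-end/:start/:end keywords are simply
not provided:

        (defun simple-position (item sequence &optional (test #'eql))
          ;; Linear search over any sequence via ELT; no keyword hair.
          (do ((i 0 (+ i 1))
               (n (length sequence)))
              ((>= i n) nil)
            (when (funcall test item (elt sequence i))
              (return i))))

        ;; e.g. (simple-position #\c "abcde") => 2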

∂18-Aug-83  1349	@MIT-MC:MOON@SCRC-TENEX 	subsetting
Received: from MIT-MC by SU-AI with TCP/SMTP; 18 Aug 83  13:49:26 PDT
Date: Thursday, 18 August 1983  16:40-EDT
From: MOON at SCRC-TENEX
To:   common-lisp at sail
Subject: subsetting

One last comment.  Inexpensive machines with ~ 1 megabyte main memory,
~ 40-80 megabyte disk, and VAX-class processors will be here sooner
than we think.  They may be limited by software and marketing more than
by technology.  So maybe by the time a subset was defined there wouldn't
be much demand for it.

∂17-Sep-83  1809	GSB@MIT-ML 	implied contracts in the mapping functions?
Received: from MIT-ML by SU-AI with TCP/SMTP; 17 Sep 83  18:09:47 PDT
Date: 17 September 1983 21:12 EDT
From: Glenn S. Burke <GSB @ MIT-ML>
Subject: implied contracts in the mapping functions?
To: common-lisp @ SU-AI

I would appreciate comments on the following "bug report" I just
received.
----------------
Date:     17 Sep 83 11:59-EST (Sat)
Subject:  bug in mapc
To: bug-nil@mit-mc

I am iterating over a list and simultaneously adding things to the end of
the list with NCONC.  It works fine until I get down to the final iteration.
If I then NCONC something on to the end of the list, MAPC exits without
looking at the newly added list elements.
. . .
----------------
The manual neither implies that this should work, nor that it might not.
My impression on this is that using a mapping function is inappropriate
here, and the (unintentional) optimization which causes his case to fail
is allowable with the mapping functions.  On the other hand, since it is
so likely for this to actually work (I bet it does in most other
implementations, and in fact it only fails in NIL in the interpreter when
there is exactly one list argument to the MAPC, MAPCAN, and MAPCAR), it
may not be worth the conceptual overhead to disallow this sort of hacking
when it seems so "obvious" that it should work.

∂17-Sep-83  1821	FAHLMAN@CMU-CS-C.ARPA 	implied contracts in the mapping functions?    
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 17 Sep 83  18:21:31 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Sat 17 Sep 83 21:23:50-EDT
Date: Sat, 17 Sep 1983  21:23 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Glenn S. Burke <GSB@MIT-ML.ARPA>
Cc:   common-lisp@SU-AI.ARPA
Subject: implied contracts in the mapping functions?
In-reply-to: Msg of 17 Sep 1983 21:12 EDT from Glenn S. Burke <GSB at MIT-ML>


I think that "it is an error" to destructively mess around with a list
after it has been fed to MAPCAR or even to DOLIST.  The fact that most
implementations would do this in a certain way and the user can guess
what that way is should not matter.  If we let people twiddle hidden
state and count on the results, there will be no safe optimizations
left, and all Common Lisp implementations will slow down by a
considerable factor.

-- Scott

∂17-Sep-83  2011	@MIT-MC:Cassels%SCRC-TENEX@MIT-MC 	implied contracts in the mapping functions?  
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Sep 83  20:10:11 PDT
Received: from SCRC-MUDDY by SCRC-TENEX with CHAOS; Sat 17-Sep-83 23:09:56-EDT
Date: Saturday, 17 September 1983, 23:08-EDT
From: Robert A. Cassels <Cassels%SCRC-TENEX@MIT-MC>
Subject: implied contracts in the mapping functions?
To: Fahlman@CMU-CS-C, GSB@MIT-ML
Cc: common-lisp@SU-AI
In-reply-to: The message of 17 Sep 83 21:23-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

    Date: Sat, 17 Sep 1983  21:23 EDT
    From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
    I think that "it is an error" to destructively mess around with a list
    after it has been fed to MAPCAR or even to DOLIST.  The fact that most
    implementations would do this in a certain way and the user can guess
    what that way is should not matter.  If we let people twiddle hidden
    state and count on the results, there will be no safe optimizations
    left, and all Common Lisp implementations will slow down by a
    considerable factor.

I agree that it ought to be "an error" or left undefined
(implementation-dependent).  Other languages with looping constructs
have run into this problem, and most have decided to admit that there is
"hidden state" which is implementation-dependent.

∂18-Sep-83  1624	@MIT-ML:Moon%SCRC-TENEX@MIT-MC 	implied contracts in the mapping functions?
Received: from MIT-ML by SU-AI with TCP/SMTP; 18 Sep 83  16:24:05 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Sun 18-Sep-83 19:12:49-EDT
Date: Sunday, 18 September 1983, 18:47-EDT
From: David A. Moon <Moon%SCRC-TENEX@MIT-MC>
Subject: implied contracts in the mapping functions?
To: Glenn S. Burke <GSB@MIT-ML>
Cc: common-lisp@SU-AI
In-reply-to: The message of 17 Sep 83 21:12-EDT from Glenn S. Burke <GSB at ML>

    Date: 17 September 1983 21:12 EDT
    From: Glenn S. Burke <GSB @ MIT-ML>
    I would appreciate comments on the following "bug report" I just
    received.

    I am iterating over a list and simultaneously adding things to the end of
    the list with NCONC.  It works fine until I get down to the final iteration.

I don't think the language should define what happens in this case (even though
it "works" in my implementation, as it happens).  Such iterations should be
written in terms of lower-level primitives, such as DO or TAGBODY+SETQ.

∂18-Sep-83  1732	@MIT-ML,@MIT-MC:kmp@MIT-MC 	implied contracts in the mapping functions?    
Received: from MIT-ML by SU-AI with TCP/SMTP; 18 Sep 83  17:32:09 PDT
Date: Sunday, 18 September 1983, 19:50-EDT
From: Kent M. Pitman <kmp at MIT-MC>
Subject: implied contracts in the mapping functions?
To: Cassels at SCRC-TENEX, "Fahlman%CMU-CS-C" at MIT-ML
Cc: "Common-Lisp%SAIL" at MIT-ML
In-reply-to: The message of 17 Sep 83 23:08-EDT from Robert A. Cassels <Cassels%SCRC-TENEX at MIT-MC>,
             The message of 17 Sep 83 21:23-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>,
             The message of 17 Sep 83 21:12-EDT from Glenn S. Burke <GSB at MIT-ML>

    Date: Saturday, 17 September 1983, 23:08-EDT
    From: Robert A. Cassels <Cassels%SCRC-TENEX@MIT-MC>

	Date: Sat, 17 Sep 1983  21:23 EDT
	From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>

	I think that "it is an error" to destructively mess around with a list
	after it has been fed to MAPCAR or even to DOLIST.  The fact that most
	implementations would do this in a certain way and the user can guess
	what that way is should not matter.  If we let people twiddle hidden
	state and count on the results, there will be no safe optimizations
	left, and all Common Lisp implementations will slow down by a
	considerable factor.

    I agree that it ought to be "an error" or left undefined
    (implementation-dependent).  Other languages with looping constructs
    have run into this problem, and most have decided to admit that there is
    "hidden state" which is implementation-dependent.

I disagree. 

Arguments that other languages do x or y prove nothing. Many other languages
are just plain afraid of everything. Or they may have sufficiently different
data structures and types that this is reasonable. I'd want concrete examples
of design criteria from specific languages before buying a follow-the-leader
argument.

Vague arguments about possible optimizations are also a bad idea; I would
like to hear such arguments spelled out concretely. Generally, I would like
to see optimizations shaped by linguistic considerations, not linguistic
considerations shaped by optimizations. Certainly, without specific examples
of the kinds of optimizations you want to make, how can I understand the
consequences of those optimizations?

While programmers are prone to occasional errors in judgment, 
I think it is important to recognize systematic "errors" that they make and
ask why they make them.  For example, it was never a documented feature of
APPEND that it would copy its first N arguments, but people still used to
write (APPEND x NIL) to cause copying to happen. There is the small issue of
whether the program was going to do 
 (IF (NULL Y) X (REALLY-APPEND X Y))
because that would screw them, but that was their only `real' error. The other
assumption (that REALLY-APPEND would really copy X) was not really just
an assumption; it was pretty much forced by the contract of APPEND. So it
isn't as unsafe as it looks at first glance. Likewise, I think, for MAPCAR,
etc. The bug GSB mentioned was one where the guy lost because he let the
NCONC get too close to the point he was scanning with the map, but the 
basic theory that you could bash something farther down the line was sound.
Now in the case of (APPEND x NIL), people got smart and just made
a subroutine, COPYLIST, for what really needed to be done. But in the case of 
this NCONCing onto lists that are being mapped, I don't really think we can 
package it quite so simply...
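
For reference, the idiom in question next to its packaged replacement
(COPY-LIST being the Common Lisp spelling of the old COPYLIST):

	(setq copy (append x nil))	; old idiom: relies on APPEND copying X
	(setq copy (copy-list x))	; the subroutine that says what is meant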

Also, I don't think it is ever an ERROR (signalled or otherwise) for a
user to modify later structure. At worst, it should be nondeterministic with 
respect to the algorithm. I could imagine a case of mapping down a list of 
integers which you are simultaneously NCONCing new integers onto and 
splicing non-primes out of, using it as a sieve, where it might actually be 
reasonable not to worry whether the mapping operator actually noticed the 
splice. In one case, it might take slightly longer because more numbers might 
be tested than were really necessary, but in the case I'm thinking of, it 
would not and should not affect the correctness of the algorithm, and the 
programmer shouldn't take grief from other programmers about how it was an
"error" to think that way.

And I really think that there may be cases where efficiency considerations
call for me to write something which has a tail that is known
to be more than a threshold distance from the point in the list being mapped
over.  I think we should say only that it is undefined for the programmer to 
modify the current cell (i.e., if I am looking at the D in (A B C D), I should
not expect that NCONCing the list would matter), but I think it is reasonable 
for a programmer to assume that if he's looking at the C, the list can be 
NCONC'd. This is important to the Lisp programmer's view of lists as 
modifiable, shared structure.  I think we should carefully define the unusual
situations, but I think it's overly restrictive to define that 
modifying any part of the list after the part under inspection is an error...

One side light -- Any system with multi-processing is fairly likely at one
time or another to have GET going on in one process while PUTPROP is going
on in another process for the same symbol -- perhaps not for the same indicator.
We surely don't want people claiming this is erroneous and that we need to 
lock property lists, since for most applications the programmer will (or at 
least should) hand-lock any cases that matter, and the rest should just take 
what comes.



∂18-Sep-83  2158	FAHLMAN@CMU-CS-C.ARPA 	implied contracts in the mapping functions?    
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 18 Sep 83  21:57:53 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Mon 19 Sep 83 01:00:28-EDT
Date: Mon, 19 Sep 1983  01:00 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Kent M. Pitman <kmp@MIT-MC.ARPA>
Cc:   Common-Lisp@SU-AI.ARPA
Subject: implied contracts in the mapping functions?
In-reply-to: Msg of 18 Sep 1983 19:50-EDT from Kent M. Pitman <kmp at MIT-MC>


    Vague arguments about possible optimizations are also a bad idea.

I disagree.  It should be clear that this sort of thing rules out a very
large class of possible optimizations -- basically, all those
optimizations that would depend on some property of the input list, such
as its length or the type of elements present.  If we allow the user's
functionals to mutilate this list after it is passed in, any highly
optimizing compiler would have to check these functionals for destructive
side-effects on the list, a much more difficult task in most cases.

    Generally, I would like to see optimizations 
    shaped by linguistic considerations, not linguistic considerations shaped
    by optimizations.

OK, how about the following: When you pass an object to a Common Lisp
function or special form whose job it is to process that object in some
coherent manner, it HAS to be undefined what happens if you somehow
regain control (perhaps because the form is executing a function on your
behalf) and destructively modify that object in the middle of the
operation.  I see no way to make this coherent in general.  Therefore to
allow a few simple cases of it (chewing only on the as-yet-unprocessed
tail of a list being chewed linearly) would be an ugly hack.

Multi-processing is not addressed by the Common Lisp design, and trying
to imagine what it would look like and using that as a basis for argument
seems much more bizarre to me than worrying about what "vague optimizations"
some future super-compiler for the existing language would want to
employ.

-- Scott

∂19-Sep-83  0335	DDYER@USC-ISIB 	"optimizations"    
Received: from USC-ISIB by SU-AI with TCP/SMTP; 19 Sep 83  03:35:19 PDT
Date: 19 Sep 1983 0331-PDT
Subject: "optimizations"
From: Dave Dyer       <DDYER@USC-ISIB.ARPA>
To: common-lisp@SU-AI.ARPA


 I agree strongly that vague arguments about optimizations are
a bad idea.  The only optimizations permitted should be those
proven not to affect the semantics of the program.  I have seen
too many cases where seemingly harmless optimizations cause
subtle bugs when the user's program does something unusual.

 In the current case, mapping functions should certainly be required
to conform to the reasonable expectation of the user that an NCONC
will work.  If permitted, this kind of incompatibility is
exactly what will make common lisp non-portable.  Saving a
cycle or two just isn't worth the dirtiness of explaining
to the user why an NCONC won't always work and what to avoid.

 I am reminded of Larry Masinter's golden words:

   "I don't believe in portability in the absence of porting"

 There should be only ONE implementation of the mapping functions,
written in unambiguous primitives, shared by all common lisp
implementations.  Likewise for the largest possible subset of
the core language.  This is the best way to make sure that
the zillion+1 small implementation choices one has to make,
even working from the tightest specification, are made the same 
way by all the implementations.

-------

∂19-Sep-83  0548	RAM@CMU-CS-C.ARPA 	Implicit contracts   
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 19 Sep 83  05:47:58 PDT
Received: ID <RAM@CMU-CS-C.ARPA>; Mon 19 Sep 83 08:50:49-EDT
Date: Mon, 19 Sep 1983  08:50 EDT
From: Rob MacLachlan <RAM@CMU-CS-C.ARPA>
To:   common-lisp@SU-AI.ARPA
Subject: Implicit contracts


    I think that the specification of a language by implementation as
implied by Dyer's message is an extremely bad idea.  If it is legal to
depend on any possible behavior of a function then it becomes
impossible to ever change any function because someone might depend on
a particular idiosyncratic behavior.

    It seems to me that the best approach in Common Lisp is to have
that manual describe every important behavior of an operation, and for
any code which depends on something not guaranteed by the manual to be
considered erroneous.

    As far as assuring agreement with the manual, I think that the
best solution would be to have a comprehensive validation suite, or
lacking that, a number of large portable applications.

  Rob

∂19-Sep-83  0811	@MIT-MC:BSG%SCRC-TENEX@MIT-MC 	"optimizations"    
Received: from MIT-MC by SU-AI with TCP/SMTP; 19 Sep 83  08:11:49 PDT
Received: from SCRC-BEAGLE by SCRC-TENEX with CHAOS; Mon 19-Sep-83 11:08:25-EDT
Date: Monday, 19 September 1983, 11:07-EDT
From: Bernard S. Greenberg <BSG%SCRC-TENEX@MIT-MC>
Subject: "optimizations"
To: DDYER@USC-ISIB, common-lisp@SU-AI
In-reply-to: The message of 19 Sep 83 06:31-EDT from Dave Dyer <DDYER at USC-ISIB>

    Date: 19 Sep 1983 0331-PDT
    From: Dave Dyer       <DDYER@USC-ISIB.ARPA>
     There should be only ONE implementation of the mapping functions,
    written in unambiguous primitives, shared by all common lisp
    implementations.  Likewise for the largest possible subset of
    the core language.  This is the best way to make sure that
    the zillion+1 small implementation choices one has to make,
    even working from the tightest specification, are made the same 
    way by all the implementations.
I disagree with this view fairly strongly.  Different implementations
have different instruction sets and primitives, and what appears to
be an optimal macro expansion for one implementation is often not so
for another.  It is best to specify the contracts of one of these goddamn
things by documentation, driven by the problem the function
or form at hand was supposed to solve, not by code that nails down its implementation
so that you can find delightful and challenging ways to undercut
its intended purpose. 

∂19-Sep-83  1231	@MIT-MC:Moon%SCRC-TENEX@MIT-MC 	Implicit contracts
Received: from MIT-MC by SU-AI with TCP/SMTP; 19 Sep 83  12:31:33 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Mon 19-Sep-83 15:24:45-EDT
Date: Monday, 19 September 1983, 15:23-EDT
From: David A. Moon <Moon%SCRC-TENEX@MIT-MC>
Subject: Implicit contracts
To: Rob MacLachlan <RAM@CMU-CS-C>
Cc: common-lisp@SU-AI
In-reply-to: The message of 19 Sep 83 08:50-EDT from Rob MacLachlan <RAM at CMU-CS-C>

    Date: Mon, 19 Sep 1983  08:50 EDT
    From: Rob MacLachlan <RAM@CMU-CS-C.ARPA>
	I think that the specification of a language by implementation as
    implied by Dyer's message is an extremely bad idea.  If it is legal to
    depend on any possible behavior of a function then it becomes
    impossible to ever change any function because someone might depend on
    a particular idiosyncratic behavior.

Precisely the problem that Interlisp has.

	It seems to me that the best approach in Common Lisp is to have
    that manual describe every important behavior of an operation, and for
    any code which depends on something not guaranteed by the manual to be
    considered erroneous.

	As far as assuring agreement with the manual, I think that the
    best solution would be to have a comprehensive validation suite, or
    lacking that, a number of large portable applications.

I am in complete agreement with this.  We should definitely have a
comprehensive validation suite.  Even one that isn't comprehensive
would be helpful.  I wrote one for the division-related functions
(/, mod, floor, etc.), but that is hardly a drop in the bucket.
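
To give the flavor, here is the kind of check such a suite contains
(a sketch only, using the standard FLOOR contract):

	(multiple-value-bind (q r) (floor 7 2)
	  (assert (and (= q 3) (= r 1))))	; 7 = 2*3 + 1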

∂19-Sep-83  1307	@MIT-ML:HEDRICK@RUTGERS.ARPA 	Re: implied contracts in the mapping functions?   
Received: from MIT-ML by SU-AI with TCP/SMTP; 19 Sep 83  13:07:32 PDT
Date: 19 Sep 83 16:07:49 EDT
From: Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Subject: Re: implied contracts in the mapping functions?
To: kmp%MIT-MC@MIT-ML.ARPA
cc: Cassels%SCRC-TENEX@MIT-ML.ARPA, Fahlman%CMU-CS-C@MIT-ML.ARPA,
    Common-Lisp%SAIL@MIT-ML.ARPA
In-Reply-To: Message from "Kent M. Pitman <kmp at MIT-MC>" of 18 Sep 83 20:37:17 EDT

It is dangerous to have a feature that obviously ought to be in a
language not be there, or be there in only half the implementations.  No
matter what you say in the manual, people will use it where it works.
You should either define what NCONC to a list being mapped does, or
strongly suggest that its use be made an error that is detected.  Since
I don't know how to do the latter, I suggest the former.
-------

∂19-Sep-83  1415	DDYER@USC-ISIB 	Re: "optimizations"
Received: from USC-ISIB by SU-AI with TCP/SMTP; 19 Sep 83  14:15:35 PDT
Date: 19 Sep 1983 1416-PDT
Subject: Re: "optimizations"
From: Dave Dyer       <DDYER@USC-ISIB.ARPA>
To: Bernard S. Greenberg <BSG%SCRC-TENEX@MIT-MC.ARPA>, DDYER@USC-ISIB.ARPA,
    common-lisp@SU-AI.ARPA
In-Reply-To: Your message of Monday, 19 September 1983, 11:07-EDT

    From: Dave Dyer       <DDYER@USC-ISIB.ARPA>
     There should be only ONE implementation of the mapping functions,

  From: Bernard S. Greenberg <BSG%SCRC-TENEX@MIT-MC>
  I disagree with this view fairly strongly.  Different implementations
  have different instruction sets and primitives, and what appears to
  be an optimal macro expansion for one implementation is often not so
  for another.

If a different actual implementation is better for some machine, then
the implementor is free to substitute an equivalent one,  provided
it is really equivalent.  It is exactly the desire to optimize each
implementation at the expense of "minor" incompatibility that in aggregate
makes putatively compatible languages incompatible, be it Lisp or Fortran.

I don't really believe that specification by implementation is ideal,
just that it is the only reliable mechanism.  No prose specification
can capture the full subtlety of MAPCAR.  To be sure, there should be
a prose specification first, and the prose should remain the ultimate
arbiter of what is correct;  but ONE program, written to be faithful
to the spec, should be the arbiter of what is an acceptable implementation.
If two implementations of the same specification behave differently
in ways not explicitly permitted by the specification, then at least
one of them is wrong.


 The laser edition of the manual states in its description of MAP:
 
   "... The result sequence is as long as the shortest of the input sequences.
      If the FUNCTION has side effects, it can count on being called
    first with all elements numbered zero, and then on all elements
    numbered one and so on. ..."
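
For concreteness, the shortest-sequence rule quoted above works out as
follows (using the calling convention with the result type first):

	(map 'list #'+ '(1 2 3) '(10 20 30 40))	;=> (11 22 33)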

  Now, I can see that the natural behavior when FUNCTION modifies one
of the input sequences will be quite different when the input sequence
is an array versus when it is a list; in fact there are too many possibilities
to enumerate.  (Examples:  an array-type sequence probably has a known
length.  Is it legal to assume the length won't change?  Mapping over
a list-type sequence is "naturally" unaffected by changes to the elements
already processed, whereas array-type sequences might "naturally" skip
or repeat elements if a new element is removed or added at the beginning.)

 Confronted with this variability, the current spec is inadequate. One
extreme position to correct the spec would be to make it read:

   "Side effects which change the number of elements in a sequence
    have unpredictable effects on the execution sequence."

Which would simply define a large grey area.  An opposite extreme might read:

   "Side effects which change the number of elements in a sequence
    are immediately effective; The N'th call to the user's function
    will be with the elements which are N'th in the source sequences
    at the time of the call."

Which would define an explicit discipline: one quite different from
the customary MAP but possibly more appropriate for arrays.  A third
proposal might read:

    "Side effects which change the number of elements are well defined
     provided that only elements not yet encountered by the iteration
     are changed."

This would allow adding or dropping elements from the REST of the list
but not changing the part already processed.  Finally, one might have:

    "Side effects which change the number of elements in a sequence
     are immediately effective; the iteration proceeds by using the
     successor of the element used on the previous iteration"

Which corresponds to the usual definition of MAP, but might be
inconvenient for sequences implemented by arrays.


----

I have two concluding points.  First, given this discovery of
a hole in the specification, we have to change the specification to
correct it, even if that is simply to mark the hole.  Second, given
the subtle ways that reasonable implementations might vary,
there should be a standard implementation that meets the spec and
that all the implementors agree to use (or at least be compatible with).

This naturally constrains implementors' freedom to choose representation
and algorithm, and will sometimes exact significant performance
penalties, but that is part of the price of portability.  Implementors are
free to provide nonstandard extensions they consider to be better,
but only in upward-compatible ways.

-------

∂21-Sep-83  1336	@MIT-MC:DLW%SCRC-TENEX@MIT-MC 	implied contracts in the mapping functions? 
Received: from MIT-MC by SU-AI with TCP/SMTP; 21 Sep 83  13:36:43 PDT
Received: from SCRC-SHEPHERD by SCRC-TENEX with CHAOS; Wed 21-Sep-83 16:40:39-EDT
Date: Wednesday, 21 September 1983, 16:41-EDT
From: Daniel L. Weinreb <DLW%SCRC-TENEX@MIT-MC>
Subject: implied contracts in the mapping functions?
To: common-lisp@SU-AI
Cc: GSB@MIT-ML

I agree: it is an error.  Programs should not be depending on the
internals of mapping functions.

∂21-Sep-83  1401	KMP@MIT-MC 	definition/errors/...  
Received: from MIT-MC by SU-AI with TCP/SMTP; 21 Sep 83  14:00:48 PDT
Date: 21 September 1983 17:02 EDT
From: Kent M. Pitman <KMP @ MIT-MC>
Subject: definition/errors/...
To: DLW @ MIT-MC
cc: Common-Lisp @ SU-AI

I spoke with ALAN at length about this the other day. He made
a point which I think is significant -- We should not be worrying
about whether this particular feature is good or bad, an error,
or undefined. We should be worrying about why. Otherwise, we'll
be doomed to simply repeat this sort of discussion ad nauseam
every time someone encounters someone with poor or questionable
programming style using any powerful operator. So while I respect
that you think it's an error, I think the only really relevant
question is "What in the language spec makes it so, and how can
we write future specs to try to avoid this sort of problem without
precluding creative and reasonable uses of powerful operators?"

∂21-Sep-83  1508	masinter.pa@PARC-MAXC.ARPA 	Portability and performance, standards and change   
Received: from PARC-MAXC by SU-AI with TCP/SMTP; 21 Sep 83  15:05:52 PDT
Date: 21 Sep 83 15:07 PDT
From: masinter.pa@PARC-MAXC.ARPA
Subject: Portability and performance, standards and change
To: Common-Lisp@SU-AI.ARPA

The multiple goals in language design of portability and high
performance, standard definitions and the ability to change system
definitions are often in conflict. You sometimes have to give up one to
get the other.

The primary goal of Common-Lisp (above and beyond Franz, NIL, LispM etc)
was to be COMMON. Further along the road to allowing each implementor to
decide the exact semantics of MAPC lies the current proliferation of
MacLisp dialects.

Nailing down what the mapping functions do in the presence of structure
modification on the argument list may (a) result in performance
degradations in some cases and (b) tie you to design decisions that you
will wish later that you could change, but those are in fact the real
costs of standardization and portability.

If standardization and portability are precisely the problem with
Interlisp (as Moon put it), they are also its strengths.  If you have a
STANDARD, then you can't go off and change the semantics of your
implementation without in fact changing the STANDARD. Leaving it
unspecified or saying "it is an error" does not avoid the issue. It
merely increases the number of programs which will not transfer from one
"common" lisp to another (or even from one release to the next, if Moon
really meant what I thought he did) without tracking down dependence on
features which are not detectable by static or runtime analysis.




∂22-Sep-83  1225	@MIT-ML:DLW@SCRC-TENEX 	Re: implied contracts in the mapping functions?    
Received: from MIT-ML by SU-AI with TCP/SMTP; 22 Sep 83  12:25:00 PDT
Received: from SCRC-SHEPHERD by SCRC-TENEX with CHAOS; Thu 22-Sep-83 15:26:39-EDT
Date: Thursday, 22 September 1983, 15:27-EDT
From: Daniel L. Weinreb <DLW@SCRC-TENEX>
Subject: Re: implied contracts in the mapping functions?
To: HEDRICK@RUTGERS, kmp%MIT-MC@MIT-ML
Cc: Cassels%SCRC-TENEX@MIT-ML, Fahlman%CMU-CS-C@MIT-ML,
    Common-Lisp%SAIL@MIT-ML
In-reply-to: The message of 19 Sep 83 16:07-EDT from Charles Hedrick <HEDRICK at RUTGERS>

    Date: 19 Sep 83 16:07:49 EDT
    From: Charles Hedrick <HEDRICK@RUTGERS.ARPA>
    It is dangerous to have a feature that obviously ought to be in a
    language not be there, or be there in only half the implementations.  No
    matter what you say in the manual, people will use it where it works.

I strongly disagree.  It is completely unavoidable that some people will
depend on the peculiarities of any implementation.  It does not follow
that every peculiarity should be a defined part of Common Lisp that
every implementation must follow.  If we are to believe what you are
saying, then EVERY place in the manual that says "it is an error" should
be changed to either say "it signals an error" or else be precisely
defined.

∂22-Sep-83  1423	@MIT-ML:BENSON@SPA-NIMBUS 	Re: implied contracts in the mapping functions? 
Received: from MIT-ML by SU-AI with TCP/SMTP; 22 Sep 83  14:23:45 PDT
Received: from SPA-LOS-TRANCOS by SPA-Nimbus with CHAOS; Thu 22-Sep-83 14:24:41-PDT
Date: Thursday, 22 September 1983, 14:24-PDT
From: Eric Benson <BENSON at SPA-NIMBUS>
Subject: Re: implied contracts in the mapping functions?
To: Daniel L. Weinreb <DLW at SCRC-TENEX>, HEDRICK at RUTGERS,
    kmp%MIT-MC at MIT-ML
Cc: Cassels%SCRC-TENEX at MIT-ML, Fahlman%CMU-CS-C at MIT-ML,
    Common-Lisp%SAIL at MIT-ML
In-reply-to: The message of 22 Sep 83 12:27-PDT from Daniel L. Weinreb <DLW at SCRC-TENEX>

    Date: Thursday, 22 September 1983, 15:27-EDT
    From: Daniel L. Weinreb <DLW@SCRC-TENEX>
	Date: 19 Sep 83 16:07:49 EDT
	From: Charles Hedrick <HEDRICK@RUTGERS.ARPA>
	It is dangerous to have a feature that obviously ought to be in a
	language not be there, or be there in only half the implementations.  No
	matter what you say in the manual, people will use it where it works.

    I strongly disagree.  It is completely unavoidable that some people will
    depend on the peculiarities of any implementation.  It does not follow
    that every peculiarity should be a defined part of Common Lisp that
    every implementation must follow.  If we are to believe what you are
    saying, then EVERY place in the manual that says "it is an error" should
    be changed to either say "it signals an error" or else be precisely
    defined.

And of course that's only the beginning.  The only way to ensure that
every implementation has exactly the same behavior is to define a very
low-level virtual machine and write the entire system in terms of it.
Then there would truly be one Common Lisp, and any program which ran on
one version would run on any other.  This is how UCSD Pascal is defined,
for example.  Programs can be compiled on Z80's and run on 6502's.  This
is not the goal of the Common Lisp language definition as I understand
it, however.  I believe the intention is similar to that of Standard
Lisp, to define an @i(interchange subset).  Programs which rely only on
the behavior described in the manual will run on any Common Lisp
implementation.  No machine can enforce the Common Lisp standard,
although automated tools can of course aid the programmer in adhering to
it.

∂22-Sep-83  1449	HEDRICK@RUTGERS.ARPA 	behavior of mapping    
Received: from RUTGERS by SU-AI with TCP/SMTP; 22 Sep 83  14:49:39 PDT
Date: 22 Sep 83 17:51:30 EDT
From: Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Subject: behavior of mapping
To: common-lisp@SU-AI.ARPA

Yes, I realize that it is possible to carry transportability to an
extreme.  Obviously we cannot protect users against every possible
assumption they might make.  But there are some assumptions that we know
from experience are made so universally that we ought to at least
consider that the users might be right and we might be wrong.  I agree
that I can't possibly guarantee that all code will be transportable.  But
it would be irresponsible to leave in something I know will cause most
users to write untransportable programs, unless there are very strong
reasons indeed. I have been involved in Lisp development and support for
a number of years.  I can  tell you from experience that users do in
fact depend upon the exact semantics of mapping functions.  I believe
this is one of the cases where the users' expectations are so universal
that the implementors should bow to them.
-------

∂22-Sep-83  2049	ALAN@MIT-MC 	behavior of mapping   
Received: from MIT-MC by SU-AI with TCP/SMTP; 22 Sep 83  20:49:32 PDT
Date: 22 September 1983 23:52 EDT
From: Alan Bawden <ALAN @ MIT-MC>
Subject:  behavior of mapping
To: Common-Lisp @ SU-AI, HEDRICK @ RUTGERS
In-reply-to: Msg of 22 Sep 83 17:51:30 EDT from Charles Hedrick <HEDRICK at RUTGERS.ARPA>

    Date: 22 Sep 83 17:51:30 EDT
    From: Charles Hedrick <HEDRICK at RUTGERS.ARPA>
    ...  But there are some assumptions that we know from experience are
    made so universally that we ought to at least consider that the users
    might be right and we might be wrong.... it would be irresponsible
    to leave in something I know will cause most users to write
    untransportable programs, ...  I can tell you from experience that
    users do in fact depend upon the exact semantics of mapping functions....

"will cause MOST users to write untransportable programs"?

I don't think I have EVER met a user who depended this closely on the exact
semantics of any function that takes a functional argument.  In fact, my
experience has been that users generally exhibit good sense about such
issues.  I suggest that that good sense be reinforced by inserting a
paragraph in the manual explaining briefly that functions that take
functional arguments will generally behave unpredictably if their arguments
are diddled before they are done with them.

∂27-Sep-83  1620	JonL.pa@PARC-MAXC.ARPA 	THROW, and MAP  
Received: from PARC-MAXC by SU-AI with TCP/SMTP; 27 Sep 83  16:20:33 PDT
Date: Tue, 27 Sep 83 15:51 PDT
From: JonL.pa@PARC-MAXC.ARPA
Subject: THROW, and MAP
To: Common-Lisp@SU-AI.ARPA

THROW

Excelsior edition's commentary on THROW doesn't make clear whether the
evaluation of the 'result' form is to take place before or after the
tag-search.  If before, then the syntax of THROW is merely that of 'function'.
If after, then that must be spelled out, so that side-effects in the 'result'
computation won't occur when there is a tag-search failure.
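
A sketch of the case in question (the tag name is invented; assume no CATCH
for it is active):

	;; If the 'result' form is evaluated before the tag-search, the PRINT
	;; side-effect happens even though no matching CATCH exists; if the
	;; search comes first, the side-effect is skipped when the search fails.
	(throw 'no-such-tag (print 'side-effect))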

MAP

All the recent discussion about "what happens if the mapped function
updates the list it is mapping over" points up the non-primitive nature
of the map series of "functions".  Admittedly, the WhitePages aren't the
place to try to distinguish a truly-primitive kernel from the common,
portable subset, but a simple change from [Function] to [Macro] on the
map entries would foreclose a lot of misguided babbling.

Given that CommonLisp has RETURN-FROM and named BLOCKs, a macro-expansion
of the mappers need not be concerned with "prog context".  Is there any
reason for continuing to push the primitiveness of the map series over a
reasonable macro definition?
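
The sort of macro definition this suggests might look like the following
for the single-list case (a sketch only; MY-MAPC is a made-up name, and the
real mappers take any number of lists):

	(defmacro my-mapc (function list)
	  (let ((fn (gensym)) (orig (gensym)) (l (gensym)))
	    `(let* ((,fn ,function) (,orig ,list))
	       (do ((,l ,orig (cdr ,l)))
	           ((null ,l) ,orig)		; like MAPC, return the original list
	         (funcall ,fn (car ,l))))))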

More to the point, sigh, is the lack of any reasonable iteration control
structure.  MacLisp DO just doesn't "cut the mustard".  DOTIMES and DOLIST
are too little, too late.  LOOP in its current definition (p93) seems to
preclude the nice MacLisp/LispM LOOP macro.  Foo.  Having used Interlisp's
I.S.OPRS for some time now, I often wonder how one can get along without it.
The objection to a reasonable form of LOOP can hardly be that it is "new",
since it is essentially a modest variant of Interlisp's I.S.OPRS, which has
had 10 years of extensive use.  Nor should the objection be that old "cop
out" that its syntax isn't "Lispy" enough [or, as the last paragraph on
page 99 almost says, "... as if it were impossible to write programs without
Lots-of-Interminable-Silly-Parentheses"].

∂27-Sep-83  1649	@MIT-MC:Moon%SCRC-TENEX@MIT-MC 	THROW, and MAP    
Received: from MIT-MC by SU-AI with TCP/SMTP; 27 Sep 83  16:49:07 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Tue 27-Sep-83 19:29:12-EDT
Date: Tuesday, 27 September 1983, 19:50-EDT
From: David A. Moon <Moon%SCRC-TENEX@MIT-MC>
Subject: THROW, and MAP
To: JonL.pa@PARC-MAXC
Cc: Common-Lisp@SU-AI
In-reply-to: The message of 27 Sep 83 18:51-EDT from JonL.pa at PARC-MAXC

    Date: Tue, 27 Sep 83 15:51 PDT
    From: JonL.pa@PARC-MAXC.ARPA

    Excelsior edition's commentary on THROW doesn't make clear whether the
    evaluation of the 'result' form is to take place before or after the
    tag-search.  If before, then the syntax of THROW is merely that of
    'function'.  If after, then that must be spelled out, so that side-effects
    in the 'result' computation won't occur when there is a tag-search
    failure.

The syntax (you mean semantics?) of THROW is not the same as that of functions,
since it sees all the values resulting from evaluating its second subform.
I vote for the subforms both being evaluated before the search for a matching
tag commences, since it seems simpler for both user-understanding and
implementation ease.

    LOOP in its current definition (p93) seems to preclude the nice
    MacLisp/LispM LOOP macro.

In the excelsior edition LOOP was fixed to not preclude that macro.
(It used to be defined in such a way that the hairy LOOP macro was not
a consistent extension of Common Lisp's simple builtin one.)
But the hairy LOOP macro is not included in the white pages.

This is partly my fault since years ago I promised to come up with
a second generation of LOOP that would fix a lot of the problems people
have complained about.  In fact I did most of the design and circulated
it to a number of people, but the thing has been on the back burner for
a long time due to other responsibilities.  It probably wouldn't be hard
to make the existing LOOP run in Common Lisp as a yellow pages package,
although I for one would much rather have the nicer new one.  The
motivation for finishing this project (either by me or by someone else,
I don't care) will probably become a lot higher in a half year or so as
we start to see some "real live" Common Lisp implementations with actual
users.

I agree that it is quite impractical to try to do without some form of complex
iteration generator, whether it be the Interlisp one, the Maclisp one, or my
pie in the sky new one.  Dick Waters's LetS package should definitely be
made to run in Common Lisp as well.

∂27-Sep-83  1722	FAHLMAN@CMU-CS-C.ARPA 	THROW, and MAP   
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 27 Sep 83  17:21:58 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Tue 27 Sep 83 20:24:23-EDT
Date: Tue, 27 Sep 1983  20:24 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Common-Lisp@SU-AI.ARPA
Subject: THROW, and MAP
In-reply-to: Msg of 27 Sep 1983 19:50-EDT from David A. Moon <Moon%SCRC-TENEX at MIT-MC>


I agree with Moon on both counts: THROW should eval both its args before
searching for the tag (it is very awkward to implement it the other
way), and we all are eager to see Moon's new polished LOOP proposal
rather than rushing to implement the old one.  Even we reactionaries are
about ready to accept something along these lines, but I'd like it to be
as uncluttered as possible.

-- Scott

∂27-Sep-83  1942	JONL.PA@PARC-MAXC.ARPA 	Re: THROW, and MAP   
Received: from PARC-MAXC by SU-AI with TCP/SMTP; 27 Sep 83  19:42:32 PDT
Date: 27 SEP 83 19:38 PDT
From: JONL.PA@PARC-MAXC.ARPA
Subject: Re: THROW, and MAP
To: Moon%SCRC-TENEX@MC.ARPA, JonL.pa@MC.ARPA
cc: Common-Lisp@SAIL.ARPA, JONL.PA@PARC-MAXC.ARPA

In response to the message sent  Tue, 27 Sep 83 19:50 EDT from
Moon%SCRC-TENEX@MIT-MC.ARPA

Syntax/Semantics -- either way I had overlooked the multiple-value thing
(being confined to a system which, for the moment, doesn't have them).

Yes, I'd like to see the evaluation question settled that way -- that's how
I've already implemented an Interlisp version -- but I'd be quite happy to
go the other way if the majority so voted.

∂28-Sep-83  0828	Guy.Steele@CMU-CS-A 	Re: THROW, and MAP 
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 28 Sep 83  08:28:29 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 28 Sep 83 11:18:27 EDT
Date: 28 Sep 83 1124 EDT (Wednesday)
From: Guy.Steele@CMU-CS-A
To: JonL.pa@PARC-MAXC
Subject: Re: THROW, and MAP
CC: common-lisp@SU-AI
In-Reply-To: "JonL.pa@PARC-MAXC.ARPA's message of 27 Sep 83 17:51-EST"

Your point about THROW is well-taken, and I will try to improve the prose.
I think that THROW was made into a special form because, although all
arguments are evaluated and thus from that point of view it could be
treated as a function, THROW has some bizarre side effects (namely
transfer of control) and most program-processing programs will probably
need to treat it as a special form anyway.  (Maybe that's wrong; maybe
it just happened somewhat accidentally as we gyrated through various
versions of throws and catches.)

As for MAP, I honestly don't see what being a macro or a function has
to do with pinning down its behavior when a list being mapped over is
modified; either a macro or a function could have the surprising behavior,
and in either case the description must be more precise.

Now, the question of whether MAP should be a function or a macro is in
itself an interesting and debatable question.  It doesn't need to be
a macro for the sake of compiled code, because a compiler is free to
compile it inline unless specifically directed not to do so (with a
notinline declaration).  MacLISP does this kind of inline compilation
already.  As you pointed out, the RETURN-FROM technology allows
Common LISP to avoid some of the PROG-capture anomalies that occurred
in MacLISP.

I will point out that in some styles of programming it is useful to
be able to APPLY MAP (or even to MAP a MAP)!

Concerning LOOP, the Common LISP LOOP construct is compatible with the
LISP Machine LOOP construct.  There is a sentence which carefully makes
the semantics of Common LISP's LOOP undefined if any form in the
construct is a symbol (it should say atom).  So every valid Common
LISP LOOP must contain only non-atomic forms, and in this case the
LISP Machine LOOP has the same semantics as the Common LISP LOOP.
So any implementation is free to implement the entire LISP Machine
LOOP stuff and claim it is compatible with Common LISP; it just
won't necessarily be portable if you use the hairier features.

∂28-Sep-83  0829	Guy.Steele@CMU-CS-A 	THROW, again  
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 28 Sep 83  08:29:11 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 28 Sep 83 11:18:46 EDT
Date: 28 Sep 83 1127 EDT (Wednesday)
From: Guy.Steele@CMU-CS-A
To: JonL.pa@PARC-MAXC
Subject: THROW, again
CC: common-lisp@SU-AI
In-Reply-To: "JonL.pa@PARC-MAXC.ARPA's message of 27 Sep 83 17:51-EST"

Scratch my explanations of why THROW is a special form.  Moon had
the right answer (multiple values from a single argument form).
I feel silly for neglecting that point.

∂28-Sep-83  1352	GSB@MIT-ML 	THROW, and MAP    
Received: from MIT-ML by SU-AI with TCP/SMTP; 28 Sep 83  13:52:12 PDT
Date: 28 September 1983 15:41 EDT
From: Glenn S. Burke <GSB @ MIT-ML>
Subject: THROW, and MAP
To: Moon%SCRC-TENEX @ MIT-MC
cc: Common-Lisp @ SU-AI, JonL.pa @ PARC-MAXC

    Received: from MIT-MC by SU-AI with TCP/SMTP; 27 Sep 83  16:49:07 PDT
    Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Tue 27-Sep-83 19:29:12-EDT
    Date: Tuesday, 27 September 1983, 19:50-EDT
    From: David A. Moon <Moon%SCRC-TENEX@MIT-MC>
    In-reply-to: The message of 27 Sep 83 18:51-EDT from JonL.pa at PARC-MAXC

    The syntax (you mean semantics?) of THROW is not the same as of functions,
    since it sees all the values resulting from evaluating its second subform.
    I vote for the subforms both being evaluated before the search for a matching
    tag commences, since it seems simpler for both user-understanding and
    implementation ease.
    . . .

In my implementation of multiple values, it may be beneficial for me to
do the search first in order to find the eventual destination of the values.
I haven't gotten into this stuff enough to know whether I actually would
want to do it this way even if given the liberty, however.

∂28-Sep-83  2017	Guy.Steele@CMU-CS-A 	Burke's remarks on THROW and MAP  
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 28 Sep 83  20:17:03 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 28 Sep 83 23:07:53 EDT
Date: 28 Sep 83 2321 EDT (Wednesday)
From: Guy.Steele@CMU-CS-A
To: Glenn S. Burke <GSB@MIT-ML>
Subject: Burke's remarks on THROW and MAP
CC: common-lisp@SU-AI
In-Reply-To: "Glenn S. Burke's message of 28 Sep 83 14:41-EST"

I believe that it would be acceptable to perform THROW in the following
order:
(1) evaluate tag (2) search for catcher (3) evaluate results (4) perform unwind
However, this order would not be acceptable:
(1) evaluate tag (2) search for catcher (3) perform unwind (4) evaluate results

The point is that the results are calculated in the dynamic environment
(that includes special variables and catchers) of the THROW, not that
of the CATCH.
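
For concreteness, a sketch of why the unwind must come last (the variable
name is invented):

	(defvar *level* 0)

	(catch 'out
	  (let ((*level* 1))			; special binding in force at the THROW
	    (throw 'out (list *level*))))	;=> (1)

	;; The result form sees the THROW's dynamic environment, so *LEVEL* is 1.
	;; Unwinding before evaluating the result would wrongly yield (0).
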
--Guy

∂01-Oct-83  1207	RPG   	INIT-FILE-PATHNAME
 ∂29-Sep-83  2040	@CMU-CS-C.ARPA:STEELE%TARTAN@CMU-CS-C.ARPA 	INIT-FILE-PATHNAME   
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 29 Sep 83  20:39:56 PDT
Received: from TARTAN by CMU-CS-C with TLnet; 29 Sep 83 23:41:31-EDT
Received: ID <STEELE%TARTAN@CMU-CS-C.ARPA>; Thu 29 Sep 83 23:40:52-EDT
Date: Thu 29 Sep 83 23:40:50-EDT
From: STEELE%TARTAN@CMU-CS-C.ARPA
Subject: INIT-FILE-PATHNAME
To: moon%scrc-tenex@MIT-ML.ARPA
cc: fahlman@CMU-CS-C.ARPA, dlw%scrc-tenex@MIT-ML.ARPA, rpg@SU-AI.ARPA,
    bsg%scrc-tenex@MIT-ML.ARPA

I am confused in going over your messages of four weeks ago.
You say that in ZetaLISP, fs:init-file-pathname takes a file type argument
that precedes the host argument.  The blue Chineual says that it takes
a program-name and a host; no mention of a file type argument.  The Common
LISP Excelsior manual says the same thing.  Has ZetaLISP changed incompatibly
on this function's arguments since the blue Chineual?  If so, could you
provide a few examples of the new usage for inclusion?
--Thanks,
  Guy
-------

∂01-Oct-83  1207	RPG   	Pathnames: duh    
 ∂29-Sep-83  2138	@CMU-CS-C.ARPA:STEELE%TARTAN@CMU-CS-C.ARPA 	Pathnames: duh  
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 29 Sep 83  21:38:01 PDT
Received: from TARTAN by CMU-CS-C with TLnet; 30 Sep 83 00:39:39-EDT
Received: ID <STEELE%TARTAN@CMU-CS-C.ARPA>; Fri 30 Sep 83 00:39:02-EDT
Date: Fri 30 Sep 83 00:38:58-EDT
From: STEELE%TARTAN@CMU-CS-C.ARPA
Subject: Pathnames: duh
To: fahlman@CMU-CS-C.ARPA, moon%scrc-tenex@MIT-ML.ARPA,
    dlw%scrc-tenex@MIT-ML.ARPA, rpg@SU-AI.ARPA, bsg%scrc-tenex@MIT-ML.ARPA

We had been dithering about the best way to make a pathname just like
another one except for the type.  Moon suggested (2 September) that
make-pathname should take an extra argument that, if specified, is used
to fill in all components not otherwise specified.

Duh.  make-pathname already has one of those.

So what I formerly wrote as
	(merge-pathnames (make-pathname :host (pathname-host x) :type "BAZ")
		         x)
[which, if you believe that device defaulting is controlled by the host
and not by the defaults pathname, had a bug in it and should have been
written as
	(merge-pathnames (make-pathname :host (pathname-host x)
					:device (pathname-device x)
					:type "BAZ")
			 x)
] can be written simply as
	(make-pathname :type "BAZ" :defaults x)
Is it not so?
--Guy
-------

∂01-Oct-83  1207	RPG   	Duh duh duh  
 ∂29-Sep-83  2216	@CMU-CS-C.ARPA:STEELE%TARTAN@CMU-CS-C.ARPA 	Duh duh duh
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 29 Sep 83  22:16:20 PDT
Received: from TARTAN by CMU-CS-C with TLnet; 30 Sep 83 01:18:14-EDT
Received: ID <STEELE%TARTAN@CMU-CS-C.ARPA>; Fri 30 Sep 83 01:17:27-EDT
Date: Fri 30 Sep 83 01:17:24-EDT
From: STEELE%TARTAN@CMU-CS-C.ARPA
Subject: Duh duh duh
To: fahlman@CMU-CS-C.ARPA, moon%scrc-tenex@MIT-ML.ARPA,
    dlw%scrc-tenex@MIT-ML.ARPA, bsg%scrc-tenex@MIT-ML.ARPA, rpg@SU-AI.ARPA,
    guy.steele@CMU-CS-A.ARPA

When in doubt, read the directions.  It turns out that the :defaults
argument to make-pathname is explicitly documented to provide the
host component only and no others.  This is as in ZetaLISP.  So the
correct shortened example is probably just
	(merge-pathnames (make-pathname :type "BAZ" :defaults x)
			 x)
which is still a bit clumsy.  Maybe the right thing to do is to
provide yet another keyword arg to make-pathname; unfortunately,
"defaults" is the best name for it.  What to do?
--Guy
-------

∂01-Oct-83  1208	RPG   	Decompressing
 ∂29-Sep-83  2223	STEELE@CMU-CS-C.ARPA 	Decompressing
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 29 Sep 83  22:23:08 PDT
Received: ID <STEELE@CMU-CS-C.ARPA>; Fri 30 Sep 83 01:26:00-EDT
Date: Fri 30 Sep 83 01:25:58-EDT
From: STEELE@CMU-CS-C.ARPA
Subject: Decompressing
To: fahlman@CMU-CS-C.ARPA, rpg@SU-AI.ARPA, moon%scrc-tenex@MIT-ML.ARPA,
    dlw%scrc-tenex@MIT-ML.ARPA, bsg%scrc-tenex@MIT-ML.ARPA,
    guy.steele@CMU-CS-A.ARPA

This will sound like an idea dreamed up at 4 in the morning, but it's not
even 1:30 yet.

This is inspired by the :MONMOD feature of ITS DDT.

What is the first thing you type most of the time at a LISP?
That's right, a left parenthesis, because you invoke functions
more often than you look at variables (wild-eyed claim).

SOOO, why shouldn't a LISP implementation prompt with a left parenthesis?
This works even better if <return> acts like a close superbracket:
it's practically like typing at any random monitor's top level.
As for looking at variables, you can do as DDT does and allow the
left-paren prompt to be rubbed out, or you can just require the loser
to type "values a)" after the prompt.
--Quux
-------

∂01-Oct-83  1208	RPG   	Duh duh duh  
 ∂29-Sep-83  2225	FAHLMAN@CMU-CS-C.ARPA 	Duh duh duh 
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 29 Sep 83  22:25:08 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Fri 30 Sep 83 01:27:44-EDT
Date: Fri, 30 Sep 1983  01:27 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   STEELE%TARTAN@CMU-CS-C.ARPA
Cc:   bsg%scrc-tenex@MIT-ML.ARPA, dlw%scrc-tenex@MIT-ML.ARPA,
      guy.steele@CMU-CS-A.ARPA, moon%scrc-tenex@MIT-ML.ARPA, rpg@SU-AI.ARPA
Subject: Duh duh duh
In-reply-to: Msg of Fri 30 Sep 83 01:17:24-EDT from STEELE%TARTAN at CMU-CS-C.ARPA


I like the idea of having :DEFAULTS supply all the slots not otherwise
specified.  That is a very convenient option to have in MAKE-PATHNAME.
If we must choose a different name, I suppose it could be :COPY-FROM or
something, but :DEFAULTS seems best.  Would this incompatible change
screw anyone?  We don't want to make life TOO easy for the Symbolics
hackers ...

-- Scott

∂01-Oct-83  1208	RPG   	Decomposing  
 ∂29-Sep-83  2243	FAHLMAN@CMU-CS-C.ARPA 	Decomposing 
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 29 Sep 83  22:43:38 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Fri 30 Sep 83 01:45:54-EDT
Date: Fri, 30 Sep 1983  01:45 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   STEELE@CMU-CS-C.ARPA
Cc:   bsg%scrc-tenex@MIT-ML.ARPA, dlw%scrc-tenex@MIT-ML.ARPA,
      guy.steele@CMU-CS-A.ARPA, moon%scrc-tenex@MIT-ML.ARPA, rpg@SU-AI.ARPA
Subject: Decomposing
In-reply-to: Msg of Fri 30 Sep 83 01:25:58-EDT from STEELE at CMU-CS-C.ARPA


(-: The first character I type varies, but the second one is usually
rubout.  If we're going to supply something for the user, why not that?
About one rubout per character typed by the user would seem to be the
proper ratio, especially for messages typed by Quuces late at night.

Better yet, let's throw the user right into the debugger -- he'll end up
there sooner or later, and letting him talk to the top-level just builds
up a false sense of security.  To make it more interesting, we can offer
the user about 30 ways to proceed, all of which require him to type keys
that don't exist on any terrestrial keyboard.  The only way to get out
with your files intact are to type PLUGH or to make it through the maze,
evading the pirate.  :-)

-- Scott

∂01-Oct-83  1208	RPG   	Random idea  
 ∂29-Sep-83  2335	FAHLMAN@CMU-CS-C.ARPA 	Random idea 
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 29 Sep 83  23:34:54 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Fri 30 Sep 83 02:37:36-EDT
Date: Fri, 30 Sep 1983  02:37 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   steele@CMU-CS-C.ARPA, rpg@SU-AI.ARPA, moon%SCRC-TENEX@MIT-MC.ARPA,
      bsg%SCRC-TENEX@MIT-MC.ARPA, dlw%SCRC-TENEX@MIT-MC.ARPA
Cc:   fahlman@CMU-CS-C.ARPA
Subject: Random idea


As long as we're tossing off random ideas, here's one from me.  For the
second edition only, of course.

Lately I've found myself missing the LEXPRs and LSUBRs of Maclisp.
&REST has its uses, but it seems to me that returning a LIST of the rest
args is very often not what you want.  Either you actualy cons up this
list, which has significant costs associated with it, or you do stack
allocation with the cdr-coding trick that Symbolics uses, which gives
you a list that evaporates if you try to pass it upward -- at best a
confusing feature and at worst downright treacherous.

What we normally want is a simple way to pass in an unbounded number of
args, a way to find out how many there are, and a way to iterate over
them.  Suppose we add &MORE to the lambda-list syntax with the same
syntax as &REST -- you can only have one or the other.  The variable
following the &MORE keyword is bound to the number of "more" args that
are present -- what would have been the length of the rest list.  Then
just add (MORE-ARG n) as a form to access the specified more-arg.  This
is SETF-able, of course.  Either &MORE or &REST could easily be
implemented in terms of the other, or both could be primitive.  If &MORE
is primitive, you don't have to cons and don't have those volatile
stack-lists.  Implementation is trivial, I think.  In fact, I now think
that it is too bad that &REST-lists got started at all.
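
To make the comparison concrete, a sketch of how the two spellings might
look side by side (&MORE and MORE-ARG are the proposed constructs, not
existing Common Lisp; zero-based indexing is assumed):

	;; today: the extra arguments arrive as a (possibly consed) list
	(defun sum-rest (&rest args)
	  (apply #'+ args))

	;; proposed: N is the count of extra args, (MORE-ARG i) fetches the i'th
	(defun sum-more (&more n)
	  (do ((i 0 (+ i 1))
	       (total 0 (+ total (more-arg i))))
	      ((= i n) total)))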

-- Scott

∂01-Oct-83  1209	RPG   	INIT-FILE-PATHNAME
 ∂30-Sep-83  1439	@MIT-MC:MOON%SCRC-TENEX@MIT-MC 	INIT-FILE-PATHNAME
Received: from MIT-MC by SU-AI with TCP/SMTP; 30 Sep 83  14:34:56 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Fri 30-Sep-83 17:36:34-EDT
Date: Friday, 30 September 1983, 17:35-EDT
From: David A. Moon <MOON%SCRC-TENEX@MIT-MC>
Subject: INIT-FILE-PATHNAME
To: STEELE%TARTAN@CMU-CS-C
Cc: fahlman@CMU-CS-C, dlw%SCRC-TENEX@MIT-MC, rpg@SU-AI, bsg%SCRC-TENEX@MIT-MC
In-reply-to: The message of 29 Sep 83 23:40-EDT from STEELE%TARTAN at CMU-CS-C

    Date: Thu 29 Sep 83 23:40:50-EDT
    From: STEELE%TARTAN@CMU-CS-C.ARPA
    I am confused in going over your messages of four weeks ago.
    You say that in ZetaLISP, fs:init-file-pathname takes a file type argument
    that precedes the host argument.  The blue Chineual says that it takes
    a program-name and a host; no mention of a file type argument.  The Common
    LISP Excelsior manual says the same thing.  Has ZetaLISP changed incompatibly
    on this function's arguments since the blue Chineual?  If so, could you
    provide a few examples of the new usage for inclusion?

It was changed in Release 5, incompatibly, to take a file type argument before
the host argument.  As it turns out, one almost never supplies the host argument
so the incompatibility was not serious.  This change was made so that init files
could be compiled rationally; at the same time we changed the naming conventions
for init files on most hosts, to be the same as the naming conventions for
all other Lisp programs.

	(fs:init-file-pathname "Zork")
		=> SCRC:<MOON>ZORK-INIT
		suitable to pass to LOAD and load the compiled version
		if it is compiled, otherwise the source version

	(fs:init-file-pathname "Zork" :lisp)
		=> SCRC:<MOON>ZORK-INIT.LISP
		suitable to pass to ED to edit the source of the init file

	(fs:init-file-pathname "Zork" si:*default-binary-file-type*)
		=> SCRC:<MOON>ZORK-INIT.BIN
		suitable for a program that asks the user whether to compile
		the init file after editing it

Before Release 5, the init file name would be SCRC:<MOON>ZORK.INIT for all
three of these, and if that was a compiled file one had to consult an oracle
to find the name of the source file it came from.

∂01-Oct-83  1209	RPG   	Random idea: bringing back lexprs
 ∂30-Sep-83  1447	@MIT-MC:MOON%SCRC-TENEX@MIT-MC 	Random idea: bringing back lexprs
Received: from MIT-MC by SU-AI with TCP/SMTP; 30 Sep 83  14:46:45 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Fri 30-Sep-83 17:46:41-EDT
Date: Friday, 30 September 1983, 17:45-EDT
From: David A. Moon <MOON%SCRC-TENEX@MIT-MC>
Subject: Random idea: bringing back lexprs
To: Scott E. Fahlman <Fahlman@CMU-CS-C>
Cc: steele@CMU-CS-C, rpg@SU-AI, bsg%SCRC-TENEX@MIT-MC, dlw%SCRC-TENEX@MIT-MC
In-reply-to: The message of 30 Sep 83 02:37-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

    Date: Fri, 30 Sep 1983  02:37 EDT
    From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
    As long as we're tossing off random ideas, here's one from me.  For the
    second edition only, of course.

    Lately I've found myself missing the LEXPRs and LSUBRs of Maclisp.
    &REST has its uses, but it seems to me that returning a LIST of the rest
    args is very often not what you want.  Either you actually cons up this
    list, which has significant costs associated with it, or you do stack
    allocation with the cdr-coding trick that Symbolics uses, which gives
    you a list that evaporates if you try to pass it upward -- at best a
    confusing feature and at worst downright treacherous.

    What we normally want is a simple way to pass in an unbounded number of
    args, a way to find out how many there are, and a way to iterate over
    them.  Suppose we add &MORE to the lambda-list syntax with the same
    syntax as &REST -- you can only have one or the other.  The variable
    following the &MORE keyword is bound to the number of "more" args that
    are present -- what would have been the length of the rest list.  Then
    just add (MORE-ARG n) as a form to access the specified more-arg.  This
    is SETF-able, of course.  
Not of course.  This has very strong implications for the implementation.
In particular, it requires that APPLY must copy its last argument.  In our
implementation, it will pass that argument straight through to an &REST
argument if the function wants one.

			      Either &MORE or &REST could easily be
    implemented in terms of the other, or both could be primitive.  If &MORE
    is primitive, you don't have to cons and don't have those volatile
    stack-lists.  Implementation is trivial, I think.  In fact, I now think
    that it is too bad that &REST-lists got started at all.

I don't see a list as particularly a bad way of representing a sequence.
It seems more natural than using numerical indices and a magic function.

    What we normally want is a simple way to pass in an unbounded number of
    args, a way to find out how many there are, and a way to iterate over
    them.

Actually, this is debatable.  Many of the functions that I have seen
use their &REST args in true list-like fashion.  For instance, they use
GETF on them, or they pass them as the last argument to APPLY.  It's
only sometimes that all they want to do with them is iterate over them.
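
For concreteness, a couple of made-up examples of the list-like uses I mean:

	(defun make-frob (name &rest options)		;rest args used as a real plist
	  (list name (getf options :color 'black)))

	(defun frob-all (function &rest things)		;rest list handed straight to APPLY
	  (apply function things))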

I am the first to admit that the volatility of rest args is a problem,
and plan to fix it one day.  In our system it would not be acceptable
for rest args that are only used "downward" to be inefficient, so it
can't just be fixed the obvious way (consing them in the heap) which is
why it hasn't been fixed yet.  Some day; there is a mechanism designed
but not implemented.  I don't think I would complain if Common Lisp
required a declaration that the rest arg was only going to be used
downward, and required that in the absence of the declaration they
would work like natural lists, AS LONG AS YOU DIDN'T SIDE-EFFECT them.
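
For concreteness, a sketch of the two styles (the &MORE lines are purely
illustrative; &MORE and MORE-ARG are proposed names, not existing operators):

	(defun add-up (&rest numbers)		;what one writes today; conses a rest list
	  (let ((sum 0))
	    (dolist (n numbers sum)
	      (incf sum n))))

	;; Under the &MORE proposal the list would be replaced by a count and an accessor:
	;; (defun add-up (&more n)
	;;   (let ((sum 0))
	;;     (dotimes (i n sum)
	;;       (incf sum (more-arg i)))))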

∂01-Oct-83  1209	RPG   	Pathnames: duh    
 ∂30-Sep-83  1650	@MIT-MC:MOON%SCRC-TENEX@MIT-MC 	Pathnames: duh    
Received: from MIT-MC by SU-AI with TCP/SMTP; 30 Sep 83  16:49:50 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Fri 30-Sep-83 19:48:35-EDT
Date: Friday, 30 September 1983, 19:47-EDT
From: David A. Moon <MOON%SCRC-TENEX@MIT-MC>
Subject: Pathnames: duh
To: STEELE%TARTAN@CMU-CS-C, Scott E. Fahlman <Fahlman@CMU-CS-C>
Cc: dlw%SCRC-TENEX@MIT-MC, bsg%SCRC-TENEX@MIT-MC, rpg@SU-AI,
    guy.steele@CMU-CS-A
In-reply-to: The message of 30 Sep 83 00:38-EDT from STEELE%TARTAN at CMU-CS-C,
             The message of 30 Sep 83 01:17-EDT from STEELE%TARTAN at CMU-CS-C,
             The message of 30 Sep 83 01:27-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

Re: how to make a pathname that is like another pathname except that
certain specific components are different.

MERGE-PATHNAMES is never the right way to do this, when what you are
trying to do is to set certain components explicitly.  It is the right
thing if you are a user interface.

We do this with message passing, not with MAKE-PATHNAME, so our
FS:MAKE-PATHNAME isn't much of a guide.  In fact you will never see any
Lisp machine code calling FS:MAKE-PATHNAME except internally to the
pathname system itself; pathnames are always made by parsing or by
taking an existing pathname and modifying (a copy of) it with a message
such as :NEW-PATHNAME.

Since Common Lisp got rid of the Lisp machine pathname-defaults object,
the :DEFAULTS argument to our FS:MAKE-PATHNAME doesn't have much
relation to the :DEFAULTS argument to the Common Lisp MAKE-PATHNAME.

Given all this, you might as well get rid of the :DEFAULTS argument to
MAKE-PATHNAME, and instead say that the :HOST argument is required,
since all pathnames must have a host, and if you don't have anything
specific to use for a host you use (PATHNAME-HOST DEFAULT-PATHNAME).
An alternative philosophy would say that the :HOST argument is optional,
but if you don't specify it you have no right to complain about the
"random" host that the function happens to choose.  The main problem with
this is that programs written on non-networking Common Lisp implementations,
where you don't have to think about hosts, would likely be non-portable.

Then if you want MAKE-PATHNAME to also fulfill the function of the
:NEW-PATHNAME message, you could give it an argument that is a pathname
from which to take nonspecified component values.  :DEFAULT would be a
better name than :DEFAULTS for this.  I think it would be a little more
tasteful to use a separate function for this operation, but others
might reasonably disagree.

This letter replaces my previous comment (of a month or two ago) in
which I suggested adding a new keyword to MAKE-PATHNAME, to be a
pathname to be selectively modified, the misguided goal then being to
avoid flushing the :DEFAULTS argument.

If I may digress back to the very original topic of this discussion,
which was Scott asking how he was supposed to write COMPILE-FILE in the
absence of a way to side-effect pathnames, the answer is as follows.
You can't just bash the input file name's type unconditionally, since
COMPILE-FILE takes an optional output-file argument.

(defun compile-file (input-file &key (output-file "") ...)
  (setq input-file (merge-pathnames input-file
				    (alter-pathname *load-pathname-defaults*
						    :type "LISP")))
  (setq output-file (merge-pathnames output-file 
				     (alter-pathname input-file
						     :type "BIN")))
  ...)

where "LISP" and "BIN" are really implementation-dependent and
alter-pathname is whatever emerges from the current discussion.

Incidentally, merge-pathname-defaults is an attempt to avoid the need
for that call to alter-pathname all the time.  As defined in the Lisp
machine currently, it doesn't actually work for that.  If a function
to do "merge-pathnames with the type component of the default altered"
is put into Common Lisp I suggest that merge-pathname-defaults is not
a very good name for it.  Call this a withdrawal of any previous
suggestion I may or may not have made that merge-pathname-defaults
should be included under that name.

∂01-Oct-83  1209	RPG   	Pathnames: duh    
 ∂30-Sep-83  2216	FAHLMAN@CMU-CS-C.ARPA 	Pathnames: duh   
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 30 Sep 83  22:16:27 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Fri 30 Sep 83 21:56:44-EDT
Date: Fri, 30 Sep 1983  21:56 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   David A. Moon <MOON%SCRC-TENEX@MIT-MC.ARPA>
Cc:   bsg%SCRC-TENEX@MIT-MC.ARPA, dlw%SCRC-TENEX@MIT-MC.ARPA,
      guy.steele@CMU-CS-A.ARPA, rpg@SU-AI.ARPA, STEELE%TARTAN@CMU-CS-C.ARPA
Subject: Pathnames: duh
In-reply-to: Msg of 30 Sep 1983 19:47-EDT from David A. Moon <MOON%SCRC-TENEX at MIT-MC>


If I understand Moon's message correctly, he would be happy with a
MAKE-PATHNAME function that takes no default (or defaults) argument,
but just the names of various fields to be filled in: :HOST, :TYPE,
:VERSION, etc.  :HOST is mandatory.  In addition to this, there would be
a function named ALTER-PATHNAME that takes an original pathname and any
number of field arguments.  A copy is made of the original, but with the
specified fields over-riding those in the original.
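
To make the intended behavior concrete (illustrative only; ALTER-PATHNAME is
the proposed name and the pathname shown is made up):

	(alter-pathname (pathname "SCRC:<MOON>FOO.LISP") :type "BIN")
		=> a new pathname equivalent to SCRC:<MOON>FOO.BIN
		(the original pathname object is left untouched)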

MERGE-PATHNAME-DEFAULTS would be flushed.  I personally would find it
much less confusing to completely separate the idea of merging from the
idea of altering, and not try to roll them into a single function.  It's
a little verbose that way, but how many times a day do you write
filename code anyway?

All of that looks good to me.  If people would rather go with a :DEFAULT
argument to MAKE-PATHNAME (which if supplied would make it do what
ALTER-PATHNAME does in the description above) I would go along with
that, but I agree with Moon that it is clearer to have separate
functions for these two distinct uses.  Does anyone object to
ALTER-PATHNAME as a separate function?  (That name makes it sound like a
destructive modification, but I can't think of anything better.)

On a related matter, is there any good reason we cannot allow null HOST
slots in pathnames, but require each implementation to somehow provide a
default host if none is supplied?  On most time-sharing systems, this would
simply be "this machine".  On personal machine networks, it might
mean "the central file server".  On truly distributed file systems, in
which neither "this machine" nor "the central file server" is right, it
would be handled in some locally tasteful way -- maybe ask the user or
have him give a default in his init file.  That all seems more intuitive
to me than requiring a pathname always to have a host because the host
governs its behavior.

-- Scott

∂01-Oct-83  1210	RPG   	Random idea: bringing back lexprs
 ∂30-Sep-83  2224	FAHLMAN@CMU-CS-C.ARPA 	Random idea: bringing back lexprs    
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 30 Sep 83  22:22:59 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Fri 30 Sep 83 22:37:40-EDT
Date: Fri, 30 Sep 1983  22:37 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   David A. Moon <MOON%SCRC-TENEX@MIT-MC.ARPA>
Cc:   bsg%SCRC-TENEX@MIT-MC.ARPA, dlw%SCRC-TENEX@MIT-MC.ARPA, rpg@SU-AI.ARPA,
      steele@CMU-CS-C.ARPA
Subject: Random idea: bringing back lexprs
In-reply-to: Msg of 30 Sep 1983 17:45-EDT from David A. Moon <MOON%SCRC-TENEX at MIT-MC>


The SETF-ability of &MORE args is certainly flushable if it causes real
trouble.  SETF would seldom be useful here.  Your other objections to
&MORE seem kind of weak.

You say that a list is a more natural way to represent a sequence than
an index and accessor.  I agree.  But the index scheme is more tasteful
than either a pseudo-list or a list that is consed up when it doesn't
have to be.  Your system goes to great lengths and tolerates
considerable grunginess to avoid this bit of consing, so you must agree
on the latter point.

You say that many of the functions you've seen really want to use their
&rest args in true list-like fashion.  That's a good argument for
keeping &rest, but not a good argument for using it exclusively.  I
haven't gone back and counted, but I think that if &MORE were around,
our system code would use it 80% of the time and &REST only about 20%.
There are an awful lot of things like + around that just want to iterate
over their arguments.

Finally, you say that one of these years you will fix up your system to
make the pseudo-lists behave like real lists.  This is a good move, but
sounds like a lot of work for no reason.  Wouldn't it be nice to skip
all this new hair, along with the hairy cdr-coding on the stack, and
just cons up a righteous list when the user asks for one and give him an
alternative if he doesn't want to cons?  And besides, while this is a
possible solution for you, it doesn't do much for other implementations
that don't cdr-code.  They are stuck with consing a list every time.  (I
don't want to argue about whether cdr-coding is a good thing -- we want
this language to be easily portable, and cdr coding is tough on some
machines.)

I had hoped that everyone else would jump in to tell you that you are
just being stubborn... oops, that ploy has been used.

-- Scott

∂01-Oct-83  1210	RPG   	Pathnames: duh    
 ∂30-Sep-83  2242	@MIT-MC:MOON%SCRC-TENEX@MIT-MC 	Pathnames: duh    
Received: from MIT-MC by SU-AI with TCP/SMTP; 30 Sep 83  22:41:52 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Sat 1-Oct-83 01:43:37-EDT
Date: Saturday, 1 October 1983, 01:42-EDT
From: David A. Moon <MOON%SCRC-TENEX@MIT-MC>
Subject: Pathnames: duh
To: Scott E. Fahlman <Fahlman@CMU-CS-C>
Cc: bsg%SCRC-TENEX@MIT-MC, dlw%SCRC-TENEX@MIT-MC, guy.steele@CMU-CS-A,
    rpg@SU-AI, STEELE%TARTAN@CMU-CS-C, Moon%SCRC-TENEX@MIT-MC
In-reply-to: The message of 30 Sep 83 21:56-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

    Date: Fri, 30 Sep 1983  21:56 EDT
    From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
    If I understand Moon's message correctly....
You do.  All the alternatives in your first three paragraphs are okay with me.

    On a related matter, is there any good reason we cannot allow null HOST
    slots in pathnames, but require each implementation to somehow provide a
    default host if none is supplied?  
I'm not sure what this really means.  Do you expect PATHNAME-HOST to be able
to return NIL, or does this simply mean that if no :HOST is specified to
MAKE-PATHNAME, it chooses a default host in some way that is appropriate
to the implementation?  A related question is do you expect (PARSE-PATHNAME "")
to return something that is less specific about what host it is for
than what (PARSE-PATHNAME "<FAHLMAN.HACKS>FOO.LSP.105") returns?  How
about (PARSE-PATHNAME "RUMPLESTILTSKIN"), about the most complex string
that doesn't need a host because it doesn't contain any delimiters?  [Wait
a minute: what about hosts that store pathnames in lower case, allow a maximum
of six characters in a file name, or use the letter "L" as a delimiter
between name and type fields?]
Do you mean to refer to PARSE-PATHNAME (the fundamental building block
of pathname user interfaces) or to MAKE-PATHNAME (the fundamental
primitive for programs that make pathnames) or to both?

				       On most time-sharing systems, this would
    simply be "this machine".  On personal machine networks, it might
    mean "the central file server".  
There are about ten central file servers that I use more than once a
week.  There are approximately 800 (last time I counted) file servers that
I could use if I so chose; probably I have used 30 or 40 of them in my
life.  So I don't think your "personal machine networks" case applies to
any personal machine networks I know about.

				     On truly distributed file systems, in
    which neither "this machine" nor "the central file server" is right, it
    would be handled in some locally tasteful way -- maybe ask the user or
    have him give a default in his init file.
I guess this is what we do.
In the Lisp machine's own pathname system, when parsing a pathname or
making a pathname (either of them) with no other specification of a host,
(send (fs:default-pathname fs:*default-pathname-defaults*) :host) is used,
which is the user's declared home host, or a host explicitly specified
at login time.  There is no particular reason to believe that all of a
user's files are on that host, but it is a less random default than any
other.  (fs:default-pathname fs:*default-pathname-defaults*) is set to
FOO.LISP on the user's home directory at login time, and not normally
changed.  This appears to be the same thing as *DEFAULT-PATHNAME-DEFAULTS*
in Common Lisp (excelsior).

    That all seems more intuitive
    to me than requiring a pathname always to have a host because the host
    governs its behavior.
I don't understand how this proposal differs from requiring a pathname
always to have a host.  Possibly the answer is that the pathname specification
already does what you want, actually?

∂01-Oct-83  1210	RPG   	Random idea: bringing back lexprs
 ∂30-Sep-83  2251	@MIT-XX:MOON@SCRC-TENEX 	Random idea: bringing back lexprs  
Received: from MIT-XX by SU-AI with TCP/SMTP; 30 Sep 83  22:51:02 PDT
Received: from SCRC-EUPHRATES by SCRC-SPANIEL with CHAOS; Sat 1-Oct-83 01:51:39-EDT
Date: Saturday, 1 October 1983, 01:51-EDT
From: David A. Moon <MOON at SCRC>
Subject: Random idea: bringing back lexprs
To: Scott E. Fahlman <Fahlman at CMU-CS-C>
Cc: bsg at SCRC, dlw at SCRC, rpg at SU-AI, steele at CMU-CS-C
In-reply-to: The message of 30 Sep 83 22:37-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

    Date: Fri, 30 Sep 1983  22:37 EDT
    From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
    The SETF-ability of &MORE args is certainly flushable if it causes real
    trouble.  SETF would seldom be useful here.  Your other objections to
    &MORE seem kind of weak.

    You say that a list is a more natural way to represent a sequence than
    an index and accessor.  I agree.  But the index scheme is more tasteful
    than either a pseudo-list or a list that is consed up when it doesn't
    have to be.  Your system goes to great lengths and tolerates
    considerable grunginess to avoid this bit of consing, so you must agree
    on the latter point.
It's not so clear the index scheme is more tasteful than a pseudo-list.
Only a tiny fraction of the functions that take &REST arguments need to
worry about the dynamic extent of the list.  The main problem comes in
explaining it to new users, who always barf (quite legitimately).

    You say that many of the functions you've seen really want to use their
    &rest args in true list-like fashion.  That's a good argument for
    keeping &rest, but not a good argument for using it exclusively.  I
    haven't gone back and counted, but I think that if &MORE were around,
    our system code would use it 80% of the time and &REST only about 20%.
    There are an awful lot of things like + around that just want to iterate
    over their arguments.
Having both is probably the right thing (even though it would slow down
our interpreter slightly; but it's already cretinously slow, so who cares).
Let's let one more flower bloom.

    Finally, you say that one of these years you will fix up your system to
    make the pseudo-lists behave like real lists.  This is a good move, but
    sounds like a lot of work for no reason.  Wouldn't it be nice to skip
    all this new hair, along with the hairy cdr-coding on the stack, 
The cdr-coding on the stack isn't hairy at all.
								     and
    just cons up a righteous list when the user asks for one and give him an
    alternative if he doesn't want to cons?  
Your proposed alternative is no alternative for the functions that want to
see a list and don't want to cons, which I maintain is the majority.

					     And besides, while this is a
    possible solution for you, it doesn't do much for other implementations
    that don't cdr-code.  They are stuck with consing a list every time.
I see no reason why a non-cdr-coded Lisp would be unable to allocate a
list in a stack.  It simply requires that the list take up twice as much
storage, a price they are already willing to pay when allocating in the
heap, where the extra storage use costs you much more than it does in
the stack.

∂01-Oct-83  1206	RPG   	Pathnames: duh    
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 30 Sep 83  23:22:09 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Sat 1 Oct 83 02:25:22-EDT
Date: Sat, 1 Oct 1983  02:25 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   David A. Moon <MOON%SCRC-TENEX@MIT-MC.ARPA>
Cc:   bsg%SCRC-TENEX@MIT-MC.ARPA, dlw%SCRC-TENEX@MIT-MC.ARPA,
      guy.steele@CMU-CS-A.ARPA, rpg@SU-AI.ARPA, STEELE%TARTAN@CMU-CS-C.ARPA
Subject: Pathnames: duh
In-reply-to: Msg of 1 Oct 1983 01:42-EDT from David A. Moon <MOON%SCRC-TENEX at MIT-MC>


Yes, I was proposing that PATHNAME-HOST be allowed to return NIL.  For
certain low-level things this might make some sense, especially in an
environment where you are accessing the same host 95% of the time.
(Moon's routine use of many hosts is unusual right now, though it will
probably be the norm pretty soon.)  My problem was that I lost sight of
the fact that essentially all interesting pathnames come from
PARSE-PATHNAME and not from an explicit call to MAKE-PATHNAME or
something like that.  And, as Moon has pointed out, PARSE-PATHNAME has
to make assumptions about the host, even in the simplest cases.  So
(parse-pathname "<fahlman>foo.lisp") has to guess what host to use at
the time the pathname is created, and it may as well record that guess
in the resulting pathname rather than pretending to be uncommitted as to
host.  Actually, only the type of host has to be guessed, but we've been
around that issue before, and separating the concepts of host and
protocol seems to buy us nothing useful.

Anyway, I now see why a null host is not a useful concept.  Thanks for
bringing this into better focus.

-- Scott

∂01-Oct-83  1500	RPG   	My comments on the Excelsior manual   
 ∂30-Aug-83  1559	@MIT-MC:Moon%SCRC-TENEX%MIT-MC@SU-DSN 	My comments on the Excelsior manual 
Received: from MIT-MC by SU-AI with TCP/SMTP; 30 Aug 83  15:40:15 PDT
Received: from SCRC-QUABBIN by SCRC-TENEX with CHAOS; Tue 30-Aug-83 18:37:20-EDT
Date: Tuesday, 30 August 1983, 18:31-EDT
From: David A. Moon <Moon%SCRC-TENEX%MIT-MC@SU-DSN>
Subject: My comments on the Excelsior manual
To: Fahlman%CMU-CS-C@SU-DSN, Steele%CMU-CS-C@SU-DSN, RPG@SU-AI,
    Moon%SCRC-TENEX%MIT-MC@SU-DSN, DLW%SCRC-TENEX%MIT-MC@SU-DSN,
    BSG%SCRC-TENEX%MIT-MC@SU-DSN
File-References: SCRC:<MOON>COMMON-LISP.EXCELSIOR

Substantive technical comments on Excelsior edition of Common Lisp manual.
There are also a lot of typographical and clarity-of-explanation
comments; I will send a marked-up hardcopy by US mail.

p. 25: integer and ratio are not an exhaustive partition of rational?
fixnum and bignum are not an exhaustive partition of integer?  Yow!
Such possibilities for language extension...

p. 26 (second to last paragraph): it says that an implementation may not
unilaterally add new subtypes to common.  But since common includes all
types created by defstruct, anyone using defstruct is adding new subtypes
to common.  I think this is just a wording problem, probably this has
to do with the word "exhaustive union" and you're trying to say that only
the Common Lisp committee can add new subtypes to common that are not
subtypes of the existing list of subtypes.  But I'm not sure, which is
why this is included here.

p. 40: there is a typo in which the arguments to COERCE are given in the
wrong order, probably because you were reasoning by analogy from concatenate,
which appears in the same sentence.  Since THE, MAP, and MERGE also put the
type first, I suggest that COERCE is broken and should be changed to put the
type first.  However, this was rejected the last time I suggested it.  Since
the manual is inconsistent, I suggest that it is only a clarification to change
the order of arguments now.

p. 43 (third paragraph, parenthesized sentence): Does this mean that no
extensions to the evaluator are allowed, or that they have to use data
types that are not a subtype of common?  This depends on what "Common Lisp
data object" means exactly.

p. 48 (first sentence): &key keywords should not be required to be in the
keyword package, only encouraged to be.  There are occasional reasons to
have private-packaged keyword-argument names.  This applies to the second
paragraph on p.49 also.

p. 63 (third paragraph): This is the first of several typos that think
that EQUAL compares all arrays element by element (it used to, but now it
only compares strings and bit vectors).  I suggest the introduction of
a new function EQUALC ("equal components") that compares numbers and
characters the way EQUAL does, but compares arrays the way EQUALP does
(of course, when comparing strings the individual characters are compared
the way EQUAL does, case-dependently).  Another typo is on p.195 (copy-seq).
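
To illustrate the proposed behavior (EQUALC does not exist; these are made-up
examples using the current definition of EQUAL):

	(equal  #(1 2 3) #(1 2 3))	=> NIL	;EQUAL no longer descends general arrays
	(equalc #(1 2 3) #(1 2 3))	=> T	;proposed: element by element, like EQUALP
	(equalc "Foo" "FOO")		=> NIL	;but string elements compared case-dependently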

p. 69: It would be difficult for us to enforce the restriction that funcalling
the result of SYMBOL-FUNCTION of a special form (not a macro) WILL signal
an error.  I suggest that it IS an error.

p. 87: Declarations are not allowed at the beginning of the body of a COMPILER-LET,
I think.  Certainly declarations of the variables being bound -at compile time-
would not be meaningful.  It is probably best to require use of LOCALLY to
put declarations here.  COMPILER-LET is not in the table on p.117.

p. 103: In setting MULTIPLE-VALUES-LIMIT, what should I do about the fact
that on the LM-2 MULTIPLE-VALUE-PROG1 has a different limit on the number
of values than does everything else.  Should I set MULTIPLE-VALUES-LIMIT
to the higher limit, and say that MULTIPLE-VALUE-PROG1 has a bug (hopefully
it signals an error if you exceed its limit), should I set MULTIPLE-VALUES-LIMIT
to the lower of the two limits, or should I make MULTIPLE-VALUE-PROG1 accept
more values (and hence be slower and more prone to stack-frame-overflow
errors).  None of this applies to the 3600; no problems with multiple values
there.

p. 112: Is it a requirement that redefining a special form globally with
a macro must work?  Or is this just an example?  I don't think it works
now in our implementation.

p. 130: Is GENSYM required to return G7 rather than G0007, or is that
just an example?  In either case there should be a rationale or
compatibility note.  But there is also the technical issue of which it is.

p. 130: keywordp should be nil for all non-symbols, not an error.  I.e.
keywordp should be a data type predicate, not a symbol operation.

p. 141: the new-nicknames argument to rename-package should be &rest,
not &optional, for uniformity.

p. 168: shouldn't the third value of decode-float and integer-decode-float,
the sign, be an integer (1 or -1) rather than a float?  Same for float-sign
when given only one argument.  Maybe there's a reason for making this be
a float, that I don't see and that isn't set forth in the manual.

p. 175: I feel that BYTE-SPECIFIER ought to be a data type for declaration
purposes (but an implementation is allowed not to support it as a data type
for discrimination purposes).  Say that BYTE-SPECIFIER may or may not be
a subtype of NUMBER, depending on the implementation.

p. 189: I don't like MAKE-CHAR, since its arguments are not consistent
with the other MAKE-xxx functions, and the function seems quite redundant
with CODE-CHAR.  I suggest making CODE-CHAR accept characters as well
as integers as its first argument, or else making CHAR-BITS and CHAR-FONT
SETF'able.

p. 191: In an implementation that doesn't have "super bits", is
(char-bit char :super) always false or is it an error?  This may just be
a matter of clarity of explanation.

p. 199 (replace): What happens if sequence1 and sequence2 are not the same
object, but share storage because they are lists with shared substructure
or because one of them is a displaced array?  I propose that the result
be undefined, since it is expensive to check for and not a very useful
case to support.

p. 213: PUSHNEW takes the same keyword arguments as ADJOIN (probably this
is only a typographical error that no keyword arguments are listed).

p. 216 (SUBLIS): are the cars of the alist elements required to be symbols,
as in Maclisp and implied by the first sentence of SUBLIS's description,
or are any objects acceptable (i.e. SUBLIS just acts as if it calls ASSOC with
its first argument and its keywords)?  I prefer the latter.
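
Under the latter reading, e.g. (illustrative):

	(sublis '((1 . one) ((a b) . x)) '(1 (a b) 2) :test #'equal)
		=> (ONE X 2)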

p. 229: What happens when array B is displaced to array A and their element
types are not the same?  Okay for this to be an error, but not okay to signal
an error (we want it as a language extension).

p. 230: What happens when array-total-size-limit is a function of the element
type?  Should this constant be set to the minimum over all element types?

p. 233: Is this feature that a third argument of t to the bit functions means
to use the first argument as the third argument really a win?  Maybe one should
simply pass the first argument twice in this case.  I don't feel very strongly
about this, but it seems like a kludge.

p. 241 (STRING): It doesn't mention using STRING to coerce a character to
a 1-element string.  My vague memory is that this was an accepted change to
the language (at the time character objects were made mandatory and the
ability for an implementation to use integers instead of characters was
removed).

p. 248: The "named structure" stuff in defstruct is confused.  The term
"named" is being used to mean two different things.  There are two kinds
of defstructs: One is a subtype of STRUCTURE; TYPEP and TYPE-OF know how
to find the type name symbol for any object that is a structure.  In some
implementations a structure is a vector with its type name in element 0,
and a magic "I am a structure" bit set.  In other implementations, STRUCTURE
is not a subtype of any other COMMON type (e.g. in NIL STRUCTURE is
a subtype of EXTEND).  When making this type of defstruct, the user does
not and cannot know exactly what type defstruct will make, since it is
implementation-dependent.  The other kind of defstruct is one where the user
has requested a specific data type, such as VECTOR or LIST.  In this case
TYPEP and TYPE-OF cannot be guaranteed to work (and in all implementations
I know of they won't work).  In this second kind of defstruct, "named" just
means that defstruct automatically allocates a structure slot containing
the name symbol.
I suggest that (:type structure) mean the same as not specifying :type,
namely an object for which TYPE-OF and TYPEP "work".  :unnamed is illegal
in connection with this, and :named is redundant.  I suggest then that
structure be made a legal type specifier again, for consistency with this.
It means only "defstructs of the first kind."
For the non-structure :types, :unnamed is redundant.
Hence I suggest that :unnamed be flushed.
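
For concreteness (made-up structures):

	(defstruct ship x y)				;first kind: TYPEP and TYPE-OF work on it
	(defstruct (point (:type list) :named) x y)	;second kind: really just a list
	(make-point :x 1 :y 2)	=> (POINT 1 2)		;the name symbol occupies the first element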

p. 249: "Moreover, astronaut will have its own access functions for
components defined by the person structure."  Did we really agree to
this?  I don't recall ever hearing of this before.  It is incompatible
with previous usage, and seems useless to me.

p. 250: I suspect that the :eval-when option to defstruct is unnecessary
and should be removed.  I think it was put in to get around a bug in the
Lisp machine compiler that was fixed years ago.  I could be misremembering.

p. 254: I suspect that *EVAL is called only by "FEXPRs" and should be deleted
from the white pages.  It exists internally in the evaluator, of course.

p. 257 (second and fourth paragraphs, and parentheses in the first paragraph):
We agreed to change the rules for * to be consistent with what was printed,
rather than aligned with +, but the manual wasn't updated.

p. 263: are INPUT-STREAM-P and OUTPUT-STREAM-P type predicates like streamp,
or is it an error to call them on objects that aren't streams?  This may
just be a textual clarity issue.  INPUT-STREAM and OUTPUT-STREAM are not
in Table 4-1.

p. 264 (last sentence): The file is only deleted if it was newly-created.
Not if appending or overwriting an existing file.

p. 279: When #+/#- skips a form, certain read errors should be suppressed.
This is necessary in order to use #+ to conditionalize code that runs in
multiple implementations, or multiple environments with different packages
present.  The things to be suppressed include forms after #. and #, ,
floating-point exponent range errors, qualified name errors (no such package,
no such external symbol), #n=.  (#+lispm #1= <a> #+spice #1= <b> <c> #1#)
should work and not complain that the "tag" 1 is used twice, except if
the "lispm" and "spice" features are both true.

p. 281: What does the from-readtable argument to COPY-READTABLE default to?
The manual directly contradicts itself.  I suggest that it default to the
value of *READTABLE*, but that NIL mean "the standard Common Lisp readtable,
i.e. the value of *READTABLE* before any side-effects on the variable or
on the readtable were perpetrated."

p. 284 (last paragraph): When the printer decides whether a symbol's name
must be slashified because it looks like a number, is this decision dependent
on the value of *READ-BASE* or the value of *BASE*.  In other words, what
controls whether the symbol FF prints as FF or \FF?  I suggest *BASE*.

p. 287: What's the initial value of *PRINT-PRETTY* ?  I suggest NIL.

pp. 290, 296: "Ascii streams" should be called "Character streams", since
the character code in use is not necessarily Ascii.

pp. 290, 296: I'd still like to flush the feature that T and NIL as output
streams have special meanings.  This is a holdover from Maclisp.  The main
problem with this feature is that FORMAT uses T and NIL to mean something
incompatible with this.

p. 290: Recursive reads need control over eof handling.  Having the eof-errorp
of the top-level call to read control what happens isn't good enough.  For
example, the semicolon reader macro has to read until a Return character or
EOF; it should not be an error for a comment to end at end-of-file without
a carriage return (it is all too easy for a user to forget to put in the
carriage return).  I suggest that in recursive reads, eof-errorp = nil means
return eof-value from the inner call to read regardless of the top-level read's
arguments, and eof-errorp = t means look at the top-level read's arguments,
and either signal an error or throw back to the top-level read and return
its eof-value, but in neither case return from the inner read.  Then fix
all examples that say (READ stream NIL NIL T) to be (READ stream T NIL T).

p. 292: READ-DELIMITED-LIST should allow comments as well as whitespace
between the last object and the delimiter.

p. 293 (READ-LINE): What happens if end-of-file terminates an empty line
(the second to last sentence says that when end-of-file terminates
a non-empty line, the line and T are returned)?  Should "" and T be
returned, or should the function take eof-errorp and eof-value arguments?
Zetalisp's READLINE function does the latter.

p. 294 (READ-FROM-STRING): Maybe the two optional arguments should be
keywords, to make things more uniform.  I suggest that :EOF-ERROR
should default to true unless :EOF-VALUE is specified and :EOF-ERROR
is not specified.  This makes one wonder whether there should be two
keywords or one.

p. 297 (WRITE-CHAR):  Do we really want this to return NIL?  All the
other writers return what they wrote (actually, their first argument,
which is not necessarily exactly what they wrote).

p. 298 (first paragraph): Regardless of what characters TERPRI on a
stream outputs to the physical device, it must be required that
(WRITE-CHAR #\RETURN stream) writes exactly the same characters.  Or
is this not true, in which case you better say so very explicitly.

p. 298 (last line): I assume it's a typo that WRITE-BINARY-OBJECT
can only write arrays of integers, not arrays of all numbers, since
it can write all kinds of numbers as scalars.

p. 299: ~F appears to require negative prefix parameters, which previous
format operators didn't require.  Say in the fourth paragraph on page 299
that minus signs may be used in prefix parameters.

p. 303: I think that if the third prefix parameter to ~E is omitted,
the exponent should use as many digit positions as required, rather than
using exactly 2.  This would make e be treated the same as w and d.

p. 305: I suggest the following exception handling in ~$:
If the arg is too small, it is taken to be zero.
If the arg is too large, it is printed with an exponent.  If w is not
specified, too large means something like 40 or 64 or 100 digits; this
could be left to the implementation.
If the arg is rational, it is first coerced to a single-float (see below).
If the arg is not a number, or complex, it is printed in ~wD format.

pp. 302-305: I suggest that when a rational is printed in ~F, ~E, ~G,
or ~$ format, an implementation be permitted either to coerce it to
single-float or to do something that retains more precision and doesn't
risk exponent overflow, at its discretion.  "Something" could be
coerce to long-float or could be format it "exactly", except that if
the number of digits is not specified and a ratio that does not have
an exact decimal representation (e.g. 1/3) is specified, a finite
number of digits must be printed.

p. 306: ~G should be replaced by ~@*.

p. 307: The example of ~? is not consistent with the text.  Possibly this
is a typo and the format string should have started ``"~1{~?~}~%~V...'',
but I'm not sure.  If so, be sure to point out that the ~? is expanded
outside of the iteration caused by the braces, and before the braces
pick up their argument (the list to iterate over).  This is the same
as use of ~V inside of braces.

p. 311: Does ~↑ with three prefix parameters do inclusive (<=) or
exclusive (<) comparison?  Ugh, bletch!

p. 312: I think we agreed to change the arguments to Y-OR-N-P and
YES-OR-NO-P to be ``&optional format-string &rest format-args''.  This
is compatible in the usual case where one argument is supplied and it
doesn't contain any tildes, and is much more useful than specifying a
stream; one never wants to change the stream locally, rather than
globally by binding *QUERY-IO*.
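
Under that convention a typical call (made-up question) would be:

	(y-or-n-p "Really delete ~A? " file)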

p. 317: Some file systems lack devices, types, or versions, hence you
need a value for these components in pathnames for those file systems.
The Lisp machine uses the keyword :UNSPECIFIC for this.  This is different
from NIL, which means that the component was not specified by the pathname
and can be supplied by merging.  Most file systems either never allow
a given component to be :UNSPECIFIC, or require it always to be :UNSPECIFIC,
but there are some where this is not the case.  ITS, for example, can
have either a type or a version, but not both.  When it has a version, the
type is :UNSPECIFIC.  When it has a type, the version is :UNSPECIFIC.
When it has neither, both components are NIL.  It may be possible to
get rid of :UNSPECIFIC and use NIL both to mean "this component was
not specified" and to mean "this component cannot be specified", but
this would require more complexity in the merging process.

p. 318: TRUENAME of a stream should be the truename of the file actually
open.  This cannot be done by first converting the stream to a pathname
then taking the truename of that.  File system operations performed after
the stream was opened might have changed the mapping of the stream's pathname
into a truename, for instance if the version of the stream's pathname
is :NEWEST and a newer file was created.

p. 319: If TRUENAME returns NIL if there is no such file, it is identical
to PROBE-FILE.  If it quietly returns its argument, it is a liar.  Probably
best to error.  Then PROBE-FILE is TRUENAME except that it returns NIL for
a file-not-found error (but not for other errors such as directory-not-found,
file name illegal, or foreign host not responding).

p. 319: Flush the stuff about "conventions" in parse-namestring.  parse-namestring
should take a junk-allowed argument that defaults to nil, like parse-integer.
Shouldn't start and end be keywords, like for most other functions that take
such arguments?  By analogy with parse-integer, all of the arguments to
parse-namestring except the first should be keywords.
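
For comparison, the PARSE-INTEGER convention being appealed to:

	(parse-integer "123abc" :junk-allowed t)
		=> 123 and 3		;value parsed, and index where parsing stopped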

p. 320 (pathname-plist): I don't think it is wise to put property lists on
pathnames in Common Lisp.  Especially when you don't say anything about
whether or not pathnames are "interned", i.e. does parse-namestring called
twice with the same arguments return two pathnames that share the same property
list, or two distinct pathnames, or is this implementation-dependent.  Better
not to include any operations that perform side-effects on pathnames in Common
Lisp, so that only EQ can tell whether they are "interned."  This means you
have to define what EQUAL means for pathnames.  I see that page 63 says that
EQUAL compares pathnames by components, which is good, but this should be
mentioned in the pathname chapter also.  EQL on pathnames should be the same
as EQ, and hence not useful, and this should be mentioned in the pathnames
chapter.

p. 321: INIT-FILE-PATHNAME should take an optional argument that is the file
type.  Whether this argument affects the result depends on the host (not
on the Common Lisp implementation!).  To be consistent with FS:INIT-FILE-PATHNAME
in Zetalisp, the type argument should come before the host argument.

p. 322 (second paragraph): Since all pathnames include a host, merging cannot
be responsible for putting in the default device.  Pathname parsing must
do this; parsing a string that specifies a host puts in a device component,
which is the default file device unless the string specifies an explicit
device.  Also the description of merging, and much of the rest of the pathname
chapter, doesn't know that MERGE-PATHNAME-DEFAULTS was flushed (over my mild
protests that both MERGE-PATHNAMES and MERGE-PATHNAME-DEFAULTS are useful).

pp. 322-324: Logical pathnames are useless without a standardized syntax.  The
logical pathname system here is based on an obsolete specification of logical
pathnames in the Lisp machine, hence is not attractive to us.  I suggest that
logical pathnames not be included in Common Lisp this time around; they can
be standardized later when they are better understood.

p. 325 (:element-type standard-char): What happens if the input file in fact
contains a non-character?  Is this a case of "is an error" or "signals an
error"?  What happens if the user calls WRITE-CHAR with a non-standard character?
What about READ?  Are any strings whose printed representation is read from a
stream with :element-type standard-char guaranteed to contain only standard chars?
If READ-CHAR and WRITE-CHAR are required to check and signal an error, does this
requirement extend to :element-type string-char as well?  How about READ-BYTE
and WRITE-BYTE checking that the bytes fit in the byte size declared by the
element-type?  The best thing for right now is probably to leave the specification
loose and let implementations decide how much error checking they want to have.

p. 326: :if-exists :rename and :if-exists :rename-and-delete, like :if-exists
:supersede, should be encouraged not to affect the existing file until the
stream is closed, and not to affect it at all if the stream is closed in
abort mode.  Should this be required rather than encouraged?  Encouraged is
probably better since file systems vary so widely in their capabilities.
Explain what :if-exists :supersede means more precisely.  Is it permissible
for this to mean the same as :rename, or :rename-and-delete?  Note that
:supersede rather than :error is the default when the file system does not
have versions, since the pathname component can't be :newest in this case.

p. 327: clarify that in :if-does-not-exist :create mode, "proceed as if
it had already existed" does not include any processing directed by the
:if-exists argument.  Someone here was confused by this.

p. 328 (rename-file): I think we agreed that values saying what was done,
rather than t, should be returned.  The Zetalisp RENAMEF function returns
three values, which are the second argument (new-name) after merging with
the first argument so that renaming leaves unspecified components unchanged,
the truename before renaming, and the truename after renaming.
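
For example (made-up file names, following the Zetalisp behavior described above):

	(rename-file "SCRC:<MOON>FOO.LISP" "BAR")
		=> SCRC:<MOON>BAR.LISP		;second argument merged with the first
		   SCRC:<MOON>FOO.LISP.3	;truename before renaming
		   SCRC:<MOON>BAR.LISP.3	;truename after renaming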

p. 328 (rename-file and delete-file): if it is an error to specify a pathname
containing a :wild component, what about nil components?  Do they default
by merging with some default defaults, or are they an error?

p. 328 (file-creation-date): People here find the word "creation-date"
(used by Zetalisp) very ambiguous and confusing.  I suggest that this
function be named file-write-date or file-written-date (depending on our
grammatical preferences).

p. 328 (file-position): This can't work easily when there is character
set translation (including translation of the Return character to a
VAX/VMS or OS/360 record boundary).  Should file-position be defined to
return NIL when the element-type of the stream is one that requires such
translation, or should it be required to return the equivalent number
of READ-CHAR/WRITE-CHAR operations, or should it be required to return
a number in the units in which the file is actually read or written?
This is a problem for file-length, too.
file-position with two arguments should use :start and :end rather
than nil and t (or do I mean rather than t and nil?  That's the point.)
Bi-directional streams should have separate read and write positions,
shouldn't they?

p. 329: I don't approve of making the filename argument to LOAD
optional.  Am I overruled by a consensus of the committee?  I don't
recall this ever being discussed.

p. 335: The string subform of CHECK-TYPE is evaluated.

p. 335: The syntax for ASSERT hasn't been updated to the new syntax
we agreed on, which flushes the kludgey use of string as a delimiter.
string is evaluated now.

p. 347: Shouldn't the symbols on *FEATURES* be keywords?  If even
implementation-specific elements of *FEATURES* go in the LISP package,
they can cause accidental sharing.  In general implementations have
to be careful about adding their own symbols to the LISP package,
since this could make some programs become unportable (until they
do a SHADOW).
Note that one does not write a colon in the #+ (or #-) syntax; all
symbols in that syntax are assumed to be keywords.

∂01-Oct-83  1501	RPG   	Moon's comments on Excelsior
 ∂30-Aug-83  2154	FAHLMAN@CMU-CS-C.ARPA 	Moon's comments on Excelsior    
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 30 Aug 83  21:54:00 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Wed 31 Aug 83 00:55:06-EDT
Date: Wed, 31 Aug 1983  00:55 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   steele@CMU-CS-C.ARPA
Cc:   moon%SCRC-TENEX@MIT-MC.ARPA, dlw%SCRC-TENEX@MIT-MC.ARPA,
      bsg%SCRC-TENEX@MIT-MC.ARPA, rpg@SU-AI.ARPA, fahlman@CMU-CS-C.ARPA
Subject: Moon's comments on Excelsior
In-reply-to: Msg of 30 Aug 1983 18:31-EDT from David A. Moon <Moon%SCRC-TENEX at MIT-MC>


Guy,

A number of the issues raised by Moon need to be ruled on explicitly,
one way or the other, since they make a difference in actual code that
is very close to being wrapped up.  I don't think that many of Moon's
points are controversial, in the sense that anyone will care a lot about
which way things go, but having SOME decision is critical.  What may be
controversial is the question of whether we are willing to make some of
Moon's non-essential improvements at this point.  Someone needs to
decide how frozen the manual is on a number of these issues.

We need explicit rulings on these issues within the next couple of days.
The most critical thing is that each and every change or substantive
clarification that is made to excelsior must be noted in a separate file
of errata.  What we can't do, at this point, is wait around for the next
edition and then have to grovel through every line of it to see what has
been changed.  That saves you a couple of hours, but costs the rest of us
a week apiece.

If you want to rule on all these issues and maintain the complete
post-excelsior errata file in a publicly visible place (CMUC, I
guess), that would be great.  If you'd rather have me do it, more or
less in the spirit of the Memorial Day effort, that is OK too.  Let me
know how you want to handle this.

Hope the editing goes well.  I met a LOT of people at AAAI who wanted
to get their hands on Common Lisp, so the bandwagon lives; some of them
are getting impatient, however.

-- Scott

∂01-Oct-83  1501	RPG   	Comments on Excelsior manual
 ∂02-Sep-83  1018	Guy.Steele@CMU-CS-A 	Comments on Excelsior manual 
Received: from SU-DSN by SU-AI with PUP; 02-Sep-83 10:17 PDT
Received: From CMU-CS-A by SU-DSN.ARPA; Fri Sep  2 10:00:51 1983
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP;  2 Sep 83 09:37:23 EDT
Date:  2 Sep 83 0047 EDT (Friday)
From: Guy.Steele@CMU-CS-A
To: fahlman@CMU-CS-C, rpg%su-ai@su-dsn, moon%scrc-tenex@MIT-MC,
    dlw%scrc-tenex@MIT-MC, bsg%scrc-tenex@MIT-MC
Subject: Comments on Excelsior manual


<This includes all of Moon's message, with comments by SEF in pointy
brackets.  Things that just need to be clarified and upon which I have
no strong opinion do not have a comment. >

[Comments by GLS are in square brackets.]


p. 25: integer and ratio are not an exhaustive partition of rational?
fixnum and bignum are not an exhaustive partition of integer?  Yow!
Such possibilities for language extension...

[What kept me from making the partition exhaustive was the nagging feeling
that someday someone might want to adjoin representations of infinity
(positive and negative).  Maybe this is a quibble, and we should describe
it as an exhaustive partition anyway.  Comments?]


p. 26 (second to last paragraph): it says that an implementation may not
unilaterally add new subtypes to common.  But since common includes all
types created by defstruct, anyone using defstruct is adding new subtypes
to common.  I think this is just a wording problem, probably this has
to do with the word "exhaustive union" and you're trying to say that only
the Common Lisp committee can add new subtypes to common that are not
subtypes of the existing list of subtypes.  But I'm not sure, which is
why this is included here.

[I think you have understood the intent; the wording does need to be fixed.]


p. 40: there is a typo in which the arguments to COERCE are given in the
wrong order, probably because you were reasoning by analogy from concatenate,
which appears in the same sentence.  Since THE, MAP, and MERGE also put the
type first, I suggest that COERCE is broken and should be changed to put the
type first.  However, this was rejected the last time I suggested it.  Since
the manual is inconsistent, I suggest that it is only a clarification to change
the order of arguments now.

< No, there's code written that uses COERCE as documented, so if we
change it, it is a change.  I'd let it be -- to swap it makes it more
consistent but less intuitive. >

[We already went around on this once, I think.  Let it stand.]
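
As it stands, then, the object comes first and the result type second:

	(coerce '(a b c) 'vector)	=> #(A B C)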


p. 43 (third paragraph, parenthesized sentence): Does this mean that no
extensions to the evaluator are allowed, or that they have to use data
types that are not a subtype of common?  This depends on what "Common Lisp
data object" means exactly.

[Sigh.  I guess it should read "is an error", and make a remark about
permitted extensions, and the desirability of explicit error signalling.]


p. 48 (first sentence): &key keywords should not be required to be in the
keyword package, only encouraged to be.  There are occasional reasons to
have private-packaged keyword-argument names.  This applies to the second
paragraph on p.49 also.

< We have to balance this "occasional need" against the error checking
provided if we require these things to be keywords.  I'd let it be. >

[Unless you can demonstrate the need more conclusively, I am inclined
to let it stand.  It "must" be a keyword, and therefore it "is an error"
if it is not, so implementations may make the extension if desired.]


p. 63 (third paragraph): This is the first of several typos that think
that EQUAL compares all arrays element by element (it used to, but now it
only compares strings and bit vectors).  I suggest the introduction of
a new function EQUALC ("equal components") that compares numbers and
characters the way EQUAL does, but compares arrays the way EQUALP does
(of course, when comparing strings the individual characters are compared
the way EQUAL does, case-dependently).  Another typo is on p.195 (copy-seq).

< Just fix the typos.  If Moon wants new equal functions, these are for
the second edition.  I think we've got enough. >

[I strongly oppose the introduction of new functions at this time
without clear-cut demonstration of language inadequacy.]


p. 69: It would be difficult for us to enforce the restriction that funcalling
the result of SYMBOL-FUNCTION of a special form (not a macro) WILL signal
an error.  I suggest that it IS an error.

< Yes, that sounds right to me.  Checking for this could be more
expensive than useful. >

[Sigh.  It should be zero cost for a properly arranged implementation,
but if you say so, I am willing to make it "is an error".]


p. 87: Declarations are not allowed at the beginning of the body of a COMPILER-LET,
I think.  Certainly declarations of the variables being bound -at compile time-
would not be meaningful.  It is probably best to require use of LOCALLY to
put declarations here.  COMPILER-LET is not in the table on p.117.

< I agree. >

[Right.]


p. 103: In setting MULTIPLE-VALUES-LIMIT, what should I do about the fact
that on the LM-2 MULTIPLE-VALUE-PROG1 has a different limit on the number
of values than does everything else.  Should I set MULTIPLE-VALUES-LIMIT
to the higher limit, and say that MULTIPLE-VALUE-PROG1 has a bug (hopefully
it signals an error if you exceed its limit), should I set MULTIPLE-VALUES-LIMIT
to the lower of the two limits, or should I make MULTIPLE-VALUE-PROG1 accept
more values (and hence be slower and more prone to stack-frame-overflow
errors).  None of this applies to the 3600; no problems with multiple values
there.

< If an implementation  has several values for one of these limits,
depending on fine-grained decisions, it should supply the most
restrictive limit. >

[Right.]


p. 112: Is it a requirement that redefining a special form globally with
a macro must work?  Or is this just an example?  I don't think it works
now in our implementation.

< Better not require this. >

[How about that it "is an error" to attempt to redefine a special form
in any way?]


p. 130: Is GENSYM required to return G7 rather than G0007, or is that
just an example?  In either case there should be a rationale or
compatibility note.  But there is also the technical issue of which it is.

< Can we leave this unbound?  Nobody should depend on these names.  If
not, I am in favor of G7. >

[For the sake of uniformity, I would rather bind the decision.
The four-digit business is partly influenced by the PDP-10, obviously.
It does have value in making the generated symbols more recognizable as such.
I would insist that the counter never "wrap around"; if you go above 9999
then you have to start using at least five digits, etc.  I am inclined
to let the spec stand.]


p. 130: keywordp should be nil for all non-symbols, not an error.  I.e.
keywordp should be a data type predicate, not a symbol operation.

< Right.  This was already discussed. >

[Right.]


p. 141: the new-nicknames argument to rename-package should be &rest,
not &optional, for uniformity.

< I don't see how this furthers the cause of uniformity.  I favor
&optional, though not passionately. >

[I don't see the uniformity either.]


p. 168: shouldn't the third value of decode-float and integer-decode-float,
the sign, be an integer (1 or -1) rather than a float?  Same for float-sign
when given only one argument.  Maybe there's a reason for making this be
a float, that I don't see and that isn't set forth in the manual.

< I don't care. >

[float-sign of two args needs to return a float.  For uniformity
within float-sign, it therefore also returns a float with one arg.
decode-float's third value should be similar to the result of
float-sign.  QED?]
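
For example, under the rule as it stands:

	(float-sign -0.5 3.0)	=> -3.0
	(float-sign -0.5)	=> -1.0		;a float, for uniformity with the two-argument case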


p. 175: I feel that BYTE-SPECIFIER ought to be a data type for declaration
purposes (but an implementation is allowed not to support it as a data type
for discrimination purposes).  Say that BYTE-SPECIFIER may or may not be
a subtype of NUMBER, depending on the implementation.

< Seems like a lot of fuss for no reason.  But then, I've never liked
byte specifiers at all. >

[Doesn't seem urgent enough to make a language change at this point.
Save it for second edition.]


p. 189: I don't like MAKE-CHAR, since its arguments are not consistent
with the other MAKE-xxx functions, and the function seems quite redundant
with CODE-CHAR.  I suggest making CODE-CHAR accept characters as well
as integers as its first argument, or else making CHAR-BITS and CHAR-FONT
SETF'able.

< Not a bad suggestion, but not worth unfreezing for. >

[Agreed.]


p. 191: In an implementation that doesn't have "super bits", is
(char-bit char :super) always false or is it an error?  This may just be
a matter of clarity of explanation.

< Whatever.>

[Good question.  I don't care.  Recommendations?]


p. 199 (replace): What happens if sequence1 and sequence2 are not the same
object, but share storage because they are lists with shared substructure
or because one of them is a displaced array?  I propose that the result
be undefined, since it is expensive to check for and not a very useful
case to support.

< Yes, let's make this undefined. >

[Okay, twist my arm.]


p. 213: PUSHNEW takes the same keyword arguments as ADJOIN (probably this
is only a typographical error that no keyword arguments are listed).

< Yes, this must be a typo, right? >

[Yes, a typo.  Will fix.]


p. 216 (SUBLIS): are the cars of the alist elements required to be symbols,
as in Maclisp and implied by the first sentence of SUBLIS's description,
or are any objects acceptable (i.e. SUBLIS just acts as if it calls ASSOC with
its first argument and its keywords)?  I prefer the latter.

< Yes, the latter is best, and this "clarification" would not break any
code.  I don't know what our implementation does right now. >

[Yes, the latter.  This was a "thinko" as I was transcribing some other
definition of sublis.]


p. 229: What happens when array B is displaced to array A and their element
types are not the same?  Okay for this to be an error, but not okay to signal
an error (we want it as a language extension).

< "It is an error", I guess.  Let each implementation worry about what
to do here.  Can't win in portable code. >

[Is an error.  I meant to say that and forgot.]


p. 230: What happens when array-total-size-limit is a function of the element
type?  Should this constant be set to the minimum over all element types?

< Yes, the minimum. >

[Right.]


p. 233: Is this feature that a third argument of t to the bit functions means
to use the first argument as the third argument really a win?  Maybe one should
simply pass the first argument twice in this case.  I don't feel very strongly
about this, but it seems like a kludge.

< We discussed this, I think (maybe just you and I), and decided that T
was the most convenient convention.  Passing the bit-vector twice often
requires a let or something equally awkward.  Anyway, it's too late for
such twiddles. >

[Ugly, but let it stand.]


p. 241 (STRING): It doesn't mention using STRING to coerce a character to
a 1-element string.  My vague memory is that this was an accepted change to
the language (at the time character objects were made mandatory and the
ability for an implementation to use integers instead of characters was
removed).

< I don't remember this, but don't care either way.  Flag it as a
change, though, if this is added. >

[I suppose this is okay, and implies that all the standard string
functions will accept characters in the cases that they accept symbols
in place of strings.]
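
If this goes in, the effect would presumably be along these lines:

        (string #\A)            ; => "A", a one-character string
        (string-equal #\a "A")  ; => T -- characters accepted where symbols already are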


p. 248: The "named structure" stuff in defstruct is confused.  The term
"named" is being used to mean two different things.  There are two kinds
of defstructs: One is a subtype of STRUCTURE; TYPEP and TYPE-OF know how
to find the type name symbol for any object that is a structure.  In some
implementations a structure is a vector with its type name in element 0,
and a magic "I am a structure" bit set.  In other implementations, STRUCTURE
is not a subtype of any other COMMON type (e.g. in NIL STRUCTURE is
a subtype of EXTEND).  When making this type of defstruct, the user does
not and cannot know exactly what type defstruct will make, since it is
implementation-dependent.  The other kind of defstruct is one where the user
has requested a specific data type, such as VECTOR or LIST.  In this case
TYPEP and TYPE-OF cannot be guaranteed to work (and in all implementations
I know of they won't work).  In this second kind of defstruct, "named" just
means that defstruct automatically allocates a structure slot containing
the name symbol.
I suggest that (:type structure) mean the same as not specifying :type,
namely an object for which TYPE-OF and TYPEP "work".  :unnamed is illegal
in connection with this, and :named is redundant.  I suggest then that
structure be made a legal type specifier again, for consistency with this.
It means only "defstructs of the first kind."
For the non-structure :types, :unnamed is redundant.
Hence I suggest that :unnamed be flushed.

< Glurk!  Too bad we couldn't all have decided that defstructs were
general vectors, period.  Do whatever it takes. >

[Modified proposal: flush :unnamed, but do not put in (:type structure);
just say that you get a type-1 defstruct if you don't have a :type
clause.]


p. 249: "Moreover, astronaut will have its own access functions for
components defined by the person structure."  Did we really agree to
this?  I don't recall ever hearing of this before.  It is incompatible
with previous usage, and seems useless to me.

< I never did understand this stuff.  Probably nobody would notice a
change here. >

[It's not incompatible and it's not useless; it allows the user to
state more exactly what is going on.  The selectors astro-name and
person-name are identical when applied to an astronaut; however,
person-name may be applied to any person, astronaut or not,
whereas it is an error to use astro-name on anything but an astronaut.
Admittedly the advantages of this requirement are primarily stylistic.
Implementationally, it may be faster for astro-name to check that
an astronaut is an astronaut than for person-name to check that
an astronaut is a person, by a matter of a very few cycles, depending
on the implementation.  This last is a minor quibble.]
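
For concreteness, a sketch of the manual's person/astronaut example as
described here:

        (defstruct person name age)
        (defstruct (astronaut (:include person) (:conc-name astro-))
          helmet-size)

        (let ((buzz (make-astronaut :name "Buzz" :age 53 :helmet-size 17.5)))
          (list (person-name buzz)      ; legal on any person
                (astro-name buzz)))     ; legal only on astronauts
        ; => ("Buzz" "Buzz")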


p. 250: I suspect that the :eval-when option to defstruct is unnecessary
and should be removed.  I think it was put in to get around a bug in the
Lisp machine compiler that was fixed years ago.  I could be misremembering.

< I never understood why this was needed.  Looks to me like you could
just put EVAL-WHEN around the outside. >

[I agree -- flush it.]


p. 254: I suspect that *EVAL is called only by "FEXPRs" and should be deleted
from the white pages.  It exists internally in the evaluator, of course.

< I argued earlier that we should flush this, since we can't tell the
user what the environment looks like or where to get one.  Seems
ill-defined as is. >

[I included it for use by hook functions, but I suppose one can just
use the evalhook function and feed it two nil's.  The slight overhead
is a couple of extra binds.  So what.  Flush *eval.]


p. 257 (second and fourth paragraphs, and parentheses in the first paragraph):
We agreed to change the rules for * to be consistent with what was printed,
rather than aligned with +, but the manual wasn't updated.

< I am by now totally confused about what we agreed to.  If you can
spell it out, I'll try to implement it. >

[I remember what was agreed to.  + is as before.  *, **, *** are updated
every time a result is printed, whether it is the only result or one of
several.  If an evaluation produces zero values, then * does not change.
/, //, /// are updated when the results of an evaluation are printed,
however many (possibly zero) there are; if the computation is aborted,
/ is not updated.  Example:
(gensym)			;Interaction 1
G3141
(cons 'a 'b)			;Interaction 2
(A . B)
(hairy-loop)↑G			;Interaction 3
>>>> Moby quit: you are outside a small building at the end of a road.
(floor 13 4)			;Interaction 4
3
1
; At this point we have:
;    +++ => (cons 'a 'b)    *** => (A . B)    /// => (G3141)
;    ++  => (hairy-loop)    **  => 3          //  => ((A . B))
;    +   => (floor 13 4)    *   => 1          /   => (3 1)

Does that look right?]


p. 263: are INPUT-STREAM-P and OUTPUT-STREAM-P type predicates like streamp,
or is it an error to call them on objects that aren't streams?  This may
just be a textual clarity issue.  INPUT-STREAM and OUTPUT-STREAM are not
in Table 4-1.

< These don't want to become first-class data-types, so maybe make it an
error to call them on non-streams. >

[Right.]


p. 264 (last sentence): The file is only deleted if it was newly-created.
Not if appending or overwriting an existing file.

[Right.]


p. 279: When #+/#- skips a form, certain read errors should be suppressed.
This is necessary in order to use #+ to conditionalize code that runs in
multiple implementations, or multiple environments with different packages
present.  The things to be suppressed include forms after #. and #, ,
floating-point exponent range errors, qualified name errors (no such package,
no such external symbol), #n=.  (#+lispm #1= <a> #+spice #1= <b> <c> #1#)
should work and not complain that the "tag" 1 is used twice, except if
the "lispm" and "spice" features are both true.

< Ugh.  He's right.  I see now that we should have made these delimited
like #| so that we wouldn't have to call read at all, but can just skip
over stuff.  Any chance that we can define this somehow so that it is
easy to scan over what should be skipped -- one atom or balanced parens? >

[It's not that easy.  There is a question of whether or not one should invoke
user-defined macro-character definitions.  I am inclined to say yes; ideally
such a function could have access to a flag saying whether you are ignoring
or not.  For now, let's carefully define what does and does not happen
when you're skipping.  In particular, the syntax of tokens made up of
constituents is completely unchecked.  (This is one reason why I wanted
this simplified theory of constituents so you could find the boundaries
of tokens before interpreting their syntax.)]


p. 281: What does the from-readtable argument to COPY-READTABLE default to?
The manual directly contradicts itself.  I suggest that it default to the
value of *READTABLE*, but that NIL mean "the standard Common Lisp readtable,
i.e. the value of *READTABLE* before any side-effects on the variable or
on the readtable were perpetrated."

< Sounds good. >

[Okay.]
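
Under the suggested defaulting:

        (copy-readtable)        ; a copy of the current readtable, i.e. of *READTABLE*
        (copy-readtable nil)    ; a fresh copy of the standard Common Lisp readtable
        (setq *readtable* (copy-readtable nil)) ; one way to undo local readtable hackery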


p. 284 (last paragraph): When the printer decides whether a symbol's name
must be slashified because it looks like a number, is this decision dependent
on the value of *READ-BASE* or the value of *BASE*?  In other words, what
controls whether the symbol FF prints as FF or \FF?  I suggest *BASE*.

< We've already had a fight over this within the Slisp group.  I don't
care what we do as long as it gets nailed down. >

[Arrghhh!  It comes back to haunt us.  This is one of the original reasons
for not introducing IBASE, and for requiring #nR syntax for supra-decimal
radices; it guaranteed that no number began with a letter.  Foo.
Yes, *BASE* is the one to use.]
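
That is (a sketch of the agreed behavior):

        (let ((*base* 16.))
          (prin1 'ff))          ; must print |FF| (or \FF); unescaped FF would read
                                ; back as the number 255 when the radix is 16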


p. 287: What's the initial value of *PRINT-PRETTY* ?  I suggest NIL.

< NIL is OK.  I'd rather leave it to the implementation. >

[I agree with Scott.]


pp. 290, 296: "Ascii streams" should be called "Character streams", since
the character code in use is not necessarily Ascii.

< Yeah. >

[Right.]


pp. 290, 296: I'd still like to flush the feature that T and NIL as output
streams have special meanings.  This is a holdover from Maclisp.  The main
problem with this feature is that FORMAT uses T and NIL to mean something
incompatible with this.

< Good suggestion, but probably too late. >

[Sigh.  I think we don't dare change this now.]


p. 290: Recursive reads need control over eof handling.  Having the eof-errorp
of the top-level call to read controls what happens isn't good enough.  For
example, the semicolon reader macro has to read until a Return character or
EOF; it should not be an error for a comment to end at end-of-file without
a carriage return (it is all too easy for a user to forget to put in the
carriage return).  I suggest that in recursive reads, eof-errorp = nil means
return eof-value from the inner call to read regardless of the top-level read's
arguments, and eof-errorp = t means look at the top-level read's arguments,
and either signal an error or throw back to the top-level read and return
its eof-value, but in neither case return from the inner read.  Then fix
all examples that say (READ stream NIL NIL T) to be (READ stream T NIL T).

< Sounds OK.  Some would view this as a change, but the present scheme
may be unworkable.>

[You're right; let's do it.]
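
A sketch of how the semicolon macro might look under this convention (the
function name is made up):

        (defun semicolon-reader (stream char)
          (declare (ignore char))
          ;; Read to end of line; end-of-file quietly ends the comment,
          ;; because eof-errorp is NIL in this recursive read.
          (do ((ch (read-char stream nil :eof t)
                   (read-char stream nil :eof t)))
              ((or (eq ch :eof) (eql ch #\Return))))
          (values))                     ; a comment contributes nothing

        (set-macro-character #\; #'semicolon-reader)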


p. 292: READ-DELIMITED-LIST should allow comments as well as whitespace
between the last object and the delimiter.

< OK with me. >

[Yes.  The wording needs to be fixed.]


p. 293 (READ-LINE): What happens if end-of-file terminates an empty line
(the second to last sentence says that when end-of-file terminates
a non-empty line, the line and T are returned)?  Should "" and T be
returned, or should the function take eof-errorp and eof-value arguments?
Zetalisp's READLINE function does the latter.

< Whatever. >

[What Zetalisp does, I guess.]
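
A sketch of the usual idiom this allows (the file name is made up):

        (with-open-file (in "frob.text")
          (do ((line (read-line in nil :eof) (read-line in nil :eof)))
              ((eq line :eof))
            (print line)))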


p. 294 (READ-FROM-STRING): Maybe the two optional arguments should be
keywords, to make things more uniform.  I suggest that :EOF-ERROR
should default to true unless :EOF-VALUE is specified and :EOF-ERROR
is not specified.  This makes one wonder whether there should be two
keywords or one.

< Too late. >

[Let it stand.]


p. 297 (WRITE-CHAR):  Do we really want this to return NIL?  All the
other writers return what they wrote (actually, their first argument,
which is not necessarily exactly what they wrote).

< Whatever. >

[Sigh.  Okay, do it.]


p. 298 (first paragraph): Regardless of what characters TERPRI on a
stream outputs to the physical device, it must be required that
(WRITE-CHAR #\RETURN stream) writes exactly the same characters.  Or
is this not true, in which case you better say so very explicitly.

< Seems to me that TERPRI does the locally tasteful thing, while
WRITE-CHAR does just what you tell it.  Maybe we should make a point
of this.>

[There are nasty issues on both sides of this.  I think I agree with
Scott: to get a proper line termination one should do TERPRI or
format's ~%.  Implementations may or may not choose to supply LF after
RETURN for certain kinds of streams?  Sigh.]


p. 298 (last line): I assume it's a typo that WRITE-BINARY-OBJECT
can only write arrays of integers, not arrays of all numbers, since
it can write all kinds of numbers as scalars.

< ??? >

[Well, you have to stop somewhere.  How about symbols?  Hashtables?
I'm inclined to let it stand for now.]


p. 299: ~F appears to require negative prefix parameters, which previous
format operators didn't require.  Say in the fourth paragraph on page 299
that minus signs may be used in prefix parameters.

[Sigh.  You're right -- the k parameter.  The *right* thing, after doing
this, would be to simplify some other things, such as ~:* => ~-1*.
However, we'll have to let that stand.]


p. 303: I think that if the third prefix parameter to ~E is omitted,
the exponent should use as many digit positions as required, rather than
using exactly 2.  This would make e be treated the same as w and d.

[Okay.]


p. 305: I suggest the following exception handling in ~$:
If the arg is too small, it is taken to be zero.
If the arg is too large, it is printed with an exponent.  If w is not
specified, too large means something like 40 or 64 or 100 digits; this
could be left to the implementation.
If the arg is rational, it is first coerced to a single-float (see below).
If the arg is not a number, or complex, it is printed in ~wD format.

[Okay.]


pp. 302-305: I suggest that when a rational is printed in ~F, ~E, ~G,
or ~$ format, an implementation be permitted either to coerce it to
single-float or to do something that retains more precision and doesn't
risk exponent overflow, at its discretion.  "Something" could be
coerce to long-float or could be format it "exactly", except that if
the number of digits is not specified and a ratio that does not have
an exact decimal representation (e.g. 1/3) is specified, a finite
number of digits must be printed.

[Okay.]


p. 306: ~G should be replaced by ~@*.

[Right.]


p. 307: The example of ~? is not consistent with the text.  Possibly this
is a typo and the format string should have started ``"~1{~?~}~%~V...'',
but I'm not sure.  If so, be sure to point out that the ~? is expanded
outside of the iteration caused by the braces, and before the braces
pick up their argument (the list to iterate over).  This is the same
as use of ~V inside of braces.

[~? was ill-specified.  I think it needs to have a @ modifier similar
in effect to that for braces.]


p. 311: Does ~↑ with three prefix parameters do inclusive (<=) or
exclusive (<) comparison?  Ugh, bletch!

[Inclusive.]


p. 312: I think we agreed to change the arguments to Y-OR-N-P and
YES-OR-NO-P to be ``&optional format-string &rest format-args''.  This
is compatible in the usual case where one argument is supplied and it
doesn't contain any tildes, and is much more useful than specifying a
stream; one never wants to change the stream locally, rather than
globally by binding *QUERY-IO*.

< I don't recall this agreement, but it sounds like a very good
suggestion, probably worth accepting even now. >

[Right.]
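
A sketch under the new argument convention (MAYBE-DELETE is just a made-up
caller):

        (defun maybe-delete (files)
          (when (y-or-n-p "Delete ~D file~:P? " (length files))
            (mapc #'delete-file files)))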


p. 317: Some file systems lack devices, types, or versions, hence you
need a value for these components in pathnames for those file systems.
The Lisp machine uses the keyword :UNSPECIFIC for this.  This is different
from NIL, which means that the component was not specified by the pathname
and can be supplied by merging.  Most file systems either never allow
a given component to be :UNSPECIFIC, or require it always to be :UNSPECIFIC,
but there are some where this is not the case.  ITS, for example, can
have either a type or a version, but not both.  When it has a version, the
type is :UNSPECIFIC.  When it has a type, the version is :UNSPECIFIC.
When it has neither, both components are NIL.  It may be possible to
get rid of :UNSPECIFIC and use NIL both to mean "this component was
not specified" and to mean "this component cannot be specified", but
this would require more complexity in the merging process.

< Since the current file chapter is pretty confused, and this is
everyone's first look at it, I don't regard it as frozen.  Adding
:UNSPECIFIC in the sense of "Illegal component in this type of file
system" sounds OK, but the name is damned confusing.  How about
:NOT-USED-IN-THIS-SYSTEM or some such? >

[I think NIL can be used for both purposes.]


p. 318: TRUENAME of a stream should be the truename of the file actually
open.  This cannot be done by first converting the stream to a pathname
then taking the truename of that.  File system operations performed after
the stream was opened might have changed the mapping of the stream's pathname
into a truename, for instance if the version of the stream's pathname
is :NEWEST and a newer file was created.

< Right. >

[Right.]


p. 319: If TRUENAME returns NIL if there is no such file, it is identical
to PROBE-FILE.  If it quietly returns its argument, it is a liar.  Probably
best to error.  Then PROBE-FILE is TRUENAME except that it returns NIL for
a file-not-found error (but not for other errors such as directory-not-found,
file name illegal, or foreign host not responding).

< Well, that would settle the query -- we would have both an
error-signalling form and one that returns NIL if the file isn't there.
I'd prefer to have truename return NIL and just flush PROBE-FILE, but I
don't really care. >

[Let it error out.]


p. 319: Flush the stuff about "conventions" in parse-namestring.  parse-namestring
should take a junk-allowed argument that defaults to nil, like parse-integer.
Shouldn't start and end be keywords, like for most other functions that take
such arguments?  By analogy with parse-integer, all of the arguments to
parse-namestring except the first should be keywords.

< Well, I'm not sure the analogy holds, but I don't really care much. >

[Right.]


p. 320 (pathname-plist): I don't think it is wise to put property lists on
pathnames in Common Lisp.  Especially when you don't say anything about
whether or not pathnames are "interned", i.e. does parse-namestring called
twice with the same arguments return two pathnames that share the same property
list, or two distinct pathnames, or is this implementation-dependent.  Better
not to include any operations that perform side-effects on pathnames in Common
Lisp, so that only EQ can tell whether they are "interned."  This means you
have to define what EQUAL means for pathnames.  I see that page 63 says that
EQUAL compares pathnames by components, which is good, but this should be
mentioned in the pathname chapter also.  EQL on pathnames should be the same
as EQ, and hence not useful, and this should be mentioned in the pathnames
chapter.

< I would say that parse namestring called twice produces two distinct
pathname objects with the same components.  If you put properties on one
of them, they don't magically appear on the other.  So what?  I don't
see this causing any trouble in normal use.  One can always use a
hashtable to intern these by hand if it matters.  I agree that EQL is
like EQ and that EQL probably looks at components. >

[I think that last EQL of Scott's should be EQUAL.  Yes, let's nuke
the pathname plists.]

< By the way, there seems to be no way to copy a pathname.  In the
compiler, I want to copy the input pathname and then bash the type field
of the copy to the LAP or FASL.  How am I supposed to do this?  I
couldn't figure out how to do it with MERGE-PATHNAMES alone.  Do we need
COPY-PATHNAME ? >

[You're not allowed to bash pathnames.  Use MERGE-PATHNAMES:
	;Change TYPE component of pathname X to BAZ.
	(merge-pathnames (make-pathname :host (pathname-host x) :type "BAZ")
			 x)
Does that do it?]


p. 321: INIT-FILE-PATHNAME should take an optional argument that is the file
type.  Whether this argument affects the result depends on the host (not
on the Common Lisp implementation!).  To be consistent with FS:INIT-FILE-PATHNAME
in Zetalisp, the type argument should come before the host argument.

[Okay by me; however, it would seem that the type would more logically
follow the host, as its presence  or absence depends on the host.]


p. 322 (second paragraph): Since all pathnames include a host, merging cannot
be responsible for putting in the default device.  Pathname parsing must
do this; parsing a string that specifies a host puts in a device component,
which is the default file device unless the string specifies an explicit
device.  Also the description of merging, and much of the rest of the pathname
chapter, doesn't know that MERGE-PATHNAME-DEFAULTS was flushed (over my mild
protests that both MERGE-PATHNAMES and MERGE-PATHNAME-DEFAULTS are useful).

< Ah, that clarifies something that was confusing to everyone.  I
wouldn't object to a return of MERGE-PATHNAME-DEFAULTS, but we need a
better name. >

[Suits me.]


pp. 322-324: Logical pathnames are useless without a standardized syntax.  The
logical pathname system here is based on an obsolete specification of logical
pathnames in the Lisp machine, hence is not attractive to us.  I suggest that
logical pathnames not be included in Common Lisp this time around; they can
be standardized later when they are better understood.

< Amen.  Nobody would mourn their loss. >

[Right.]


p. 325 (:element-type standard-char): What happens if the input file in fact
contains a non-character?  Is this a case of "is an error" or "signals an
error"?  What happens if the user calls WRITE-CHAR with a non-standard character?
What about READ?  Are any strings whose printed representation is read from a
stream with :element-type standard-char guaranteed to contain only standard chars?
If READ-CHAR and WRITE-CHAR are required to check and signal an error, does this
requirement extend to :element-type string-char as well?  How about READ-BYTE
and WRITE-BYTE checking that the bytes fit in the byte size declared by the
element-type?  The best thing for right now is probably to leave the specification
loose and let implementations decide how much error checking they want to have.

< How about flushing this element-type for files?  Is this confusing
crock really useful? >

[Flush it.]


p. 326: :if-exists :rename and :if-exists :rename-and-delete, like :if-exists
:supersede, should be encouraged not to affect the existing file until the
stream is closed, and not to affect it at all if the stream is closed in
abort mode.  Should this be required rather than encouraged?  Encouraged is
probably better since file systems vary so widely in their capabilities.
Explain what :if-exists :supersede means more precisely.  Is it permissible
for this to mean the same as :rename, or :rename-and-delete?  Note that
:supersede rather than :error is the default when the file system does not
have versions, since the pathname component can't be :newest in this case.

< Yes, encouraged. >

[Encouraged with a very big stick, or perhaps a cattle prod.]


p. 327: clarify that in :if-does-not-exist :create mode, "proceed as if
it had already existed" does not include any processing directed by the
:if-exists argument.  Someone here was confused by this.

[Yes.]


p. 328 (rename-file): I think we agreed that values saying what was done,
rather than t, should be returned.  The Zetalisp RENAMEF function returns
three values, which are the second argument (new-name) after merging with
the first argument so that renaming leaves unspecified components unchanged,
the truename before renaming, and the truename after renaming.

< I do seem to recall this. >

[Yes.]


p. 328 (rename-file and delete-file): if it is an error to specify a pathname
containing a :wild component, what about nil components?  Do they default
by merging with some default defaults, or are they an error?

[How about an error?  We should be very cautious with these.  Remember the
time some poor random deleted about 1/3 of MC's files because of a missing
set of parens?]


p. 328 (file-creation-date): People here find the word "creation-date"
(used by Zetalisp) very ambiguous and confusing.  I suggest that this
function be named file-write-date or file-written-date (depending on our
grammatical preferences).

< 4004 B.C. for all files.  But seriously.... >

[Fine.]


p. 328 (file-position): This can't work easily when there is character
set translation (including translation of the Return character to a
VAX/VMS or OS/360 record boundary).  Should file-position be defined to
return NIL when the element-type of the stream is one that requires such
translation, or should it be required to return the equivalent number
of READ-CHAR/WRITE-CHAR operations, or should it be required to return
a number in the units in which the file is actually read or written?
This is a problem for file-length, too.
file-position with two arguments should use :start and :end rather
than nil and t (or do I mean rather than t and nil?  That's the point.)
Bi-directional streams should have separate read and write positions,
shouldn't they?

< Good points.  Maybe encourage the file-position to indicate READ-CHAR
equivalents, but allow NIL if it can't hack this in certain crockish
systems. >

[How about just requiring that it be a monotonically increasing function
of the number of READ-CHAR/WRITE-CHAR operations; that is, xxx-CHAR always
increments it by some positive integer but not necessarily by 1?]


p. 329: I don't approve of making the filename argument to LOAD
optional.  Am I overruled by a consensus of the committee?  I don't
recall this ever being discussed.

< I don't recall this being discussed either, but thought that I might
have slept through this.  I agree strongly with MOON that the filename
arg to LOAD should be required, and the filename argument to
COMPILE-FILE as well.  I really don't like this
*LOAD-SET-DEFAULT-PATHNAME* stuff.  I guess it is reasonable to hold
that we're too late on this, however. >

[I'm inclined to let it stand.]


p. 335: The string subform of CHECK-TYPE is evaluated.

[Right.]


p. 335: The syntax for ASSERT hasn't been updated to the new syntax
we agreed on, which flushes the kludgey use of string as a delimiter.
string is evaluated now.

< Right, as agreed. >

[Right.]


p. 347: Shouldn't the symbols on *FEATURES* be keywords?  If even
implementation-specific elements of *FEATURES* go in the LISP package,
they can cause accidental sharing.  In general implementations have
to be careful about adding their own symbols to the LISP package,
since this could make some programs become unportable (until they
do a SHADOW).
Note that one does not write a colon in the #+ (or #-) syntax; all
symbols in that syntax are assumed to be keywords.

< No, this was a ballot item earlier, and we decided not to make these
keywords. >

[Scott is correct.]

∂01-Oct-83  1502	RPG   	Comments on Excelsior manual
 ∂02-Sep-83  1206	Moon%SCRC-TENEX@MIT-MC 	Comments on Excelsior manual   
Received: from SU-DSN by SU-AI with PUP; 02-Sep-83 12:06 PDT
Received: From MIT-MC by SU-DSN.ARPA; Fri Sep  2 12:07:44 1983
Received: from SCRC-SCHUYLKILL by SCRC-TENEX with CHAOS; Fri 2-Sep-83 14:41:21-EDT
Date: Friday, 2 September 1983, 14:41-EDT
From: David A. Moon <Moon%SCRC-TENEX@MIT-MC>
Subject: Comments on Excelsior manual
To: Guy.Steele@CMU-CS-A, fahlman@CMU-CS-C, rpg%su-ai@SU-DSN,
    moon%SCRC-TENEX@MIT-MC, dlw%SCRC-TENEX@MIT-MC, bsg%SCRC-TENEX@MIT-MC
In-reply-to: The message of 2 Sep 83 00:47-EDT from Guy.Steele at CMU-CS-A

    Date:  2 Sep 83 0047 EDT (Friday)
    From: Guy.Steele@CMU-CS-A

    p. 25: integer and ratio are not an exhaustive partition of rational?
    fixnum and bignum are not an exhaustive partition of integer?  Yow!
    Such possibilities for language extension...

    [What kept me from making the partition exhaustive was the nagging feeling
    that someday someone might want to adjoin representations of infinity
    (positive and negative).  Maybe this is a quibble, and we should describe
    it as an exhaustive partition anyway.  Comments?]

Good point.  Leave it the way it is, and add a footnote about integer infinities.

    p. 48 (first sentence): &key keywords should not be required to be in the
    keyword package, only encouraged to be.  There are occasional reasons to
    have private-packaged keyword-argument names.  This applies to the second
    paragraph on p.49 also.

    < We have to balance this "occasional need" against the error checking
    provided if we require these things to be keywords.  I'd let it be. >

I see no additional error checking gained by requiring keyword argument names
to be keyword symbols.  There is already error-checking that only the correct
keyword argument names are used, unless the user specifically disables the error
checking with allow-other-keys.

    [Unless you can demonstrate the need more conclusively, I am inclined
    to let it stand.  It "must" be a keyword, and therefore it "is an error"
    if it is not, so implementations may make the extension if desired.]

I won't press the point now, as long as it doesn't "signal an error."

    p. 141: the new-nicknames argument to rename-package should be &rest,
    not &optional, for uniformity.

    < I don't see how this furthers the cause of uniformity.  I favor
    &optional, though not passionately. >

    [I don't see the uniformity either.]

This is the uniformity:

"rename-package takes a package and any number of names (at least one) as
arguments.  All the package's current names are discarded and the specified
names are made to be the names of the package.  The first name specified
is preferred for output, but all of the names are accepted on input."

Actually in our system the first name is preferred for output when printing
the package object, but the shortest name (unless the user specifies otherwise)
is preferred for printing qualified names of symbols in the package.

    p. 168: shouldn't the third value of decode-float and integer-decode-float,
    the sign, be an integer (1 or -1) rather than a float?  Same for float-sign
    when given only one argument.  Maybe there's a reason for making this be
    a float, that I don't see and that isn't set forth in the manual.

    [float-sign of two args needs to return a float.  For uniformity
    within float-sign, it therefore also returns a float with one arg.
    decode-float's third value should be similar to the result of
    float-sign.  QED?]

I see.  Put a @rationale footnote to this effect in the manual.

    p. 250: I suspect that the :eval-when option to defstruct is unnecessary
    and should be removed.  I think it was put in to get around a bug in the
    Lisp machine compiler that was fixed years ago.  I could be misremembering.

    < I never understood why this was needed.  Looks to me like you could
    just put EVAL-WHEN around the outside. >

    [I agree -- flush it.]

Bawden explained this to me.  The problem is that
	(eval-when (load) (defstruct s a))
expands into
	(eval-when (load)
	  (eval-when (compile load eval)
	    (defstruct-internal 's '(a))
	    (defstruct-define-constructor 'make-s 's))
	  (defun s-a (s)
	    (declare (open-codable))
	    (%structure-reference s 1)))
or something like that (I have written the above in a real bastardized language)
and the outer eval-when does not stop the inner eval-when from defining things
at compile time.  The real problem is with eval-when, since there is no way to
distinguish "eval this at load time ONLY" from "eval this normally".  A better
way than the :eval-when option, which everyone agrees is a kludge, would be to
make it possible for defstruct to find out what eval-when environment it is
being expanded in (e.g. eval-when could imply compiler-let of a specified
variable name).  In the absence of that, we have to decide whether to leave
the :eval-when feature in, or decide that the case it fixes is obscure and
leave it out of the manual (like dozens of other more useful defstruct features).
Either way is okay with me, probably default to leaving it in I guess.

    p. 257 (second and fourth paragraphs, and parentheses in the first paragraph):
    We agreed to change the rules for * to be consistent with what was printed,
    rather than aligned with +, but the manual wasn't updated.

    [I remember what was agreed to.  + is as before.  *, **, *** are updated
    every time a result is printed, whether it is the only result or one of
    several.  If an evaluation produces zero values, then * does not change.
    /, //, /// are updated when the results of an evaluation are printed,
    however many (possibly zero) there are; if the computation is aborted,
    / is not updated.  Example:
    (gensym)			;Interaction 1
    G3141
    (cons 'a 'b)			;Interaction 2
    (A . B)
    (hairy-loop)↑G			;Interaction 3
    >>>> Moby quit: you are outside a small building at the end of a road.
    (floor 13 4)			;Interaction 4
    3
    1
    ; At this point we have:
    ;    +++ => (cons 'a 'b)    *** => (A . B)    /// => (G3141)
    ;    ++  => (hairy-loop)    **  => 3          //  => ((A . B))
    ;    +   => (floor 13 4)    *   => 1          /   => (3 1)

    Does that look right?]

Interesting.  We would have *** => G3141, ** => (A . B), and * => 3,
on the grounds that usually only the primary value is interesting, and to
get at other values you have to use /.  But saying that -each- value
printed pushes the * history sounds reasonable too.  I can accept either
of these.  Does anyone have a strong opinion about this?

    p. 298 (first paragraph): Regardless of what characters TERPRI on a
    stream outputs to the physical device, it must be required that
    (WRITE-CHAR #\RETURN stream) writes exactly the same characters.  Or
    is this not true, in which case you better say so very explicitly.

    < Seems to me that TERPRI does the locally tasteful thing, while
    WRITE-CHAR does just what you tell it.  Maybe we should make a point
    of this.>

    [There are nasty issues on both sides of this.  I think I agree with
    Scott: to get a proper line termination one should do TERPRI or
    format's ~%.  Implementations may or may not choose to supply LF after
    RETURN for certain kinds of streams?  Sigh.]

This is a real can of worms I guess.  It is vital that the manual include
a discussion of this, since otherwise every implementation will go off in its
own incompatible direction.  There are two possibilities: either Common
Lisp hides local strange end of line conventions under the #\Return character,
or it doesn't attempt to deal with this.  Naturally I advocate the former,
since we already bit this bullet years ago, and I suggest that if one wants
to do device-dependent controls of a terminal one opens it with an element
type other than character.  Is the value of
	(= 3 (length "a
b")) permitted to be T in some implementations and NIL in others?  Note that
Common Lisp has already specified that record-oriented implementations can't
expose their native convention in the language, but must instead have a
#\Return character (the character is standard, not optional).

I can accept the language specifying it either way, reluctantly perhaps, but
the language -must- specify it.  If our goal is portability, we should
hide local vagaries under #\Return.

    p. 298 (last line): I assume it's a typo that WRITE-BINARY-OBJECT
    can only write arrays of integers, not arrays of all numbers, since
    it can write all kinds of numbers as scalars.

    [Well, you have to stop somewhere.  How about symbols?  Hashtables?
    I'm inclined to let it stand for now.]

I think you missed my point.  One can write-binary-object an integer, an
array of integers, or a float, but one cannot write-binary-object an array
of floats.  Surely this is not intentional, since most people who use
floats are likely to use arrays of them, and if you support arrays at all
arrays of floats are no harder than arrays of integers.

    p. 307: The example of ~? is not consistent with the text.  Possibly this
    is a typo and the format string should have started ``"~1{~?~}~%~V...'',
    but I'm not sure.  If so, be sure to point out that the ~? is expanded
    outside of the iteration caused by the braces, and before the braces
    pick up their argument (the list to iterate over).  This is the same
    as use of ~V inside of braces.

    [~? was ill-specified.  I think it needs to have a @ modifier similar
    in effect to that for braces.]

I can't see how the @ modifier for ~{ could make sense for ~?; it controls
whether the arguments iterated over are all the rest of the arguments or
the elements of a list passed as a single argument.  ~? doesn't deal in
arguments; it simply inserts a string into the control string (one could
presumably do the same thing with CONCATENATE before calling FORMAT).

    p. 319: If TRUENAME returns NIL if there is no such file, it is identical
    to PROBE-FILE.  If it quietly returns its argument, it is a liar.  Probably
    best to error.  Then PROBE-FILE is TRUENAME except that it returns NIL for
    a file-not-found error (but not for other errors such as directory-not-found,
    file name illegal, or foreign host not responding).

    < Well, that would settle the query -- we would have both an
    error-signalling form and one that returns NIL if the file isn't there.
    I'd prefer to have truename return NIL and just flush PROBE-FILE, but I
    don't really care. >

    [Let it error out.]

Either erring out or not having both TRUENAME and PROBE-FILE would be okay with me.

    p. 320 (pathname-plist):....

    < By the way, there seems to be no way to copy a pathname.  In the
    compiler, I want to copy the input pathname and then bash the type field
    of the copy to the LAP or FASL.  How am I supposed to do this?  I
    couldn't figure out how to do it with MERGE-PATHNAMES alone.  Do we need
    COPY-PATHNAME ? >

    [You're not allowed to bash pathnames.  Use MERGE-PATHNAMES:
	    ;Change TYPE component of pathname X to BAZ.
	    (merge-pathnames (make-pathname :host (pathname-host x) :type "BAZ")
			     x)
    Does that do it?]

Clumsily, but it does do it.  We have a :NEW-PATHNAME message for this purpose,
which takes the same arguments as MAKE-PATHNAME and returns a pathname which is
like the one that receives the message, as modified by the keywords.  You could
also make the component accessors PATHNAME-HOST .. PATHNAME-VERSION be SETF'able;
they would work like LDB, since pathnames are not modifiable.  Scott's specific
example is the main use of MERGE-PATHNAME-DEFAULTS, but of course that doesn't
solve the general problem of making a pathname which is "like this other pathname
except."  You could also add one more keyword to MAKE-PATHNAME, with a name
like :PATHNAME or :FROM-PATHNAME, which supplies all components except those
specified explicitly (if this argument is not supplied, all unspecified components
are NIL as before).
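
A sketch of that last suggestion (the keyword name is only the one proposed
just above, not anything in the draft manual):

        (make-pathname :from-pathname x :type "BAZ")    ; all components but TYPE copied from X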

I expect all this pathname stuff will be revised in a second go-around on
Common Lisp anyway.

    p. 321: INIT-FILE-PATHNAME should take an optional argument that is the file
    type.  Whether this argument affects the result depends on the host (not
    on the Common Lisp implementation!).  To be consistent with FS:INIT-FILE-PATHNAME
    in Zetalisp, the type argument should come before the host argument.

    [Okay by me; however, it would seem that the type would more logically
    follow the host, as its presence  or absence depends on the host.]

I don't feel strongly about this, but the host argument is almost always omitted
and allowed to default to the "current" or "login" host; this suggests it should be last.

    p. 328 (file-position): This can't work easily when there is character
    set translation (including translation of the Return character to a
    VAX/VMS or OS/360 record boundary).  Should file-position be defined to
    return NIL when the element-type of the stream is one that requires such
    translation, or should it be required to return the equivalent number
    of READ-CHAR/WRITE-CHAR operations, or should it be required to return
    a number in the units in which the file is actually read or written?
    This is a problem for file-length, too.
    file-position with two arguments should use :start and :end rather
    than nil and t (or do I mean rather than t and nil?  That's the point.)
    Bi-directional streams should have separate read and write positions,
    shouldn't they?

    < Good points.  Maybe encourage the file-position to indicate READ-CHAR
    equivalents, but allow NIL if it can't hack this in certain crockish
    systems. >

    [How about just requiring that it be a monotonically increasing function
    of the number of READ-CHAR/WRITE-CHAR operations; that is, xxx-CHAR always
    increments it by some positive integer but not necessarily by 1?]

Okay.  I think we should specify that on binary streams, if READ/WRITE-BINARY-OBJECT
is not used, the file position exactly equals the number of READ-BYTE / WRITE-BYTE
operations.


For the items not included in this message, what Guy said is acceptable to me
(sometimes I'm enthusiastic, sometimes I'm not).

∂01-Oct-83  1502	RPG   	More comments on excelsior  
 ∂02-Sep-83  2131	FAHLMAN@CMU-CS-C.ARPA 	More comments on excelsior 
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 2 Sep 83  21:31:23 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Sat 3 Sep 83 00:32:36-EDT
Date: Sat, 3 Sep 1983  00:32 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   moon%SCRC-TENEX@MIT-MC.ARPA, steele@CMU-CS-C.ARPA
Cc:   rpg@SU-AI.ARPA, dlw%SCRC-TENEX@MIT-MC.ARPA, bsg%SCRC-TENEX@MIT-MC.ARPA
Subject: More comments on excelsior


    p. 141: the new-nicknames argument to rename-package should be &rest,
    not &optional, for uniformity.

    < I don't see how this furthers the cause of uniformity.  I favor
    &optional, though not passionately. >

    [I don't see the uniformity either.]

This is the uniformity:

"rename-package takes a package and any number of names (at least one) as
arguments.  All the package's current names are discarded and the specified
names are made to be the names of the package.  The first name specified
is preferred for output, but all of the names are accepted on input."

Actually in our system the first name is preferred for output when printing
the package object, but the shortest name (unless the user specifies otherwise)
is preferred for printing qualified names of symbols in the package.

< Well, you want to treat these all as names for the package, of which
some are slightly more equal than others.  I prefer to think of this as
one real name and a list of nicknames.  I'm not wedded to this view, but
your suggestion doesn't look like an improvement to me and the manual IS
supposed to be frozen...>


< On the issue of *** and friends, I really dislike Guy's
interpretation.  I agree with Moon that the first value is usually what
you want to get at, and that the /'s are there if you want the other
values.  I don't like the idea of * values scrolling out of reach just
because some stupid function returned more values than I thought it
would. >

< I think that it is treacherous and confusing for WRITE-CHAR #\RETURN
to do anything other than writing a #\RETURN character to the file.
Seems to me that this is low-level and should be literal.  Higher level
things like TERPRI can "do the right thing", but I really want some way
to write a #\RETURN code into a file, even on a system that normally
uses CRLF between lines. >

---------------------------------------------------------------------------
More by SEF:

I've still got a big problem with the #+, #- business.  I now realize
that these are fundamentally ill-formed constructs.  What you want to do
here, if the condition is not met, is to completely skip over the next
Lisp object as though it were commented out -- no value, no errors, no
side effects.  The obvious way to do this is to call READ and then throw
away the result, but Moon points out that this could cause errors which
we want to suppress.  Similarly, if we run any character macros,
particularly those defined by users, we could very well get errors or
side effects from the "commented out" stuff.  On the other hand, we have
to run some of these macros in order to determine where the next object
ends.  The problem is that we have made our reader powerful enough that
we cannot assume that the Lisp doing the reading can properly determine
the boundaries of a Lisp object meant to be read by some other
configuration or implementation.  We can't win.

The clean solution would be to flush the idea that #+ and #- read one
Lisp object and say that if the condition fails, everything up to some
trivially-recognized terminator is simply skipped, much in the manner
of #|...|# .  Then the systems in question would only have to agree on
this terminator, and neither would have to make guesses about the
other's reader.

It's probably too late to change #+ and #-, but I am not willing to
rewrite our reader to make these losing crocks work on a few extra
cases.  I propose that we state that if the condition for #+ and #-
fails, then the next object is read by the implementation using READ
(perhaps requiring a switch to the standard Common Lisp readtable to do
this) and the result is discarded.  Period.  If you've got a hairier
case that is likely to cause unwanted errors or side effects in some
implementation, you don't use #+ or #-.

What do you use in such cases?  Some evil #. form would do it, but I
propose that we make it easy for the user.  Suppose that we define #0+
and #0- to read a condition, just like #+ and #-, then to read a string
from the input stream, in the normal Common Lisp syntax.  (It is an
error if the next item read is anything but a string.)  If the condition
is met, the contents of the string gets passed to READ.  If the
condition is not met, the string is discarded.

Thus we get things looking like the following:

(defvar awkward-float #0+SYMBOLICS-3600 "+3L27" #0+PERQ "+3L100205")

Note that I hate the Teco-ish look of #0+, but unfortunately Maclisp
grabbed the good names for its faulty concept and it's probably too late
to switch.  I suppose that we could find two unused # characters and use
them instead of + and -.  Any nominations?

∂01-Oct-83  1503	RPG   	More comments on excelsior  
 ∂09-Sep-83  1126	@MIT-MC:Moon%SCRC-TENEX@MIT-MC 	More comments on excelsior  
Received: from MIT-MC by SU-AI with TCP/SMTP; 9 Sep 83  11:25:10 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Fri 9-Sep-83 14:23:16-EDT
Date: Friday, 9 September 1983, 14:26-EDT
From: David A. Moon <Moon%SCRC-TENEX@MIT-MC>
Subject: More comments on excelsior
To: Scott E. Fahlman <Fahlman@CMU-CS-C>
Cc: moon%SCRC-TENEX@MIT-MC, steele@CMU-CS-C, rpg@SU-AI,
    dlw%SCRC-TENEX@MIT-MC, bsg%SCRC-TENEX@MIT-MC

    Date: Sat, 3 Sep 1983  00:32 EDT
    From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
	p. 141: the new-nicknames argument to rename-package should be &rest,
	not &optional, for uniformity.

	< I don't see how this furthers the cause of uniformity.  I favor
	&optional, though not passionately. >

	[I don't see the uniformity either.]

    This is the uniformity:

    "rename-package takes a package and any number of names (at least one) as
    arguments.  All the package's current names are discarded and the specified
    names are made to be the names of the package.  The first name specified
    is preferred for output, but all of the names are accepted on input."

    Actually in our system the first name is preferred for output when printing
    the package object, but the shortest name (unless the user specifies otherwise)
    is preferred for printing qualified names of symbols in the package.

    < Well, you want to treat these all as names for the package, of which
    some are slightly more equal than others.  I prefer to think of this as
    one real name and a list of nicknames.  I'm not wedded to this view, but
    your suggestion doesn't look like an improvement to me and the manual IS
    supposed to be frozen...>

I was hoping one of the other recipients of this message would answer, and
tell you you're just being stubborn.  Oh well, it's only a very minor matter
of tastefulness.

    < On the issue of *** and friends, I really dislike Guy's
    interpretation.  I agree with Moon that the first value is usually what
    you want to get at, and that the /'s are there if you want the other
    values.  I don't like the idea of * values scrolling out of reach just
    because some stupid function returned more values than I thought it
    would. >

Either way is okay with me, but tradition, such as it is, is on the side
of * only being the first value, and if you agree too, let's make it be
that way.

    < I think that it is treacherous and confusing for WRITE-CHAR #\RETURN
    to do anything other than writing a #\RETURN character to the file.
    Seems to me that this is low-level and should be literal.  Higher level
    things like TERPRI can "do the right thing", but I really want some way
    to write a #\RETURN code into a file, even on a system that normally
    uses CRLF between lines. >

Well, now, hold on.  Things aren't so clear as all that.  You are assuming
that Common Lisp uses the ascii character set (in fact, "Teletype ascii",
not some of the other ones going around such as "Unix ascii") and that
the character #\Return is the ascii CR character.  Some of us have been
assuming that #\Return stands for whatever the implementation-dependent
end-of-line sequence is, and that portable programs using files open
with :element-type character are shielded from the implementation-dependent
character set.  Both points of view have merit, and it seems that we have
only just noticed that we weren't all assuming the same thing.

But things like "seems to me that this is low-level and should be literal"
aren't arguments in favor of one or the other way of arranging the character
set.  The real issue is are we, or are we not, attempting to hide the
implementation-dependent character set by having a standard character set.
That is the issue.  If we are, then the way you write implementation-dependent
characters into a file (e.g. if you use a system whose end-of-line sequence
is Ascii CR followed by Ascii LF, and you want to write a CR without any
LF after it, perhaps to cause overstriking on a lineprinter) is to open
the file with some element-type other than character.  You could use an
element-type of (byte 8) [or 7 or 9 depending on the file system], or there
could be an element-type which is character objects, but representing the
implementation-specific character set, not the standard character set.

If on the other hand, we are not trying to hide the implementation-dependent
character set, then the manual must contain a list of standard and semi-standard
characters which, although standard, have implementation-dependent semantics,
and must warn you that the semantics of (write-char #\return) is not defined
by Common Lisp, and portable programs must use (terpri) instead.  There are
other warnings that go with this, such as the one about strings that cross
line-boundaries (the length and contents are implementation-dependent), and the
one about keyboard input either not reflecting keys typed one-for-one or not
being legal to just copy into a file, depending on the implementation.

We have to decide *now* which we are doing.

I just looked through the Excelsior manual, trying to find a description of
the character set.  I couldn't find one.  Page 275 (the #\ reader macro)
seems to be the only place that says what the Return character is.  The
phrasing is very ambiguous; I can't tell whether it means that #\return is
the same "newline" that is mentioned in connection with print, terpri, and
the ~% format operator.

Does the Spice operating system perpetuate the use of the two character CR-LF 
sequence as a line delimiter in files?

    ---------------------------------------------------------------------------
    More by SEF:

    I've still got a big problem with the #+, #- business.  I now realize
    that these are fundamentally ill-formed constructs....

We just had a discussion of various things related to #+ and #- here a
couple of weeks ago, by coincidence.  This convinced me that #+ and #-
are only for simple cases and the cases where you really need them, and
trying to hair them up to handle anything else is a mistake.

    ...I propose that we state that if the condition for #+ and #-
    fails, then the next object is read by the implementation using READ
    (perhaps requiring a switch to the standard Common Lisp readtable to do
    this) and the result is discarded.

Fine.  Except incorporate Steele's suggestion that tokens consisting entirely
of constituents (and colons) do not cause errors when read in "ignore mode."
This is not at all difficult to do (it's a total of perhaps 10 lines of code
in our reader, and simply requires a special variable bound to T by #+/#-
and checked at a few places that could signal errors).  There is no need to
handle macros inside of #+/#- specially, although certainly if those macros
care to look at that "ignore mode" flag you can let them.  But in general
#+/#- can't be made to "work for all cases."

Oh, and definitely do NOT switch readtables when going into "ignore mode".
Just call READ (with the recursive flag).
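
A rough sketch of that arrangement; the names *IGNORE-MODE*, FEATUREP, and
SHARP-PLUS-READER are made up here, and a real reader would also read the
feature expression in the keyword package:

        (defvar *ignore-mode* nil)      ; bound to T while skipping a failed #+/#- form

        (defun sharp-plus-reader (stream subchar arg)
          (declare (ignore subchar arg))
          (if (featurep (read stream t nil t))
              (read stream t nil t)             ; condition holds: return the next form
              (let ((*ignore-mode* t))          ; condition fails: read and discard;
                (read stream t nil t)           ; token-syntax errors are suppressed
                (values))))                     ; while *IGNORE-MODE* is true

        ;; installed with (set-dispatch-macro-character #\# #\+ #'sharp-plus-reader)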

(I deleted your ``#0+condition string'' proposal from my reply, since I think
it comes under the category of hairing them up.)

∂01-Oct-83  1504	RPG   	More comments on excelsior  
 ∂11-Sep-83  1930	FAHLMAN@CMU-CS-C.ARPA 	More comments on excelsior 
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 11 Sep 83  19:30:21 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Sun 11 Sep 83 22:03:23-EDT
Date: Sun, 11 Sep 1983  17:56 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   David A. Moon <Moon%SCRC-TENEX@MIT-MC.ARPA>, steele@CMU-CS-C.ARPA
Cc:   bsg%SCRC-TENEX@MIT-MC.ARPA, dlw%SCRC-TENEX@MIT-MC.ARPA, rpg@SU-AI.ARPA,
      fahlman@CMU-CS-C.ARPA
Subject: More comments on excelsior
In-reply-to: Msg of 9 Sep 1983 14:26-EDT from David A. Moon <Moon%SCRC-TENEX at MIT-MC>


OK, it seems that Moon and I both agree that the * variables should not
get a spread of all the returned values as Guy suggested, but only the
first values of the recent evaluations.

It also seems that RENAME-PACKAGE can stay the way it is.  (I'm not
being stubborn for the sake of stubbornness, but just to preserve the
principle that only the most important changes should go in at this
point, unless the current version is unworkable, the result of some
confusion, or is being seen for the first time.)

Moon is right in observing that there are two coherent ways to handle
the CRLF business.  One could assume, as I had assumed, that a character
within Lisp always represents one character in the external world of
files or whatever, and that translation of TERPRI into CR and LF
characters (on systems that use that convention) occurs within the Lisp,
before the call or calls to WRITE-CHAR.  This certainly seems simpler to
me, especially if you ever want to random-access into a text file.
However, if #\RETURN is uniformly treated as a single char within Lisp
but is expanded into a two-char sequence externally, that can be made
consistent too.  This hides the differences between systems better, but
anomalies arise if either CR or LF is ever seen in an external file
except as part of the CRLF idiom.

I prefer the scheme in which one Lisp character corresponds to one external
character, but I guess I can live with any
scheme that is not overly complex and that is spelled out unambiguously.
If WRITE-CHAR becomes "helpful" we will want to have access to an
unhelpful (and un-portable) lower level that does exactly what you tell
it to, since sometimes you need to send out a real ASCII CR character as
part of an escape sequence or something.  But perhaps this should be a
red-pages function anyway, since it's below the portable level.  I think
that trying to make all of this strictly portable may be an open-ended
bag of worms, and I hope that we won't get hung up on this issue for too
long.

As far as I know, Spice has not yet settled on a system-wide standard
newline sequence for text files.  I think that Hemlock just writes an
ASCII CR between lines, and on input it turns either CR or CRLF into a
line break.  I'm not sure whether the non-Lispy Spice software does the
same.

As for #+ and #-, I guess we all agree that these macros read the item
to be discarded just by calling READ with the recursive flag.  The
implementation is required to find some way to prevent an error from
being signalled if the token to be discarded consists entirely of
constitutents and colons.  #+ and #- should not be used if this simple
paradigm is likely to result in errors or unwanted side effects.  My
suggestion for extending #+ and #- is withdrawn.

-- Scott

∂01-Oct-83  1505	RPG   	* * ** / // ///   
 ∂12-Sep-83  2051	Guy.Steele@CMU-CS-A 	* * ** / // ///    
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 12 Sep 83  20:51:46 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 12 Sep 83 23:41:27 EDT
Date: 12 Sep 83 2349 EDT (Monday)
From: Guy.Steele@CMU-CS-A
To: Scott E. Fahlman <Fahlman@CMU-CS-C>
Subject: * * ** / // ///
CC: moon%scrc-tenex@MIT-MC, bsg%scrc-tenex@MIT-MC, dlw%scrc-tenex@MIT-MC,
    rpg@SU-AI
In-Reply-To: "Scott E. Fahlman's message of 11 Sep 83 16:56-EST"

I have no particular attachment to any scheme for * ** ***.
What I elucidated was what I understood Moon to mean when he said
that the three kinds of variable (+ * /) were relatively independent.
I'm glad I sent out the explicit example, because it looks as though
I misinterpreted what he meant.  If everyone wants * and / kept in lock
step, fine.

∂01-Oct-83  1509	RPG   	370 Common Lisp   
 ∂26-Sep-83  1414	@MIT-MC:DLW%SCRC-TENEX@MIT-MC 	370 Common Lisp    
Received: from MIT-MC by SU-AI with TCP/SMTP; 26 Sep 83  14:14:40 PDT
Received: from SCRC-SHEPHERD by SCRC-TENEX with CHAOS; Mon 26-Sep-83 16:32:51-EDT
Date: Monday, 26 September 1983, 16:36-EDT
From: Daniel L. Weinreb <DLW%SCRC-TENEX@MIT-MC>
Subject: 370 Common Lisp
To: Fahlman@CMU-CS-C
Cc: rpg@SU-AI, moon%SCRC-TENEX@MIT-MC, Steele@CMU-CS-C

I just got a call from Dave Parker at MIT.  He's with something called
the Computation Research Center for Management Information Sciences,
associated with the Sloan School (MIT's business school).  He wants to
do a 370 Lisp, had decided to choose Zetalisp as the dialect, and wanted
to know if anyone had already done it.  I told him about Common Lisp,
about getting public domain software from CMU, etc, and recommended that
he call you.

I think it should be obvious how useful it would be for us to have an
implementation that runs on the 370!

I don't know anything about this guy, but from the phone call he
basically seems to be an easy-to-deal-with person.  I hope he turns out
to be a decent implementor.  If, when you talk to him, you feel he's
serious, then I think we should cooperate fully and encourage him.

∂01-Oct-83  1509	RPG   	370 Common Lisp   
 ∂26-Sep-83  1424	FAHLMAN@CMU-CS-C.ARPA 	370 Common Lisp  
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 26 Sep 83  14:24:00 PDT
Delivery-Notice: While sending this message to SU-AI.ARPA, the
 CMU-CS-C.ARPA mailer was obliged to send this message in 50-byte
 individually Pushed segments because normal TCP stream transmission
 timed out.  This probably indicates a problem with the receiving TCP
 or SMTP server.  See your site's software support if you have any questions.
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Mon 26 Sep 83 17:26:12-EDT
Date: Mon, 26 Sep 1983  17:26 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Daniel L. Weinreb <DLW%SCRC-TENEX@MIT-MC.ARPA>
Cc:   moon%SCRC-TENEX@MIT-MC.ARPA, rpg@SU-AI.ARPA, Steele@CMU-CS-C.ARPA
Subject: 370 Common Lisp
In-reply-to: Msg of 26 Sep 1983 16:36-EDT from Daniel L. Weinreb <DLW%SCRC-TENEX at MIT-MC>


Thanks for the warning.  If this guy is for real (that is, if the
probability of success is high and the amount of hand-holding he will
need is low) we would of course be willing to make our sources available
to him and help out as time allows.  I assume from what you said that
the plan is for this to be widely available as a free or nearly free
system -- if he's trying to make it a commercial product, that raises a
different set of issues.

-- Scott

∂01-Oct-83  1509	RPG   	Speaking of Public Domain Software    
 ∂26-Sep-83  2119	FAHLMAN@CMU-CS-C.ARPA 	Speaking of Public Domain Software   
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 26 Sep 83  21:17:54 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Tue 27 Sep 83 00:20:25-EDT
Date: Tue, 27 Sep 1983  00:20 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Dick Gabriel <RPG@SU-AI.ARPA>
Cc:   fahlman@CMU-CS-C.ARPA
Subject: Speaking of Public Domain Software
In-reply-to: Msg of 26 Sep 83  1420 PDT from Dick Gabriel <RPG at SU-AI>


Dick,

That's good news.  The more you folks can do in the way of maintenance,
extension, and distribution of the public domain code, the happier we
will be.  I want to get on with AI work and my troops want to get into
various user-level things like Hemlock and the Lisp-based shell we're
about to start for Spice, plus assorted yellow-pages goodies.

We will want to have several long talks about the state of various parts
of the system and how best to phase the transfer.  For example, I'm in
the middle of a compiler rewrite now to get in lexical closures,
keywords, packages, and lots of little things.  You definitely won't
want the old compiler.  Also, Skef Wholey is in the middle of an effort
to greatly improve the Perq performance by changing the byte-code set
and hacking intensively on the microcode.  Any blue-pages effort should
use the new microcode set.

A key question is how much manpower you have available for this, and
when.  What fraction of an RPG is available, and who else, if anyone?
I've got some slightly decrepit files that have been orphaned and that
someone could hack right now, if you've got people who are ready to
roll.  An even more interesting question is how you're going to run our
code.  Do you have any Perqs?  A Dec-20 with extended memory (KL
processor)?  Vaxen?  Right now our stuff runs best on the slow Dec-20
emulator and on the Perq, though the Dec-20 Common Lisp from Rutgers
should be usable soon.  I could get you access to the Dec code for the
Vax, I'm sure, but they do funny things to the code at DEC so this
system lags our sources by a month or so.

A thought is that maybe the best way to get into the blue-pages business
would be to port our code to some other system, such as the Sun.  Just a
thought.  Without ucode, it would take a fair amount of work on the
compiler to get decent performance, but you could always write a system
that just interprets the byte codes as the microcode does on a Perq.

As for commercial arrangements, I'd like to see all of this code made
available to the world as freely as possible.  That's how we will
conquer the world.  Companies should pay for people's time at consulting
rates and a copying fee for getting a tape, but the code should be free.
Once you try to milk the company a little the lawyers show up, and if
you try to milk them a lot they might decide not to do a Common Lisp
after all.  If you folks at Stanford want to charge for the improved
version of the code, I guess we'd like a cut, but if you are willing to
keep it free, we'd prefer that.  The only reason I've made it
inconvenient to get at the code is to prevent people from getting things
prematurely.

All of our files are on CMU-CS-C (a Tops-20 system) in subdirectories of
PRVA:<SLISP>.  Help yourself.  If you need to login, use <SLGUEST>,
password "anthrax".  Other people (Symbolics, RMS, Rutgers...) use that
account too, so don't panic if you run into someone else.  Interesting
directories are the following:

PRVA:<SLISP.CODE.NEW>  Lisp-level code implementing Common Lisp.
PRVA:<SLISP.COMPILER.NEW>  The newest version of the old Maclisp-based
  compiler.
PRVA:<SLISP.DOCS> Various documents.  Note especially SLGUTS and RED.
PRVA:<EDITOR*> Versions of the Hemlock editor.

-- Scott

∂01-Oct-83  1509	RPG   	Varia        
 ∂27-Sep-83  1750	FAHLMAN@CMU-CS-C.ARPA 	Varia       
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 27 Sep 83  17:50:45 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Tue 27 Sep 83 20:53:18-EDT
Date: Tue, 27 Sep 1983  20:53 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Dick Gabriel <RPG@SU-AI.ARPA>
Subject: Varia    
In-reply-to: Msg of 27 Sep 83  1031 PDT from Dick Gabriel <RPG at SU-AI>


Dick,

It all sounds good.  Have you hired the Ph.D. level guy yet?  Any of
your possibilities for a machine to run on would work.  It is unclear to
me how far along Symbolics is on the 3600 CL or how legal it will be.
Any of a Dec-20, a Vax, or a quick and dirty port to the Sun would put
you in business.

Snarf away on the code.  Once you're able to run it there, then we can
think about what things you might want to polish.  When the time comes
that you are ready to make changes, we'll set up some sort of
version-control scheme to keep everything in sync.

On the manual: an annotated version would be fine, but probably the
right way to do it is to produce a separate document carefully indexed to
the published manual.  That way you avoid having to deal with copyright
issues.  Or, I suppose, you could offer the result to Digital Press --
the Implementor's Edition of the Common Lisp Manual.  You, Guy and
Digital Press could doubtless work something out.

-- Scott

∂02-Oct-83  1759	RPG   	&rest args   
 ∂02-Oct-83  1447	@MIT-MC:MOON@SCRC-TENEX 	&rest args
Received: from MIT-MC by SU-AI with TCP/SMTP; 2 Oct 83  14:46:20 PDT
Date: Sunday, 2 October 1983  17:47-EDT
From: MOON at SCRC-TENEX
To:   Fahlman at cmuc
cc:   Steele at cmuc, RPG at su-ai, dlw at SCRC-TENEX, bsg at SCRC-TENEX
Subject: &rest args

I think I forgot to include in my message of a couple of days ago
the comment that handling &rest args in an efficient and safe way
is worth doing because it involves solving very much the same problems
as handling lexical closures in an efficient and safe way.  The
same spectrum of techniques (user-supplied declarations that dynamic extent of
the object is permissible, removal of the offending construct through
compile-time analysis, run-time detection of unsafe use of the
object that would otherwise be stack-allocated, implementation of
extra efficient consing and garbage collection so that heap allocation
may be used, phantom stacks, or giving up and telling the users that
the construct is inefficient and should not be used) applies in
either case.
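
As a hedged sketch of the first technique in that list -- the
user-supplied declaration -- here is roughly what it might look like; the
DYNAMIC-EXTENT declaration shown is an assumption for illustration, not
something currently in the language:

(defun sum (&rest numbers)
  ;; Assumed declaration: promises that the &rest list does not escape,
  ;; so an implementation may allocate it on the stack instead of the heap.
  (declare (dynamic-extent numbers))
  (let ((total 0))
    (dolist (n numbers total)
      (incf total n))))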

∂03-Oct-83  0942	RPG   	Random idea: bringing back lexprs
 ∂03-Oct-83  0939	@MIT-MC:BSG%SCRC-TENEX@MIT-MC 	Random idea: bringing back lexprs 
Received: from MIT-MC by SU-AI with TCP/SMTP; 3 Oct 83  09:38:56 PDT
Received: from SCRC-BEAGLE by SCRC-TENEX with CHAOS; Mon 3-Oct-83 12:39:31-EDT
Date: Monday, 3 October 1983, 12:35-EDT
From: Bernard S. Greenberg <BSG%SCRC-TENEX@MIT-MC>
Subject: Random idea: bringing back lexprs
To: MOON%SCRC-TENEX@MIT-MC, Fahlman@CMU-CS-C
Cc: dlw%SCRC-TENEX@MIT-MC, rpg@SU-AI, steele@CMU-CS-C
In-reply-to: The message of 1 Oct 83 01:51-EDT from David A. Moon <MOON at SCRC>

    Date: Saturday, 1 October 1983, 01:51-EDT
    From: David A. Moon <MOON at SCRC>
    I see no reason why a non-cdr-coded Lisp would be unable to allocate a
    list in a stack.  It simply requires that the list take up twice as much
    storage, a price they are already willing to pay when allocating in the
    heap, where the extra storage use costs you much more than it does in
    the stack.
And the garbage collector knowing about structures living in stacks,
which may or may not be alien to someone's implementation.

∂03-Oct-83  1516	RPG   	Random idea: bringing back lexprs
 ∂03-Oct-83  1043	FAHLMAN@CMU-CS-C.ARPA 	Random idea: bringing back lexprs    
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 3 Oct 83  10:43:40 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Mon 3 Oct 83 13:44:48-EDT
Date: Mon, 3 Oct 1983  13:44 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Bernard S. Greenberg <BSG%SCRC-TENEX@MIT-MC.ARPA>
Cc:   dlw%SCRC-TENEX@MIT-MC.ARPA, MOON%SCRC-TENEX@MIT-MC.ARPA, rpg@SU-AI.ARPA,
      steele@CMU-CS-C.ARPA
Subject: Random idea: bringing back lexprs
In-reply-to: Msg of 3 Oct 1983 12:35-EDT from Bernard S. Greenberg <BSG%SCRC-TENEX at MIT-MC>


For some reason I never got Moon's comment about allocating
non-CDR-coded lists in a stack.  Sure, you could do this, but you
couldn't just modify the args as they are pushed (unless the caller
arranged to know just which args are going to be restified).  You would
have to copy the args into a separate stack or some other place on the
control stack.  And, as BSG points out, this can cause substantial pain
for the GC in many implementations.  All this hair is to provide the user
with the fiction of a list when what he really wants (usually, not
always) is to be able to iterate over an indefinite set of args.
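
For concreteness, a sketch of that usual case: the &REST list below is only
iterated over and never captured, so a non-consing &MORE-style mechanism
would serve just as well (the function is purely illustrative):

(defun average (&rest numbers)
  ;; The list is used once, for iteration, and never stored anywhere.
  (if (null numbers)
      0
      (/ (reduce #'+ numbers) (length numbers))))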

Aside from compatibility, I guess I don't see why it is tasteful to
require the user to declare whether the list is to be real-but-slow or
fast-but-downward-only, but not OK to require him to say (by his choice
of &REST vs &MORE) whether he really wants a list at all.  Why mess
with the dangerous concept of volatile lists when the problem is easily
solved by not making a list at all if you don't need a real one?  I
don't really buy Moon's argument that the exercise will do us good,
because there are other things that want to be special-cased if used
downward-only.  Why hassle the user with one more tricky concept?

Oh, well, I seem to be losing this argument.  Guess I'll shelve the idea
until the night before flag day for the second edition and see if I can
zip it past people when they are all busy fighting about Flavors vs.
Smells or whatever.  In the meantime we'll just continue to cons up
&rest args for no good reason.  As I said, it was just a random
late-night idea.

-- Scott

∂03-Oct-83  1517	RPG   	Random idea: bringing back lexprs
 ∂03-Oct-83  1456	@MIT-MC:MOON@SCRC-TENEX 	Random idea: bringing back lexprs  
Received: from MIT-MC by SU-AI with TCP/SMTP; 3 Oct 83  14:56:06 PDT
Date: Monday, 3 October 1983  14:12-EDT
From: MOON at SCRC-TENEX
To:   Scott E. Fahlman <Fahlman at CMU-CS-C.ARPA>
Cc:   Bernard S. Greenberg <BSG%SCRC-TENEX at MIT-MC.ARPA>,
      dlw%SCRC-TENEX at MIT-MC.ARPA, MOON%SCRC-TENEX at MIT-MC.ARPA,
      rpg at SU-AI.ARPA, steele at CMU-CS-C.ARPA
Subject: Random idea: bringing back lexprs
In-reply-to: The message of Mon 3 Oct 1983  13:44 EDT from Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>

It sounds like you also never saw my message saying that &MORE was
an okay idea and would be all right to add to the language, even
though I think that the majority of functions that take &REST
arguments will not be able to switch to it.
---
The effing .ARPA's did their best to prevent you from seeing this
message.  I don't think that's why you didn't see my previous message,
since this time the mailer told me that it couldn't deliver the
message because the host names were bogus.

∂03-Oct-83  1601	RPG   	Random idea: bringing back lexprs
 ∂03-Oct-83  1531	FAHLMAN@CMU-CS-C.ARPA 	Random idea: bringing back lexprs    
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 3 Oct 83  15:31:37 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Mon 3 Oct 83 18:32:57-EDT
Date: Mon, 3 Oct 1983  18:32 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   MOON%SCRC-TENEX@MIT-MC.ARPA
Cc:   Bernard S. Greenberg <BSG%SCRC-TENEX@MIT-MC.ARPA>,
      dlw%SCRC-TENEX@MIT-MC.ARPA, MOON%SCRC-TENEX@MIT-MC.ARPA, rpg@SU-AI.ARPA,
      steele@CMU-CS-C.ARPA
Subject: Random idea: bringing back lexprs
In-reply-to: Msg of 3 Oct 1983  14:12-EDT from MOON at SCRC-TENEX


Dave,

Indeed, I seem to have missed a few of your messages and only seem to
have seen the negative ones.  In any event, having added the message to
the queue for some future discussion and having kicked around some of
the issues, we should probably let it rest for now and the resurrect the
discussion on the full Common-Lisp list a few months hence.  We've all
got enough to do in the meantime, I imagine.

-- Scott

∂05-Oct-83  2020	@MIT-ML:ALAN@MIT-MC 	No No!  Flush it!! 
Received: from MIT-ML by SU-AI with TCP/SMTP; 5 Oct 83  20:20:15 PDT
Date: 5 October 1983 23:11 EDT
From: Alan Bawden <ALAN @ MIT-MC>
Subject:  No No!  Flush it!!
To: Common-Lisp @ SU-AI

It is a Common Lisp philosophy that we should flush any storing functions
where a SETF of an accessor will suffice.  Well, I suggest that we flush
PUSH for this reason.  Instead of (PUSH X (CAR Y)) programmers should be
encouraged to write (SETF (POP (CAR Y)) X)!

∂05-Oct-83  2044	HEDRICK@RUTGERS.ARPA 	Re: No No!  Flush it!! 
Received: from RUTGERS by SU-AI with TCP/SMTP; 5 Oct 83  20:44:11 PDT
Date: 5 Oct 83 23:45:46 EDT
From: Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Subject: Re: No No!  Flush it!!
To: ALAN%MIT-MC@MIT-ML.ARPA
cc: Common-Lisp@SU-AI.ARPA
In-Reply-To: Message from "Alan Bawden <ALAN @ MIT-MC>" of 5 Oct 83 23:11:00 EDT

One of the reasons that I am suspicious about Common Lisp's design is
that I can never be sure when people are serious and when they are
spoofing me.  I would like to think that the idea of turning PUSH
into (SETF (POP was a joke.  But there is enough doubt that I am going
to answer it as if it were serious.

I think of SETF as being used to put values into "fields" of data
structures.  I am prepared to think of (CAR X) as being a field, and
I can even imagine (GET X Y) as one, though I think that is pushing
it.  (Certainly our Common Lisp will have PUTPROP.)  But PUSH 
simply pushes me beyond my ability to think in those terms.  Among
other things, it has side effects.  And (SETF (POP suggests the
wrong side effects, unless you think hard.

Please tell me you weren't serious.
-------

∂05-Oct-83  2052	FAHLMAN@CMU-CS-C.ARPA 	No No!  Flush it!!    
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 5 Oct 83  20:52:30 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Wed 5 Oct 83 23:53:55-EDT
Date: Wed, 5 Oct 1983  23:53 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Alan Bawden <ALAN@MIT-MC.ARPA>
Cc:   Common-Lisp@SU-AI.ARPA
Subject: No No!  Flush it!!
In-reply-to: Msg of 5 Oct 1983 23:11 EDT from Alan Bawden <ALAN at MIT-MC>


Bawden is misinformed.  Common Lisp has no philosophy.  We are held
together only by a shared disgust for all the alternatives.

-- Scott

∂05-Oct-83  2112	GSB@MIT-ML 	No No!  Flush it!!
Received: from MIT-ML by SU-AI with TCP/SMTP; 5 Oct 83  21:12:38 PDT
Date: 6 October 1983 00:06 EDT
From: Glenn S. Burke <GSB @ MIT-ML>
Subject: No No!  Flush it!!
To: Fahlman @ CMU-CS-C
cc: Common-Lisp @ SU-AI, ALAN @ MIT-MC

Ahh, but if we use (setf (pop ...) ...) then we don't have to commit
ourselves to the order of arguments to PUSH, because we don't have one.

P.S.  When Alan and I were talking about this, Carrette walked into
the room and asked what kind of drugs were in the tea.  I don't think
Alan knows if he is serious.

∂05-Oct-83  2229	@MIT-ML:ALAN@MIT-MC 	No No!  Flush it!! 
Received: from MIT-ML by SU-AI with TCP/SMTP; 5 Oct 83  22:29:07 PDT
Date: 6 October 1983 01:28 EDT
From: Alan Bawden <ALAN @ MIT-MC>
Subject:  No No!  Flush it!!
To: HEDRICK @ RUTGERS
cc: Common-Lisp @ SU-AI
In-reply-to: Msg of 5 Oct 83 23:45:46 EDT from Charles Hedrick <HEDRICK at RUTGERS.ARPA>

I wasn't sure whether I was kidding or not originally.  I took a poll of
those Lisp programmers who just happened to be here in the building, and
received the mixed reviews you might expect.  (Everything from "I like it!
I like it!" to "You're crazy Bawden, it's a total loss", about evenly
split.)

I think I am now convinced that the idea could fly.  BUT...  I took a close
look at the SETF method protocol documented on page 80 of the manual, and I
believe that the mechanism is not powerful enough to cover a case like
this one.  It says:

   "The value returned by the accessing form is (of course) affected by
    execution of the storing form, but otherwise either of these forms may
    be evaluated any number of times, and therefore should be free of side
    effects (other than the storing action of the storing form)."

All implementations that I have seen of this so far, including my own
original one, have the property that they depend on this clause of the
contract of a SETF method.  I don't see how to implement SETF of POP
without violating this clause.  I DO think that I could redesign this stuff
to handle this case, but I DON'T think Common Lisp should consider this
extension at this time.

You can all relax now.

∂05-Oct-83  2252	FAHLMAN@CMU-CS-C.ARPA 	No No!  Flush it!!    
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 5 Oct 83  22:52:28 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Thu 6 Oct 83 01:53:56-EDT
Date: Thu, 6 Oct 1983  01:53 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Alan Bawden <ALAN@MIT-MC.ARPA>
Cc:   Common-Lisp@SU-AI.ARPA
Subject: No No!  Flush it!!
In-reply-to: Msg of 6 Oct 1983 01:28 EDT from Alan Bawden <ALAN at MIT-MC>


(-:
Gee, and while we're at it we can flush things like (setq x (sqrt 2))
and just go for (setf (* x x) 2).  Now if I can figure out how to do
matrix inversion by feeding this thing multiple values...
:-)

∂05-Oct-83  2351	Guy.Steele@CMU-CS-A 	You thought you were kidding 
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 5 Oct 83  23:51:21 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP;  6 Oct 83 02:39:54 EDT
Date:  6 Oct 83 0248 EDT (Thursday)
From: Guy.Steele@CMU-CS-A
To: Scott E. Fahlman <Fahlman@CMU-CS-C>
Subject: You thought you were kidding
CC: common-lisp@SU-AI
In-Reply-To: "Scott E. Fahlman's message of 6 Oct 83 00:53-EST"

)-:  Well, maybe we can't use setf to get sqrt, but:

(defsetf * (x &rest y) (newval)
  `(progn (setf ,x (/ ,newval ,@y)) ,newval))

(setf (* x 4) 24) => 24
   and now x => 6

I just tried this in VAX Common LISP.  It works fine.
--Guy

∂06-Oct-83  0013	FAHLMAN@CMU-CS-C.ARPA 	You thought you were kidding    
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 6 Oct 83  00:13:38 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Thu 6 Oct 83 03:15:34-EDT
Date: Thu, 6 Oct 1983  03:15 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Guy.Steele@CMU-CS-A.ARPA
Cc:   common-lisp@SU-AI.ARPA
Subject: You thought you were kidding
In-reply-to: Msg of 6 Oct 83 0248 EDT () from Guy.Steele at CMU-CS-A


Well, now I know how the Sorcerer's Apprentice felt when the brooms
started schlepping water.  (Or was that Mickey Mouse?)  Please, nobody
make any jokes about how to use SETF to launch the nuclear missiles --
Guy will try it and it will work.  This is scary.

∂06-Oct-83  0126	Guy.Steele@CMU-CS-A 	Really kidding, now
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 6 Oct 83  01:26:31 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP;  6 Oct 83 04:17:46 EDT
Date:  6 Oct 83 0426 EDT (Thursday)
From: Guy.Steele@CMU-CS-A
To: common-lisp@SU-AI
Subject: Really kidding, now

(defsetf logand (x y) (newval)
  `(if (= (logand ,y ,newval) ,newval)
       (progn (setf ,x (logior ,newval
                               (logand (random (expt 2 (max (integer-length ,newval)
                                                            (integer-length ,y))))
                                       (lognot ,y))))
              ,newval)
       (error "Impossible setf.")))

So then when you say

(setf (logand x 12) 8)

x will get set to some random value such that when it is anded with
12 you will get 8.  (X might get set to 9 or 10, for example.)
--Guy

∂06-Oct-83  0432	NEDVED@CMU-CS-C.ARPA 	please route 
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 6 Oct 83  04:32:38 PDT
Delivery-Notice: While sending this message to SU-AI.ARPA, the
 CMU-CS-C.ARPA mailer was obliged to send this message in 50-byte
 individually Pushed segments because normal TCP stream transmission
 timed out.  This probably indicates a problem with the receiving TCP
 or SMTP server.  See your site's software support if you have any questions.
Received: ID <NEDVED@CMU-CS-C.ARPA>; Thu 6 Oct 83 07:33:44-EDT
Date: Thu 6 Oct 83 07:33:44-EDT
From: Nedved@CMU-CS-C.ARPA
Subject: please route
To: Common-Lisp@SU-AI.ARPA

Sigh. No "-Request" to send changes to without cluttering people's
mail boxes. 
                ---------------

Date: Thu 6 Oct 83 02:21:54-EDT
From: The Mailer Daemon <Mailer@CMU-CS-C.ARPA>
To: NEDVED@CMU-CS-C.ARPA
Subject: Message of 6-Oct-83 02:21:04

Message failed for the following:
Common-Lisp-Request@SU-AI.ARPA: 550 I don't know anybody named Common-Lisp-Request
	    ------------
Received: ID <NEDVED@CMU-CS-C.ARPA>; Thu 6 Oct 83 02:21:05-EDT
Date: Thu 6 Oct 83 02:21:04-EDT
From: Nedved@CMU-CS-C.ARPA
Subject: please route
To: Common-Lisp-Request@SU-AI.ARPA
cc: Feinberg@CMU-CS-C.ARPA

Please change Feinberg@CMU-CS-C to Feinberg%scrc-vixen@MIT-MC. Thanks!

-Rudy
A CMU Postmaster
-------
-------
-------

∂07-Oct-83  0101	Guy.Steele@CMU-CS-A 	SETF madness :-)   
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 7 Oct 83  01:01:15 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP;  7 Oct 83 03:51:28 EDT
Date:  7 Oct 83 0400 EDT (Friday)
From: Guy.Steele@CMU-CS-A
To: common-lisp@SU-AI
Subject: SETF madness :-)

Who needs UNREAD-CHAR?  Just  (SETF (READ-CHAR stream) char)  and
then (READ-CHAR stream) will return the char.

Who needs RENAME-FILE?  Just  (SETF (TRUENAME stream) newname) ...

Who needs DELETE-FILE?  Just  (MAKUNBOUND (TRUENAME stream)) ...
(Clearly we need MAKUNBOUNDF.)

And, of course, if you need to make the sun stand still for a while,
you do (LET ((NOW (GET-UNIVERSAL-TIME)))
         (LOOP (SETF (GET-UNIVERSAL-TIME) NOW)))
until you hit ↑G.

Who needs CHAR-UPCASE?  Just put the character into the variable X,
and then do (SETF (UPPER-CASE-P X) T).  Presto change-o!

What some implementations call KWOTE [ (DEFUN KWOTE (X) (LIST 'QUOTE X)) ]
can be done as  (SETF (EVAL X) Y)  <=>  (SETQ X (KWOTE Y)).

Finally, who needs PARSE-INTEGER???  If X is a string, and you want to
know what it means as an octal integer, just say
	(SETF (FORMAT NIL "~O" VAL) X)
Simple, eh?

--Quux

∂07-Oct-83  1531	@MIT-XX:BENSON@SPA-NIMBUS 	SETF madness :-)  
Received: from MIT-XX by SU-AI with TCP/SMTP; 7 Oct 83  15:31:24 PDT
Received: from SPA-RUSSIAN by SPA-Nimbus with CHAOS; Fri 7-Oct-83 15:29:53-PDT
Date: Friday, 7 October 1983, 15:29-PDT
From: Eric Benson <BENSON at SPA-Nimbus>
Subject: SETF madness :-)
To: Guy.Steele at CMU-CS-A, common-lisp at SU-AI
In-reply-to: The message of 7 Oct 83 01:00-PDT from Guy.Steele at CMU-CS-A

    Date:  7 Oct 83 0400 EDT (Friday)
    From: Guy.Steele@CMU-CS-A

    Who needs DELETE-FILE?  Just  (MAKUNBOUND (TRUENAME stream)) ...
    (Clearly we need MAKUNBOUNDF.)

No, we don't need MAKUNBOUNDF.  Just make a small change in the
definition of multiple values so that storing zero values is the same as
making something unbound.  Thus

(MAKUNBOUND 'FOO) becomes (SETF (SYMBOL-VALUE 'FOO) (VALUES))
(FMAKUNBOUND 'FOO) becomes (SETF (SYMBOL-FUNCTION 'FOO) (VALUES))
(REMPROP 'FOO 'BAR) becomes (SETF (GET 'FOO 'BAR) (VALUES))

And another thing, why are multiple values restricted to come in
non-negative integral quantities?  We shouldn't unduly restrict users
who may desire fractional, negative or complex numbers of values.

Why is RANDOM restricted to numbers?  I think it should be defined to
return an arbitrary Lisp object at random.  (For those of you with
3600s, try (%MAKE-POINTER (RANDOM 63.) (RANDOM (↑ 2 28.))).  Don't do it
if you have any active buffers, though.)

∂08-Oct-83  0728	WVANROGGEN@DEC-MARLBORO.ARPA 	SETF madness   
Received: from DEC-MARLBORO by SU-AI with TCP/SMTP; 8 Oct 83  07:28:42 PDT
Date: Sat 8 Oct 83 10:29:44-EDT
From: WVANROGGEN@DEC-MARLBORO.ARPA
Subject: SETF madness
To: Guy.Steele@CMU-CS-A.ARPA
cc: common-lisp@SU-AI.ARPA

This is going a bit too far. I can put up with a lot of the cute
features Common Lisp has, but these last suggestions are just going to
be too difficult to implement (at least on a Vax). I'd strongly
recommend *against* these changes to the Excelsior edition.

Instead, we ought to consider what users would really want. If Common
Lisp is supposed to be a general, common language usable by everyone,
we should provide something like:

(setf (eval `(let* ((pretime (get-internal-real-time))
		    (precond <<pre-condition>>))
		,@x
		(and precond
		     <<post-condition>>
		     (< (get-internal-real-time) (+ pretime <<time-limit>>)))))
      t)

for user-supplied <<pre-condition>>, <<post-condition>>, and <<time-limit>>.
Implementors should be encouraged to design SETF so that it also meets
these conditions (of course).

			---Walter
-------

∂10-Oct-83  2320	RPG   	Nasty issues for Common LISP Manual   
 ∂06-Oct-83  2312	STEELE@CMU-CS-C.ARPA 	Nasty issues for Common LISP Manual   
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 6 Oct 83  23:11:47 PDT
Received: ID <STEELE@CMU-CS-C.ARPA>; Fri 7 Oct 83 02:14:58-EDT
Date: Fri 7 Oct 83 02:14:54-EDT
From: STEELE@CMU-CS-C.ARPA
Subject: Nasty issues for Common LISP Manual
To: fahlman@CMU-CS-C.ARPA, rpg@SU-AI.ARPA, moon%scrc-tenex@MIT-ML.ARPA,
    dlw%scrc-tenex@MIT-ML.ARPA

In going over all the comments of the last several months, attempting to
get the manual wrapped up, I have encountered some Very Hard Issues that
I have been avoiding until I had reread all the mail.  The time has come.
For each of the issues below, I review the problem and recommend a
solution that I will use unless I get a lot of flak real soon.
I solicit your advice on all of these issues.

I have committed to sending a "final draft" to Digital press by Tuesday;
it is from this copy that we will begin doing the SCRIBE design for the
phototypesetter.  This "final draft" need not contain the final solutions
to all these problems, but it would be nice to get in as many as
possible, so I beg for quick turnaround.  It sure would be nice to get
this whole thing off my back well before Thanksgiving.
--Guy


(1) The lexical scope of MACROLET macros.  The scope of the defined
names is clearly lexical; there is no problem there.  But what is
visible to the *bodies* of the macros defined in a MACROLET?
Lexical scope doesn't work, because the values don't exist at
macro-expansion time in a compiler, for example.

	(defun foo (x z)
	  (macrolet ((weirdp (y) `(or (numberp ,y) (eql ,y ,x))))
	    (if (weirdp z) ...)))

The compiler has no hope of producing a correct expansion of weirdp
if the reference to x is taken to be to the parameter of foo.
(Note that there would be no difficulty if "x" had appeared in place
of ",x"; but that would mean a different thing.)

Proposed solution: the bodies of macro-definition functions for
macros defined by macrolet are closed in the global environment,
not in the current lexical environment.
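
For illustration, a sketch of how the earlier example would have to be
rewritten under this rule, since the macro body can no longer see FOO's
parameters; passing X explicitly is just one possible workaround:

(defun foo (x z)
  (macrolet ((weirdp (y var) `(or (numberp ,y) (eql ,y ,var))))
    ;; X is supplied at the call site, where it is lexically visible;
    ;; the macro body itself needs only the global environment.
    (if (weirdp z x) 'weird 'normal)))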


(2) How shall tokens be parsed?  The introduction of package syntax has
made this very complicated.  It is almost impossible to provide a
coherent explanation of how vertical bars work if vertical bar remains a
macro character.  There are also questions of what such tokens as 123:foo
and foo:123 mean.  This is not so much an implementation problem as a
documentation problem, but the fact that the explanation has to be
contorted indicates that something is wrong.

Proposed solution: introduce a fifth kind of character (in addition to
constituent, escape, whitespace, and macro character), called a multiple
escape character, the old kind of escape being called a single escape
character.  Change the rules for reading a token so that when a
multiple-escape character is seen, either initially or in the middle of a
token, then all following whitespace, constituent, and macro characters
are treated as alphabetic constituents; any single escape character
causes the next character to be treated as a constituent; and any
multiple escape character causes you to revert to the mode of
accumulating until a whitespace or terminating macro character is seen.
Put more simply, a vertical bar toggles whether you are in vertical-bar
mode, but never terminates the token in and of itself.

This has the consequence that |foo|bar|baz| is a single token that will
be interpreted as a symbol with print name "fooBARbaz", rather than being
three distinct symbols.  It also has the more desirable consequence that
|foo|:|bar| naturally gets treated as a single token.
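
A sketch of that consequence at the reader level, assuming the
multiple-escape rule above is in force:

(read-from-string "|foo|bar|baz|")
;; => the single symbol whose print name is "fooBARbaz"
;;    (not three separate symbols |foo|, BAR, and |baz|)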

The rule is then: scan over a single token.  Then the characters must
fall into one of the following patterns:
	number-syntax		a number
	no-package-markers	a symbol
	:xxxxx			a keyword
	xxxxx:yyyyy		a symbol yyyyy in package xxxxx
	xxxxx#:yyyyy		an internal symbol yyyyy in package xxxxx
It "is an error" for xxxxx or yyyyy to have the syntax of a number.
It "is an error" for there to be any other pattern of package markers
(therefore an implementation can define what xxxxx: or xxx:yyy:zzz means).


(3) What shall the infix internal package syntax be?  Currently it
is "#:".  If the proposed solution to problem (2) above is implemented,
then there is no remaining problem with having to look ahead two characters
after a vertical bar; given that # is a non-terminating macro character,
|foo|#:bar will naturally be read as a single token with no difficulty.
But there remain a few smaller difficulties.  One is explaining that
infix #: syntax has nothing whatsoever to do with the use of "#" as
a macro character, and indeed depends heavily on that particular use
of "#" *not* being interpreted as a macro character.  Another is the
confusion of the "#|" reader macro when it encounters "|#" within
|foo|#:|bar|.  A third, primarily aesthetic, is simply avoiding making
any more characters "magic" in the syntax of tokens than is necessary;
it would be nice to avoid wiring "#" into the syntax of symbols.

Proposed solution: use "::" instead of "#:" for internal package references.


(4) End-of-file and the recursive-p argument.  It has been pointed out
that if the recursive-p argument is true and you hit end-of-file, then
you must be in the middle of reading an object, and therefore it is in
order always to signal an error, regardless of the value of eof-errorp of
the top-level call.  (However, having the recursive-p argument be true
differs from having eof-errorp be true, because recursive-p also controls
whitespace preservation and the scoping of #n=.  Also, the method of
error handling may be interested in being able to locate the top-level
call.)

Proposed solution: adopt this interpretation of recursive-p, changing the
definition on page 290.


(5) How shall #+ and #- operate?  In current implementations there is
typically the problem that a strange kind of number or a symbol that
refers to a non-existent package will not be skipped properly even when
conditionalized out by #+ or #-.

Proposed solution: define #+ and #-, when skipping, to perform a READ
operation and then return zero values.  All the normal operations of
reading are carried out, including invocation of macro characters,
with the following exceptions:
(a) all tokens are completely uninterpreted and are treated as being NIL;
    that is, when a token is scanned, its characters are discarded and
    NIL is returned as its value.  (This stipulation is important because
    user-defined macro characters may read something following, and you have
    to say what they see.)
(b) #\ will not complain about reading the name of an unknown character.
(c) #*, #B, #O, #X, #nR will swallow a following token but not complain
    about its syntax.
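
For illustration, a sketch of the kind of input this is meant to make
safe; the feature name and package name are invented, and the package is
assumed not to exist:

;; Under the proposal this reads without error even though the FROBOZZ
;; package does not exist: the skipped token is discarded uninterpreted.
(read-from-string "(list 1 #+frobozz-lisp frobozz:internal-frob 2)")
;; => (LIST 1 2), plus the index as a second value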

An optional sub-proposal, which I will *not* put in, despite the fact
that I favor it, unless you all agree it is a very good thing, is to make
this machinery accessible to the user: stipulate that the function read
takes one more optional argument called suppress-p, and that
macro-character functions receive suppress-p as an additional argument
(this is how #\ can decide whether or not to complain).


(6) Must the compiler preserve the eql-ness of constants defined
by defconstant?

Proposed solution:  Yes.  The compiler may perform substitutions of
constants only when it can guarantee preservation of the semantics
of defconstant as a global variable whose value is fixed.
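
A sketch of what this requirement rules out; the names are illustrative:

(defconstant the-empty-entry (list :empty))

(defun empty-entry () the-empty-entry)

;; Even in compiled code the value returned here must remain EQL to the
;; constant's value; the compiler may not substitute a fresh copy of the
;; list.  (eql (empty-entry) the-empty-entry) must be true.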


(7) Must there be a new data type lexical-closure?

Proposed solution:  No.  Let "closure" be a "conceptual data type",
like a-list.  The predicate "functionp" must be true of a closure,
just as it must be true of a symbol or of a list whose car is LAMBDA
(these elucidations need to be added to the description of functionp,
by the way).  A closure may be implemented as a list whose car is
CLOSURE, for example, or perhaps more tastefully as a structure.
A closure for compiled code might have a type that is a subtype of
COMPILED-FUNCTION, but need not.


(8) The functions READ-BINARY-OBJECT and WRITE-BINARY-OBJECT are
so general that they are giving implementors fits.

Proposed solution: flush them.  Individual implementors may choose
to demote them to the red pages, possibly supporting only restricted
cases.  For portable purposes there are READ-BYTE and WRITE-BYTE.
Leave any other binary I/O for the second edition.
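
For reference, a sketch of the portable level that remains (the file name
is made up):

;; Write a few raw 8-bit bytes, then read them back.
(with-open-file (out "frob.bin" :direction :output
                     :element-type '(unsigned-byte 8)
                     :if-exists :supersede)
  (dolist (b '(1 2 3 255))
    (write-byte b out)))

(with-open-file (in "frob.bin" :direction :input
                    :element-type '(unsigned-byte 8))
  (list (read-byte in) (read-byte in) (read-byte in) (read-byte in)))
;; => (1 2 3 255)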


(9) There are arguments to the effect that a single pathname in
*load-pathname-defaults* cannot possibly interact correctly with
both LOAD and COMPILE-FILE.  The argument essentially centers
on the use of non-standard file types.  There have been other
complaints about this defaulting mechanism as being too restrictive
or culturally incompatible with certain host environments.

Proposed solution: flush the variables *load-pathname-defaults*,
*compile-file-set-default-pathname*, and *load-set-default-pathname*.
Flush the :set-default-pathname argument to both load and compile-file.
Make the first argument to both load and compile-file be required, not
optional.


(10) There is a functional hole in the language in that there is
no good way to copy a structure.  One implementor has proposed that
a function COPY-STRUCTURE is needed.  This is not necessarily a
good idea, particularly since there is no STRUCTURE type specifier.
One could also just invent a general COPY function.  This has problems,
too.

Proposed solution: let there be a new defstruct option, :COPIER,
analogous to :PREDICATE and :CONSTRUCTOR.  If omitted, you get
a simple copying function that is more or less equivalent to
extracting all the fields from the given structure and feeding
their values to the standard constructor.  The advantage of this
is that if a structure has complicated internal invariants (such
as not sharing certain substructure with other instances of the
structure) then the copier, like the constructor, can maintain
these invariants.  Question: is this (a per-structure-type copy
function) sufficient for people's needs?
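
A sketch of how the proposed option might look in use (the structure and
accessor names are invented):

(defstruct (point (:copier copy-point))
  x
  y)

(let* ((p1 (make-point :x 1 :y 2))
       (p2 (copy-point p1)))
  ;; P2 is a fresh POINT with the same field values as P1.
  (list (point-x p2) (point-y p2) (eq p1 p2)))
;; => (1 2 NIL)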


(11) Should read-delimited-list modify the readtable?
As currently documented, read-delimited-list does *not* modify
the readtable entry for the delimiter character; it is suggested
that this is the responsibility of the user of read-delimited-list.
Others have suggested that read-delimited-list itself should
temporarily, under an unwind-protect, make the delimiter character
have macro syntax equivalent to, say, ")".

Proposed solution: leave it as is, and put in a stronger warning
about setting it up yourself.
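
A sketch of the setup that warning would describe: before calling
READ-DELIMITED-LIST with, say, #\], give that character the same
terminating-macro syntax as #\):

;; Make ] terminate tokens just as ) does, so the delimiter is recognized.
(set-macro-character #\] (get-macro-character #\) ))

(defun read-bracketed-list (&optional (stream *standard-input*))
  (read-delimited-list #\] stream t))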


(12) There have been complaints that letting ~T output two spaces
when the column cannot be determined is relatively useless.
It has been suggested that the behavior for strings be adopted
in all cases where the absolute column cannot be determined:
assume that one is at column zero as of the beginning of the
format operation, and format can keep track of things from there.

Proposed solution: adopt this idea.  Put in a note that code will be
portable among more devices if all format strings actually do begin at
column zero or start out with ~% or ~&.


(13) What should enough-namestring really do?  It has been pointed out
that if the first argument to enough-namestring is a partially-specified
pathname (missing components) then it cannot possibly obey the given
definition in general.

Proposed solution: change the requirement to:

(merge-pathnames (enough-namestring <pathname> <defaults>) <defaults>)
   <=> (merge-pathnames <pathname> <defaults>)

I think this is what I originally meant, but muddled it up.  Will this work?


(14) Are (terpri) and (write-char #\Return) the same or different?
And what of (format t "~%")?  How many characters are in the string "
" ?

Proposed solution:  It is imperative that Common LISP shield the
user from character-set problems for ordinary textual cases, even at
the cost of some mapping.  Define (terpri) and (write-char #\Return)
to be identical in behavior.  Note that (write-char #\Linefeed)
is implementation-dependent; in particular, it is permissible for
an implementation to make (write-char #\Linefeed) emit an ASCII ↑J
unless it immediately follows a (write-char #\Return) for which
the sequence ↑M ↑J was emitted.  (This is what MacLISP does.)
This has repercussions on the definition of filepos on text files.
The string "
" may be of length 1 or 2, depending on whether CRLF sequences are
mapped to simply #\Return or not in that implementation.  However,
printing such a string will always produce the effect of a single
end-of-line operation.
Note that the internal encoding of #\Return need not be as the ASCII
↑M; indeed, it might well be ↑J!  (We should have called it #\Newline,
except for the fact that #\Return reminds people so nicely of the
Return key on most keyboards.)
All this applies only to files opened with :element-type string-char
or character.


(15) What are the precise semantics of eval-when?

Proposed solution:  This is what is used in Spice LISP.

When EVAL sees an EVAL-WHEN form, it looks for the word EVAL,
and evaluates the forms in the body.  That's all there is to it.

When the compiler processes a file, it first binds two flag variables:
E-W-LOAD to true and E-W-COMPILE to false.  It then processes each
form in the file.

The compiler processes a form as follows.  First, if E-W-COMPILE is true,
then give the form to EVAL and discard the result.  Next, if E-W-LOAD is
true, then there are several cases.  If the form is an EVAL-WHEN, then
rebind the two flag variables, each one being true iff its corresponding
word is present in the EVAL-WHEN form, and recursively process the forms
contained; the flag variables are unbound when the recursive processing
of the EVAL-WHEN ends.  If E-W-LOAD is true and the form is one of
several things, such as DEFUN, then code is compiled.  Certain actions
are taken for some forms regardless of the state of E-W-LOAD, such
as recognizing macros defined by DEFMACRO for compilation purposes.
The compiler may choose to apply MACROEXPAND or MACROEXPAND-1 to
a form at any time in order to determine what kind of form it is,
and it must do so before deciding that a form is not of a certain
kind (such as EVAL-WHEN or DEFMACRO).

This model has several interesting consequences.  One is that when
EVAL-WHEN forms are nested, the successive values of E-W-COMPILE may
oscillate, but once E-W-LOAD becomes false no more nested EVAL-WHEN forms
are processed (in the sense of rebinding the flag variables, although
they will be evaluated, and thus subject to the EVAL word, if E-W-COMPILE
is true).

Consider this example:
(eval-when (compile load)
   (eval-when (compile eval)
      (setq x (+ x 1))))
The variable x will be incremented *twice* at compile time!
It gets incremented once because the inner EVAL-WHEN form is given
to EVAL at the direction of the outer EVAL-WHEN form; EVAL then evaluates
the SETQ because it sees EVAL there.  It gets incremented again because
the outer EVAL-WHEN form, which contains LOAD, directs that the inner
EVAL-WHEN form be "processed", and because the inner EVAL-WHEN contains
COMPILE, the SETQ will be evaluated at compile time at the direction of
the inner EVAL-WHEN.

Question: is this model consistent with what LISP Machine LISP does now?
If not, why not?  Is this an acceptable model?
-------

∂10-Oct-83  2320	RPG   	[STEELE: Nasty issues for Common LISP Manual]   
 ∂07-Oct-83  1005	FAHLMAN@CMU-CS-C.ARPA 	[STEELE: Nasty issues for Common LISP Manual]  
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 7 Oct 83  10:04:59 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Fri 7 Oct 83 13:07:38-EDT
Date: Fri, 7 Oct 1983  13:07 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   steele@CMU-CS-C.ARPA, rpg@SU-AI.ARPA, moon%SCRC-TENEX@MIT-ML.ARPA,
      dlw%SCRC-TENEX@MIT-ML.ARPA, fahlman@CMU-CS-C.ARPA
Subject: [STEELE: Nasty issues for Common LISP Manual]


< My comments in pointy brackets -- SEF >

(1) The lexical scope of MACROLET macros.

Proposed solution: the bodies of macro-definition functions for
macros defined by macrolet are closed in the global environment,
not in the current lexical environment.

< Actually, given all this scoping hair, I would favor flushing
MACROLET.  It is confusing as hell and not very useful.  Does anyone
have an important use for this facility?  However, I would be willing to
live with the solution above. >

(2) How shall tokens be parsed?  

Proposed solution: introduce a fifth kind of character (in addition to
constituent, escape, whitespace, and macro character), called a multiple
escape character, the old kind of escape being called a single escape
character.  Change the rules for reading a token so that when a
multiple-escape character is seen, either initially or in the middle of a
token, then all following whitespace, constituent, and macro characters
are treated as alphabetic constituents; any single escape character
causes the next character to be treated as a constituent; and any
multiple escape character causes you to revert to the mode of
accumulating until a whitespace or terminating macro character is seen.
Put more simply, a vertical bar toggles whether you are in vertical-bar
mode, but never terminates the token in and of itself.

It "is an error" for xxxxx or yyyyy to have the syntax of a number.
It "is an error" for there to be any other pattern of package markers
(therefore an implementation can define what xxxxx: or xxx:yyy:zzz means).

< I like this solution.  It is painful to add another magic character
type, but this makes everything much simpler and clearer than plans that
tried to handle || as a macro. >

(3) Proposed solution: use "::" instead of "#:" for internal package
references.

< I have discussed this at some length with Guy, and strongly favor the
change to "::".  Again, we are avoiding a lot of conceptual hair by
doing this. >

(4) End-of-file and the recursive-p argument.  It has been pointed out
that if the recursive-p argument is true and you hit end-of-file, then
you must be in the middle of reading an object, and therefore it is in
order always to signal an error, regardless of the value of eof-errorp of
the top-level call.  (However, having the recursive-p argument be true
differs from having eof-errorp be true, because recursive-p also controls
whitespace preservation and the scoping of #n=.  Also, the method of
error handling may be interested in being able to locate the top-level
call.)

Proposed solution: adopt this interpretation of recursive-p, changing the
definition on page 290.

< Actually, I think one more epicycle is needed: EOF should be
considered as a terminating character followed by the end of file.  If a
terminator would cause the recursive call to exit normally, it does
that, and the EOF problem is worried about by the caller.  If you're
down in a recursive-p call in a situation where the terminator would not
exit normally, that's when you ALWAYS signal the error. >

(5) How shall #+ and #- operate?

Proposed solution: define #+ and #-, when skipping, to perform a READ
operation and then return zero values.  All the normal operations of
reading are carried out, including invocation of macro characters,
with the following exceptions:
(a) all tokens are completely uninterpreted and are treated as being NIL;
    that is, when a token is scanned, its characters are discarded and
    NIL is returned as its value.  (This stipulation is important because
    user-defined macro characters may read something following, and you have
    to say what they see.)
(b) #\ will not complain about reading the name of an unknown character.
(c) #*, #B, #O, #X, #nR will swallow a following token but not complain
    about its syntax.

< This proposal sounds like the lesser of evils to me.  I'd go with it.>

An optional sub-proposal, which I will *not* put in, despite the fact
that I favor it, unless you all agree it is a very good thing, is to make
this machinery accessible to the user: stipulate that the function read
takes one more optional argument called suppress-p, and that
macro-character functions receive suppress-p as an additional argument
(this is how #\ can decide whether or not to complain).

< We will obviously have something like this inside, but rather than
hair up the language any further in this area, I'd send in suppress-p
via a special and not tell the user about it.  I wouldn't object to
treating it as an argument, however, if other people think the user
wants to get at this. >

(6) Must the compiler preserve the eql-ness of constants defined
by defconstant?

Proposed solution:  Yes.  The compiler may perform substitutions of
constants only when it can guarantee preservation of the semantics
of defconstant as a global variable whose value is fixed.

< I agree with this. >

(7) Must there be a new data type lexical-closure?

Proposed solution:  No.  Let "closure" be a "conceptual data type",
like a-list.  The predicate "functionp" must be true of a closure,
just as it must be true of a symbol or of a list whose car is LAMBDA
(these elucidations need to be added to the description of functionp,
by the way).  A closure may be implemented as a list whose car is
CLOSURE, for example, or perhaps more tastefully as a structure.
A closure for compiled code might have a type that is a subtype of
COMPILED-FUNCTION, but need not.

< No objection.  On the other hand, I wouldn't object to a real
data-type either.  A careful pass is needed to make the manual
self-consistent in this area in any event. >

(8) The functions READ-BINARY-OBJECT and WRITE-BINARY-OBJECT are
so general that they are giving implementors fits.

Proposed solution: flush them.

< Yes!!!  We will implement some parts of this facility as
implementation-dependent calls usable by FASLOAD and friends.  For
example, we want to be able to quickly dump a bignum or vector of
numbers into a binary stream of 8-bit bytes.  But having to be able to
read and dump ALL of these data types from ALL kinds of binary streams
creates a huge case analysis task, only a few of whose branches are
useful.  Leave stuff on this level to the implementor's discretion. >

(9) Proposed solution: flush the variables *load-pathname-defaults*,
*compile-file-set-default-pathname*, and *load-set-default-pathname*.
Flush the :set-default-pathname argument to both load and compile-file.
Make the first argument to both load and compile-file be required, not
optional.

< I am strongly in favor of this.  The slight added convenience of being
able to just say (LOAD) is not worth the added conceptual hair of all
this defaulting.  Plus, the current scheme has some bugs and people who
have not used ITS will find the sticky-name convention confusing.  It's
easy enough to define a COMPILE-AND-LOAD or RELOAD option locally, if
that's what people want. >

(10) There is a functional hole in the language in that there is
no good way to copy a structure.  One implementor has proposed that
a function COPY-STRUCTURE is needed.  This is not necessarily a
good idea, particularly since there is no STRUCTURE type specifier.
One could also just invent a general COPY function.  This has problems,
too.

Proposed solution: let there be a new defstruct option, :COPIER,
analogous to :PREDICATE and :CONSTRUCTOR.  If omitted, you get
a simple copying function that is more or less equivalent to
extracting all the fields from the given structure and feeding
their values to the standard constructor.  The advantage of this
is that if a structure has complicated internal invariants (such
as not sharing certain substructure with other instances of the
structure) then the copier, like the constructor, can maintain
these invariants.  Question: is this (a per-structure-type copy
function) sufficient for people's needs?

< Hirsute, but workable.  Why is a universal COPY function (top-level
structure only in structured objects) not workable? >

(11) Should read-delimited-list modify the readtable?
As currently documented, read-delimited-list does *not* modify
the readtable entry for the delimiter character; it is suggested
that this is the responsibility of the user of read-delimited-list.
Others have suggested that read-delimited-list itself should
temporarily, under an unwind-protect, make the delimiter character
have macro syntax equivalent to, say, ")".

Proposed solution: leave it as is, and put in a stronger warning
about setting it up yourself.

< OK by me.  This facility will not be used by the casual user anyway,
and keeping it simple will help to prevent unforeseen interactions with
other stuff in the language. >

(12) There have been complaints that letting ~T output two spaces
when the column cannot be determined is relatively useless.
It has been suggested that the behavior for strings be adopted
in all cases where the absolute column cannot be determined:
assume that one is at column zero as of the beginning of the
format operation, and format can keep track of things from there.

Proposed solution: adopt this idea.  Put in a note that code will be
portable among more devices if all format strings actually do begin at
column zero or start out with ~% or ~&.

< I'm in favor.  But make that note say "... if all format strings
CONTAINING ~T actually do begin at column zero..." >

(13) What should enough-namestring really do?
Proposed solution: change the requirement to:

(merge-pathnames (enough-namestring <pathname> <defaults>) <defaults>)
   <=> (merge-pathnames <pathname> <defaults>)

< Looks OK to me. >

(14) Are (terpri) and (write-char #\Return) the same or different?
And what of (format t "~%")?  How many characters are in the string "
" ?

< Well, I could live with Guy's proposal, but a bit unhappily.  Let me
propose the following slight modification: require every implementation
to recognize NEWLINE as a character name and require (write-char
#\Newline) to be the same as (terpri).  There is NO requirement that
#\Return and #\Newline be the same.  This will allow (but not require)
implementations to give #\Return and #\Linefeed their traditional ASCII
meanings, and will allow #\Newline to be the magic thing that "does the
right thing" in write-char.  For Spice, we have decided to go with the
unix convention of having a single LF between lines in a file, and this
will give us the flexibility to come up with an elegant solution: CR and
LF are the usual ASCII chars, and NEWLINE is the same as LF. >
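
A sketch of the behavior that modification asks for; nothing here is
settled, and the file name is illustrative:

(with-open-file (s "scratch.txt" :direction :output :if-exists :supersede)
  (write-char #\Newline s)    ; required to be identical to (terpri s)
  (terpri s))
;; #\Return and #\Linefeed, by contrast, may keep their literal ASCII
;; meanings; they need not be the same character as #\Newline.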
                                               
(15) What are the precise semantics of eval-when?

< We are happy to stick with the model Guy described or to accept minor
variations on this theme.  I really don't care what nested EVAL-WHENs
do, so if some slightly different convention makes life easier for other
implementations, I can live with that, as long as it is implementable
and clear. >

∂10-Oct-83  2321	RPG   	Nasty issues for Common LISP Manual   
 ∂07-Oct-83  1915	@MIT-ML:Moon@SCRC-TENEX 	Nasty issues for Common LISP Manual
Received: from MIT-ML by SU-AI with TCP/SMTP; 7 Oct 83  19:14:26 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Fri 7-Oct-83 22:14:08-EDT
Date: Friday, 7 October 1983, 22:09-EDT
From: David A. Moon <Moon@SCRC-TENEX>
Subject: Nasty issues for Common LISP Manual
To: STEELE%CMU-CS-C@MIT-ML
Cc: fahlman%CMU-CS-C@MIT-ML, rpg%SU-AI@MIT-ML, moon@SCRC-TENEX,
    dlw@SCRC-TENEX, bsg@SCRC-TENEX
In-reply-to: The message of 7 Oct 83 02:14-EDT from STEELE at CMU-CS-C,
             The message of 7 Oct 83 13:07-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

    Date: Fri 7 Oct 83 02:14:54-EDT
    From: STEELE@CMU-CS-C.ARPA

    (1) The lexical scope of MACROLET macros.  The scope of the defined
    names is clearly lexical; there is no problem there.  But what is
    visible to the *bodies* of the macros defined in a MACROLET?

I don't see that this is much of a problem.  There are two sets of
lexical environments involved; the compile-time set and the run-time
set.  Obviously the body of a local macro is in the compile-time lexical
environment.  The run-time lexical environment does not exist at compile
time; only the compiler's model of what it will be exists.  Now, do we
have any special forms that create bindings in the compile-time lexical
environment, or is it always identical to the global environment?  A
while back we had a discussion of compiling things like
	(LET ((A (FOO)))
	  (DEFUN BAR ()
	    ...))
I believe we decided that A is bound in a run-time lexical environment,
not a compile-time environment.  Then we have to ask what
	(MACROLET ((FOO ...))
	  (DEFUN BAR ()
	    (MACROLET ((BAZ ...))
	      ...)))
does, specifically whether FOO is defined in the compile-time lexical
environment or the run-time lexical environment or both.  I suggest
just the run-time lexical environment.

It's fine with me if we say that there is no way to create local
compile-time lexical environments, so that the compile-time lexical
environment is always the global environment.

There is, of course, one special form that creates a degenerate
compile-time environment that contains only special bindings, not
lexical bindings: COMPILER-LET.  No closure is required.

    Proposed solution: the bodies of macro-definition functions for
    macros defined by macrolet are closed in the global environment,
    not in the current lexical environment.

    < Actually, given all this scoping hair, I would favor flushing
    MACROLET.  It is confusing as hell and not very useful.  Does anyone
    have an important use for this facility?  However, I would be willing to
    live with the solution above. --SEF>

We have numerous places in "Zetalisp" where we wish we had MACROLET but
have to define some macros and then undefine them (which loses because
you can't do incremental recompilation) or else give them funny names
and hope no one else uses those names (which makes the code
unnecessarily hard to read).  So I vote that we keep MACROLET and adopt
Steele's solution of using the global environment with my gloss
explaining why that is obviously the only right thing.
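
A minimal sketch of the rule being converged on, with made-up names: the
expander body of a local macro is closed in the global environment, so its
free references do not see surrounding lexical bindings.

        (let ((size 16))                ; a run-time lexical binding
          (macrolet ((make-buf () `(make-array ,size)))
            (make-buf)))
        ;; The SIZE inside the expander body is evaluated at macro-expansion
        ;; time and, under the proposed rule, is looked up in the global
        ;; environment -- it does NOT refer to the LET's binding of SIZE,
        ;; so this code is in error unless SIZE is globally defined.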


    (2) How shall tokens be parsed?  The introduction of package syntax has
    made this very complicated.  It is almost impossible to provide a
    coherent explanation of how vertical bars work if vertical bar remains a
    macro character.  There are also questions of what such tokens as 123:foo
    and foo:123 mean.

Back in "Zetalisp", this was simple.  Any token that ended in a colon
was a package prefix, regardless of whether up until the colon it looked
like a number or like a symbol; vertical bars were not allowed before
colons; and a package prefix worked by reading one expression relative
to the specified package, regardless of whether it was a symbol, a
number, a list, something involving macro characters, or whatever.  Thus
it is quite clear what 123:foo means (the external symbol foo of the 123
package, probably sold by Lotus) and what foo:123 means (same as 123,
numbers don't care about packages).

This got broken and complicated by various "simplifications".

    Proposed solution: ...a vertical bar toggles whether you are in vertical-bar
    mode, but never terminates the token in and of itself.
Okay.  A change, but not a terrible one.

I counterpropose that we return to the "Zetalisp" package prefix syntax,
in other words that we not allow vertical bars to be used around package
names, only around symbol names.  This solves #3 as well.  I already went
to considerable trouble to implement vertical bars in package-prefix tokens,
but I wouldn't mind ripping that code out again if it made the language
easier to understand and nicer.

If you wish to restrict the successor of a package-prefix token to be
a symbolic token, rather than an arbitrary expression, that's okay with me.
But making them separate tokens simplifies things.


    (3) What shall the infix internal package syntax be?  Currently it
    is "#:".... Another problem is the
    confusion of the "#|" reader macro when it encounters "|#" within
    |foo|#:|bar|.

    Proposed solution: use "::" instead of "#:" for internal package references.

    < I have discussed this at some length with Guy, and strongly favor the
    change to "::".  Again, we are avoiding a lot of conceptual hair by
    doing this. --SEF>

I could live with changing #: to ::, although I had hopes to use :: for
something else and the symmetry between infix #: and prefix #: is pleasant.
I would rather keep #: and flush vertical bar in package-prefix tokens.
I don't understand what conceptual hair Fahlman is alluding to; perhaps
he means non-token-terminating macro characters?
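
For concreteness, the two candidate spellings of a reference to the internal
symbol BAR of package FOO (names hypothetical):

        foo#:bar        ; current infix syntax
        foo::bar        ; proposed replacement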


    (4) End-of-file and the recursive-p argument.  It has been pointed out
    that if the recursive-p argument is true and you hit end-of-file, then
    you must be in the middle of reading an object, and therefore it is in
    order always to signal an error, regardless of the value of eof-errorp of
    the top-level call.  (However, having the recursive-p argument be true
    differs from having eof-errorp be true, because recursive-p also controls
    whitespace preservation and the scoping of #n=.  Also, the method of
    error handling may be interested in being able to locate the top-level
    call.)

    Proposed solution: adopt this interpretation of recursive-p, changing the
    definition on page 290.

    < Actually, I think one more epicycle is needed: EOF should be
    considered as a terminating character followed by the end of file.  If a
    terminator would cause the recursive call to exit normally, it does
    that, and the EOF problem is worried about by the caller.  If you're
    down in a recursive-p call in a situation where the terminator would not
    exit normally, that's when you ALWAYS signal the error. --SEF>

It is not true that end of file while doing recursive reading is always an
error, as I have pointed out before.  Consider the recursive read-char
performed by the semicolon reader-macro; a comment that ends in an end of
file should be as good as a comment that ends in a carriage return followed
by an end of file.  Otherwise users of non-record-oriented systems, that don't
force files to end in carriage returns, will think we are stupid.  I think
this is what Fahlman is getting at in his comment.

What I said about this last time was a bit confused.  Here is a
better proposal.  When recursive-p is t, then eof-error-p = nil means
that eof-value should be returned from the inner call to read if it
hits an eof, regardless of the top-level read's arguments; and
eof-error-p = t means that an eof-in-middle-of-object error should
be signalled, again regardless of the top-level read's arguments.
Then fix all examples that say (READ stream NIL NIL T) to be
(READ stream T NIL T).

It may be that only READ-CHAR would ever be called with arguments
of stream NIL eof-value T, and READ would never be called that way, but
they must behave consistently, of course.
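
For concreteness, the calling convention under this proposal (the arguments
are stream, eof-error-p, eof-value, and recursive-p):

        ;; e.g. inside the semicolon macro: eof just ends the comment
        (read-char stream nil nil t)

        ;; e.g. inside a list: eof in the middle of an object always signals
        (read stream t nil t)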

Why is "eof-error-p" spelled "eof-errorp"?  Isn't that inconsistent with
the new "-p" rules?


    (5) How shall #+ and #- operate?  In current implementations there is
    typically the problem that a strange kind of number or a symbol that
    refers to a non-existent package will not be skipped properly even when
    conditionalized out by #+ or #-.

    Proposed solution: define #+ and #-, when skipping, to perform a READ
    operation and then return zero values.  All the normal operations of
    reading are carried out, including invocation of macro characters,
    with the following exceptions:
    (a) all tokens are completely uninterpreted and are treated as being NIL;
	that is, when a token is scanned, its characters are discarded and
	NIL is returned as its value.  (This stipulation is important because
	user-defined macro characters may read something following, and you have
	to say what they see.)
    (b) #\ will not complain about reading the name of an unknown character.
    (c) #*, #B, #O, #X, #nR will swallow a following token but not complain
	about its syntax.

This is the right thing.  I would add
(d) #A, #S, "#.", and "#," will read a list but not attempt to interpret
it or build an array or structure; the whole construct just reads as NIL.
(e) #= is ignored.
(f) ## reads as NIL.
(g) #>, #<return>, etc. continue to signal errors.

    An optional sub-proposal, which I will *not* put in, despite the fact
    that I favor it, unless you all agree it is a very good thing, is to make
    this machinery accessible to the user: stipulate that the function read
    takes one more optional argument called suppress-p, and that
    macro-character functions receive suppress-p as an additional argument
    (this is how #\ can decide whether or not to complain).

This is good.  Definitely it should be made available to the user.
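
A rough sketch of what a user macro-character function might look like under
the sub-proposal; the extra suppress-p argument is the proposed convention,
and PROCESS-DIRECTIVE is a made-up stand-in for whatever the macro would
normally do with what it reads:

        (defun bang-reader (stream char suppress-p)
          (declare (ignore char))
          ;; Consume one following form, propagating suppression to the
          ;; recursive READ via the proposed extra optional argument.
          (let ((form (read stream t nil t suppress-p)))
            (if suppress-p
                nil                          ; being skipped by #+ or #-
                (process-directive form))))  ; PROCESS-DIRECTIVE is made up

        (set-macro-character #\! #'bang-reader)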


    (6) Must the compiler preserve the eql-ness of constants defined
    by defconstant?

    Proposed solution:  Yes.  The compiler may perform substitutions of
    constants only when it can guarantee preservation of the semantics
    of defconstant as a global variable whose value is fixed.

This is good.  After some experimentation & lossage we settled on this
interpretation of DEFCONSTANT (which already existed internally in
"Zetalisp" under a different name), and experience indicates it is
definitely right.
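
A small illustration of the requirement, with made-up names:

        (defconstant prime-table '(2 3 5 7 11))

        (defun f () prime-table)
        (defun g () prime-table)

        ;; Under this interpretation, (eql (f) (g)) and (eql (f) prime-table)
        ;; remain true after compilation; the compiler may substitute the
        ;; value only where that is preserved.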


    (7) Must there be a new data type lexical-closure?

    Proposed solution:  No.  Let "closure" be a "conceptual data type".

Sure.  We can always "actualize" this data type later if it seems
desirable and the implementors don't mind.


    (8) The functions READ-BINARY-OBJECT and WRITE-BINARY-OBJECT are
    so general that they are giving implementors fits.

    Proposed solution: flush them.

    <...having to be able to
    read and dump ALL of these data types from ALL kinds of binary streams
    creates a huge case analysis task, only a few of whose branches are
    useful. --SEF>

We had no trouble at all implementing these.  But we did assume that
the stream could accommodate 16-bit bytes, as our default binary streams
do.  If one must deal with arbitrary byte sizes it is more painful.
By the way, our write-binary-object ignores the type argument, and
our read-binary-object only uses it for a gratuitous error check, but
I think it is not unreasonable to have those arguments there for other
implementations that might want them.

Leaving things to the second edition is fine with me, especially
inessential things like this.  Let's just get this sucker published.


    (9) There are arguments to the effect that a single pathname in
    *load-pathname-defaults* cannot possibly interact correctly with
    both LOAD and COMPILE-FILE.  The argument essentially centers
    on the use of non-standard file types.  

I'd like to see the arguments, since I suspect they are bogus.

					    There have been other
    complaints about this defaulting mechanism as being too restrictive
    or culturally incompatible with certain host environments.

Well, the host controls whether the defaults are sticky or not, subject
to override by the user, which seems pretty culturally compatible.

    Proposed solution: flush the variables *load-pathname-defaults*,
    *compile-file-set-default-pathname*, and *load-set-default-pathname*.
    Flush the :set-default-pathname argument to both load and compile-file.
    Make the first argument to both load and compile-file be required, not
    optional.

Leaving this out and worrying about it in the second edition is fine with me.
Sensible programs always call LOAD with fully-specified pathnames.


    (10) There is a functional hole in the language in that there is
    no good way to copy a structure.  One implementor has proposed that
    a function COPY-STRUCTURE is needed.  This is not necessarily a
    good idea, particularly since there is no STRUCTURE type specifier.
    One could also just invent a general COPY function.  This has problems,
    too.

    Proposed solution: let there be a new defstruct option, :COPIER,
    analogous to :PREDICATE and :CONSTRUCTOR.  If omitted, you get
    a simple copying function that is more or less equivalent to
    extracting all the fields from the given structure and feeding
    their values to the standard constructor.  

This sounds good.  I would really prefer if all these defstruct-defined
functions defaulted to not being defined unless one specified the appropriate
keyword, rather than defaulting to being defined automatically even if
one doesn't need them.
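
A minimal sketch of the proposal, with made-up slot names (the default copier
name is not spelled out above; COPY-SHIP by analogy with MAKE-SHIP and
SHIP-P):

        (defstruct ship x-position y-position mass)

        ;; The automatically defined copier behaves roughly like
        ;;   (make-ship :x-position (ship-x-position s)
        ;;              :y-position (ship-y-position s)
        ;;              :mass       (ship-mass s))
        ;; i.e. extract every field and feed the values to the constructor.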


    (11) Should read-delimited-list modify the readtable?
    As currently documented, read-delimited-list does *not* modify
    the readtable entry for the delimiter character; it is suggested
    that this is the responsibility of the user of read-delimited-list.
    Others have suggested that read-delimited-list itself should
    temporarily, under an unwind-protect, make the delimiter character
    have macro syntax equivalent to, say, ")".

    Proposed solution: leave it as is, and put in a stronger warning
    about setting it up yourself.

Okay.  I think Bawden (readermeister) has an opinion about how
read-delimited-list should be implemented, but I don't know what it is,
and he's on vacation for a week.
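
A sketch of the setup burden this leaves with the user, for a hypothetical
#{...} construct (CONSTRUCT-FROB is a made-up stand-in for whatever the user
wants to build):

        ;; The user, not READ-DELIMITED-LIST, must give the delimiter a
        ;; terminating-macro syntax:
        (set-macro-character #\} (get-macro-character #\) ))

        (set-dispatch-macro-character #\# #\{
          #'(lambda (stream subchar arg)
              (declare (ignore subchar arg))
              (construct-frob (read-delimited-list #\} stream t))))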


    (12) There have been complaints that letting ~T output two spaces
    when the column cannot be determined is relatively useless.
    It has been suggested that the behavior for strings be adopted
    in all cases where the absolute column cannot be determined:
    assume that one is at column zero as of the beginning of the
    format operation, and format can keep track of things from there.

    Proposed solution: adopt this idea.  Put in a note that code will be
    portable among more devices if all format strings actually do begin at
    column zero or start out with ~% or ~&.

This is good.  We'll have to remodularize our FORMAT, which is just as well.
As Fahlman said, this note applies only when ~T (without @) is used.
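
A small example of what the adopted behavior buys on a stream whose column
cannot be determined:

        (format stream "~&Name:~20TValue~%")

        ;; ~T now assumes the operation began at column zero (which the
        ;; leading ~& helps make true), so "Value" is padded out to column
        ;; 20 instead of getting a fixed two spaces.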


    (13) What should enough-namestring really do?  It has been pointed out
    that if the first argument to enough-namestring is a partially-specified
    pathname (missing components) then it cannot possibly obey the given
    definition in general.

    Proposed solution: change the requirement to:

    (merge-pathnames (enough-namestring <pathname> <defaults>) <defaults>)
       <=> (merge-pathnames <pathname> <defaults>)

    I think this is what I originally meant, but muddled it up.  Will this work?

This is the right thing.  enough-namestring should return the string with the
fewest characters that satisfies that identity.


    (14) Are (terpri) and (write-char #\Return) the same or different?
    And what of (format t "~%")?  How many characters are in the string "
    " ?

    Proposed solution:  It is imperative that Common LISP shield the
    user from character-set problems for ordinary textual cases, even at
    the cost of some mapping.  Define (terpri) and (write-char #\Return)
    to be identical in behavior.  Note that (write-char #\Linefeed)
    is implementation-dependent; in particular, it is permissible for
an implementation to make (write-char #\Linefeed) emit an ASCII ↑J
unless it immediately follows a (write-char #\Return) for which
    the sequence ↑M ↑J was emitted.  (This is what MacLISP does.)
    This has repercussions on the definition of filepos on text files.
    The string "
    " may be of length 1 or 2, depending on whether CRLF sequences are
    mapped to simply #\Return or not in that implementation.  However,
    printing such a string will always produce the effect of a single
    end-of-line operation.
    Note that the internal encoding of #\Return need not be as the ASCII
    ↑M; indeed, it might well be ↑J!  (We should have called it #\Newline,
    except for the fact that #\Return reminds people so nicely of the
    Return key on most keyboards.)
    All this applies only to files opened with :element-type string-char
    or character.

This seems like the best compromise.

    < Well, I could live with Guy's proposal, but a bit unhappily.  Let me
    propose the following slight modification: require every implementation
    to recognize NEWLINE as a character name and require (write-char
    #\Newline) to be the same as (terpri).  There is NO requirement that
    #\Return and #\Newline be the same.  This will allow (but not require)
    implementations to give #\Return and #\Linefeed their traditional ASCII
    meanings, and will allow #\Newline to be the magic thing that "does the
    right thing" in write-char.  For Spice, we have decided to go with the
    unix convention of having a single LF between lines in a file, and this
    will give us the flexibility to come up with an elegant solution: CR and
    LF are the usual ASCII chars, and NEWLINE is the same as LF. --SEF>

What about READ-CHAR from the keyboard?  Presumably READ-CHAR when the
usual line-terminating key is pressed must return #\Newline, even if
this is not the same character as #\Return but the key has "Return"
engraved on it.  What about READ-CHAR from a file stream opened with
element-type STRING-CHAR?  Is it required to return exactly one #\Newline
character at the end of each line, or is it allowed to return another
totally random character between lines?  I suggest the former.  Open
in a non-standard or "binary" mode if you want to see the actual exact
bits in the file, rather than standard character objects.

#\Newline is okay with me provided that Return and Linefeed are REMOVED
from the standard character set.  There is no point in providing these
as standard names if what they mean is implementation-dependent.  We
should suggest that implementations that use "teletype ascii" and have
these characters should use these particular names for them, and note
that depending on the implementation either name, or neither of them,
can be the same character as #\Newline.  Can both of them be the same
character as #\Newline?  Presumably not.

We already implement #\Newline, as it turns out!  But it isn't the
preferred name on output, since the corresponding key has "Return"
engraved on it, not "Newline."

I cannot find explicit documentation of the Common Lisp character set anywhere
in the manual.  I looked in the Characters chapter, the Input/Output chapter,
and the Concept Index.  At least fix the index.
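
For concreteness, the behavior being converged on here (only the first line
is required; the rest is implementation-dependent):

        ;; Required: writing #\Newline does an end-of-line, like TERPRI.
        (write-char #\Newline)          ; same effect as (terpri)

        ;; Implementation-dependent, if #\Return and #\Linefeed exist at all;
        ;; presumably at most one of these can be true.
        (char= #\Newline #\Return)
        (char= #\Newline #\Linefeed)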


    (15) What are the precise semantics of eval-when?

The rules Guy described are not what we use in the Lisp machine.  The primary
reason for the difference is to avoid the anomaly where compiling
    (eval-when (compile load)
       (eval-when (compile eval)
	  (setq x (+ x 1))))
will increment x twice.  (There are much worse manifestations of this
anomaly).

The compiler has a state variable called COMPILE-TIME-TOO.  (Not a special
variable!)  It is initially NIL.  Processing of EVAL-WHEN by the compiler
is as follows:
	If it contains COMPILE, or if COMPILE-TIME-TOO is true and
	 it contains EVAL, bind COMPILE-TIME-TOO to true.  Otherwise
	 bind COMPILE-TIME-TOO to false.  (When I say "bind", I am
	 really referring to passing an argument to a recursive call
	 of the compiler's top-level-form-processing loop.)
	If it contains LOAD, process the body forms in the compiler's
	 normal way, using the new value of COMPILE-TIME-TOO.
	If it does not contain LOAD, then if the new value of
	 COMPILE-TIME-TOO is true, EVAL the body forms, otherwise ignore them.
Processing of other forms by the compiler is as follows:
	If COMPILE-TIME-TOO is true, call EVAL on the form.
	Then do normal compiler processing of the form.
	(Note that there is no place where a form gets processed twice
	other than the immediately-preceding two lines.)

I think this is actually simpler than what Guy described.
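
A rough sketch of these rules as code, with made-up names (PROCESS-FORM and
COMPILE-FORM-NORMALLY are hypothetical, and the macro-expansion step
mentioned below is omitted); the file compiler would call PROCESS-FORM on
each top-level form with COMPILE-TIME-TOO initially NIL:

        (defun process-form (form compile-time-too)
          (if (and (consp form) (eq (car form) 'eval-when))
              (let* ((times (cadr form))
                     (body  (cddr form))
                     ;; COMPILE, or EVAL while already in compile-time-too
                     ;; mode, turns compile-time evaluation on for the body.
                     (ctt   (or (member 'compile times)
                                (and compile-time-too
                                     (member 'eval times)))))
                (cond ((member 'load times)
                       ;; Process the body normally, with the new flag.
                       (dolist (f body) (process-form f ctt)))
                      (ctt
                       (dolist (f body) (eval f)))))
              ;; Any other top-level form:
              (progn (when compile-time-too (eval form))
                     (compile-form-normally form))))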

The actual code is more complicated, with three states for COMPILE-TIME-TOO,
but there is no good reason for this; it's just due to the modularity of the
way macros and declarations are placed into just the compiler's database (not
into the compile-time Lisp interpreter's database) unless surrounded by
an (EVAL-WHEN (COMPILE)...).  This should be part of "then do normal compiler
processing of the form", except that doing it that way would be slightly
less general and modular since the compiler would have to know the names
of all the forms that put things into the compiler's database.  Like many
ancient holdovers in the Lisp machine compilers, this is totally insane,
since it turns out it has to know their names anyway to manipulate the
third state of COMPILE-TIME-TOO correctly!

    The compiler may choose to apply MACROEXPAND or MACROEXPAND-1 to
    a form at any time in order to determine what kind of form it is,
    and it must do so before deciding that a form is not of a certain
    kind (such as EVAL-WHEN or DEFMACRO).

Yes.

I hope it didn't take you as long to read this message as it took me to write it!

∂10-Oct-83  2321	RPG   	Nasty issues for Common LISP Manual   
 ∂08-Oct-83  0120	FAHLMAN@CMU-CS-C.ARPA 	Nasty issues for Common LISP Manual  
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 8 Oct 83  01:19:21 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Sat 8 Oct 83 04:21:01-EDT
Date: Sat, 8 Oct 1983  04:20 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   David A. Moon <Moon%SCRC-TENEX@MIT-ML.ARPA>
Cc:   bsg%SCRC-TENEX@MIT-ML.ARPA, dlw%SCRC-TENEX@MIT-ML.ARPA,
      fahlman@CMU-CS-C.ARPA, rpg@SU-AI.ARPA, STEELE@CMU-CS-C.ARPA
Subject: Nasty issues for Common LISP Manual
In-reply-to: Msg of 7 Oct 1983 22:09-EDT from David A. Moon <Moon at SCRC-TENEX>


Now that we've heard from Steele, me, and Moon, let me try to sift out
the issues where at least the three of us seem to agree and the issues where
there are some rough edges.  We seem to have problems on issues 5 and
14.  Tuning is needed on 4.

(1) Macrolet ought to stay around, I guess.  The bodies see a global
(empty) lexical environment, the only one you can get at compile-time.
Some version of Moon's analysis of this might want to go into the
manual.

(2) I don't think that we can accept Moon's suggestion that we revert to
the allegedly simpler Zetalisp syntax for packages.  It wouldn't bother
me to do so -- I don't think we want to encourage people to put weird
characters in package names anyway -- but a few people were very adamant
about this when it was on the ballot, and it got discussed at
considerable length.  A proposal to change this in some drastic way
would require the same sort of discussion, I think.

Moon seems willing to agree with Guy's proposal on vertical bars, as am I.

(3) If we agree not to attempt the major syntax simplification Moon
proposes in issue 2, then it seems to me that :: is preferable to #:
because it avoids the ugly interaction with #|...|# and the need to make
# a non-terminating macro character.  Moon says he is willing to go along,
though it's not his first choice.

(4)  I think that Moon's proposal below would work.  I'll have to think
harder about this tomorrow.

    What I said about this last time was a bit confused.  Here is a
    better proposal.  When recursive-p is t, then eof-error-p = nil means
    that eof-value should be returned from the inner call to read if it
    hits an eof, regardless of the top-level read's arguments; and
    eof-error-p = t means that an eof-in-middle-of-object error should
    be signalled, again regardless of the top-level read's arguments.
    Then fix all examples that say (READ stream NIL NIL T) to be
    (READ stream T NIL T).

    It may be that only READ-CHAR would ever be called with arguments
    of stream NIL eof-value T, and READ would never be called that way, but
    they must behave consistently, of course.

(5) I said before that we were unwilling to implement a whole separate
reader just to get various special cases to work right in #+ and #-,
evil constructs whose syntax cannot be made to work in general.
Somebody proposed one exception (don't complain about unknown packages)
and I agreed.  Guy just proposed seven more exceptions, and I got scared.
Now Moon wants another seven, and if these get in there are sure to be
128 more.  I think that the camel's back just broke.

We should say that #+ and #- are worthless crocks kept around for
historical reasons, and that they do a very dumb thing: just call READ
and discard the result.  No exceptions.  If you want to conditionally
read hairy things without getting syntax errors, you have to use
something like the #0+ I proposed earlier, which gobbles a string
delimited by double-quotes and conditionally passes the result to the
reader.  I agree that #0+ and #0- are hideous, so I propose #W for the
#+ analogue and #U for the #- analogue.  W stands for "when" and U
for "unless".  Neither char is used in Common Lisp sharp macros right
now -- does anyone have a conflict?

(6) Looks like we all agree on Guy's proposal for the semantics of
DEFCONSTANT.

(7) Looks like we all agree that there is no pressing need for a lexical
closure data-type, but that whatever is returned for a lexical closure
must be an instance of FUNCTION.

(8) No two people seem to interpret the manual the same way with respect
to READ/WRITE-BINARY-OBJECT.  Glad everyone is willing to go along with
flushing these things from the white-pages for now.  However, the
"official" interpretation might be clarified, some groups would have to
completely re-do their implementation.  Dike it out.

(9) Guy and I want to flush the *load-pathname-defaults* stuff and Moon
is willing to go along.

(10) Everyone seems willing to go along with Guy's :COPIER proposal.  I
agree with Moon that some of these defstruct-defined functions should
not exist by default, but that's not a change we want to consider now,
is it?

(11) Should read-delimited-list modify the readtable?
Everyone seems to be willing to live with Guy's proposal that it does
not do this, and that the burden of readtable hacking falls on the user.

(12) Everyone seems happy with the proposal to have ~T assume that the
format started in column 0 if it cannot find out for sure.

(13) Everyone seems happy with Guy's clarification of ENOUGH-NAMESTRING.

(14) This is complicated.

    #\Newline is okay with me provided that Return and Linefeed are REMOVED
    from the standard character set.  There is no point in providing these
    as standard names if what they mean is implementation-dependent.  We
    should suggest that implementations that use "teletype ascii" and have
    these characters should use these particular names for them, and note
    that depending on the implementation either name, or neither of them,
    can be the same character as #\Newline.  Can both of them be the same
    character as #\Newline?  Presumably not.

OK, if we want to hide character-set differences under READ-CHAR
and WRITE-CHAR, and to have everything be portable from there on up, I
guess we agree that #\Newline is the character used to separate lines,
and that #\Return and #\Linefeed are not part of the standard character
set and should be avoided in portable code.  An implementation is free
to provide these as non-standard extensions, along with all sorts of
other ugly cursor-control codes, but only #\newline has the seal of
approval for portable code.

I'm not really sure what READ-CHAR from the keyboard should do.  In our
implementation the RETURN key would normally be read in as a random code
that is a command for Hemlock, telling it to break the line, and some
other characters might break the line in different ways.  If the Hemlock
buffer gets passed on to Lisp via a character stream, that line break
would show up as a #\newline, but when Lisp sucks raw characters from the
keyboard it is unclear to me that "the usual line-terminating key" is a
well-defined concept.  This might well vary according to what you are
doing.

I think the reason this is hard is that the things coming from a
keyboard should not be considered to be the same as normal characters
from a character-type stream.  They are keystroke-objects or something,
and only keystroke objects have "bits" like meta and super.  Reading from a
keyboard is more like reading from a binary stream than like reading
characters from a file -- a mapping has to be done before you have a
character stream.  Maybe the right move is to say that in reading a
character stream -- the only stream type for which READ-CHAR and
WRITE-CHAR are defined -- #\Newline is indeed the convention, and that
"civilized" keyboard input can be obtained that way, but that "raw mode"
keyboard input is implementation-dependent and does not necessarily come
in via a character stream.  Bleh!

(15) If I understand what Moon is saying about their treatment of
EVAL-WHEN, I think I can implement the same thing and I think it will
work OK.  I'm not passionate about this stuff, as long as I can see what
to implement and it's not too hairy.  I'm not sure exactly how much of
this wants to be specified by the manual.  Any proposals for what to
say?

-- Scott

∂10-Oct-83  2321	RPG   	Nasty issues for Common LISP Manual   
 ∂08-Oct-83  0831	FAHLMAN@CMU-CS-C.ARPA 	Nasty issues for Common LISP Manual  
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 8 Oct 83  08:31:44 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Sat 8 Oct 83 11:33:32-EDT
Date: Sat, 8 Oct 1983  11:33 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   bsg%SCRC-TENEX@MIT-ML.ARPA, dlw%SCRC-TENEX@MIT-ML.ARPA,
      David A. Moon <Moon%SCRC-TENEX@MIT-ML.ARPA>, rpg@SU-AI.ARPA,
      STEELE@CMU-CS-C.ARPA
Subject: Nasty issues for Common LISP Manual
In-reply-to: Msg of 8 Oct 1983  04:20-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C.ARPA>


I just re-read my note of last night.  While I still believe the content
of what I said, I apologize for the number of typos and
ungrammaticalities.  When you're tired, I/O seems to be the thing
that breaks.

One sentence is so messed up that it might be misinterpreted:

  However, the "official" interpretation might be clarified, some groups
  would have to completely re-do their implementation.

should have been

  However the "official" interpretation might be clarified, some groups
  would have to completely re-do their implementation.

-- Scott

∂10-Oct-83  2321	RPG   	Nasty issues for Common LISP Manual   
 ∂08-Oct-83  1237	@MIT-MC:Moon%SCRC-TENEX@MIT-MC 	Nasty issues for Common LISP Manual   
Received: from MIT-MC by SU-AI with TCP/SMTP; 8 Oct 83  12:37:26 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Sat 8-Oct-83 15:39:13-EDT
Date: Saturday, 8 October 1983, 15:41-EDT
From: David A. Moon <Moon%SCRC-TENEX@MIT-MC>
Subject: Nasty issues for Common LISP Manual
To: Scott E. Fahlman <Fahlman@CMU-CS-C>, Dick Gabriel <RPG@SU-AI>,
    STEELE@CMU-CS-C
Cc: moon%SCRC-TENEX@MIT-MC, bsg%SCRC-TENEX@MIT-MC, dlw%SCRC-TENEX@MIT-MC
In-reply-to: The message of 8 Oct 83 04:20-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>,
             The message of 8 Oct 83 11:33-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>,
             The message of 8 Oct 83 13:16-EDT from Dick Gabriel <RPG at SU-AI>

Looks like we are in general agreement.

I'd prefer to get rid of the vertical bars in package prefixes, but I guess
I can live with keeping this crockish syntax and switching #: to :: if I have to.
I still think that's an inferior choice.

As far as #+ and #- go, I am adamantly opposed to introducing new syntax for
the same functionality.  I don't think I was introducing additional exceptions;
I was trying to make Steele's suggested interpretation of how the reader skips
over an expression more consistent.  I don't think there is anything complicated
or scary here; basically while reading the expression after an unsatisfied #+,
tokens are scanned and read as NIL, and built-in macros read whatever their
syntax is then return NIL, without performing any side-effects.  Maybe instead
of enumerating them one by one we should simply state a general rule this way.
User macros get the suppress-p argument and are encouraged to do the same thing
as built-in macros.

I agree with RPG that first-class lexical closures are an important part of
the language.  Since they are an innovation in the Maclisp family, we want to
be a bit cautious about just what we standardize.  Everyone is in agreement.

I made an error in my discussion of #\Newline; everywhere where I said
"the keyboard" please substitute "the stream that is the value of *TERMINAL-IO*
and the initial value of *STANDARD-INPUT*".  Indeed one wants a way to really
get at the exact keystrokes on the keyboard, something we haven't tried to
standardize yet.

∂10-Oct-83  2321	RPG   	Nasty issues for Common LISP Manual   
 ∂08-Oct-83  1329	FAHLMAN@CMU-CS-C.ARPA 	Nasty issues for Common LISP Manual  
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 8 Oct 83  13:26:34 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Sat 8 Oct 83 16:27:49-EDT
Date: Sat, 8 Oct 1983  16:27 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   David A. Moon <Moon%SCRC-TENEX@MIT-MC.ARPA>
Cc:   bsg%SCRC-TENEX@MIT-MC.ARPA, dlw%SCRC-TENEX@MIT-MC.ARPA,
      Dick Gabriel <RPG@SU-AI.ARPA>, STEELE@CMU-CS-C.ARPA,
      fahlman@CMU-CS-C.ARPA
Subject: Nasty issues for Common LISP Manual
In-reply-to: Msg of 8 Oct 1983 15:41-EDT from David A. Moon <Moon%SCRC-TENEX at MIT-MC>


  As far as #+ and #- go, I am adamantly opposed to introducing new syntax for
  the same functionality. -- Moon

Well, I was proposing that we introduce a single workable syntax for
what one really wants #+ and #- to do: read some stuff or not, depending
on what system you are using.  The historical syntax of #+ and #- is
unfortunately unworkable, except for the simplest of cases.  Since I
have proposed what I believe to be a simple, clean, elegant solution
that does the right thing and is trivial to implement, I am adamantly
opposed to doing a lot of work in the reader to make a few more cases
work properly in old-style #+ and #-.  Moon says that he is proposing a
single principle rather than a lot of specific exceptions.  That may be,
but the fact remains that to put in this suppress-p machinery, we would
have to change about thirty places in our reader.

I could live with that, though it would irritate me mightily -- I hate
patching up crocks when a clean solution to the problem is evident.  But
if we go that far, I am afraid that it will appear that we are endorsing
the principle that #+ and #- have to read the next Lisp object without
complaining and without side effects, regardless of how complicated that
gets.  Users will start expecting that and reporting violations as bugs.
I'm not willing to buy into that, because it can get very complicated
indeed, and can never be completely right in all cases.

In the #U and #W proposal, I am not proposing ADDITIONAL syntax for the
same functionality.  I am proposing that we introduce a single decent
functionality and phase out the old broken one.  Probably the best thing
to do, if the world were young, would be to take the #+ and #- characters
for the new syntax and flush the old one right now, but I didn't propose
that because it would cause Symbolics a lot of trouble and would send
RMS into a veritable meltdown.  We don't have any old code around that
hasn't been rewritten, so it wouldn't bother us.

-- Scott

∂10-Oct-83  2321	RPG   	Yet another proposal for #+ and #-    
 ∂08-Oct-83  2128	Guy.Steele@CMU-CS-A 	Yet another proposal for #+ and #-
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 8 Oct 83  21:28:47 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP;  9 Oct 83 00:14:03 EDT
Date:  9 Oct 83 0024 EDT (Sunday)
From: Guy.Steele@CMU-CS-A
To: Scott.Fahlman <FAHLMAN@CMU-CS-C>, rpg@SU-AI, dlw%scrc-tenex@MIT-ML,
    moon%scrc-tenex@MIT-ML, bsg%scrc-tenex@MIT-ML
Subject: Yet another proposal for #+ and #-

Flush them.

I'm serious.  How many of you would find it more objectionable to
eliminate #+ and #- from the white pages than to standardize on
whatever you perceive to be "the wrong thing"?
--Guy

∂10-Oct-83  2321	RPG   	Yet another proposal for #+ and #-    
 ∂08-Oct-83  2138	FAHLMAN@CMU-CS-C.ARPA 	Yet another proposal for #+ and #-   
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 8 Oct 83  21:38:11 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Sun 9 Oct 83 00:39:46-EDT
Date: Sun, 9 Oct 1983  00:39 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Guy.Steele@CMU-CS-A.ARPA
Cc:   bsg%scrc-tenex@MIT-ML.ARPA, dlw%scrc-tenex@MIT-ML.ARPA,
      moon%scrc-tenex@MIT-ML.ARPA, rpg@SU-AI.ARPA, fahlman@CMU-CS-C.ARPA
Subject: Yet another proposal for #+ and #-
In-reply-to: Msg of 9 Oct 83 0024 EDT () from Guy.Steele at CMU-CS-A


Flushing these things from the white pages sounds good to me.  Then
people with bootstrapping problems or multiple systems could do whatever
they need to do, and portable Common Lisp code would be devoid of these
"features".

-- Scott

∂10-Oct-83  2321	RPG   	Yet another proposal for #+ and #-    
 ∂08-Oct-83  2219	FAHLMAN@CMU-CS-C.ARPA 	Yet another proposal for #+ and #-   
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 8 Oct 83  22:18:55 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Sun 9 Oct 83 01:20:36-EDT
Date: Sun, 9 Oct 1983  01:20 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
Cc:   bsg%scrc-tenex@MIT-ML.ARPA, dlw%scrc-tenex@MIT-ML.ARPA,
      Guy.Steele@CMU-CS-A.ARPA, moon%scrc-tenex@MIT-ML.ARPA, rpg@SU-AI.ARPA
Subject: Yet another proposal for #+ and #-
In-reply-to: Msg of 9 Oct 1983  00:39-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C.ARPA>


Presumably if #+ and #- are dropped from the white pages, *FEATURES*
should be too.  That gets rid of the issue of whether the entries on
this list are symbols, keywords, or arbitrary objects.

-- Scott

∂10-Oct-83  2322	RPG   	Yet another proposal for #+ and #-    
 ∂09-Oct-83  1243	@MIT-MC:Moon@SCRC-TENEX 	Yet another proposal for #+ and #- 
Received: from MIT-MC by SU-AI with TCP/SMTP; 9 Oct 83  12:43:36 PDT
Received: from scrc-euphrates by scrc-vixen with CHAOS; 9 Oct 1983 15:36:20-EDT
Date: Sunday, 9 October 1983, 15:35-EDT
From: David A. Moon <Moon at SCRC at mit-mc>
Subject: Yet another proposal for #+ and #-
To: Guy.Steele at CMU-CS-A at mit-mc, Scott E. Fahlman <Fahlman at CMU-CS-C at mit-mc>
Cc: rpg at SU-AI at mit-mc, dlw at SCRC at mit-mc, moon at SCRC at mit-mc, bsg at SCRC at mit-mc
In-reply-to: The message of 9 Oct 83 00:24-EDT from Guy.Steele at CMU-CS-A,
             The message of 9 Oct 83 00:39-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>,
             The message of 9 Oct 83 01:20-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

    Date:  9 Oct 83 0024 EDT (Sunday)
    From: Guy.Steele@CMU-CS-A
    Flush them.

    I'm serious.  How many of you would find it more objectionable to
    eliminate #+ and #- from the white pages than to standardize on
    whatever you perceive to be "the wrong thing"?

    Date: Sun, 9 Oct 1983  00:39 EDT
    From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
    Flushing these things from the white pages sounds good to me.  Then
    people with bootstrapping problems or multiple systems could do whatever
    they need to do, and portable Common Lisp code would be devoid of these
    "features".

    Date: Sun, 9 Oct 1983  01:20 EDT
    From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
    Presumably if #+ and #- are dropped from the white pages, *FEATURES*
    should be too.  That gets rid of the issue of whether the entries on
    this list are symbols, keywords, or arbitrary objects.

Do you realize how insane this sounds?  What possible point is there in
having a syntactic feature to allow the same source program to run on
multiple implementations that are not completely compatible with each
other, if that very syntactic feature is not itself compatible between
all implementations??  It makes no sense at all!

Face it: the various Common Lisp implementations will not be 100%
compatible.  People who are trying to write programs that are both
complex and efficient are going to have some implementation-dependent
sections.  Program maintenance is much easier if a common source file
can be compiled for all implementations.  Much hiding of
implementation-dependent features can be done with macros, but not
everything can be done that way.  Having two slightly-differing versions
of the same function in different files, because the reader won't
allow them to be placed together, is an excellent way to ensure
divergence of implementation-dependent versions and eventual
non-maintainability.  I speak from experience on this point.

The #+ and #- features are extremely simple, extremely easy to
implement, and rather easy to explain, much more so than a myriad of
other Common Lisp features.  Flushing them is not good policy.

∂10-Oct-83  2322	RPG   	Binding arbitration?   
 ∂10-Oct-83  0924	FAHLMAN@CMU-CS-C.ARPA 	Binding arbitration?  
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 10 Oct 83  09:23:53 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Mon 10 Oct 83 12:25:23-EDT
Date: Mon, 10 Oct 1983  12:25 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   moon%SCRC-TENEX@MIT-ML.ARPA
Cc:   bsg%SCRC-TENEX@MIT-ML.ARPA, dlw%SCRC-TENEX@MIT-ML.ARPA,
      steele@CMU-CS-C.ARPA, fahlman@CMU-CS-C.ARPA, rpg@SU-AI.ARPA
Subject: Binding arbitration?
In-reply-to: Msg of 9 Oct 1983  16:54-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C.ARPA>


Guy has to finalize the manual tomorrow, and I don't want the stupid #+
business to delay that.  Unless my last message persuaded Moon of the
beauty and elegance of #U and #W, which I doubt, we seem to have reached
an impasse -- a fundamental divergence in taste, I guess.  I propose
that we settle the matter by vote of three tasteful and impartial
referees: Steele, DLW, and RPG.  I have not heard from any of the three
on this issue.  If 2/3 of them think that patching #+ and #- is the more
tasteful way to go, I will go along with that, for the first edition at
least.  If 2/3 of them think that #U and #W are preferable and that #+
and #- should be phased out of Common Lisp, I hope that Moon will go
along.  From his messages, it seems that there is no technical problem
with this course, but that he objects strongly on the grounds of taste.
I hope that all three judges are available for a quick vote, and not
off on vacation or something.  If DLW is unavailable, BSG would be an
acceptable substitute.

There are two proposals on the table.  The first says that we add an
optional suppress-p argument to the various READ functions, and that
anyone who wants to read something harmlessly, with the results to be
discarded, should call READ with that switch on.  This would disable
error signalling in the cases listed earlier by Steele and Moon, and
would cause certain calls to return NIL instead of whatever they read.

The second proposal says that #W and #U would read a symbol and look for
it on the *features* list, just like #+ and #- do now.  However, instead
of calling read with suppress-p to get the next thing, they simply call
read.  The next item has to be a string, else an error is signalled.
The contents of that string are then conditionally sent to the reader,
depending on whether the control symbol was found on *features*, and the
results of that read are returned.  (Probably best to just allow one
Lisp object in the string, though the syntax could support more than
one.)  The advantage is that stuff in the string never gets processed by
the reader if it is not supposed to, so we read or skip the item very
cleanly.  The disadvantage is that this uses up two more macro
characters, though #+ and #- could eventually be recycled, and that it
is a break with the past in an area where compatibility is useful.
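
For concreteness (all names hypothetical), the second proposal would be
written something like this; note the doubled backslash needed once the
conditionalized text is wrapped in a string:

        (setq eol-char #W LISPM "#\\End"
                       #U LISPM "#\\Newline")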

As I said, if the judges rule in favor of #+ and #-, we will implement
them as specified in Moon's earlier mail, though I would like the manual
to mention that we can not guarantee to read ANY lisp object without
complaint.

-- Scott

∂10-Oct-83  2322	RPG   	Binding arbitration?   
 ∂10-Oct-83  1054	@MIT-MC:BSG%SCRC-TENEX@MIT-MC 	Binding arbitration?    
Received: from MIT-MC by SU-AI with TCP/SMTP; 10 Oct 83  10:54:13 PDT
Received: from SCRC-BEAGLE by SCRC-TENEX with CHAOS; Mon 10-Oct-83 13:56:25-EDT
Date: Monday, 10 October 1983, 13:54-EDT
From: Bernard S. Greenberg <BSG%SCRC-TENEX@MIT-MC>
Subject: Binding arbitration?
To: Fahlman@CMU-CS-C, moon%SCRC-TENEX@MIT-MC
Cc: dlw%SCRC-TENEX@MIT-MC, steele@CMU-CS-C, rpg@SU-AI
In-reply-to: The message of 10 Oct 83 12:25-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

    Date: Mon, 10 Oct 1983  12:25 EDT
    From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
    Guy has to finalize the manual tomorrow, and I don't want the stupid #+
    business to delay that.  Unless my last message persuaded Moon of the
    beauty and elegance of #U and #W, which I doubt, we seem to have reached
    an impasse -- a fundamental divergence in taste, I guess.  I propose
    that we settle the matter by vote of three tasteful and impartial
    referees: Steele, DLW, and RPG.  I have not heard from any of the three
    on this issue.  If 2/3 of them think that patching #+ and #- is the more
    tasteful way to go, I will go along with that, for the first edition at
    least.  If 2/3 of them think that #U and #W are preferable and that #+
    and #- should be phased out of Common Lisp, I hope that Moon will go
    along.  From his messages, it seems that there is no technical problem
    with this course, but that he objects strongly on the grounds of taste.
    I hope that all three judges are available for a quick vote, and not
    off on vacation or something.  If DLW is unavailable, BSG would be an
    acceptable substitute.
Whether or not DLW replies, I agree with Moon fully.  The #W proposal, although
not completely lacking in appeal, will create hideous multiply-quoted 
unreadable junk.  

The proposals to flush read-time conditionalization are patently unacceptable.
Moon's remarks on source maintenance are completely in agreement with 
my own opinions on the subject.  

∂10-Oct-83  2322	RPG   	#W etc  
 ∂10-Oct-83  1507	RPG@SU-SCORE.ARPA 	#W etc
Received: from SU-SCORE by SU-AI with TCP/SMTP; 10 Oct 83  15:07:35 PDT
Date: Mon 10 Oct 83 14:01:29-PDT
From: Dick Gabriel <RPG@SU-SCORE.ARPA>
Subject: #W etc
To: dlw%scrc-tenex%MIT-MC@MIT-XX.ARPA, fahlman@CMU-CS-C.ARPA,
    steele@CMU-CS-C.ARPA, moon%scrc-tenex%MIT-MC@MIT-XX.ARPA
cc: rpg@SU-AI.ARPA

Several things. First we need something like #+ and friends for
the near-term when MacLisp code will co-exist with Common Lisp code.
Second, I don't like the names #W and #U because I, and I suspect others,
will find it hard to recall what they mean. Third, I like not having
to enclose code with ".." Fourth, I don't like having to hair up
a possibly already-slow reader to give the user supress-p control,
though I like giving the user a handle on many things. So, I'm willing
to let the reader be haired-up and to go with #+ and #- over #W and #U,
and to live with suppress-p.  This is not to say that I don't think there
is an elegance to Scott's #W proposal.
			-rpg-
-------

∂10-Oct-83  2322	RPG   	#+foo   
 ∂10-Oct-83  1752	FAHLMAN@CMU-CS-C.ARPA 	#+foo  
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 10 Oct 83  17:52:20 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Mon 10 Oct 83 20:53:59-EDT
Date: Mon, 10 Oct 1983  20:53 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   steele@CMU-CS-C.ARPA, moon%SCRC-TENEX@MIT-ML.ARPA,
      bsg%SCRC-TENEX@MIT-ML.ARPA, dlw%SCRC-TENEX@MIT-ML.ARPA, rpg@SU-AI.ARPA
Cc:   fahlman@CMU-CS-C.ARPA
Subject: #+foo
In-reply-to: Msg of 10 Oct 1983 13:54-EDT from Bernard S. Greenberg <BSG%SCRC-TENEX at MIT-MC>


OK, the time to settle this is upon us.  Given the preference for #+/#-
expressed by BSG and RPG, and a bug that someone pointed out today in
the #U/#W syntax, I'm ready to throw in the towel.  (The bug is that the
/ vs. \ business still causes trouble because it interacts with the ""
syntax for strings.  This is also a problem for the old syntax in some
cases, but it is no longer perfect versus imperfect.)

We'll agree to #+ and #- with the suppress-p mechanism proposed
by Moon and Steele.  I would only request that this be described in a
narrow way in the manual so that users do not get the idea that we are
willing to extend this mechanism ad infinitum to make more and more odd
cases "work".

-- Scott

∂10-Oct-83  2323	RPG   	Monkey wrench from left field (#+ and #-)  
 ∂10-Oct-83  2027	Guy.Steele@CMU-CS-A 	Monkey wrench from left field (#+ and #-)   
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 10 Oct 83  20:27:23 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 10 Oct 83 23:16:17 EDT
Date: 10 Oct 83 2326 EDT (Monday)
From: Guy.Steele@CMU-CS-A
To: Scott.Fahlman <FAHLMAN@CMU-CS-C>, bsg%scrc-tenex@MIT-ML,
    moon%scrc-tenex@MIT-ML, dlw%scrc-tenex@MIT-ML, rpg@SU-AI
Subject: Monkey wrench from left field (#+ and #-)

Okay, guys, here is a (literally) eleventh-hour proposal for #+ and #-.
Pick some other random sharp-sign combination, say "#;", for a delimiter.
Then define the action of #+ (and similarly #-) as follows:
If the feature spec is true, then read one form, then peek ahead and
insist on finding "#;", which is swallowed, then return the form read.
If the feature spec is false, then skip over arbitrary text that is
balanced in "#+"/"#-" and "#;" until the closing #; is seen and swallowed,
and then return no values.  For robustness, "#;", like "#)" and others,
has a definition that complains if read by itself.

Example:  (setq x #+SPICE 1.0L15000 #;
		  #-SPICE #-LISPM #-EBCDIC #o40 #;
				  #+EBCDIC #x40 #;
		          #;
			  #+LISPM #\Hyper-Space #;
		  #;)

This scheme has the following advantages:
(1) It's just like the current scheme except that you have to sprinkle
    in some extra gritches ("#;").
(2) It can skip over almost any text.
(3) There is a gritch explicitly delimiting the end of the conditionalized
    text, so it's easy to see exactly what falls under the condition.
    (Compare the above example with this:

	  (setq x #+SPICE 1.0L15000
		  #-SPICE #-LISPM #-EBCDIC #o40
				  #+EBCDIC #x40
			  #+LISPM #\Hyper-Space )
    Even with the indentation, I judge this harder to read.)
(4) It does not interact with other syntaxes very much, so it can be thrown
    around any form with almost complete impunity.  (One problem with the #W/#U
    proposal is that any form that had "..." thrown around it would have to have
    any double-quotes within it escaped with a backslash.)
(5) It is almost completely backwards compatible, in that if one made "#;"
    have a macro definition that did nothing (like ; but swallows no additional
    text) in, for example, MacLISP, then MacLISP could swallow this syntax
    too, except that it would have the old bug it always had of not being able to
    skip over an unreadable syntax.
    
What do you all think?
--Guy

∂10-Oct-83  2323	RPG   	Monkey wrench from left field (#+ and #-)  
 ∂10-Oct-83  2131	FAHLMAN@CMU-CS-C.ARPA 	Monkey wrench from left field (#+ and #-) 
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 10 Oct 83  21:31:18 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Tue 11 Oct 83 00:32:41-EDT
Date: Tue, 11 Oct 1983  00:32 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Guy.Steele@CMU-CS-A.ARPA
Cc:   bsg%scrc-tenex@MIT-ML.ARPA, dlw%scrc-tenex@MIT-ML.ARPA,
      moon%scrc-tenex@MIT-ML.ARPA, rpg@SU-AI.ARPA
Subject: Monkey wrench from left field (#+ and #-)
In-reply-to: Msg of 10 Oct 83 2326 EDT () from Guy.Steele at CMU-CS-A


Guy,

I like your new proposal better than the existing #+/#-, since it solves
the same problem I was trying to solve with my ill-fated proposal, only
better.  I would not dwell heavily on the ability to nest these things
in explaining this -- we don't want the users going overboard -- but
it's nice to have that capability available when you need it.

Obviously I'm not the one who will be hard to convince on this one.
It's the people with lots of inherited code, and therefore the need to
put in lots of "gritches", who might object.  If those
ultra-conservatives at Symbolics go along with this, then we've got a
winner.

-- Scott

∂10-Oct-83  2323	RPG   	Monkey wrench from left field (#+ and #-)  
 ∂10-Oct-83  2132	@MIT-MC:Moon%SCRC-TENEX@MIT-MC 	Monkey wrench from left field (#+ and #-)  
Received: from MIT-MC by SU-AI with TCP/SMTP; 10 Oct 83  21:31:53 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Tue 11-Oct-83 00:36:11-EDT
Date: Tuesday, 11 October 1983, 00:33-EDT
From: David A. Moon <Moon%SCRC-TENEX@MIT-MC>
Subject: Monkey wrench from left field (#+ and #-)
To: Guy.Steele@CMU-CS-A
Cc: Scott.Fahlman <FAHLMAN@CMU-CS-C>, bsg%SCRC-TENEX@MIT-MC,
    dlw%SCRC-TENEX@MIT-MC, rpg@SU-AI
In-reply-to: The message of 10 Oct 83 23:26-EDT from Guy.Steele at CMU-CS-A

    Date: 10 Oct 83 2326 EDT (Monday)
    From: Guy.Steele@CMU-CS-A
    Okay, guys, here is a (literally) eleventh-hour proposal for #+ and #-.
    Pick some other random sharp-sign combination, say "#;", for a delimiter.
    Then define the action of #+ (and similarly #-) as follows:
    If the feature spec is true, then read one form, then peek ahead and
    insist on finding "#;", which is swallowed, then return the form read.
    If the feature spec is false, then skip over arbitrary text that is
    balanced in "#+"/"#-" and "#;" until the closing #; is seen and swallowed,
    and then return no values.  For robustness, "#;", like "#)" and others,
    has a definition that complains if read by itself.

    Example:  (setq x #+SPICE 1.0L15000 #;
		      #-SPICE #-LISPM #-EBCDIC #o40 #;
				      #+EBCDIC #x40 #;
			      #;
			      #+LISPM #\Hyper-Space #;
		      #;)

If you must have grouping, how about:

(setq x #+SPICE                                        1.0L15000
        #+LISPM                                        #\Hyper-Space
	#+(AND (NOT LISPM) (NOT SPICE) (NOT EBCDIC))   #o40
	#+(AND (NOT LISPM) (NOT SPICE) EBCDIC)         #x40)

Look, I'd really prefer that we keep #+ and #- simple like they were
intended to be.  Let's not introduce new syntax that allows/requires
people to write unreadable messes such as your example above.  It's
not necessary to be able to implement Turing machines in #+/#- language.
It's not even necessary to be able to write arbitrarily complex
conditionals in the minimum number of characters.  People don't
write arbitrarily-complex conditionals with #+; they write
simple ones.  At Symbolics we considered allowing you to write
	(make-array 500 #2+LISPM :area disaster-area)
and rejected it, requiring you to write the more verbose but
easier to understand and explain
	(make-array 500 #+LISPM :area #+LISPM disaster-area)
Another thing we rejected is
	(make-array 500 #0+LISPM(:area disaster-area))
If you can't understand why we rejected these, think about it for
a few days.

This whole business with suppressing read errors while inside
a failing #+ has gotten totally out of hand.  All it was supposed
to be is a very simple kludge so that one need not be bothered with
obviously pointless error messages caused by floating-point overflow
in numeric constants intended for other implementations; the same
consideration applies to character constants and external-symbol
references using package prefixes.  There is no real need to be
able to skip over arbitrary Cobol programs in the middle of a Lisp
program, so let's not make life more difficult for the Lisp programmer
just to support something no one will ever need.  Let's simply make
it that inside a failing #+ READ will parse expressions but not
interpret them.

Actually I don't think the 11th hour is 11 pm.  More like 5 pm, unless
my memory is broken.  Just before sundown if you're at a Mediterraneanly
low latitude.

∂10-Oct-83  2323	RPG   	#+ and #- syntax  
 ∂10-Oct-83  2234	Guy.Steele@CMU-CS-A 	#+ and #- syntax   
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 10 Oct 83  22:33:53 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 11 Oct 83 01:22:01 EDT
Date: 11 Oct 83 0132 EDT (Tuesday)
From: Guy.Steele@CMU-CS-A
To: rpg@SU-AI, Scott.Fahlman <FAHLMAN@CMU-CS-C>, moon%scrc-tenex@MIT-ML,
    dlw%scrc-tenex@MIT-ML, bsg%scrc-tenex@MIT-ML
Subject: #+ and #- syntax

:-)  How about
	(setq x #+(and lispm gritches) #\Hyper-Space #;
		#+(and lispm (not gritches)) #\Hyper-Space
	      )

... no, I guess that wouldn't work, would it?

Seriously, folks, it looks like it was a bad idea, and I retract
the suggestion.  (Those of you familiar with constraint languages
know that that means it disappears unless someone else supports
it independently.)  It looks like our best bet is the straight
existing #+/#- syntax with the suppress-p thing.
--Guy

∂12-Oct-83  0642	Meehan@YALE 	SETF and Prolog  
Received: from YALE by SU-AI with TCP/SMTP; 12 Oct 83  06:42:04 PDT
Received: by YALE-BULLDOG via CHAOS; Wed, 12 Oct 83 09:45:40 EDT
Date:    Wed, 12 Oct 83 09:39:39 EDT
From:    Jim Meehan <Meehan@YALE.ARPA>
Subject: SETF and Prolog
To:      common-lisp@SU-AI.ARPA

As long as you're doing (SETF (+ X 3) 10), why not use SETF as a
notation for Prolog-style assertions?  E.g.,
(SETF (GRANDFATHER-OF X) 'THOMAS).

(-: There. We embedded it again. :-)
-------

∂12-Oct-83  1056	LES@CMU-CS-C.ARPA 	Re: SETF and Prolog  
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 12 Oct 83  10:52:54 PDT
Received: ID <LES@CMU-CS-C.ARPA>; Wed 12 Oct 83 13:54:38-EDT
Date: Wed 12 Oct 83 13:54:37-EDT
From: LES@CMU-CS-C.ARPA
Subject: Re: SETF and Prolog
To: Meehan@YALE.ARPA
cc: common-lisp@SU-AI.ARPA
In-Reply-To: Message from "Jim Meehan <Meehan@YALE.ARPA>" of Wed 12 Oct 83 09:50:24-EDT

Assuming that x has the structure of a person, i.e. that you use a defstruct
to define what a person is, then the setf form for the grandfather field
is well defined.  Something like:
(defstruct (person (:conc-name nil) ...)
  .
  .
  (grandfather-of nil)
  .
  .
  )
Will make the supplied setf form entirely valid.

-Lee
-------

∂13-Oct-83  1051	GALWAY@UTAH-20.ARPA 	SETF and LAMBDAs (semi-serious?)  
Received: from UTAH-20 by SU-AI with TCP/SMTP; 13 Oct 83  10:51:37 PDT
Date: Thu 13 Oct 83 11:53:06-MDT
From: William Galway <Galway@UTAH-20.ARPA>
Subject: SETF and LAMBDAs (semi-serious?)
To: Common-Lisp@SU-AI.ARPA

All this discussion about the power of SETF has reminded me of an
idea that I've been toying with for awhile.  Basically, I'd like
to modify the lambda calculus to allow non-atomic "arguments" for
the lambda.  So, in addition to things like:

    (lambda (x)
      (lambda (y)
        x))

(the K combinator, I think), it would also be legitimate to have
things like:

    (lambda (x)
      ((lambda ((y x)) (y x))
        x))

(which I think would be the identity function).

I don't claim that there's any good reason for doing this--it just seemed
like a natural thing to do during one of my weirder moments.  If anyone
knows if work has already been done with this funny variant of the lambda
calculus, I'd be interested in hearing about it.

On the other hand, if we leave the world of pure mathematical systems and
enter the world of Lisp, it does kind of appeal to me to be able to write
code like:

    (let (
          ((elt v 0) (length v)))
      v)

but, I'm not seriously suggesting that it be implemented...

-- Will Galway
-------

∂13-Oct-83  1232	@MIT-MC:MOON@SCRC-TENEX 	SETF and LAMBDAs (semi-serious?)   
Received: from MIT-MC by SU-AI with TCP/SMTP; 13 Oct 83  12:32:01 PDT
Date: Thursday, 13 October 1983  14:39-EDT
From: MOON at SCRC-TENEX
To:   William Galway <Galway at UTAH-20.ARPA>
Cc:   Common-Lisp at SU-AI.ARPA
Subject: SETF and LAMBDAs (semi-serious?)
In-reply-to: The message of Thu 13 Oct 83 11:53:06-MDT from William Galway <Galway@UTAH-20.ARPA>

This was implemented in Maclisp under the name of "destructuring LET".
It raises a number of complicated issues that I don't think we should get
involved in right now.  Note, as just one example, that you were not
consistent with yourself in the examples you give of what it might do.
Your first example 
    (lambda (x)
      ((lambda ((y x)) (y x))
        x))
seems to expect the argument to be a list of two elements, where y
is bound to the first element and x is bound to the second.  In other
words the "pattern" in the lambda-expression is a piece of data acting
as a template.  Unless I misunderstand the example completely, which
is possible since it certainly isn't written in Common Lisp.
In your second example,
    (let (
          ((elt v 0) (length v)))
      v)
the "pattern" in the lambda-expression (a let this time) is a piece of
code describing a location to be stored into.  I won't even go into the
deeper semantic issues raised by your second example; the syntactic
issues are enough to show that it is complicated.

Note that if you add a new special form (a macro) to the language,
rather than redefining existing things such as let and lambda, you
can experiment with various versions of such things in any Common Lisp
implementation, or any other reasonable Lisp implementation, without
any need to change the compiler, interpreter, or run-time.
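
Along the lines Moon suggests, a user-level macro is enough to experiment
with the "data paradigm".  A minimal sketch (the name DLET and its expansion
strategy are assumptions here, not the Maclisp implementation; bindings nest
sequentially, LET*-style):

    (defmacro dlet (bindings &body body)
      (if (null bindings)
          `(progn ,@body)
          (expand-dlet (caar bindings) (cadar bindings)
                       `(dlet ,(cdr bindings) ,@body))))

    ;; Recursively take the value apart with CAR/CDR, binding each symbol in
    ;; the pattern to the corresponding piece.  (In a file, this helper would
    ;; want to be wrapped in an EVAL-WHEN so the macro can expand.)
    (defun expand-dlet (pattern value body)
      (cond ((null pattern) body)
            ((symbolp pattern) `(let ((,pattern ,value)) ,body))
            (t (let ((tmp (gensym)))
                 `(let ((,tmp ,value))
                    ,(expand-dlet (car pattern) `(car ,tmp)
                                  (expand-dlet (cdr pattern) `(cdr ,tmp)
                                               body)))))))

    ;; (dlet (((y x) '(1 2))) (list x y))  =>  (2 1)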

∂13-Oct-83  1851	GALWAY@UTAH-20.ARPA 	Whoops (SETF and LAMBDAs)    
Received: from UTAH-20 by SU-AI with TCP/SMTP; 13 Oct 83  18:51:18 PDT
Date: Thu 13 Oct 83 19:52:53-MDT
From: William Galway <Galway@UTAH-20.ARPA>
Subject: Whoops (SETF and LAMBDAs)
To: Common-Lisp@SU-AI.ARPA

I guess I didn't make myself very clear in my previous message,
so I'll try again.  First, I'm talking about two related but
distinct things, namely
 1.) the lambda calculus
 2.) Lisp.

I picked a rather miserable example to illustrate what I want to
do in the lambda calculus, so here's another try.  The way I'd
assign a value to an expression like

    ((lambda (x)
       (plus x 1))
      2)

is to "evaluate (plus x 1) in an environment where `x' has the
value 2".  What I want to do is to extend this to the case where
x isn't an atom.  So to evaluate

    ((lambda ((sin x))
       (cos x))
      1)

I'd "evaluate (cos x) in an environment where the non-atomic
expression `(sin x)' has the value 1".  (I'm not necessarily
claiming that "plus", "sin", "cos", "2", "1", have their typical
meanings--I suppose it depends on what they're bound to.)  With
the usual meanings, I'd expect the value of the expression to be
zero, and if we plugged 0 instead of 1 into the lambda, the
result would be ambiguous, but either +1 or -1 should be valid
"interpretations".


In the case of Lisp, I was just thinking of LET as being a
convenient shorthand for LAMBDA.  So

    (let (
          ((elt v 0) (length v)))
      v)

is equivalent to

    ((lambda ((elt v 0))
       v)
      (length v))

(or should that be a "(function (lambda ...))"?  Anyway...)
So, is that roughly what MacLisp's "destructuring LET" did?
Something like SETF for lambdas, only without the idea that the
LET actually expanded to a lambda?

Let me also repeat that I'm not seriously suggesting
implementation of this stuff (or non-implementation for that
matter).  I'm just interested in toying with the ideas (for now).

Hope that clarifies what I was trying to get across.

-- Will
-------

∂13-Oct-83  2133	JONL.PA@PARC-MAXC.ARPA 	SETF madness    
Received: from PARC-MAXC by SU-AI with TCP/SMTP; 13 Oct 83  21:33:01 PDT
Date: 13 OCT 83 21:25 PDT
From: JONL.PA@PARC-MAXC.ARPA
Subject: SETF madness
To: Galway@UTAH-20.ARPA
cc: Common-Lisp@SAIL.ARPA

Sometime during the 1930s or 1940s it was proven that general recursion
equations were Turing equivalent . . . 

But in my lifetime, they haven't been taken too seriously as a computation
model by anyone who actually has to get work done "in real time" (Yes, PROLOG
is an exception, but I'll comment on that later.  Maybe.)


What all the partially-facetious, somewhat-wishful, nearly-serious suggestions
about extending SETF are leading up to is a re-introduction of recursion
equations as a programming paradigm.  To be realistic, there must be some
limit on the paradigms used.  MacLisp's limit was simply that 
  1) variables would be bound, to
  2) values obtained from CAR/CDR sequences over the input [VAX/NIL, which
     I think still has some remnant of the destructuring LET idea, also permits
     vector-referencing in addition to CAR/CDRing.]
This led to the very-compact description of how to "spread" the computed
values of a LET into a bunch of variables: namely, a data structure whose
form was to match that of an argument, and whose symbols were the variables
to be bound.  That's actually a very limited paradigm; but I stress that
those of us who used it found it most convenient.

Another serious proposal would have extended the limitation in point 1)
by including any LOCF'able place; this was coupled with a paradigm that
used a "program", rather than merely a data structure, to indicate how to
destructure the data.  Thus
   (LET ((`(,X ,Y) (MUMBLE))) ...)
instead of 
   (LET (((X Y) (MUMBLE))) ...)
That is, the "evaluable" places in the program would be places
where the destructured value would be "put".  Note that
   (LET (((GET 'X 'SOMEPROP) (MUMBLE))) ...)    ;"Program paradigm"
would bind the result of (MUMBLE) to a spot on the property list of the
symbol X; nothing at all in the "data paradigm" can express this.  In fact,
we just couldn't see how to do such binding efficiently in PDP10 MacLisp
(variables were easy -- they were built-in from antiquity), so that is one
reason we didn't try it out.  On the other hand, the LispMachine apparently
is capable of binding any random cell just as easily as binding a variable,
so it would have made more sense to attempt it there.  I believe it was
Rich Stallman's insight that this "program paradigm" was a super-set of
the "data paradigm", and could be implemented straight-forwardly; however,
I don't believe anyone ever did go to the trouble of building a prototype
and trying it out.


Now, back to the snipe at PROLOG.  It is true that PROLOG as a programming
language resembles recursion equations very much, and it is also true that
there are rumors of some PROLOG implementations running "in real time".
Furthermore, I admit that some problems are very succinctly stated in
PROLOG, and their re-formulation into one of the more classic programming
languages is a "hard" problem; so it's tempting to hope that some automating
of the conversion process will make programming a lot easier.  But I don't 
buy it, yet.  As you probe deeper into the use of PROLOG, you find that
you really can't do much without significant use of "cuts" and "directions",
which are in fact the serializers which tend to convert an apparently
descriptive program into a prescriptive one [i.e., turning a program
which says "I would like such-and-such an effect" into one that says
"First, do this, then do that, then do something else, then . . . "]

Except for all this madness about SETF, I'm fairly confident that the
Lisp community has its head screwed on straight about avoiding paradigms
that trap the unwary into exponentially worse performance.  If - - - 
If Prolog really makes it big, it will have to make it possible to
express simple things like, say,
   (for I from 1 to 100 sum (AREF A I))
in such a way that the loop-overhead doesn't take an order of magnitude
more time than the array referencing.  Sigh, and again, if it "makes it
big", then the prolog manual will probably expand in size from a small
thin pamphlet into something the size of the ZetaLisp or Interlisp tomes.

∂13-Oct-83  2155	Guy.Steele@CMU-CS-A 	SETF and LAMBDAs   
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 13 Oct 83  21:55:21 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 14 Oct 83 00:40:05 EDT
Date: 14 Oct 83 0050 EDT (Friday)
From: Guy.Steele@CMU-CS-A
To: William Galway <Galway@UTAH-20>
Subject: SETF and LAMBDAs
CC: common-lisp@SU-AI
In-Reply-To: "William Galway's message of 13 Oct 83 20:52-EST"

Yes, you have certainly hit upon an interesting class of ideas.
You might want to explore the literature on "declarative" and
"relational" languages, check out the PROLOG language, and perhaps
look at a couple of papers Sussman and I wrote on constraint
languages (one is in the proceedings of APL '79, and another in
the AI Journal a couple of years ago).  These all have some related
notions, though not exactly what you have suggested.  You might
ponder the notions that your idea requires (1) a general method
for inverting functions whose calls appear as lambda parameters,
and (2) some means of dealing with multiple solutions when the
inverses turn out to be one-many relations.
--Guy

∂14-Oct-83  1141	RPG   	Insane  
 ∂11-Oct-83  0928	FAHLMAN@CMU-CS-C.ARPA 	Insane 
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 11 Oct 83  09:27:26 PDT
Received: ID <VAF@CMU-CS-C.ARPA>; Tue 11 Oct 83 12:05:36-EDT
Date: Sun, 9 Oct 1983  16:54 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   David A. Moon <Moon%SCRC@MIT-MC.ARPA>
Cc:   bsg%SCRC@MIT-MC.ARPA, dlw%SCRC@MIT-MC.ARPA, Guy.Steele@CMU-CS-A.ARPA,
      rpg@SU-AI.ARPA, fahlman@CMU-CS-C.ARPA
Subject: Insane
In-reply-to: Msg of 9 Oct 1983 15:35-EDT from David A. Moon <Moon at SCRC at mit-mc>


The proposal to drop #+ and #- from the white pages is indeed insane if
you view these as mechanisms for porting code among Common Lisp
implementations.  It makes sense if you view these macros as being only
for the purpose of more easily inheriting code from earlier non-Common
Lisps.  In that case, these things could well be part of the Zetalisp
compatibility package, or whatever.  And, of course, compatibility with
earlier Lisps is the only possible reason for retaining the
brain-damaged syntax of #+ and #-.

Moon is persuasive in his argument that we ought to have some sort of
conditional reading facility in the white pages for sharing among
slightly different versions of Common Lisp.  The most common need for
this will be in the area of differing floating-point formats, but there
will probably be other places.  I now agree that we must provide such a
facility.  If it's worth doing this, it's worth doing it right.

My proposed #U and #W are extremely simple, extremely easy to implement,
and extremely easy to explain, and they do the right thing in all cases.
These benefits come from the fact that the alien stuff never gets near
the reader, except as uninterpreted characters in the guts of a string.
Now that Moon has convincingly made the argument that Common Lisp needs
such a facility in the white pages, I think it should be this one.  (I
would, of course, be happy to rename these to #+ and #- if people can
live with that.)

Moon is wrong in stating that #+ and #- are "extremely simple, extremely
easy to implement, and rather easy to explain."  He can't have all of
those at once.  The traditional #+ and #- macros, which just call READ
on the uninterpreted object, are easy to implement but would cause
errors in many of the cases of interest, including flonums and
non-standard character objects.  If we put in all the suppress-p
exceptions that he wants, it is hard to implement and hard to explain
properly, since all of the cases must be enumerated.  If we offer the
simple explanation that these forms conditionally read a lisp object and
discard the result, without causing any side effects or signaling
errors, it is impossible to implement #+ and #- correctly with the old
undelimited syntax.

I am strongly in favor of putting something with the syntax of #U and #W
into the white pages, whatever characters we assign these things to.  If
people really want to include the nearly-worthless simple forms of #+
and #- (just READ normally and discard the result), I can go along with
that as well.  I am still unwilling to do a lot of work to patch up our
reader to make #+ and #- work better.

-- Scott
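
For concreteness, a sketch of a #W-style dispatch macro along the lines
Fahlman describes (the FEATUREP helper and the choice of the W subcharacter
are assumptions, not a settled definition): the guarded text stays inside a
string, so the reader never interprets it unless the feature test succeeds.

    (defun featurep (expr)
      (cond ((atom expr) (member expr *features*))
            ((eq (car expr) 'and) (every #'featurep (cdr expr)))
            ((eq (car expr) 'or)  (some  #'featurep (cdr expr)))
            ((eq (car expr) 'not) (not (featurep (cadr expr))))))

    (set-dispatch-macro-character #\# #\W
      #'(lambda (stream subchar arg)
          (declare (ignore subchar arg))
          (let* ((feature (read stream t nil t))    ; the feature expression
                 (text    (read stream t nil t)))   ; the guarded text, a string
            (if (featurep feature)
                (values (read-from-string text))
                (values)))))                        ; contribute nothing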

∂14-Oct-83  1141	RPG   	#+ and Bob and #W and Alice 
 ∂11-Oct-83  2103	Guy.Steele@CMU-CS-A 	#+ and Bob and #W and Alice  
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 11 Oct 83  21:03:28 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 11 Oct 83 23:52:27 EDT
Date: 12 Oct 83 0002 EDT (Wednesday)
From: Guy.Steele@CMU-CS-A
To: Scott.Fahlman <FAHLMAN@CMU-CS-C>
Subject: #+ and Bob and #W and Alice
CC: rpg@SU-AI, bsg%scrc-tenex@MIT-ML, moon%scrc-tenex@MIT-ML,
    dlw%scrc-tenex@MIT-ML

Actually, #+ suffices:

	#+LISPM #,(read-from-string #+LISPM "#C(xiv viii)" #-LISPM "()")

instead of

	#WLISPM "#C(xiv viii)"

Admittedly it's a bit verbose.

I think I am somewhat more persuaded by Scott's arguments now.  Putting
double quotes around what (as Moon says) ought to be fairly short pieces
of code doesn't seem so bad.

I don't think positive and negative forms are all that necessary;
why can't we just write (not lispm) ?  I would suggest "#?" except
that we have reserved that to the user.  Therefore I propose "#@"
as a new name for "#W".

	(setq x #@SPICE "1.0L1500"
	        #@(AND (NOT SPICE) LISPM) "#\Hyper-Space"
		#@(AND (NOT SPICE) (NOT LISPM) EBCDIC) "#x40"
		#@(AND (NOT SPICE) (NOT LISPM) (NOT EBCDIC)) "#o40"
		)

However, I don't feel warranted in putting in this syntax as a language
change unless everyone can give it at least grudging support.
--Guy

∂14-Oct-83  1141	RPG   	#+ and Bob and #W and Alice 
 ∂11-Oct-83  2132	@MIT-MC:Moon%SCRC-TENEX@MIT-MC 	#+ and Bob and #W and Alice 
Received: from MIT-MC by SU-AI with TCP/SMTP; 11 Oct 83  21:32:33 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Wed 12-Oct-83 00:24:48-EDT
Date: Wednesday, 12 October 1983, 00:32-EDT
From: David A. Moon <Moon%SCRC-TENEX@MIT-MC>
Subject: #+ and Bob and #W and Alice
To: Guy.Steele@CMU-CS-A
Cc: Scott.Fahlman <FAHLMAN@CMU-CS-C>, rpg@SU-AI, bsg%SCRC-TENEX@MIT-MC,
    dlw%SCRC-TENEX@MIT-MC
In-reply-to: The message of 12 Oct 83 00:02-EDT from Guy.Steele at CMU-CS-A

    Date: 12 Oct 83 0002 EDT (Wednesday)
    From: Guy.Steele@CMU-CS-A

    I think I am somewhat more persuaded by Scott's arguments now.  Putting
    double quotes around what (as Moon says) ought to be fairly short pieces
    of code doesn't seem so bad.

#+ is frequently used in front of entire function definitions.  Thus you quickly
get into the infinitely deep morass of re-quoting quoted strings.  Ask anyone
who has written exec←com command procedures on Multics.  The more of this
discussion I read the more I think the present #+ is better thought out than
I thought originally.

By the way, what's so great about using double-quotes for grouping?  Isn't
this Lisp, where we customarily use parentheses for grouping?

∂14-Oct-83  1141	RPG   	#+ and Bob and #W and Alice 
 ∂12-Oct-83  0637	@MIT-MC:BSG%SCRC-TENEX@MIT-MC 	#+ and Bob and #W and Alice  
Received: from MIT-MC by SU-AI with TCP/SMTP; 12 Oct 83  06:36:59 PDT
Received: from SCRC-BEAGLE by SCRC-TENEX with CHAOS; Wed 12-Oct-83 09:40:13-EDT
Date: Wednesday, 12 October 1983, 09:38-EDT
From: Bernard S. Greenberg <BSG%SCRC-TENEX@MIT-MC>
Subject: #+ and Bob and #W and Alice
To: Guy.Steele@CMU-CS-A, FAHLMAN@CMU-CS-C
Cc: rpg@SU-AI, moon%SCRC-TENEX@MIT-MC, dlw%SCRC-TENEX@MIT-MC
In-reply-to: The message of 12 Oct 83 00:02-EDT from Guy.Steele at CMU-CS-A

    Date: 12 Oct 83 0002 EDT (Wednesday)
    From: Guy.Steele@CMU-CS-A

    I think I am somewhat more persuaded by Scott's arguments now.  Putting
    double quotes around what (as Moon says) ought to be fairly short pieces
    of code doesn't seem so bad.

	    (setq x #@SPICE "1.0L1500"
		    #@(AND (NOT SPICE) LISPM) "#\Hyper-Space"
Hey, great advocate of the triviality and the simplicity of the
syntax, you forgot to double the backslash.

I think this speaks against it.
		    #@(AND (NOT SPICE) (NOT LISPM) EBCDIC) "#x40"
		    #@(AND (NOT SPICE) (NOT LISPM) (NOT EBCDIC)) "#o40"
		    )
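
The re-quoting BSG points at compounds quickly.  A small illustration
(strings only, nothing new assumed): to survive one trip through a string
literal the backslash in #\Hyper-Space must be doubled, and each further
level of nesting doubles it again.

    "#\\Hyper-Space"          ; re-READs as the character #\Hyper-Space
    "\"#\\\\Hyper-Space\""    ; the same text nested one string deeper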

∂14-Oct-83  1143	RPG   	#+ and Bob and #W and Alice 
 ∂12-Oct-83  1015	@MIT-MC:DLW%SCRC-TENEX@MIT-MC 	#+ and Bob and #W and Alice  
Received: from MIT-MC by SU-AI with TCP/SMTP; 12 Oct 83  10:15:17 PDT
Received: from SCRC-SHEPHERD by SCRC-TENEX with CHAOS; Wed 12-Oct-83 13:17:56-EDT
Date: Wednesday, 12 October 1983, 13:13-EDT
From: Daniel L. Weinreb <DLW%SCRC-TENEX@MIT-MC>
Subject: #+ and Bob and #W and Alice
To: Moon%SCRC-TENEX@MIT-MC, Guy.Steele@CMU-CS-A
Cc: FAHLMAN@CMU-CS-C, rpg@SU-AI, bsg%SCRC-TENEX@MIT-MC
In-reply-to: The message of 12 Oct 83 00:32-EDT from David A. Moon <Moon at SCRC-TENEX>

Hi.  I've been out of town and just caught up with my mail.  I'm
reasonably happy with the discussion and the way decisions went.  The
decision about vertical bars in package names and use of :: is OK with
me.  I'm also in favor of keeping MACROLET as GLS recommended and Moon
further justified.  I agree fully with Moon about the portability of
Newline question; Moon and I have discussed this extensively in the
past.  On all the other issues besides #+, I agree with the consensus
on those issues that I know/care about.

Regarding #+, I agree completely with what Moon and BSG have already
said.  It would be ludicrous to flush the feature; it's very important.
I do NOT want to get into re-quoting issues; I have seen the evil that
this leads to and really don't want to get us involved in that.  The
feature to suppress errors is quite simple to explain and implement and
saves a lot of trouble.  The various hairy proposals do not particularly
appeal to me; let's keep it simple.

∂14-Oct-83  1143	RPG   	#+ and Bob and #W and Alice 
 ∂12-Oct-83  1028	FAHLMAN@CMU-CS-C.ARPA 	#+ and Bob and #W and Alice
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 12 Oct 83  10:28:31 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Wed 12 Oct 83 13:29:53-EDT
Date: Wed, 12 Oct 1983  13:29 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Daniel L. Weinreb <DLW%SCRC-TENEX@MIT-MC.ARPA>
Cc:   bsg%SCRC-TENEX@MIT-MC.ARPA, Guy.Steele@CMU-CS-A.ARPA,
      Moon%SCRC-TENEX@MIT-MC.ARPA, rpg@SU-AI.ARPA
Subject: #+ and Bob and #W and Alice
In-reply-to: Msg of 12 Oct 1983 13:13-EDT from Daniel L. Weinreb <DLW%SCRC-TENEX at MIT-MC>


Looks like everything is settled now.  Can this really be????

-- Scott

∂14-Oct-83  1144	RPG   	#+/#-   
 ∂12-Oct-83  1408	@MIT-MC:BSG@SCRC-TENEX 	#+/#- 
Received: from MIT-MC by SU-AI with TCP/SMTP; 12 Oct 83  14:08:29 PDT
Received: from scrc-beagle by scrc-vixen with CHAOS; 11 Oct 1983 15:20:53-EDT
Date: Tuesday, 11 October 1983, 15:24-EDT
From: Bernard S. Greenberg <BSG at SCRC at mit-mc>
Subject: #+/#-
To: Fahlman at CMU-CS-C at mit-mc, Moon at SCRC at mit-mc
Cc: dlw at SCRC at mit-mc, rpg at SU-AI at mit-mc, STEELE at CMU-CS-C at mit-mc
In-reply-to: The message of 8 Oct 83 16:27-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

Look, do you think that anyone is going to believe

  #W(whatever)
    "(if (string-search-char #\\: (fs:default-mumble \"FOO\"))..."
God forbid you tried to nest them?  Putting something in a #W 
causes minuscule, pervasive text modifications in the code conditionalized,
which are not easy to read, understand, add, or remove.

For this reason, #; has it all over #W.

But don't get me wrong, my votes are all cast for #+ #-, whatever kludgery
is needed in implementation.

∂14-Oct-83  1144	RPG   	Ted and #U and Carol and #- 
 ∂12-Oct-83  2011	Guy.Steele@CMU-CS-A 	Ted and #U and Carol and #-  
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 12 Oct 83  20:11:14 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 12 Oct 83 23:00:56 EDT
Date: 12 Oct 83 2310 EDT (Wednesday)
From: Guy.Steele@CMU-CS-A
To: Scott.Fahlman <FAHLMAN@CMU-CS-C>, rpg@SU-AI, bsg%scrc-tenex@MIT-ML,
    moon%scrc-tenex@MIT-ML, dlw%scrc-tenex@MIT-ML
Subject: Ted and #U and Carol and #-

I hereby declare the #+/#- issue settled.  It will remain as described,
with the various suppression features described by myself and Moon.

To my knowledge, then, the only outstanding point is this issue of LET*
and SETF.  I await eagerly the exposition from the Symbolics folks.
--Guy

∂20-Oct-83  1710	AS%HP-HULK.HP-Labs@Rand-Relay 	character names    
Received: from RAND-RELAY by SU-AI with TCP/SMTP; 20 Oct 83  17:10:12 PDT
Date: 20 Oct 1983 1341-PDT
From: AS.HP-HULK@Rand-Relay
Return-Path: <AS%HP-HULK.HP-Labs@Rand-Relay>
Subject: character names
Received: by HP-VENUS via CHAOSNET; 20 Oct 1983 13:43:08-PDT
To: Common-Lisp@SU-AI
Cc: AS%AS.HP-LABS@Rand-Relay
Message-Id: <435530590.19147.hplabs@HP-VENUS>
Via:  HP-Labs; 20 Oct 83 15:07-PDT

I could not find the answers to these questions in the Excelsior edition: In
which package do the character name symbols returned by the function CHAR-NAME
reside?  [I would guess the Lisp package.]  How does the function NAME-CHAR
compare its argument to the known character-name symbols?  Does it use EQ or
does it examine the print-name and do something fancy like #\ does (treat a
single-character print-name case-sensitively, but treat a longer print-name
case-insensitively)?  [I would guess the latter.]
-------

∂25-Oct-83  0820	RPG   	Need advice on token scanning in Common LISP    
 ∂24-Oct-83  0012	Guy.Steele@CMU-CS-A 	Need advice on token scanning in Common LISP
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 24 Oct 83  00:12:33 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 24 Oct 83 03:01:11 EDT
Date: 24 Oct 83 0302 EDT (Monday)
From: Guy.Steele@CMU-CS-A
To: Scott.Fahlman <FAHLMAN@CMU-CS-C>, rpg@SU-AI, rms%oz@MIT-ML,
    moon%scrc-tenex@MIT-ML, dlw%scrc-tenex@MIT-ML, bsg%scrc-tenex@MIT-ML
Subject: Need advice on token scanning in Common LISP

I foresee problems in making future extensions to Common LISP unless we
make provisions now.  I will make *none* of these changes, except the
one that fixes an ambiguity, unless most of you agree to them, and soon.

The ambiguity is that when *read-base* is 16, for example, 1E0 can be
interpreted as a hexadecimal number *and* as a floating-point number.

The extensibility issues are in the treatment of tokens.  Right now, the
specification is "anything that doesn't look like a number is a symbol".
This can be misleading, and doesn't leave much room for extensions.  For
example, "5R0" and "5J0" are required to be treated as symbols, whereas
"5S0" and "5L0" are floating-point numbers.  The implementation that
wishes to experiment with a notation for complex numbers such as "5+3J"
is also out of luck.

I have several relatively orthogonal proposals:

(1) When a letter could be interpreted as both a digit (because
*read-base* is larger than 10 and the token contains no decimal point)
and as a floating-point exponent marker, it shall be interpreted as a
digit.  (I will make this clarification unless someone provides a better
resolution.)

(2) Make all letters be valid floating-point exponent markers (most of
them being reserved for future use).  (Item (3) subsumes this proposal
in a more general way.)

(3) Any token satisfying the following description and not fitting the
syntax of a Common LISP number is not a symbol, but is reserved for
extensions; for each such token every implementation must either
provide an interpretation as a LISP object or else signal an error.

    Consists entirely of digits, plus or minus signs (+ and -), ratio
    markers (/), decimal points (.), the extension characters "↑" and "←",
    and number markers (a letter may be interpreted as a number marker
    only if not preceded or followed by another letter).  No character
    whose interpretation must be alphabetic is permitted.  A letter is
    treated as alphabetic if it cannot be treated as a digit or number
    marker.
    
    Begins with a digit, sign, decimal point, or "↑" or "←".
    
    Contains at least one digit (very important).  Letters may be
    considered to be digits, depending on *read-base*, but only in tokens
    containing no decimal points.
    
    Does not end with a sign (this legitimizes 1+ and 1-).

(4) Tokens containing the character "%" are specifically reserved to
    the implementor and are not portable.

(5) For robustness, make the whitespace and rubout characters in table 22-3
    (Excelsior page 270) have the attribute "illegal" instead of "alphabetic".
    This move is purely for robustness: even if someone is idiotic enough
    to try to make them be constituents instead of whitespace (or whatever),
    it is still necessary to use escape characters to get them into tokens.

--Guy

∂25-Oct-83  0822	RPG   	Need advice on token scanning in Common LISP    
 ∂24-Oct-83  0843	FAHLMAN@CMU-CS-C.ARPA 	Need advice on token scanning in Common LISP   
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 24 Oct 83  08:43:30 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Mon 24 Oct 83 11:46:46-EDT
Date: Mon, 24 Oct 1983  11:46 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   Guy.Steele@CMU-CS-A.ARPA
Cc:   bsg%scrc-tenex@MIT-ML.ARPA, dlw%scrc-tenex@MIT-ML.ARPA,
      moon%scrc-tenex@MIT-ML.ARPA, rms%oz@MIT-ML.ARPA, rpg@SU-AI.ARPA
Subject: Need advice on token scanning in Common LISP
In-reply-to: Msg of 24 Oct 83 0302 EDT () from Guy.Steele at CMU-CS-A


Regarding the ambiguity Guy points out, I'd like to keep this simple.
Remember that we allowed *READ-BASE* to crawl back into the language
only so that it could be used in reading data files and to aid in
importing old code from the brain-damaged octal Lisps, not for use in
portable code.  With respect to supra-decimal bases, only the former
use comes up.  So it would not bother me too much to restrict the
freedom of expression of those people who set *READ-BASE* to something
greater than 10.  Remember too that it was a rather close decision about
whether 1E6 should be a legal flonum.  Let me propose the following way
to resolve the ambiguity:

In general, floating point numbers must have a decimal point embedded in
them somewhere (not at the end).  For user convenience, if *read-base*
is ten or less, the decimal point may be omitted and a flonum may be
indicated by any one of the built-in exponent markers: s, f, d, l, b (or
the upper case versions thereof).  If *read-base* is > 10., the decimal
point is required, else the number is interpreted as an integer or symbol,
depending on whether it is a legal integer in the specified radix.

If people don't like that, I could live with Guy's simple solution 1: if
you can't tell if something is a digit or exponent, it's a digit.  Might
be a bit hairy to implement -- I haven't thought through all the cases
yet -- but it's unambiguous.


    (4) Tokens containing the character "%" are specifically reserved to
        the implementor and are not portable.

In fact, both the CMU-derived Common Lisp implementations and Lispm use
% as a "don't tread on me" character in function and variable names, so
maybe we should make it official.  However, there seem to be a bunch of
people who automatically object to any attempt to remove classes of
names from the legal name space, and the right long-run solution is to
do this with packages, so maybe we shouldn't make this official.

Guy's proposed change to the handling of whitespace characters looks OK
to me, though I haven't felt the need to save users from this particular
form of insanity.  I abstain on point 5.

-- Scott

∂25-Oct-83  0822	RPG   	Need advice on token scanning in Common LISP    
 ∂24-Oct-83  1020	@MIT-ML:Moon%SCRC-TENEX@MIT-ML 	Need advice on token scanning in Common LISP    
Received: from MIT-ML by SU-AI with TCP/SMTP; 24 Oct 83  10:19:48 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Mon 24-Oct-83 13:17:03-EDT
Date: Monday, 24 October 1983, 13:15-EDT
From: David A. Moon <Moon%SCRC-TENEX@MIT-ML>
Subject: Need advice on token scanning in Common LISP
To: Guy.Steele@CMU-CS-A, Scott E. Fahlman <Fahlman@CMU-CS-C>
Cc: rpg@SU-AI, rms%oz@MIT-ML, moon%scrc-tenex@MIT-ML, dlw%scrc-tenex@MIT-ML,
    bsg%scrc-tenex@MIT-ML
In-reply-to: The message of 24 Oct 83 03:02-EDT from Guy.Steele at CMU-CS-A,
             The message of 24 Oct 83 11:46-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

    Date: 24 Oct 83 0302 EDT (Monday)
    From: Guy.Steele@CMU-CS-A

    The ambiguity is that when *read-base* is 16, for example, 1E0 can be
    interpreted as a hexadecimal number *and* as a floating-point number.

    (1) When a letter could be interpreted as both a digit (because
    *read-base* is larger than 10 and the token contains no decimal point)
    and as a floating-point exponent marker, it shall be interpreted as a
    digit.  (I will make this clarification unless someone provides a better
    resolution.)

This is right.  We should definitely say this in the manual.  A
higher-level way of saying the same thing is that if a token could be
interpreted as either a floating-point number or an integer, it is
always interpreted as an integer.  (This rule is already enforced by the
slight contortions in the BNF (Excelsior page 268) to make the syntax
for floating-point number exclude the uses of decimal points in
integers.)
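
A small illustration of the rule as stated (the values shown assume this
clarification is adopted):

    (let ((*read-base* 16.))
      (read-from-string "1E0"))   ; => 480, the integer #x1E0, not a float
    (let ((*read-base* 10.))
      (read-from-string "1E0"))   ; => 1.0, since E cannot be a digit here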

    The extensibility issues are in the treatment of tokens.  Right now, the
    specification is "anything that doesn't look like a number is a symbol".
    This can be misleading, and doesn't leave much room for extensions.  For
    example, "5R0" and "5J0" are required to be treated as symbols, whereas
    "5S0" and "5L0" are floating-point numbers.  The implementation that
    wishes to experiment with a notation for complex numbers such as "5+3J"
    is also out of luck.

    (2) Make all letters be valid floating-point exponent markers (most of
    them being reserved for future use).  (Item (3) subsumes this proposal
    in a more general way.)

    (3) Any token satisfying the following description and not fitting the
    syntax of a Common LISP number is not a symbol, but is reserved for
    extensions; for each such token every implementation must either
    provide an interpretation as a LISP object or else signal an error.

What else could they do but provide an interpretation as a Lisp object or
signal an error?  Or do you mean that the Lisp object is required to
be something other than a symbol?

	Consists entirely of digits, plus or minus signs (+ and -), ratio
	markers (/), decimal points (.), the extension characters "↑" and "←",

It seems odd to special-case ↑ and ← since Common Lisp does not have them.
Maclisp has them but what if somebody decides they want to use < and > as
extension characters in numbers?

	and number markers (a letter may be interpreted as a number marker
	only if not preceded or followed by another letter).  No character
	whose interpretation must be alphabetic is permitted.  A letter is
	treated as alphabetic if it cannot be treated as a digit or number
	marker.
    
	Begins with a digit, sign, decimal point, or "↑" or "←".
    
	Contains at least one digit (very important).  Letters may be
	considered to be digits, depending on *read-base*, but only in tokens
	containing no decimal points.
    
	Does not end with a sign (this legitimizes 1+ and 1-).

This is okay with me provided that an implementation is allowed to interpret
these tokens as symbols.  In that case what we are specifying is simply that
there is a certain class of tokens that may not be used in portable programs,
because different implementations will interpret them differently.  Elsewhere
in the Common Lisp manual the phrase "is an error" is sometimes used to mean
this, i.e. "is an implementation-dependent extension or an error".

    (4) Tokens containing the character "%" are specifically reserved to
	the implementor and are not portable.

The use of %'s in the Lisp machine is merely a holdover from the days before
there were packages.  I don't think this anomaly should be enshrined in
Common Lisp.  Whether or not an implementation uses such a convention internally,
these symbols should not be in the LISP package.  USE-PACKAGE can be used to
get them in places where they are wanted.

    (5) For robustness, make the whitespace and rubout characters in table 22-3
	(Excelsior page 270) have the attribute "illegal" instead of "alphabetic".
	This move is purely for robustness: even if someone is idiotic enough
	to try to make them be constituents instead of whitespace (or whatever),
	it is still necessary to use escape characters to get them into tokens.

I don't care about this one either way.


    Date: Mon, 24 Oct 1983  11:46 EDT
    From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
    Regarding the ambiguity Guy points out, I'd like to keep this simple....

I think Guy's proposal is simpler than Scott's proposal.


Possibly also relevant to this discussion is the solution we adopted a while back
to the *READ-BASE* > 10 problem.  We provide the following two variables:

;;; Handling of tokens that can be symbols, floating-point numbers, or integers in
;;; bases greater than ten.

;;; The following flags are global modes that control what happens when
;;; IBASE is greater than ten.  Certain tokens made up of digits and letters
;;; could be interpreted either as integers or as symbols (or in some cases
;;; floating-point numbers).
;;; The initial values given here are compatible with both Zetalisp and Common Lisp.
   ******* actually Common Lisp seems to have changed its mind since 
   ******* the above comment was written.  The bottom of Excelsior 268
   ******* indicates that both variables would be T.

(DEFVAR *READ-EXTENDED-IBASE-UNSIGNED-NUMBER* ':SINGLE
  "Controls how a token that could be a number or a symbol, and does not start
with a + or - sign, is interpreted when IBASE is greater than ten.
NIL => it is never a number.
T => it is always a number.
:SHARPSIGN => it is a symbol at top level, but a number after #X or #nR.
:SINGLE => it is a symbol except immediately after #X or #nR.")

(DEFVAR *READ-EXTENDED-IBASE-SIGNED-NUMBER* ':SHARPSIGN		;"White's Hack"
  "Controls how a token that could be a number or a symbol, and starts
with a + or - sign, is interpreted when IBASE is greater than ten.
NIL => it is never a number.
T => it is always a number.
:SHARPSIGN => it is a symbol at top level, but a number after #X or #nR.
:SINGLE => it is a symbol except immediately after #X or #nR.")

∂25-Oct-83  0822	RPG   	Need advice on token scanning in Common LISP    
 ∂24-Oct-83  1912	FAHLMAN@CMU-CS-C.ARPA 	Need advice on token scanning in Common LISP   
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 24 Oct 83  19:12:23 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Mon 24 Oct 83 22:15:16-EDT
Date: Mon, 24 Oct 1983  22:15 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To:   David A. Moon <Moon%SCRC-TENEX@MIT-ML.ARPA>
Cc:   bsg%scrc-tenex@MIT-ML.ARPA, dlw%scrc-tenex@MIT-ML.ARPA,
      Guy.Steele@CMU-CS-A.ARPA, rms%oz@MIT-ML.ARPA, rpg@SU-AI.ARPA
Subject: Need advice on token scanning in Common LISP
In-reply-to: Msg of 24 Oct 1983 13:15-EDT from David A. Moon <Moon%SCRC-TENEX at MIT-ML>


I guess that given Moon's rephrasing (if a token could be either a
floating-point number or an integer, it's an integer), I agree that
Guy's solution is simpler than mine.  Let's go with that.  And let's not
mess with reserving tokens with % in them.

-- Scott

∂25-Oct-83  0823	RPG   	RMS replies to 15 hairy issues   
 ∂25-Oct-83  0758	Guy.Steele@CMU-CS-A 	RMS replies to 15 hairy issues    
Received: from CMU-CS-A by SU-AI with TCP/SMTP; 25 Oct 83  07:57:19 PDT
Received: from [128.2.254.192] by CMU-CS-PT with CMUFTP; 25 Oct 83 10:44:33 EDT
Date: 25 Oct 83 1052 EDT (Tuesday)
From: Guy.Steele@CMU-CS-A
To: Scott.Fahlman <FAHLMAN@CMU-CS-C>, rpg@SU-AI, moon%scrc-tenex@MIT-ML,
    dlw%scrc-tenex@MIT-ML, bsg%scrc-tenex@MIT-ML
Subject: RMS replies to 15 hairy issues


- - - - Begin forwarded message - - - -
Received: from CMU-CS-PT by CMU-CS-A; 11 Oct 83 05:37:16 EDT
Received: from CMU-CS-C by CMU-CS-PT; 11 Oct 83 05:26:08 EDT
Received: from MIT-ML by CMU-CS-C with TCP; Tue 11 Oct 83 05:36:16-EDT
Date: Tue 11 Oct 83 03:47:35-EDT
From: RMS%MIT-OZ@MIT-MC.ARPA
Subject: Re: Nasty Common LISP Issues
To: STEELE@CMU-CS-C.ARPA
In-Reply-To: Message from "STEELE@CMU-CS-C.ARPA" of Sat 8 Oct 83 01:14:34-EDT

1. I agree with you about MACROLET bodies.

2. I have already gone to the trouble of converting the reader to fit
the last version of the manual.  This change is neither the old
way I had it nor the new way, and therefore means more hassles changing
reader and readtable together.  Ugh.
3. I have told the users that "::" prefixes refer to "shadowed"
package names.  Common Lisp does not have local nicknames that
can override global names for packages, but I do, and I also need
a way to override them (refer only to global names).
I suppose I could swap the meanings of #: and ::, but then I am
no better off than now (I still have to be using #: for something)
and it does mean a hassle.  (I also use the combination #::).
So I'm somewhat opposed to changes 2 and 3.

4. You speak as if you were passing on what I said about RECURSIVE-P,
but you have changed it.  What I propose is that RECURSIVE-P should
have no effect on the handling of end-of-file.  That should be controlled
only by EOF-ERRORP and EOF-VALUE.  Recursive calls to READ can simply
pass T for EOF-ERRORP.  This is simpler to describe, and also eliminates
the need for any hair to pass along the outer values of EOF-ERRORP and
EOF-VALUE.  This is never done, in my proposed scheme.

What advantage is there in passing along the top-level value of EOF-ERRORP
and never looking at it?

Meanwhile, what is the use of the RECURSIVE-P argument in READ-LINE,
READ-CHAR and PEEK-CHAR?  Have you decided to flush it?

5. I have fixed this #+ problem approximately as you have said,
so I approve.  I did it using a special variable which is non-NIL
when within failing conditionals.  This is a much better way to
allow user readmacros to test for that situation, since it does not
require a wholesale change to their calling sequence.  I would not
mind adding a new name to this special variable.  *READ-SKIP* perhaps?

7. I approve.

8. Yes, let's flush them.  Or perhaps you can just write them
and hand them out to everyone else?

9. I don't care.  I've already implemented it as specified but undocumenting
those things is no problem.

11. I approve.

12. This change to ~T is a pain, since it gets the character position by
simply asking the stream :READ-CURSORPOS.

15. I changed EVAL-WHEN this summer.  In the compiler, if an EVAL-WHEN
appears immediately inside another EVAL-WHEN, their COMPILE and LOAD
attributes are effectively AND'ed together.  I think your proposal is
essentially what it used to do, with one exception: DEFMACRO.
Evalling a DEFMACRO during compilation just causes the DEFMACRO to
be recognized for the rest of the file.  A DEFMACRO inside an
EVAL-WHEN that did not include COMPILE used to be not recognized.
I think this is the reason why DEFSTRUCT has always produced
(EVAL-WHEN (COMPILE LOAD EVAL) ...)

I fixed that by making a DEFMACRO inside an EVAL-WHEN with LOAD
also be recognized for the rest of compilation.  Then there was
no reason why DEFSTRUCT had to make such an EVAL-WHEN, and in fact
that was a nuisance.  But I couldn't change it since the Common Lisp
manual wanted DEFSTRUCT to keep on making such EVAL-WHENs.  The
only other fix was to have a way to get rid of the COMPILE in certain
situations, and that is why I made nested EVAL-WHENs AND together.

Your proposal, however, also makes it unnecessary for DEFSTRUCT
to say EVAL-WHEN COMPILE.  In addition, it makes it wrong for DEFSTRUCT
to say that, because then compiling a file with a DEFSTRUCT in it
would permanently define its macros.  It seems correct for DEFSTRUCT
to make no EVAL-WHEN whatever.

With this change, I will no longer need to be concerned so much with
what nested EVAL-WHENs do.  As things stand, it is very difficult for
me to change anything since then there might be no way at all for me
to make certain things work.


There is a fundamental problem with DEFMACROs and EVAL-WHEN.
There seem to be four things one can do with DEFMACROs:
  eval them when the file is read
  eval them when the file is compiled
  put them in the QFASL file
  recognize them for compilation of the rest of the file
Different designs for EVAL-WHEN often differ mainly by how they let these
four options be controlled by three bits of information.
So do we really need a fourth EVAL-WHEN time?  RECOGNIZE?  COMPILE-FILE?
Is it clear that one never wants a DEFMACRO that is not going to be
recognized for compilation of the rest of the file?
-------
- - - - End forwarded message - - - -

∂25-Oct-83  0826	RPG   	Random idea  
 ∂19-Oct-83  0912	@MIT-MC:DLW%SCRC-TENEX@MIT-MC 	Random idea   
Received: from MIT-MC by SU-AI with TCP/SMTP; 19 Oct 83  09:11:57 PDT
Received: from SCRC-SHEPHERD by SCRC-TENEX with CHAOS; Wed 19-Oct-83 12:12:09-EDT
Date: Wednesday, 19 October 1983, 12:13-EDT
From: Daniel L. Weinreb <DLW%SCRC-TENEX@MIT-MC>
Subject: Random idea
To: Fahlman@CMU-CS-C, steele@CMU-CS-C, rpg@SU-AI, moon%SCRC-TENEX@MIT-MC,
    bsg%SCRC-TENEX@MIT-MC, dlw%SCRC-TENEX@MIT-MC
In-reply-to: The message of 30 Sep 83 02:37-EDT from Scott E. Fahlman <Fahlman at CMU-CS-C>

I finally got around to reading the &MORE discussion (I'm struggling my
way out of my back mail, today).  Just for the record, I agree with Moon.
It's OK to introduce &MORE if people really want it, but if we went back
and re-evaluated every &REST in our system, I would guess no more than
5% of them are really better off as &MOREs, 20% are just iterated over
so it really doesn't make any difference (DOLIST and DOTIMES are equally
easy to use), and the rest are used as lists, generally for APPLY (a.k.a.
LEXPR-FUNCALL), and so very few &RESTs would get converted to &MOREs.

Anyway, that was just for the record; I don't mean to start up the
discussion again.  I agree that it should wait for Common Lisp II ("Son
of Common Lisp"?  "SOCL"?  "Common Lisp Strikes Back?").

∂28-Oct-83  1602	RPG   	RMS replies to 15 hairy issues   
 ∂28-Oct-83  1559	@MIT-ML:Moon%SCRC-TENEX@MIT-ML 	RMS replies to 15 hairy issues   
Received: from MIT-ML by SU-AI with TCP/SMTP; 28 Oct 83  15:59:07 PDT
Received: from SCRC-EUPHRATES by SCRC-TENEX with CHAOS; Fri 28-Oct-83 18:53:31-EDT
Date: Friday, 28 October 1983, 18:52-EDT
From: David A. Moon <Moon%SCRC-TENEX@MIT-ML>
Subject: RMS replies to 15 hairy issues
To: Guy.Steele@CMU-CS-A
Cc: Scott.Fahlman <FAHLMAN@CMU-CS-C>, rpg@SU-AI, moon%SCRC-TENEX@MIT-ML,
    dlw%SCRC-TENEX@MIT-ML, bsg%SCRC-TENEX@MIT-ML
In-reply-to: The message of 25 Oct 83 10:52-EDT from Guy.Steele at CMU-CS-A

I looked over RMS's comments.  I don't think he's brought up any new
issues that we need to consider, assuming my comments on the same
things didn't get overlooked (e.g. recursive-p in read).
They did bring two remarks to mind:

In his discussion of DEFSTRUCT and EVAL-WHEN, he didn't look carefully
enough at what DEFSTRUCT does.  The way it is implemented leaves it
no choice but to modify the compile-time environment, because it has
to put properties on symbols such as the name of the structure.  Without
a way to store those properties instead in the same data base that the
compiler uses to remember macro definitions, DEFSTRUCT has no choice
but to have an internal EVAL-WHEN (COMPILE LOAD EVAL).  See my message
of Friday, 2 September 1983, 14:41-EDT.
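
The shape Moon is describing, as a hedged sketch (the property indicator and
slot list are made up; the point is only that the information is recorded at
compile time as well as load time, so later forms in the same file can see it):

    (eval-when (compile load eval)
      (setf (get 'ship 'structure-description)
            '(ship (x-position y-position mass))))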

I'm still unhappy with the vertical bars in package prefixes on qualified
names, and still unhappy with the change of #: to ::.  I'd like to agitate
again for simplifying the syntax as I proposed in my message of
Friday, 7 October 1983, 22:09-EDT.  Even though it would make RMS happy.