perm filename COMMON.MSG[COM,LSP]1 blob sn#640411 filedate 1982-02-08 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00125 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00018 00002	∂30-Dec-81  1117	Guy.Steele at CMU-10A 	Text-file versions of DECISIONS and REVISIONS documents  
C00020 00003	∂23-Dec-81  2255	Kim.fateman at Berkeley 	elementary functions
C00023 00004	∂01-Jan-82  1600	Guy.Steele at CMU-10A 	Tasks: A Reminder and Plea 
C00027 00005	∂08-Dec-81  0650	Griss at UTAH-20 (Martin.Griss) 	PSL progress report   
C00036 00006	∂15-Dec-81  0829	Guy.Steele at CMU-10A 	Arrgghhh blag    
C00038 00007	∂18-Dec-81  0918	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	information about Common Lisp implementation  
C00042 00008	∂21-Dec-81  0702	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Extended-addressing Common Lisp 
C00044 00009	∂21-Dec-81  1101	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
C00045 00010	∂21-Dec-81  1512	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Common Lisp
C00048 00011	∂22-Dec-81  0811	Kim.fateman at Berkeley 	various: arithmetic  commonlisp broadcasts  
C00051 00012	∂22-Dec-81  0847	Griss at UTAH-20 (Martin.Griss) 	[Griss (Martin.Griss): Re: Common Lisp]   
C00055 00013	∂23-Dec-81 1306	Guy.Steele at CMU-10A 	Re: various: arithmetic commonlisp broadcasts 
C00063 00014	∂18-Dec-81  1533	Jon L. White <JONL at MIT-XX> 	Extended-addressing Common Lisp   
C00064 00015	∂21-Dec-81  0717	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
C00066 00016	∂22-Dec-81  0827	Griss at UTAH-20 (Martin.Griss) 	Re: various: arithmetic  commonlisp broadcasts
C00068 00017	∂04-Jan-82  1754	Kim.fateman at Berkeley 	numbers in common lisp   
C00077 00018	∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
C00083 00019	∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
C00085 00020	∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
C00087 00021	∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
C00089 00022	∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
C00092 00023	∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
C00095 00024	∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
C00096 00025	∂17-Jan-82  1756	Guy.Steele at CMU-10A 	Sequence functions    
C00099 00026	∂17-Jan-82  2207	Earl A. Killian <EAK at MIT-MC> 	Sequence functions    
C00101 00027	∂18-Jan-82  0235	Richard M. Stallman <RMS at MIT-AI> 	subseq and consing
C00103 00028	∂18-Jan-82  0822	Don Morrison <Morrison at UTAH-20> 	Re: subseq and consing  
C00104 00029	∂02-Jan-82  0908	Griss at UTAH-20 (Martin.Griss) 	Com L  
C00107 00030	∂14-Jan-82  0732	Griss at UTAH-20 (Martin.Griss) 	Common LISP 
C00108 00031	∂14-Jan-82  2032	Jonathan A. Rees <JAR at MIT-MC>   
C00111 00032	∂15-Jan-82  0109	RPG   	Rutgers lisp development project 
C00124 00033	∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
C00130 00034	∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
C00132 00035	∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
C00134 00036	∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
C00136 00037	∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
C00139 00038	∂16-Jan-82  1252	Griss at UTAH-20 (Martin.Griss) 	Kernel for Commaon LISP    
C00141 00039	∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
C00144 00040	∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
C00145 00041	∂17-Jan-82  0618	Griss at UTAH-20 (Martin.Griss) 	Agenda 
C00148 00042	∂17-Jan-82  1751	Feigenbaum at SUMEX-AIM 	more on Interlisp-VAX    
C00154 00043	∂17-Jan-82  1756	Guy.Steele at CMU-10A 	Sequence functions    
C00157 00044	∂17-Jan-82  2042	Earl A. Killian <EAK at MIT-MC> 	Sequence functions    
C00159 00045	∂18-Jan-82  0235	Richard M. Stallman <RMS at MIT-AI> 	subseq and consing
C00161 00046	∂18-Jan-82  0822	Don Morrison <Morrison at UTAH-20> 	Re: subseq and consing  
C00162 00047	∂18-Jan-82  1602	Daniel L. Weinreb <DLW at MIT-AI> 	subseq and consing  
C00163 00048	∂18-Jan-82  2203	Scott.Fahlman at CMU-10A 	Re: Sequence functions  
C00166 00049	∂19-Jan-82  1551	RPG  	Suggestion    
C00168 00050	∂19-Jan-82  2113	Griss at UTAH-20 (Martin.Griss) 	Re: Suggestion        
C00170 00051	∂20-Jan-82  1604	David A. Moon <MOON5 at MIT-AI> 	Keyword style sequence functions
C00187 00052	∂20-Jan-82  1631	Kim.fateman at Berkeley 	numerics and common-lisp 
C00196 00053	∂20-Jan-82  2008	Daniel L. Weinreb <dlw at MIT-AI> 	Suggestion     
C00198 00054	∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
C00200 00055	∂18-Jan-82  1537	Daniel L. Weinreb <DLW at MIT-AI> 	subseq and consing  
C00201 00056	∂18-Jan-82  2203	Scott.Fahlman at CMU-10A 	Re: Sequence functions  
C00204 00057	∂19-Jan-82  1551	RPG  	Suggestion    
C00207 00058	∂19-Jan-82  2113	Griss at UTAH-20 (Martin.Griss) 	Re: Suggestion        
C00209 00059	∂19-Jan-82  2113	Fahlman at CMU-20C 	Re: Suggestion      
C00211 00060	∂20-Jan-82  1604	David A. Moon <MOON5 at MIT-AI> 	Keyword style sequence functions
C00228 00061	∂20-Jan-82  1631	Kim.fateman at Berkeley 	numerics and common-lisp 
C00237 00062	∂20-Jan-82  2008	Daniel L. Weinreb <dlw at MIT-AI> 	Suggestion     
C00239 00063	∂19-Jan-82  1448	Feigenbaum at SUMEX-AIM 	more on common lisp 
C00247 00064	∂20-Jan-82  2132	Fahlman at CMU-20C 	Implementations
C00255 00065	∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
C00258 00066	∂21-Jan-82  1746	Earl A. Killian <EAK at MIT-MC> 	SET functions    
C00259 00067	∂21-Jan-82  1803	Richard M. Stallman <RMS at MIT-AI>
C00261 00068	∂21-Jan-82  1844	Don Morrison <Morrison at UTAH-20> 
C00264 00069	∂21-Jan-82  2053	George J. Carrette <GJC at MIT-MC> 
C00267 00070	∂21-Jan-82  1144	Sridharan at RUTGERS (Sri) 	S-1 CommonLisp   
C00278 00071	∂21-Jan-82  1651	Earl A. Killian <EAK at MIT-MC> 	SET functions    
C00279 00072	∂21-Jan-82  1803	Richard M. Stallman <RMS at MIT-AI>
C00281 00073	∂21-Jan-82  1844	Don Morrison <Morrison at UTAH-20> 
C00284 00074	∂21-Jan-82  2053	George J. Carrette <GJC at MIT-MC> 
C00286 00075	∂22-Jan-82  1842	Fahlman at CMU-20C 	Re: adding to kernel
C00290 00076	∂22-Jan-82  1914	Fahlman at CMU-20C 	Multiple values
C00292 00077	∂22-Jan-82  2132	Kim.fateman at Berkeley 	Re: adding to kernel
C00296 00078	∂23-Jan-82  0409	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
C00300 00079	∂23-Jan-82  0910	RPG  
C00302 00080	∂23-Jan-82  1841	Fahlman at CMU-20C  
C00305 00081	∂23-Jan-82  2029	Fahlman at CMU-20C 	Re:  adding to kernel    
C00311 00082	∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
C00312 00083	∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
C00314 00084	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00316 00085	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00318 00086	∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
C00322 00087	∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
C00324 00088	∂24-Jan-82  2246	Kim.fateman at Berkeley 	NIL/Macsyma    
C00326 00089	∂25-Jan-82  1558	DILL at CMU-20C 	eql => eq?   
C00329 00090	∂25-Jan-82  1853	Fahlman at CMU-20C 	Re: eql => eq? 
C00330 00091	∂27-Jan-82  1034	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
C00333 00092	∂27-Jan-82  1445	Jon L White <JONL at MIT-MC> 	Multiple mailing lists?  
C00334 00093	∂27-Jan-82  1438	Jon L White <JONL at MIT-MC> 	Two little suggestions for macroexpansion    
C00340 00094	∂27-Jan-82  2202	RPG  	MVLet    
C00343 00095	∂28-Jan-82  0901	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
C00345 00096	∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
C00346 00097	∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
C00348 00098	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00350 00099	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00352 00100	∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
C00356 00101	∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
C00358 00102	∂24-Jan-82  2246	Kim.fateman at Berkeley 	NIL/Macsyma    
C00360 00103	∂25-Jan-82  1436	Hanson at SRI-AI 	NIL and DEC VAX Common LISP
C00362 00104	∂25-Jan-82  1558	DILL at CMU-20C 	eql => eq?   
C00365 00105	∂25-Jan-82  1853	Fahlman at CMU-20C 	Re: eql => eq? 
C00366 00106	∂28-Jan-82  0901	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
C00368 00107	∂28-Jan-82  1235	Fahlman at CMU-20C 	Re: MVLet      
C00373 00108	∂28-Jan-82  1416	Richard M. Stallman <rms at MIT-AI> 	Macro expansion suggestions 
C00375 00109	∂28-Jan-82  1914	Howard I. Cannon <HIC at MIT-MC> 	Macro expansion suggestions    
C00380 00110	∂27-Jan-82  1633	Jonl at MIT-MC Two little suggestions for macroexpansion
C00386 00111	∂28-Jan-82  1633	Fahlman at CMU-20C 	Re: Two little suggestions for macroexpansion
C00388 00112	∂29-Jan-82  0945	DILL at CMU-20C 	Re: eql => eq?    
C00392 00113	∂29-Jan-82  1026	Guy.Steele at CMU-10A 	Okay, you hackers
C00394 00114	∂29-Jan-82  1059	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
C00398 00115	∂29-Jan-82  1146	Guy.Steele at CMU-10A 	MACSYMA timing   
C00400 00116	∂29-Jan-82  1204	Guy.Steele at CMU-10A 	Re: eql => eq?   
C00402 00117	∂29-Jan-82  1225	George J. Carrette <GJC at MIT-MC> 	MACSYMA timing
C00405 00118	∂29-Jan-82  1324	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
C00406 00119	∂29-Jan-82  1332	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
C00407 00120	∂29-Jan-82  1336	Guy.Steele at CMU-10A 	Re: Re: eql => eq?    
C00409 00121	∂29-Jan-82  1654	Richard M. Stallman <RMS at MIT-AI> 	Trying to implement FPOSITION with LAMBDA-MACROs.    
C00412 00122	∂29-Jan-82  2149	Kim.fateman at Berkeley 	Okay, you hackers   
C00415 00123	∂29-Jan-82  2235	HIC at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs.  
C00419 00124	∂30-Jan-82  0006	MOON at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs 
C00420 00125	∂30-Jan-82  0431	Kent M. Pitman <KMP at MIT-MC> 	Those two little suggestions for macroexpansion 
C00422 ENDMK
C⊗;
∂30-Dec-81  1117	Guy.Steele at CMU-10A 	Text-file versions of DECISIONS and REVISIONS documents  
Date: 30 December 1981 1415-EST (Wednesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Text-file versions of DECISIONS and REVISIONS documents
Message-Id: <30Dec81 141557 GS70@CMU-10A>

The files DECISIONS DOC and REVISIONS DOC  on directory  GLS;
at  MIT-MC  are available.  They are text files, as opposed to
PRESS files.  The former is 9958 lines long, and the latter is
1427.
--Guy

∂23-Dec-81  2255	Kim.fateman at Berkeley 	elementary functions
Date: 23 Dec 1981 22:48:00-PST
From: Kim.fateman at Berkeley
To: guy.steele@cmu-10a
Subject: elementary functions
Cc: Kim.jkf@UCB-C70, gjc@MIT-MC, griss@utah-20, jonl@MIT-MC, masinter@PARC-MAXC,
    rpg@SU-AI

I have no objection to making lisp work better with numerical computation.
I think that putting in elementary functions is a far more complicated
issue than you seem to think.  Branch cuts are probably not hard.
APL's notion of a user-settable "fuzz" is gross.  Stan Brown's
model of arithmetic is (Ada notwithstanding) inadequate as a prescriptive
model (Brown agrees).  If you provide a logarithm function, are you
willing to bet that it will hold up to the careful scrutiny of people
like Kahan?
  
As for the vagaries of arithmetic in Franz, I hope such things will
get ironed out along with vagaries in the Berkeley UNIX system.  Kahan
and I intend to address such issues.  I think it is a mistake to
address such issues as LANGUAGE issues, though.

I have not seen Penfield's article (yet). 

As for the rational number implementation question, it seems to me
that implementation of rational numbers (as pairs) loses little by
being programmed in Lisp.  Writing bignums in lisp loses unless you
happen to have access to machine instructions like 64-bit divided by
32 bit, from Lisp.  

I would certainly like to see common lisp be successful;  if you
have specific plans for the arithmetic that you wish to get comments and/or
help on, please give them a wider circulation.  E.g. the IEEE
floating point committee might like to see how you might incorporate
good ideas in a language.
I would be glad to pass your plans on to them.

∂01-Jan-82  1600	Guy.Steele at CMU-10A 	Tasks: A Reminder and Plea 
Date:  1 January 1982 1901-EST (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Tasks: A Reminder and Plea
Message-Id: <01Jan82 190137 GS70@CMU-10A>

At the November meeting, a number of issues were deferred with the
understanding that certain people would make concrete proposals for
consideration and inclusion in the second draft of the manual.  I
promised to get the second draft out in January, and to do that I need
those proposals pretty soon.  I am asking to get them in two weeks (by
January 15).  Ideally they would already be in SCRIBE format, but I'll
settle for any reasonable-looking ASCII file of text approximately in
the style of the manual.  BOLIO files are okay too; I can semi-automate
BOLIO to SCRIBE conversion.  I would prefer not to get rambling prose,
outlines, or sentence fragments; just nice, clean, crisp text that
requires only typographical editing before inclusion in the manual.
(That's the goal, anyway; I realize I may have to do some
industrial-strength editing for consistency.)  A list of the outstanding
tasks follows.

--Guy

GLS: Propose a method for allowing special forms to have a dual
implementation as both a macro (for user and compiler convenience)
and as a fexpr (for interpreter speed).  Create a list of primitive
special forms not easily reducible via macros to other primitives.
As part of this suggest an alternative to FUNCTIONP of two arguments.

MOON: Propose a rigorous mathematical formulation of the treatment
of the optional tolerance-specification argument for MOD and REMAINDER.
(I had a crack at this and couldn't figure it out, though I think I
came close.)

GLS: Propose specifications for lexical catch, especially a good name for it.

Everybody: Propose a clean and consistent declaration system.

MOON/DLW/ALAN: Propose a cleaned-up version of LOOP.  Alter it to handle
most interesting sequence operations gracefully.

SEF: Propose a complete set of keyword-style sequence operations.

GLS: Propose a set of functional-style sequence operations.

GJC/RLB: Polish the VAXMAX proposal for feature sets and #+ syntax.

ALAN: Propose a more extensible character-syntax definition system.

GLS: Propose a set of functions to interface to a filename/pathname
system in the spirit of the LISP Machine's.

LISPM: Propose a new error-handling system.

LISPM: Propose a new package system.


∂08-Dec-81  0650	Griss at UTAH-20 (Martin.Griss) 	PSL progress report   
Date:  8 Dec 1981 0743-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: PSL progress report
To: rpg at SU-AI
cc: griss at UTAH-20

How was common LISP meeting?
Did you meet Ohlander?

Excuse me if the following was remailed to you; there seems to be a mailer bug:
                            PSL Interest Group
                              2 December 1981


     Since my last message at the end of October, we have made significant
progress on the VAX version of PSL. Most of the effort this last month has
been directed at VAX PSL, with some utility work on the DEC-20 and Apollo.
Please send a message if you wish to be removed from this mailing LIST, or
wish other names to be added.

	Martin L. Griss,
	CS Dept., 3160 MEB,
	University of Utah,
	Salt Lake City, Utah 84112.
	(801)-581-6542

--------------------------------------------------------------------------

Last month, we started the VAX macros and the LAP-to-Unix-assembler ("as")
converter in earnest.  We used the PSL-20 V2 sources and the PSL-to-MIDAS
compiler c-macros
and tables as a guide. After some small pieces of code were tested, cross
compilation on the DEC-20 and assembly on the VAX proceeded full-bore. Just
before Thanksgiving, there was rapid progress resulting in the first
executing PSL on the VAX. This version consisted mostly of the kernel
modules of the PSL-20 version, without the garbage collector, resident LAP
and some debugging tools. Most of the effort in implementing these smaller
modules is the requirement for a small amount of LAP to provide the
compiled function/interpreted function interface, and efficient variable
binding operations.  The resident LAP has to be newly written for the VAX.
The c-macros and compiler of course have been fully tested in the process
of building the kernel.

It was decided to produce a new stop-and-copy (two space) collector for
PSL-VAX, to replace the PSL-20 compacting collector.  This collector was
written in about a day and tested by loading it into PSL-20 and dynamically
redefining the compacting collector. On the DEC-20, it seems about 50%
faster than the compacting collector, and MUCH simpler to maintain. It will
be used for the Extended addressing PSL-20. This garbage collector is now
in use with PSL-VAX.

Additional ("non-kernel") modules have also been incorporated in this
cross-compilation phase (they are normally loaded as LAP into PSL-20) to
provide a usable interpreted PSL. PSL-VAX V1 now runs all of the Standard
LISP test file, and most utility modules run interpretively (RLISP parser,
structure editor, etc).  We may compile the RLISP parser and support in the
next build and have a complete RLISP for use until we have resident LAP and
compiler.  The implementation of the resident LAP, a SYSCALL function, etc
should take a few weeks. One possibility is to look at the Franz LISP fasl
and object file loader, and consider using the Unix assembler in a lower
fork with a fasl loader.

Preliminary timings of small interpreted code segments indicate that this
version of PSL runs somewhat slower than FranzLISP. There are functions that
are slower and functions that are faster (usually because of SYSLISP
constructs).  We will time some compiled code shortly (have to
cross-compile and link into kernel in current PSL) to identify good and bad
constructs.  We will also spend some time studying the code emitted, and
change the code-generator tables to produce the next version, which we
expect to be quite a bit faster. The current code generator does not use
any three address or indexing mode operations.

We will shortly concentrate on the first Apollo version of PSL.  We do not
expect any major surprises. Most of the changes from the PSL-20 system
(byte/word conflicts) have now been completely flushed out in the VAX
version.  The 68000 tables should be modeled very closely on the VAX
tables. The current Apollo assembler, file-transfer program, and debugger
are not as powerful as the corresponding VAX tools, and this will make work
a little harder. To compensate, there will be fewer source changes to check
out.



M
-------

Eric
Just finished my long trip plus recovery from East coast flu's etc. Can
you compile the TAK function for me using your portable compiler and send
me the code? Also, could you time it on (TAK 18. 12. 6.)? Here's the code
I mean:

(defun tak (x y z)
       (cond ((not (< y x))
              z)
             (t (tak (tak (1- x) y z)
                     (tak (1- y) z x)
                     (tak (1- z) x y)))))
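;; For reference, (tak 18. 12. 6.) evaluates to 7.; that call is the
;; timing benchmark requested above.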

I'm in the process of putting together a synopsis of the results from
the meeting. In short, from your viewpoint, we decided that it would be
necessary for us (Common Lisp) to specify a small virtual machine and
for us to then supply to all interested parties the rest of the system
in Common Lisp code. This means that there would be a smallish number
of primitives that you would need to implement. I assume that this
is satisfactory for the Utah contingent. 

Unfortunately, a second meeting will be necessary to complete the agenda 
since we did not quite finish. In fact, I was unable to travel to
Washington on this account.
∂15-Dec-81  0829	Guy.Steele at CMU-10A 	Arrgghhh blag    
Date: 15 December 1981 1127-EST (Tuesday)
From: Guy.Steele at CMU-10A
To: rpg at SU-AI
Subject:  Arrgghhh blag
Message-Id: <15Dec81 112717 GS70@CMU-10A>

Foo.  I didn't want to become involved in an ANSI standard, and I have
told people so.  For one thing, it looks like a power play and might
alienate people such as the InterLISP crowd, and I wouldn't blame them.
In any case, I don't think it is appropriate to consider this until
we at least have a full draft manual.  If MRG wants to fight that fight,
let him at it.
I am working on collating the bibliographic entries.  I have most of them
on-line already, but just have to convert from TJ6 to SCRIBE format.  I
agree that the abstract is not very exciting -- it is practically stodgy.
I was hoping you would know how to give it some oomph, some sparkle.  If
not, we'll just send it out as is and try to sparkle up the paper if it
is accepted.  Your suggestions about explaining TNBIND
and having a diagram are good.
--Q

∂18-Dec-81  0918	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	information about Common Lisp implementation  
Date: 18 Dec 1981 1214-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: information about Common Lisp implementation
To: rpg at SU-AI, jonl at MIT-AI

We are about to sign a contract with DEC's LCG whereby they sponsor us
to produce an extended addressing Lisp.  We are still discussing whether
this should be Interlisp or Common Lisp.  I can see good arguments in
both directions, and do not have a strong preference, but I would
slightly prefer Common Lisp.  Do you know whether there are any
implementations of Common Lisp, or something reasonably close to it? I
am reconciled to producing my own "kernel", probably in assembly
language, though I have some other candidates in mind too. But I would
prefer not to have to do all of the Lisp code from scratch.

As you may know, DEC is probably going to support a Lisp for the VAX. My
guess is that we will be very likely to do the same dialect that  is
decided upon there.  The one exception would be if it looks like MIT (or
someone else) is going to do an extended implementation of Common Lisp.
If so, then we would probably do Interlisp, for completeness.

We have some experience in Lisp implementation now, since Elisp (the
extended implementation of Rutgers/UCI Lisp) is essentially finished.
(I.e. there are some extensions I want to put in, and some optimizations,
but it does allow any sane R/UCI Lisp code to run.) The interpreter now
runs faster than the original R/UCI lisp interpreter. Compiled code is
slightly slower, but we think this is due to the fact that we are not
yet compiling some things in line that should be. (Even CAR is not
always done in line!)  The compiler is Utah's portable compiler,
modified for the R/UCI Lisp dialect.  It does about what you would want
a Lisp compiler to do, except that it does not open code arithmetic
(though a later compiler has some abilities in that direction).  I
suspect that for a Common Lisp implementation we would try to use the
PDP-10 Maclisp compiler as a base, unless it is too crufty to understand
or modify.  Changing compilers to produce extended code turns out not to
be a very difficult job.
-------

∂21-Dec-81  0702	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Extended-addressing Common Lisp 
Date: 21 Dec 1981 0957-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Extended-addressing Common Lisp
To: JONL at MIT-XX
cc: rpg at SU-AI
In-Reply-To: Your message of 18-Dec-81 1835-EST

thanks.  At the moment the problem is that DEC is not sure whether they
are interested in Common Lisp or Interlisp.  We will probably
follow the decision they make for the VAX, which should be done
sometime within a month.  What surprised me about that is that, from what
I can hear, one of Interlisp's main advantages was supposed to be that the
project was further along on the VAX than the NIL project.  That sounds
odd to me.  I thought NIL had been released.  You might want to talk
with some of the folks at DEC.  The only one I know is Kalman Reti,
XCON.RETI@DEC-MARLBORO.
-------

∂21-Dec-81  1101	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
Date: 21 Dec 1981 1355-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Common Lisp   
To: RPG at SU-AI
In-Reply-To: Your message of 21-Dec-81 1323-EST

I am very happy to hear this.  We have used their compiler for Elisp,
as you may know, and have generally been following their work.  I
have been very impressed also, and would be very happy to see their
work get into something that is going to be more widely used than
Standard Lisp.
-------

∂21-Dec-81  1512	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Common Lisp
Date: 21 Dec 1981 1806-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Common Lisp
To: rpg at SU-AI, griss at UTAH-20

I just had a conversation with JonL which I found to be somewhat
unsettling.  I had hoped that Common Lisp was a sign that the Maclisp
community was willing to start doing a common development effort. It
begins to look like this is not the case.  It sounds to me like the most
we can hope for is a bunch of Lisps that will behave quite differently,
have completely different user facilities, but will have a common subset
of language facilities which will allow knowledgeable users to write
transportable code, if they are careful.  I.e. it looks a lot like the
old Standard Lisp effort, wherein you tried to tweak existing
implementations to support the Standard Lisp primitives.  I thought more
or less everyone agreed that hadn't worked so well, which is why the new
efforts at Utah aim to do something really transportable.  I thought
everybody agreed that these days the way you did a Lisp was to write
some small kernel in an implementation language, and then have a lot of
Lisp code, and that the Lisp code would be shared.

Supposing that we and DEC do agree to proceed with Common Lisp, would
you be interested in starting a Common Lisp sub-conspiracy, i.e. a group
of people interested in a shared Common Lisp implementation?  While we
are going to have support from DEC, that support is going to be $70K
(including University overhead) which is going to be a drop in the
bucket if we have to do a whole system, rather than just a VM and some
tweaking.

-------

∂22-Dec-81  0811	Kim.fateman at Berkeley 	various: arithmetic;  commonlisp broadcasts  
Date: 22 Dec 1981 08:04:24-PST
From: Kim.fateman at Berkeley
To: guy.steele@cmu-10a
Subject: various: arithmetic;  commonlisp broadcasts
Cc: gjc@mit-mc, griss@utah-20, Kim.jkf@Berkeley, jonl@mit-mc, masinter@parc-maxc, rpg@su-ai

The commonlisp broadcasts seem to include token representatives from
Berkeley (jkf) and Utah (dm).  I think that including fateman@berkeley
and griss@utah, too, would be nice.

I noticed in the Interlisp representative's report (the first to arrive
in "clear text", not press format) that arithmetic needs are being
dictated in such a way as to be "as much as you would want for an
algebraic manipulation system such as Macsyma."   Since ratios and
complex numbers are not supported in the base Maclisp, I wonder why
they would be considered important to have in the base common lisp?

Personally, having the common lisp people dictate the results of
elementary functions, the semantics of bigfloat (what happened to
bigfloat? Is it gone?), single and double...
and such, seems overly ambitious and unnecessary.
No other language, even Fortran or Ada, does much of this, and what it
does is usually not very good.

The true argument for including such stuff is NOT requirements of 
algebraic  manipulation stuff, but the prospect of doing
ARITHMETIC manipulation stuff with C.L.  Since only a few people are
familiar with Macsyma and Macsyma-like systems, requirements expressed
in the form "macsyma needs it"  seem unarguable.  But they are not...

∂22-Dec-81  0847	Griss at UTAH-20 (Martin.Griss) 	[Griss (Martin.Griss): Re: Common Lisp]   
Date: 22 Dec 1981 0944-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: [Griss (Martin.Griss): Re: Common Lisp]
To: rpg at SU-AI
cc: griss at UTAH-20

This is part of my response to Hedrick's last message. I guess I don't know
what JonL said to him... I feel that I would be able to make more informed
decisions, and to interact more on Common LISP, if I were on the mailing list.
I believe that PSL is a pretty viable replacement for Standard LISP, and
maybe a kernel for CL. We are on a course now that really wants us to
finish our current "new-LISP" and to begin using it for applications in the
next 2-3 months (e.g. NSF and Boeing support). I think having an association
with CL would help some funding efforts, maybe ARPA, Schlumberger, etc.

Perhaps we could talk on phone?
M
                ---------------

Date: 22 Dec 1981 0940-MST
From: Griss (Martin.Griss)
Subject: Re: Common Lisp
To: HEDRICK at RUTGERS
cc: Griss
In-Reply-To: Your message of 21-Dec-81 1606-MST

   Some more thoughts. Actually, I haven't heard anything "official" about
decisions on CommonLISP. RPG visited here, and I think our concerns that the
CL definition was too large (even larger than the InterLISP VM) helped
formulate a kernel-plus-CL-extension-files approach.  Clearly that is what we
are doing now in PSL, building on relatively successful parts of Standard
LISP, such as the compiler, etc. (SL worked well enough for us; we just didn't
have the resources to do more then).  I agree that JonL's comments as relayed
by you sound much more anarchistic...

  I would really like to get involved in Common LISP, probably doing the VAX
and 68000 versions, since I guess you seem to be snapping up the DEC-20
market. I currently plan to continue with PSL on the 20, VAX and 68000, since
we are almost done with the first round: the VAX is 90% complete and the
68000 is partially underway. In the same sense that SYSLISP could be the
basis for your DEC-20 InterLISP, I think SYSLISP and some of PSL could be a
transportable kernel for CL.

I need of course to find more funding; I can't cover this out of my NSF
effort, since we are just about ready to start using PSL. I'll be teaching a
class using PSL on the DEC-20 and VAX (maybe even the 68000?) this quarter,
and getting some algebra and graphics projects underway. I will of course
strive to be as CL
compatible as I can afford at this time.
-------
-------

∂23-Dec-81 1306	Guy.Steele at CMU-10A 	Re: various: arithmetic; commonlisp broadcasts 
Date: 23 December 1981 0025-EST (Wednesday)
From: Guy.Steele at CMU-10A
To: Kim.fateman at UCB-C70
Subject:  Re: various: arithmetic; commonlisp broadcasts
CC: gjc at MIT-MC, griss at utah-20, Kim.jkf at UCB-C70, jonl at MIT-MC,
    masinter at PARC-MAXC, rpg at SU-AI
In-Reply-To:  Kim.fateman@Berkeley's message of 22 Dec 81 11:06-EST
Message-Id: <23Dec81 002535 GS70@CMU-10A>

I sent the mail to the specified representatives of Berkeley and Utah
not because they were "token" but because they were the ones that had
actually contributed substantially to the discussion of outstanding issues.
I assumed that they would pass on the news.  I'll be glad to add you to
the mailing list if you really want that much more junk mail.

It should be noted that the InterLISP representative's report is just that:
the report of the InterLISP representative.  I think it is an excellent
report, but do not necessarily agree with all of its value judgements
and perspectives.  Therefore the motivations induced by vanMelle and
suggested in his report are not necessarily the true ones of the other
people involved.  I assume, however, that they accurately reflect vanMelle's
*perception* of people's motives, and as such are a valuable contribution
(because after all people may not understand their own motives well, or
may not realize how well or poorly they are communicating their ideas!).

You ask why Common LISP should support ratios and complex numbers, given
that MacLISP did not and yet MACSYMA got built anyway.  In response,
I rhetorically ask why MacLISP should have supported bignums, since
the PDP-10 does not?  Ratios were introduced primarily because they are
useful, they are natural for novices to use (at least as natural as
binary floating-point, with all its odd quirks, and with the advantage
of calculating exact results, such as (* 3 1/3) => 1, *always*), and
they solve problems with the quotient function.  Complex numbers were
motivated primarily by the S-1, which can handle complex floating-point
numbers and "Gaussian fixnums" primitively.  They need not be in Common
LISP, I suppose, but they are not much work to add.
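
[For illustration, a minimal sketch of the exact-arithmetic behavior being
argued for here, written in the ratio notation under discussion; the printed
results assume the proposal, not a settled standard:]

(* 3 1/3)    ; => 1, exactly, always
(+ 1/2 1/3)  ; => 5/6
(/ 10 4)     ; => 5/2  (the quotient of two integers stays exact)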

The results of elementary functions are not being invented in a vacuum,
as you have several times insinuated, nor are the Common LISP implementors
going off and inventing some arbitrary new thing.  I have researched
the implementation, definition, and use of complex numbers in Algol 68,
PL/I, APL, and FORTRAN, and the real elementary functions in another
half-dozen languages.  The definitions of branch cuts and boundary cases,
which are in general not agreed on by any mathematicians at all (they tend
to define them *ad hoc* for the purpose at hand), are taken from a paper
by Paul Penfield for the APL community, in which he considers the problem
at length, weighs alternatives, and justifies his results according to
ten general principles, among which are consistency, keeping branch cuts
away from the positive real axis, preserving identities at boundaries,
and so on.  This paper has appeared in the APL '81 conference.  I agree that
mistakes have been made in other programming languages, but that does not
mean we should hide our heads in the sand.  A serious effort is being made
to learn from the past.  I think this effort is more substantial than will
be made by the dozens of Common LISP users who will have to write their
own trig functions if the language does not provide them.

Even if a mistake is made, it can be compensated for.  MACSYMA presently
has to compensate for MacLISP's ATAN function, whose range is 0 to 2*pi
(for most purposes -pi to pi is more appropriate, and certainly more
conventional).

[Could I inquire as to whether (FIX 1.0E20) still produces a smallish
negative number in Franz LISP?]

I could not agree more that all of this is relevant, not to *algebraic*
manipulation, but to *arithmetic* manipulation (although certainly the
presence of rational arithmetic will relieve MACSYMA of that particular
small burden).  But there is no good reason why LISP cannot become a
useful computational as well as symbolic language.  In particular,
certain kinds of AI work such as vision and speech research require
great amounts of numerical computation.  I know that you advocate
methods for linking FORTRAN or C programs to LISP for this purpose.
That is well and good, but I (for one) would like it also to be
practical to do it all in LISP if one so chooses.  LISP has already
expanded its horizons to support text editors and disk controllers;
why not also number-crunching?

--Guy

∂18-Dec-81  1533	Jon L. White <JONL at MIT-XX> 	Extended-addressing Common Lisp   
Date: 18 Dec 1981 1835-EST
From: Jon L. White <JONL at MIT-XX>
Subject: Extended-addressing Common Lisp
To: Hedrick at RUTGERS
cc: rpg at SU-AI

Sounds like a win for you to do it.  As far as I know, no one else
is going to do it (at least not now).  Probably some hints from
the NIL design would be good for you -- at one time the
file MC:NIL;VMACH >  gave a bunch of details about the
NIL "virtual machine".  Probably you should get in personal
touch with me (phone or otherwise) to chat about such "kernels".
-------

∂21-Dec-81  0717	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
Date: 21 Dec 1981 1012-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Common Lisp   
To: RPG at SU-AI
In-Reply-To: Your message of 20-Dec-81 2304-EST

Thanks.  Are you sure Utah is producing Common Lisp?  They have a thing
they call Standard Lisp, which is something completely different.  I have
never heard of a Common Lisp project there, and I work very closely with
their Lisp development people so I think I would have.
-------

I visited there in the middle of last month for about 3 days and talked about
the technical side of Common Lisp being implemented in their style. Martin told
me that if we only insisted on a small virtual machine with most of the
rest in Lisp code from the Common Lisp people he'd like to do it.

I've been looking at their stuff pretty closely for the much behind schedule
Lisp evaluation thing and I'm pretty impressed with them. We discussed
grafting my S-1 Lisp compiler front end on top of their portable compiler.
			-rpg-
∂22-Dec-81  0827	Griss at UTAH-20 (Martin.Griss) 	Re: various: arithmetic;  commonlisp broadcasts
Date: 22 Dec 1981 0924-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Re: various: arithmetic;  commonlisp broadcasts
To: Kim.fateman at UCB-C70, guy.steele at CMU-10A
cc: gjc at MIT-MC, Kim.jkf at UCB-C70, jonl at MIT-MC, masinter at PARC-MAXC,
    rpg at SU-AI, Griss at UTAH-20
In-Reply-To: Your message of 22-Dec-81 0905-MST

I agree with Dick re being on the commonlisp mailing list. The PSL effort
is a more modest attempt at defining a transportable modern LISP, extending
Standard LISP with more powerful and efficient functions. I find no trace
of DM@utah-20 on our system, and have tried various aliases, still with
no luck.

Martin
-------

∂04-Jan-82  1754	Kim.fateman at Berkeley 	numbers in common lisp   
Date: 4 Jan 1982 17:54:03-PST
From: Kim.fateman at Berkeley
To: fahlman@cmu-10a, guy.steele@cmu-10a, moon@mit-ai, rpg@su-ai
Subject: numbers in common lisp
Cc: Kim.jkf@Berkeley, Kim.sklower@Berkeley


*** Issue 81: Complex numbers. Allow SQRT and LOG to produce results in
whatever form is necessary to deliver the mathematically defined result.

RJF:  This is problematical. The mathematically defined result is not
necessarily agreed upon.  Does Log(0) produce an error or a symbol?
(e.g. |log-of-zero| ?)  If a symbol, what happens when you try to
do arithmetic on it? Does sin(x) give up after some specified max x,
or continue to be a periodic function up to limit of machine range,
as on the HP 34?  Is accuracy specified in addition to precision?
Is it possible to specify rounding modes by flag setting or by
calling specific rounding-versions e.g. (plus-round-up x y) ? Such
features make it possible to implement interval arithmetic nicely.
Can one trap (signal, throw) on underflow, overflow,...
It would be a satisfying situation if common lisp, or at least a
superset of it, could exploit the IEEE standard. (Prof. Kahan would
much rather that language standardizers NOT delve too deeply into this,
leaving the semantics  (or "arithmetics") to specialists.)

Is it the case that a complex number could be implemented by
#C(x y) == (complex x y) ?  in which case  (real z) ==(cadr z),
(etc); Is a complex "atomic" in the lisp sense, or is it
the case that (eq (numerator #C(x y)) (numerator #C(x z)))?
Can one "rplac←numerator"?
If one is required to implement another type of atom for the
sake of rationals and another for complexes,
and another for ratios of complexes, then the
utility of this had better be substantial, and the implementation
cost modest.  In the case of x and y rational, there are a variety of
ways of representing x + i*y.  For example, it
is always possible to rationalize the denominator, but is it
required?
If  #R(1 2)  == (rat 1 2), is it the case that
(numerator r) ==(cadr r) ?  what is the numerator of (1/2+i)?

Even if you insist that all complex numbers are floats, not rationals,
you have multiple precisions to deal with.  Is it allowed to 
compute intermediate results to higher precision, or must one truncate
(or round) to some target precision in-between operations?

.......
Thus (SQRT -1.0) -> #C(0.0 1.0) and (LOG -1.0) -> #C(0.0 3.14159265).
Document all this carefully so that the user who doesn't care about
complex numbers isn't bothered too much.  As a rule, if you only play
with integers you won't see floating-point numbers, and if you only
play with non-complex numbers you won't see complex numbers.
.......
RJF: You've given 2 examples where, presumably, integers
are converted not only into floats, but into complex numbers. Your
rule does not seem to be a useful characterization. 
Note also that, for example, asin(1.5) is complex.

*** Issue 82: Branch cuts and boundary cases in mathematical
functions. Tentatively consider compatibility with APL on the subject of
branch cuts and boundary cases.
.......
RJF:Certainly gratuitous differences with APL, Fortran, PL/I etc are 
not a good idea!
.....

*** Issue 83: Fuzzy numerical comparisons. Have a new function FUZZY=
which takes three arguments: two numbers and a fuzz (relative
tolerance), which defaults in a way that depends on the precision of the
first two arguments.

.......
RJF: Why is this considered a language issue (in Lisp!), when the primary
language for numerical work (Fortran, not APL) does not treat it as one?  The
computation of absolute and relative errors is sufficiently simple that not
much would be added by making this part of the language.  I believe the fuzz
business is used to cover up the fact that some languages do not support
integers. In such systems, some computations result in 1.99999 vs. 2.00000
comparisons, even though both numbers are "integers".
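
[A minimal sketch of RJF's point that such a comparison is easy to write in
user code; the name FUZZY= follows Issue 83, but the fixed default tolerance
below is made up, and the precision-dependent defaulting is omitted:]

(defun fuzzy= (x y &optional (fuzz 1.0e-6))
  ;; True when X and Y agree to within the relative tolerance FUZZ.
  (<= (abs (- x y))
      (* fuzz (max (abs x) (abs y)))))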

Incidentally, on "mod" of floats, I think that what you want is
like the "integer-part" of the IEEE proposal.  The EMOD instruction on 
the VAX is a brain-damaged attempt to do range-reductions.
.......

*** Issue 93: Complete set of trigonometric functions? Add ASIN, ACOS,
and TAN.


*** Issue 95: Hyperbolic functions. Add SINH, COSH, TANH, ASINH, ACOSH,
and ATANH.
.....
also useful are log(1+x) and exp(1+x).  


*** Issue 96: Are several versions of pi necessary? Eliminate the
variables SHORT-PI, SINGLE-PI, DOUBLE-PI, and LONG-PI, retaining only
PI.  Encourage the user to write such things as (SHORT-FLOAT PI),
(SINGLE-FLOAT (/ PI 2)), etc., when appropriate.
......
RJF: huh?  why not #(times 4 (atan 1.0)),  #(times 4 (atan 1.0d0)) etc.
It seems you are placing a burden on the implementors and discussants
of common lisp to write such trivial programs when the same thing
could be accomplished by a comment in the manual.

.......
.......
RJF: Sorry if the above comments sound overly argumentative.  I realize they
are in general not particularly constructive. 
I believe the group here at UCB will be making headway in many 
of the directions required as part of the IEEE support.

∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
Date: 15 January 1982 1124-EST (Friday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Multiple Values
CC: Scott.Fahlman at CMU-10A
Message-Id: <15Jan82 112415 SF50@CMU-10A>


I hate to rock the boat, but I would like to re-open one of the issues
supposedly settled at the November meeting, namely issue 55: whether to
go with the simple Lisp Machine style multiple-value receiving forms, or
to go with the more complex forms in the Swiss Cheese Edition, which
provide full lambda-list syntax.

My suggestion was that we go with the simple forms and also provide the
Multiple-Value-Call construct, which more or less subsumes the
interesting uses for the Lambda-list forms.  The latter is quite easy
to implement, at least in Spice Lisp and I believe also in Lisp Machine
Lisp: you open the specified function call frame, evaluate the
arguments (which may return multiples) leaving all returned values on
the stack, then activate the call.  The normal argument-passing
machinery  (which is highly optimized) does all the lambda grovelling.
Furthermore, since this is only a very slight variation on a normal
function call, we should not be screwed in the future by unanticipated
interactions between this and, say, the declaration mechanism.
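
[For illustration, a sketch of the M-V-CALL idea described above: every value
returned by the argument form becomes an ordinary argument, so the callee's
own lambda-list does the destructuring.  The names and the two-valued FLOOR
assumed here follow the proposal under discussion, not a finished standard:]

(multiple-value-call
  #'(lambda (quotient remainder)
      (list quotient remainder))
  (floor 7 2))
;; => (3 1)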

Much to my surprise, the group's decision was to go with all of the
above, but also to require that the lambda-hacking forms be supported.
This gives me real problems.  Given the M-V-CALL construct, I think
that these others are quite useless and likely to lead to many bad
interactions: this is now the only place where general lambda-lists have
to be grovelled outside of true function calls and defmacro.  I am not
willing to implement yet another variation on lambda-grovelling
just to include these silly forms, unless someone can show me that they
are more useful than I think they are.

The November vote might reflect the notion that M-V-LET and M-V-SETQ
can be implemented merely as special cases of M-V-CALL.  Note however,
that the bodies of the M-V-LET and M-V-SETQ forms are defined as
PROGNs, and will see a different set of local variables than they would
see if turned into a function to be called.  At least, that will be the
case unless Guy can come up with some way of hacking lexical closures
so as to make embedded lambdas see the lexical binding environment in
which they are defined.  Right now, for me at least, it is unclear
whether this can be done for all Common Lisp implementations with low
enough cost that we can make it a required feature.  In the meantime, I
think it is a real mistake to include in the language any constructs
that require a successful solution to this problem if they are to be
implemented decently.

So my vote (with the maximum number of exclamation points) continues to
be that Common Lisp should include only the Lisp Machine style forms,
plus M-V-CALL of multiple arguments.  Are the other forms really so
important to the rest of you?

All in all, I think that the amount of convergence made in the November
meeting was really remarkable, and that we are surprisingly close to
winning big on this effort.

-- Scott

∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
Date: 15 January 1982 12:14-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: multiple values.
To: Scott.Fahlman at CMU-10A
cc: Common-lisp at SU-AI

[1] I think your last note has some incorrect assumptions about how
    the procedure call mechanism will work on future Lisp machines.
    Not that the assumption isn't reasonable, but as I recall the procedure
    ARGUMENT mechanism and the mechanism for passing back
    the FIRST VALUE were designed to be inconsistent with the mechanism
    for passing the rest of the values. This puts a whole different
    perspective on the language semantics.
[2] At least one implementation, NIL, guessed that there would be
    demand in the future for various lambda extensions, so a
    sufficiently general lambda-grovelling mechanism was painlessly
    introduced from the beginning.

∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
Date: Saturday, 16 January 1982, 02:36-EST
From: David A. Moon <Moon at MIT-MC>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
Cc: common-lisp at su-ai

We are planning for implementation of the new multiple-value receiving
forms with &optional and &rest, on the L machine, but are unlikely to
be able to implement them on the present Lisp machine without a significant
amount of work.  I would just as soon see them flushed, but am willing
to implement them if the concensus is to keep them.

If by lambda-grovelling you mean (as GJC seems to think you mean) a
subroutine in the compiler that parses out the &optionals, that is about
0.5% of the work involved.  If by lambda-grovelling you mean the generated
code in a compiled function that takes some values and defaults the
unsupplied optionals, indeed that is where the hair comes in, since in
most implementations it can't be -quite- the same as the normal function-entry
case of what might seem to be the same thing.

∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
Date: 16 January 1982 0930-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: David A. Moon <Moon at MIT-MC> 
Subject:  Re: Multiple Values
CC: common-lisp at su-ai
In-Reply-To:  David A. Moon's message of 16 Jan 82 02:36-EST
Message-Id: <16Jan82 093009 SF50@CMU-10A>


As Moon surmises, my concern for "Lambda-grovelling" was indeed about
needing a second, slightly different version of the whole binding and
defaulting and rest-ifying machinery, not about the actual parsing of
the Lambda-list syntax which, as GJC points out, can be mostly put into
a universal function of its own.
-- Scott

∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
Date: Saturday, 16 January 1982, 10:22-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A, common-lisp at su-ai

What Moon says is true: I am writing a compiler, and parsing the
&-mumbles is quite easy compared to generating the code that implements
taking the returned values off of the stack and putting them where they
go while managing to run the default-forms and so on.  I could live
without the &-mumble forms of the receivers, although they seem like
they may be a good idea, and we are willing to implement them if they
appear in the Common Lisp definition.  I would not say that it is
generally an easy feature to implement.

It should be kept in mind that multiple-value-call certainly does not
provide the functionality of the &-mumble forms.  Only rarely do you
want to take all of the values produced by a function and pass them all
as successive arguments to a function.  Often they are some values
computed by the same piece of code, and you want to do completely
different things with each of them.
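
[For illustration, the case DLW describes: one computation returns two values
that are then used in entirely different ways.  The receiving form is written
here as MULTIPLE-VALUE-BIND; the exact name (M-V-LET, etc.) was still
unsettled at this point:]

(multiple-value-bind (quotient remainder) (floor 17 5)
  ;; One value drives a test, the other is what gets returned -- they
  ;; are never passed together as arguments to anything.
  (if (zerop remainder)
      (- quotient)
      quotient))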

The goal of the &-mumble forms was to provide the same kind of
error-checking that we have with function calling.  Interlisp has no
such error-checking on function calls, which seems like a terrible thing
to me; the argument says that the same holds true of returned values.
I'm not convinced by that argument, but it has some merit.

∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
Date: 16 January 1982 17:11-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
cc: common-lisp at SU-AI

I mostly agree with SEF.

Better than a separate function M-V-CALL would be a new option to the
function CALL that allows one or more of several arg-forms to be
treated a la M-V-CALL.  Then it is possible to have more than one arg
form, all of whose values become separate args, intermixed with lists
of evaluated args, and ordinary args; but it is not really any harder
to implement than M-V-CALL alone.

[Background note: the Lisp machine function CALL takes alternating
options and arg-forms.  Each option says how to pass the following
arg-form.  It is either a symbol or a list of symbols.  Symbols now
allowed are SPREAD and OPTIONAL.  SPREAD means pass the elements of
the value as args.  OPTIONAL means do not get an error if the function
being called doesn't want the args.  This proposal is to add VALUES as
an alternative to SPREAD, meaning pass all values of the arg form as
args.]

If the &-keyword multiple value forms are not going to be implemented
on the current Lisp machine, that is an additional reason to keep them
out of Common Lisp, given that they are not vitally necessary for
anything.

∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
Date: 16 January 1982 2333-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Keyword sequence fns
Message-Id: <16Jan82 233312 SF50@CMU-10A>


My proposal for keyword-style sequence functions can be found on CMUA as

TEMP:NEWSEQ.PRE[C380SF50]

or as

TEMP:NEWSEQ.DOC[C380SF50]

Fire away.
-- Scott

∂17-Jan-82  1756	Guy.Steele at CMU-10A 	Sequence functions    
Date: 17 January 1982 2056-EST (Sunday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Sequence functions
Message-Id: <17Jan82 205656 GS70@CMU-10A>

Here is an idea I would like to bounce off people.

The optional arguments given to the sequence functions are of two general
kinds: (1) specify subranges of the sequences to operate on; (2) specify
comparison predicates.  These choices tend to be completely orthogonal
in that it would appear equally likely to want to specify (1) without (2)
as to want to specify (2) without (1).  Therefore it is probably not
acceptable to choose a fixed order for them as simple optional arguments.

It is this problem that led me to propose the "functional-style" sequence
functions.  The minor claimed advantage was that the generated functions
might be useful as arguments to other functionals, particularly MAP.  The
primary motivation, however, was that this would syntactically allow
two distinct places for optional arguments, as:
   ((FREMOVE ...predicate optionals...) sequence ...subrange optionals...)

Here I propose to solve this problem in a different way, which is simply
to remove the subrange optionals entirely.  If you want to operate on a
subsequence, you have to use SUBSEQ to specify the subrange.  (Of course,
this won't work for the REPLACE function, which is in-place destructive.)
Given this, consistently reorganize the argument list so that the sequence
comes first.  This would give:
	(MEMBER SEQ #'EQL X)
	(MEMBER SEQ #'NUMBERP)
and so on.
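
[For illustration, a subrange search would then be written by composing the
two operations; the indices here are made up:]

	(MEMBER (SUBSEQ SEQ 3 10) #'EQL X)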

Disadvantages:
(1) Unfamiliar argument order.
(2) Using SUBSEQ admittedly is not as efficient as the subrange arguments
("but a good compiler could...").
(3) This doesn't allow you to elide EQL or EQUAL or whatever the chosen
default is.

Any takers?
--Guy




∂17-Jan-82  2207	Earl A. Killian <EAK at MIT-MC> 	Sequence functions    
Date: 17 January 1982 23:01-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Sequence functions
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

Using subseq instead of additional arguments is of course what
other languages do, and it is quite tasteful in those languages
because creating a subsequence doesn't cons.  In Lisp it
does, which makes a lot of difference.  Unless you're willing to
GUARANTEE that the consing will be avoided, I don't think the
proposal is acceptable.  Consider a TECO-style buffer manager
that wanted to use string-replace to copy stuff around; it'd be
terrible if it consed the stuff it wanted to move!

∂18-Jan-82  0235	Richard M. Stallman <RMS at MIT-AI> 	subseq and consing
Date: 18 January 1982 05:25-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: subseq and consing
To: common-lisp at SU-AI

Even if SUBSEQ itself conses,
if you offer compiler optimizations which take expressions
where sequence functions are applied to calls to subseq
and turn them into calls to other internal functions which
take extra args and avoid consing, this is good enough
in efficiency and provides the same simplicity in user interface.
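
[A toy sketch of the kind of rewrite RMS describes: the compiler recognizes a
sequence function applied to a SUBSEQ call and substitutes an internal routine
that takes start/end arguments and does not cons.  %POSITION and the rewrite
below are purely illustrative names, not part of any proposal:]

;;   (position x (subseq s start end))  ==>  (%position x s start end)

(defun %position (item sequence start end)
  ;; Search SEQUENCE between START (inclusive) and END (exclusive)
  ;; without copying any part of it.
  (do ((i start (+ i 1)))
      ((>= i end) nil)
    (when (eql item (elt sequence i))
      (return i))))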

While on the subject, how about eliminating all the functions
to set this or that from the language description
(except a few for Maclisp compatibility) and making SETF
the only way to set anything?
The only use for the setting-functions themselves, as opposed
to SETF, is to pass to a functional--they are more efficient perhaps
than a user-written function that just uses SETF.  However, such
user-written functions that only use SETF can be made to expand
into the internal functions which exist to do the dirty work.
This change would greatly simplify the language.

∂18-Jan-82  0822	Don Morrison <Morrison at UTAH-20> 	Re: subseq and consing  
Date: 18 Jan 1982 0918-MST
From: Don Morrison <Morrison at UTAH-20>
Subject: Re: subseq and consing
To: RMS at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 18-Jan-82 0325-MST

And, after you've eliminated all the setting functions/forms, including
SETQ, change the name from SETF to SETQ.
-------

∂02-Jan-82  0908	Griss at UTAH-20 (Martin.Griss) 	Com L  
Date:  2 Jan 1982 1005-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Com L
To: guy.steele at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

I have retrieved the revisions and decisions, will look them over.
I will try to set up arrangements to be at POPL Monday-Wednesday,
depending on flights.

What is the Common LISP schedule, next meeting, etc.? Will we be invited to
attend, or is this one of the topics for us to discuss at POPL?
What in fact are we to discuss, and what should I be thinking about?
As I explained, I hope to finish this round of PSL implementation
on DEC-20, VAX and maybe even first version on 68000 by then.
We then will fill in some missing features, and start bringing up REDUCE,
the meta-compiler, BIGfloats, and PictureRLISP graphics. At that point I will
have accomplished a significant amount of my NSF goals for this year.

The next step is to significantly improve PSL and SYSLISP, merging with the
Mode Analysis phase for improved LISP<->SYSLISP communications and efficiency.

At the same time, we will be looking over various LISP systems to see what
sort of good features can be adapted, and what sort of compatibility packages
to provide (e.g., a UCI-LISP package, a FranzLISP package, etc.).

It's certainly in this phase that I could easily attempt to modify PSL to
provide a Common LISP kernel, assuming that we have not already adapted much of the
code.
M
-------

∂14-Jan-82  0732	Griss at UTAH-20 (Martin.Griss) 	Common LISP 
Date: 14 Jan 1982 0829-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Common LISP
To: guy.steele at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

I just received a message from Hedrick regarding his project of doing an
extended-addressing Common LISP on the DEC-20; it also refers to
CMU doing the VAX version. I thought one of the possibilities we
were to discuss was whether we might become involved in doing the
VAX version. Is this true? I.e., what do you see as the possible routes
of joint work?
Martin
-------

∂14-Jan-82  2032	Jonathan A. Rees <JAR at MIT-MC>   
Date: 14 January 1982 23:32-EST
From: Jonathan A. Rees <JAR at MIT-MC>
To: GLS at MIT-MC
cc: BROOKS at MIT-MC, RPG at MIT-MC

We've integrated your changes to the packing phase into our
code... we'll see pretty soon whether the new preferencing stuff works.
I've written a fancy new closure analysis phase which you might be
interested in snarfing at some point.  Much smarter than RABBIT about
SETQ'ed closed-over variables.
Using NODE-DISPATCH now.  Win.
I now have an ALIASP slot in the NODE structure, and the ALIAS-IF-SAFE
analysis has been moved into TARGETIZE-CALL-PRIMOP.  I'm debugging
that now.  This means the DEPENDENTS slot goes away.  I'm trying to
get e.g. (RPLACA X (FOO)) where X must be in a register (because
it's an RPLACA) and (FOO) is a call to an unknown function (and thus
clobbers all regs) to work fairly efficiently in all cases.
In fact I've rewritten a lot of TARGETIZE...

Does the <S1LISP.COMPILER> directory still exist?  I can't seem to read
it from FTP.  Has anyone done more work on S1COMP?

The T project, of course, is behind schedule.  As I told you before,
a toy interpreter runs on the Vax, but so far nothing besides
a read-factorial-print loop runs on the 68000.  But soon, I hope,...

∂15-Jan-82  0109	RPG   	Rutgers lisp development project 
 ∂14-Jan-82  1625	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Rutgers lisp development project    
Mail-from: ARPANET site RUTGERS rcvd at 13-Jan-82 2146-PST
Date: 14 Jan 1982 0044-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Rutgers lisp development project
To: bboard at RUTGERS, griss at UTAH-20, admin.mrc at SU-SCORE, jsol at RUTGERS
Remailed-date: 14 Jan 1982 1622-PST
Remailed-from: Mark Crispin
Remailed-to: Feigenbaum at SUMEX-AIM, REG at SU-AI

It now appears that we are going to do an implementation of Common Lisp
for the DEC-20.  This project is being funded by DEC.

		Why are we doing this project at all?

This project is being done because a number of our researchers are going
to want to be able to move their programs to other systems than the
DEC-20.  We are proposing to get personal machines over the next few
years.  SRI has already run into problems in trying to give AIMDS to
someone who only has a VAX.  Thus we think our users are going to want
to move to a dialect that is widely portable.

Also, newer dialects have some useful new features.  Although these
features can be put into Elisp, doing so will introduce
incompatibilities with old programs.  R/UCI Lisp already has too many
inconsistencies introduced by its long history.  It is probably better
to start with a dialect that has been designed in a coherent fashion.

			Why Common Lisp?

There are only three dialects of Lisp that are in wide use within the
U.S. on a variety of systems:  Interlisp, meta-Maclisp, and Standard
Lisp.  (By meta-Maclisp I mean a family of dialects that are all
related to Maclisp and generally share ideas.)  Of these, Standard Lisp
has a reputation of not being as "rich" a language, and in fact is not
taken seriously by most sites.  This is not entirely fair, but there is
probably nothing we can do about that fact at this stage. So we are left
with Interlisp and meta-Maclisp.  A number of implementors from the
Maclisp family have gotten together to define a common dialect that
combines the best features of their various dialects, while still being
reasonable in size.  A manual is being produced for it, and once
finished will remain reasonably stable.  (Can you believe it?
Documentation before coding!)  This dialect is now called Common Lisp.
The advantages of Common Lisp over Interlisp are:

  - outside of BBN and Xerox, the Lisp development efforts now going on
	all seem to be in the Maclisp family, and now are being
	redirected towards Common Lisp.  These efforts include 
	CMU, the Lisp Machine companies (Symbolics, LMI), LRL and MIT.

  - Interlisp has some features, particularly the spaghetti stack,
	that make it impossible to implement as efficiently and cleanly
	as Common Lisp.  (Note that it is possible to get as good
	efficiency out of compiled code if you do not use these features,
	and if you use special techniques when compiling.  However that
	doesn't help the interpreter, and is not as clean.)

  - Because of these complexities in Interlisp, implementation is a
	large and complex job.  ARPA funded a fairly large effort at
	ISI, and even that seems to be marginal.  This comment is based
	on the report on the ISI project produced by Larry Masinter,
	<lisp>interlisp-vax-rpt.txt.  Our only hope would be to take
	the ISI implementation and attempt to transport it to the 20.
	I am concerned that the result of this would be extremely slow.
	I am also concerned that we might turn out not to have the
	resources necessary to do a good job of it.

  - There seems to be a general feeling that Common Lisp will have a
	number of attractive features as a language.  (Notice that I am
	not talking about user facilities, which will no doubt take some
	time before they reach the level of Interlisp.)  Even people
	within Arpa are starting to talk about it as the language of the
	future.  I am not personally convinced that it is seriously
	superior to Interlisp, but it is as good (again, at the language
	level), and the general Maclisp community seems to have a number
	of ideas that are significantly in advance of what is likely to
	show up in Interlisp with the current support available for it.

There are two serious disadvantages of Common Lisp:

  - It does not exist yet.  As of this week, there now seem to be
	sufficient resources committed to it that we can be sure it will
	be implemented.  The following projects are now committed, at a
	level sufficient for success:  VAX (CMU), DEC-20 (Rutgers), PERQ
	and other related machines (CMU), Lisp Machine (Symbolics), S-1
	(LRL).  I believe this is sufficient to give the language a
	"critical mass".

  - It does not have user facilities defined for it.  CMU is heavily
	committed to the Spice (PERQ) implementation, and will produce
	the appropriate tools.  They appear to be funded sufficiently
	that this will happen.

		 Why is DEC funding it, and what will be
		 	our relationship with them?

LCG (the group within DEC that is responsible for the DEC-20) is
interested in increasing the software that will support the full 30-bit
address space possible in the DEC-20 architecture.  (Our current
processor will only use 23 bits of this, but this is still much better
than what was supported by the old software, which is 18 bits.)  They
are proceeding at a reasonable rate with the software that is supported
by DEC.  However they recognize that many important languages were
developed outside of DEC, and that it will not be practical for them
to develop large-address-space implementations of all of them in-house.
Thus DEC is attempting to find places that are working on the more
important of these languages, and they are funding efforts to develop
large address versions.  They are sponsoring us for Lisp, and Utah
for C.  Pascal is being done in a slightly complex fashion.  (In fact
some of our support from DEC is for Pascal.)

DEC does not expect to make money directly from these projects.  We will
maintain control over the software we develop, and could sell support
for it if we wanted to. We are, of course, expected to make the software
widely available. (Most likely we will submit it to DECUS but also
distribute it ourselves.)  What DEC gets out of it is that the large
address space DEC-20 will have a larger variety of software available
for it than otherwise.  I believe this will be an important point for
them in the long run, since no one is going to want to buy a machine for
which only the Fortran compiler can generate programs larger than 256K.
Thus they are facing the following facts:
  - they can't do things in house nearly as cheaply as universities
	can do them.
  - universities are no longer being as well funded to do language
	development, particularly not for the DEC-20.

			How will we go about it?

We have sufficient funding for one full-time person and one RA.  Both
DEC and Rutgers are very slow about paperwork.  But these people should
be in place sometime early this semester.  The implementation will
involve a small kernel, in assembly language, with the rest done in
Lisp.  We will get the Lisp code from CMU, and so will only have to do
the kernel.  This project seems to be the same size as the Elisp
project, which was done within a year using my spare time and a month or
so of Josh's time.  It seems clear that we have sufficient manpower. (If
you think maybe we have too much, I can only say that if we finish the
kernel sooner than planned, we will spend the time working on user
facilities, documentation, and helping users here convert to it.) CMU
plans to finish the VAX project in a year, with a preliminary version in
6 months and a polished release in a year.  Our target is similar.
-------

∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
Date: 15 January 1982 1124-EST (Friday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Multiple Values
CC: Scott.Fahlman at CMU-10A
Message-Id: <15Jan82 112415 SF50@CMU-10A>


I hate to rock the boat, but I would like to re-open one of the issues
supposedly settled at the November meeting, namely issue 55: whether to
go with the simple Lisp Machine style multiple-value receiving forms, or
to go with the more complex forms in the Swiss Cheese Edition, which
provide full lambda-list syntax.

My suggestion was that we go with the simple forms and also provide the
Multiple-Value-Call construct, which more or less subsumes the
interesting uses for the Lambda-list forms.  The latter is quite easy
to implement, at least in Spice Lisp and I believe also in Lisp Machine
Lisp: you open the specified function call frame, evaluate the
arguments (which may return multiples) leaving all returned values on
the stack, then activate the call.  The normal argument-passing
machinery  (which is highly optimized) does all the lambda grovelling.
Furthermore, since this is only a very slight variation on a normal
function call, we should not be screwed in the future by unanticipated
interactions between this and, say, the declaration mechanism.
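
[Illustration: a small example of the construct, assuming FLOOR returns
its quotient and remainder as two values.

    ;; Every value returned by each argument form becomes a separate
    ;; argument to the called function:
    (MULTIPLE-VALUE-CALL #'LIST (FLOOR 7 2) (FLOOR 9 4))
    ;; => (3 1 2 1)]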

Much to my surprise, the group's decision was to go with all of the
above, but also to require that the lambda-hacking forms be supported.
This gives me real problems.  Given the M-V-CALL construct, I think
that these others are quite useless and likely to lead to many bad
interactions: this is now the only place where general lambda-lists have
to be grovelled outside of true function calls and defmacro.  I am not
willing to implement yet another variation on lambda-grovelling
just to include these silly forms, unless someone can show me that they
are more useful than I think they are.

The November vote might reflect the notion that M-V-LET and M-V-SETQ
can be implemented merely as special cases of M-V-CALL.  Note however,
that the bodies of the M-V-LET and M-V-SETQ forms are defined as
PROGNs, and will see a different set of local variables than they would
see if turned into a function to be called.  At least, that will be the
case unless Guy can come up with some way of hacking lexical closures
so as to make embedded lambdas see the lexical binding environment in
which they are defined.  Right now, for me at least, it is unclear
whether this can be done for all Common Lisp implementations with low
enough cost that we can make it a required feature.  In the meantime, I
think it is a real mistake to include in the language any constructs
that require a successful solution to this problem if they are to be
implemented decently.
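
[Illustration: a sketch of the scoping problem, writing the receiving
form here as MULTIPLE-VALUE-BIND; A is a lexical variable of the
surrounding code.

    (LET ((A 10))
      ;; The body of the receiving form is a PROGN, so it sees A:
      (MULTIPLE-VALUE-BIND (Q R) (FLOOR A 3)
        (LIST Q R A)))
    ;; Turning the body into a function to be called works only if the
    ;; embedded LAMBDA sees the lexical binding of A where it appears:
    (LET ((A 10))
      (MULTIPLE-VALUE-CALL #'(LAMBDA (Q R) (LIST Q R A))
                           (FLOOR A 3)))]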

So my vote (with the maximum number of exclamation points) continues to
be that Common Lisp should include only the Lisp Machine style forms,
plus M-V-CALL of multiple arguments.  Are the other forms really so
important to the rest of you?

All in all, I think that the amount of convergence made in the November
meeting was really remarkable, and that we are surprisingly close to
winning big on this effort.

-- Scott

∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
Date: 15 January 1982 12:14-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: multiple values.
To: Scott.Fahlman at CMU-10A
cc: Common-lisp at SU-AI

[1] I think your last note has some incorrect assumptions about how
    the procedure call mechanism will work on future Lisp machines.
    Not that the assumption isn't reasonable, but as I recall the procedure
    ARGUMENT mechanism and the mechanism for passing back
    the FIRST VALUE were designed to be inconsistent with the mechanism
    for passing the rest of the values. This puts a whole different
    perspective on the language semantics.
[2] At least one implementation, NIL, guessed that there would be
    demand in the future for various lambda extensions, so a
    sufficiently general lambda-grovelling mechanism was painlessly
    introduced from the beginning.

∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
Date: Saturday, 16 January 1982, 02:36-EST
From: David A. Moon <Moon at MIT-MC>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
Cc: common-lisp at su-ai

We are planning for implementation of the new multiple-value receiving
forms with &optional and &rest, on the L machine, but are unlikely to
be able to implement them on the present Lisp machine without a significant
amount of work.  I would just as soon see them flushed, but am willing
to implement them if the consensus is to keep them.

If by lambda-grovelling you mean (as GJC seems to think you mean) a
subroutine in the compiler that parses out the &optionals, that is about
0.5% of the work involved.  If by lambda-grovelling you mean the generated
code in a compiled function that takes some values and defaults the
unsupplied optionals, indeed that is where the hair comes in, since in
most implementations it can't be -quite- the same as the normal function-entry
case of what might seem to be the same thing.

∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
Date: 16 January 1982 0930-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: David A. Moon <Moon at MIT-MC> 
Subject:  Re: Multiple Values
CC: common-lisp at su-ai
In-Reply-To:  David A. Moon's message of 16 Jan 82 02:36-EST
Message-Id: <16Jan82 093009 SF50@CMU-10A>


As Moon surmises, my concern for "Lambda-grovelling" was indeed about
needing a second, slightly different version of the whole binding and
defaulting and rest-ifying machinery, not about the actual parsing of
the Lambda-list syntax which, as GJC points out, can be mostly put into
a universal function of its own.
-- Scott

∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
Date: Saturday, 16 January 1982, 10:22-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A, common-lisp at su-ai

What Moon says is true: I am writing a compiler, and parsing the
&-mumbles is quite easy compared to generating the code that implements
taking the returned values off of the stack and putting them where they
go while managing to run the default-forms and so on.  I could live
without the &-mumble forms of the receivers, although they seem like
they may be a good idea, and we are willing to implement them if they
appear in the Common Lisp definition.  I would not say that it is
generally an easy feature to implement.

It should be kept in mind that multiple-value-call certainly does not
provide the functionality of the &-mumble forms.  Only rarely do you
want to take all of the values produced by a function and pass them all
as successive arguments to a function.  Often they are some values
computed by the same piece of code, and you want to do completely
different things with each of them.
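
[Illustration: the common pattern being described, in which the two
values of one computation are used in unrelated ways.

    (MULTIPLE-VALUE-BIND (Q R) (FLOOR 17 5)
      (PRINT Q)          ; the quotient goes to output ...
      (IF (ZEROP R)      ; ... while the remainder drives a decision
          'EXACT
          'INEXACT))]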

The goal of the &-mumble forms was to provide the same kind of
error-checking that we have with function calling.  Interlisp has no
such error-checking on function calls, which seems like a terrible thing
to me; the argument says that the same holds true of returned values.
I'm not convinced by that argument, but it has some merit.

∂16-Jan-82  1252	Griss at UTAH-20 (Martin.Griss) 	Kernel for Common LISP    
Date: 16 Jan 1982 1347-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Kernel for Common LISP
To: guy.steel at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

What was actually decided about a "small" common kernel, with the rest
in LISP? Were core functions identified? This is the first place where
my work and expertise will strongly overlap; the smaller the
kernel, and the more jazzy features that can be implemented
in terms of it, the better.

Have you sent out a revised Ballot, or are there pending questions that
the "world-at-large" should respond to (as opposed to the ongoing
group that has been making decisions)? The last bit about the
lambda stuff for multiples is pretty obscure; it seems to depend on
a model that was discussed, but not documented (as far as I can see).

In general, where are the proposed solutions to the hard implementation
issues being recorded?
Martin
-------

∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
Date: 16 January 1982 17:11-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
cc: common-lisp at SU-AI

I mostly agree with SEF.

Better than a separate function M-V-CALL would be a new option to the
function CALL that allows one or more of several arg-forms to be
treated a la M-V-CALL.  Then it is possible to have more than one arg
form, all of whose values become separate args, intermixed with lists
of evaluated args, and ordinary args; but it is not really any harder
to implement than M-V-CALL alone.

[Background note: the Lisp machine function CALL takes alternating
options and arg-forms.  Each option says how to pass the following
arg-form.  It is either a symbol or a list of symbols.  Symbols now
allowed are SPREAD and OPTIONAL.  SPREAD means pass the elements of
the value as args.  OPTIONAL means do not get an error if the function
being called doesn't want the args.  This proposal is to add VALUES as
an alternative to SPREAD, meaning pass all values of the arg form as
args.]
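
[Illustration: a sketch based only on the description above; F,
ARG-LIST, and G are arbitrary names, and the spelling of the option
symbols is taken directly from the note.

    ;; Spread the elements of ARG-LIST as individual arguments to F,
    ;; and (under the proposed VALUES option) also pass every value
    ;; returned by (G) as further arguments:
    (CALL #'F 'SPREAD ARG-LIST 'VALUES (G))]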

If the &-keyword multiple value forms are not going to be implemented
on the current Lisp machine, that is an additional reason to keep them
out of Common Lisp, given that they are not vitally necessary for
anything.

∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
Date: 16 January 1982 2333-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Keyword sequence fns
Message-Id: <16Jan82 233312 SF50@CMU-10A>


My proposal for keyword-style sequence functions can be found on CMUA as

TEMP:NEWSEQ.PRE[C380SF50]

or as

TEMP:NEWSEQ.DOC[C380SF50]

Fire away.
-- Scott

∂17-Jan-82  0618	Griss at UTAH-20 (Martin.Griss) 	Agenda 
Date: 17 Jan 1982 0714-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Agenda
To: guy.Steele at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

Still haven't any indication from you guys as to what we should be discussing;
i.e., what should I be thinking about as our possible mode of interaction with
the Common Lispers?
M
-------

I had been deferring to GLS on this by silence, but let me tell you my thoughts
on the current situation.

First, the DEC/Rutgers thing took me somewhat by surprise. I know that Hedrick
thinks very highly of the Standard Lisp stuff, and I wouldn't mind seeing
a joint effort from the Common Lisp core people, Dec/Rutgers, and Utah.

From the Utah connection I would like to see a clean looking virtual machine,
a set of Lisp code to implement the fluff from Common Lisp, and a reasonable
portable type of compiler.

By `connection' I mean Utah providing the virtual machine for a few specific
computers, Common Lisp core people providing most of the Lisp code, and
maybe S-1 and Utah providing the compiler.

Even with Dec/Rutgers doing the Vax/20 versions, Utah provides us with
the expertise to do many other important, but bizarre machines, such as
68k based machines, IBM equipment, and Burroughs, to name a few. Perhaps
Rutgers/DEC wouldn't mind working with us all on this.

That is what I would like to discuss for political topics.

For technical topics, the virtual machine specification and the compiler
technology.

			-rpg-
∂17-Jan-82  1751	Feigenbaum at SUMEX-AIM 	more on Interlisp-VAX    
Date: 17 Jan 1982 1744-PST
From: Feigenbaum at SUMEX-AIM
Subject: more on Interlisp-VAX
To:   rindfleisch at SUMEX-AIM, barstow at SUMEX-AIM, bonnet at SUMEX-AIM,
      hart at SRI-KL, csd.hbrown at SU-SCORE
cc:   csd.genesereth at SU-SCORE, buchanan at SUMEX-AIM, lenat at SUMEX-AIM,
      friedland at SUMEX-AIM, pople at SUMEX-AIM, gabriel at SU-AI

Mail-from: ARPANET host USC-ISIB rcvd at 17-Jan-82 1647-PST
Date: 17 Jan 1982 1649-PST
From: Dave Dyer       <DDYER at USC-ISIB>
Subject: Interlisp-VAX report
To: feigenbaum at SUMEX-AIM, lynch at USC-ISIB, balzer at USC-ISIB,
    bengelmore at SRI-KL, nilsson at SRI-AI
cc: rbates at USC-ISIB, saunders at USC-ISIB, voreck at USC-ISIB, mcgreal at USC-ISIB,
    ignatowski at USC-ISIB, hedrick at RUTGERS, admin.mrc at SU-SCORE,
    jsol at RUTGERS, griss at UTAH-20, bboard at RUTGERS, reg at SU-AI

	Addendum to Interlisp-VAX: A report

		Jan 16, 1982


  Since Larry Masinter's "Interlisp-VAX: A Report" is being
used in the battle of LISPs, it is important that it be as
accurate as possible.  This note represents the viewpoint of
the implementors of Interlisp-VAX, as of January 1982.

  The review of the project, and the discussions with other
LISP implementors, that provided the basis for "Interlisp-VAX:
A report", were done in June 1981.  We were given the opportunity
to review and respond to a draft of the report, and had few
objections that were refutable at the time of its writing.

  We now have the advantage of an additional 6 months' development
effort, and can present as facts what would have been merely
counter arguments at the time.


  We believed at the time, and still believe now, that Masinter's
report is largely a fair and accurate presentation of Interlisp-VAX,
and of the long-term efforts necessary to support it.  However,
a few very important points he made have proven to be inaccurate.


AVAILABILITY AND FUNCTIONALITY
------------------------------

  Interlisp-VAX has been in beta test, here at ISI and at several
sites around the network, since November 13 (a friday - we weren't worried).
We are planning the first general release for February 1982 - ahead
of the schedule that was in effect in June, 1981.

  The current implementation includes all of the features of Interlisp-10
with very minor exceptions.  There is no noticeable gap in functionality
among Interlisp-10, Interlisp-D and Interlisp-VAX.

   Among the Interlisp systems we are running here are KLONE, AP3,
HEARSAY, and AFFIRM.

PERFORMANCE
-----------

   Masinter's analysis of the problems of maximizing performance,
both for Interlisp generally and for the VAX particularly was excellent.
It is now reasonable to quantify the performance based on experience
with real systems.   I don't want to descend into the quagmire of
benchmarking LISPs here, so I'll limit my statements to the most basic.

  CPU speed (on a VAX/780) is currently in the range of 1/4 the speed
of Interlisp-10 (on a KL-10), which we believe is about half the
asymptotically achievable speed.

   Our rule of thumb for real memory is 1 MB per active user.


-------


∂18-Jan-82  1602	Daniel L. Weinreb <DLW at MIT-AI> 	subseq and consing  
Date: Monday, 18 January 1982, 18:04-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: subseq and consing
To: common-lisp at SU-AI

I agree that GLS's proposal is nice, that it is only acceptable if the
compiler optimizes it, and that it is very easy to optimize.  It is also
extremely clear to the reader of the program, and it cuts down on the
number of arguments that he has to remember.  This sounds OK to me.

∂18-Jan-82  2203	Scott.Fahlman at CMU-10A 	Re: Sequence functions  
Date: 19 January 1982 0103-EST (Tuesday)
From: Scott.Fahlman at CMU-10A
To: Guy.Steele at CMU-10A
Subject:  Re: Sequence functions
CC: common-lisp at su-ai
In-Reply-To:  <17Jan82 205656 GS70@CMU-10A>
Message-Id: <19Jan82 010338 SF50@CMU-10A>


Guy,

I agree that the index-range and the comparison-choice parameters are
orthogonal.  I like your proposal to use SUBSEQ for the ranges -- it
would appear to be no harder to optimize this in the compiler than to
do the equivalent keyword or optional argument thing, and the added
consing in interpreted code (only!)  should not make much difference.
And the semantics of what is going on with all the start and end
options now becomes crystal clear.  We would need a style suggestion in
the manual urging the programmer to use SUBSEQ for this and not some
random thing he cooks up, since the compiler will only recognize fairly
obvious cases.  Good idea!

I do not like the part of your proposal that relates to reordering the
arguments, on the grounds of gross incompatibility.  Unless we want to
come up with totally new names for all these functions, the change will
make it a real pain to move code and programmers over from Maclisp or
Franz.  Too high a price to pay for epsilon increase in elegance.  I
guess that of the suggestions I've seen so far, I would go with your
subseq idea for ranges and my keywords for specifying the comparison,
throwing out the IF family.

-- Scott

∂19-Jan-82  1551	RPG  	Suggestion    
To:   common-lisp at SU-AI  
I would like to make the following suggestion regarding the
strategy for designing Common Lisp. I'm not sure how to exactly
implement the strategy, but I think it is imperative we do something
like this soon.

We should separate the kernel from the Lisp based portions of the system
and design the kernel first. Lambda-grovelling, multiple values,
and basic data structures seem kernel. Sequence functions and names
can be done later.

The reason that we should do this is so that the many man-years of effort
to implement a Common Lisp can be done in parallel with the design of
less critical things. 
			-rpg-

∂19-Jan-82  2113	Griss at UTAH-20 (Martin.Griss) 	Re: Suggestion        
Date: 19 Jan 1982 1832-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Re: Suggestion    
To: RPG at SU-AI, common-lisp at SU-AI
cc: Griss at UTAH-20
In-Reply-To: Your message of 19-Jan-82 1651-MST

I agree entirely. In terms of my 2 interests:
a) Implementing Common LISP kernel/compatibility in/for PSL
b) Getting our and other LISP tools working for Common LISP

I would very much like to see a clear effort NOW to isolate some of the
kernel features, and major implementation issues (data-types, user
control over storage manager, etc) so that some of us can implement
a kernel, and others can design extensions.
-------

∂20-Jan-82  1604	David A. Moon <MOON5 at MIT-AI> 	Keyword style sequence functions
Date: 20 January 1982 16:34-EST
From: David A. Moon <MOON5 at MIT-AI>
Subject: Keyword style sequence functions
To: common-lisp at SU-AI

Comments on Fahlman's Proposal for Keyword Style Sequence Functions for
Common Lisp of 16 January 1982

I think this is a good proposal and a step in the right direction.  There
are some problems with it, and also a couple issues that come to mind while
reading it.  I will first make some minor comments to get flamed up, and
then say what I really want to say.


- Negative comments first:

ELT and SETELT should be provided in type-specific versions.

My intuition suggests that MAP would be more readable with the result data
type before the function instead of after.  I don't really feel strongly
about this, but it's a suggestion.

I don't like the idea of flushing CONCAT (catenate) and making TO-LIST
allow multiple arguments, for some reason.

There is a problem with the :compare and :compare-not keywords.  For some
functions (the ones that take two sequences as arguments), the predicate is
really and truly an equality test.  It might be clearer to call it :equal.
For these functions I think it makes little sense to have a :compare-not.
Note that these are the same functions for which :if/:if-not are meaningless.
For other functions, such as POSITION, the predicate may not be a symmetric
equality predicate; you might be trying to find the first number in a list
greater than 50, or the number of astronauts whose grandmothers are not
ethnic Russians.  Here it makes sense to have a :compare-not.  It may actually
make sense to have a :compare keyword for these functions and a :equal
keyword for the others.  I'm not ecstatic about the name compare for this,
but I haven't thought of anything better.  This is only a minor esthetic
issue; I wouldn't really mind leaving things the way they are in Fahlman's
proposal.

Re :start and :end.  A nil value for either of these keywords should be
the same as not supplying it (i.e. the appropriate boundary of the sequence.)
This makes a lot of things simpler.  In :from-end mode, is the :start where
you start processing the sequence or the left-hand end of the subsequence?
In the Lisp machine, it is the latter, but either way would be acceptable.

The optional "count" argument to REMOVE and friends should be a keyword
argument.  This is more uniform, doesn't hurt anything, and is trivially
mechanically translatable from the old way.

The set functions, from ADJOIN through NSET-XOR, should not take keywords.
:compare-not is meaningless for these (unlike say position, where you would
use it to find the first element of a sequence that differed from a given
value).  That leaves only one keyword for these functions.  Also it is
-really- a bad idea to put keywords after an &rest argument (as in UNION).
I would suggest that the equal-predicate be a required first argument for
all the set functions; alternatively it could be an optional third argument
except for UNION and INTERSECTION, or those functions could be changed
to operate on only two sets like the others.  I think EQUAL is likely
to be the right predicate for set membership only in rare circumstances,
so that it would not hurt to make the predicate a required argument and
have no default predicate.

The :eq, :eql, :nequal, etc. keywords are really a bad idea.  The reasons
are:  1) They are non-uniform, with some keywords taking arguments and
some not.  See the tirade about this below.  2) They introduce an artificial
barrier between system-defined and user-defined predicates.  This is always
a bad idea, and here serves no useful purpose.  3) They introduce an
unesthetic interchangeability between foo and :foo, which can lead to
a significant amount of confusion.  If the keyword form of specifying the
predicate is too verbose, I would be much happier with making the predicate
be an optional argument, to be followed by keywords.  Personally I don't
think it is verbose enough to justify that.

There are still a lot of string functions in the Lisp machine not generalized
into sequence functions.  I guess it is best to leave that issue for future
generations and get on with the initial specification of Common Lisp.


- Negative comments not really related to the issue at hand:

"(the :string foo)".  Data type names cannot have colons, i.e. cannot be
keywords.  The reason is that the data type system is user-extensible, at
least via defstruct and certainly via other mechanisms such as flavors in
individual implementations and in future Common extensions.  This means
that it is important to be able to use the package system to avoid name
clashes between data types defined by different programs.  The standard
primitive data type names should be globals (or more exactly, should be
in the same package as the standard primitive functions that operate
on those data types.)

Lisp machine experience suggests that it is really not a good idea to have
some keywords take arguments and other keywords not take arguments.  It's a
bit difficult to explain why.  When you are just using these functions with
their keywords in a syntactic way, i.e. essentially as special forms, it
makes no difference except insofar as it makes the documentation more
confusing.  But when you start having programs processing the keywords,
i.e. using the sequence functions as functions rather than special forms,
all hell breaks loose if the syntax isn't uniform.  I think the slight
ugliness of an extra "t" sometimes is well worth it for the sake of
uniformity and simplicity.  On the Lisp machine, we've gone through an
evolution in the last couple of years in which keywords that don't take
arguments have been weeded out.

I don't think much of the scheme for having keywords be constants.  There
is nothing really bad about this except for the danger of confusing
novices, so I guess I could be talked into it, but I don't think getting
rid of the quote mark is a significant improvement (but perhaps it is in
some funny place on your keyboard, where you can't find it, rather than
lower case and to the right of the semicolon as is standard for
typewriters?)


- Minor positive comments

Making REPLACE take keywords is a good idea.

:start1/:end1/:start2/:end2 is a good idea.

The order of arguments to the compare/compare-not function needs to be
strictly defined (since it is not always a commutative function).  Presumably 
the right thing is to make its arguments come in the same order as the
arguments to the sequence function from which they derive.  Thus for SEARCH
the arguments would be an element of sequence1 followed by an element of
sequence2, while for POSITION the arguments would be the item followed
by an element of the sequence.
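
[Illustration: why the order matters, using the keyword names of this
proposal; SEQ is an arbitrary sequence.

    ;; Find the position of the first element greater than 50.  The
    ;; predicate is called with the item first and the element second,
    ;; i.e. as (< 50 element), so < is the right test:
    (POSITION 50 SEQ :COMPARE #'<)]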

In addition to MEMQ, etc., would it be appropriate to have MEMQL, etc.,
which would use EQL as the comparison predicate?

MEMBER is a better name than POSITION for the predicate that tests for
membership of an element in a sequence, when you don't care about its
position and really want simply a predicate.  I am tempted to propose that
MEMBER be extended to sequences.  Of course, this would be a non-uniform
extension, since the true value would be T rather than a tail of a list (in
other words, MEMBER would be a predicate on sequences but a semi-predicate
on lists.)  This might be a nasty for novices, but it really seems worth
risking that.  Fortunately car, cdr, rplaca, and rplacd of T are errors in
any reasonable implementation, so that accidentally thinking that the truth
value is a list is likely to be caught immediately.
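
[Illustration: the non-uniformity in question.

    (MEMBER 'B '(A B C))       ; => (B C), a tail of the list
    (MEMBER 'B #(A B C))       ; => T, under the proposed extension]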


- To get down to the point:

The problems remaining after this proposal are basically two.  One is that there
is still a ridiculous family of "assoc" functions, and the other is that the
three proposed solutions to the -if/-if-not problem (flushing it, having an
optional argument before a required argument, or passing nil as a placeholder)
are all completely unacceptable.

My solution to the first problem is somewhat radical: remove ASSOC and all
its relatives from the language entirely.  Instead, add a new keyword,
:KEY, to the sequence functions.  The argument to :KEY is the function
which is given an element of the sequence and returns its "key", the object
to be fed to the comparison predicate.  :KEY would be accepted by REMOVE,
POSITION, COUNT, MEMBER, and DELETE.  This is the same as the new optional
argument to SORT (and presumably MERGE), which replaced SORTCAR and
SORTSLOT; but I guess we don't want to make those take keywords.  It is
also necessary to add a new sequence function, FIND, which takes arguments
like POSITION but returns the element it finds.  With a :compare of EQ and
no :key, FIND is (almost) trivial, but with other comparisons and/or other
keys, it becomes extremely useful.

The default value for :KEY would be #'ID or IBID or CR, whatever we call
the function that simply returns its argument [I don't like any of those
names much.]  Using #'CAR as the argument gives you ASSOC (from FIND),
MEMASSOC (from MEMBER), POSASSOC (from POSITION), and DELASSOC (from
DELETE).  Using #'CDR as the argument gives you the RASS- forms.  Of
course, usually you don't want to use either CAR or CDR as the key, but
some defstruct structure-element-accessor.
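
[Illustration: how :KEY subsumes the a-list searchers, using the
keyword names of this proposal; ALIST is an arbitrary list of pairs.

    ;; FIND with a key of CAR returns the pair whose car matches,
    ;; i.e. it does what ASSOC (or ASSQ, given an EQ test) does:
    (FIND 'COLOR ALIST :KEY #'CAR)
    ;; The same key gives POSITION and DELETE the POSASSOC and
    ;; DELASSOC behaviors:
    (POSITION 'COLOR ALIST :KEY #'CAR)
    (DELETE 'COLOR ALIST :KEY #'CAR)
    ;; A key of CDR gives the RASS- forms:
    (FIND 3 ALIST :KEY #'CDR)]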

In the same way that it may be reasonable to keep MEMQ for historical
reasons and because it is used so often, it is probably good to keep
ASSQ and ASSOC.  But the other a-list searching functions are unnecessary.

My solution to the second problem is to put in separate functions for
the -if and -if-not case.  In fact this is a total of only 10 functions:

	remove-if	remove-if-not	position-if	position-if-not
	count-if	count-if-not	delete-if	delete-if-not
	find-if		find-if-not
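
[Illustration: typical uses of the -if/-if-not pairs; L is an arbitrary
sequence.

    (REMOVE-IF-NOT #'NUMBERP L)                 ; keep only the numbers
    (POSITION-IF #'(LAMBDA (X) (> X 50)) L)     ; first element > 50
    (COUNT-IF-NOT #'SYMBOLP L)                  ; how many non-symbols]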

MEMBER-IF and MEMBER-IF-NOT are identical to SOME and NOTEVERY if the above
suggestion about extending MEMBER to sequences is adopted, and if my memory
of SOME and NOTEVERY is correct (I don't have a Common Lisp manual here.)
If they are put in anyway, that still makes only 12 functions, which are
really only 6 entries in the manual since -if/-if-not pairs would be
documented together.

∂20-Jan-82  1631	Kim.fateman at Berkeley 	numerics and common-lisp 
Date: 20 Jan 1982 16:29:10-PST
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: numerics and common-lisp

The following stuff was sent a while back to GLS, and seemed to
provoke no comment; although it probably raises more questions
than answers, here goes:

*** Issue 81: Complex numbers. Allow SQRT and LOG to produce results in
whatever form is necessary to deliver the mathematically defined result.

RJF:  This is problematical. The mathematically defined result is not
necessarily agreed upon.  Does Log(0) produce an error or a symbol?
(e.g. |log-of-zero| ?)  If a symbol, what happens when you try to
do arithmetic on it? Does sin(x) give up after some specified max x,
or continue to be a periodic function up to limit of machine range,
as on the HP 34?  Is accuracy specified in addition to precision?
Is it possible to specify rounding modes by flag setting or by
calling specific rounding-versions e.g. (plus-round-up x y) ? Such
features make it possible to implement interval arithmetic nicely.
Can one trap (signal, throw) on underflow, overflow,...
It would be a satisfying situation if common lisp, or at least a
superset of it, could exploit the IEEE standard. (Prof. Kahan would
much rather that language standardizers NOT delve too deeply into this,
leaving the semantics  (or "arithmetics") to specialists.)

Is it the case that a complex number could be implemented by
#C(x y) == (complex x y) ?  in which case  (real z) ==(cadr z),
(etc); Is a complex "atomic" in the lisp sense, or is it
the case that (eq (numerator #C(x y)) (numerator #C(x z)))?
Can one "rplac←numerator"?
If one is required to implement another type of atom for the
sake of rationals and another for complexes,
and another for ratios of complexes, then the
utility of this had better be substantial, and the implementation
cost modest.  In the case of x and y rational, there are a variety of
ways of representing x + i*y.  For example, it
is always possible to rationalize the denominator, but is it
required?
If  #R(1 2)  == (rat 1 2), is it the case that
(numerator r) ==(cadr r) ?  what is the numerator of (1/2+i)?

Even if you insist that all complex numbers are floats, not rationals,
you have multiple precisions to deal with.  Is it allowed to 
compute intermediate results to higher precision, or must one truncate
(or round) to some target precision in-between operations?

.......
Thus (SQRT -1.0) -> #C(0.0 1.0) and (LOG -1.0) -> #C(0.0 3.14159265).
Document all this carefully so that the user who doesn't care about
complex numbers isn't bothered too much.  As a rule, if you only play
with integers you won't see floating-point numbers, and if you only
play with non-complex numbers you won't see complex numbers.
.......
RJF: You've given 2 examples where, presumably, integers
are converted not only into floats, but into complex numbers. Your
rule does not seem to be a useful characterization. 
Note also that, for example, asin(1.5) is complex.

*** Issue 82: Branch cuts and boundary cases in mathematical
functions. Tentatively consider compatibility with APL on the subject of
branch cuts and boundary cases.
.......
RJF: Certainly gratuitous differences with APL, Fortran, PL/I, etc. are
not a good idea!
.....

*** Issue 83: Fuzzy numerical comparisons. Have a new function FUZZY=
which takes three arguments: two numbers and a fuzz (relative
tolerance), which defaults in a way that depends on the precision of the
first two arguments.

.......
RJF: Why is this considered a language issue (in Lisp!), when the primary
language for numerical work (Fortran, not APL) does not address it?  The
computation of absolute and relative errors is sufficiently simple that not
much would be added by making this part of the language.  I believe the fuzz
business is used to cover up the fact that some languages do not support
integers.  In such systems, some computations result in 1.99999 vs. 2.00000
comparisons, even though both numbers are "integers".
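
[Illustration: the three-argument comparison described in Issue 83 is
indeed only a few lines; a minimal sketch, leaving out the rule for
defaulting the fuzz from the precision of the operands.

    (DEFUN FUZZY= (X Y FUZZ)
      ;; True when X and Y agree to within the relative tolerance FUZZ.
      (<= (ABS (- X Y))
          (* FUZZ (MAX (ABS X) (ABS Y)))))]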

Incidentally, on "mod" of floats, I think that what you want is
like the "integer-part" of the IEEE proposal.  The EMOD instruction on 
the VAX is a brain-damaged attempt to do range-reductions.
.......

*** Issue 93: Complete set of trigonometric functions? Add ASIN, ACOS,
and TAN.


*** Issue 95: Hyperbolic functions. Add SINH, COSH, TANH, ASINH, ACOSH,
and ATANH.
.....
Also useful are log(1+x) and exp(x)-1.


*** Issue 96: Are several versions of pi necessary? Eliminate the
variables SHORT-PI, SINGLE-PI, DOUBLE-PI, and LONG-PI, retaining only
PI.  Encourage the user to write such things as (SHORT-FLOAT PI),
(SINGLE-FLOAT (/ PI 2)), etc., when appropriate.
......
RJF: huh?  why not #.(times 4 (atan 1.0)),  #.(times 4 (atan 1.0d0)) etc.
It seems you are placing a burden on the implementors and discussants
of common lisp to write such trivial programs when the same thing
could be accomplished by a comment in the manual. Constants like e could
be handled too...

.......
.......
RJF: Sorry if the above comments sound overly argumentative.  I realize they
are in general not particularly constructive. 
I believe the group here at UCB will be making headway in many 
of the directions required as part of the IEEE support, and that Franz
will be extended.

∂20-Jan-82  2008	Daniel L. Weinreb <dlw at MIT-AI> 	Suggestion     
Date: Wednesday, 20 January 1982, 21:04-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Suggestion    
To: RPG at SU-AI, common-lisp at SU-AI

Sounds good, unless it turns out to be difficult to figure out just
which things are the kernel and which aren't.  Also, when the kernel is
designed, things should be set up so that even if some higher-level
function is NOT in the kernel, it is still possible for some
implementations to write a higher-level function in "machine language"
if they want to, without losing when they load in gobs and gobs of
Lisp-coded higher-level stuff.

∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
Date: 20 Jan 1982 22:04:29-PST
From: Kim.fateman at Berkeley
To: dlw@MIT-AI
Subject: adding to kernel
Cc: common-lisp@su-ai

One of the features of Franz which we addressed early on in the
design for the VAX was how we would link to system calls in UNIX, and
provide calling sequences and appropriate data structures for use
by other languages (C, Fortran, Pascal).  An argument could be made
that linkages of this nature could be done by message passing, if
necessary; an argument could be made that  CL will be so universal
that it would not be necessary to make such linkages at all.  I
have not found these arguments convincing in the past, though in
the perspective of a single CL virtual machine running on many machines,
they might seem better. 

I am unclear as to how many implementations of CL are anticipated, also:
for what machines; 
who will be doing them;
who will be paying for the work;
how much it will cost to get a copy (if CL is done "for profit");
how will maintenance and standardization happen (e.g. under ANSI?);

If these questions have been answered previously, please forgive my
ignorance/impertinence.



∂19-Jan-82  2113	Fahlman at CMU-20C 	Re: Suggestion      
Date: 19 Jan 1982 2328-EST
From: Fahlman at CMU-20C
Subject: Re: Suggestion    
To: RPG at SU-AI
In-Reply-To: Your message of 19-Jan-82 1851-EST


Dick,
Your suggestion makes sense for implementations that are just getting started
now, but for those of us who have already got something designed, coded, and
close to up (and that includes most of the implementations that anyone now
cares about) I'm not sure that identifying and concentrating on a kernel is
a good move.  Sequence functions are quite pervasive and I, for one, would
like to see this issue settled soon.  Multiples, on the other hand, are fairly
localized.  Is there some implementation that is being particularly screwed
by the ordering of the current ad hoc agenda?
-- Scott
-------

I think it is possible for us to not define the kernel explicitly but to
identify those decisions that definitely apply to the kernel as opposed to
the non-kernel. It would seem that an established implementation would rather
know now about any changes to its kernel than later. I suggest that the
order of decisions be changed to decide `kernelish' issues first.
			-rpg-
∂20-Jan-82  1604	David A. Moon <MOON5 at MIT-AI> 	Keyword style sequence functions
Date: 20 January 1982 16:34-EST
From: David A. Moon <MOON5 at MIT-AI>
Subject: Keyword style sequence functions
To: common-lisp at SU-AI

Comments on Fahlman's Proposal for Keyword Style Sequence Functions for
Common Lisp of 16 January 1982

I think this is a good proposal and a step in the right direction.  There
are some problems with it, and also a couple issues that come to mind while
reading it.  I will first make some minor comments to get flamed up, and
then say what I really want to say.


- Negative comments first:

ELT and SETELT should be provided in type-specific versions.

My intuition suggests that MAP would be more readable with the result data
type before the function instead of after.  I don't really feel strongly
about this, but it's a suggestion.

I don't like the idea of flushing CONCAT (catenate) and making TO-LIST
allow multiple arguments, for some reason.

There is a problem with the :compare and :compare-not keywords.  For some
functions (the ones that take two sequences as arguments), the predicate is
really and truly an equality test.  It might be clearer to call it :equal.
For these functions I think it makes little sense to have a :compare-not.
Note that these are the same functions for which :if/:if-not are meaningless.
For other functions, such as POSITION, the predicate may not be a symmetric
equality predicate; you might be trying to find the first number in a list
greater than 50, or the number of astronauts whose grandmothers are not
ethnic Russians.  Here it makes sense to have a :compare-not.  It may actually
make sense to have a :compare keyword for these functions and a :equal
keyword for the others.  I'm not ecstatic about the name compare for this,
but I haven't thought of anything better.  This is only a minor esthetic
issue; I wouldn't really mind leaving things the way they are in Fahlman's
proposal.

Re :start and :end.  A nil value for either of these keywords should be
the same as not supplying it (i.e. the appropriate boundary of the sequence.)
This makes a lot of things simpler.  In :from-end mode, is the :start where
you start processing the sequence or the left-hand end of the subsequence?
In the Lisp machine, it is the latter, but either way would be acceptable.

The optional "count" argument to REMOVE and friends should be a keyword
argument.  This is more uniform, doesn't hurt anything, and is trivially
mechanically translatable from the old way.

The set functions, from ADJOIN through NSET-XOR, should not take keywords.
:compare-not is meaningless for these (unlike say position, where you would
use it to find the first element of a sequence that differed from a given
value).  That leaves only one keyword for these functions.  Also it is
-really- a bad idea to put keywords after an &rest argument (as in UNION).
I would suggest that the equal-predicate be a required first argument for
all the set functions; alternatively it could be an optional third argument
except for UNION and INTERSECTION, or those functions could be changed
to operate on only two sets like the others.  I think EQUAL is likely
to be the right predicate for set membership only in rare circumstances,
so that it would not hurt to make the predicate a required argument and
have no default predicate.
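
To make the alternatives concrete, here is a sketch of the two shapes being discussed (neither is settled syntax; the keyword form follows the 16 January proposal, the other follows the suggestion just made):

  (union #'eq set1 set2)            ; equality predicate as a required first argument
  (union set1 set2 :compare #'eq)   ; keyword style, which collides with &rest if UNION takes many sets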

The :eq, :eql, :nequal, etc. keywords are really a bad idea.  The reasons
are:  1) They are non-uniform, with some keywords taking arguments and
some not.  See the tirade about this below.  2) They introduce an artificial
barrier between system-defined and user-defined predicates.  This is always
a bad idea, and here serves no useful purpose.  3) They introduce an
unesthetic interchangeability between foo and :foo, which can lead to
a significant amount of confusion.  If the keyword form of specifying the
predicate is too verbose, I would be much happier with making the predicate
be an optional argument, to be followed by keywords.  Personally I don't
think it is verbose enough to justify that.

There are still a lot of string functions in the Lisp machine not generalized
into sequence functions.  I guess it is best to leave that issue for future
generations and get on with the initial specification of Common Lisp.


- Negative comments not really related to the issue at hand:

"(the :string foo)".  Data type names cannot have colons, i.e. cannot be
keywords.  The reason is that the data type system is user-extensible, at
least via defstruct and certainly via other mechanisms such as flavors in
individual implementations and in future Common Lisp extensions.  This means
that it is important to be able to use the package system to avoid name
clashes between data types defined by different programs.  The standard
primitive data type names should be globals (or more exactly, should be
in the same package as the standard primitive functions that operate
on those data types.)

Lisp machine experience suggests that it is really not a good idea to have
some keywords take arguments and other keywords not take arguments.  It's a
bit difficult to explain why.  When you are just using these functions with
their keywords in a syntactic way, i.e. essentially as special forms, it
makes no difference except insofar as it makes the documentation more
confusing.  But when you start having programs processing the keywords,
i.e. using the sequence functions as functions rather than special forms,
all hell breaks loose if the syntax isn't uniform.  I think the slight
ugliness of an extra "t" sometimes is well worth it for the sake of
uniformity and simplicity.  On the Lisp machine, we've gone through an
evolution in the last couple of years in which keywords that don't take
arguments have been weeded out.
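
For concreteness, a hypothetical pair of calls illustrating the point (keyword names as in the sequence-function proposal; the second form is the mixed style being argued against):

  ;; Uniform style: every keyword is followed by a value, so the options form
  ;; an ordinary alternating list that a program can walk or construct.
  (remove item sequence :from-end t :count 1)
  ;; Mixed style: :from-end takes no value, so only a fixed table can tell you
  ;; whether the thing after a given keyword is its value or the next keyword.
  (remove item sequence :from-end :count 1)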

I don't think much of the scheme for having keywords be constants.  There
is nothing really bad about this except for the danger of confusing
novices, so I guess I could be talked into it, but I don't think getting
rid of the quote mark is a significant improvement (but perhaps it is in
some funny place on your keyboard, where you can't find it, rather than
lower case and to the right of the semicolon as is standard for
typewriters?)


- Minor positive comments

Making REPLACE take keywords is a good idea.

:start1/:end1/:start2/:end2 is a good idea.

The order of arguments to the compare/compare-not function needs to be
strictly defined (since it is not always a commutative function).  Presumably 
the right thing is to make its arguments come in the same order as the
arguments to the sequence function from which they derive.  Thus for SEARCH
the arguments would be an element of sequence1 followed by an element of
sequence2, while for POSITION the arguments would be the item followed
by an element of the sequence.
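
A hedged illustration of that convention, using the greater-than-50 example from above (:compare as in Fahlman's proposal; the exact keyword is one of the open questions):

  ;; The item (50) is passed first and the sequence element second, matching
  ;; the argument order of POSITION itself, so #'< means "element exceeds 50".
  (position 50 '(12 40 87 9) :compare #'<)     ; => 2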

In addition to MEMQ, etc., would it be appropriate to have MEMQL, etc.,
which would use EQL as the comparison predicate?

MEMBER is a better name than POSITION for the predicate that tests for
membership of an element in a sequence, when you don't care about its
position and really want simply a predicate.  I am tempted to propose that
MEMBER be extended to sequences.  Of course, this would be a non-uniform
extension, since the true value would be T rather than a tail of a list (in
other words, MEMBER would be a predicate on sequences but a semi-predicate
on lists.)  This might be a nasty for novices, but it really seems worth
risking that.  Fortunately car, cdr, rplaca, and rplacd of T are errors in
any reasonable implementation, so that accidentally thinking that the truth
value is a list is likely to be caught immediately.


- To get down to the point:

The problems remaining after this proposal are basically two.  One is that there
is still a ridiculous family of "assoc" functions, and the other is that the
three proposed solutions to the -if/-if-not problem (flushing it, having an
optional argument before a required argument, or passing nil as a placeholder)
are all completely unacceptable.

My solution to the first problem is somewhat radical: remove ASSOC and all
its relatives from the language entirely.  Instead, add a new keyword,
:KEY, to the sequence functions.  The argument to :KEY is the function
which is given an element of the sequence and returns its "key", the object
to be fed to the comparison predicate.  :KEY would be accepted by REMOVE,
POSITION, COUNT, MEMBER, and DELETE.  This is the same as the new optional
argument to SORT (and presumably MERGE), which replaced SORTCAR and
SORTSLOT; but I guess we don't want to make those take keywords.  It is
also necessary to add a new sequence function, FIND, which takes arguments
like POSITION but returns the element it finds.  With a :compare of EQ and
no :key, FIND is (almost) trivial, but with other comparisons and/or other
keys, it becomes extremely useful.

The default value for :KEY would be #'ID or IBID or CR, whatever we call
the function that simply returns its argument [I don't like any of those
names much.]  Using #'CAR as the argument gives you ASSOC (from FIND),
MEMASSOC (from MEMBER), POSASSOC (from POSITION), and DELASSOC (from
DELETE).  Using #'CDR as the argument gives you the RASS- forms.  Of
course, usually you don't want to use either CAR or CDR as the key, but
some defstruct structure-element-accessor.
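
A small illustration of how the :KEY proposal subsumes the a-list family (FIND, :key, and :compare are the names proposed in this message; EMPLOYEE-NAME is a made-up defstruct accessor):

  (find 'blue alist :key #'car :compare #'eq)     ; plays the role of (assq 'blue alist)
  (delete 'blue alist :key #'cdr :compare #'eq)   ; the corresponding RASS-style deletion
  (find name table :key #'employee-name)          ; the usual case: a structure accessor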

In the same way that it may be reasonable to keep MEMQ for historical
reasons and because it is used so often, it is probably good to keep
ASSQ and ASSOC.  But the other a-list searching functions are unnecessary.

My solution to the second problem is to put in separate functions for
the -if and -if-not case.  In fact this is a total of only 10 functions:

	remove-if	remove-if-not	position-if	position-if-not
	count-if	count-if-not	delete-if	delete-if-not
	find-if		find-if-not

MEMBER-IF and MEMBER-IF-NOT are identical to SOME and NOTEVERY if the above
suggestion about extending MEMBER to sequences is adopted, and if my memory
of SOME and NOTEVERY is correct (I don't have a Common Lisp manual here.)
If they are put in anyway, that still makes only 12 functions, which are
really only 6 entries in the manual since -if/-if-not pairs would be
documented together.
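
For example, under this scheme one would simply write (illustrative only):

  (position-if #'plusp sequence)        ; index of the first positive element
  (remove-if-not #'symbolp sequence)    ; keep only the symbols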

∂20-Jan-82  1631	Kim.fateman at Berkeley 	numerics and common-lisp 
Date: 20 Jan 1982 16:29:10-PST
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: numerics and common-lisp

The following stuff was sent a while back to GLS, and seemed to
provoke no comment; although it probably raises more questions
than answers, here goes:

*** Issue 81: Complex numbers. Allow SQRT and LOG to produce results in
whatever form is necessary to deliver the mathematically defined result.

RJF:  This is problematical. The mathematically defined result is not
necessarily agreed upon.  Does Log(0) produce an error or a symbol?
(e.g. |log-of-zero| ?)  If a symbol, what happens when you try to
do arithmetic on it? Does sin(x) give up after some specified max x,
or continue to be a periodic function up to limit of machine range,
as on the HP 34?  Is accuracy specified in addition to precision?
Is it possible to specify rounding modes by flag setting or by
calling specific rounding-versions e.g. (plus-round-up x y) ? Such
features make it possible to implement interval arithmetic nicely.
Can one trap (signal, throw) on underflow, overflow,...
It would be a satisfying situation if common lisp, or at least a
superset of it, could exploit the IEEE standard. (Prof. Kahan would
much rather that language standardizers NOT delve too deeply into this,
leaving the semantics  (or "arithmetics") to specialists.)

Is it the case that a complex number could be implemented by
#C(x y) == (complex x y) ?  in which case  (real z) ==(cadr z),
(etc); Is a complex "atomic" in the lisp sense, or is it
the case that (eq (numerator #C(x y)) (numerator #C(x z)))?
Can one "rplac←numerator"?
If one is required to implement another type of atom for the
sake of rationals and another for complexes,
and another for ratios of complexes, then the
utility of this had better be substantial, and the implementation
cost modest.  In the case of x and y rational, there are a variety of
ways of representing x + i*y.  For example, it
is always possible to rationalize the denominator, but is it
required?
If  #R(1 2)  == (rat 1 2), is it the case that
(numerator r) ==(cadr r) ?  what is the numerator of (1/2+i)?

Even if you insist that all complex numbers are floats, not rationals,
you have multiple precisions to deal with.  Is it allowed to 
compute intermediate results to higher precision, or must one truncate
(or round) to some target precision in-between operations?

.......
Thus (SQRT -1.0) -> #C(0.0 1.0) and (LOG -1.0) -> #C(0.0 3.14159265).
Document all this carefully so that the user who doesn't care about
complex numbers isn't bothered too much.  As a rule, if you only play
with integers you won't see floating-point numbers, and if you only
play with non-complex numbers you won't see complex numbers.
.......
RJF: You've given 2 examples where, presumably, integers
are converted not only into floats, but into complex numbers. Your
rule does not seem to be a useful characterization. 
Note also that, for example, asin(1.5) is complex.

*** Issue 82: Branch cuts and boundary cases in mathematical
functions. Tentatively consider compatibility with APL on the subject of
branch cuts and boundary cases.
.......
RJF: Certainly gratuitous differences with APL, Fortran, PL/I, etc. are
not a good idea!
.....

*** Issue 83: Fuzzy numerical comparisons. Have a new function FUZZY=
which takes three arguments: two numbers and a fuzz (relative
tolerance), which defaults in a way that depends on the precision of the
first two arguments.

.......
RJF: Why is this considered a language issue (in Lisp!), when the primary
language for numerical work (Fortran, not APL) does not treat it as one?  The computation
of absolute and relative errors is sufficiently simple that not much
would be added by making this part of the language.  I believe the fuzz business is used to cover
up the fact that some languages do not support integers. In such systems,
some computations  result in 1.99999 vs. 2.00000 comparisons, even though
both numbers are "integers". 
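
To back up the claim that this is a one-liner rather than a language feature, a minimal sketch of a relative-tolerance comparison (the name FUZZY= is from Issue 83 as quoted; the fixed default fuzz ignores the precision-dependent defaulting the issue describes):

  (defun fuzzy= (x y &optional (fuzz 1.0e-6))
    ;; True when X and Y agree to within FUZZ, relative to the larger magnitude.
    (<= (abs (- x y))
        (* fuzz (max (abs x) (abs y)))))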

Incidentally, on "mod" of floats, I think that what you want is
like the "integer-part" of the IEEE proposal.  The EMOD instruction on 
the VAX is a brain-damaged attempt to do range-reductions.
.......

*** Issue 93: Complete set of trigonometric functions? Add ASIN, ACOS,
and TAN.


*** Issue 95: Hyperbolic functions. Add SINH, COSH, TANH, ASINH, ACOSH,
and ATANH.
.....
also useful are log(1+x) and exp(x)-1.


*** Issue 96: Are several versions of pi necessary? Eliminate the
variables SHORT-PI, SINGLE-PI, DOUBLE-PI, and LONG-PI, retaining only
PI.  Encourage the user to write such things as (SHORT-FLOAT PI),
(SINGLE-FLOAT (/ PI 2)), etc., when appropriate.
......
RJF: huh?  why not #.(times 4 (atan 1.0)),  #.(times 4 (atan 1.0d0)) etc.
It seems you are placing a burden on the implementors and discussants
of common lisp to write such trivial programs when the same thing
could be accomplished by a comment in the manual. Constants like e could
be handled too...

.......
.......
RJF: Sorry if the above comments sound overly argumentative.  I realize they
are in general not particularly constructive. 
I believe the group here at UCB will be making headway in many 
of the directions required as part of the IEEE support, and that Franz
will be extended.

∂20-Jan-82  2008	Daniel L. Weinreb <dlw at MIT-AI> 	Suggestion     
Date: Wednesday, 20 January 1982, 21:04-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Suggestion    
To: RPG at SU-AI, common-lisp at SU-AI

Sounds good, unless it turns out to be difficult to figure out just
which things are the kernel and which aren't.  Also, when the kernel is
designed, things should be set up so that even if some higher-level
function is NOT in the kernel, it is still possible for some
implementations to write a higher-level function in "machine language"
if they want to, without losing when they load in gobs and gobs of
Lisp-coded higher-level stuff.

∂19-Jan-82  1448	Feigenbaum at SUMEX-AIM 	more on common lisp 
Scott:
	Here are some messages I received recently. I'm worried about
Hedrick and the Vax. I'm not too worried about Lisp Machine, you guys,
and us guys (S-1). I am also worried about Griss and Standard Lisp,
which wants to get on the bandwagon. I guess I'd like to settle kernel
stuff first, fluff later.

	I understand your worry about sequences etc. Maybe we could try
to split the effort of studying issues a little. I dunno. It was just
a spur of the moment thought.
			-rpg-

∂19-Jan-82  1448	Feigenbaum at SUMEX-AIM 	more on common lisp 
Date: 19 Jan 1982 1443-PST
From: Feigenbaum at SUMEX-AIM
Subject: more on common lisp
To:   gabriel at SU-AI

Mail-from: ARPANET host PARC-MAXC rcvd at 19-Jan-82 1331-PST
Date: 19 Jan 1982 13:12 PST
From: Masinter at PARC-MAXC
to: Feigenbaum@sumex-aim
Subject: Common Lisp- reply to Hedrick

It is a shame that such misinformation gets such rapid dissemination....

Date: 19 Jan 1982 12:57 PST
From: Masinter at PARC-MAXC
Subject: Re: CommonLisp at Rutgers
To: Hedrick@Rutgers
cc: Masinter

A copy of your message to "bboard at RUTGERS, griss at UTAH-20, admin.mrc at
SU-SCORE, jsol at RUTGERS" was forwarded to me. I would like to rebut some of
the points in it:

I think that Common Lisp has the potential for being a good lisp dialect which
will carry research forward in the future. I do not think, however, that people
should underestimate the amount of time before Common Lisp could possibly be a
reality.

The Common Lisp manual is nowhere near being complete. Given the current
rate of progress, the Common Lisp language definition would probably not be
resolved for two years--most of the hard issues have merely been deferred (e.g.,
T and NIL, multiple-values), and there are many parts of the manual which are
simply missing. Given the number of people who are joining into the discussion,
some drastic measures will have to be taken to resolve some of the more serious
problems within a reasonable timeframe (say a year).

Beyond that, the number of things which would have to be done to bring up a
new implementation of CommonLisp leads me to believe that the kernel for
another machine, such as the Dec-20, would take on the order of 5 man-years at
least. For many of the features in the manual, it is essential that they be built
into the kernel (most notably the arithmetic features and the multiple-value
mechanism) rather than in shared Lisp code. I believe that many of these may
make an implementation of Common Lisp more "difficult to implement efficiently
and cleanly" than Interlisp. 

I think that the Interlisp-VAX effort has been progressing quite well. They have
focused on the important problems before them, and are proceeding quite well. I
do not know for sure, but it is likely that they will deliver a useful system
complete with a programming environment long before the VAX/NIL project,
which has consumed much more resources. When you were interacting with the
group of Interlisp implementors at Xerox, BBN and ISI about implementing
Interlisp, we cautioned you about being optimistic about the amount of
manpower required. What seems to have happened is that you have come away
believing that Common Lisp would be easier to implement.  I don't think that is
the case by far.

Given your current manpower estimate (one full-time person and one RA) I do
not believe you have the critical mass to bring off a useful implementation of
Common Lisp. I would hate to see a replay of the previous situation with
Interlisp-VAX, where budgets were made and machines bought on the basis of a
hopeless software project. It is not that you are not competent to do a reasonable
job of implementation, it is just that creating a new implementation of an already
specified language is much much harder than merely creating a new
implementation of a language originally designed for another processor. 

I do think that an Interlisp-20 using extended virtual addressing might be
possible, given the amount of work that has gone into making Interlisp
transportable, the current number of compatible implementations (10, D, Jericho,
VAX) and the fact that Interlisp "grew up" in the Tenex/Tops-20 world, and that
some of the ordinarily more difficult problems, such as file names and operating
system conventions, are already tuned for that operating system. I think that a
year of your spare time and Josh for one month seems very thin.

Larry
-------

∂20-Jan-82  2132	Fahlman at CMU-20C 	Implementations
Date: 21 Jan 1982 0024-EST
From: Fahlman at CMU-20C
Subject: Implementations
To: rpg at SU-AI
cc: steele at CMU-20C, fahlman at CMU-20C

Dick,

I agree that, where a choice must be made, we should give first priority
to settling kernel-ish issues.  However, I think that the debate on
sequence functions is not detracting from more kernelish things, so I
see no reason not to go on with that.

Thanks for forwarding Masinter's note to me.  I found him to be awfully
pessimistic.  I believe that the white pages will be essentially complete
and in a form that just about all of us can agree on within two months.
Of course, the Vax NIL crowd (or anyone else, for that matter) could delay
ratification indefinitely, even if the rest of us have come together, but I
think we had best deal with that when the need arises.  We may have to
do something to force convergence if it does not occur naturally.  My
estimate may be a bit optimistic, but I don't see how anyone can look at
what has happened since last April and decide that the white pages will
not be done for two years.

Maybe Masinter's two years includes the time to develop all of the
yellow pages stuff -- editors, cross referencers, and so on.  If so, I
tend to agree with his estimate.  To an Interlisper, Common Lisp will
not offer all of the comforts of home until all this is done and stable,
and a couple of years is a fair estimate for all of this stuff, given
that we haven't really started thinking about this.  I certainly don't
expect the Interlisp folks to start flocking over until all this is
ready, but I think we will have the Perq and Vax implementations
together within 6 months or so and fairly stable within a year.

I had assumed that Guy had been keeping you informed of the negotiations
we have had with DEC on Common Lisp for VAX, but maybe he has not.  The
situation is this: DEC has been extremely eager to get a Common Lisp up
on Vax VMS, due to pressure from Schlumberger and some other customers,
plus their own internal plans for building some expert systems.  Vax NIL
is not officially abandoned, but looks more and more dubious to them,
and to the rest of us.  A couple of months ago, I proposed to DEC that
we could build them a fairly decent compiler just by adding a
post-processor to the Spice Lisp byte-code compiler.  This
post-processor would turn the simple byte codes into in-line Vax
instructions and the more complex ones into jumps off to hand-coded
functions.  Given this compiler, one could then get a Lisp system up
simply by using the Common Lisp in Common Lisp code that we have
developed for Spice.  The extra effort to do the Vax implementation
amounts to only a few man-months and, once it is done, the system will
be totally compatible with the Spice implementation and will track any
improvements.  With some additional optimizations and a bit of tuning,
the performance of this system should be comparable to any other Lisp on
the Vax, and probably better than Franz.

DEC responded to this proposal with more enthusiasm than I expected.  It
is now nearly certain that they will be placing two DEC employees
(namely, ex-CMU grad students Dave McDonald and Walter van Roggen) here
in Pittsburgh to work on this, with consulting by Guy and me.  The goal
is to get a Common Lisp running on the Vax in six months, and to spend
the following 6 months tuning and polishing.  I feel confident that this
goal will be met.  The system will be done first for VMS, but I think we
have convinced DEC that they should invest the epsilon extra effort
needed to get a Unix version up as well.

So even if MIT totally drops the ball on VAX NIL, I think that it is a
pretty safe bet that a Common Lisp for Vax will be up within a year.  If
MIT wins, so much the better: the world will have a choice between a
hairy NIL and a basic Common Lisp implementation.

We are suggesting to Chuck Hedrick that he do essentially the same thing
to bring up a Common Lisp for the extended-address 20.  If he does, then
this implementation should be done in finite time as well, and should
end up being fully compatible with the other systems.  If he decides
instead to do a traditional brute-force implementation with lots of
assembly code, then I tend to agree with Masinter's view: it will take
forever.

I think we may have come up with an interesting kind of portability
here.  Anyway, I thought you would be interested in hearing all the
latest news on this.

-- Scott
-------

∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
Date: 20 Jan 1982 22:04:29-PST
From: Kim.fateman at Berkeley
To: dlw@MIT-AI
Subject: adding to kernel
Cc: common-lisp@su-ai

One of the features of Franz which we addressed early on in the
design for the VAX was how we would link to system calls in UNIX, and
provide calling sequences and appropriate data structures for use
by other languages (C, Fortran, Pascal).  An argument could be made
that linkages of this nature could be done by message passing, if
necessary; an argument could be made that  CL will be so universal
that it would not be necessary to make such linkages at all.  I
have not found these arguments convincing in the past, though in
the perspective of a single CL virtual machine running on many machines,
they might seem better. 

I am unclear as to how many implementations of CL are anticipated, also:
for what machines; 
who will be doing them;
who will be paying for the work;
how much it will cost to get a copy (if CL is done "for profit");
how will maintenance and standardization happen (e.g. under ANSI?);

If these questions have been answered previously, please forgive my
ignorance/impertinence.


The known and suspected implementations for Common Lisp are:

	S-1 Mark IIA, paid for by ONR, done by RPG, GLS, Rod Brooks and others
	SPICELISP, paid for by ARPA, done by SEF, GLS, students, some RPG
	ZETALISP, paid for by Symbolics, by Symbolics
	VAX Common Lisp, probably paid for by DEC, done by CMU Spice personnel
	Extended addressing 20, probably paid for by DEC, done by Rutgers (Hedrick)
	68000, Burroughs, IBM, Various portable versions done by Utah group,
		paid for by ARPA (hopefully spoken).
	Retrofit to MacLisp by concerned citizens, maybe.
∂21-Jan-82  1746	Earl A. Killian <EAK at MIT-MC> 	SET functions    
Date: 21 January 1982 17:26-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  SET functions
To: Morrison at UTAH-20, RMS at MIT-AI
cc: common-lisp at SU-AI

Well if you're going to propose two changes like that, you might
as well do SETF -> SET, instead of SETF -> SETQ.  It's shorter
and people wouldn't wonder what the Q or F means.

But actually I'm not particularly in favor of eliminating the set
functions, even though I tend to use SETF instead myself, merely
because I don't see how their nonexistence would clean up
anything.

∂21-Jan-82  1803	Richard M. Stallman <RMS at MIT-AI>
Date: 21 January 1982 18:01-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: EAK at MIT-MC
cc: common-lisp at SU-AI

The point is not to get rid of the setting functions, but to
reduce their status in the documentation.  Actually getting rid of
them doesn't accomplish much, as you say, and also is too great
an incompatibility.  (For the same reason, SETF cannot be renamed
to SET, but can be renamed to SETQ).  But moving them all to an
appendix on compatibility and telling most users simply
"to alter anything, use SETF" is a tremendous improvement in
the simplicity of the language as perceived by users, even if
there is no change in the actual system that they use.
(At the same time, any plans to introduce new setting functions
that are not needed for compatibility can be canceled).

∂21-Jan-82  1844	Don Morrison <Morrison at UTAH-20> 
Date: 21 Jan 1982 1939-MST
From: Don Morrison <Morrison at UTAH-20>
To: RMS at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 21-Jan-82 1601-MST

I'm not convinced that drastic renamings (such as SETF => SET) are
impractical.  Just as you move the documentation to a "compatibility
appendix", you move the old semantics to a "compatibility package".
Old code must be run with the reader interning in the MACLISP package
or the Franz LISP package, or whatever.  The only things which must
really change are the programmers -- and I believe the effort of
changing one's thoughts to a conceptually simpler LISP would, in the
long run, save programmers time and effort.

There is, however, the problem of maintenance of old code.  One would
not like to have to remember seventeen dialects of LISP just to
maintain old code.  But I suspect that maintenance would naturally
proceed by rewriting large hunks of code, which would then be done in
the "clean" dialect.  LISP code is not exempt from the usual folklore
that tweaking broken code only makes it worse.  This is just
conjecture; has experience on the LISP Machine shown that old MACLISP
code tends to get rewritten as it needs to change, or does it just get
tweaked, mostly using those historical atrocities left in for MACLISP
compatibility?

It would be a shame to  see a standardized Common LISP incorporate  the
same sort of historical  abominations as those  which FORTRAN 77  lives
with.
-------

∂21-Jan-82  2053	George J. Carrette <GJC at MIT-MC> 
Date: 21 January 1982 23:50-EST
From: George J. Carrette <GJC at MIT-MC>
To: Morrison at UTAH-20
cc: RMS at MIT-AI, common-lisp at SU-AI

My experience with running macsyma in maclisp and lispm is that what
happens is that compatibility features are not quite compatible, and
that gross amounts of tweaking, far beyond anything possible in
FORTRAN 77, go on. Much of the tweaking takes the form of adding
another layer of abstraction through macros, not using ANY known form
of lisp, but one which is a generalization, and obscure to anyone but
a macsyma-lisp hacker. At the same time the *really* gross old code
gets rewritten, when significant new features are provided, like
Pathnames.

Anyway, in NIL I wanted to get up macsyma as quickly as possible
without grossing out RLB or myself, or overloading NIL with so many
compatibility features, as happened in the Lispmachine. Also there
was that bad-assed T and NIL problem we only talked about a little
at the common-lisp meeting. [However, more severe problems, like the
fact that macsyma would not run with error-checking in CAR/CDR 
had already been fixed by smoking it out on the Lispmachine.]



∂21-Jan-82  1144	Sridharan at RUTGERS (Sri) 	S-1 CommonLisp   
Date: 21 Jan 1982 1435-EST
From: Sridharan at RUTGERS (Sri)
Subject: S-1 CommonLisp
To: rpg at SU-AI, guy.steele at CMU-10A

I have been kicking around an idea to build a multiprocessor aimed at
running some form of Concurrent Lisp as well as my AI language AIMDS.
I came across the S-1 project and it is clear I need to find out about
this project in detail.  Can you arrange to have me receive what
reports and documents are available on this project?

More recently, Hedrick mentioned in a note that there is an effort
to develop Lisp for the S-1.  How exciting!  Can you provide me
some background on this and describe the goals and current status?

My project is an attempt to develop coarse-grain parallelism in
a multiprocessor environment, each processor being of the order of a
Lisp-machine, with a switching element between processors and memories,
with ability for the user/programmer to write ordinary Lisp code,
enhanced in places with necessary declarations and also new primitives
to make it feasible to take advantage of parallelism.  One of the
goals of the project is to support gradual conversion of existing
code to take advantage of available concurrency.

My mailing address is
N.S.Sridharan
Department of Computer Science
Rutgers University, Hill Center
New Brunswick, NJ 08903
-------

∂22-Jan-82  1842	Fahlman at CMU-20C 	Re: adding to kernel
Date: 22 Jan 1982 2140-EST
From: Fahlman at CMU-20C
Subject: Re: adding to kernel
To: Kim.fateman at UCB-C70
cc: common-lisp at SU-AI
In-Reply-To: Your message of 21-Jan-82 0104-EST


The ability to link system calls and compiled routines written in the
barbarous tongues into Common Lisp will be important in some
implementations.  In others, this will be handled by inter-process
message passing (Spice) or by translating everything into Lisp or
Lispish byte-codes (Symbolics).  In any event, it seems clear that
features of this sort must be implementation-dependent packages rather
than parts of the Common Lisp core.

As for what implementations are planned, I know of the following that
are definitely underway: Spice Lisp, S1-NIL, VAX-NIL, and Zetalisp
(Symbolics).  Several other implementations (for Vax, Tops-20, IBM 4300
series, and a portable implementation from the folks at Utah) are being
considered, but it is probably premature to discuss the details of any
of these, since as far as I know none of them are definite as yet.  The
one implementation I can discuss is Spice Lisp.

Spice is a multiple process, multiple language, portable computing
environment for powerful personal machines (i.e. more powerful than the
current generation of micros).  It is being developed by a large group
of people at CMU, with mostly ARPA funding.  Spice Lisp is the Common
Lisp implementation for personal machines running Spice.  Scott Fahlman
and Guy Steele are in charge.  The first implementation is for the Perq
1a with 16K microstore and 1 Mbyte main memory (it will NOT run on the
Perq 1).  We will probably be porting all of the Spice system, including
the Lisp, to the Symbolics 3600 when this machine is available, with
other implementations probably to follow.

The PERQ implementation will probably be distributed and maintained by
3RCC as one of the operating systems for the PERQ; we would hope to
develop similar arrangements with other manufacturers of machines on
which Spice runs, since we at CMU are not set up to do maintenance for
lots of customers ourselves.

Standardization for a while will (we hope) be a result of adhering to the
Common Lisp Manual; once Common Lisp has had a couple of years to
settle, it might be worth freezing a version and going for ANSI
standardization, but not until then.
-------

∂22-Jan-82  1914	Fahlman at CMU-20C 	Multiple values
Date: 22 Jan 1982 2209-EST
From: Fahlman at CMU-20C
Subject: Multiple values
To: common-lisp at SU-AI


It has now been a week since I suggested flushing the lambda-list
versions of the multiple value catching forms.  Nobody has leapt up to
defend these, so I take it that nobody is as passionate about keeping
these around as I am about flushing them.  Therefore, unless strong
objections appear soon, I propose that we go with the simple Lisp
Machine versions plus M-V-Call in the next version of the manual.  (If,
once the business about lexical binding is resolved, it is clear that
these can easily be implemented as special cases of M-V-Call, we can put
them back in again.)

The CALL construct proposed by Stallman seems very strange and low-level
to me.  Does anyone really use this?  For what?  I wouldn't object to
having this around in a hackers-only package, but I'm not sure random
users ought to mess with it.  Whatever we do with CALL, I would like to
keep M-V-Call as well, as its use seems a good deal clearer without the
spreading and such mixed in.
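
For what it's worth, the sort of use that seems clear (assuming M-V-CALL passes along all the values of each argument form, and that FLOOR returns the quotient and remainder as two values):

  (multiple-value-call #'list (floor 7 2))     ; => (3 1)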

-- Scott
-------

∂22-Jan-82  2132	Kim.fateman at Berkeley 	Re: adding to kernel
Date: 22 Jan 1982 21:27:03-PST
From: Kim.fateman at Berkeley
To: Fahlman@CMU-20C
Subject: Re: adding to kernel
Cc: common-lisp@su-ai

There is a difference between the "common lisp core" and the
"kernel" of a particular implementation.  The common lisp core
presumably would have a function which obtains the time.  Extended
common lisp might convert the time to Roman numerals.  The kernel
would have to have a function (in most cases, written in something
other than lisp) which obtains the time from the hardware or
operating system.  I believe that the common lisp core should be
delineated, and the extended common lisp (written in common lisp core)
should be mostly identical from system to system.  What I would like
to know, though, is what will be required of the kernel, because it
will enable one to say to a manufacturer, it is impossible to write
a common lisp for this architecture because it lacks (say) a real-time
clock, or does not support (in the UNIX parlance) "raw i/o", or
perhaps multiprocessing...
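
A purely hypothetical sketch of the layering being asked about, using the clock example above (every name here is invented for illustration):

  (defun get-time ()                        ; Common Lisp core: portable, written in Lisp
    (kernel-read-clock))                    ; kernel primitive: per-machine, possibly not Lisp

  (defun get-time-in-roman-numerals ()      ; extended Common Lisp, written in the core
    (convert-to-roman (get-time)))          ; CONVERT-TO-ROMAN is also invented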

I hope that the results of common lisp discussions become available for
less than the $10k (or more) per cpu that keeps us at Berkeley from
using Scribe.  I have no objection to a maintenance organization, but
I hope copies of relevant programs (etc) are made available in an
unmaintained form for educational institutions or other worthy types.

Do the proprietor(s) of NIL think it is a "common lisp implementation"?
That is, if NIL and CL differ in specifications, will NIL change, or
will NIL be NIL, and a new thing, CL emerge?  If CL is sufficiently
well defined that, for example, it can be written in Franz Lisp with
some C-code added, presumably a CL compatibility package could be
written.  Would that make Franz a "common lisp implementation"?
(I am perfectly happy with the idea of variants of Franz; e.g. users
here have a choice of the CMU top-level or the (raw) default; they
can have a moderately interlisp-like set of functions ("defineq" etc.)
or the default maclisp-ish.)

∂23-Jan-82  0409	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
Date: 23 January 1982 07:07-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  adding to kernel
To: Kim.fateman at UCB-C70
cc: common-lisp at SU-AI, Fahlman at CMU-20C

I don't know the exact delivery time for Symbolics' new "L" machine,
nor the exact state of CMU spice-lisp, [which is on the front-burner
now for micro-coded implementation on their own machine no?] with
respect to any possible VAX implementation; but I suspect that of
all the lisp implementations planning to support the COMMON-LISP
standard, MIT's NIL is the closest to release. Can I get some
feedback on this?

As far as bucks go "$$$" gee. CPU's that can run lisp are not cheap
in themselves. However, I don't know anything concrete about the
marketing of NIL. Here is a cute one: when the New Implementation of Lisp
becomes the Old Implementation of Lisp, NIL becomes OIL.
However, right now it is still NEW, so you don't have to worry.

Unstated assumptions (so far) in Common-lisp?
[1] Error-checking CAR/CDR by default in compiled code.
[2] Lispm-featurefull debugging in compiled code.

Maybe this need not be part of the standard, but everybody knows that
it is part of the usability and marketability of a modern lisp.

Here is my guess as to what NIL will look like by the time the UNIX
port is made: Virtual Machine written in SCHEME, with the SCHEME compiler
written in NIL producing standard UNIX assembler. NIL written in NIL,
and the common-lisp support written in NIL and common-lisp. A Maclisp
compatibility namespace supported by functions written in NIL.
VM for unix written in Scheme rather than "C" might seem strange to
some, but it comes from a life-long Unix/C hacker around here who
wants to raise the stakes a bit to make it interesting. You know, one
thing for sure around MIT => If it ain't interesting it ain't going to
get done! <= There being so many other things to do, not to even
mention other, possibly commercial organizations.



∂23-Jan-82  0910	RPG  
To:   common-lisp at SU-AI  
MV Gauntlet Picked Up
Ok. I believe that even if the implementation details are grossly different
all constructs that bind should have the same syntax. Thus,
if any MV construct binds, and is called ``-BIND'', ``-LAMBDA'', or
``-LET'', it should behave the same way as anything else that purports
to bind (like LAMBDA).  Since LET and LAMBDA seem similar to most naive
users, too, I would like to see LET and LAMBDA brought into line.

I would like a uniform, consistent language, so I strongly propose
either simplifying LAMBDA to be as simple as Lisp Machine multiple-value-bind
and using Lisp Machine style MV's as Scott suggests, or going to complex
LAMBDA, complex MV-lambda as in the current scheme, and flushing Lisp
Machine Multiple-value-bind. I propose not doing a mixture. 
			-rpg-

∂23-Jan-82  1841	Fahlman at CMU-20C  
Date: 23 Jan 1982 2136-EST
From: Fahlman at CMU-20C
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 23-Jan-82 1210-EST


It seems clear to me that we MUST support two kinds of binding forms: a
simple-syntax form as in PROG and LET, and a more complex form as in
DEFUN and LAMBDA.  (Not to mention odd things like DO and PROGV that are
different but necessary.)  It clearly makes no sense to hair up PROG and
LET with optionals and rest args, since there is no possible use for
these things -- they would just confuse people and be a pain to
implement.  It is also clear that we are not going to abandon optionals
and rest args in DEFUN and LAMBDA in the name of uniformity -- they are
too big a win when you are defining functions that are going to be
called from a lot of different places, not all of them necessarily known
at compile-time.  So I don't really see what RPG is arguing for.  The
issue is not whether to support both a simple and a hairy syntax for
binding forms; the issue is simply which of these we want the
MV-catching forms to be.  And in answering that question, as in many
other places in the language, we must consider not only uniformity as
seen by Lisp theologians, but also implementation cost, runtime
efficiency, and what will be least confusing to the typical user.

-- Scott
-------

∂23-Jan-82  2029	Fahlman at CMU-20C 	Re:  adding to kernel    
Date: 23 Jan 1982 2319-EST
From: Fahlman at CMU-20C
Subject: Re:  adding to kernel
To: GJC at MIT-MC
cc: common-lisp at SU-AI
In-Reply-To: Your message of 23-Jan-82 0707-EST

In reply to GJC's recent message:

It is hard to comment on whether NIL is closer to being released than
other Common Lisp implementations, since you don't give us a time
estimate for NIL, and you don't really explain what you mean by
"released".  I understand that you have something turning over on
various machines at MIT, but it is unclear to me how complete this
version is or how much work has to be done to make it a Common Lisp
superset.  Also, how much manpower do you folks have left?

The PERQ implementation of Spice Lisp is indeed on our front burner.
Unfortunately, we do not yet have an instance of the PERQ 1a processor
upon which to run this.  The PERQ microcode is essentially complete and
has been debugged on an emulator.  The rest of the code, written in
Common Lisp itself, is being debugged on a different emulator.  If we
get the manual settled soon and if 3RCC delivers the 1a soon, we
should have a Spartan but usable Common Lisp up by the start of the
summer.  The Perq 1a will probably not be generally available until
mid-summer, given the delays in getting the prototype together.

By summer's end we should have an Emacs-like editor running, along with
some fairly nice debugging tools.  Of course, the system will be
improving for a couple of years beyond that as additional user amenities
appear.  I have no idea how long it will take 3RCC to start distributing
and supporting this Lisp, if that's what you mean by "release".  Their
customers might force them to move more quickly on this than they
otherwise would, but they have a lot of infrastructure to build -- no
serious Lispers over there at present.

As for your "unstated assumptions":

1. The amount of runtime error checking done by compiled code must be
left up to the various implementations, in general.  A machine like the
Vax will probably do less of this than a microcoded implementation, and
a native-code compiler may well want to give the user a compile-time
choice between some checking and maximum speed.  I think that the white
pages should just say "X is an error" and leave the question of how much
checking is done in compiled code to the various implementors.

2. The question of how (or whether) the user can debug compiled code is
also implementation-dependent, since the runtime representations and
stack formats may differ radically.  In addition, the user interface for
a debugging package will depend on the type of display used, the
conventions of the home system, and other such things, though one can
imagine that the debuggers on similar environments might make an effort
to look the same to the user.  The white pages should probably not
specify any debugging aids at all, or at most should specify a few
standard peeking functions that all implementations can easily support.

I agree that any Common Lisp implementation will need SOME decent debugging
aids before it will be taken seriously, but that does not mean that this
should be a part of the Common Lisp standard.

-- Scott
-------

∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 04:24-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I agree with Fahlman about binding constructs.
I want LAMBDA to be the way it is, and LET to be the way it is,
and certainly not the same.

As for multiple values, if LET is fully extended to do what
SETF can do, then (LET (((VALUES A B C) m-v-returning-form)) ...)
can be used to replace M-V-BIND, just as (SETF (VALUES A B C) ...)
can replace MULTIPLE-VALUES.  I never use MULTIPLE-VALUES any more
because I think that the SETF style is clearer.
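
For concreteness, the three styles being compared (the VALUES form inside LET is the extension RMS describes, not settled syntax; FLOOR is assumed to return the quotient and remainder as two values):

  (multiple-value-bind (q r) (floor 7 2)    ; the M-V-BIND style
    (list q r))

  (let (((values q r) (floor 7 2)))         ; LET extended to do what SETF can do
    (list q r))

  (let (q r)                                ; the SETF style preferred above
    (setf (values q r) (floor 7 2))
    (list q r))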

∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 06:02-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I would like to clear up a misunderstanding that seems to be
prevalent.  The MIT Lisp machine system, used by Symbolics and LMI, is
probably going to be converted to support Common Lisp (which is the
motivation for my participation in the effort to keep the design of Common Lisp
clean).  Whenever this happens, Common Lisp will be available on
the CADR machine (as found at MIT and as sold by LMI and Symbolics)
and the Symbolics L machine (after that exists), and on the second
generation LMI machine (after that exists).

I can't speak for LMI's opinion of Common Lisp, but if MIT converts,
LMI will certainly do so.  As the main Lisp machine hacker at MIT, I
can say that I like Common Lisp.

It is not certain when either of the two new machines will appear, or
when the Lisp machine system itself will support Common Lisp.  Since
these three events are nearly independent, they could happen in any
order.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:23-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

To clear up another random point: the name "Zetalisp" is not a Symbolics
proprietary name.  It is just a name that has been made up to replace
the ungainly name "Lisp Machine Lisp".  The reason for needing a name is
that I believe that people associate the Lisp Machine with Maclisp,
including all of the bad things that they have traditionally believed
about Maclisp, like that it has a user interface far inferior to that of
Interlisp.

I certainly hope that all of the Lisp Machines everywhere will convert
to Common Lisp together.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:20-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

If I understand what RPG is saying, then I think that I am not convinced
by his point.  I don't think that just because multiple-value-bind takes
a list of variables that are being bound to values, it HAS
to have all the features that LAMBDA combinations have, in the name of
language simplicity, because I just don't think that the inconsistency
there bothers me very much.  It is a very localized inconsistency and I
really do not believe it is going to confuse people much.

However, I still object to RMS's proposal, as I am still opposed to having
"destructuring LET".  I have flamed about this enough in the past that I
will not do it now.  However, having a "destructuring-bind" (by some
name) form that is like LET except that it destructures might be a
reasonable way to allow multiple-value-bind to work
without any perceived language inconsistency.

∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
Date: 24 January 1982 23:06-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  adding to kernel
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI

    From: Fahlman at CMU-20C
    It is hard to comment on whether NIL is closer to being released than
    other Common Lisp implementations, since you don't give us a time
    estimate for NIL.

Oh. I had announced a release date of JAN 30. But, with the air-conditioners
down for more than a week, that's got to slip to at least FEB 10. But
FEB 10 is the first week of classes at MIT, so I'll have JM, GJS, and
others on my case to get other stuff working. Sigh.
By release I mean that it is in a useful state, i.e. people will be able
to run their lisp programs in it. We have two concrete tests, though:
[1] To bring up "LSB".
   [A] This gives us stuff like a full hair FORMAT.
   [B] Martin's parser.
[2] To run Macsyma on the BEGIN, SIN, MATRIX, ALGSYS, DEFINT, ODE2 and 
    HAYAT demos. 

Imagine bringing yourself and a tape to a naked VMS site, and installing
Emacs, a modern lisp, and Macsyma, in that order. You can really
blow away the people who have heard about these things but never
had a chance to use them, especially on their very own machine.
One feeling that makes the hacking worthwhile.

Anyway, when I brought Macsyma over to the Plasma Fusion
Center Alcator Vax, I was doing all the Taylor series, integrals and
equation solving they threw at me. Stuff like
INTEGRATE(SIN(X↑2)*EXP(X↑2)*X↑2,X); then DIFF it, then RATSIMP and TRIGREDUCE
to get back to the starting point. (Try that on MC and see how many
files get loaded.)  (Sorry, gibberish to non-macsyma-hackers.)
=> So I can say that macsyma is released to MIT sites now. (MIT-LNS too). 
   People can use it and I'll field any bug reports. <=

Point of Confusion: Some people are confused as to what Common-Lisp is.
                    Even people at DEC.

-GJC

∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
Date: 25 Jan 1982 0125-EST
From: Fahlman at CMU-20C
Subject: Sequences
To: common-lisp at SU-AI


I have spent a couple of days mulling over RPG's suggestion for putting
the keywords into a list in functional position.  I thought maybe I
could get used to the unfamiliarity of the syntax and learn to like
this proposal.  Unfortunately, I can't.

I do like Guy's proposal for dropping START/END arguments and also
several of the suggestions that Moon made.  I am trying to merge all
of this into a revised proposal in the next day or two.  Watch this
space.

-- Scott
-------

∂24-Jan-82  2246	Kim.fateman at Berkeley 	NIL/Macsyma    
Date: 24 Jan 1982 22:40:50-PST
From: Kim.fateman at Berkeley
To: gjc@mit-mc
Subject: NIL/Macsyma 
Cc: common-lisp@SU-AI

Since it has been possible to run Macsyma on VMS sites (under Eunice or
its precursor) since April, 1980, (when we dropped off a copy at LCS),
it is not clear to me what GJC's ballyhoo is about.  If the physics
sites are only now getting a partly working Macsyma for VMS, it only
brings to mind the question of whether LCS ever sent out copies of the VMS-
Macsyma we gave them, to other MIT sites.

But getting Maclisp programs up under NIL should not be the benchmark,
nor is it clear what the relationship to common lisp is.
Having macsyma run under common lisp (whatever that will be)
would be very nice, of course,
whether having macsyma run under NIL is a step in that direction or
not.  It might also be nice to see, for example, one of the big interlisp
systems.

∂25-Jan-82  1558	DILL at CMU-20C 	eql => eq?   
Date: 25 Jan 1982 1857-EST
From: DILL at CMU-20C
Subject: eql => eq?
To: common-lisp at SU-AI

Proposal: rename the function "eq" in common lisp to be something like
"si:internal-eq-predicate", and the rename "eql" to be "eq".  This would
have several advantages.

 * Simplification by reducing the number of equality tests.

 * Simplification by reducing the number of different versions of
   various predicates that depend on the type of equality test you
   want.

 * Greater machine independence of lisp programs (whether eq and equal
   are the same function for various datatypes is heavily 
   implementation-dependent, while eql is defined to be relatively 
   machine-independent; furthermore, functions like memq in the current
   common lisp proposal make it easier to use eq comparisons than eql).

Possible disadvantages:

 * Do people LIKE having, say, numbers with identical values not be eq?
   If so, they won't like this.

 * Efficiency problems.

I don't believe the first complaint.  If there are no destructive
operations defined for an object, eq and equal ought to do the same
thing.
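
A small illustration of the distinction at issue (the EQ result is implementation-dependent, which is exactly the point):

  (let ((x 1.0) (y 1.0))
    (list (eq x y)       ; implementation-dependent: X and Y may be distinct flonum objects
          (eql x y)))    ; true: same type and the same numeric value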

The second complaint should not be significant in interpreted code,
since overhead of doing a type-dispatch will probably be insignificant
in comparison with, say, finding the right subr and calling it.

In compiled code, taking the time to declare variable types should allow
the compiler to open-code "eq" into address comparisons, if appropriate,
even in the absence of a hairy compiler.  A hairy compiler could do even
better.

Finally, in the case where someone wants efficiency at the price of
tastefulness and machine-independence, the less convenient
implementation-dependent eq could be used.
-------

∂25-Jan-82  1853	Fahlman at CMU-20C 	Re: eql => eq? 
Date: 25 Jan 1982 2151-EST
From: Fahlman at CMU-20C
Subject: Re: eql => eq?
To: DILL at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 25-Jan-82 1857-EST


I don't think it would be wise to replace EQ with EQL on a wholesale basis.
On microcoded machines, this can be made to win just fine and the added
tastefulness is worth it.  But Common Lisp has to run on vaxen and such as
well, and there the difference can be a factor of three.  In scattered
use, this would not be a problem, but EQ appears in many inner loops.
-- Scott
-------

∂27-Jan-82  1034	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
Date: 27 Jan 1982 1332-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: eql => eq?
To: DILL at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 25-Jan-82 1857-EST

Possibly CL is turning into something so far from normal Lisp that I
can't use my experience with Lisp to judge it.  However in the Lisp
programming that I am used to, I often thought in terms of the actual
data structures I was building, not of course at the bit level, but at
least at the level of pointers.  When doing this sort of programming,
raw comparison of pointers was a conceptual primitive.  Certainly if you
are going to turn Lisp into ADA, which seems the trend in much recent
thinking (not just the CL design effort), EQ will clearly be, as you
say, an internal implementation primitive.  But if anyone wants to
continue to program as I did, then it will be nice to have the real EQ
around.  Now certainly in most cases where EQ is being used to compare
pointers, EQL will work just as well, since these two things differ only
on objects where EQ would not validly be used in the style of
programming I am talking about.  However it is still EQ that is the
conceptual primitive, and I somehow feel better about the language if
when I want to compare pointers I get a primitive that compares
pointers, and not one that tests to see whether what I have is something
that it thinks I should be able to compare and if not does some part of
EQUAL (or is that name out of date now, too?).
-------

∂27-Jan-82  1445	Jon L White <JONL at MIT-MC> 	Multiple mailing lists?  
Date: 27 January 1982 17:27-EST
From: Jon L White <JONL at MIT-MC>
Subject: Multiple mailing lists?
To: common-lisp at SU-AI

Is everyone on this mailing list also on the LISP-FORUM list?
I.e., is there anyone who did not get my note entitled "Two little 
suggestions for macroexpansion" which was just sent out to LISP-FORUM?

∂27-Jan-82  1438	Jon L White <JONL at MIT-MC> 	Two little suggestions for macroexpansion    
Date: 27 January 1982 17:24-EST
From: Jon L White <JONL at MIT-MC>
Subject: Two little suggestions for macroexpansion
To: LISP-FORUM at MIT-MC

Several times in the COMMON LISP discussions, individuals have
proffered a "functional" format to alleviate having lots of
keywords for simple operations: E.g. GLS's suggestion on page 137
of "Decisions on the First Draft Common Lisp Manual", which would
allow one to write 
  ((fposition #'equal x) s 0 7)  for  (position x s 0 7)
  ((fposition #'eq x) s 0 7)     for  (posq x s 0 7)

This format looks similar to something I've wanted for a long time
when macroexpanding, namely, for a form  
	foo = ((<something> . . .) a1 a2) 
then, provided that <something> isn't one of the special words for this 
context [like LAMBDA or (shudder!) LABEL] why not first expand 
(<something> . . .), yielding say <more>, and then try again on the form  
(<more> a1 a2).    Of course, (<something> . . .) may not indicate any 
macros, and <more> will just be eq to it.   The MacLISP function MACROEXPAND 
does do this, but EVAL doesn't call it in this circumstance (rather EVAL does 
a recursive sub-evaluation).

FIRST SUGGESTION:
     In the context of ((<something> . . .) a1 a2),  have EVAL macroexpand 
 the part (<something> . . .) before recursively evaluating it.

  This will have the incompatible effect that
    (defmacro foo () 'LIST)
    ((foo) 1 2)
  no longer causes an error (unbound variable for LIST), but will rather
  first expand into (list 1 2), which then evaluates to (1 2).
  Similarly, the sequence
    (defun foo () 'LIST)
    ((foo) 1 2)
  would now, incompatibly, result in an error.
  [Yes, I'd like to see COMMON LISP flush the aforesaid recursive evaluation, 
   but that's another kettle of worms we don't need to worry about now.]


SECOND SUGGESTION
    Let FMACRO have special significance for macroexpansion in the context
 ((FMACRO . <fun>) . . .), such that this form is a macro call which is
 expanded by calling <fun> on the whole form.


As a result of these two changes, many of the "functional programming
style" examples could easily be implemented by macros.  E.g.
  (defmacro FPOSITION (predfun arg)
    `(FMACRO . (LAMBDA (FORM) 
		 `(SI:POS-HACKER ,',arg 
				 ,@(cdr form) 
				 ':PREDICATE 
				 ,',predfun))))
where SI:POS-HACKER is a version of POSITION which accepts keyword arguments
to direct the actions, at the right end of the argument list.
Notice how 

    ((fposition #'equal x) a1 a2) 
==>
    ((fmacro . (lambda (form) 
		  `(SI:POS-HACKER X ,@(cdr form) ':PREDICATE #'EQUAL)))
	  a1
	  a2)
==>
    (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL)

If any macroexpansion "cache'ing" is going on, then the original form 
((fposition #'equal x) a1 a2)  will be paired with the final
result (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL) -- e.g., either
by DISPLACEing, or by hashtable'ing such as MACROMEMO in PDP10 MacLISP.

Now unfortunately, this suggestion doesn't completely subsume the 
functional programming style, for it doesn't directly help with the
case mentioned by GLS:
  ((fposition (fnot #'numberp)) s)  for (pos-if-not #'numberp s)
Nor does it provide an easy way to use MAPCAR etc, since
  (MAPCAR (fposition #'equal x) ...)
doesn't have (fposition #'equal x) in the proper context.
[Foo, why not use DOLIST or LOOP anyway?]   Nevertheless, I've had many 
occasions where I wanted such a facility, especially when worrying about 
speed of compiled code.  

Any comments?

∂27-Jan-82  2202	RPG  	MVLet    
To:   common-lisp at SU-AI  

My view of the multiple value issue is that returning multiple values is
more like a function call than like a function return.  One cannot use
multiple values except in those cases where they are caught and spread
into variables via a MVLet or whatever.  Thus, (f (g) (h)) will ignore all
but the first values of g and h in this context.  In both the function
call and multiple value return cases the procedure that is to receive
values does not know how many values to expect in some cases.  In
addition, I believe that it is important that a function, if it can return
more than one value, can return any number it likes, and that the
programmer should be able to capture all of them somehow, even if some
must end up in a list.  The Lisp Machine multiple value scheme cannot do
this.  If we buy that it is important to capture all the values somehow,
then one of two things must happen.  First, the syntax for MVLet has to
allow something like (mvlet (x y (:rest z)) ...)  or (mvlet (x y . z)
...), which is close to the LAMBDA (or at least DEFUN-LAMBDA) syntax,
which means that it is a cognitive confusion if these two binding
descriptions are not the same.  Or, second, we have to have a version
like (mvlet l ...) which binds l to the list of returned values etc. This
latter choice, I think, is a loser.

Therefore, my current stand is that we either 1, go for the decision we
made in Boston at the November meeting, 2, we allow only 2 values in the
mv case (this anticipates the plea that it is sure convenient to be able
to return a value and a flag...), or 3, we flush multiple values
altogether.  I find the Lisp Machine `solution' annoyingly contrary to
intuition (even more annoying than just allowing 2 values).
			-rpg-
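
A hedged sketch (not RPG's code) of what "capturing all the values somehow"
could look like if a Zetalisp-style MULTIPLE-VALUE-LIST is available,
binding the first two values and leaving any further ones in a list --
roughly the effect proposed for (mvlet (x y . z) ...):

  (let* ((vals (multiple-value-list (floor 17 5)))   ; => (3 2)
         (x (first vals))
         (y (second vals))
         (z (cddr vals)))     ; any values beyond the first two end up in Z
    (list x y z))             ; => (3 2 NIL)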

∂28-Jan-82  0901	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
Date: Thursday, 28 January 1982, 11:37-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: MVLet    
To: RPG at SU-AI, common-lisp at SU-AI

(1) Would you please remind me what conclusion we came to at the
November meeting?  My memory is that the issue was left up in the air
and that there was no conclusion.

(2) I think that removing multiple values, or restricting the number,
would be a terrible restriction.  Multiple values are extremely useful;
their lack has been a traditional weakness in Lisp and I'd hate to see
that go on.

(3) In Zetalisp you can always capture all values by using
(multiple-value-list <form>).  Any scheme that has only multiple-value
and multiple-value-bind and not multiple-value-list is clearly a loser;
the Lisp-Machine-like alternative has got to be a proposal that has all
three Zetalisp forms (not necessarily under those names, of course).

∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 04:24-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I agree with Fahlman about binding constructs.
I want LAMBDA to be the way it is, and LET to be the way it is,
and certainly not the same.

As for multiple values, if LET is fully extended to do what
SETF can do, then (LET (((VALUES A B C) m-v-returning-form)) ...)
can be used to replace M-V-BIND, just as (SETF (VALUES A B C) ...)
can replace MULTIPLE-VALUES.  I never use MULTIPLE-VALUES any more
because I think that the SETF style is clearer.
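
A hedged illustration of the style RMS describes, assuming SETF is indeed
extended to accept a VALUES place:

  (let (q r)
    (setf (values q r) (floor 17 5))  ; in place of MULTIPLE-VALUES / M-V-SETQ
    (list q r))                       ; => (3 2)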

∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 06:02-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I would like to clear up a misunderstanding that seems to be
prevalent.  The MIT Lisp machine system, used by Symbolics and LMI, is
probably going to be converted to support Common Lisp (which is the
motivation for my participation in the design effort, to keep Common Lisp
clean).  Whenever this happens, Common Lisp will be available on
the CADR machine (as found at MIT and as sold by LMI and Symbolics)
and the Symbolics L machine (after that exists), and on the second
generation LMI machine (after that exists).

I can't speak for LMI's opinion of Common Lisp, but if MIT converts,
LMI will certainly do so.  As the main Lisp machine hacker at MIT, I
can say that I like Common Lisp.

It is not certain when either of the two new machines will appear, or
when the Lisp machine system itself will support Common Lisp.  Since
these three events are nearly independent, they could happen in any
order.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:23-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

To clear up another random point: the name "Zetalisp" is not a Symbolics
proprietary name.  It is just a name that has been made up to replace
the ungainly name "Lisp Machine Lisp".  The reason for needing a name is
that I believe that people associate the Lisp Machine with Maclisp,
including all of the bad things that they have traditionally believed
about Maclisp, like that it has a user interface far inferior to that of
Interlisp.

I certainly hope that all of the Lisp Machines everywhere will convert
to Common Lisp together.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:20-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

If I understand what RPG is saying then I think that I am not convinced
by his point.  I don't think that just because multiple-value-bind takes
a list of variables that are being bound to values means that it HAS
to have all the features that LAMBDA combinations have, in the name of
language simplicity, because I just don't think that the inconsistency
there bothers me very much.  It is a very localized inconsistency and I
really do not believe it is going to confuse people much.

However, I still object to RMS's proposal, as I am still opposed to having
"destructuring LET".  I have flamed about this enough in the past that I
will not do it now.  However, having a "destructuring-bind" (by some
name) form that is like LET except that it destructures might be a
reasonable solution to providing a way to allow multiple-value-bind to work
without any perceived language inconsistency.

∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
Date: 24 January 1982 23:06-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  adding to kernel
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI

    From: Fahlman at CMU-20C
    It is hard to comment on whether NIL is closer to being released than
    other Common Lisp implementations, since you don't give us a time
    estimate for NIL.

Oh. I had announced a release date of JAN 30. But, with the air-conditioner
down for more than a week, that's got to go to at least FEB 10. But
FEB 10 is the first week of classes at MIT, so I'll have JM, GJS, and
others on my case to get other stuff working. Sigh.
By release I mean that it is in a useful state, i.e. people will be able
to run their lisp programs in it. We have two concrete tests though, 
[1] To bring up "LSB".
   [A] This gives us stuff like a full hair FORMAT.
   [B] Martin's parser.
[2] To run Macsyma on the BEGIN, SIN, MATRIX, ALGSYS, DEFINT, ODE2 and 
    HAYAT demos. 

Imagine bringing yourself and a tape to a naked VMS site, and installing
Emacs, a modern lisp, and Macsyma, in that order. You can really
blow away the people who have heard about these things but never
had a chance to use them, especially on their very own machine.
That's the kind of feeling that makes the hacking worthwhile.

Anyway, when I brought Macsyma over to the Plasma Fusion
Center Alcator Vax, I was doing all the taylor series, integrals and
equation solving they threw at me. Stuff like
INTEGRATE(SIN(X↑2)*EXP(X↑2)*X↑2,X); Then DIFF it, then RATSIMP and TRIGREDUCE
to get back to the starting point.  (Try that on MC and see how many
files get loaded). (Sorry, gibberish to non-macsyma-hackers.)
=> So I can say that macsyma is released to MIT sites now. (MIT-LNS too). 
   People can use it and I'll field any bug reports. <=

Point of Confusion: Some people are confused as to what Common-Lisp is.
                    Even people at DEC.

-GJC

∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
Date: 25 Jan 1982 0125-EST
From: Fahlman at CMU-20C
Subject: Sequences
To: common-lisp at SU-AI


I have spent a couple of days mulling over RPG's suggestion for putting
the keywords into a list in functional position.  I thought maybe I
could get used to the unfamiliarity of the syntax and learn to like
this proposal.  Unfortunately, I can't.

I do like Guy's proposal for dropping START/END arguments and also
several of the suggestions that Moon made.  I am trying to merge all
of this into a revised proposal in the next day or two.  Watch this
space.

-- Scott
-------

∂25-Jan-82  1436	Hanson at SRI-AI 	NIL and DEC VAX Common LISP
Date: 25 Jan 1982 1436-PST
From: Hanson at SRI-AI
Subject: NIL and DEC VAX Common LISP
To:   rpg at SU-AI
cc:   hanson

Greetings:
	I understand from ARPA that DEC VAX Common Lisp may become a
reality and that you are closely involved.  If that is true, we in the
SRI vision group would like to work closely with you in defining the
specifications so that the resulting language can actually be used for
vision computations with performance and convenience comparable to
Algol-based languages.
	If this is not true, perhaps you can send me to the people
I should talk with to make sure the mistakes of FRANZLISP are not
repeated in COMMON LISP.
	Thanks,  Andy Hanson  859-4395

ps - Where can we get Common Lisp manuals?
-------

∂28-Jan-82  1235	Fahlman at CMU-20C 	Re: MVLet      
Date: 28 Jan 1982 1522-EST
From: Fahlman at CMU-20C
Subject: Re: MVLet    
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 28-Jan-82 0102-EST


I agree with DLW that we must retain M-V-LIST.  I never meant to exclude
that.

As for RPG's latest blast, I agree with some of his arguments but not
with his conclusions.  First, I think that the way multiple values are
actually used, in the overwhelming majority of cases, is more like a
return than a function call.  You call INTERN or FLOOR or some
user-written function, and you know what values it is going to return,
what each value means, and which ones you want to use.  In the case of
FLOOR, you might want the quotient or the remainder or both.  The old,
simple, Lisp Machine forms give you a simple and convenient way to
handle this common case.  If a function returns two often-used values
plus some others that are arcane and hard to remember, you just catch
the two you want and let the others (however many there are) evaporate.
M-V-LIST is available to programs (tracers for example) that want to
intercept all the values, no matter what.

Having said that, I agree that there are also some cases where you want
the catching of values to be more like a function call than a return,
since it may be somewhat unpredictable what is going to be bubbling up
from below, and the lambda list with optionals and rests has evolved as
a good way to handle this.  I submit that the cause of uniformity is
best served by actually making these cases be function calls, rather
than faking it.  The proposed M-V-CALL mechanism does exactly this when
given one value-returning "argument".  The proposal to let M-V-CALL
take more than one "argument" form is dangerous, in my view -- it could
easily lead to impenetrable and unmaintainable code -- but if it makes
John McCarthy happy, I'm willing to leave it in, perhaps with a warning
to users not to go overboard with this.

So I think RPG has made a strong case for needing something like
M-V-CALL, and I propose that M-V-CALL itself is the best form for this.
I am much less convinced by his argument that the multiple value SETQing
and BINDing forms have to be beaten into this same shape or thrown out
altogether.  Simple forms for simple things!

And even if RPG's aesthetic judgement were to prevail, I would still
have the problem that, because they have the semantics of PROGNs and not
of function calls, the Lambda-list versions of these functions would be
extremely painful to implement.

As I see it, if RPG wants to have a Lambda-binding form for value
catching, M-V-CALL gives this to him in a way that is clean and easily
implementable.  If what he wants is NOT to have the simple Lisp Machine
forms included, and to force everything through Lambda-list forms in the
name of uniformity, then we have a real problem.

-- Scott
-------
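
A hedged sketch of the single-form M-V-CALL usage Scott describes, written
here with the long name MULTIPLE-VALUE-CALL: the values of the form become
the arguments of the receiving function, so an ordinary lambda list with
&optional and &rest does the catching.

  (multiple-value-call
      #'(lambda (q &optional (r 0) &rest extra)
          (list q r extra))
    (floor 17 5))             ; => (3 2 NIL)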

∂28-Jan-82  1416	Richard M. Stallman <rms at MIT-AI> 	Macro expansion suggestions 
Date: 28 January 1982 17:13-EST
From: Richard M. Stallman <rms at MIT-AI>
Subject: Macro expansion suggestions
To: common-lisp at SU-AI

If (fposition #'equal x) is defined so that when in function position
it "expands" to a function, then (mapcar (fposition ...)) loses
as JONL says, but (mapcar #'(fposition ...)...) can perhaps be
made to win.  If (function (fposition...)) expands itself into
(function (lambda (arg arg...) ((fposition ...) arg arg...)))
it will do the right thing.  The only problem is to determine
how many args are needed, which could be a property of the symbol
fposition, or could appear somewhere in its definition.

Alternatively, the definition of fposition could have two "operations"
defined: one to expand when given an ordinary form with (fposition ...)
as its function, and one to expand when given an expression to apply
(fposition ...) to.

∂28-Jan-82  1914	Howard I. Cannon <HIC at MIT-MC> 	Macro expansion suggestions    
Date: 28 January 1982 19:46-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  Macro expansion suggestions
To: common-lisp at SU-AI


I have sent the following to GLS as a proposal for Lambda Macros in
Common Lisp.  It is implemented on the Lisp Machine, and is installed
in Symbolics system 202 (unreleased), and will probably be in MIT
system 79.

You could easily use them to implement functional programming style,
and they of course work with #' as RMS suggests.

The text is in Bolio input format, sorry.

--------

.section Lambda macros

Lambda macros may appear in functions where LAMBDA would have previously
appeared.  When the compiler or interpreter detects a function whose CAR
is a lambda macro, they "expand" the macro in much the same way that
ordinary Lisp macros are expanded -- the lambda macro is called with the
function as its argument, and is expected to return another function as
its value.  Lambda macros may be accessed with the (ε3:lambda-macroε*
ε2nameε*) function specifier.

.defspec lambda-macro function-spec lambda-list &body body
Analogously with ε3macroε*, defines a lambda macro to be called
ε2function-specε*. ε2lambda-listε* should consist of one variable, which
will be the function that caused the lambda macro to be called.  The
lambda macro must return a function.  For example:

.lisp
(lambda-macro ilisp (x)
  `(lambda (&optional ,@(second x) &rest ignore) . ,(cddr x)))
.end←lisp

would define a lambda macro called ε3ilispε* which would cause the
function to accept arguments like a standard Interlisp function -- all
arguments are optional, and extra arguments are ignored.  A typical call
would be:

.lisp
(fun-with-functional-arg #'(ilisp (x y z) (list x y z)))
.end←lisp

Then, any calls to the functional argument that
ε3fun-with-functional-argε* executes will pass arguments as if the
number of arguments did not matter.
.end←defspec

.defspec deflambda-macro
ε3deflambda-macroε* is like ε3defmacroε*, but defines a lambda macro
instead of a normal macro.
.end←defspec

.defspec deflambda-macro-displace
ε3deflambda-macro-displaceε* is like ε3defmacro-displaceε*, but defines
a lambda macro instead of a normal macro.
.end←defspec

.defspec deffunction function-spec lambda-macro-name lambda-list &body body 
ε3deffunctionε* defines a function with an arbitrary lambda macro
instead of ε3lambdaε*.  It takes arguments like ε3defunε*, expect that
the argument immediatly following the function specifier is the name of
the lambda macro to be used.  ε3deffunctionε* expands the lambda macro
immediatly, so the lambda macro must have been previously defined.

For example:

.lisp
(deffunction some-interlisp-like-function ilisp (x y z)
  (list x y z))
.end←lisp

would define a function called ε3some-interlisp-like-functionε*, that
would use the lambda macro called ε3ilispε*.  Thus, the function would
do no checking of the number of arguments.
.end←defspec
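
A hedged sketch of the ε3ilispε* example above written with
ε3deflambda-macroε*, assuming it destructures the arguments of the function
form the way ε3defmacroε* destructures a macro call:

  (deflambda-macro ilisp (arglist &body body)
    `(lambda (&optional ,@arglist &rest ignore) ,@body))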

∂28-Jan-82  1633	Fahlman at CMU-20C 	Re: Two little suggestions for macroexpansion
Date: 28 Jan 1982 1921-EST
From: Fahlman at CMU-20C
Subject: Re: Two little suggestions for macroexpansion
To: JONL at MIT-MC
cc: LISP-FORUM at MIT-MC
In-Reply-To: Your message of 27-Jan-82 1724-EST


JONL's suggestion looks pretty good to me.  Given this sort of facility,
it would be easier to experiment with functional styles of programming,
and nothing very important is lost in the way of useful error checking,
at least nothing that I can see.

"Experiment" is a key word in the above comment.  I would not oppose the
introduction of such a macro facility into Common Lisp, but I would be
very uncomfortable if a functional-programming style started to pervade
the base language -- I think we need to play with such things for a
couple of years before locking them in.

-- Scott
-------

∂29-Jan-82  0945	DILL at CMU-20C 	Re: eql => eq?    
Date: 29 Jan 1982 1221-EST
From: DILL at CMU-20C
Subject: Re: eql => eq?
To: HEDRICK at RUTGERS
cc: common-lisp at SU-AI
In-Reply-To: Your message of 27-Jan-82 1332-EST

If an object in a Common Lisp is defined to have a particular type of
semantics (basically, you would like it to be an "immediate" object if
you could only implement that efficiently), programmers should not have
to worry about whether it is actually implemented using pointers.  If
you think about your data structures in terms of pointers in the
implementation, I contend that you are thinking about them at the wrong
level (unless you have decided to sacrifice commonality in order to
wring nanoseconds out of your code).  The reason you have to think about
it at this level is that the Lisp dialect you use lets the
implementation shine through when it shouldn't.

With the current Common Lisp definition, users will have to go to extra
effort to write implementation-independent code. For example, if your
implementation makes all numbers (or characters or whatever) that are
EQUAL also EQ, you will have to stop and force yourself to use MEMBER or
MEM instead of MEMQ, because other implementations may use pointer
implementations of numbers (or worse, your program will work for some
numbers and not others, because you are in a maclisp compatibility mode
and numbers less than 519 are immediate but others aren't).  My belief
is that Common Lisp programs should end up being common, unless the user
has made a conscious decision to make his code implementation-dependent.
The only reason to decide against a feature that would promote this is
if it would result in serious performance losses.

Even if an implementation is running on a VAX, it is still possible to
declare data structures (with the proposed "THE" construct, perhaps) so
that the compiler can know to use the internal EQ when possible, or to use a
more specific predicate.  It is also not clear if compiled code for EQL
has to be expensive, depending on how hard it is to determine the type
of a datum -- it doesn't seem totally unreasonable that a single
instruction could determine whether to use the internal EQ (itself a single
instruction) or the hairier EQL code.

In what way is this "turning Lisp into Ada"?
-------
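
A hedged sketch of the declaration point; the function name is invented for
illustration.  With both arguments known to be symbols, EQL and EQ coincide,
so a compiler is free to compile the test as a raw pointer comparison:

  (defun same-tag-p (a b)
    (eql (the symbol a) (the symbol b)))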

∂29-Jan-82  1026	Guy.Steele at CMU-10A 	Okay, you hackers
Date: 29 January 1982 1315-EST (Friday)
From: Guy.Steele at CMU-10A
To: Fateman at UCB-C70, gjc at MIT-MC
Subject:  Okay, you hackers
CC: common-lisp at SU-AI
Message-Id: <29Jan82 131549 GS70@CMU-10A>

It would be of great interest to the entire LISP community, now that
MACSYMA is up and running on VAX on two different LISPs, to get some
comparative timings.  There are standard MACSYMA demo files, and MACSYMA
provides for automatic timing.  Could you both please run the set of demo
files GJC mentioned, namely BEGIN, SIN, MATRIX, ALGSYS, DEFINT, ODE2, and
HAYAT, and send the results to RPG@SAIL for analysis?  (You're welcome,
Dick!)
--Guy

∂29-Jan-82  1059	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
Date: 29 Jan 1982 1354-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: eql => eq?
To: DILL at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 29-Jan-82 1221-EST

I have gotten two rejoinders to my comments about the conceptual
usefulness of EQ, both of which explained to me that EQ is not useful
for numbers or any other objects which may be immediate in some
implementations and pointers in others.  I am well aware of that.
Clearly if I am interested either in comparing the values of two numbers
or in seeing whether two general objects will look the same when
printed, EQ is not the right thing to use.  But this has been true back
from the days of Lisp 1.5.  I claim however that there are many cases
where I know that what I am dealing with is in fact a pointer, and what
I want is something that simply checks to see whether two objects are
identical.  In this case, I claim that it is muddying the waters
conceptually to use a primitive that checks for certain kinds of objects
and does tests oriented towards seeing whether they look the same when
printed, act the same when multiplied, or something else.  Possibly it
would be sensible to have a primitive that works like EQ for pointers
and gives an error otherwise.  But if what you are trying to do is to
see whether two literal atoms or CONS cells are the same, I can't see
any advantage to something that works like EQ for pointers and does
something else otherwise.  I can even come up with cases where EQ makes
sense for real numbers.  I can well imagine a program where you have two
lists, one of which is a proper subset of the other.   Depending upon
how they were constructed, it might well be the case that if something
from the larger list is a member of the smaller list, it is a member
using EQ, even if the object involved is a real number. I trust that the
following code will always print T, even if X is a real number.
   (SETQ BIG-LIST (CONS X BIG-LIST))
   (SETQ SMALL-LIST (CONS X SMALL-LIST))
   (PRINT (EQ (CAR BIG-LIST) (CAR SMALL-LIST)))
-------

∂29-Jan-82  1146	Guy.Steele at CMU-10A 	MACSYMA timing   
Date: 29 January 1982 1442-EST (Friday)
From: Guy.Steele at CMU-10A
To: George J. Carrette <GJC at MIT-MC> 
Subject:  MACSYMA timing
CC: common-lisp at SU-AI
In-Reply-To:  George J. Carrette's message of 29 Jan 82 13:30-EST
Message-Id: <29Jan82 144201 GS70@CMU-10A>

Well, I understand your reluctance to release timings before the
implementation has been properly tuned; but on the other hand,
looking at the situation in an abstract sort of way, I don't understand
why someone willing to shoot off his mouth and take unsupported pot
shots in a given forum should be unwilling to provide in that same
forum some objective data that might help to douse the flames (and
this goes for people on both sides of the fence).  In short, I merely
meant to suggest a way to prove that the so-called ballyhoo was
worthwhile (not that this is the only way to prove it).
--Guy

∂29-Jan-82  1204	Guy.Steele at CMU-10A 	Re: eql => eq?   
Date: 29 January 1982 1452-EST (Friday)
From: Guy.Steele at CMU-10A
To: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject:  Re: eql => eq?
CC: common-lisp at SU-AI
In-Reply-To:  HEDRICK@RUTGERS's message of 29 Jan 82 13:54-EST
Message-Id: <29Jan82 145243 GS70@CMU-10A>

(DEFUN FOO (X)
  (SETQ BIG-LIST (CONS X BIG-LIST))
  (SETQ SMALL-LIST (CONS X SMALL-LIST))
  (PRINT (EQ (CAR BIG-LIST) (CAR SMALL-LIST))))

(DEFUN BAR (Z) (FOO (*$ Z 2.0)))

Compile this using the MacLISP compiler.  Then (BAR 3.0) reliably
prints NIL, not T.  The reason is that the compiled code for FOO
gets, as its argument X, a pdl number passed to it by BAR.  The code
for FOO happens to choose to make two distinct heap copies of X,
rather than one, and so the cars of the two lists will contain
distinct pointers.
--Guy

∂29-Jan-82  1225	George J. Carrette <GJC at MIT-MC> 	MACSYMA timing
Date: 29 January 1982 15:23-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  MACSYMA timing
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

All I said was that Macsyma was running, and I felt I had to
do that because many people thought that NIL was not a working
language. I get all sorts of heckling from certain people anyway,
so a few extra unsupported pot-shots aren't going to bother me.
Also, I have limited time now to complete a paper on the timing
figures that JM wants me to submit to the conference on lisp
and applicative languages, taking place at CMU, right? So you
get the picture.

But, OK, I'll give two timing figures, VAX-780 speed in % of KL-10.

Compiling "M:MAXII;NPARSE >"   48% of KL-10.
INTEGRATE(1/(X↑3-1),X)         12% of KL-10.

Obviously the compiler is the most-used program in NIL, so it has been tuned.
Macsyma has not been tuned.

Note well, I say "Macsyma has not been tuned" not "NIL has not been tuned."
Why? Because NIL has been tuned, lots of design thought by many people,
and lots of work by RWK and RLB to provide fast lisp primitives in the VAX.
It is Macsyma which needs to be tuned for NIL. This may not be very
interesting! Purely source-level hacks. For example, the Franz people
maintain entirely separate versions of large (multi-page)
functions from the core of Macsyma for the purpose
of making Macsyma run fast in Franz.
=> There is nothing wrong with this when it is worth the time saved
   in solving the user's problems. I think for Macsyma it is worth it. <=

The LISPM didn't need special hacks though. This is interesting,
I think...

-gjc

∂29-Jan-82  1324	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
Date: 29 Jan 1982 1620-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re:  Re: eql => eq?
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI
In-Reply-To: Your message of 29-Jan-82 1452-EST

I call that a bug.
-------

∂29-Jan-82  1332	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
Date: 29 Jan 1982 1627-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re:  Re: eql => eq?
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI
In-Reply-To: Your message of 29-Jan-82 1452-EST

I seem to recall that it was a basic property of Lisp that
  (EQ X (CAR (CONS X Y)))
If your compiler compiles code that does not preserve this property,
the kindest thing I have to say is that it is premature optimization.
-------

∂29-Jan-82  1336	Guy.Steele at CMU-10A 	Re: Re: eql => eq?    
Date: 29 January 1982 1630-EST (Friday)
From: Guy.Steele at CMU-10A
To: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject:  Re: Re: eql => eq?
CC: common-lisp at SU-AI
In-Reply-To:  HEDRICK@RUTGERS's message of 29 Jan 82 16:20-EST
Message-Id: <29Jan82 163020 GS70@CMU-10A>

Well, it is at least a misfeature that SETQ and lambda-binding
do not preserve EQ-ness.  It is precisely for this reason that
the predicate EQL was proposed: this is the strongest equivalence
relation on S-expressions which is preserved by SETQ and binding.
Notice that this definition is in terms of user-level semantics
rather than implementation technique.
It certainly was a great feature that user semantics and implementation
coincided and had simple definitions in EQ in the original LISP.
MacLISP was nudged from this by the great efficiency gains to be had
for numerical code, and it didn't bother too many users.
The Swiss Cheese draft of the Common LISP manual does at least make
all this explicit: see the first page of the Numbers chapter.  The
disclaimer is poorly stated (my fault), but it is there for the nonce.
--Guy
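
A hedged sketch (not Guy's code) of the guarantee being drawn: EQL survives
binding and assignment even where EQ on numbers may not.

  (defun collect-twice (x)
    (let ((a (list x))
          (b (list x)))
      ;; Compiled code may put two distinct heap copies of a number X into
      ;; the two lists, so the EQ result is implementation-dependent; the
      ;; EQL result is always T.
      (list (eq (car a) (car b)) (eql (car a) (car b)))))
  ;; (collect-twice (* 2.0 3.0))   ; => (NIL T) or (T T), never (NIL NIL)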

∂29-Jan-82  1654	Richard M. Stallman <RMS at MIT-AI> 	Trying to implement FPOSITION with LAMBDA-MACROs.    
Date: 29 January 1982 19:46-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Trying to implement FPOSITION with LAMBDA-MACROs.
To: HIC at MIT-AI, common-lisp at SU-AI

LAMBDA-MACRO is a good hack but is not exactly what JONL was suggesting.

The idea of FPOSITION is that ((FPOSITION X Y) MORE ARGS)
expands into (FPOSITION-INTERNAL X Y MORE ARGS), and
((FPOSITION) MORE ARGS) into (FPOSITION-INTERNAL NIL NIL MORE ARGS).
In JONL's suggestion, the expander for FPOSITION operates on the
entire form in which the call to the FPOSITION-list appears, not
just to the FPOSITION-list.  This allows FPOSITION to be handled
straightforwardly; but also causes trouble with (FUNCTION (FPOSITION
...)) where lambda-macros automatically work properly.

It is possible to define FPOSITION using lambda-macros by making
(FPOSITION X Y) expand into 
(LAMBDA (&REST ARGS) (FUNCALL* 'FPOSITION-INTERNAL X Y ARGS))
but this does make worse code when used in an internal lambda.
It would also be possible to use an analogous SUBST function
but first SUBST functions have to be made to work with &REST args.
I think I can do this, but are SUBST functions in Common Lisp?
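
A hedged sketch of the expansion RMS describes, using the LAMBDA-MACRO
facility from HIC's note and APPLY in place of FUNCALL*;
FPOSITION-INTERNAL is a hypothetical helper that takes the predicate and
item before the usual POSITION arguments:

  (lambda-macro fposition (form)
    ;; FORM is the whole (FPOSITION pred item) function form.
    `(lambda (&rest args)
       (apply #'fposition-internal ,(second form) ,(third form) args)))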

∂29-Jan-82  2149	Kim.fateman at Berkeley 	Okay, you hackers   
Date: 29 Jan 1982 20:31:23-PST
From: Kim.fateman at Berkeley
To: guy.steele@cmu-10a
Subject: Okay, you hackers
Cc: common-lisp@SU-AI

I think that when GJC says that NIL/Macsyma runs the "X" demo, it
is kind of like the dog that plays checkers.  It is
remarkable, not for how well it plays, but for the fact that it plays at all.

(And I believe it is creditable [if] NIL runs Macsyma at all... I
know how hard it is, so don't get me wrong..)
Anyway, the standard timings we have had in the past, updated somewhat:

MC-Macsyma, Vaxima and Lisp Machine timings for DEMO files
(fg genral, fg rats, gen demo, begin demo)
(garbage collection times excluded.)  An earlier version of this
table was prepared and distributed in April, 1980.  The only
column I have changed is the 2nd one.

MC Time	     VAXIMA    	128K lispm     192K lispm       256K lispm
4.119	   11.8   sec.  43.333 sec.     19.183 sec.    16.483 sec.  
2.639	    8.55  sec.  55.916 sec.     16.416 sec.    13.950 sec. 
3.141	   14.3   sec. 231.516 sec.     94.933 sec.    58.166 sec.  
4.251	   13.1   sec. 306.350 sec.    125.666 sec.    90.716 sec. 


(Berkeley VAX 11/780 UNIX (Kim) Jan 29, 1982,  KL-10 MIT-MC ITS April 9, 1980.)
Kim has no FPA, and 2.5meg of memory.  Actually, 2 of these times are
slower than in 1980, 2 are faster. 

Of course, GJC could run these at MIT on his Franz/Vaxima/Unix system, and
then bring up his NIL/VMS system and time them again.

∂29-Jan-82  2235	HIC at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs.  
Date: Friday, 29 January 1982  22:13-EST
From: HIC at SCRC-TENEX
To:   Richard M. Stallman <RMS at MIT-AI>
Cc:   common-lisp at SU-AI
Subject: Trying to implement FPOSITION with LAMBDA-MACROs.

    Date: Friday, 29 January 1982  19:46-EST
    From: Richard M. Stallman <RMS at MIT-AI>
    To:   HIC at MIT-AI, common-lisp at SU-AI
    Re:   Trying to implement FPOSITION with LAMBDA-MACROs.

    LAMBDA-MACRO is a good hack but is not exactly what JONL was suggesting.
Yes, I know.  I think it's the right thing, however.

    The idea of FPOSITION is that ((FPOSITION X Y) MORE ARGS)
    expands into (FPOSITION-INTERNAL X Y MORE ARGS), and
    ((FPOSITION) MORE ARGS) into (FPOSITION-INTERNAL NIL NIL MORE ARGS).
    In JONL's suggestion, the expander for FPOSITION operates on the
    entire form in which the call to the FPOSITION-list appears, not
    just to the FPOSITION-list.  This allows FPOSITION to be handled
    straightforwardly; but also causes trouble with (FUNCTION (FPOSITION
    ...)) where lambda-macros automatically work properly.
Yes, that's right.  If you don't care about #'(FPOSITION ..), then you can have
the lambda macro expand into a real macro which can see the form, so you
can use lambda macros to simulate JONL's behavior quite easily.

    It is possible to define FPOSITION using lambda-macros by making
    (FPOSITION X Y) expand into 
    (LAMBDA (&REST ARGS) (FUNCALL* 'FPOSITION-INTERNAL X Y ARGS))
    but this does make worse code when used in an internal lambda.
    It would also be possible to use an analogous SUBST function
    but first SUBST functions have to be made to work with &REST args.
    I think I can do this, but are SUBST functions in Common Lisp?
Yes, this is what I had in mind.  The fact that this makes worse code
when used as an internal lambda is a bug in the compiler, not an
intrinsic fact of Common-Lisp or of the Lisp Machine.  However, it would
be ok if subst's worked with &REST args too.

∂30-Jan-82  0006	MOON at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs 
Date: Saturday, 30 January 1982  03:00-EST
From: MOON at SCRC-TENEX
To:   Richard M. Stallman <RMS at MIT-AI>
Cc:   common-lisp at SU-AI
Subject: Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs

If SUBSTs aren't in Common Lisp, they certainly should be.  They are
extremely useful and trivial to implement.

∂30-Jan-82  0431	Kent M. Pitman <KMP at MIT-MC> 	Those two little suggestions for macroexpansion 
Date: 30 January 1982 07:26-EST
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Those two little suggestions for macroexpansion
To: Fahlman at CMU-20C
cc: LISP-FORUM at MIT-MC

    Date: 28 Jan 1982 1921-EST
    From: Fahlman at CMU-20C

    JONL's suggestion looks pretty good to me...
-----
Actually, JONL was just repeating suggestions brought up by GLS and EAK just
over a year ago on LISP-FORUM. I argued then that the recursive EVAL call was
semantically all wrong and not possible to support compatibly between the 
interpreter and compiler ... I won't bore you with a repeat of that discussion.
If you've forgotten it and are interested, it's most easily gettable from the
file "MC: AR1: LSPMAIL; FMACRO >".

∂30-Jan-82  1234	Eric Benson <BENSON at UTAH-20> 	Re: MVLet   
Date: 30 Jan 1982 1332-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: MVLet
To: Common-Lisp at SU-AI

Regarding return of multiple values: "...their lack has been a traditional
weakness in Lisp..."  What other languages have this feature?  Many have
call-by-reference which allows essentially the same functionality, but I
don't know of any which have multiple value returns in anything like the
Common Lisp sense.

I can certainly see the benefit of including them, but the restrictions
placed on them and the dismal syntax for using them counteracts the
intention of their inclusion, namely to increase the clarity of those
functions that have more than one value of interest.  If we were using a
graphical dataflow language they would fit like a glove, without all the
fuss.  The problem arises because each arrangement of arcs passing values
requires either its own special construct or binding the values to
variables.  I'm not suggesting we should throw out the n-in, 1-out nature
of Lisp forms in favor of an n-in, m-out arrangement (at least not right
now!), but rather that the current discussion of multiple values is unlikely to
come to a satisfactory conclusion due to the "tacked-on afterthought"
nature of the current version.  We may feel that it is a useful enough
facility to keep in spite of all this, but it's probably too much to hope
to "do it right".
-------

∂30-Jan-82  1351	RPG  	MVlet    
To:   common-lisp at SU-AI  
Of course, if Scott is only worried about the difficulty of implementing
the full MVlet with hairy syntax, all one has to do is provide MV-LIST
as Dan notes and write MVlet as a simple macro using that and LAMBDA.
That way CONSes, but who said that it had to be implemented well?
				-rpg-

∂30-Jan-82  1405	Jon L White <JONL at MIT-MC> 	Comparison of "lambda-macros" and my "Two little suggestions ..."
Date: 30 January 1982 16:55-EST
From: Jon L White <JONL at MIT-MC>
Subject: Comparison of "lambda-macros" and my "Two little suggestions ..."
To: KMP at MIT-MC, hic at SCRC-TENEX
cc: LISP-FORUM at MIT-MC, common-lisp at SU-AI

[Apologies for double mailings -- could we agree on a name for a
 mailing list to be kept at SU-AI which would just be those 
 individuals in COMMON-LISP@SU-AI which are not also on LISP-FORUM@MC]

There were two suggestions in my note, and lambda-macros relate
to only one of them, namely the first one

    FIRST SUGGESTION:
	 In the context of ((<something> . . .) a1 a2),  have EVAL macroexpand 
     the part (<something> . . .) and "try again" before recursively 
     evaluating it. This will have the incompatible effect that
	(defmacro foo () 'LIST)
	((foo) 1 2)
     no longer causes an error (unbound variable for LIST), but will rather
     first expand into (list 1 2), which then evaluates to (1 2).

Note that for clarity, I've added the phrase "try again", meaning to
look at the form and see if it is recognized explicitly as, say, some
special form, or some subr application.

The discussion from last year, which resulted in the name "lambda-macros"
centered around finding a separate (but equal?) mechanism for code-expansion
for non-atomic forms which appear in a function place;  my first suggestion 
is to change EVAL (and compiler if necessary) to call the regular macroexpander
on any form which looks like some kind of function composition, and thus
implement a notion of "Meta-Composition" which is context free.  It would be 
a logical consequence of this notion that eval'ing (FUNCTION (FROTZ 1)) must
first macroexpand (FROTZ 1), so that #'(FPOSITION ...) could work in the 
contexts cited about MAP.  However, it is my second suggestion that would
not work in the context of an APPLY -- it is intended only for the EVAL-
of-a-form context -- and I'm not sure if that has been fully appreciated
since only RMS appears to have alluded to it.

However, I'd like to offer some commentary on why context-free 
"meta-composition" is good for eval, yet why context-free "evaluation" 
is bad:
  1) Context-free "evaluation" is SCHEME.  SCHEME is not bad, but it is
     not LISP either.  For the present, I believe the LISP community wants
     to be able to write functions like:
	(DEFUN SEMI-SORT (LIST)
	  (IF (GREATERP (FIRST LIST) (SECOND LIST))
	      LIST 
	      (LIST (SECOND LIST) (FIRST LIST))))
     Correct interpretation of the last line means doing (FSYMEVAL 'LIST)
     for the instance of LIST in the "function" position, but doing (more
     or less) (SYMEVAL 'LIST) for the others -- i.e., EVAL acts differently
     depending upon whether the context is "function" or "expression-value".
 2) Context-free "Meta-composition" is just source-code re-writing, and
    there is no ambiguity of reference such as occurred with "LIST" in the 
    above example.  Take this example:
	(DEFMACRO GET-SI (STRING)
	  (SETQ STRING (TO-STRING STRING))
	  (INTERN STRING 'SI))
        (DEFUN SEE-IF-NEW-ATOM-LIST (LIST)
	  ((GET-SI "LIST")  LIST  (GET-SI "LIST")))
    Note that the context for (GET-SI "LIST") doesn't matter (sure, there
    are other ways to write equivalent code but . . .)
    Even the following macro definition for GET-SI results in perfectly
    good, unambiguous results:
	(DEFMACRO GET-SI (STRING)
	  `(LAMBDA (X Y) (,(intern (to-string string) 'SI) X Y)))
    For example, assuming that (LAMBDA ...) => #'(LAMBDA ...),
      (SEE-IF-NEW-ATOM-LIST 35)   =>   (35  #'(LAMBDA (X Y) (LIST X Y)))

The latter (bletcherous) example shows a case where a user ** perhaps **
did not intend to use (GET-SI...) anywhere but in function context --
he simply put in some buggy code.   The lambda-macro mechanism would require
a user to state unequivocally that a macro-defintion in precisely one
context;  I'd rather not be encumbered with separate-but-parallel machinery
and documentation -- why not have this sort of restriction on macro usage
contexts be some kind of optional declaration?

Yet my second suggestion involves a form which could not at all be interpreted
in "expression-value" context:
    SECOND SUGGESTION
	Let FMACRO have special significance for macroexpansion in the context
     ((FMACRO . <fun>) . . .), such that this form is a macro call which is
     expanded by calling <fun> on the whole form.
Thus (LIST 3 (FMACRO . <fun>)) would cause an error.  I believe this 
restriction is more akin to that which prevents MACROs from working
with APPLY.

∂30-Jan-82  1446	Jon L White <JONL at MIT-MC> 	The format ((MACRO . f) ...)  
Date: 30 January 1982 17:39-EST
From: Jon L White <JONL at MIT-MC>
Subject: The format ((MACRO . f) ...)
To: common-lisp at SU-AI
cc: LISP-FORUM at MIT-MC


HIC has pointed out that the LISPM interpreter already treats the
format ((MACRO . f) ...) according to my "second suggestion" for
((FMACRO . f) ..);  although I couldn't find this noted in the current
manual, it does work.   I'd be just as happy with ((MACRO . f) ...)  -- my 
only consideration was to avoid a perhaps already used format.  Although the 
LISPM compiler currently barfs on this format, I believe there will be a 
change soon?

The issue of parallel macro formats -- lambda-macros versus
only context-free macros -- is quite independent; although I
have a preference, I'd be happy with either one.

∂30-Jan-82  1742	Fahlman at CMU-20C 	Re: MVlet      
Date: 30 Jan 1982 2039-EST
From: Fahlman at CMU-20C
Subject: Re: MVlet    
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 30-Jan-82 1651-EST


But why choose a form that is hard to implement well and that will
therefore be implemented poorly over one that is easy to implement well?
If we are going to CONS, we may as well throw the MV stuff out
altogether.  Even if implementation were not a problem, I would prefer
the simple syntax.  Does anyone else out there share RPG's view that
the alleged uniformity of the hairy syntax justifies the hair?
-- Scott
-------

∂30-Jan-82  1807	RPG  	MVlet    
To:   common-lisp at SU-AI  
1. What is that hard to implement about the MVlet thing that is not
already swamped by the difficulty of having n values on the stack
as you return and throw, and is also largely subsumed by the theory
of function entry?

2. To get any variable number of values back now you have to CONS anyway,
so implementing it `poorly' for the user, but with 
a uniform syntax for all, is better than the user implementing
it poorly himself over and over.

3. If efficiency of the implementation is the issue, and if the
simple cases admit efficiency in the old syntax, the same simple 
cases admit efficiency in the proposed syntax.

4. Here's what happens when a function is called:
	You have a description of the variables and how the
	values that you get will be bound to them depending on how many you get.

  Here's what happens when a function with multiple values returns to
a MVlet:
	You have a description of the variables and how the
	values that you get will be bound to them depending on how many you get.

Because the naive user will think these descriptions are similar, he will expect
that the syntax to deal with them is similar.

∂30-Jan-82  1935	Guy.Steele at CMU-10A 	Forwarded message
Date: 30 January 1982 2231-EST (Saturday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Forwarded message
CC: feinberg at CMU-20C
Message-Id: <30Jan82 223157 GS70@CMU-10A>


- - - - Begin forwarded message - - - -
Date: 30 January 1982  21:43-EST (Saturday)
From: FEINBERG at CMU-20C
To:   Guy.Steele at CMUA
Subject: Giving in to Maclisp
Via:     CMU-20C; 30 Jan 1982 2149-EST

Howdy!
	I was looking through Decisions.Press and I came upon a 
little section, which I was surprised to see:


        Adopt functions parallel to GETF, PUTF, and REMF, to be
        called GETPR, PUTPR, and REMPR, which operate on symbols.
        These are analogous to GET, PUTPROP, and REMPROP of
        MACLISP, but the arguments to PUTPR are in corrected order.
        (It was agreed that GETPROP, PUTPROP, and REMPROP would be
        better names, but that these should not be used to minimize
        compatibility problems.)

Are we really going to give all the good names away to Maclisp in the
name of "compatibility"?  Compatibility in what way? Is it not clear
that we will have to do extensive modifications to Maclisp to get
Common Lisp running in it anyway? Is it not also clear that Maclisp
programs will require extensive transformation to run in Common
Lisp? Didn't everyone agree that coming up with a clean language,
even at the expense of compatibility, was most important? I think it
is crucial that we break away from Maclisp braindamage, and not let
it steal good names in the process.  PUTPR is pretty meaningless,
whereas PUTPROP is far more clear.  

						--Chiron
- - - - End forwarded message - - - -

∂30-Jan-82  1952	Fahlman at CMU-20C 	Re: MVlet      
Date: 30 Jan 1982 2244-EST
From: Fahlman at CMU-20C
Subject: Re: MVlet    
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 30-Jan-82 2107-EST


    1. What is so hard to implement about the MVlet thing that is not
    already swamped by the difficulty of having n values on the stack
    as you return and throw, and is also largely subsumed by the theory
    of function entry?

Function calling with hairy lambda syntax was an incredible pain to
implement decently, but was worth it.  Having multiple values on the
stack was also a pain to implement, but was also (just barely) worth it.
The proposed M-V-CALL just splices together these two moby pieces of
machinery, so is relatively painless.  In the implementations I am
doing, at least, the proposed lambda-list syntax for the other MV forms
will require a third moby chunk of machinery since it has to do what a
function call does, but it cannot be implemented as a function call
since it differs slightly.
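For concreteness, here is the splicing effect being described, with TWO-VALUES
standing in for any hypothetical function that returns two values (a sketch of
the proposal, not tested code):

    (M-V-CALL #'LIST (TWO-VALUES 1 2) (TWO-VALUES 3 4))
    ;; every value of every argument form becomes an argument,
    ;; as if (LIST 1 2 3 4) had been called => (1 2 3 4)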

    2. To get any variable number of values back now you have to CONS anyway,
    so implementing it `poorly' for the user, but with
    a uniform syntax for all, is better than the user implementing
    it poorly himself over and over.

Neither the simple MV forms nor M-V-CALL would cons in my
implementations, except in the case that the functional arg to M-V-CALL
takes a rest arg and there is at least one rest value passed to it.  To
go through M-V-LIST routinely would cons much more, and would make the
multiple value mechanism totally worthless.

    3. If efficiency of the implementation is the issue, and if the
    simple cases admit efficiency in the old syntax, the same simple 
    cases admit efficiency in the proposed syntax.

Yup, it can be implemented efficiently.  My objection is that it's a lot
of extra work (I figure it would take me a full week) and would make the
language uglier as well (in the eye of this beholder).

    4. Here's what happens when a function is called:
    	You have a description of the variables and how the
    	values that you get will be bound to them depending on how many
        you get.

      Here's what happens when a function with multiple values returns to
      a MVlet:
    	You have a description of the variables and how the
    	values that you get will be bound to them depending
        on how many you get.

Here's what really happens:

You know exactly how many values the called form is going to return and
what each value is.  Some of these you want, some you don't.  You
arrange to catch and bind those that are of interest, ignoring the rest.
Defaults and rest args simply aren't meaningful if you know how many
values are coming back.
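In other words, the simple syntax just names the variables of interest.  A
sketch, using the Lisp Machine's MULTIPLE-VALUE-BIND as a stand-in for whatever
the Common Lisp form ends up being called, and a hypothetical two-valued DIVIDE:

    (MULTIPLE-VALUE-BIND (Q R) (DIVIDE X Y)     ; Q and R catch the two values of interest
      ...)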

In the rare case of a called form that is returning an unpredictable
number of args (the case that RPG erroneously takes as typical), you use
M-V-CALL and get the full lambda binding machinery, or you use M-V-LIST
and grovel the args yourself, or you let the called form return a list
in the first place.  I would guess that such unpredictable cases occur
in less than 1% of all multiple-value calls, and the above-listed
mechanisms handle that 1% quite well.

OK, we need to settle this.  If most of the rest of you share RPG's
taste in this, I will shut up and do the extra work to implement the
lambda forms, rather than walk out.  If RPG is alone or nearly alone in
his view of what is tasteful, I would hope that he would give in
gracefully.  I assume that punting multiples altogether or limiting them
to two values would please no one.

-- Scott
-------

∂30-Jan-82  2002	Fahlman at CMU-20C 	GETPR
Date: 30 Jan 1982 2256-EST
From: Fahlman at CMU-20C
Subject: GETPR
To: feinberg at CMU-20C
cc: common-lisp at SU-AI


I think that Feinberg underestimates the value of retaining Maclisp
compatibility in commonly-used functions, other things being equal.

On the other hand, I agree that GETPR and friends are pretty ugly.  If I
understand the proposal, GETPR is identical to the present GET, and
REMPR is identical to REMPROP.  Only PUTPR is different.  How about
going with GET, REMPROP, and PUT in new code, where PUT is like PUTPROP,
but with the new argument order?  Then PUTPROP could be phased out
gradually, with a minimum of hassle.  (Instead of PUT we could use
SETPROP, but I like PUT better.)
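To make the argument-order point concrete (the new order for PUT is assumed
here to parallel GET, i.e. symbol, indicator, value):

    (GET 'FOO 'COLOR)               ; symbol, indicator -- unchanged
    (PUT 'FOO 'COLOR 'RED)          ; assumed new order: symbol, indicator, value
    (PUTPROP 'FOO 'RED 'COLOR)      ; old Maclisp order: symbol, value, indicator
    (REMPROP 'FOO 'COLOR)           ; unchanged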

-- Scott
-------

∂30-Jan-82  2201	Richard M. Stallman <RMS at MIT-AI>
Date: 31 January 1982 00:57-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I vote for GET and PUT rather than GETPR and PUTPR.

Fahlman is not alone in thinking that it is cleaner not to
have M-V forms that contain &-keywords.

∂31-Jan-82  1116	Daniel L. Weinreb <dlw at MIT-AI> 	GETPR
Date: Sunday, 31 January 1982, 14:15-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: GETPR
To: Fahlman at CMU-20C, feinberg at CMU-20C
Cc: common-lisp at SU-AI

Would you please go back and read the message I sent a little while ago?
I believe that it makes more sense to FIRST define a policy about Maclisp
compatibility and THEN make the specific decisions based on that
proposal.  I don't want to waste time thinking about the GET thing before
we have such a policy.

∂01-Feb-82  0752	Jon L White <JONL at MIT-MC> 	Incredible co-incidence about the format ((MACRO . f) ...)  
Date: 1 February 1982 10:47-EST
From: Jon L White <JONL at MIT-MC>
Subject: Incredible co-incidence about the format ((MACRO . f) ...)
To: common-lisp at SU-AI
cc: LISP-FORUM at MIT-MC


One of my previous messages seemed to imply that ((MACRO . f) ...)
on the LISPM fulfills the intent of my second suggestion -- apparently
there is a completely unforeseen consequence of the fact that
   (FSYMEVAL 'FOO) => (MACRO . <foofun>)
when FOO is defined as a macro, such that the interpreter "makes it work".
However, MACROEXPAND knows nothing about this format, which is probably
why the compiler can't handle it; also such action isn't documented
anywhere.
 
Thus I believe it to be merely an accidental co-incidence that the
interpreter does anything at all meaningful with this format.   My
"second suggestion" now is to institutionalize this "accident"; it
certainly would make it easier to experiment with a pseudo-functional
programming style, and it obviously hasn't been used for any other
meaning.
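A sketch of the format in question, with a made-up expander (the behavior shown
is what this thread describes: the expander gets the entire form, and its
result replaces that form):

    (DEFUN ADD1-ALL (FORM)                      ; FORM is the whole form, below
      `(LIST ,@(MAPCAR #'(LAMBDA (X) `(1+ ,X)) (CDR FORM))))

    ((MACRO . ADD1-ALL) 1 2 3)                  ; expands into (LIST (1+ 1) (1+ 2) (1+ 3)) => (2 3 4)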

∂01-Feb-82  0939	HIC at SCRC-TENEX 	Incredible co-incidence about the format ((MACRO . f) ...)   
Date: Monday, 1 February 1982  11:38-EST
From: HIC at SCRC-TENEX
To:   Jon L White <JONL at MIT-MC>
Cc:   common-lisp at SU-AI, LISP-FORUM at MIT-MC
Subject: Incredible co-incidence about the format ((MACRO . f) ...)

    Date: Monday, 1 February 1982  10:47-EST
    From: Jon L White <JONL at MIT-MC>
    To:   common-lisp at SU-AI
    cc:   LISP-FORUM at MIT-MC
    Re:   Incredible co-incidence about the format ((MACRO . f) ...)

    One of my previous messages seemed to imply that ((MACRO . f) ...)
    on the LISPM fulfills the intent of my second suggestion -- apparently
    there is a completely unforeseen consequence of the fact that
       (FSYMEVAL 'FOO) => (MACRO . <foofun>)
    when FOO is defined as a macro, such that the interpreter "makes it work".
    However, MACROEXPAND knows nothing about this format, which is probably
    why the compiler can't handle it; also such action isn't documented
    anywhere.

Of course MACROEXPAND knows about it (but not the version you looked
at).  I discovered this BUG (yes, BUG, I admit it, the LISPM had a
bug) in about 2 minutes of testing this feature, after I told the
world I thought it would work, and fixed it in about another two
minutes.
     
    Thus I believe it to be merely an accidental co-incidence that the
    interpreter does anything at all meaningful with this format.   My
    "second suggestion" now is to institutionalize this "accident"; it
    certainly would make it easier to experiment with a pseudo-functional
    programming style, and it obviously hasn't been used for any other
    meaning.

JONL, you seem very eager to make this be your proposal -- so be it.
I don't care.  However, it works on the Lisp Machine (it was a BUG
when it didn't work) to have (MACRO . foo) in the CAR of a form, and
thus it works to have a lambda macro expand into this.

Of course, Lambda Macros are the right way to experiment with the
functional programming style -- I think it's wrong to rely on seeing
the whole form (I almost KNOW it's wrong...).  In any case, the Lisp
Machine now has these.

∂01-Feb-82  1014	Kim.fateman at Berkeley 	GETPR and compatibility  
Date: 1 Feb 1982 10:11:13-PST
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: GETPR and compatibility

There are (at least) two kinds of compatibility worth comparing.

1. One, which I believe is very hard to do,
probably not worthwhile, and probably not
in the line of CL, is the kind which
would allow one to take an arbitrary maclisp (say) file, read it into
a CL implementation, and run it, without ever even telling the CL
system, hey, this file is maclisp.  And when you prettyprint or debug one of
those functions, it looks pretty much like what you read in, and did
not suffer "macro←replacement←itis".

2. The second type is to put in the file, or establish somehow,
#.(enter maclisp←mode)  ;; or whatever, followed by 
<random maclisp stuff>
#.(enter common←lisp←mode)  ;; etc.

The reader/evaluator would know about maclisp. There
are (at least) two ways of handling this 
  a:  any maclisp construct (e.g. get) would be macro-replaced by
the corresponding CL thing (e.g. getprop or whatever); arguments would
be reordered as necessary.  I think transor does this, though generally
in the direction non-interlisp ==> interlisp.  The original maclisp
would be hard to examine from within CL, since it was destroyed on read-in
(by read, eval or whatever made the changes). (Examination by looking
at the file or some verbatim copy would be possible).  This makes
debugging in native maclisp hard.
  b: wrap around each uniquely maclisp construction (perhaps invisibly) 
(evaluate←as←maclisp  <whatever>).  This would preserve prettyprinting,
and other things.  Functions which behave identically would presumably
not need such a wrapper, though interactions would be hard to manage.

I think 2a is what makes most sense, and is how Franz lisp 
handles some things which are, for example, in interlisp, but not in Franz.
The presumption is that you would take an interlisp (or maclisp)
file and translate it into CL, and at that point abandon the original
dialect.  In view of this, re-using the names seems quite possible,
once the conversion is done.
  In point of fact, what some people may do is handle CL this way.
That is, translate it into  another dialect, which, for whatever
reason, seems more appropriate.  Thus, an Xlisp chauvinist
might simply write an Xlispifier for CL. The Xlispifier for CL
would be written in Xlisp, and consist of the translation package
and (probably) a support package of CL functions.  Depending on
whether you are in CL-reading-mode or XL-reading-mode, you would
get one or the other "getprop".
  Are such "implementations of CL"  "correct"?  Come to think of
it, how would one determine if one is looking at an implementation
of CL?

∂01-Feb-82  1034	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	a proposal about compatibility 
Date:  1 Feb 1982 1326-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: a proposal about compatibility
To: common-lisp at SU-AI

I would like to propose that CL be a dialect of Lisp.  A reasonable
definition of Lisp seems to be the following:
  - all functions defined in the "Lisp 1.5 Programmer's Manual",
	McCarthy et al., 1962, other than those that are system- or
	implementation-dependent 
  - all functions on whose definitions Maclisp and Interlisp agree
I propose that CL should not redefine any names from these two sets,
except in ways that are upwards-compatible.
-------

∂01-Feb-82  1039	Daniel L. Weinreb <DLW at MIT-AI> 	Re: MVLet      
Date: 1 February 1982 13:32-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: Re: MVLet    
To: common-lisp at SU-AI

    Regarding return of multiple values: "...their lack has been a traditional
    weakness in Lisp..."  What other languages have this feature?  Many have
    call-by-reference which allows essentially the same functionality, but I
    don't know of any which have multiple value returns in anything like the
    Common Lisp sense.
Many of them have call-by-reference, which allows essentially the same
functionality.  Indeed, few of them have multiple value returns in the
Lisp sense, although the general idea is around, and was included in at
least some of the proposals for "DOD-1" (it's sometimes called "val out"
parameters.)  Lisp is neither call-by-value nor call-by-reference exactly,
so a direct comparison is difficult.  My point was that there is a
pretty good way to return many things in the call-by-reference paradigm,
it is used to good advantage by Pascal and PL/1 programs, and Lisp
programmers who want to do analogous things have traditionally been up
the creek.

    We may feel that it is a useful enough facility to keep in spite of all
    this, but it's probably too much to hope to "do it right".
When we added multiple values to the Lisp Machine years ago, we decided that
we couldn't "do it right", but it was a useful enough facility to keep in
spite of all this.  I still think so, and it applies to Common Lisp for the
same reasons.

∂01-Feb-82  2315	Earl A. Killian <EAK at MIT-MC> 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs   
Date: 1 February 1982 19:09-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
To: MOON at SCRC-TENEX
cc: common-lisp at SU-AI

I don't want SUBSTs in Common Lisp, I want the real thing, i.e.
inline functions.  They can be implemented easily in any
implementation by replacing the function name with its lambda
expression (this isn't quite true, because of free variables, but
that's not really that hard to deal with in a compiler).  Now the
issue is simply efficiency.  Since Common Lisp has routinely
chosen cleanliness when efficiency can be dealt with by the
compiler (as it is in the S-1 compiler), then I see no reason to
have ugly SUBSTs.
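A sketch of the naive expansion being described, assuming some INLINE
declaration mechanism and ignoring the free-variable caveat:

    (DEFUN SQUARE (X) (* X X))          ; declared inline, by whatever means
    (SQUARE (F Y))                      ; could then be compiled as if written:
    ((LAMBDA (X) (* X X)) (F Y))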

∂01-Feb-82  2315	FEINBERG at CMU-20C 	Compatibility With Maclisp   
Date: 1 February 1982  16:35-EST (Monday)
From: FEINBERG at CMU-20C
To:   Daniel L. Weinreb <dlw at MIT-AI>
Cc:   common-lisp at SU-AI, Fahlman at CMU-20C
Subject: Compatibility With Maclisp

Howdy!
	I agree with you, we must have a consistent policy concerning
maintaining compatibility with Maclisp.  I propose that Common Lisp
learn from the mistakes of Maclisp, not repeat them.  This policy
means that Common Lisp is free to use clear and meaningful names for
its functions, even if they conflict with Maclisp function names.
Yes, some names must be kept for historical purposes (CAR, CDR and
CONS to name a few), but my view of Common Lisp is that it is in fact
a new language, and should not be constrained to live in the #+MACLISP
world.  I think if Common Lisp software becomes useful enough, PDP-10
people will either make a Common Lisp implementation, write a
mechanical translator, or retrofit Maclisp to run Common
Lisp.  Common Lisp should either be upward compatible with Maclisp or
compatibility should take a back seat to a good language.  I think
Common Lisp has justifiably moved far enough away from Maclisp that
the former can no longer be accomplished, so the latter is the only
reasonable choice.  Being half upward compatible only creates more
confusion.

∂01-Feb-82  2319	Earl A. Killian <EAK at MIT-MC> 	GET/PUT names    
Date: 1 February 1982 19:32-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  GET/PUT names
To: common-lisp at SU-AI

I don't like the name GET for property lists.  GET is a verb, and
therefore doesn't sound very applicative to me.  I prefer Lisp
function names to refer to what they do, not how they do it.
Thus I'd like something like PROPERTY-VALUE, PROPERTY, or just
PROP (depending on how important a short name is) instead of GET.
PUTPROP would be SET-PROPERTY-VALUE, SET-PROPERTY, or SET-PROP,
though I'd personally use SETF instead:
	(SETF (PROP S 'X) Y)

∂01-Feb-82  2319	Howard I. Cannon <HIC at MIT-MC> 	The right way   
Date: 1 February 1982 20:13-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  The right way
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

    Date: 1 February 1982 1650-EST (Monday)
    From: Guy.Steele at CMU-10A
    To:   HIC at MIT-AI
    cc:   common-lisp at SU-AI
    Re:   The right way

    I think I take slight exception at the remark

        Of course, Lambda Macros are the right way to experiment with the
        functional programming style...

    It may be a right way, but surely not the only one.  It seems to me
    that actually using functions (rather than macros) also leads to a
    functional programming style.  Lambda macros may be faster in some
    implementations for some purposes.  However, they do not fulfill all
    purposes (as has already been noted: (MAPCAR (FPOSITION ...) ...)).

Sigh...it's so easy to be misinterpreted in mail.  Of course, that meant
"Of these two approaches,..."  I'm sorry I wasn't explicit enough.

However, now it's my turn to take "slight exception" (which wasn't so
slight on your part that you didn't bother to send a note):

Have we accepted the Scheme approach of LAMBDA as a "self-evaling" form?
If not, then I don't see why you expect (MAPCAR (FPOSITION ...) ...)
to work where (MAPCAR (LAMBDA ...) ...) wouldn't.  Actually, that's
part of the point of Lambda macros -- they work nicely when flagged
by #'.  If you want functions called, then have the lambda macro
turn into a function call.  I think writing #' is a useful marker and
serves to avoid other crocks in the implementation (e.g. evaling the
car of a form, and using the result as the function.  I thought we
had basically punted that idea a while ago.)

If, however, we do accept (LAMBDA ...) as a valid form that self-evaluates 
(or whatever), then I might propose changing lambda macros to be called
in normal functional position, or just go to the scheme of not distinguishing
between lambda and regular macros.

∂01-Feb-82  2321	Jon L White <JONL at MIT-MC> 	MacLISP name compatibility, and return values of update functions
Date: 1 February 1982 16:26-EST
From: Jon L White <JONL at MIT-MC>
Subject: MacLISP name compatibility, and return values of update functions
To: common-lisp at SU-AI

	
[I meant to CC this to common-lisp earlier -- was just sent to Weinreb.]

    Date: Sunday, 31 January 1982, 14:15-EST
    From: Daniel L. Weinreb <dlw at MIT-AI>
    To: Fahlman at CMU-20C, feinberg at CMU-20C
    Would you please go back and read the message I sent a little while ago?
    I believe that it makes more sense to FIRST define a policy about Maclisp
    compatibility and THEN make the specific decisions based on that
    proposal. . . 
Uh, what msg -- I've looked through my mail file for a modest distance, and
don't seem to find anything in your msgs to common-lisp that this might refer 
to.  I thought we had the general notion of not usurping MacLISP names, unless
EXTREMELY good cause could be shown.  For example,
 1) (good cause) The names for type-specific (and "modular") arithmetic 
    were usurped by LISPM/SPICE-LISP for the generic arithmetic  (i.e., 
    "+" instead of "PLUS" for generic, and nothing for modular-fixnum). 
    Although I don't like this incompatibility, I can see the point about 
    using the obvious name for the case that will appear literally tens of
    thousands of times in our code.
 2) (bad cause) LISPM "PRINT" returns a gratuitously-incompatible value.
    There is discussion on this point, with my observation that when it was
    first implemented very few LISPM people were aware of the 1975 change
    to MacLISP (in fact, probably only Ira Goldstein noticed it at all!)
    Yet no one has offered any estimate of the magnitude of the effects of 
    leaving undefined the value of side-effecting and/or updating functions;  
    presumably SETQ would have a defined value, and RPLACA/D also for 
    backwards compatibility, but what about SETF?
Actually the SETF question introduces the ambiguity of which of the
two possible values to return.  Take for example VSET:  Should (VSET V I X) 
return V, by analogy with RPLACA, or should it return X by analogy with SETQ?
Whatever is decided for update functions in general affects SETF in some 
possibly conflicting way.  For this reason alone, RMS's suggestion to have 
SETF be the only updator (except for SETQ and RPLACA/RPLACD ??) makes some 
sense; presumably then we could afford to leave the value of SETF undefined.

∂01-Feb-82  2322	Jon L White <JONL at MIT-MC> 	MVLet hair, and RPG's suggestion   
Date: 1 February 1982 16:36-EST
From: Jon L White <JONL at MIT-MC>
Subject: MVLet hair, and RPG's suggestion
To: common-lisp at SU-AI

    Date: 19 Jan 1982 1551-PST
    From: Dick Gabriel <RPG at SU-AI>
    To:   common-lisp at SU-AI  
    I would like to make the following suggestion regarding the
    strategy for designing Common Lisp. . . .
    We should separate the kernel from the Lisp based portions of the system
    and design the kernel first. Lambda-grovelling, multiple values,
    and basic data structures seem kernel.
    The reason that we should do this is so that the many man-years of effort
    to implement a Common Lisp can be done in parallel with the design of
    less critical things. 
I'm sure it will be impossible to agree completely on a "kernel", but
some approach like this *must* be taken, or there'll never be any code
written in Common-Lisp at all, much less the code which implements the
various features.  Regarding hairy forms of Multiple-value things, 
I believe I voted to have both forms, because the current LISPM set
is generally useful, even if not completely parallel with Multiple-argument 
syntax; also it is small enough and useful enough to "put it in right now"
and strive for the hairy versions at a later time.
  Couldn't we go on record at least as favoring the style which permits
the duality of concept (i.e., whatever syntax works for receiving multiple
arguments also works for receiving multiple values), but noting that
we can't guarantee anything more than the several LISPM functions for
the next three years?  I'd sure hate to see this become an eclectic
kitchen sink merely because the 5-10 people who will be involved in
Common-Lisp compiler-writing didn't want to take the day or so apiece
over the next three years to write the value side of the value/argument
receiving code.

∂02-Feb-82  0002	Guy.Steele at CMU-10A 	The right way    
Date:  1 February 1982 1650-EST (Monday)
From: Guy.Steele at CMU-10A
To: HIC at MIT-AI
Subject:  The right way
CC: common-lisp at SU-AI
In-Reply-To:  HIC@SCRC-TENEX's message of 1 Feb 82 11:38-EST
Message-Id: <01Feb82 165054 GS70@CMU-10A>

I think I take slight exception at the remark

    Of course, Lambda Macros are the right way to experiment with the
    functional programming style...

It may be a right way, but surely not the only one.  It seems to me
that actually using functions (rather than macros) also leads to a
functional programming style.  Lambda macros may be faster in some
implementations for some purposes.  However, they do not fulfill all
purposes (as has already been noted: (MAPCAR (FPOSITION ...) ...)).

∂02-Feb-82  0110	Richard M. Stallman <RMS at MIT-AI>
Date: 1 February 1982 17:51-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

It seems that the proposal to use GET and PUT for property functions
is leading to a discussion of whether it is ok to reuse Maclisp
names with different meanings.

Perhaps that topic does need to be discussed, but there is no such
problem with using GET and PUT instead of GETPR and PUTPR.
GET would be compatible with Maclisp (except for disembodied plists),
and PUT is not used in Maclisp.

Let's not get bogged down in wrangling about the bigger issue
of clean definitions vs compatibility with Maclisp as long as we
can solve the individual issues in ways that meet both goals.

∂02-Feb-82  0116	David A. Moon <Moon at SCRC-TENEX at MIT-AI> 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
Date: Monday, 1 February 1982, 23:54-EST
From: David A. Moon <Moon at SCRC-TENEX at MIT-AI>
Subject: Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
To: Earl A. Killian <EAK at MIT-MC>
Cc: common-lisp at SU-AI
In-reply-to: The message of 1 Feb 82 19:09-EST from Earl A. Killian <EAK at MIT-MC>

    Date: 1 February 1982 19:09-EST
    From: Earl A. Killian <EAK at MIT-MC>
    Subject:  Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
    To: MOON at SCRC-TENEX
    cc: common-lisp at SU-AI

    I don't want SUBSTs in Common Lisp, I want the real thing, ie.
    inline functions...
In the future I will try to remember, when I suggest that something should
exist in Common Lisp, to say explicitly that it should not have bugs in it.

∂02-Feb-82  1005	Daniel L. Weinreb <DLW at MIT-AI>  
Date: 2 February 1982 12:25-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
To: RMS at MIT-AI
cc: common-lisp at SU-AI

While we may not need to decide about Maclisp compatibility policy for the
particular proposal you discussed, we do need to worry about whether, for
example, we must not rename PUTPROP to PUT even if it is upward-compatible
because some of us might think that "CL is not a dialect of Lisp" if we are
that far off; there might be other proposals about Maclisp compatibility
that would affect the proposal you mention regardless of the
upward-compatibility of the proposal.

But what is much more important is that there are other issues that will be
affected strongly by our policy, and if we put this off now then it will be
a long time indeed before we see a coherent and accepted CL definition.  We
don't have forever; if this takes too long we will all get bored and forget
about it.  Furthermore, if we come up with a policy later, we'll have to go
back and change some earlier decisions, or else decide that the policy
won't really be followed.  I think we have to get this taken care of
immediately.

∂02-Feb-82  1211	Eric Benson <BENSON at UTAH-20> 	Re: MacLISP name compatibility, and return values of update functions   
Date:  2 Feb 1982 1204-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: MacLISP name compatibility, and return values of update functions
To: JONL at MIT-MC, common-lisp at SU-AI
In-Reply-To: Your message of 1-Feb-82 1426-MST

We had a long discussion about SETF here at Utah for our implementation and
decided that RPLACA and RPLACD are really the wrong things to use for this.
Every other SETF-type function returns (depending on how you look at it)
the value of the RHS of the assignment (the second argument) or the updated
value of the LHS (the first argument).  This has been the case in most
languages where the value of an assignment is defined, for variables, array
elements or structure elements.  The correct thing to use for
(SETF (CAR X) Y)
is
(PROGN (RPLACA X Y) (CAR X))
or the equivalent.  It appears that the value of SETF was undefined in
LISPM just because of this one case.  Perhaps it is just more apparent when
one uses Algol syntax, i.e. CAR(X) := CDR(Y) := Z, that this is the
obvious way to define the value of SETF.
-------

∂02-Feb-82  1304	FEINBERG at CMU-20C 	a proposal about compatibility    
Date: 2 February 1982  15:59-EST (Tuesday)
From: FEINBERG at CMU-20C
To:   HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility), DLW at AI
Cc:   common-lisp at SU-AI
Subject: a proposal about compatibility

Howdy!
	Could you provide some rationale for your proposal? Are you
claiming that it is necessary to include Lisp 1.5 and the intersection
of Maclisp and Interlisp in Common Lisp before it can be truly called
a dialect of Lisp? 

	I agree with DLW, it is rather important to settle the issue
of Maclisp compatibility soon.

∂02-Feb-82  1321	Masinter at PARC-MAXC 	Re: MacLISP name compatibility, and return values of update functions   
Date: 2 Feb 1982 13:20 PST
From: Masinter at PARC-MAXC
Subject: Re: MacLISP name compatibility, and return values of update functions
In-reply-to: BENSON's message of 2 Feb 1982 1204-MST
To: common-lisp at SU-AI

The Interlisp equivalent of SETF, "change", is defined in that way. It turns out
that the translation of (change (CAR X) Y) is (CAR (RPLACA X Y)). The
compiler normally optimizes out extra CAR/CDR's when not in value context.
RPLACA is retained for compatibility.


Larry

∂02-Feb-82  1337	Masinter at PARC-MAXC 	SUBST vs INLINE, consistent compilation   
Date: 2 Feb 1982 13:34 PST
From: Masinter at PARC-MAXC
Subject: SUBST vs INLINE, consistent compilation
To: Common-Lisp@SU-AI
cc: Masinter

I think there is some rationale both for SUBST-type macros and for INLINE.

SUBST macros are quite important for cases where the semantics of
lambda-binding is not wanted, e.g., where (use your favorite syntax):

(DEFSUBST SWAP (X Y)
    (SETQ Y (PROG1 X (SETQ X Y))))

This isn't a real example, but the idea is that sometimes a simple substitution
expresses what you want to do more elegantly than the equivalent

(DEFMACRO SWAP X
	`(SETQ ,(CADDR X) (PROG1 ,(CADR X) (SETQ ,(CADR X) ,(CADDR X)))))

These are definitely not doable with inlines. (I am not entirely sure they can be 
correctly implemented with SUBST-macros either.)
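For concreteness, the substitution semantics being appealed to (a sketch;
whether DEFSUBST really behaves this way is exactly what the parenthetical
above questions):

    (SWAP A B)                          ; expands by literal substitution into
    (SETQ B (PROG1 A (SETQ A B)))       ; i.e. it assigns the caller's own variables,
                                        ; which no lambda-binding (inline) version could do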

-----------------

There is a more important issue which is being skirted in these various
discussions, and that is the one of consistent compilation: when is it
necessary to recompile a function in order to preserve the equivalence of
semantics of compiled and interpreted code. There are some simple situations
where it is clear:
	The source for the function changed
	The source for some macros used by the function changed

There are other situations where it is not at all clear:
	The function used a macro which accessed a data structure which
	has changed.

Tracing the actual data structures used by a macro is quite difficult. It is not
at all difficult for subst and inline macros, though, because the expansion of
the macro depends only on the macro-body and the body of the macro
invocation.
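An example of the unclear case (names and details hypothetical):

    (DEFMACRO FIELD-OFFSET (NAME)
      (CDR (ASSQ NAME *FIELD-TABLE*)))  ; the expansion reads a data structure
    (DEFUN GET-SIZE (OBJ)
      (AREF OBJ (FIELD-OFFSET SIZE)))   ; compiled code captures today's offset
    ;; If *FIELD-TABLE* changes later, interpreted GET-SIZE sees the new offset
    ;; but the compiled version does not, and nothing records the dependency.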

I think the important issue for Common Lisp is: what is the policy on consistent
compilation?

Larry

∂02-Feb-82  1417	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: a proposal about compatibility  
Date:  2 Feb 1982 1714-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: a proposal about compatibility
To: FEINBERG at CMU-20C
cc: DLW at MIT-AI, common-lisp at SU-AI
In-Reply-To: Your message of 2-Feb-82 1559-EST

This is a response to a couple of requests to justify my comments.  Based
on one of these, I feel it necessary to say that nothing in this message
(nor in the previous one) should be taken to be sarcasm.  I am trying to
speak as directly as possible.  I find it odd when people take me as
being sarcastic when I start with the assumption that CL should be a
dialect of Lisp, and then give what I think is a fairly conservative
explanation of what I think that should mean.  However once I get into
the mode of looking for sarcasm, I see how easy it is to interpret
things that way.  Almost any of the statements I make below could be
taken as sarcasm.  It depends upon what expression you imagine being on
my face.  The rest of this message was typed with a deadpan expression.

I thought what I said was that if CL used a name in the set I mentioned,
that the use should be consistent with the old use.  I didn't say that
CL should in fact implement all of the old functions, although I would
not be opposed to such a suggestion.  But what I actually said was that
CL shouldn't use the old names to mean different things.

As for justification, consider the following points:
  - now and then we might like to transport code from one major family
	to another, i.e. not just Maclisp to CL, etc., but Interlisp to
	CL.  I realize this wouldn't be possible with code of some
	types, but I think at least some of our users do write what I
	would call "vanilla Lisp", i.e. Lisp that uses mostly common
	functions that they expect to be present in any Lisp system.  I
	admit that such transportation is not going to be easy under any
	circumstance and for that reason will not be all that common,
	but we should not make it more complicated than necessary.
  - I would like to be able to teach students Lisp, and then have them
	be able to use what they learned even if they end up using a
	different implementation.  Again, some reorientation is
	obviously going to be needed when they move to another
	implementation, but it would be nice not to have things that
	look like they ought to be the same, and aren't.  Further, it
	would be helpful for there to be enough similarity that we can
	continue to have textbooks describe Lisp.
  - I find myself having to deal with several dialects.  Of course I am
	probably a bit unusual, in that I am supporting users, rather
	than implementing systems.  Presumably most of the users will
	spend their time working on one system.  But I would like for
	the most common functions to do more or less the same thing
	in all of these systems.
  - Now and then we write papers, journal articles, etc.  It would be
	helpful for these to be readable by people in other Lisp
	communities.
-------

∂02-Feb-82  1539	Richard M. Stallman <RMS at MIT-AI> 	No policy is a good policy  
Date: 2 February 1982 18:22-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: No policy is a good policy
To: Common-lisp at SU-AI

Common Lisp is an attempt to compromise between several goals:
cleanliness, utility, efficiency and compatibility both between
implementations and with Maclisp.  On any given issue, it is usually
possible to find a "right" solution which may meet most of these goals
well and meet the others poorly but tolerably.  Which goals have to be
sacrificed differs from case to case.

For example, issue A may offer a clean, useful and efficient solution
which is incompatible, but in ways that are tolerable.  The other
solutions might be more compatible but worse in general.  Issue B may
offer a fully upward compatible solution which is very useful and fast
when implemented, which we may believe justifies being messy.  If we
are willing to consider each issue separately and sacrifice different
goals on each, the problem is easy.  But if we decide to make a global
choice of how much incompatibility we want, how much cleanliness we
want, etc., then probably whichever way we decide we will be unable to
use both the best solution for A and the best solution for B.  The
language becomes worse because it has been designed dogmatically.

Essentially the effect of having a global policy is to link issues A
and B, which could otherwise be considered separately.  The combined
problem is much harder than either one.  For example, if someone found a new
analogy between ways of designing the sequence function and ways of
designing read syntaxes for sequences, it might quite likely match
feasible designs for one with problematical designs for the other.
Then two problems which are proving enough work to get agreement on
individually would turn into one completely intractable problem.

It is very important to finish Common Lisp reasonably quickly, if the
effort is to be useful.  The study of philosophy of language design is
a worthy field but a difficult one.  There are many more years of
work to be done in it.  If we make solving this field part of the plan
for designing Common Lisp, we will not be finished in time to do the
job that Common Lisp was intended for: to enable users of different
Maclisp descendants to write portable programs.

∂02-Feb-82  1926	DILL at CMU-20C 	upward compatibility   
Date:  2 Feb 1982 2225-EST
From: DILL at CMU-20C
Subject: upward compatibility
To: common-lisp at SU-AI

I believe that compatibility with other lisp dialects should be a
consideration in the design of Common Lisp, but it should absolutely have
less priority than considerations of portability, taste, and efficiency.
It is possible that this won't leave a whole lot of room for upward
compatibility.

If Common Lisp manages to be a high-quality, widely implemented common
language, the user community will end up being much larger than that of
any existing lisp dialect.  Imposing misfeatures on those users because
a much smaller community of users has gotten used to those features
doesn't make sense.

I also don't see why it is more important to maintain compatibility with
Maclisp than with other dialects.
-------

∂02-Feb-82  2148	RPG  	MVLet    
To:   common-lisp at SU-AI  
Scott pointed out to me that the MVCall construct can take
a general LAMBDA expression, complete with hairy LAMBDA list
syntax. Thus one can write:

		(MV-CALL #'(LAMBDA (FOO BAR (:REST ZTESCH)) ...)
			 (BAZOLA))

Which is virtually the same as:

	(MVLET (FOO BAR (:REST ZTESCH)) (BAZOLA) ...)

but the above MVCall syntax strikes me as superior (using LAMBDAs for
LAMBDA-like things).

Therefore, I will go along with Scott's LISPM syntax + MVCALL.
			-rpg-

∂02-Feb-82  2223	Richard M. Stallman <RMS at MIT-AI>
Date: 3 February 1982 01:06-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI, dill at CMU-20C

The reason it is important to be compatible pretty much
with Maclisp is that that means being compatible with the
existing Lisp machine system, and that is very important
to all the Lisp machine users.  And to Lisp machine
system maintainers too.  It is fine if old Maclisp functions
get dropped from the definition of Common Lisp, and replaced
with cleaner ways of doing things: the Lisp machine can implement
the new way while continuing to support the old one, Common Lisp or no.
But making old Maclisp functions do new things that are fundamentally
incompatible will cause a great deal of trouble.

The purpose of the Common Lisp project was to unify Maclisp dialects.
The narrowness of the purpose is all that gives it a chance of success.
It may be an interesting project to design a totally new Lisp dialect,
but you have no chance of getting this many people to agree on a design
if you remove the constraints.

∂02-Feb-82  2337	David A. Moon <MOON at MIT-MC> 	upward compatibility   
Date: 3 February 1982 02:36-EST
From: David A. Moon <MOON at MIT-MC>
Subject: upward compatibility
To: common-lisp at SU-AI

I agree with RMS (for once).  Common Lisp should be made a good language,
but designing "pie in the sky" will simply result in there never being
a Common Lisp.  This is not a case of the Lisp Machine people being
recalcitrant and attempting to impose their own view of the world, but
simply that there is no chance of this large a group agreeing on anything
if there are no constraints.  I think the Lisp Machine people have already
shown far more tolerance and willingness to compromise than anyone would ever
have the right to expect.

∂03-Feb-82  1622	Earl A. Killian <EAK at MIT-MC> 	SUBST vs INLINE, consistent compilation   
Date: 3 February 1982 19:20-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  SUBST vs INLINE, consistent compilation
To: Masinter at PARC-MAXC
cc: Common-Lisp at SU-AI

In Common Lisp the macro definition of SWAP would be the same of
as your SUBST, except for some commas (i.e. defmacro handles
normal argument lists).  I don't think Common Lisp needs subst
as another way of defining macros.  Inline functions are,
however, useful.

∂04-Feb-82  1513	Jon L White <JONL at MIT-MC> 	"exceptions" possibly based on misconception; and EVAL strikes again  
Date: 4 February 1982 18:04-EST
From: Jon L White <JONL at MIT-MC>
Subject: "exceptions" possibly based on misconception; and EVAL strikes again
To: Hic at SCRC-TENEX, Guy.Steele at CMU-10A
cc: common-lisp at SU-AI


The several "exceptions" just taken about implementing functional programming 
may be in part due to a misconception taken from RMS's remark

    Date: 29 January 1982 19:46-EST
    From: Richard M. Stallman <RMS at MIT-AI>
    Subject: Trying to implement FPOSITION with LAMBDA-MACROs.
    . . . 
    The idea of FPOSITION is that ((FPOSITION X Y) MORE ARGS)
    expands into (FPOSITION-INTERNAL X Y MORE ARGS), and . . . 
    In JONL's suggestion, the expander for FPOSITION operates on the
    entire form in which the call to the FPOSITION-list appears, not
    just to the FPOSITION-list.

This isn't right -- in my suggestion, the expander for FPOSITION would 
operate only on (FPOSITION X Y), which *could* then produce something like 
(MACRO . <another-fun>); and it would be  <another-fun>  which would get 
the "entire form in which the call to the FPOSITION-list appears"

HIC is certainly justified in saying that something is wrong, but it looked
to me (and maybe Guy) like he was saying that alternatives to lambda-macros
were wrong.  However, this side-diversion into a misconception has detracted 
from the main part of my "first suggestion", namely to fix the misdesign in 
EVAL whereby it totally evaluates a non-atomic function position before trying
any macro-expansion. 

    Date: 1 February 1982 20:13-EST
    From: Howard I. Cannon <HIC at MIT-MC>
    Subject:  The right way
    To: Guy.Steele at CMU-10A
    . . . 
    If, however, we do accept (LAMBDA ...) as a valid form that self-evaluates 
    (or whatever), then I might propose changing lambda macros to be called
    in normal functional position, or just go to the scheme of not 
    distinguishing between lambda and regular macros.

So how about it?  Regardless of the lambda-macro question, or the style
of functional programming, let EVAL take

   ((MUMBLE ...) A1 ... A2)  into  `(,(macroexpand '(MUMBLE ...)) A1 ... A2)

and try its cycle again.  Only after (macroexpand '(MUMBLE ...)) fails to
produce something discernibly a function would the nefarious "evaluation"
come up for consideration.

[P.S. -- this isn't the old (STATUS PUNT) question -- that only applied to
 forms which had, from the beginning, an atomic-function position.]
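A concrete, made-up instance of what the proposed EVAL behavior buys:

    (DEFMACRO ADDER (N) `(LAMBDA (Z) (+ Z ,N))) ; an ordinary macro
    ((ADDER 3) 4)
    ;; Under the proposal EVAL macroexpands the CAR first, giving
    ;; ((LAMBDA (Z) (+ Z 3)) 4) => 7, instead of "evaluating" (ADDER 3)
    ;; as a form and then trying to apply whatever that returns.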

∂04-Feb-82  2047	Howard I. Cannon <HIC at MIT-MC> 	"exceptions" possibly based on misconception; and EVAL strikes again   
Date: 4 February 1982 23:45-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  "exceptions" possibly based on misconception; and EVAL strikes again
To: JONL at MIT-MC
cc: common-lisp at SU-AI, Guy.Steele at CMU-10A

        If, however, we do accept (LAMBDA ...) as a valid form that self-evaluates 
        (or whatever), then I might propose changing lambda macros to be called
        in normal functional position, or just go to the scheme of not 
        distinguishing between lambda and regular macros.

    So how about it?  Regardless of the lambda-macro question, or the style
    of functional programming, let EVAL take

       ((MUMBLE ...) A1 ... A2)  into  `(,(macroexpand '(MUMBLE ...)) A1 ... A2)

Since, in my first note, I said "If, however, we do accept (LAMBDA ...) as a
valid form that...", and we aren't, I am strenuously against this suggestion.

∂05-Feb-82  2247	Fahlman at CMU-20C 	Maclisp compatibility    
Date:  6 Feb 1982 0141-EST
From: Fahlman at CMU-20C
Subject: Maclisp compatibility
To: common-lisp at SU-AI


I would like to second RMS's views about Maclisp compatibility: there are
many goals to be traded off here, and any rigid set of guidelines is
going to do more harm than good.  Early in the effort the following
general principles were agreed upon by those working on Common Lisp at
the time:

1. Common Lisp will not be a strict superset of Maclisp.  There are some
things that need to be changed, even at the price of incompatibility.
If it comes down to a clear choice between making Common Lisp better
and doing what Maclisp does, we make Common Lisp better.

2. Despite point 1, we should be compatible with Maclisp and Lisp
Machine Lisp unless there is a good reason not to be.  Functions added
or subtracted are relatively innocuous, but incompatible changes to
existing functions should only be made with good reason and after
careful deliberation.  Common Lisp started as a Maclisp derivative, and
we intend to move over much code and many users from the Maclisp
world.  The easier we make that task, the better it is for all of us.

3. If possible, consistent with points 1 and 2, we should not do
anything that screws people moving over from Interlisp.  The same holds
for the lesser-used Lisps, but with correspondingly less emphasis.  I
think that Lisp 1.5 should get no special treatment here: all of its
important features show up in Maclisp, and the ones that have changed or
dropped away have done so for good reason.

-- Scott
-------

∂06-Feb-82  1200	Daniel L. Weinreb <dlw at MIT-AI> 	Maclisp compatibility    
Date: Saturday, 6 February 1982, 14:56-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Maclisp compatibility
To: Fahlman at CMU-20C, common-lisp at SU-AI

Your message is exactly what I wanted to see.  This is just as much of a
policy as I think we need.  I didn't want any more rigid guidelines than
that; I just wanted a set of principles that we all agree upon.

Not everybody on the mailing list seems to agree with your set here.  I
do, by the way, but clearly HEDRICK does not.  I hope the official
referee will figure out what to do about this.  Guy?

∂06-Feb-82  1212	Daniel L. Weinreb <dlw at MIT-AI> 	Return values of SETF    
Date: Saturday, 6 February 1982, 15:12-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Return values of SETF
To: common-lisp at SU-AI

I'm pretty much convinced by Masinter's mail.  SETF should be defined to
return the value that it stores.  SETF is really too important a form to
work in an explicitly undefined method, and compiler optimizations
and/or special-purpose setting functions (that exist only so that SETF
can turn into them) are well worth it to keep SETF from having to have
crummy "undefined" behavior.  (Not having any kind of up-to-date Common
Lisp manual, I have no idea how or if it is currently defined.)

∂06-Feb-82  1232	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
Date: Saturday, 6 February 1982, 15:25-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: MVLet    
To: RPG at SU-AI, common-lisp at SU-AI

I see your point.  I agree; given this insight, I am happy with the Lispm
syntax plus MVCALL.  There is one thing that I'd like to see improved,
if possible.  In the example:

		(MV-CALL #'(LAMBDA (FOO BAR (:REST ZTESCH)) ...)
			 (BAZOLA))

the order of events is that BAZOLA happens first, and the body of the
function happens second.  This has the same problem that
lambda-combinations had; LET was introduced to solve the problem.  If
anyone can figure out something that solves this problem for MV-CALL
without any other ill effects, I'd like to know about it.  One
possibility is to simply switch the order of the two subforms; what do
people think about that?
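The lambda-combination analogy, spelled out (a reminder, not a proposal):

    ((LAMBDA (X) body) init)    ; init appears after the body but is evaluated first
    (LET ((X init)) body)       ; LET restores textual order to match evaluation order
    ;; MV-CALL as proposed has the first shape: the body appears before the
    ;; value-producing form (BAZOLA), which is evaluated first.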

However, I'm not trying to be a troublemaker.  If nobody comes up with a
widely-liked improvement, I will be happy to accept the proposal as it
stands.

∂06-Feb-82  1251	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Maclisp compatibility 
Date:  6 Feb 1982 1547-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Maclisp compatibility
To: dlw at MIT-AI
cc: Fahlman at CMU-20C, common-lisp at SU-AI
In-Reply-To: Your message of 6-Feb-82 1506-EST

No, I think the approach suggested by the folks at CMU is fine.
-------

∂06-Feb-82  1416	Eric Benson <BENSON at UTAH-20> 	Re: Maclisp compatibility  
Date:  6 Feb 1982 1513-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: Maclisp compatibility
To: Fahlman at CMU-20C, common-lisp at SU-AI
In-Reply-To: Your message of 5-Feb-82 2341-MST

"Lisp 1.5 should get no special treatment here: all of its important features
show up in Maclisp, and the ones that have changed or dropped away have done
so for good reason."

I am curious about one feature of Lisp 1.5 (and also Standard Lisp) which was
dropped from Maclisp.  I am referring to the Flag/FlagP property list functions.
I realize that Put(Symbol, Indicator, T) can serve the same function, but I
can't see any good reason why the others should have been dropped.  In an
obvious implementation of property lists Put/Get can use dotted pairs and
Flag/FlagP use atoms, making the property list itself sort of a corrupted
association list.  Maclisp and its descendants seem to use a flat list of
alternating indicators and values.  It isn't clear to me what advantage this
representation gives over the a-list.  Were Flag and FlagP dropped as a
streamlining effort, or what?
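The two representations under discussion, on made-up data:

    ;; flat list of alternating indicators and values (Maclisp style):
    (COLOR RED SIZE 3)
    ;; a-list of (indicator . value) pairs, with bare atoms as flags (Lisp 1.5 style):
    ((COLOR . RED) (SIZE . 3) FLAG1)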
-------

∂06-Feb-82  1429	Howard I. Cannon <HIC at MIT-MC> 	Return values of SETF
Date: 6 February 1982 17:23-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  Return values of SETF
To: common-lisp at SU-AI
cc: dlw at MIT-AI

I strongly agree.  I have always thought it a screw that SETF did not return
a value like SETQ.  It sometimes makes for more compact, readable, and convenient
coding.

∂06-Feb-82  2031	Fahlman at CMU-20C 	Value of SETF  
Date:  6 Feb 1982 2328-EST
From: Fahlman at CMU-20C
Subject: Value of SETF
To: common-lisp at SU-AI


Just for the record, I am also persuaded by Masinter's arguments for
having SETF return the value that it stores, assuming that RPLACA and
RPLACD are the only forms that want to do something else.  It would
cause no particular problems in the Spice implementation to add two new
primitives that are like RPLACA and RPLACD but return the values, and
the additional uniformity would be well worth it.

-- Scott
-------

∂06-Feb-82  2102	Fahlman at CMU-20C 	Re: MVLet      
Date:  6 Feb 1982 2354-EST
From: Fahlman at CMU-20C
Subject: Re: MVLet    
To: dlw at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 6-Feb-82 1536-EST


DLW's suggestion that we switch the order of arguments to M-V-CALL, so
that the function comes after the argument forms, does not look very
attractive if you allow more than one argument form.  This would be the
universally reviled situation in which a single required argument comes
after a rest arg.

As currently proposed, with the function to be called as the first arg,
M-V-CALL exactly parallels the format of FUNCALL.  (The difference, of
course, is that M-V-CALL uses all of the values returned by each of the
argument forms, while FUNCALL accepts only one value from each argument
form.)
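Side by side, with TWO-VALUES standing in for a hypothetical function that
returns two values:

    (FUNCALL  #'LIST (TWO-VALUES 1 2) (TWO-VALUES 3 4))   ; => (1 3)      one value per form
    (M-V-CALL #'LIST (TWO-VALUES 1 2) (TWO-VALUES 3 4))   ; => (1 2 3 4)  all values per form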

-- Scott
-------

∂07-Feb-82  0129	Richard Greenblatt <RG at MIT-AI>  
Date: 7 February 1982 04:26-EST
From: Richard Greenblatt <RG at MIT-AI>
To: common-lisp at SU-AI

Re compatibility, etc
  It's getting really hard to keep track of
where things "officially" stand.   Hopefully,
the grosser of the suggestions that go whizzing
by on this mailing list are getting flushed,
but I have this uneasy feeling that one
of these days I will turn around and find
there has been "agreement" to change something
really fundamental like EQ.
  Somewhere there should be a clear and current summary
of "Proposed Changes which would change
the world."  What I'm talking about here are cases
where large bodies of code can reasonably be
expected to be affected, or changes or extensions to
time-honored central concepts like MEMBER or LAMBDA.
  It would be nice to have summaries from time to time
on the new frobs (like this MV-LET thing) that are proposed
but that is somewhat less urgent.

∂07-Feb-82  0851	Fahlman at CMU-20C  
Date:  7 Feb 1982 1149-EST
From: Fahlman at CMU-20C
To: RG at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 7-Feb-82 0426-EST


I feel sure that no really incompatible changes will become "official"
without another round of explicit proposal and feedback, though the
group has grown so large and diverse that we can no longer expect
unanimity on all issues -- we will have to be content with the emergence
of substantial consensus, especially among those people representing
major implementation efforts.  Of course, there is a weaker form of
"acceptance" in which a proposal seems to have been accepted by all
parties and therefore becomes the current working hypothesis, pending an
official round of feedback.

-- Scott
-------

∂07-Feb-82  2234	David A. Moon <Moon at MIT-MC> 	Flags in property lists
Date: Monday, 8 February 1982, 01:31-EST
From: David A. Moon <Moon at MIT-MC>
Subject: Flags in property lists
To: Eric Benson <BENSON at UTAH-20>
Cc: common-lisp at SU-AI

Flat property lists can be stored more efficiently than pair lists
in Lisp with cdr-coding.  That isn't why Maclisp dropped them, of
course; probably Maclisp dropped them because they are a crock and
because they make GET a little slower, which slows down the
interpreter in a system like Maclisp that stores function definitions
on the property list.

∂08-Feb-82  0749	Daniel L. Weinreb <DLW at MIT-MC> 	mv-call   
Date: 8 February 1982 10:48-EST
From: Daniel L. Weinreb <DLW at MIT-MC>
Subject: mv-call
To: common-lisp at SU-AI

I guess my real disagreement with mv-call is that I don't like to see it
used with more than one form.  I have explained before that the mv-call
with many forms has the effect of concatenating together the returned
values of many forms, which is something that I cannot possibly imagine
wanting to do, given the way we use multiple values in practice today.  (I
CAN see it as useful in a completely different programming style that is so
far unexplored, but this is a standardization effort, not a language
experiment, and so I don't think that's relevant.)  This was my original
objection to mv-call.

RPG's message about mv-call shows how you can use it with only one form to
get the effect of the new-style lambda-binding multiple-value forms, and
that looked attractive.  But I still don't like the mv-call form when used
with more than one form.

I do not for one moment buy the "analogy with funcall" argument.  I think
of funcall as a function.  It takes arguments and does something with them,
namely, apply the first to the rest.  mv-call is most certainly not a
function: it is a special form.  I think that in all important ways,
what it does is different in kind and spirit from funcall.  Now, I realize
that this is a matter of personal philosophy, and you may simply not feel
this way.

Anyway, I still don't want to make trouble.  So while I'd prefer having
mv-call only work with one form, and then to have the order of its subforms
reversed, I'll go along with the existing proposal if nobody supports me.

∂08-Feb-82  0752	Daniel L. Weinreb <DLW at MIT-MC>  
Date: 8 February 1982 10:51-EST
From: Daniel L. Weinreb <DLW at MIT-MC>
To: common-lisp at SU-AI

I agree with RG, even after hearing Scott's reply.  I would like to
see, in the next manual, a section prominently placed that summarizes
fundamental incompatibilities with Maclisp and changes in philosophy,
especially those that are not things that are already in Zetalisp.
For those people who have not been following Common Lisp closely,
and even for people like me who are following sort of closely, it would
be extremely valuable to be able to see these things without poring
over the entire manual.

∂08-Feb-82  1256	Guy.Steele at CMU-10A 	Flat property lists   
Date:  8 February 1982 1546-EST (Monday)
From: Guy.Steele at CMU-10A
To: benson at utah-20
Subject:  Flat property lists
CC: common-lisp at SU-AI
Message-Id: <08Feb82 154637 GS70@CMU-10A>

LISP 1.5 used flat property lists (see LISP 1.5 Programmer's Manual,
page 59).  Indeed, Standard LISP is the first I know of that did *not*
use flat property lists.  Whence came this interesting change, after all?
--Guy

∂08-Feb-82  1304	Guy.Steele at CMU-10A 	The "Official" Rules  
Date:  8 February 1982 1559-EST (Monday)
From: Guy.Steele at CMU-10A
To: rg at MIT-AI
Subject:  The "Official" Rules
CC: common-lisp at SU-AI
Message-Id: <08Feb82 155937 GS70@CMU-10A>

Well, I don't know what the official rules are, but my understanding
was that my present job is simply to make the revisions decided
upon in November, and when that revised document comes out we'll have
another round of discussion.  This is not to say that the discussion
going on now is useless.  I am carefully saving it all in a file for
future collation.  It is just that I thought I was not authorized to
make any changes on the basis of current discussion, but only on what
was agreed upon in November.  So everyone should rest assured that a
clearly labelled document like the previous "Discussion" document
will be announced before any other "official" changes are made.

(Meanwhile, I have a great idea for eliminating LAMBDA from the language
by using combinators...)
--Guy

∂08-Feb-82  1410	Eric Benson <BENSON at UTAH-20> 	Re:  Flat property lists   
Date:  8 Feb 1982 1504-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re:  Flat property lists
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI
In-Reply-To: Your message of 8-Feb-82 1346-MST

I think I finally figured out what's going on.  Indeed every Lisp dialect I
can find a manual for in my office describes property lists as flat lists
of alternating indicators and values.  The dialects which do have flags
(Lisp 1.5 and Lisp/360) appear to just throw them in as atoms in the flat
list.  This obviously leads to severe problems in synchronizing the search
down the list!  Perhaps this is the origin of Moon's (unsupported) claim
that flags are a crock.  Flags are not a crock, but the way they were
implemented certainly was!  This must have led to their elimination in more
recent dialects, such as Stanford Lisp 1.6, Maclisp and Interlisp.
Standard Lisp included flags, but recent implementations have used a more
reasonable implementation for them, by making the p-list resemble an a-list
except for the atomic flags.  Even without flags, an a-list seems like a
more obvious implementation to me, since it reflects the structure of the
data.  There is NO cost difference in space or speed (excluding cdr-coding)
between a flat list and an a-list if flags are not included.  The presence
of flags on the list requires a CONSP test for each indicator comparison
which would otherwise be unnecessary.
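
To make the synchronization problem and the CONSP cost concrete, a
small sketch (the property and flag names are invented, and GET-PROP is
only illustrative):

  ;; A flag thrown into a flat list as a bare atom:
  ;;   (COLOR RED SPECIAL SIZE 3)
  ;; A GET that steps two cells at a time falls out of step at SPECIAL
  ;; and would report SIZE as the value of SPECIAL.
  ;;
  ;; The mixed layout of the recent Standard Lisp implementations:
  ;; pairs for properties, bare atoms for flags.
  ;;   ((COLOR . RED) SPECIAL (SIZE . 3))
  (defun get-prop (plist indicator)
    (cond ((null plist) nil)
          ((and (consp (car plist))              ; the extra CONSP test
                (eq (caar plist) indicator))
           (cdar plist))
          (t (get-prop (cdr plist) indicator))))
  ;; Without flags the space is the same either way:
  ;;   (I1 V1 I2 V2)          -- four conses
  ;;   ((I1 . V1) (I2 . V2))  -- two spine conses plus two pairs, four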

Much of the above is speculation.  Lisp historians please step forward and
correct me.
-------

∂08-Feb-82  1424	Don Morrison <Morrison at UTAH-20> 	Re:  Flat property lists
Date:  8 Feb 1982 1519-MST
From: Don Morrison <Morrison at UTAH-20>
Subject: Re:  Flat property lists
To: Guy.Steele at CMU-10A
cc: benson at UTAH-20, common-lisp at SU-AI
In-Reply-To: Your message of 8-Feb-82 1346-MST

Stanford LISP 1.6 (which predates "Standard" LISP) used a-lists instead
of flat property lists.  See the manual by Quam and Diffie
(SAILON 28.7), section 3.1.  

It was also mentioned a message or two ago that even in implementations
without cdr-coding  flat  property  lists are  more  efficient.   Would
someone explain to me why?   If we assume that  cars and cdrs cost  the
same and do not have flags (Stanford LISP 1.6 does not have flags) then
I see no difference in  cost.  And certainly the a-list  implementation
is a bit more perspicuous. There's  got to be a reason besides  inertia
why nearly all LISPs use flat property lists.  But in any case,  Common
LISP has no  business telling  implementers how  to implement  property
lists -- simply explain the semantics of PutProp, GetProp, and RemProp,
or whatever they end up being called, and leave it to the implementer to
use a  flat  list, a-list,  hash-table,  or,  if he  insists,  a  flat,
randomly ordered list of triples.  It should make no difference to  the
Common LISP definition. 
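
As a purely illustrative sketch of that separation (the hash-table
backing, the argument order, and the names are all placeholders for
whatever is finally adopted):

  ;; The user-visible semantics with the representation hidden behind
  ;; the interface -- here a hash table keyed on (symbol . indicator),
  ;; but a flat list or an a-list would do just as well.
  (defvar *props* (make-hash-table :test #'equal))

  (defun putprop (sym val ind)
    (setf (gethash (cons sym ind) *props*) val))

  (defun getprop (sym ind)
    (gethash (cons sym ind) *props*))

  (defun remprop (sym ind)
    (remhash (cons sym ind) *props*))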
-------

∂08-Feb-82  1453	Richard M. Stallman <RMS at MIT-AI>
Date: 8 February 1982 16:56-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

In my opinion, the distinction between functions and special
forms is not very important, and Mv-call really is like funcall.