perm filename COMMON.MSG[COM,LSP]10 blob sn#666816 filedate 1982-07-09 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00222 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00031 00002	∂30-Dec-81  1117	Guy.Steele at CMU-10A 	Text-file versions of DECISIONS and REVISIONS documents  
C00033 00003	∂23-Dec-81  2255	Kim.fateman at Berkeley 	elementary functions
C00036 00004	∂01-Jan-82  1600	Guy.Steele at CMU-10A 	Tasks: A Reminder and Plea 
C00040 00005	∂08-Dec-81  0650	Griss at UTAH-20 (Martin.Griss) 	PSL progress report   
C00049 00006	∂15-Dec-81  0829	Guy.Steele at CMU-10A 	Arrgghhh blag    
C00051 00007	∂18-Dec-81  0918	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	information about Common Lisp implementation  
C00055 00008	∂21-Dec-81  0702	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Extended-addressing Common Lisp 
C00057 00009	∂21-Dec-81  1101	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
C00058 00010	∂21-Dec-81  1512	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Common Lisp
C00061 00011	∂22-Dec-81  0811	Kim.fateman at Berkeley 	various: arithmetic  commonlisp broadcasts  
C00064 00012	∂22-Dec-81  0847	Griss at UTAH-20 (Martin.Griss) 	[Griss (Martin.Griss): Re: Common Lisp]   
C00068 00013	∂23-Dec-81 1306	Guy.Steele at CMU-10A 	Re: various: arithmetic commonlisp broadcasts 
C00076 00014	∂18-Dec-81  1533	Jon L. White <JONL at MIT-XX> 	Extended-addressing Common Lisp   
C00077 00015	∂21-Dec-81  0717	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
C00079 00016	∂22-Dec-81  0827	Griss at UTAH-20 (Martin.Griss) 	Re: various: arithmetic  commonlisp broadcasts
C00081 00017	∂04-Jan-82  1754	Kim.fateman at Berkeley 	numbers in common lisp   
C00090 00018	∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
C00096 00019	∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
C00098 00020	∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
C00100 00021	∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
C00102 00022	∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
C00105 00023	∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
C00108 00024	∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
C00109 00025	∂17-Jan-82  1756	Guy.Steele at CMU-10A 	Sequence functions    
C00112 00026	∂17-Jan-82  2207	Earl A. Killian <EAK at MIT-MC> 	Sequence functions    
C00114 00027	∂18-Jan-82  0235	Richard M. Stallman <RMS at MIT-AI> 	subseq and consing
C00116 00028	∂18-Jan-82  0822	Don Morrison <Morrison at UTAH-20> 	Re: subseq and consing  
C00117 00029	∂02-Jan-82  0908	Griss at UTAH-20 (Martin.Griss) 	Com L  
C00120 00030	∂14-Jan-82  0732	Griss at UTAH-20 (Martin.Griss) 	Common LISP 
C00121 00031	∂14-Jan-82  2032	Jonathan A. Rees <JAR at MIT-MC>   
C00124 00032	∂15-Jan-82  0109	RPG   	Rutgers lisp development project 
C00137 00033	∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
C00143 00034	∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
C00145 00035	∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
C00147 00036	∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
C00149 00037	∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
C00152 00038	∂16-Jan-82  1252	Griss at UTAH-20 (Martin.Griss) 	Kernel for Common LISP    
C00154 00039	∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
C00157 00040	∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
C00158 00041	∂17-Jan-82  0618	Griss at UTAH-20 (Martin.Griss) 	Agenda 
C00161 00042	∂17-Jan-82  1751	Feigenbaum at SUMEX-AIM 	more on Interlisp-VAX    
C00167 00043	∂17-Jan-82  1756	Guy.Steele at CMU-10A 	Sequence functions    
C00170 00044	∂17-Jan-82  2042	Earl A. Killian <EAK at MIT-MC> 	Sequence functions    
C00172 00045	∂18-Jan-82  0235	Richard M. Stallman <RMS at MIT-AI> 	subseq and consing
C00174 00046	∂18-Jan-82  0822	Don Morrison <Morrison at UTAH-20> 	Re: subseq and consing  
C00175 00047	∂18-Jan-82  1602	Daniel L. Weinreb <DLW at MIT-AI> 	subseq and consing  
C00176 00048	∂18-Jan-82  2203	Scott.Fahlman at CMU-10A 	Re: Sequence functions  
C00179 00049	∂19-Jan-82  1551	RPG  	Suggestion    
C00181 00050	∂19-Jan-82  2113	Griss at UTAH-20 (Martin.Griss) 	Re: Suggestion        
C00183 00051	∂20-Jan-82  1604	David A. Moon <MOON5 at MIT-AI> 	Keyword style sequence functions
C00200 00052	∂20-Jan-82  1631	Kim.fateman at Berkeley 	numerics and common-lisp 
C00209 00053	∂20-Jan-82  2008	Daniel L. Weinreb <dlw at MIT-AI> 	Suggestion     
C00211 00054	∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
C00213 00055	∂18-Jan-82  1537	Daniel L. Weinreb <DLW at MIT-AI> 	subseq and consing  
C00214 00056	∂18-Jan-82  2203	Scott.Fahlman at CMU-10A 	Re: Sequence functions  
C00217 00057	∂19-Jan-82  1551	RPG  	Suggestion    
C00220 00058	∂19-Jan-82  2113	Griss at UTAH-20 (Martin.Griss) 	Re: Suggestion        
C00222 00059	∂19-Jan-82  2113	Fahlman at CMU-20C 	Re: Suggestion      
C00224 00060	∂20-Jan-82  1604	David A. Moon <MOON5 at MIT-AI> 	Keyword style sequence functions
C00241 00061	∂20-Jan-82  1631	Kim.fateman at Berkeley 	numerics and common-lisp 
C00250 00062	∂20-Jan-82  2008	Daniel L. Weinreb <dlw at MIT-AI> 	Suggestion     
C00252 00063	∂19-Jan-82  1448	Feigenbaum at SUMEX-AIM 	more on common lisp 
C00260 00064	∂20-Jan-82  2132	Fahlman at CMU-20C 	Implementations
C00268 00065	∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
C00271 00066	∂21-Jan-82  1746	Earl A. Killian <EAK at MIT-MC> 	SET functions    
C00272 00067	∂21-Jan-82  1803	Richard M. Stallman <RMS at MIT-AI>
C00274 00068	∂21-Jan-82  1844	Don Morrison <Morrison at UTAH-20> 
C00277 00069	∂21-Jan-82  2053	George J. Carrette <GJC at MIT-MC> 
C00280 00070	∂21-Jan-82  1144	Sridharan at RUTGERS (Sri) 	S-1 CommonLisp   
C00291 00071	∂21-Jan-82  1651	Earl A. Killian <EAK at MIT-MC> 	SET functions    
C00292 00072	∂21-Jan-82  1803	Richard M. Stallman <RMS at MIT-AI>
C00294 00073	∂21-Jan-82  1844	Don Morrison <Morrison at UTAH-20> 
C00297 00074	∂21-Jan-82  2053	George J. Carrette <GJC at MIT-MC> 
C00299 00075	∂22-Jan-82  1842	Fahlman at CMU-20C 	Re: adding to kernel
C00303 00076	∂22-Jan-82  1914	Fahlman at CMU-20C 	Multiple values
C00305 00077	∂22-Jan-82  2132	Kim.fateman at Berkeley 	Re: adding to kernel
C00309 00078	∂23-Jan-82  0409	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
C00313 00079	∂23-Jan-82  0910	RPG  
C00315 00080	∂23-Jan-82  1841	Fahlman at CMU-20C  
C00318 00081	∂23-Jan-82  2029	Fahlman at CMU-20C 	Re:  adding to kernel    
C00324 00082	∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
C00325 00083	∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
C00327 00084	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00329 00085	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00331 00086	∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
C00335 00087	∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
C00337 00088	∂24-Jan-82  2246	Kim.fateman at Berkeley 	NIL/Macsyma    
C00339 00089	∂25-Jan-82  1558	DILL at CMU-20C 	eql => eq?   
C00342 00090	∂25-Jan-82  1853	Fahlman at CMU-20C 	Re: eql => eq? 
C00343 00091	∂27-Jan-82  1034	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
C00346 00092	∂27-Jan-82  1445	Jon L White <JONL at MIT-MC> 	Multiple mailing lists?  
C00347 00093	∂27-Jan-82  1438	Jon L White <JONL at MIT-MC> 	Two little suggestions for macroexpansion    
C00353 00094	∂27-Jan-82  2202	RPG  	MVLet    
C00356 00095	∂28-Jan-82  0901	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
C00358 00096	∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
C00359 00097	∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
C00361 00098	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00363 00099	∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
C00365 00100	∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
C00369 00101	∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
C00371 00102	∂24-Jan-82  2246	Kim.fateman at Berkeley 	NIL/Macsyma    
C00373 00103	∂25-Jan-82  1436	Hanson at SRI-AI 	NIL and DEC VAX Common LISP
C00375 00104	∂25-Jan-82  1558	DILL at CMU-20C 	eql => eq?   
C00378 00105	∂25-Jan-82  1853	Fahlman at CMU-20C 	Re: eql => eq? 
C00379 00106	∂28-Jan-82  0901	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
C00381 00107	∂28-Jan-82  1235	Fahlman at CMU-20C 	Re: MVLet      
C00386 00108	∂28-Jan-82  1416	Richard M. Stallman <rms at MIT-AI> 	Macro expansion suggestions 
C00388 00109	∂28-Jan-82  1914	Howard I. Cannon <HIC at MIT-MC> 	Macro expansion suggestions    
C00393 00110	∂27-Jan-82  1633	Jonl at MIT-MC Two little suggestions for macroexpansion
C00399 00111	∂28-Jan-82  1633	Fahlman at CMU-20C 	Re: Two little suggestions for macroexpansion
C00401 00112	∂29-Jan-82  0945	DILL at CMU-20C 	Re: eql => eq?    
C00405 00113	∂29-Jan-82  1026	Guy.Steele at CMU-10A 	Okay, you hackers
C00407 00114	∂29-Jan-82  1059	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
C00411 00115	∂29-Jan-82  1146	Guy.Steele at CMU-10A 	MACSYMA timing   
C00413 00116	∂29-Jan-82  1204	Guy.Steele at CMU-10A 	Re: eql => eq?   
C00415 00117	∂29-Jan-82  1225	George J. Carrette <GJC at MIT-MC> 	MACSYMA timing
C00418 00118	∂29-Jan-82  1324	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
C00419 00119	∂29-Jan-82  1332	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
C00420 00120	∂29-Jan-82  1336	Guy.Steele at CMU-10A 	Re: Re: eql => eq?    
C00422 00121	∂29-Jan-82  1654	Richard M. Stallman <RMS at MIT-AI> 	Trying to implement FPOSITION with LAMBDA-MACROs.    
C00425 00122	∂29-Jan-82  2149	Kim.fateman at Berkeley 	Okay, you hackers   
C00428 00123	∂29-Jan-82  2235	HIC at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs.  
C00432 00124	∂30-Jan-82  0006	MOON at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs 
C00433 00125	∂30-Jan-82  0431	Kent M. Pitman <KMP at MIT-MC> 	Those two little suggestions for macroexpansion 
C00435 00126	∂30-Jan-82  1234	Eric Benson <BENSON at UTAH-20> 	Re: MVLet   
C00438 00127	∂30-Jan-82  1351	RPG  	MVlet    
C00439 00128	∂30-Jan-82  1405	Jon L White <JONL at MIT-MC> 	Comparison of "lambda-macros" and my "Two little suggestions ..."
C00447 00129	∂30-Jan-82  1446	Jon L White <JONL at MIT-MC> 	The format ((MACRO . f) ...)  
C00449 00130	∂30-Jan-82  1742	Fahlman at CMU-20C 	Re: MVlet      
C00451 00131	∂30-Jan-82  1807	RPG  	MVlet    
C00453 00132	∂30-Jan-82  1935	Guy.Steele at CMU-10A 	Forwarded message
C00456 00133	∂30-Jan-82  1952	Fahlman at CMU-20C 	Re: MVlet      
C00462 00134	∂30-Jan-82  2002	Fahlman at CMU-20C 	GETPR
C00464 00135	∂30-Jan-82  2201	Richard M. Stallman <RMS at MIT-AI>
C00465 00136	∂31-Jan-82  1116	Daniel L. Weinreb <dlw at MIT-AI> 	GETPR
C00466 00137	∂01-Feb-82  0752	Jon L White <JONL at MIT-MC> 	Incredible co-incidence about the format ((MACRO . f) ...)  
C00468 00138	∂01-Feb-82  0939	HIC at SCRC-TENEX 	Incredible co-incidence about the format ((MACRO . f) ...)   
C00472 00139	∂01-Feb-82  1014	Kim.fateman at Berkeley 	GETPR and compatibility  
C00477 00140	∂01-Feb-82  1034	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	a proposal about compatibility 
C00479 00141	∂01-Feb-82  1039	Daniel L. Weinreb <DLW at MIT-AI> 	Re: MVLet      
C00482 00142	∂01-Feb-82  2315	Earl A. Killian <EAK at MIT-MC> 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs   
C00484 00143	∂01-Feb-82  2315	FEINBERG at CMU-20C 	Compatibility With Maclisp   
C00487 00144	∂01-Feb-82  2319	Earl A. Killian <EAK at MIT-MC> 	GET/PUT names    
C00489 00145	∂01-Feb-82  2319	Howard I. Cannon <HIC at MIT-MC> 	The right way   
C00493 00146	∂01-Feb-82  2321	Jon L White <JONL at MIT-MC> 	MacLISP name compatibility, and return values of update functions
C00498 00147	∂01-Feb-82  2322	Jon L White <JONL at MIT-MC> 	MVLet hair, and RPG's suggestion   
C00502 00148	∂02-Feb-82  0002	Guy.Steele at CMU-10A 	The right way    
C00504 00149	∂02-Feb-82  0110	Richard M. Stallman <RMS at MIT-AI>
C00506 00150	∂02-Feb-82  0116	David A. Moon <Moon at SCRC-TENEX at MIT-AI> 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
C00508 00151	∂02-Feb-82  1005	Daniel L. Weinreb <DLW at MIT-AI>  
C00510 00152	∂02-Feb-82  1211	Eric Benson <BENSON at UTAH-20> 	Re: MacLISP name compatibility, and return values of update functions   
C00512 00153	∂02-Feb-82  1304	FEINBERG at CMU-20C 	a proposal about compatibility    
C00513 00154	∂02-Feb-82  1321	Masinter at PARC-MAXC 	Re: MacLISP name compatibility, and return values of update functions   
C00514 00155	∂02-Feb-82  1337	Masinter at PARC-MAXC 	SUBST vs INLINE, consistent compilation   
C00517 00156	∂02-Feb-82  1417	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: a proposal about compatibility  
C00522 00157	∂02-Feb-82  1539	Richard M. Stallman <RMS at MIT-AI> 	No policy is a good policy  
C00526 00158	∂02-Feb-82  1926	DILL at CMU-20C 	upward compatibility   
C00528 00159	∂02-Feb-82  2148	RPG  	MVLet    
C00529 00160	∂02-Feb-82  2223	Richard M. Stallman <RMS at MIT-AI>
C00531 00161	∂02-Feb-82  2337	David A. Moon <MOON at MIT-MC> 	upward compatibility   
C00533 00162	∂03-Feb-82  1622	Earl A. Killian <EAK at MIT-MC> 	SUBST vs INLINE, consistent compilation   
C00534 00163	∂04-Feb-82  1513	Jon L White <JONL at MIT-MC> 	"exceptions" possibly based on misconception and EVAL strikes again  
C00539 00164	∂04-Feb-82  2047	Howard I. Cannon <HIC at MIT-MC> 	"exceptions" possibly based on misconception and EVAL strikes again   
C00541 00165	∂05-Feb-82  0022	Earl A. Killian <EAK at MIT-MC> 	SUBST vs INLINE, consistent compilation   
C00542 00166	∂05-Feb-82  2247	Fahlman at CMU-20C 	Maclisp compatibility    
C00545 00167	∂06-Feb-82  1200	Daniel L. Weinreb <dlw at MIT-AI> 	Maclisp compatibility    
C00547 00168	∂06-Feb-82  1212	Daniel L. Weinreb <dlw at MIT-AI> 	Return values of SETF    
C00549 00169	∂06-Feb-82  1232	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
C00551 00170	∂06-Feb-82  1251	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Maclisp compatibility 
C00552 00171	∂06-Feb-82  1416	Eric Benson <BENSON at UTAH-20> 	Re: Maclisp compatibility  
C00554 00172	∂06-Feb-82  1429	Howard I. Cannon <HIC at MIT-MC> 	Return values of SETF
C00555 00173	∂06-Feb-82  2031	Fahlman at CMU-20C 	Value of SETF  
C00556 00174	∂06-Feb-82  2102	Fahlman at CMU-20C 	Re: MVLet      
C00558 00175	∂07-Feb-82  0129	Richard Greenblatt <RG at MIT-AI>  
C00560 00176	∂07-Feb-82  0851	Fahlman at CMU-20C  
C00562 00177	∂07-Feb-82  2234	David A. Moon <Moon at MIT-MC> 	Flags in property lists
C00563 00178	∂08-Feb-82  0749	Daniel L. Weinreb <DLW at MIT-MC> 	mv-call   
C00566 00179	∂08-Feb-82  0752	Daniel L. Weinreb <DLW at MIT-MC>  
C00568 00180	∂08-Feb-82  1256	Guy.Steele at CMU-10A 	Flat property lists   
C00569 00181	∂08-Feb-82  1304	Guy.Steele at CMU-10A 	The "Official" Rules  
C00571 00182	∂08-Feb-82  1410	Eric Benson <BENSON at UTAH-20> 	Re:  Flat property lists   
C00574 00183	∂08-Feb-82  1424	Don Morrison <Morrison at UTAH-20> 	Re:  Flat property lists
C00577 00184	∂08-Feb-82  1453	Richard M. Stallman <RMS at MIT-AI>
C00578 00185	∂19-Feb-82  1656	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Revised sequence proposal 
C00579 00186	∂20-Feb-82  1845	Scott.Fahlman at CMU-10A 	Revised sequence proposal    
C00580 00187	∂21-Feb-82  2357	MOON at SCRC-TENEX 	Fahlman's new new sequence proposal, and an issue of policy 
C00587 00188	∂22-Feb-82  0729	Griss at UTAH-20 (Martin.Griss)    
C00589 00189	∂08-Feb-82  1222	Hanson at SRI-AI 	common Lisp 
C00594 00190	∂28-Feb-82  1158	Scott E. Fahlman <FAHLMAN at CMU-20C> 	T and NIL  
C00608 00191	∂28-Feb-82  1342	Scott E. Fahlman <FAHLMAN at CMU-20C> 	T and NIL addendum   
C00610 00192	∂28-Feb-82  1524	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
C00615 00193	∂28-Feb-82  1700	Kim.fateman at Berkeley 	smoking things out of macsyma 
C00618 00194	∂28-Feb-82  1803	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Re:  T and NIL. 
C00621 00195	∂28-Feb-82  2102	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
C00623 00196	∂28-Feb-82  2333	George J. Carrette <GJC at MIT-MC> 	Take the hint.
C00625 00197	∂01-Mar-82  1356	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: T and NIL   
C00629 00198	∂01-Mar-82  2031	Richard M. Stallman <RMS at MIT-AI> 	Pronouncing ()    
C00631 00199	∂01-Mar-82  2124	Richard M. Stallman <RMS at MIT-AI> 	() and T.    
C00635 00200	∂02-Mar-82  1233	Jon L White <JONL at MIT-MC> 	NIL versus (), and more about predicates.    
C00641 00201	∂02-Mar-82  1322	Jon L White <JONL at MIT-MC> 	NOT and NULL: addendum to previous note 
C00642 00202	∂02-Mar-82  1322	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
C00645 00203	∂02-Mar-82  1406	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	I think I am missing something 
C00649 00204	∂03-Mar-82  1158	Eric Benson <BENSON at UTAH-20> 	The truth value returned by predicates    
C00651 00205	∂03-Mar-82  1337	Eric Benson <BENSON at UTAH-20> 	The truth value returned by predicates    
C00653 00206	∂03-Mar-82  1753	Richard M. Stallman <RMS at MIT-AI>
C00655 00207	∂04-Mar-82  1846	Earl A. Killian <EAK at MIT-MC> 	T and NIL   
C00656 00208	∂04-Mar-82  1846	Earl A. Killian <EAK at MIT-MC> 	Fahlman's new new sequence proposal, and an issue of policy   
C00658 00209	∂05-Mar-82  0101	Richard M. Stallman <RMS at MIT-AI> 	COMPOSE 
C00659 00210	∂05-Mar-82  0902	Jon L White <JONL at MIT-MC> 	What are you missing?  and "patching"  ATOM and LISTP  
C00663 00211	∂05-Mar-82  0910	Jon L White <JONL at MIT-MC> 	How useful will a liberated T and NIL be?    
C00666 00212	∂05-Mar-82  1129	MASINTER at PARC-MAXC 	NIL and T   
C00669 00213	∂05-Mar-82  1308	Kim.fateman at Berkeley 	aesthetics, NIL and T    
C00671 00214	∂05-Mar-82  2045	George J. Carrette <GJC at MIT-MC> 	I won't die if (SYMBOLP (NOT 'FOO)) => T, but really now...
C00675 00215	∂05-Mar-82  2312	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Lexical Scoping 
C00677 00216	∂06-Mar-82  1218	Alan Bawden <ALAN at MIT-MC> 	What I still think about T and NIL 
C00679 00217	∂06-Mar-82  1251	Alan Bawden <ALAN at MIT-MC> 	What I still think about T and NIL 
C00681 00218	∂06-Mar-82  1326	Howard I. Cannon <HIC at MIT-MC> 	T/NIL 
C00682 00219	∂06-Mar-82  1351	Eric Benson <BENSON at UTAH-20> 	CAR of NIL  
C00683 00220	∂06-Mar-82  1429	KIM.jkf@Berkeley (John Foderaro) 	t and nil  
C00685 00221	∂06-Mar-82  1911	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: CAR of NIL  
C00689 00222	∂06-Mar-82  2306	JMC  
C00690 ENDMK
C⊗;
∂30-Dec-81  1117	Guy.Steele at CMU-10A 	Text-file versions of DECISIONS and REVISIONS documents  
Date: 30 December 1981 1415-EST (Wednesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Text-file versions of DECISIONS and REVISIONS documents
Message-Id: <30Dec81 141557 GS70@CMU-10A>

The files DECISIONS DOC and REVISIONS DOC  on directory  GLS;
at  MIT-MC  are available.  They are text files, as opposed to
PRESS files.  The former is 9958 lines long, and the latter is
1427.
--Guy

∂23-Dec-81  2255	Kim.fateman at Berkeley 	elementary functions
Date: 23 Dec 1981 22:48:00-PST
From: Kim.fateman at Berkeley
To: guy.steele@cmu-10a
Subject: elementary functions
Cc: Kim.jkf@UCB-C70, gjc@MIT-MC, griss@utah-20, jonl@MIT-MC, masinter@PARC-MAXC,
    rpg@SU-AI

I have no objection to making lisp work better with numerical computation.
I think that it is a far more complicated issue than you seem to think
to put in elementary functions.  Branch cuts are probably not hard.
APL's notion of a user-settable "fuzz" is gross.  Stan Brown's
model of arithmetic is (Ada notwithstanding) inadequate as a prescriptive
model (Brown agrees).  If you provide a logarithm function, are you
willing to bet that it will hold up to the careful scrutiny of people
like Kahan?
  
As for the vagaries of arithmetic in Franz, I hope such things will
get ironed out along with vagaries in the Berkeley UNIX system.  Kahan
and I intend to address such issues.  I think it is a mistake to
address such issues as LANGUAGE issues, though.

I have not seen Penfield's article (yet). 

As for the rational number implementation question, it seems to me
that implementation of rational numbers (as pairs) loses little by
being programmed in Lisp.  Writing bignums in lisp loses unless you
happen to have access to machine instructions like 64-bit divided by
32 bit, from Lisp.  

I would certainly like to see common lisp be successful;  if you
have specific plans for the arithmetic that you wish to get comments and/or
help on, please give them a wider circulation.  E.g. the IEEE
floating point committee might like to see how you might incorporate
good ideas in a language.
I would be glad to pass your plans on to them.

∂01-Jan-82  1600	Guy.Steele at CMU-10A 	Tasks: A Reminder and Plea 
Date:  1 January 1982 1901-EST (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Tasks: A Reminder and Plea
Message-Id: <01Jan82 190137 GS70@CMU-10A>

At the November meeting, a number of issues were deferred with the
understanding that certain people would make concrete proposals for
consideration and inclusion in the second draft of the manual.  I
promised to get the second draft out in January, and to do that I need
those proposals pretty soon.  I am asking to get them in two weeks (by
January 15).  Ideally they would already be in SCRIBE format, but I'll
settle for any reasonable-looking ASCII file of text approximately in
the style of the manual.  BOLIO files are okay too; I can semi-automate
BOLIO to SCRIBE conversion.  I would prefer not to get rambling prose,
outlines, or sentence fragments; just nice, clean, crisp text that
requires only typographical editing before inclusion in the manual.
(That's the goal, anyway; I realize I may have to do some
industrial-strength editing for consistency.)  A list of the outstanding
tasks follows.

--Guy

GLS: Propose a method for allowing special forms to have a dual
implementation as both a macro (for user and compiler convenience)
and as a fexpr (for interpreter speed).  Create a list of primitive
special forms not easily reducible via macros to other primitives.
As part of this suggest an alternative to FUNCTIONP of two arguments.

MOON: Propose a rigorous mathematical formulation of the treatment
of the optional tolerance-specification argument for MOD and REMAINDER.
(I had a crack at this and couldn't figure it out, though I think I
came close.)

GLS: Propose specifications for lexical catch, especially a good name for it.

Everybody: Propose a clean and consistent declaration system.

MOON/DLW/ALAN: Propose a cleaned-up version of LOOP.  Alter it to handle
most interesting sequence operations gracefully.

SEF: Propose a complete set of keyword-style sequence operations.

GLS: Propose a set of functional-style sequence operations.

GJC/RLB: Polish the VAXMAX proposal for feature sets and #+ syntax.

ALAN: Propose a more extensible character-syntax definition system.

GLS: Propose a set of functions to interface to a filename/pathname
system in the spirit of the LISP Machine's.

LISPM: Propose a new error-handling system.

LISPM: Propose a new package system.
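[The distinction between SEF's "keyword-style" and GLS's "functional-style" sequence tasks above can be illustrated by analogy. The sketch below is in Python, purely as a modern illustration; the actual Common Lisp proposals were still being drafted at this point, and all the names here are invented.]

```python
# Analogy only: "keyword-style" selects behavior with keyword options
# packed into one entry point; "functional-style" selects behavior by
# passing a predicate function to a separate entry point.

def position_keyword(item, seq, start=0, end=None, key=None):
    """Keyword-style: one function, options as keywords."""
    end = len(seq) if end is None else end
    for i in range(start, end):
        elt = seq[i] if key is None else key(seq[i])
        if elt == item:
            return i
    return None

def position_if(pred, seq):
    """Functional-style: behavior comes from a passed function."""
    for i, elt in enumerate(seq):
        if pred(elt):
            return i
    return None
```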


∂08-Dec-81  0650	Griss at UTAH-20 (Martin.Griss) 	PSL progress report   
Date:  8 Dec 1981 0743-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: PSL progress report
To: rpg at SU-AI
cc: griss at UTAH-20

How was common LISP meeting?
Did you meet Ohlander?

Excuse me if following remailed to you, seems to be a mailer bug:
                            PSL Interest Group
                              2 December 1981


     Since my last message at the end of October, we have made significant
progress on the VAX version of PSL. Most of the effort this last month has
been directed at VAX PSL, with some utility work on the DEC-20 and Apollo.
Please send a message if you wish to be removed from this mailing LIST, or
wish other names to be added.

	Martin L. Griss,
	CS Dept., 3160 MEB,
	University of Utah,
	Salt Lake City, Utah 84112.
	(801)-581-6542

--------------------------------------------------------------------------

Last month, we started work in earnest on the VAX macros and the
LAP-to-UNIX-assembler ("as") converter.  We used the PSL-20 V2 sources
and the PSL-to-MIDAS compiler c-macros
and tables as a guide. After some small pieces of code were tested, cross
compilation on the DEC-20 and assembly on the VAX proceeded full-bore. Just
before Thanksgiving, there was rapid progress resulting in the first
executing PSL on the VAX. This version consisted mostly of the kernel
modules of the PSL-20 version, without the garbage collector, resident LAP
and some debugging tools. Most of the effort in implementing these smaller
modules is the requirement for a small amount of LAP to provide the
compiled function/interpreted function interface, and efficient variable
binding operations.  The resident LAP has to be newly written for the VAX.
The c-macros and compiler of course have been fully tested in the process
of building the kernel.

It was decided to produce a new stop-and-copy (two space) collector for
PSL-VAX, to replace the PSL-20 compacting collector.  This collector was
written in about a day and tested by loading it into PSL-20 and dynamically
redefining the compacting collector. On the DEC-20, it seems about 50%
faster than the compacting collector, and MUCH simpler to maintain. It will
be used for the Extended addressing PSL-20. This garbage collector is now
in use with PSL-VAX.
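[The collector Griss describes is the classic two-space, stop-and-copy scheme, essentially Cheney's algorithm: live cells are moved into an empty to-space, a forwarding pointer is left behind for each moved cell, and a scan pointer chases the free pointer until every copied cell has been scavenged. The toy model below is in Python purely for illustration; it is not PSL source, and every name in it is invented. Cells are dicts, pointers are integer indices, and anything that is not an int counts as an immediate datum.]

```python
# Toy model of a two-space stop-and-copy collector (Cheney-style).
# Not PSL source; all names invented for illustration.

def collect(from_space, roots):
    """Copy every cell reachable from `roots` out of `from_space` into
    a fresh to-space; return (to_space, new_roots).  Cells are dicts
    {'car': x, 'cdr': y}; an int is a pointer (an index), anything
    else is an immediate datum."""
    to_space = []
    forward = {}                      # from-space index -> to-space index

    def copy(ptr):
        if not isinstance(ptr, int):  # immediate datum: copy as-is
            return ptr
        if ptr in forward:            # already moved: follow forwarding pointer
            return forward[ptr]
        to_space.append(dict(from_space[ptr]))   # move the cell
        forward[ptr] = len(to_space) - 1
        return forward[ptr]

    new_roots = [copy(r) for r in roots]
    scan = 0
    while scan < len(to_space):       # scavenge until scan catches free
        cell = to_space[scan]
        cell['car'] = copy(cell['car'])
        cell['cdr'] = copy(cell['cdr'])
        scan += 1
    return to_space, new_roots        # to-space becomes the new heap
```

Unreachable cells are simply never copied, which is why such a collector can be both simpler and faster than a compacting one: its cost is proportional to live data only.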

Additional ("non-kernel") modules have also been incorporated in this
cross-compilation phase (they are normally loaded as LAP into PSL-20) to
provide a usable interpreted PSL. PSL-VAX V1 now runs all of the Standard
LISP test file, and most utility modules run interpretively (RLISP parser,
structure editor, etc).  We may compile the RLISP parser and support in the
next build and have a complete RLISP for use until we have resident LAP and
compiler.  The implementation of the resident LAP, a SYSCALL function, etc
should take a few weeks. One possibility is to look at the Franz LISP fasl
and object file loader, and consider using the Unix assembler in a lower
fork with a fasl loader.

Preliminary timings of small interpreted code segments indicate that this
version of PSL runs somewhat slower than Franz LISP. There are functions that
are slower and functions that are faster (usually because of SYSLISP
constructs).  We will time some compiled code shortly (have to
cross-compile and link into kernel in current PSL) to identify good and bad
constructs.  We will also spend some time studying the code emitted, and
change the code-generator tables to produce the next version, which we
expect to be quite a bit faster. The current code generator does not use
any three address or indexing mode operations.

We will shortly concentrate on the first Apollo version of PSL.  We do not
expect any major surprises. Most of the changes from the PSL-20 system
(byte/word conflicts) have now been completely flushed out in the VAX
version.  The 68000 tables should be modeled very closely on the VAX
tables. The current Apollo assembler, file-transfer program, and debugger
are not as powerful as the corresponding VAX tools, and this will make work
a little harder. To compensate, there will be less source changes to check
out.



M
-------

Eric
Just finished my long trip plus recovery from East coast flu's etc. Can
you compile the TAK function for me using your portable compiler and send
me the code. Also, could you time it on (TAK 18. 12. 6.). Here's the code
I mean:

(defun tak (x y z)
       (cond ((not (< y x))
	      z)
	     (t (tak (tak (1- x) y z)
		     (tak (1- y) z x)
		     (tak (1- z) x y)))))
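[The value of the benchmark call RPG asks Eric to time can be checked by transcribing the definition above; the version below is in Python, purely so the result can be verified, with (1- x) rendered as x - 1.]

```python
# TAK, transcribed from the Lisp definition above purely for checking.
def tak(x, y, z):
    if not (y < x):
        return z
    return tak(tak(x - 1, y, z),
               tak(y - 1, z, x),
               tak(z - 1, x, y))

# The call RPG asks to be timed:
# tak(18, 12, 6) evaluates to 7 (the standard TAK benchmark result).
```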

I'm in the process of putting together a synopsis of the results from
the meeting. In short, from your viewpoint, we decided that it would be
necessary for us (Common Lisp) to specify a small virtual machine and
for us to then supply to all interested parties the rest of the system
in Common Lisp code. This means that there would be a smallish number
of primitives that you would need to implement. I assume that this
is satisfactory for the Utah contingent. 

Unfortunately, a second meeting will be necessary to complete the agenda 
since we did not quite finish. In fact, I was unable to travel to
Washington on this account.
∂15-Dec-81  0829	Guy.Steele at CMU-10A 	Arrgghhh blag    
Date: 15 December 1981 1127-EST (Tuesday)
From: Guy.Steele at CMU-10A
To: rpg at SU-AI
Subject:  Arrgghhh blag
Message-Id: <15Dec81 112717 GS70@CMU-10A>

Foo.  I didn't want to become involved in an ANSI standard, and I have
told people so.  For one thing, it looks like a power play and might
alienate people such as the InterLISP crowd, and I wouldn't blame them.
In any case, I don't think it is appropriate to consider this until
we at least have a full draft manual.  If MRG wants to fight that fight,
let him at it.
I am working on collating the bibliographic entries.  I have most of them
on-line already, but just have to convert from TJ6 to SCRIBE format.
I agree that the abstract is not very exciting -- it is
practically stodgy.  I was hoping you would know how to give it some oomph,
some sparkle.  If not, we'll just send it out as is and try to sparkle up
the paper if it is accepted.  Your suggestions about explaining TNBIND
and having a diagram are good.
--Q

∂18-Dec-81  0918	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	information about Common Lisp implementation  
Date: 18 Dec 1981 1214-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: information about Common Lisp implementation
To: rpg at SU-AI, jonl at MIT-AI

We are about to sign a contract with DEC's LCG whereby they sponsor us
to produce an extended addressing Lisp.  We are still discussing whether
this should be Interlisp or Common Lisp.  I can see good arguments in
both directions, and do not have a strong preference, but I would
slightly prefer Common Lisp.  Do you know whether there are any
implementations of Common Lisp, or something reasonably close to it? I
am reconciled to producing my own "kernel", probably in assembly
language, though I have some other candidates in mind too. But I would
prefer not to have to do all of the Lisp code from scratch.

As you may know, DEC is probably going to support a Lisp for the VAX. My
guess is that we will be very likely to do the same dialect that  is
decided upon there.  The one exception would be if it looks like MIT (or
someone else) is going to do an extended implementation of Common Lisp.
If so, then we would probably do Interlisp, for completeness.

We have some experience in Lisp implementation now, since Elisp (the
extended implementation of Rutgers/UCI Lisp) is essentially finished.
(I.e. there are some extensions I want to put in, and some optimizations,
but it does allow any sane R/UCI Lisp code to run.) The interpreter now
runs faster than the original R/UCI lisp interpreter. Compiled code is
slightly slower, but we think this is due to the fact that we are not
yet compiling some things in line that should be. (Even CAR is not
always done in line!)  The compiler is Utah's portable compiler,
modified for the R/UCI Lisp dialect.  It does about what you would want
a Lisp compiler to do, except that it does not open code arithmetic
(though a later compiler has some abilities in that direction).  I
suspect that for a Common Lisp implementation we would try to use the
PDP-10 Maclisp compiler as a base, unless it is too crufty to understand
or modify.  Changing compilers to produce extended code turns out not to
be a very difficult job.
-------

∂21-Dec-81  0702	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Extended-addressing Common Lisp 
Date: 21 Dec 1981 0957-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Extended-addressing Common Lisp
To: JONL at MIT-XX
cc: rpg at SU-AI
In-Reply-To: Your message of 18-Dec-81 1835-EST

thanks.  At the moment the problem is that DEC is not sure whether they
are interested in Common Lisp or Interlisp.  We will probably
follow the decision they make for the VAX, which should be done
sometime within a month.  What surprised me about that is that, from
what I hear, one of Interlisp's main advantages was supposed to be that
its project was further along on the VAX than the NIL project.  That
sounds odd to me.  I thought NIL had been released.  You might want to talk
with some of the folks at DEC.  The only one I know is Kalman Reti,
XCON.RETI@DEC-MARLBORO.
-------

∂21-Dec-81  1101	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
Date: 21 Dec 1981 1355-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Common Lisp   
To: RPG at SU-AI
In-Reply-To: Your message of 21-Dec-81 1323-EST

I am very happy to hear this.  We have used their compiler for Elisp,
as you may know, and have generally been following their work.  I
have been very impressed also, and would be very happy to see their
work get into something that is going to be more widely used than
Standard Lisp.
-------

∂21-Dec-81  1512	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Common Lisp
Date: 21 Dec 1981 1806-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Common Lisp
To: rpg at SU-AI, griss at UTAH-20

I just had a conversation with JonL which I found to be somewhat
unsettling.  I had hoped that Common Lisp was a sign that the Maclisp
community was willing to start doing a common development effort. It
begins to look like this is not the case.  It sounds to me like the most
we can hope for is a bunch of Lisps that will behave quite differently,
have completely different user facilities, but will have a common subset
of language facilities which will allow knowledgeable users to write
transportable code, if they are careful.  I.e. it looks a lot like the
old Standard Lisp effort, wherein you tried to tweak existing
implementations to support the Standard Lisp primitives.  I thought more
or less everyone agreed that hadn't worked so well, which is why the new
efforts at Utah aim to do something really transportable.  I thought
everybody agreed that these days the way you did a Lisp was to write
some small kernel in an implementation language, and then have a lot of
Lisp code, and that the Lisp code would be shared.

Supposing that we and DEC do agree to proceed with Common Lisp, would
you be interested in starting a Common Lisp sub-conspiracy, i.e. a group
of people interested in a shared Common Lisp implementation?  While we
are going to have support from DEC, that support is going to be $70K
(including University overhead) which is going to be a drop in the
bucket if we have to do a whole system, rather than just a VM and some
tweaking.

-------

∂22-Dec-81  0811	Kim.fateman at Berkeley 	various: arithmetic;  commonlisp broadcasts  
Date: 22 Dec 1981 08:04:24-PST
From: Kim.fateman at Berkeley
To: guy.steele@cmu-10a
Subject: various: arithmetic;  commonlisp broadcasts
Cc: gjc@mit-mc, griss@utah-20, Kim.jkf@Berkeley, jonl@mit-mc, masinter@parc-maxc, rpg@su-ai

seem to include token representatives from berkeley (jkf) and utah (dm).
I think that including fateman@berkeley and griss@utah, too, would be nice.

I noticed in the Interlisp representative's report (the first to arrive
in "clear text", not Press format) that arithmetic needs are being
dictated in such a way as to be "as much as you would want for an
algebraic manipulation system such as Macsyma."   Since ratios and
complex numbers are not supported in the base Maclisp, I wonder why
they would be considered important to have in the base common lisp?

Personally, having the common lisp people dictate the results of
elementary functions, the semantics of bigfloat (what happened to
bigfloat? Is it gone?), single and double...
and such, seems overly ambitious and unnecessary.
No other language, not even Fortran or Ada, does much of this, and what
it does is usually not very good.

The true argument for including such stuff is NOT requirements of 
algebraic  manipulation stuff, but the prospect of doing
ARITHMETIC manipulation stuff with C.L.  Since only a few people are
familiar with Macsyma and Macsyma-like systems, requirements expressed
in the form "macsyma needs it"  seem unarguable.  But they are not...

∂22-Dec-81  0847	Griss at UTAH-20 (Martin.Griss) 	[Griss (Martin.Griss): Re: Common Lisp]   
Date: 22 Dec 1981 0944-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: [Griss (Martin.Griss): Re: Common Lisp]
To: rpg at SU-AI
cc: griss at UTAH-20

This is part of my response to Hedrick's last message. I guess I don't know
what JonL said to him... I feel that I would be able to make more informed
decisions, and to interact more on Common LISP, if I were on the mailing list.
I believe that PSL is a pretty viable replacement for Standard LISP, and
might provide some kernel for CL. We are on a course now that really wants us
to finish our current "new-LISP" and to begin using it for applications in
the next 2-3 months (e.g. NSF and Boeing support). I think having an
association with CL would help some funding efforts, maybe ARPA,
Schlumberger, etc.

Perhaps we could talk on phone?
M
                ---------------

Date: 22 Dec 1981 0940-MST
From: Griss (Martin.Griss)
Subject: Re: Common Lisp
To: HEDRICK at RUTGERS
cc: Griss
In-Reply-To: Your message of 21-Dec-81 1606-MST

   Some more thoughts. Actually, I haven't heard anything "official" about
decisions on Common LISP. RPG visited here, and I think our concerns that the
CL definition was too large (even larger than the InterLISP VM) helped
formulate a Kernel + CL-extension-files approach.  Clearly that is what we
are doing now in PSL, building on the relatively successful parts of Standard
LISP, such as the compiler, etc. (SL worked well enough for us; we just
didn't have the resources to do more then).  I agree that JonL's comments as
relayed by you sound much more anarchistic...

  I would really like to get involved in Common LISP, probably doing the VAX
and 68000, since I guess you seem to be snapping up the DEC-20 market. I
currently plan to continue with PSL on the 20, VAX and 68000, since we are
almost done with the first round: the VAX is 90% complete and the 68000 is
partially underway. In the same sense that SYSLISP could be the basis for
your 20 InterLISP, I think SYSLISP and some of PSL could be a transportable
kernel for CL.

I need, of course, to find more funding; I can't cover this out of my NSF
effort, since we are just about ready to start using PSL. I'll be teaching a
class using PSL on the DEC-20 and VAX (maybe even the 68000?) this quarter,
and getting some Algebra and Graphics projects underway. I will of course
strive to be as CL compatible as I can afford at this time.
-------
-------

∂23-Dec-81 1306	Guy.Steele at CMU-10A 	Re: various: arithmetic; commonlisp broadcasts 
Date: 23 December 1981 0025-EST (Wednesday)
From: Guy.Steele at CMU-10A
To: Kim.fateman at UCB-C70
Subject:  Re: various: arithmetic; commonlisp broadcasts
CC: gjc at MIT-MC, griss at utah-20, Kim.jkf at UCB-C70, jonl at MIT-MC,
    masinter at PARC-MAXC, rpg at SU-AI
In-Reply-To:  Kim.fateman@Berkeley's message of 22 Dec 81 11:06-EST
Message-Id: <23Dec81 002535 GS70@CMU-10A>

I sent the mail to the specified representatives of Berkeley and Utah
not because they were "token" but because they were the ones that had
actually contributed substantially to the discussion of outstanding issues.
I assumed that they would pass on the news.  I'll be glad to add you to
the mailing list if you really want that much more junk mail.

It should be noted that the InterLISP representative's report is just that:
the report of the InterLISP representative.  I think it is an excellent
report, but do not necessarily agree with all of its value judgements
and perspectives.  Therefore the motivations inferred by vanMelle and
suggested in his report are not necessarily the true ones of the other
people involved.  I assume, however, that they accurately reflect vanMelle's
*perception* of people's motives, and as such are a valuable contribution
(because after all people may not understand their own motives well, or
may not realize how well or poorly they are communicating their ideas!).

You ask why Common LISP should support ratios and complex numbers, given
that MacLISP did not and yet MACSYMA got built anyway.  In response,
I rhetorically ask why MacLISP should have supported bignums, since
the PDP-10 does not?  Ratios were introduced primarily because they are
useful, they are natural for novices to use (at least as natural as
binary floating-point, with all its odd quirks, and with the advantage
of calculating exact results, such as (* 3 1/3) => 1, *always*), and
they solve problems with the quotient function.  Complex numbers were
motivated primarily by the S-1, which can handle complex floating-point
numbers and "Gaussian fixnums" primitively.  They need not be in Common
LISP, I suppose, but they are not much work to add.
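Steele's exactness argument for ratios can be made concrete with a small
sketch. Python's `fractions.Fraction` is used below purely as a stand-in
for Lisp ratios; the Lisp forms in the comments are only analogies:

```python
from fractions import Fraction

# Lisp ratios: (* 3 1/3) => 1, exactly, *always*.
third = Fraction(1, 3)
assert 3 * third == 1                    # exact: no rounding ever occurs

# Binary floating point shows the "odd quirks" Steele mentions:
assert 0.1 + 0.2 != 0.3                  # floats round; ratios don't
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)

# Ratios also solve the quotient problem: dividing integers
# need not lose information.
assert 7 / Fraction(2) == Fraction(14, 4)  # both are exactly 7/2
```

The point is not that ratios are fast, but that a novice gets the
mathematically expected answer without learning float pitfalls first.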

The results of elementary functions are not being invented in a vacuum,
as you have several times insinuated, nor are the Common LISP implementors
going off and inventing some arbitrary new thing.  I have researched
the implementation, definition, and use of complex numbers in Algol 68,
PL/I, APL, and FORTRAN, and the real elementary functions in another
half-dozen languages.  The definitions of branch cuts and boundary cases,
which are in general not agreed on by any mathematicians at all (they tend
to define them *ad hoc* for the purpose at hand), are taken from a paper
by Paul Penfield for the APL community, in which he considers the problem
at length, weighs alternatives, and justifies his results according to
ten general principles, among which are consistency, keeping branch cuts
away from the positive real axis, preserving identities at boundaries,
and so on.  This paper has appeared in the APL '81 conference.  I agree that
mistakes have been made in other programming languages, but that does not
mean we should hide our heads in the sand.  A serious effort is being made
to learn from the past.  I think this effort is more substantial than will
be made by the dozens of Common LISP users who will have to write their
own trig functions if the language does not provide them.

Even if a mistake is made, it can be compensated for.  MACSYMA presently
has to compensate for MacLISP's ATAN function, whose range is 0 to 2*pi
(for most purposes -pi to pi is more appropriate, and certainly more
conventional).
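The conventional range Steele refers to is the one the two-argument
arctangent delivers; a quick check, using Python's math library only as
an illustration of the convention:

```python
import math

# Two-argument arctangent: results fall in (-pi, pi], the conventional
# range, rather than MacLISP ATAN's [0, 2*pi).
assert math.atan2(0.0, -1.0) == math.pi        # the cut is on the negative axis
assert math.atan2(-1.0, 0.0) == -math.pi / 2   # below the axis: negative angle
assert -math.pi < math.atan2(-1.0, -1.0) < 0   # third quadrant stays negative
```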

[Could I inquire as to whether (FIX 1.0E20) still produces a smallish
negative number in Franz LISP?]

I could not agree more that all of this is relevant, not to *algebraic*
manipulation, but to *arithmetic* manipulation (although certainly the
presence of rational arithmetic will relieve MACSYMA of that particular
small burden).  But there is no good reason why LISP cannot become a
useful computational as well as symbolic language.  In particular,
certain kinds of AI work such as vision and speech research require
great amounts of numerical computation.  I know that you advocate
methods for linking FORTRAN or C programs to LISP for this purpose.
That is well and good, but I (for one) would like it also to be
practical to do it all in LISP if one so chooses.  LISP has already
expanded its horizons to support text editors and disk controllers;
why not also number-crunching?

--Guy

∂18-Dec-81  1533	Jon L. White <JONL at MIT-XX> 	Extended-addressing Common Lisp   
Date: 18 Dec 1981 1835-EST
From: Jon L. White <JONL at MIT-XX>
Subject: Extended-addressing Common Lisp
To: Hedrick at RUTGERS
cc: rpg at SU-AI

Sounds like a win for you to do it.  As far as I know, no one else
is going to do it (at least not now).  Probably some hints from
the NIL design would be good for you -- at one time the
file MC:NIL;VMACH >  gave a bunch of details about the
NIL "virtual machine".  Probably you should get in personal
touch with me (phone or otherwise) to chat about such "kernels".
-------

∂21-Dec-81  0717	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
Date: 21 Dec 1981 1012-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Common Lisp   
To: RPG at SU-AI
In-Reply-To: Your message of 20-Dec-81 2304-EST

thanks.  Are you sure Utah is producing Common Lisp?  they have a thing
they call Standard Lisp, which is something completely different.  I have
never heard of a Common Lisp project there, and I work very closely with
their Lisp development people so I think I would have.
-------

I visited there in the middle of last month for about 3 days and talked
about the technical side of Common Lisp being implemented in their style.
Martin told
me that if we only insisted on a small virtual machine with most of the
rest in Lisp code from the Common Lisp people he'd like to do it.

I've been looking at their stuff pretty closely for the much-behind-schedule
Lisp evaluation thing and I'm pretty impressed with them. We discussed
grafting my S-1 Lisp compiler front end on top of their portable compiler.
			-rpg-
∂22-Dec-81  0827	Griss at UTAH-20 (Martin.Griss) 	Re: various: arithmetic;  commonlisp broadcasts
Date: 22 Dec 1981 0924-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Re: various: arithmetic;  commonlisp broadcasts
To: Kim.fateman at UCB-C70, guy.steele at CMU-10A
cc: gjc at MIT-MC, Kim.jkf at UCB-C70, jonl at MIT-MC, masinter at PARC-MAXC,
    rpg at SU-AI, Griss at UTAH-20
In-Reply-To: Your message of 22-Dec-81 0905-MST

I agree with Dick re being on the commonlisp mailing list. The PSL effort
is a more modest attempt at defining a transportable modern LISP, extending
Standard LISP with more powerful and efficient functions. I find no trace
of DM@utah-20 on our system, and have tried various aliases, still with
no luck.

Martin
-------

∂04-Jan-82  1754	Kim.fateman at Berkeley 	numbers in common lisp   
Date: 4 Jan 1982 17:54:03-PST
From: Kim.fateman at Berkeley
To: fahlman@cmu-10a, guy.steele@cmu-10a, moon@mit-ai, rpg@su-ai
Subject: numbers in common lisp
Cc: Kim.jkf@Berkeley, Kim.sklower@Berkeley


*** Issue 81: Complex numbers. Allow SQRT and LOG to produce results in
whatever form is necessary to deliver the mathematically defined result.

RJF:  This is problematical. The mathematically defined result is not
necessarily agreed upon.  Does Log(0) produce an error or a symbol?
(e.g. |log-of-zero| ?)  If a symbol, what happens when you try to
do arithmetic on it? Does sin(x) give up after some specified max x,
or continue to be a periodic function up to limit of machine range,
as on the HP 34?  Is accuracy specified in addition to precision?
Is it possible to specify rounding modes by flag setting or by
calling specific rounding-versions e.g. (plus-round-up x y) ? Such
features make it possible to implement interval arithmetic nicely.
Can one trap (signal, throw) on underflow, overflow,...
It would be a satisfying situation if common lisp, or at least a
superset of it, could exploit the IEEE standard. (Prof. Kahan would
much rather that language standardizers NOT delve too deeply into this,
leaving the semantics  (or "arithmetics") to specialists.)

Is it the case that a complex number could be implemented by
#C(x y) == (complex x y)?  In which case (real z) == (cadr z),
etc.  Is a complex "atomic" in the Lisp sense, or is it
the case that (eq (numerator #C(x y)) (numerator #C(x z)))?
Can one "rplac←numerator"?
If one is required to implement another type of atom for the
sake of rationals and another for complexes,
and another for ratios of complexes, then the
utility of this had better be substantial, and the implementation
cost modest.  In the case of x and y rational, there are a variety of
ways of representing x + i*y.  For example, it
is always possible to rationalize the denominator, but is it
required?
If #R(1 2) == (rat 1 2), is it the case that
(numerator r) == (cadr r)?  What is the numerator of (1/2 + i)?

Even if you insist that all complex numbers are floats, not rationals,
you have multiple precisions to deal with.  Is it allowed to 
compute intermediate results to higher precision, or must one truncate
(or round) to some target precision in-between operations?

.......
Thus (SQRT -1.0) -> #C(0.0 1.0) and (LOG -1.0) -> #C(0.0 3.14159265).
Document all this carefully so that the user who doesn't care about
complex numbers isn't bothered too much.  As a rule, if you only play
with integers you won't see floating-point numbers, and if you only
play with non-complex numbers you won't see complex numbers.
.......
RJF: You've given 2 examples where, presumably, integers
are converted not only into floats, but into complex numbers. Your
rule does not seem to be a useful characterization. 
Note also that, for example, asin(1.5) is complex.
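The behavior under discussion here (SQRT and LOG escaping to the complex
domain, and asin of a real argument outside [-1, 1] being complex) matches
what any complex-capable library does; a sketch in Python's cmath module,
used only for illustration:

```python
import cmath

# The analogue of (SQRT -1.0) -> #C(0.0 1.0):
assert abs(cmath.sqrt(-1.0) - 1j) < 1e-12

# The analogue of (LOG -1.0) -> #C(0.0 3.14159265): the branch cut
# puts the imaginary part at +pi, not -pi.
assert abs(cmath.log(-1.0) - cmath.pi * 1j) < 1e-12

# RJF's example: asin(1.5) is genuinely complex.
z = cmath.asin(1.5)
assert z.imag != 0.0
```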

*** Issue 82: Branch cuts and boundary cases in mathematical
functions. Tentatively consider compatibility with APL on the subject of
branch cuts and boundary cases.
.......
RJF: Certainly gratuitous differences with APL, Fortran, PL/I, etc. are
not a good idea!
.....

*** Issue 83: Fuzzy numerical comparisons. Have a new function FUZZY=
which takes three arguments: two numbers and a fuzz (relative
tolerance), which defaults in a way that depends on the precision of the
first two arguments.

.......
RJF: Why is this considered a language issue (in Lisp!), when the primary
language for numerical work (Fortran, not APL) does not treat it as one?
The computation of absolute and relative errors is sufficiently simple
that not much would be added by making this part of the language.  I
believe the fuzz business is used to cover up the fact that some
languages do not support integers. In such systems, some computations
result in 1.99999 vs. 2.00000 comparisons, even though both numbers
are "integers".
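For concreteness, a relative-tolerance comparison of the kind Issue 83
proposes is only a few lines in any language, which is part of RJF's
point. This Python sketch is a hypothetical reading of FUZZY=, not a
definition from the Common Lisp documents:

```python
def fuzzy_eq(a, b, fuzz=1e-9):
    """Relative-tolerance comparison: true when a and b differ by no
    more than fuzz times the larger magnitude.  The default fuzz here
    is an arbitrary illustrative choice."""
    return abs(a - b) <= fuzz * max(abs(a), abs(b))

# The 1.99999 vs. 2.00000 "integer" comparison RJF mentions:
assert fuzzy_eq(1.99999, 2.0, fuzz=1e-4)       # close enough at this fuzz
assert not fuzzy_eq(1.99999, 2.0, fuzz=1e-7)   # but not at a tighter one
```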

Incidentally, on "mod" of floats, I think that what you want is
like the "integer-part" of the IEEE proposal.  The EMOD instruction on
the VAX is a brain-damaged attempt to do range reduction.
.......

*** Issue 93: Complete set of trigonometric functions? Add ASIN, ACOS,
and TAN.


*** Issue 95: Hyperbolic functions. Add SINH, COSH, TANH, ASINH, ACOSH,
and ATANH.
.....
also useful are log(1+x) and exp(x)-1.
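The value of a dedicated log(1+x) (and of its usual companion, exp(x)-1)
is accuracy for small x, where the naive formulations round too early.
A Python illustration, for concreteness only:

```python
import math

x = 1e-15

# Naive log(1+x): the sum 1+x rounds first, so low-order digits of x
# are already lost before the logarithm is taken.
naive = math.log(1 + x)

# A dedicated log(1+x) keeps full precision for small x, where
# log(1+x) ~ x.
accurate = math.log1p(x)

assert abs(accurate - x) < 1e-25   # essentially exact
assert naive != accurate           # the naive form has drifted

# Same story for exp(x)-1 versus a dedicated expm1:
assert abs(math.expm1(x) - x) < 1e-25
```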


*** Issue 96: Are several versions of pi necessary? Eliminate the
variables SHORT-PI, SINGLE-PI, DOUBLE-PI, and LONG-PI, retaining only
PI.  Encourage the user to write such things as (SHORT-FLOAT PI),
(SINGLE-FLOAT (/ PI 2)), etc., when appropriate.
......
RJF: huh?  why not #(times 4 (atan 1.0)),  #(times 4 (atan 1.0d0)) etc.
It seems you are placing a burden on the implementors and discussants
of common lisp to write such trivial programs when the same thing
could be accomplished by a comment in the manual.
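RJF's alternative does work numerically: on a typical IEEE double
implementation, four times the library arctangent of 1 agrees with the
stored constant to within rounding. A Python check, for illustration:

```python
import math

# pi recovered from the arctangent, as RJF suggests, rather than
# provided as a family of named constants.
pi_from_atan = 4 * math.atan(1.0)

# Multiplying by 4 is exact in binary floating point, so the only
# error is the rounding of atan(1.0) itself.
assert abs(pi_from_atan - math.pi) < 1e-15
```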

.......
.......
RJF: Sorry if the above comments sound overly argumentative.  I realize they
are in general not particularly constructive. 
I believe the group here at UCB will be making headway in many 
of the directions required as part of the IEEE support.

∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
Date: 15 January 1982 1124-EST (Friday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Multiple Values
CC: Scott.Fahlman at CMU-10A
Message-Id: <15Jan82 112415 SF50@CMU-10A>


I hate to rock the boat, but I would like to re-open one of the issues
supposedly settled at the November meeting, namely issue 55: whether to
go with the simple Lisp Machine style multiple-value receving forms, or
to go with the more complex forms in the Swiss Cheese Edition, which
provide full lambda-list syntax.

My suggestion was that we go with the simple forms and also provide the
Multiple-Value-Call construct, which more or less subsumes the
interesting uses for the Lambda-list forms.  The latter is quite easy
to implement, at least in Spice Lisp and I believe also in Lisp Machine
Lisp: you open the specified function call frame, evaluate the
arguments (which may return multiples) leaving all returned values on
the stack, then activate the call.  The normal argument-passing
machinery  (which is highly optimized) does all the lambda grovelling.
Furthermore, since this is only a very slight variation on a normal
function call, we should not be screwed in the future by unanticipated
interactions between this and, say, the declaration mechanism.
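The shape of Multiple-Value-Call (every value returned by one form
becomes a successive argument to a function) has a familiar analogue in
other languages. This Python sketch is only an analogy for the construct's
semantics, not for the Spice Lisp stack mechanism Fahlman describes:

```python
def quotient_remainder(a, b):
    # Returns "multiple values" as a tuple, the usual Python idiom.
    return a // b, a % b

def reassemble(q, r):
    return q * 10 + r

# Like (MULTIPLE-VALUE-CALL #'reassemble (quotient-remainder 17 5)):
# all returned values are spread as successive arguments, and the
# ordinary argument-passing machinery binds q and r.
result = reassemble(*quotient_remainder(17, 5))
assert result == 32   # q = 3, r = 2
```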

Much to my surprise, the group's decision was to go with all of the
above, but also to require that the lambda-hacking forms be supported.
This gives me real problems.  Given the M-V-CALL construct, I think
that these others are quite useless and likely to lead to many bad
interactions: this is now the only place where general lambda-lists have
to be grovelled outside of true function calls and defmacro.  I am not
willing to implement yet another variation on lambda-grovelling
just to include these silly forms, unless someone can show me that they
are more useful than I think they are.

The November vote might reflect the notion that M-V-LET and M-V-SETQ
can be implemented merely as special cases of M-V-CALL.  Note however,
that the bodies of the M-V-LET and M-V-SETQ forms are defined as
PROGNs, and will see a different set of local variables than they would
see if turned into a function to be called.  At least, that will be the
case unless Guy can come up with some way of hacking lexical closures
so as to make embedded lambdas see the lexical binding environment in
which they are defined.  Right now, for me at least, it is unclear
whether this can be done for all Common Lisp implementations with low
enough cost that we can make it a required feature.  In the meantime, I
think it is a real mistake to include in the language any constructs
that require a successful solution to this problem if they are to be
implemented decently.

So my vote (with the maximum number of exclamation points) continues to
be that Common Lisp should include only the Lisp Machine style forms,
plus M-V-CALL of multiple arguments.  Are the other forms really so
important to the rest of you?

All in all, I think that the amount of convergence made in the November
meeting was really remarkable, and that we are surprisingly close to
winning big on this effort.

-- Scott

∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
Date: 15 January 1982 12:14-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: multiple values.
To: Scott.Fahlman at CMU-10A
cc: Common-lisp at SU-AI

[1] I think your last note has some incorrect assumptions about how
    the procedure call mechanism will work on future Lisp machines.
    Not that the assumption isn't reasonable, but as I recall the procedure
    ARGUMENT mechanism and the mechanism for passing back
    the FIRST VALUE was designed to be inconsistent with the mechanism
    for passing the rest of the values. This puts a whole different
    perspective on the language semantics.
[2] At least one implementation, NIL, guessed that there would be
    demand in the future for various lambda extensions, so a
    sufficiently general lambda-grovelling mechanism was painlessly
    introduced from the beginning.

∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
Date: Saturday, 16 January 1982, 02:36-EST
From: David A. Moon <Moon at MIT-MC>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
Cc: common-lisp at su-ai

We are planning for implementation of the new multiple-value receiving
forms with &optional and &rest, on the L machine, but are unlikely to
be able to implement them on the present Lisp machine without a significant
amount of work.  I would just as soon see them flushed, but am willing
to implement them if the consensus is to keep them.

If by lambda-grovelling you mean (as GJC seems to think you mean) a
subroutine in the compiler that parses out the &optionals, that is about
0.5% of the work involved.  If by lambda-grovelling you mean the generated
code in a compiled function that takes some values and defaults the
unsupplied optionals, indeed that is where the hair comes in, since in
most implementations it can't be -quite- the same as the normal function-entry
case of what might seem to be the same thing.

∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
Date: 16 January 1982 0930-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: David A. Moon <Moon at MIT-MC> 
Subject:  Re: Multiple Values
CC: common-lisp at su-ai
In-Reply-To:  David A. Moon's message of 16 Jan 82 02:36-EST
Message-Id: <16Jan82 093009 SF50@CMU-10A>


As Moon surmises, my concern for "Lambda-grovelling" was indeed about
needing a second, slightly different version of the whole binding and
defaulting and rest-ifying machinery, not about the actual parsing of
the Lambda-list syntax which, as GJC points out, can be mostly put into
a universal function of its own.
-- Scott

∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
Date: Saturday, 16 January 1982, 10:22-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A, common-lisp at su-ai

What Moon says is true: I am writing a compiler, and parsing the
&-mumbles is quite easy compared to generating the code that implements
taking the returned values off of the stack and putting them where they
go while managing to run the default-forms and so on.  I could live
without the &-mumble forms of the receivers, although they seem like
they may be a good idea, and we are willing to implement them if they
appear in the Common Lisp definition.  I would not say that it is
generally an easy feature to implement.

It should be kept in mind that multiple-value-call certainly does not
provide the functionality of the &-mumble forms.  Only rarely do you
want to take all of the values produced by a function and pass them all
as successive arguments to a function.  Often they are some values
computed by the same piece of code, and you want to do completely
different things with each of them.

The goal of the &-mumble forms was to provide the same kind of
error-checking that we have with function calling.  Interlisp has no
such error-checking on function calls, which seems like a terrible thing
to me; the argument is that the same should hold for returned values.
I'm not convinced by that argument, but it has some merit.

∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
Date: 16 January 1982 17:11-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
cc: common-lisp at SU-AI

I mostly agree with SEF.

Better than a separate function M-V-CALL would be a new option to the
function CALL that allows one or more of several arg-forms to be
treated a la M-V-CALL.  Then it is possible to have more than one arg
form, all of whose values become separate args, intermixed with lists
of evaluated args, and ordinary args; but it is not really any harder
to implement than M-V-CALL alone.

[Background note: the Lisp machine function CALL takes alternating
options and arg-forms.  Each option says how to pass the following
arg-form.  It is either a symbol or a list of symbols.  Symbols now
allowed are SPREAD and OPTIONAL.  SPREAD means pass the elements of
the value as args.  OPTIONAL means do not get an error if the function
being called doesn't want the args.  This proposal is to add VALUES as
an alternative to SPREAD, meaning pass all values of the arg form as
args.]

If the &-keyword multiple value forms are not going to be implemented
on the current Lisp machine, that is an additional reason to keep them
out of Common Lisp, given that they are not vitally necessary for
anything.

∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
Date: 16 January 1982 2333-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Keyword sequence fns
Message-Id: <16Jan82 233312 SF50@CMU-10A>


My proposal for keyword-style sequence functions can be found on CMUA as

TEMP:NEWSEQ.PRE[C380SF50]

or as

TEMP:NEWSEQ.DOC[C380SF50]

Fire away.
-- Scott

∂17-Jan-82  1756	Guy.Steele at CMU-10A 	Sequence functions    
Date: 17 January 1982 2056-EST (Sunday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Sequence functions
Message-Id: <17Jan82 205656 GS70@CMU-10A>

Here is an idea I would like to bounce off people.

The optional arguments given to the sequence functions are of two general
kinds: (1) specify subranges of the sequences to operate on; (2) specify
comparison predicates.  These choices tend to be completely orthogonal
in that it would appear equally likely to want to specify (1) without (2)
as to want to specify (2) without (1).  Therefore it is probably not
acceptable to choose a fixed order for them as simple optional arguments.

It is this problem that led me to propose the "functional-style" sequence
functions.  The minor claimed advantage was that the generated functions
might be useful as arguments to other functionals, particularly MAP.  The
primary motivation, however, was that this would syntactically allow
two distinct places for optional arguments, as:
   ((FREMOVE ...predicate optionals...) sequence ...subrange optionals...)

Here I propose to solve this problem in a different way, which is simply
to remove the subrange optionals entirely.  If you want to operate on a
subsequence, you have to use SUBSEQ to specify the subrange.  (Of course,
this won't work for the REPLACE function, which is in-place destructive.)
Given this, consistently reorganize the argument list so that the sequence
comes first.  This would give:
	(MEMBER SEQ #'EQL X)
	(MEMBER SEQ #'NUMBERP)
and so on.

Disadvantages:
(1) Unfamiliar argument order.
(2) Using SUBSEQ admittedly is not as efficient as the subrange arguments
("but a good compiler could...").
(3) This doesn't allow you to elide EQL or EQUAL or whatever the chosen
default is.

Any takers?
--Guy




∂17-Jan-82  2207	Earl A. Killian <EAK at MIT-MC> 	Sequence functions    
Date: 17 January 1982 23:01-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Sequence functions
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

Using subseq instead of additional arguments is of course what
other languages do, and it is quite tasteful in those languages
because creating a subsequence doesn't cons.  In Lisp it
does, which makes a lot of difference.  Unless you're willing to
GUARANTEE that the consing will be avoided, I don't think the
proposal is acceptable.  Consider a TECO-style buffer management
scheme that wanted to use string-replace to copy stuff around; it'd be
terrible if it consed the stuff it wanted to move!
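The copy-versus-view distinction Killian is worried about shows up in
most languages with general sequences. A Python sketch, offered only as
an analogy for SUBSEQ's consing versus a non-consing subrange:

```python
data = bytearray(b"hello world")

# Slicing allocates a fresh copy: the "consing" that SUBSEQ would do.
copy = bytes(data[0:5])

# A view shares storage with the buffer: the non-consing alternative.
view = memoryview(data)[0:5]

# Mutate the underlying buffer in place (same length, so no resize).
data[0:5] = b"HELLO"

assert copy == b"hello"         # the copy is unaffected by the mutation
assert bytes(view) == b"HELLO"  # the view tracks the buffer without copying
```

For the TECO-style buffer editor in the example, only the view-like
behavior is acceptable; a copying SUBSEQ would cons the very text it
meant to move.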

∂18-Jan-82  0235	Richard M. Stallman <RMS at MIT-AI> 	subseq and consing
Date: 18 January 1982 05:25-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: subseq and consing
To: common-lisp at SU-AI

Even if SUBSEQ itself conses,
if you offer compiler optimizations which take expressions
where sequence functions are applied to calls to subseq
and turn them into calls to other internal functions which
take extra args and avoid consing, this is good enough
in efficiency and provides the same simplicity in user interface.

While on the subject, how about eliminating all the functions
to set this or that from the language description
(except a few for Maclisp compatibility) and making SETF
the only way to set anything?
The only use for the setting-functions themselves, as opposed
to SETF, is to pass to a functional--they are more efficient perhaps
than a user-written function that just uses SETF.  However, such
user-written functions that only use SETF can be made to expand
into the internal functions which exist to do the dirty work.
This change would greatly simplify the language.

∂18-Jan-82  0822	Don Morrison <Morrison at UTAH-20> 	Re: subseq and consing  
Date: 18 Jan 1982 0918-MST
From: Don Morrison <Morrison at UTAH-20>
Subject: Re: subseq and consing
To: RMS at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 18-Jan-82 0325-MST

And, after you've eliminated all the setting functions/forms, including
SETQ, change the name from SETF to SETQ.
-------

∂02-Jan-82  0908	Griss at UTAH-20 (Martin.Griss) 	Com L  
Date:  2 Jan 1982 1005-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Com L
To: guy.steele at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

I have retrieved the revisions and decisions, will look them over.
I will try to set up arrangements to be at POPL Monday-Wednesday,
depending on flights.

What is the Common LISP schedule, next meeting, etc.? Will we be invited to
attend, or is this one of the topics for us to discuss at POPL?
What in fact are we to discuss, and what should I be thinking about?
As I explained, I hope to finish this round of PSL implementation
on DEC-20, VAX and maybe even first version on 68000 by then.
We then will fill in some missing features, and start bringing up REDUCE,
meta-compiler, BIGfloats, and PictureRLISP graphics. At that point I
have accomplished a significant amount of my NSF goals this year.

Next step is to significantly improve PSL and SYSLISP, and merge with the Mode
Analysis phase for improved LISP<->SYSLISP communications and efficiency.

At the same time, we will be looking over various LISP systems to see what
good features can be adapted, and what sort of compatibility packages can be
provided (e.g., a UCI-LISP package, a FranzLISP package, etc.).

It's certainly in this phase that I could easily attempt to modify PSL to
provide a Common LISP kernel, assuming that we have not already adapted much
of the code.
M
-------

∂14-Jan-82  0732	Griss at UTAH-20 (Martin.Griss) 	Common LISP 
Date: 14 Jan 1982 0829-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Common LISP
To: guy.steele at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

I just received a message from Hedrick, regarding his project of doing an
extended-addressing Common LISP on the DEC-20; it also refers to
CMU doing the VAX version. I thought one of the possibilities we
were to discuss was whether we might become involved in doing the
VAX version? Is this true? I.e., what do you see as the possible routes
of joint work?
Martin
-------

∂14-Jan-82  2032	Jonathan A. Rees <JAR at MIT-MC>   
Date: 14 January 1982 23:32-EST
From: Jonathan A. Rees <JAR at MIT-MC>
To: GLS at MIT-MC
cc: BROOKS at MIT-MC, RPG at MIT-MC

We've integrated your changes to the packing phase into our
code... we'll see pretty soon whether the new preferencing stuff works.
I've written a fancy new closure analysis phase which you might be
interested in snarfing at some point.  Much smarter than RABBIT about
SETQ'ed closed-over variables.
Using NODE-DISPATCH now.  Win.
I now have an ALIASP slot in the NODE structure, and the ALIAS-IF-SAFE
analysis has been moved into TARGETIZE-CALL-PRIMOP.  I'm debugging
that now.  This means the DEPENDENTS slot goes away.  I'm trying to
get e.g. (RPLACA X (FOO)) where X must be in a register (because
it's an RPLACA) and (FOO) is a call to an unknown function (and thus
clobbers all regs) to work fairly efficiently in all cases.
In fact I've rewritten a lot of TARGETIZE...

Does the <S1LISP.COMPILER> directory still exist?  I can't seem to read
it from FTP.  Has anyone done more work on S1COMP?

The T project, of course, is behind schedule.  As I told you before,
a toy interpreter runs on the Vax, but so far nothing besides
a read-factorial-print loop runs on the 68000.  But soon, I hope,...

∂15-Jan-82  0109	RPG   	Rutgers lisp development project 
 ∂14-Jan-82  1625	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Rutgers lisp development project    
Mail-from: ARPANET site RUTGERS rcvd at 13-Jan-82 2146-PST
Date: 14 Jan 1982 0044-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Rutgers lisp development project
To: bboard at RUTGERS, griss at UTAH-20, admin.mrc at SU-SCORE, jsol at RUTGERS
Remailed-date: 14 Jan 1982 1622-PST
Remailed-from: Mark Crispin
Remailed-to: Feigenbaum at SUMEX-AIM, REG at SU-AI

It now appears that we are going to do an implementation of Common Lisp
for the DEC-20.  This project is being funded by DEC.

		Why are we doing this project at all?

This project is being done because a number of our researchers are going
to want to be able to move their programs to other systems than the
DEC-20.  We are proposing to get personal machines over the next few
years.  SRI has already run into problems in trying to give AIMDS to
someone who only has a VAX.  Thus we think our users are going to want
to move to a dialect that is widely portable.

Also, newer dialects have some useful new features.  Although these
features can be put into Elisp, doing so will introduce
incompatibilities with old programs.  R/UCI Lisp already has too many
inconsistencies introduced by its long history.  It is probably better
to start with a dialect that has been designed in a coherent fashion.

			Why Common Lisp?

There are only three dialects of Lisp that are in wide use within the
U.S. on a variety of systems:  Interlisp, meta-Maclisp, and Standard
Lisp.  (By meta-Maclisp I mean a family of dialects that are all
related to Maclisp and generally share ideas.)  Of these, Standard Lisp
has a reputation of not being as "rich" a language, and in fact is not
taken seriously by most sites.  This is not entirely fair, but there is
probably nothing we can do about that fact at this stage. So we are left
with Interlisp and meta-Maclisp.  A number of implementors from the
Maclisp family have gotten together to define a common dialect that
combines the best features of their various dialects, while still being
reasonable in size.  A manual is being produced for it, and once
finished will remain reasonably stable.  (Can you believe it?
Documentation before coding!)  This dialect is now called Common Lisp.
The advantages of Common Lisp over Interlisp are:

  - outside of BBN and Xerox, the Lisp development efforts now going on
	all seem to be in the Maclisp family, and now are being
	redirected towards Common Lisp.  These efforts include 
	CMU, the Lisp Machine companies (Symbolics, LMI), LRL and MIT.

  - Interlisp has some features, particularly the spaghetti stack,
	that make it impossible to implement as efficiently and cleanly
	as Common Lisp.  (Note that it is possible to get as good
	efficiency out of compiled code if you do not use these features,
	and if you use special techniques when compiling.  However that
	doesn't help the interpreter, and is not as clean.)

  - Because of these complexities in Interlisp, implementation is a
	large and complex job.  ARPA funded a fairly large effort at
	ISI, and even that seems to be marginal.  This comment is based
	on the report on the ISI project produced by Larry Masinter,
	<lisp>interlisp-vax-rpt.txt.  Our only hope would be to take
	the ISI implementation and attempt to transport it to the 20.
	I am concerned that the result of this would be extremely slow.
	I am also concerned that we might turn out not to have the
	resources necessary to do a good job of it.

  - There seems to be a general feeling that Common Lisp will have a
	number of attractive features as a language.  (Notice that I am
	not talking about user facilities, which will no doubt take some
	time before they reach the level of Interlisp.)  Even people
	within Arpa are starting to talk about it as the language of the
	future.  I am not personally convinced that it is seriously
	superior to Interlisp, but it is as good (again, at the language
	level), and the general Maclisp community seems to have a number
	of ideas that are significantly in advance of what is likely to
	show up in Interlisp with the current support available for it.

There are two serious disadvantages of Common Lisp:

  - It does not exist yet.  As of this week, there now seem to be
	sufficient resources committed to it that we can be sure it will
	be implemented.  The following projects are now committed, at a
	level sufficient for success:  VAX (CMU), DEC-20 (Rutgers), PERQ
	and other related machines (CMU), Lisp Machine (Symbolics), S-1
	(LRL).  I believe this is sufficient to give the language a
	"critical mass".

  - It does not have user facilities defined for it.  CMU is heavily
	committed to the Spice (PERQ) implementation, and will produce
	the appropriate tools.  They appear to be funded sufficiently
	that this will happen.

		 Why is DEC funding it, and what will be
		 	our relationship with them?

LCG (the group within DEC that is responsible for the DEC-20) is
interested in increasing the software that will support the full 30-bit
address space possible in the DEC-20 architecture.  (Our current
processor will only use 23 bits of this, but this is still much better
than what was supported by the old software, which is 18 bits.)  They
are proceeding at a reasonable rate with the software that is supported
by DEC.  However they recognize that many important languages were
developed outside of DEC, and that it will not be practical for them
to develop large-address-space implementations of all of them in-house.
Thus DEC is attempting to find places that are working on the more
important of these languages, and they are funding efforts to develop
large address versions.  They are sponsoring us for Lisp, and Utah
for C.  Pascal is being done in a slightly complex fashion.  (In fact
some of our support from DEC is for Pascal.)

DEC does not expect to make money directly from these projects.  We will
maintain control over the software we develop, and could sell support
for it if we wanted to. We are, of course, expected to make the software
widely available. (Most likely we will submit it to DECUS but also
distribute it ourselves.)  What DEC gets out of it is that the large
address space DEC-20 will have a larger variety of software available
for it than otherwise.  I believe this will be an important point for
them in the long run, since no one is going to want to buy a machine for
which only the Fortran compiler can generate programs larger than 256K.
Thus they are facing the following facts:
  - they can't do things in house nearly as cheaply as universities
	can do them.
  - universities are no longer being as well funded to do language
	development, particularly not for the DEC-20.

			How will we go about it?

We have sufficient funding for one full-time person and one RA.  Both
DEC and Rutgers are very slow about paperwork.  But these people should
be in place sometime early this semester.  The implementation will
involve a small kernel, in assembly language, with the rest done in
Lisp.  We will get the Lisp code from CMU, and so will only have to do
the kernel.  This project seems to be the same size as the Elisp
project, which was done within a year using my spare time and a month or
so of Josh's time.  It seems clear that we have sufficient manpower. (If
you think maybe we have too much, I can only say that if we finish the
kernel sooner than planned, we will spend the time working on user
facilities, documentation, and helping users here convert to it.) CMU
plans to finish the VAX project with a preliminary version in
6 months and a polished release in a year.  Our target is similar.
-------

∂15-Jan-82  0850	Scott.Fahlman at CMU-10A 	Multiple Values    
Date: 15 January 1982 1124-EST (Friday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Multiple Values
CC: Scott.Fahlman at CMU-10A
Message-Id: <15Jan82 112415 SF50@CMU-10A>


I hate to rock the boat, but I would like to re-open one of the issues
supposedly settled at the November meeting, namely issue 55: whether to
go with the simple Lisp Machine style multiple-value receiving forms, or
to go with the more complex forms in the Swiss Cheese Edition, which
provide full lambda-list syntax.

My suggestion was that we go with the simple forms and also provide the
Multiple-Value-Call construct, which more or less subsumes the
interesting uses for the Lambda-list forms.  The latter is quite easy
to implement, at least in Spice Lisp and I believe also in Lisp Machine
Lisp: you open the specified function call frame, evaluate the
arguments (which may return multiples) leaving all returned values on
the stack, then activate the call.  The normal argument-passing
machinery  (which is highly optimized) does all the lambda grovelling.
Furthermore, since this is only a very slight variation on a normal
function call, we should not be screwed in the future by unanticipated
interactions between this and, say, the declaration mechanism.

Much to my surprise, the group's decision was to go with all of the
above, but also to require that the lambda-hacking forms be supported.
This gives me real problems.  Given the M-V-CALL construct, I think
that these others are quite useless and likely to lead to many bad
interactions: this is now the only place where general lambda-lists have
to be grovelled outside of true function calls and defmacro.  I am not
willing to implement yet another variation on lambda-grovelling
just to include these silly forms, unless someone can show me that they
are more useful than I think they are.

The November vote might reflect the notion that M-V-LET and M-V-SETQ
can be implemented merely as special cases of M-V-CALL.  Note however,
that the bodies of the M-V-LET and M-V-SETQ forms are defined as
PROGNs, and will see a different set of local variables than they would
see if turned into a function to be called.  At least, that will be the
case unless Guy can come up with some way of hacking lexical closures
so as to make embedded lambdas see the lexical binding environment in
which they are defined.  Right now, for me at least, it is unclear
whether this can be done for all Common Lisp implementations with low
enough cost that we can make it a required feature.  In the meantime, I
think it is a real mistake to include in the language any constructs
that require a successful solution to this problem if they are to be
implemented decently.

So my vote (with the maximum number of exclamation points) continues to
be that Common Lisp should include only the Lisp Machine style forms,
plus M-V-CALL of multiple arguments.  Are the other forms really so
important to the rest of you?

All in all, I think that the amount of convergence made in the November
meeting was really remarkable, and that we are surprisingly close to
winning big on this effort.

-- Scott
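For reference, the two styles under discussion can be sketched with the long-form names (a sketch only, assuming an implementation that provides both constructs; the lambda-list forms being debated would fold the &optional handling into the receiving form itself):

```lisp
;; The simple Lisp Machine style receiving form binds a fixed set of
;; variables to the values of a single form:
(multiple-value-bind (quotient remainder)
    (floor 7 2)
  (list quotient remainder))                ; => (3 1)

;; MULTIPLE-VALUE-CALL opens the call frame, pushes all the values
;; returned by the argument forms, and activates the call, so the
;; ordinary argument-passing machinery does the lambda-grovelling --
;; including &optional defaulting:
(multiple-value-call #'(lambda (q r &optional (scale 1))
                         (* scale (+ q r)))
  (floor 7 2))                              ; => 4
```

This is the sense in which M-V-CALL subsumes the lambda-list receiving forms: any lambda-list may appear in the called function, at the cost of the scoping difference Fahlman notes.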

∂15-Jan-82  0913	George J. Carrette <GJC at MIT-MC> 	multiple values.   
Date: 15 January 1982 12:14-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: multiple values.
To: Scott.Fahlman at CMU-10A
cc: Common-lisp at SU-AI

[1] I think your last note has some incorrect assumptions about how
    the procedure call mechanism will work on future Lisp machines.
    Not that the assumption isn't reasonable, but as I recall the procedure
    ARGUMENT mechanism and the mechanism for passing back
    the FIRST VALUE was designed to be inconsistent with the mechanism
    for passing the rest of the values. This puts a whole different
    perspective on the language semantics.
[2] At least one implementation, NIL, guessed that there would be
    demand in the future for various lambda extensions, so a
    sufficiently general lambda-grovelling mechanism was painlessly
    introduced from the beginning.

∂15-Jan-82  2352	David A. Moon <Moon at MIT-MC> 	Multiple Values   
Date: Saturday, 16 January 1982, 02:36-EST
From: David A. Moon <Moon at MIT-MC>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
Cc: common-lisp at su-ai

We are planning for implementation of the new multiple-value receiving
forms with &optional and &rest, on the L machine, but are unlikely to
be able to implement them on the present Lisp machine without a significant
amount of work.  I would just as soon see them flushed, but am willing
to implement them if the consensus is to keep them.

If by lambda-grovelling you mean (as GJC seems to think you mean) a
subroutine in the compiler that parses out the &optionals, that is about
0.5% of the work involved.  If by lambda-grovelling you mean the generated
code in a compiled function that takes some values and defaults the
unsupplied optionals, indeed that is where the hair comes in, since in
most implementations it can't be -quite- the same as the normal function-entry
case of what might seem to be the same thing.
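The hairy part Moon describes, taking however many values actually arrive and defaulting the unsupplied optionals, can be sketched as a source expansion (a hypothetical expansion for illustration; a real implementation would count values on the stack rather than consing a list):

```lisp
;; Hypothetical expansion of a receiving form with lambda-list syntax:
;;   (M-V-LET (A &OPTIONAL (B 10) &REST C) (FORM) . BODY)
;; The values are captured, then each unsupplied optional is defaulted
;; explicitly -- almost, but not quite, normal function entry.
(let* ((vals (multiple-value-list (values 1)))  ; stand-in for FORM
       (a (first vals))
       (b (if (rest vals) (second vals) 10))    ; default unsupplied B
       (c (cddr vals)))                         ; &REST collects the rest
  (list a b c))
;; => (1 10 NIL)
```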

∂16-Jan-82  0631	Scott.Fahlman at CMU-10A 	Re: Multiple Values
Date: 16 January 1982 0930-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: David A. Moon <Moon at MIT-MC> 
Subject:  Re: Multiple Values
CC: common-lisp at su-ai
In-Reply-To:  David A. Moon's message of 16 Jan 82 02:36-EST
Message-Id: <16Jan82 093009 SF50@CMU-10A>


As Moon surmises, my concern for "Lambda-grovelling" was indeed about
needing a second, slightly different version of the whole binding and
defaulting and rest-ifying machinery, not about the actual parsing of
the Lambda-list syntax which, as GJC points out, can be mostly put into
a universal function of its own.
-- Scott

∂16-Jan-82  0737	Daniel L. Weinreb <DLW at MIT-AI> 	Multiple Values
Date: Saturday, 16 January 1982, 10:22-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A, common-lisp at su-ai

What Moon says is true: I am writing a compiler, and parsing the
&-mumbles is quite easy compared to generating the code that implements
taking the returned values off of the stack and putting them where they
go while managing to run the default-forms and so on.  I could live
without the &-mumble forms of the receivers, although they seem like
they may be a good idea, and we are willing to implement them if they
appear in the Common Lisp definition.  I would not say that it is
generally an easy feature to implement.

It should be kept in mind that multiple-value-call certainly does not
provide the functionality of the &-mumble forms.  Only rarely do you
want to take all of the values produced by a function and pass them all
as successive arguments to a function.  Often they are some values
computed by the same piece of code, and you want to do completely
different things with each of them.

The goal of the &-mumble forms was to provide the same kind of
error-checking that we have with function calling.  Interlisp has no
such error-checking on function calls, which seems like a terrible thing
to me; the argument says that the same holds true of returned values.
I'm not convinced by that argument, but it has some merit.

∂16-Jan-82  1252	Griss at UTAH-20 (Martin.Griss) 	Kernel for Common LISP    
Date: 16 Jan 1982 1347-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Kernel for Common LISP
To: guy.steel at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

What was actually decided about a "small" common kernel, with the rest
in LISP?  Were core functions identified?  This is the first place that
my work and expertise will strongly overlap; the smaller the
kernel, and the more jazzy features that can be implemented
in terms of it, the better. 

Have you sent out a revised Ballot, or are there pending questions that
the "world-at-large" should respond to (as opposed to the ongoing
group that has been making decisions). The last bit about the
lambda stuff for multiples is pretty obscure, seems to depend on
a model that was discussed, but not documented (as far as I can see).

In general, where are the proposed solutions to the hard implementation
issues being recorded?
Martin
-------

∂16-Jan-82  1415	Richard M. Stallman <RMS at MIT-AI> 	Multiple Values   
Date: 16 January 1982 17:11-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Multiple Values
To: Scott.Fahlman at CMU-10A
cc: common-lisp at SU-AI

I mostly agree with SEF.

Better than a separate function M-V-CALL would be a new option to the
function CALL that allows one or more of several arg-forms to be
treated a la M-V-CALL.  Then it is possible to have more than one arg
form, all of whose values become separate args, intermixed with lists
of evaluated args, and ordinary args; but it is not really any harder
to implement than M-V-CALL alone.

[Background note: the Lisp machine function CALL takes alternating
options and arg-forms.  Each option says how to pass the following
arg-form.  It is either a symbol or a list of symbols.  Symbols now
allowed are SPREAD and OPTIONAL.  SPREAD means pass the elements of
the value as args.  OPTIONAL means do not get an error if the function
being called doesn't want the args.  This proposal is to add VALUES as
an alternative to SPREAD, meaning pass all values of the arg form as
args.]

If the &-keyword multiple value forms are not going to be implemented
on the current Lisp machine, that is an additional reason to keep them
out of Common Lisp, given that they are not vitally necessary for
anything.
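RMS's extended CALL can be emulated, inefficiently, with APPLY: each ordinary argument contributes one element, each SPREAD argument its elements, and each VALUES argument all the values of its form.  A sketch as a macro (CALL* and its option names are hypothetical, chosen to avoid the real CALL):

```lisp
(defmacro call* (fn &rest clauses)
  ;; Each clause is (OPTION FORM), with OPTION one of ARG, SPREAD, VALUES.
  `(apply ,fn
          (append
           ,@(mapcar #'(lambda (clause)
                         (destructuring-bind (option form) clause
                           (ecase option
                             (arg    `(list ,form))
                             (spread `(copy-list ,form))
                             (values `(multiple-value-list ,form)))))
                     clauses))))

;; (call* #'list (arg 'a) (spread '(b c)) (values (floor 7 2)))
;; => (A B C 3 1)
```

The real attraction of the proposal is that a compiler can open the call frame and push these arguments directly, with none of the consing this emulation does.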

∂16-Jan-82  2033	Scott.Fahlman at CMU-10A 	Keyword sequence fns    
Date: 16 January 1982 2333-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Keyword sequence fns
Message-Id: <16Jan82 233312 SF50@CMU-10A>


My proposal for keyword-style sequence functions can be found on CMUA as

TEMP:NEWSEQ.PRE[C380SF50]

or as

TEMP:NEWSEQ.DOC[C380SF50]

Fire away.
-- Scott

∂17-Jan-82  0618	Griss at UTAH-20 (Martin.Griss) 	Agenda 
Date: 17 Jan 1982 0714-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Agenda
To: guy.Steele at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

Still haven't any indication from you guys as to what we should be discussing;
i.e., what should I be thinking about as our possible mode of interaction with
the common Lispers?
M
-------

I had been deferring to GLS on this by silence, but let me tell you my thoughts
on the current situation.

First, the DEC/Rutgers things took me somewhat by surprise. I know that Hedrick
thinks very highly of the Standard Lisp stuff, and I wouldn't mind seeing
a joint effort from the Common Lisp core people, Dec/Rutgers, and Utah.

From the Utah connection I would like to see a clean looking virtual machine,
a set of Lisp code to implement the fluff from Common Lisp, and a reasonable
portable type of compiler.

By `connection' I mean Utah providing the virtual machine for a few specific
computers, Common Lisp core people providing most of the Lisp code, and
maybe S-1 and Utah providing the compiler.

Even with Dec/Rutgers doing the Vax/20 versions, Utah provides us with
the expertise to do many other important, but bizarre machines, such as
68k based machines, IBM equipment, and Burroughs, to name a few. Perhaps
Rutgers/DEC wouldn't mind working with us all on this.

That is what I would like to discuss for political topics.

For technical topics, the virtual machine specification and the compiler
technology.

			-rpg-
∂17-Jan-82  1751	Feigenbaum at SUMEX-AIM 	more on Interlisp-VAX    
Date: 17 Jan 1982 1744-PST
From: Feigenbaum at SUMEX-AIM
Subject: more on Interlisp-VAX
To:   rindfleisch at SUMEX-AIM, barstow at SUMEX-AIM, bonnet at SUMEX-AIM,
      hart at SRI-KL, csd.hbrown at SU-SCORE
cc:   csd.genesereth at SU-SCORE, buchanan at SUMEX-AIM, lenat at SUMEX-AIM,
      friedland at SUMEX-AIM, pople at SUMEX-AIM, gabriel at SU-AI

Mail-from: ARPANET host USC-ISIB rcvd at 17-Jan-82 1647-PST
Date: 17 Jan 1982 1649-PST
From: Dave Dyer       <DDYER at USC-ISIB>
Subject: Interlisp-VAX report
To: feigenbaum at SUMEX-AIM, lynch at USC-ISIB, balzer at USC-ISIB,
    bengelmore at SRI-KL, nilsson at SRI-AI
cc: rbates at USC-ISIB, saunders at USC-ISIB, voreck at USC-ISIB, mcgreal at USC-ISIB,
    ignatowski at USC-ISIB, hedrick at RUTGERS, admin.mrc at SU-SCORE,
    jsol at RUTGERS, griss at UTAH-20, bboard at RUTGERS, reg at SU-AI

	Addendum to Interlisp-VAX: A report

		Jan 16, 1982


  Since Larry Masinter's "Interlisp-VAX: A Report" is being
used in the battle of LISPs, it is important that it be as
accurate as possible.  This note represents the viewpoint of
the implementors of Interlisp-VAX, as of January 1982.

  The review of the project, and the discussions with other
LISP implementors, that provided the basis for "Interlisp-VAX:
A report", were done in June 1981.  We were given the opportunity
to review and respond to a draft of the report, and had few
objections that were refutable at the time of its writing.

  We now have the advantage of an additional six months' development
effort, and can present as facts what would have been merely
counter arguments at the time.


  We believed at the time, and still believe now, that Masinter's
report is largely a fair and accurate presentation of Interlisp-VAX,
and of the long-term efforts necessary to support it.  However,
a few very important points he made have proven to be inaccurate.


AVAILABILITY AND FUNCTIONALITY
------------------------------

  Interlisp-VAX has been in beta test, here at ISI and at several
sites around the network, since November 13 (a Friday - we weren't worried).
We are planning the first general release for February 1982 - ahead
of the schedule that was in effect in June, 1981.

  The current implementation includes all of the features of Interlisp-10
with very minor exceptions.  There is no noticeable gap in functionality
among Interlisp-10, Interlisp-D, and Interlisp-VAX.

   Among the Interlisp systems we are running here are KLONE, AP3,
HEARSAY, and AFFIRM.

PERFORMANCE
-----------

   Masinter's analysis of the problems of maximizing performance,
both for Interlisp generally and for the VAX particularly, was excellent.
It is now reasonable to quantify the performance based on experience
with real systems.  I don't want to descend into the quagmire of
benchmarking LISPs here, so I'll limit my statements to the most basic.

  CPU speed (on a VAX/780) is currently in the range of 1/4 the speed
of Interlisp-10 (on a KL-10), which we believe is about half the 
asymptotically achievable speed.

   Our rule of thumb for real memory is 1 MB per active user.


-------

∂17-Jan-82  1756	Guy.Steele at CMU-10A 	Sequence functions    
Date: 17 January 1982 2056-EST (Sunday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Sequence functions
Message-Id: <17Jan82 205656 GS70@CMU-10A>

Here is an idea I would like to bounce off people.

The optional arguments given to the sequence functions are of two general
kinds: (1) specify subranges of the sequences to operate on; (2) specify
comparison predicates.  These choices tend to be completely orthogonal
in that it would appear equally likely to want to specify (1) without (2)
as to want to specify (2) without (1).  Therefore it is probably not
acceptable to choose a fixed order for them as simple optional arguments.

It is this problem that led me to propose the "functional-style" sequence
functions.  The minor claimed advantage was that the generated functions
might be useful as arguments to other functionals, particularly MAP.  The
primary motivation, however, was that this would syntactically allow
two distinct places for optional arguments, as:
   ((FREMOVE ...predicate optionals...) sequence ...subrange optionals...)

Here I propose to solve this problem in a different way, which is simply
to remove the subrange optionals entirely.  If you want to operate on a
subsequence, you have to use SUBSEQ to specify the subrange.  (Of course,
this won't work for the REPLACE function, which is in-place destructive.)
Given this, consistently reorganize the argument list so that the sequence
comes first.  This would give:
	(MEMBER SEQ #'EQL X)
	(MEMBER SEQ #'NUMBERP)
and so on.

Disadvantages:
(1) Unfamiliar argument order.
(2) Using SUBSEQ admittedly is not as efficient as the subrange arguments
("but a good compiler could...").
(3) This doesn't allow you to elide EQL or EQUAL or whatever the chosen
default is.

Any takers?
--Guy
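The proposed argument order can be mocked up on top of the existing functions (MEMBER-SEQ is a hypothetical name, used here to avoid redefining MEMBER; this sketches the calling convention only, not an efficient implementation):

```lisp
(defun member-seq (seq pred &optional (x nil x-supplied-p))
  ;; Proposed order: sequence first, then the comparison predicate.
  ;; With two arguments PRED is a one-argument test, as in
  ;; (MEMBER-SEQ SEQ #'NUMBERP); with three it is a two-argument
  ;; comparison applied as (PRED X ELT), as in (MEMBER-SEQ SEQ #'EQL X).
  (if x-supplied-p
      (member-if #'(lambda (elt) (funcall pred x elt)) (coerce seq 'list))
      (member-if pred (coerce seq 'list))))

;; (member-seq '(a b 3 c) #'numberp)  => (3 C)
;; (member-seq '(a b 3 c) #'eql 3)    => (3 C)
```

Note that disadvantage (3) is visible here: the predicate argument cannot be elided.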




∂17-Jan-82  2042	Earl A. Killian <EAK at MIT-MC> 	Sequence functions    
Date: 17 January 1982 23:01-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Sequence functions
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

Using subseq instead of additional arguments is of course what
other languages do, and it is quite tasteful in those languages
because creating a subsequence doesn't cons.  In Lisp it
does, which makes a lot of difference.  Unless you're willing to
GUARANTEE that the consing will be avoided, I don't think the
proposal is acceptable.  Consider a TECO-style buffer manager
that wanted to use string-replace to copy stuff around; it'd be
terrible if it consed the stuff it wanted to move!

∂18-Jan-82  0235	Richard M. Stallman <RMS at MIT-AI> 	subseq and consing
Date: 18 January 1982 05:25-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: subseq and consing
To: common-lisp at SU-AI

Even if SUBSEQ itself conses,
if you offer compiler optimizations which take expressions
where sequence functions are applied to calls to subseq
and turn them into calls to other internal functions which
take extra args and avoid consing, this is good enough
in efficiency and provides the same simplicity in user interface.

While on the subject, how about eliminating all the functions
to set this or that from the language description
(except a few for Maclisp compatibility) and making SETF
the only way to set anything?
The only use for the setting-functions themselves, as opposed
to SETF, is to pass to a functional--they are more efficient perhaps
than a user-written function that just uses SETF.  However, such
user-written functions that only use SETF can be made to expand
into the internal functions which exist to do the dirty work.
This change would greatly simplify the language.
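RMS's claim, that SETF plus lambda covers the uses of the individual setting functions, can be illustrated as follows (SETF-DEMO is a hypothetical name; the lambda passed to the functional is exactly the kind of form he suggests expanding into the internal function that does the dirty work):

```lisp
(defun setf-demo ()
  (let ((v (vector 1 2 3))
        (cells (list (cons 'a 1) (cons 'b 2))))
    (setf (aref v 0) 10)             ; instead of a special array-setter
    (setf (cdr (first cells)) 99)    ; instead of RPLACD
    ;; A setter passed to a functional, written as a lambda over SETF:
    (mapc #'(lambda (cell) (setf (car cell) 'x)) cells)
    (list v cells)))

;; (setf-demo) => (#(10 2 3) ((X . 99) (X . 2)))
```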

∂18-Jan-82  0822	Don Morrison <Morrison at UTAH-20> 	Re: subseq and consing  
Date: 18 Jan 1982 0918-MST
From: Don Morrison <Morrison at UTAH-20>
Subject: Re: subseq and consing
To: RMS at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 18-Jan-82 0325-MST

And, after you've eliminated all the setting functions/forms, including
SETQ, change the name from SETF to SETQ.
-------

∂18-Jan-82  1602	Daniel L. Weinreb <DLW at MIT-AI> 	subseq and consing  
Date: Monday, 18 January 1982, 18:04-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: subseq and consing
To: common-lisp at SU-AI

I agree that GLS's proposal is nice, that it is only acceptable if the
compiler optimizes it, and that it is very easy to optimize.  It is also
extremely clear to the reader of the program, and it cuts down on the
number of arguments that he has to remember.  This sounds OK to me.
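The optimization RMS and DLW have in mind can be sketched with a compiler macro (POSN and %POSN are hypothetical names standing in for a real sequence function and its bounded internal version; the rewrite recognizes only the obvious three-argument SUBSEQ case, as Fahlman's "fairly obvious cases" caveat suggests):

```lisp
(defun %posn (item seq start end)
  ;; Internal workhorse: search SEQ between START and END in place,
  ;; with no copying and hence no consing.
  (do ((i start (+ i 1)))
      ((>= i end) nil)
    (when (eql item (elt seq i)) (return i))))

(defun posn (item seq)
  (%posn item seq 0 (length seq)))

(define-compiler-macro posn (&whole form item seq-form)
  (if (and (consp seq-form)
           (eq (car seq-form) 'subseq)
           (= (length seq-form) 4))          ; (SUBSEQ S LO HI) only
      ;; (POSN X (SUBSEQ S LO HI)) compiles into a bounded search over
      ;; S itself; the result is re-biased to be subsequence-relative.
      (destructuring-bind (s lo hi) (cdr seq-form)
        (let ((g-lo (gensym)) (g-r (gensym)))
          `(let* ((,g-lo ,lo)
                  (,g-r (%posn ,item ,s ,g-lo ,hi)))
             (and ,g-r (- ,g-r ,g-lo)))))
      form))                                 ; decline: leave form alone

;; (posn #\b (subseq "abcabc" 2 6)) => 2, with no copy when compiled.
```

Interpreted code still conses the subsequence, which matches Fahlman's observation that the cost shows up in interpreted code only.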

∂18-Jan-82  2203	Scott.Fahlman at CMU-10A 	Re: Sequence functions  
Date: 19 January 1982 0103-EST (Tuesday)
From: Scott.Fahlman at CMU-10A
To: Guy.Steele at CMU-10A
Subject:  Re: Sequence functions
CC: common-lisp at su-ai
In-Reply-To:  <17Jan82 205656 GS70@CMU-10A>
Message-Id: <19Jan82 010338 SF50@CMU-10A>


Guy,

I agree that the index-range and the comparison-choice parameters are
orthogonal.  I like your proposal to use SUBSEQ for the ranges -- it
would appear to be no harder to optimize this in the compiler than to
do the equivalent keyword or optional argument thing, and the added
consing in interpreted code (only!)  should not make much difference.
And the semantics of what is going on with all the start and end
options now becomes crystal clear.  We would need a style suggestion in
the manual urging the programmer to use SUBSEQ for this and not some
random thing he cooks up, since the compiler will only recognize fairly
obvious cases.  Good idea!

I do not like the part of your proposal that relates to reordering the
arguments, on the grounds of gross incompatibility.  Unless we want to
come up with totally new names for all these functions, the change will
make it a real pain to move code and programmers over from Maclisp or
Franz.  Too high a price to pay for epsilon increase in elegance.  I
guess that of the suggestions I've seen so far, I would go with your
subseq idea for ranges and my keywords for specifying the comparison,
throwing out the IF family.

-- Scott

∂19-Jan-82  1551	RPG  	Suggestion    
To:   common-lisp at SU-AI  
I would like to make the following suggestion regarding the
strategy for designing Common Lisp. I'm not sure exactly how to
implement the strategy, but I think it is imperative we do something
like this soon.

We should separate the kernel from the Lisp based portions of the system
and design the kernel first. Lambda-grovelling, multiple values,
and basic data structures seem kernel. Sequence functions and names
can be done later.

The reason that we should do this is so that the many man-years of effort
to implement a Common Lisp can be done in parallel with the design of
less critical things. 
			-rpg-

∂19-Jan-82  2113	Griss at UTAH-20 (Martin.Griss) 	Re: Suggestion        
Date: 19 Jan 1982 1832-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Re: Suggestion    
To: RPG at SU-AI, common-lisp at SU-AI
cc: Griss at UTAH-20
In-Reply-To: Your message of 19-Jan-82 1651-MST

I agree entirely. In terms of my 2 interests:
a) Implementing Common LISP kernel/compatibility in/for PSL
b) Getting our and other LISP tools working for Common LISP

I would very much like to see a clear effort NOW to isolate some of the
kernel features, and major implementation issues (data-types, user
control over storage manager, etc) so that some of us can implement
a kernel, and others can design extensions.
-------

∂20-Jan-82  1604	David A. Moon <MOON5 at MIT-AI> 	Keyword style sequence functions
Date: 20 January 1982 16:34-EST
From: David A. Moon <MOON5 at MIT-AI>
Subject: Keyword style sequence functions
To: common-lisp at SU-AI

Comments on Fahlman's Proposal for Keyword Style Sequence Functions for
Common Lisp of 16 January 1982

I think this is a good proposal and a step in the right direction.  There
are some problems with it, and also a couple issues that come to mind while
reading it.  I will first make some minor comments to get flamed up, and
then say what I really want to say.


- Negative comments first:

ELT and SETELT should be provided in type-specific versions.

My intuition suggests that MAP would be more readable with the result data
type before the function instead of after.  I don't really feel strongly
about this, but it's a suggestion.

I don't like the idea of flushing CONCAT (catenate) and making TO-LIST
allow multiple arguments, for some reason.

There is a problem with the :compare and :compare-not keywords.  For some
functions (the ones that take two sequences as arguments), the predicate is
really and truly an equality test.  It might be clearer to call it :equal.
For these functions I think it makes little sense to have a :compare-not.
Note that these are the same functions for which :if/:if-not are meaningless.
For other functions, such as POSITION, the predicate may not be a symmetric
equality predicate; you might be trying to find the first number in a list
greater than 50, or the number of astronauts whose grandmothers are not
ethnic Russians.  Here it makes sense to have a :compare-not.  It may actually
make sense to have a :compare keyword for these functions and a :equal
keyword for the others.  I'm not ecstatic about the name compare for this,
but I haven't thought of anything better.  This is only a minor esthetic
issue; I wouldn't really mind leaving things the way they are in Fahlman's
proposal.

Re :start and :end.  A nil value for either of these keywords should be
the same as not supplying it (i.e. the appropriate boundary of the sequence.)
This makes a lot of things simpler.  In :from-end mode, is the :start where
you start processing the sequence or the left-hand end of the subsequence?
In the Lisp machine, it is the latter, but either way would be acceptable.

The optional "count" argument to REMOVE and friends should be a keyword
argument.  This is more uniform, doesn't hurt anything, and is trivially
mechanically translatable from the old way.

The set functions, from ADJOIN through NSET-XOR, should not take keywords.
:compare-not is meaningless for these (unlike say position, where you would
use it to find the first element of a sequence that differed from a given
value).  That leaves only one keyword for these functions.  Also it is
-really- a bad idea to put keywords after an &rest argument (as in UNION).
I would suggest that the equal-predicate be a required first argument for
all the set functions; alternatively it could be an optional third argument
except for UNION and INTERSECTION, or those functions could be changed
to operate on only two sets like the others.  I think EQUAL is likely
to be the right predicate for set membership only in rare circumstances,
so that it would not hurt to make the predicate a required argument and
have no default predicate.

The :eq, :eql, :nequal, etc. keywords are really a bad idea.  The reasons
are:  1) They are non-uniform, with some keywords taking arguments and
some not.  See the tirade about this below.  2) They introduce an artificial
barrier between system-defined and user-defined predicates.  This is always
a bad idea, and here serves no useful purpose.  3) They introduce an
unesthetic interchangeability between foo and :foo, which can lead to
a significant amount of confusion.  If the keyword form of specifying the
predicate is too verbose, I would be much happier with making the predicate
be an optional argument, to be followed by keywords.  Personally I don't
think it is verbose enough to justify that.

There are still a lot of string functions in the Lisp machine not generalized
into sequence functions.  I guess it is best to leave that issue for future
generations and get on with the initial specification of Common Lisp.


- Negative comments not really related to the issue at hand:

"(the :string foo)".  Data type names cannot have colons, i.e. cannot be
keywords.  The reason is that the data type system is user-extensible, at
least via defstruct and certainly via other mechanisms such as flavors in
individual implementations and in future Common extensions.  This means
that it is important to be able to use the package system to avoid name
clashes between data types defined by different programs.  The standard
primitive data type names should be globals (or more exactly, should be
in the same package as the standard primitive functions that operate
on those data types.)

Lisp machine experience suggests that it is really not a good idea to have
some keywords take arguments and other keywords not take arguments.  It's a
bit difficult to explain why.  When you are just using these functions with
their keywords in a syntactic way, i.e. essentially as special forms, it
makes no difference except insofar as it makes the documentation more
confusing.  But when you start having programs processing the keywords,
i.e. using the sequence functions as functions rather than special forms,
all hell breaks loose if the syntax isn't uniform.  I think the slight
ugliness of an extra "t" sometimes is well worth it for the sake of
uniformity and simplicity.  On the Lisp machine, we've gone through an
evolution in the last couple of years in which keywords that don't take
arguments have been weeded out.

I don't think much of the scheme for having keywords be constants.  There
is nothing really bad about this except for the danger of confusing
novices, so I guess I could be talked into it, but I don't think getting
rid of the quote mark is a significant improvement (but perhaps it is in
some funny place on your keyboard, where you can't find it, rather than
lower case and to the right of the semicolon as is standard for
typewriters?)


- Minor positive comments

Making REPLACE take keywords is a good idea.

:start1/:end1/:start2/:end2 is a good idea.

The order of arguments to the compare/compare-not function needs to be
strictly defined (since it is not always a commutative function).  Presumably 
the right thing is to make its arguments come in the same order as the
arguments to the sequence function from which they derive.  Thus for SEARCH
the arguments would be an element of sequence1 followed by an element of
sequence2, while for POSITION the arguments would be the item followed
by an element of the sequence.
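For example (a hypothetical sketch, assuming the :compare keyword of
Fahlman's proposal), the "first number greater than 50" case reads
naturally under this argument order:

```lisp
;; The comparison function for POSITION receives the item first, then
;; an element of the sequence:
(position 50 numbers :compare #'(lambda (item elt) (< item elt)))
;; returns the index of the first element of NUMBERS exceeding 50.
```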

In addition to MEMQ, etc., would it be appropriate to have MEMQL, etc.,
which would use EQL as the comparison predicate?

MEMBER is a better name than POSITION for the predicate that tests for
membership of an element in a sequence, when you don't care about its
position and really want simply a predicate.  I am tempted to propose that
MEMBER be extended to sequences.  Of course, this would be a non-uniform
extension, since the true value would be T rather than a tail of a list (in
other words, MEMBER would be a predicate on sequences but a semi-predicate
on lists.)  This might be a nasty for novices, but it really seems worth
risking that.  Fortunately car, cdr, rplaca, and rplacd of T are errors in
any reasonable implementation, so that accidentally thinking that the truth
value is a list is likely to be caught immediately.


- To get down to the point:

The problems remaining after this proposal are basically two.  One is that there
is still a ridiculous family of "assoc" functions, and the other is that the
three proposed solutions to the -if/-if-not problem (flushing it, having an
optional argument before a required argument, or passing nil as a placeholder)
are all completely unacceptable.

My solution to the first problem is somewhat radical: remove ASSOC and all
its relatives from the language entirely.  Instead, add a new keyword,
:KEY, to the sequence functions.  The argument to :KEY is the function
which is given an element of the sequence and returns its "key", the object
to be fed to the comparison predicate.  :KEY would be accepted by REMOVE,
POSITION, COUNT, MEMBER, and DELETE.  This is the same as the new optional
argument to SORT (and presumably MERGE), which replaced SORTCAR and
SORTSLOT; but I guess we don't want to make those take keywords.  It is
also necessary to add a new sequence function, FIND, which takes arguments
like POSITION but returns the element it finds.  With a :compare of EQ and
no :key, FIND is (almost) trivial, but with other comparisons and/or other
keys, it becomes extremely useful.

The default value for :KEY would be #'ID or IBID or CR, whatever we call
the function that simply returns its argument [I don't like any of those
names much.]  Using #'CAR as the argument gives you ASSOC (from FIND),
MEMASSOC (from MEMBER), POSASSOC (from POSITION), and DELASSOC (from
DELETE).  Using #'CDR as the argument gives you the RASS- forms.  Of
course, usually you don't want to use either CAR or CDR as the key, but
some defstruct structure-element-accessor.
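The claimed equivalences, sketched (hypothetical, since FIND and :KEY are
only proposed here; PERSON-NAME is an invented defstruct accessor):

```lisp
(find x alist :key #'car)       ; the old ASSOC
(position x alist :key #'car)   ; the old POSASSOC
(find x alist :key #'cdr)       ; the RASS- direction
;; The common case: searching on a structure accessor.
(find name personnel :key #'person-name)
```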

In the same way that it may be reasonable to keep MEMQ for historical
reasons and because it is used so often, it is probably good to keep
ASSQ and ASSOC.  But the other a-list searching functions are unnecessary.

My solution to the second problem is to put in separate functions for
the -if and -if-not case.  In fact this is a total of only 10 functions:

	remove-if	remove-if-not	position-if	position-if-not
	count-if	count-if-not	delete-if	delete-if-not
	find-if		find-if-not
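In use these would presumably read as follows (a sketch, with semantics as
proposed):

```lisp
(position-if #'oddp '(2 4 5 6))       ; => 1, index of the first odd element
(remove-if-not #'numberp '(a 1 b 2))  ; => (1 2), keeping only the numbers
(count-if #'(lambda (x) (> x 50)) scores)
```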

MEMBER-IF and MEMBER-IF-NOT are identical to SOME and NOTEVERY if the above
suggestion about extending MEMBER to sequences is adopted, and if my memory
of SOME and NOTEVERY is correct (I don't have a Common Lisp manual here.)
If they are put in anyway, that still makes only 12 functions, which are
really only 6 entries in the manual since -if/-if-not pairs would be
documented together.

∂20-Jan-82  1631	Kim.fateman at Berkeley 	numerics and common-lisp 
Date: 20 Jan 1982 16:29:10-PST
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: numerics and common-lisp

The following stuff was sent a while back to GLS, and seemed to
provoke no comment; although it probably raises more questions
than answers, here goes:

*** Issue 81: Complex numbers. Allow SQRT and LOG to produce results in
whatever form is necessary to deliver the mathematically defined result.

RJF:  This is problematical. The mathematically defined result is not
necessarily agreed upon.  Does Log(0) produce an error or a symbol?
(e.g. |log-of-zero| ?)  If a symbol, what happens when you try to
do arithmetic on it? Does sin(x) give up after some specified max x,
or continue to be a periodic function up to limit of machine range,
as on the HP 34?  Is accuracy specified in addition to precision?
Is it possible to specify rounding modes by flag setting or by
calling specific rounding-versions e.g. (plus-round-up x y) ? Such
features make it possible to implement interval arithmetic nicely.
Can one trap (signal, throw) on underflow, overflow,...
It would be a satisfying situation if common lisp, or at least a
superset of it, could exploit the IEEE standard. (Prof. Kahan would
much rather that language standardizers NOT delve too deeply into this,
leaving the semantics  (or "arithmetics") to specialists.)

Is it the case that a complex number could be implemented by
#C(x y) == (complex x y) ?  in which case  (real z) ==(cadr z),
(etc); Is a complex "atomic" in the lisp sense, or is it
the case that (eq (numerator #C(x y)) (numerator #C(x z)))?
Can one "rplac←numerator"?
If one is required to implement another type of atom for the
sake of rationals and another for complexes,
and another for ratios of complexes, then the
utility of this had better be substantial, and the implementation
cost modest.  In the case of x and y rational, there are a variety of
ways of representing x + i*y.  For example, it
is always possible to rationalize the denominator, but is it
required?
If  #R(1 2)  == (rat 1 2), is it the case that
(numerator r) ==(cadr r) ?  what is the numerator of (1/2+i)?

Even if you insist that all complex numbers are floats, not rationals,
you have multiple precisions to deal with.  Is it allowed to 
compute intermediate results to higher precision, or must one truncate
(or round) to some target precision in-between operations?

.......
Thus (SQRT -1.0) -> #C(0.0 1.0) and (LOG -1.0) -> #C(0.0 3.14159265).
Document all this carefully so that the user who doesn't care about
complex numbers isn't bothered too much.  As a rule, if you only play
with integers you won't see floating-point numbers, and if you only
play with non-complex numbers you won't see complex numbers.
.......
RJF: You've given 2 examples where, presumably, integers
are converted not only into floats, but into complex numbers. Your
rule does not seem to be a useful characterization. 
Note also that, for example, asin(1.5) is complex.

*** Issue 82: Branch cuts and boundary cases in mathematical
functions. Tentatively consider compatibility with APL on the subject of
branch cuts and boundary cases.
.......
RJF:Certainly gratuitous differences with APL, Fortran, PL/I etc are 
not a good idea!
.....

*** Issue 83: Fuzzy numerical comparisons. Have a new function FUZZY=
which takes three arguments: two numbers and a fuzz (relative
tolerance), which defaults in a way that depends on the precision of the
first two arguments.

.......
RJF: Why is this considered a language issue (in Lisp!), when the primary
language for numerical work (Fortran, not APL) has no such feature?  The
computation of absolute and relative errors is sufficiently simple that
not much would be added by making this part of the language.  I believe
the fuzz business is used to cover
up the fact that some languages do not support integers. In such systems,
some computations  result in 1.99999 vs. 2.00000 comparisons, even though
both numbers are "integers". 
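For reference, the relative-tolerance comparison is indeed only a few
lines of user code (a sketch; FUZZY= itself is only proposed, and the
defaulting rule for the fuzz argument is omitted here):

```lisp
(defun fuzzy= (x y fuzz)
  (<= (abs (- x y))
      (* fuzz (max (abs x) (abs y)))))
```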

Incidentally, on "mod" of floats, I think that what you want is
like the "integer-part" of the IEEE proposal.  The EMOD instruction on 
the VAX is a brain-damaged attempt to do range-reductions.
.......

*** Issue 93: Complete set of trigonometric functions? Add ASIN, ACOS,
and TAN.


*** Issue 95: Hyperbolic functions. Add SINH, COSH, TANH, ASINH, ACOSH,
and ATANH.
.....
also useful are log(1+x) and exp(x)-1.  


*** Issue 96: Are several versions of pi necessary? Eliminate the
variables SHORT-PI, SINGLE-PI, DOUBLE-PI, and LONG-PI, retaining only
PI.  Encourage the user to write such things as (SHORT-FLOAT PI),
(SINGLE-FLOAT (/ PI 2)), etc., when appropriate.
......
RJF: huh?  why not #.(times 4 (atan 1.0)),  #.(times 4 (atan 1.0d0)) etc.
It seems you are placing a burden on the implementors and discussants
of common lisp to write such trivial programs when the same thing
could be accomplished by a comment in the manual. Constants like e could
be handled too...

.......
.......
RJF: Sorry if the above comments sound overly argumentative.  I realize they
are in general not particularly constructive. 
I believe the group here at UCB will be making headway in many 
of the directions required as part of the IEEE support, and that Franz
will be extended.

∂20-Jan-82  2008	Daniel L. Weinreb <dlw at MIT-AI> 	Suggestion     
Date: Wednesday, 20 January 1982, 21:04-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Suggestion    
To: RPG at SU-AI, common-lisp at SU-AI

Sounds good, unless it turns out to be difficult to figure out just
which things are the kernel and which aren't.  Also, when the kernel is
designed, things should be set up so that even if some higher-level
function is NOT in the kernel, it is still possible for some
implementations to write a higher-level function in "machine language"
if they want to, without losing when they load in gobs and gobs of
Lisp-coded higher-level stuff.

∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
Date: 20 Jan 1982 22:04:29-PST
From: Kim.fateman at Berkeley
To: dlw@MIT-AI
Subject: adding to kernel
Cc: common-lisp@su-ai

One of the features of Franz which we addressed early on in the
design for the VAX was how we would link to system calls in UNIX, and
provide calling sequences and appropriate data structures for use
by other languages (C, Fortran, Pascal).  An argument could be made
that linkages of this nature could be done by message passing, if
necessary; an argument could be made that  CL will be so universal
that it would not be necessary to make such linkages at all.  I
have not found these arguments convincing in the past, though in
the perspective of a single CL virtual machine running on many machines,
they might seem better. 

I am unclear as to how many implementations of CL are anticipated, also:
for what machines; 
who will be doing them;
who will be paying for the work;
how much it will cost to get a copy (if CL is done "for profit");
how will maintenance and standardization happen (e.g. under ANSI?);

If these questions have been answered previously, please forgive my
ignorance/impertinence.


∂19-Jan-82  2113	Fahlman at CMU-20C 	Re: Suggestion      
Date: 19 Jan 1982 2328-EST
From: Fahlman at CMU-20C
Subject: Re: Suggestion    
To: RPG at SU-AI
In-Reply-To: Your message of 19-Jan-82 1851-EST


Dick,
Your suggestion makes sense for implementations that are just getting started
now, but for those of us who have already got something designed, coded, an
close to up (and that includes most of the implementations that anyone now
cares about) I'm not sure that identifying and concentrating on a kernel is
a good move.  Sequence functions are quite pervasive and I, for one, would
like to see this issue settled soon.  Multiples, on the other hand, are fairly
localized.  Is there some implementation that is being particularly screwed
by the ordering of the current ad hoc agenda?
-- Scott
-------

I think it is possible for us to not define the kernel explicitly but to
identify those decisions that definitely apply to the kernel as opposed to
the non-kernel. It would seem that an established implementation would rather
know now about any changes to its kernel than later. I suggest that the
order of decisions be changed to decide `kernelish' issues first.
			-rpg-
∂20-Jan-82  1604	David A. Moon <MOON5 at MIT-AI> 	Keyword style sequence functions
Date: 20 January 1982 16:34-EST
From: David A. Moon <MOON5 at MIT-AI>
Subject: Keyword style sequence functions
To: common-lisp at SU-AI

Comments on Fahlman's Proposal for Keyword Style Sequence Functions for
Common Lisp of 16 January 1982

I think this is a good proposal and a step in the right direction.  There
are some problems with it, and also a couple issues that come to mind while
reading it.  I will first make some minor comments to get flamed up, and
then say what I really want to say.


- Negative comments first:

ELT and SETELT should be provided in type-specific versions.

My intuition suggests that MAP would be more readable with the result data
type before the function instead of after.  I don't really feel strongly
about this, but it's a suggestion.

I don't like the idea of flushing CONCAT (catenate) and making TO-LIST
allow multiple arguments, for some reason.

There is a problem with the :compare and :compare-not keywords.  For some
functions (the ones that take two sequences as arguments), the predicate is
really and truly an equality test.  It might be clearer to call it :equal.
For these functions I think it makes little sense to have a :compare-not.
Note that these are the same functions for which :if/:if-not are meaningless.
For other functions, such as POSITION, the predicate may not be a symmetric
equality predicate; you might be trying to find the first number in a list
greater than 50, or the number of astronauts whose grandmothers are not
ethnic Russians.  Here it makes sense to have a :compare-not.  It may actually
make sense to have a :compare keyword for these functions and a :equal
keyword for the others.  I'm not ecstatic about the name compare for this,
but I haven't thought of anything better.  This is only a minor esthetic
issue; I wouldn't really mind leaving things the way they are in Fahlman's
proposal.

Re :start and :end.  A nil value for either of these keywords should be
the same as not supplying it (i.e. the appropriate boundary of the sequence.)
This makes a lot of things simpler.  In :from-end mode, is the :start where
you start processing the sequence or the left-hand end of the subsequence?
In the Lisp machine, it is the latter, but either way would be acceptable.

The optional "count" argument to REMOVE and friends should be a keyword
argument.  This is more uniform, doesn't hurt anything, and is trivially
mechanically translatable from the old way.

The set functions, from ADJOIN through NSET-XOR, should not take keywords.
:compare-not is meaningless for these (unlike say position, where you would
use it to find the first element of a sequence that differed from a given
value).  That leaves only one keyword for these functions.  Also it is
-really- a bad idea to put keywords after an &rest argument (as in UNION).
I would suggest that the equal-predicate be a required first argument for
all the set functions; alternatively it could be an optional third argument
except for UNION and INTERSECTION, or those functions could be changed
to operate on only two sets like the others.  I think EQUAL is likely
to be the right predicate for set membership only in rare circumstances,
so that it would not hurt to make the predicate a required argument and
have no default predicate.

The :eq, :eql, :nequal, etc. keywords are really a bad idea.  The reasons
are:  1) They are non-uniform, with some keywords taking arguments and
some not.  See the tirade about this below.  2) They introduce an artificial
barrier between system-defined and user-defined predicates.  This is always
a bad idea, and here serves no useful purpose.  3) They introduce an
unesthetic interchangeability between foo and :foo, which can lead to
a significant amount of confusion.  If the keyword form of specifying the
predicate is too verbose, I would be much happier with making the predicate
be an optional argument, to be followed by keywords.  Personally I don't
think it is verbose enough to justify that.

There are still a lot of string functions in the Lisp machine not generalized
into sequence functions.  I guess it is best to leave that issue for future
generations and get on with the initial specification of Common Lisp.


- Negative comments not really related to the issue at hand:

"(the :string foo)".  Data type names cannot have colons, i.e. cannot be
keywords.  The reason is that the data type system is user-extensible, at
least via defstruct and certainly via other mechanisms such as flavors in
individual implementations and in future Common extensions.  This means
that it is important to be able to use the package system to avoid name
clashes between data types defined by different programs.  The standard
primitive data type names should be globals (or more exactly, should be
in the same package as the standard primitive functions that operate
on those data types.)

Lisp machine experience suggests that it is really not a good idea to have
some keywords take arguments and other keywords not take arguments.  It's a
bit difficult to explain why.  When you are just using these functions with
their keywords in a syntactic way, i.e. essentially as special forms, it
makes no difference except insofar as it makes the documentation more
confusing.  But when you start having programs processing the keywords,
i.e. using the sequence functions as functions rather than special forms,
all hell breaks loose if the syntax isn't uniform.  I think the slight
ugliness of an extra "t" sometimes is well worth it for the sake of
uniformity and simplicity.  On the Lisp machine, we've gone through an
evolution in the last couple of years in which keywords that don't take
arguments have been weeded out.

I don't think much of the scheme for having keywords be constants.  There
is nothing really bad about this except for the danger of confusing
novices, so I guess I could be talked into it, but I don't think getting
rid of the quote mark is a significant improvement (but perhaps it is in
some funny place on your keyboard, where you can't find it, rather than
lower case and to the right of the semicolon as is standard for
typewriters?)


- Minor positive comments

Making REPLACE take keywords is a good idea.

:start1/:end1/:start2/:end2 is a good idea.

The order of arguments to the compare/compare-not function needs to be
strictly defined (since it is not always a commutative function).  Presumably 
the right thing is to make its arguments come in the same order as the
arguments to the sequence function from which they derive.  Thus for SEARCH
the arguments would be an element of sequence1 followed by an element of
sequence2, while for POSITION the arguments would be the item followed
by an element of the sequence.

In addition to MEMQ, etc., would it be appropriate to have MEMQL, etc.,
which would use EQL as the comparison predicate?

MEMBER is a better name than POSITION for the predicate that tests for
membership of an element in a sequence, when you don't care about its
position and really want simply a predicate.  I am tempted to propose that
MEMBER be extended to sequences.  Of course, this would be a non-uniform
extension, since the true value would be T rather than a tail of a list (in
other words, MEMBER would be a predicate on sequences but a semi-predicate
on lists.)  This might be a nasty for novices, but it really seems worth
risking that.  Fortunately car, cdr, rplaca, and rplacd of T are errors in
any reasonable implementation, so that accidentally thinking that the truth
value is a list is likely to be caught immediately.


- To get down to the point:

The problems remaining after this proposal are basically two.  One is that there
is still a ridiculous family of "assoc" functions, and the other is that the
three proposed solutions to the -if/-if-not problem (flushing it, having an
optional argument before a required argument, or passing nil as a placeholder)
are all completely unacceptable.

My solution to the first problem is somewhat radical: remove ASSOC and all
its relatives from the language entirely.  Instead, add a new keyword,
:KEY, to the sequence functions.  The argument to :KEY is the function
which is given an element of the sequence and returns its "key", the object
to be fed to the comparison predicate.  :KEY would be accepted by REMOVE,
POSITION, COUNT, MEMBER, and DELETE.  This is the same as the new optional
argument to SORT (and presumably MERGE), which replaced SORTCAR and
SORTSLOT; but I guess we don't want to make those take keywords.  It is
also necessary to add a new sequence function, FIND, which takes arguments
like POSITION but returns the element it finds.  With a :compare of EQ and
no :key, FIND is (almost) trivial, but with other comparisons and/or other
keys, it becomes extremely useful.

The default value for :KEY would be #'ID or IBID or CR, whatever we call
the function that simply returns its argument [I don't like any of those
names much.]  Using #'CAR as the argument gives you ASSOC (from FIND),
MEMASSOC (from MEMBER), POSASSOC (from POSITION), and DELASSOC (from
DELETE).  Using #'CDR as the argument gives you the RASS- forms.  Of
course, usually you don't want to use either CAR or CDR as the key, but
some defstruct structure-element-accessor.
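As a sketch of how the proposal would read in use (keyword names as used in
this message -- :COMPARE and :KEY were still tentative at this point, and the
final language might choose other names):

```lisp
;; Proposed: FIND with :KEY subsumes the whole ASSOC family.
(setq a-list '((red . 1) (green . 2) (blue . 3)))

;; ASSOC falls out of FIND by keying on the CAR of each element:
(find 'green a-list :compare #'eq :key #'car)    ; => (green . 2)

;; The RASS- (reverse-assoc) forms key on the CDR instead:
(find 2 a-list :compare #'eql :key #'cdr)        ; => (green . 2)

;; More typically the key would be a defstruct accessor, e.g.
;; (find name employees :compare #'string-equal :key #'employee-name)
;; where EMPLOYEE-NAME is a hypothetical structure accessor.
```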

In the same way that it may be reasonable to keep MEMQ for historical
reasons and because it is used so often, it is probably good to keep
ASSQ and ASSOC.  But the other a-list searching functions are unnecessary.

My solution to the second problem is to put in separate functions for
the -if and -if-not case.  In fact this is a total of only 10 functions:

	remove-if	remove-if-not	position-if	position-if-not
	count-if	count-if-not	delete-if	delete-if-not
	find-if		find-if-not

MEMBER-IF and MEMBER-IF-NOT are identical to SOME and NOTEVERY if the above
suggestion about extending MEMBER to sequences is adopted, and if my memory
of SOME and NOTEVERY is correct (I don't have a Common Lisp manual here.)
If they are put in anyway, that still makes only 12 functions, which are
really only 6 entries in the manual since -if/-if-not pairs would be
documented together.
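For concreteness, a few of these pairs in use (list examples, though the point
of the proposal is that they would apply to any sequence):

```lisp
;; -IF takes a one-argument predicate in place of the item argument.
(position-if #'oddp '(2 4 5 6))         ; => 2, index of the first odd element
(remove-if-not #'plusp '(1 -2 3 -4))    ; => (1 3), keep only positive elements
(find-if #'symbolp '(1 2 foo 3))        ; => foo, the first satisfying element
```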

∂20-Jan-82  1631	Kim.fateman at Berkeley 	numerics and common-lisp 
Date: 20 Jan 1982 16:29:10-PST
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: numerics and common-lisp

The following stuff was sent a while back to GLS, and seemed to
provoke no comment; although it probably raises more questions
than answers, here goes:

*** Issue 81: Complex numbers. Allow SQRT and LOG to produce results in
whatever form is necessary to deliver the mathematically defined result.

RJF:  This is problematical. The mathematically defined result is not
necessarily agreed upon.  Does Log(0) produce an error or a symbol?
(e.g. |log-of-zero| ?)  If a symbol, what happens when you try to
do arithmetic on it? Does sin(x) give up after some specified max x,
or continue to be a periodic function up to limit of machine range,
as on the HP 34?  Is accuracy specified in addition to precision?
Is it possible to specify rounding modes by flag setting or by
calling specific rounding-versions e.g. (plus-round-up x y) ? Such
features make it possible to implement interval arithmetic nicely.
Can one trap (signal, throw) on underflow, overflow,...
It would be a satisfying situation if common lisp, or at least a
superset of it, could exploit the IEEE standard. (Prof. Kahan would
much rather that language standardizers NOT delve too deeply into this,
leaving the semantics  (or "arithmetics") to specialists.)

Is it the case that a complex number could be implemented by
#C(x y) == (complex x y)?  In which case (real z) == (cadr z),
etc.  Is a complex "atomic" in the lisp sense, or is it
the case that (eq (numerator #C(x y)) (numerator #C(x z)))?
Can one "rplac←numerator"?
If one is required to implement another type of atom for the
sake of rationals and another for complexes,
and another for ratios of complexes, then the
utility of this had better be substantial, and the implementation
cost modest.  In the case of x and y rational, there are a variety of
ways of representing x + i*y.  For example, it
is always possible to rationalize the denominator, but is it
required?
If #R(1 2) == (rat 1 2), is it the case that
(numerator r) == (cadr r)?  What is the numerator of (1/2+i)?

Even if you insist that all complex numbers are floats, not rationals,
you have multiple precisions to deal with.  Is it allowed to 
compute intermediate results to higher precision, or must one truncate
(or round) to some target precision in-between operations?

.......
Thus (SQRT -1.0) -> #C(0.0 1.0) and (LOG -1.0) -> #C(0.0 3.14159265).
Document all this carefully so that the user who doesn't care about
complex numbers isn't bothered too much.  As a rule, if you only play
with integers you won't see floating-point numbers, and if you only
play with non-complex numbers you won't see complex numbers.
.......
RJF: You've given 2 examples where, presumably, integers
are converted not only into floats, but into complex numbers. Your
rule does not seem to be a useful characterization. 
Note also that, for example, asin(1.5) is complex.

*** Issue 82: Branch cuts and boundary cases in mathematical
functions. Tentatively consider compatibility with APL on the subject of
branch cuts and boundary cases.
.......
RJF: Certainly gratuitous differences with APL, Fortran, PL/I etc are
not a good idea!
.....

*** Issue 83: Fuzzy numerical comparisons. Have a new function FUZZY=
which takes three arguments: two numbers and a fuzz (relative
tolerance), which defaults in a way that depends on the precision of the
first two arguments.

.......
RJF: Why is this considered a language issue (in Lisp!), when the primary
language for numerical work (Fortran, not APL) includes no such feature?
The computation of absolute and relative errors is sufficiently simple
that not much would be added by making this part of the language.
I believe the fuzz business is used to cover
up the fact that some languages do not support integers. In such systems,
some computations result in 1.99999 vs. 2.00000 comparisons, even though
both numbers are "integers".

Incidentally, on "mod" of floats, I think that what you want is
like the "integer-part" of the IEEE proposal.  The EMOD instruction on 
the VAX is a brain-damaged attempt to do range-reductions.
.......

*** Issue 93: Complete set of trigonometric functions? Add ASIN, ACOS,
and TAN.


*** Issue 95: Hyperbolic functions. Add SINH, COSH, TANH, ASINH, ACOSH,
and ATANH.
.....
also useful are log(1+x) and exp(x)-1.


*** Issue 96: Are several versions of pi necessary? Eliminate the
variables SHORT-PI, SINGLE-PI, DOUBLE-PI, and LONG-PI, retaining only
PI.  Encourage the user to write such things as (SHORT-FLOAT PI),
(SINGLE-FLOAT (/ PI 2)), etc., when appropriate.
......
RJF: huh?  why not #.(times 4 (atan 1.0)),  #.(times 4 (atan 1.0d0)) etc.
It seems you are placing a burden on the implementors and discussants
of common lisp to write such trivial programs when the same thing
could be accomplished by a comment in the manual. Constants like e could
be handled too...

.......
.......
RJF: Sorry if the above comments sound overly argumentative.  I realize they
are in general not particularly constructive. 
I believe the group here at UCB will be making headway in many 
of the directions required as part of the IEEE support, and that Franz
will be extended.

∂20-Jan-82  2008	Daniel L. Weinreb <dlw at MIT-AI> 	Suggestion     
Date: Wednesday, 20 January 1982, 21:04-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Suggestion    
To: RPG at SU-AI, common-lisp at SU-AI

Sounds good, unless it turns out to be difficult to figure out just
which things are the kernel and which aren't.  Also, when the kernel is
designed, things should be set up so that even if some higher-level
function is NOT in the kernel, it is still possible for some
implementations to write a higher-level function in "machine language"
if they want to, without losing when they load in gobs and gobs of
Lisp-coded higher-level stuff.

∂19-Jan-82  1448	Feigenbaum at SUMEX-AIM 	more on common lisp 
Scott:
	Here are some messages I received recently. I'm worried about
Hedrick and the Vax. I'm not too worried about Lisp Machine, you guys,
and us guys (S-1). I am also worried about Griss and Standard Lisp,
which wants to get on the bandwagon. I guess I'd like to settle kernel
stuff first, fluff later.

	I understand your worry about sequences etc. Maybe we could try
to split the effort of studying issues a little. I dunno. It was just
a spur of the moment thought.
			-rpg-

∂19-Jan-82  1448	Feigenbaum at SUMEX-AIM 	more on common lisp 
Date: 19 Jan 1982 1443-PST
From: Feigenbaum at SUMEX-AIM
Subject: more on common lisp
To:   gabriel at SU-AI

Mail-from: ARPANET host PARC-MAXC rcvd at 19-Jan-82 1331-PST
Date: 19 Jan 1982 13:12 PST
From: Masinter at PARC-MAXC
to: Feigenbaum@sumex-aim
Subject: Common Lisp- reply to Hedrick

It is a shame that such misinformation gets such rapid dissemination....

Date: 19 Jan 1982 12:57 PST
From: Masinter at PARC-MAXC
Subject: Re: CommonLisp at Rutgers
To: Hedrick@Rutgers
cc: Masinter

A copy of your message to "bboard at RUTGERS, griss at UTAH-20, admin.mrc at
SU-SCORE, jsol at RUTGERS" was forwarded to me. I would like to rebut some of
the points in it:

I think that Common Lisp has the potential for being a good lisp dialect which
will carry research forward in the future. I do not think, however, that people
should underestimate the amount of time before Common Lisp could possibly be a
reality.

The Common Lisp manual is nowhere near being complete. Given the current
rate of progress, the Common Lisp language definition would probably not be
resolved for two years--most of the hard issues have merely been deferred (e.g.,
T and NIL, multiple-values), and there are many parts of the manual which are
simply missing. Given the number of people who are joining into the discussion,
some drastic measures will have to be taken to resolve some of the more serious
problems within a reasonable timeframe (say a year).

Beyond that, the number of things which would have to be done to bring up a
new implementation of CommonLisp lead me to believe that the kernel for
another machine, such as the Dec-20, would take on the order of 5 man-years at
least. For many of the features in the manual, it is essential that they be built
into the kernel (most notably the arithmetic features and the multiple-value
mechanism) rather than in shared Lisp code. I believe that many of these may
make an implementation of Common Lisp more "difficult to implement efficiently
and cleanly" than Interlisp. 

I think that the Interlisp-VAX effort has been progressing quite well. They have
focused on the important problems before them, and are proceeding quite well. I
do not know for sure, but it is likely that they will deliver a useful system
complete with a programming environment long before the VAX/NIL project,
which has consumed much more resources. When you were interacting with the
group of Interlisp implementors at Xerox, BBN and ISI about implementing
Interlisp, we cautioned you about being optimistic about the amount of
manpower required. What seems to have happened is that you have come away
believing that Common Lisp would be easier to implement.  I don't think that is
the case by far.

Given your current manpower estimate (one full-time person and one RA) I do
not believe you have the critical mass to bring off a useful implementation of
Common Lisp. I would hate to see a replay of the previous situation with
Interlisp-VAX, where budgets were made and machines bought on the basis of a
hopeless software project. It is not that you are not competent to do a reasonable
job of implementation, it is just that creating a new implementation of an already
specified language is much, much harder than merely creating a new
implementation of a language originally designed for another processor. 

I do think that an Interlisp-20 using extended virtual addressing might be
possible, given the amount of work that has gone into making Interlisp
transportable, the current number of compatible implementations (10, D, Jericho,
VAX) and the fact that Interlisp "grew up" in the Tenex/Tops-20 world, and that
some of the ordinarily more difficult problems, such as file names and operating
system conventions, are already tuned for that operating system. I think that a
year of your spare time and Josh for one month seems very thin.

Larry
-------

∂20-Jan-82  2132	Fahlman at CMU-20C 	Implementations
Date: 21 Jan 1982 0024-EST
From: Fahlman at CMU-20C
Subject: Implementations
To: rpg at SU-AI
cc: steele at CMU-20C, fahlman at CMU-20C

Dick,

I agree that, where a choice must be made, we should give first priority
to settling kernel-ish issues.  However, I think that the debate on
sequence functions is not detracting from more kernelish things, so I
see no reason not to go on with that.

Thanks for forwarding Masinter's note to me.  I found him to be awfully
pessimistic.  I believe that the white pages will be essentially complete
and in a form that just about all of us can agree on within two months.
Of course, the Vax NIL crowd (or anyone else, for that matter) could delay
ratification indefinitely, even if the rest of us have come together, but I
think we had best deal with that when the need arises.  We may have to
do something to force convergence if it does not occur naturally.  My
estimate may be a bit optimistic, but I don't see how anyone can look at
what has happened since last April and decide that the white pages will
not be done for two years.

Maybe Masinter's two years includes the time to develop all of the
yellow pages stuff -- editors, cross referencers, and so on.  If so, I
tend to agree with his estimate.  To an Interlisper, Common Lisp will
not offer all of the comforts of home until all this is done and stable,
and a couple of years is a fair estimate for all of this stuff, given
that we haven't really started thinking about this.  I certainly don't
expect the Interlisp folks to start flocking over until all this is
ready, but I think we will have the Perq and Vax implementations
together within 6 months or so and fairly stable within a year.

I had assumed that Guy had been keeping you informed of the negotiations
we have had with DEC on Common Lisp for VAX, but maybe he has not.  The
situation is this: DEC has been extremely eager to get a Common Lisp up
on Vax VMS, due to pressure from Schlumberger and some other customers,
plus their own internal plans for building some expert systems.  Vax NIL
is not officially abandoned, but looks more and more dubious to them,
and to the rest of us.  A couple of months ago, I proposed to DEC that
we could build them a fairly decent compiler just by adding a
post-processor to the Spice Lisp byte-code compiler.  This
post-processor would turn the simple byte codes into in-line Vax
instructions and the more complex ones into jumps off to hand-coded
functions.  Given this compiler, one could then get a Lisp system up
simply by using the Common Lisp in Common Lisp code that we have
developed for Spice.  The extra effort to do the Vax implementation
amounts to only a few man-months and, once it is done, the system will
be totally compatible with the Spice implementation and will track any
improvements.  With some additional optimizations and a bit of tuning,
the performance of this system should be comparable to any other Lisp on
the Vax, and probably better than Franz.

DEC responded to this proposal with more enthusiasm than I expected.  It
is now nearly certain that they will be placing two DEC employees
(namely, ex-CMU grad students Dave McDonald and Walter van Roggen) here
in Pittsburgh to work on this, with consulting by Guy and me.  The goal
is to get a Common Lisp running on the Vax in six months, and to spend
the following 6 months tuning and polishing.  I feel confident that this
goal will be met.  The system will be done first for VMS, but I think we
have convinced DEC that they should invest the epsilon extra effort
needed to get a Unix version up as well.

So even if MIT totally drops the ball on VAX NIL, I think that it is a
pretty safe bet that a Common Lisp for Vax will be up within a year.  If
MIT wins, so much the better: the world will have a choice between a
hairy NIL and a basic Common Lisp implementation.

We are suggesting to Chuck Hedrick that he do essentially the same thing
to bring up a Common Lisp for the extended-address 20.  If he does, then
this implementation should be done in finite time as well, and should
end up being fully compatible with the other systems.  If he decides
instead to do a traditional brute-force implementation with lots of
assembly code, then I tend to agree with Masinter's view: it will take
forever.

I think we may have come up with an interesting kind of portability
here.  Anyway, I thought you would be interested in hearing all the
latest news on this.

-- Scott
-------

∂20-Jan-82  2234	Kim.fateman at Berkeley 	adding to kernel    
Date: 20 Jan 1982 22:04:29-PST
From: Kim.fateman at Berkeley
To: dlw@MIT-AI
Subject: adding to kernel
Cc: common-lisp@su-ai

One of the features of Franz which we addressed early on in the
design for the VAX was how we would link to system calls in UNIX, and
provide calling sequences and appropriate data structures for use
by other languages (C, Fortran, Pascal).  An argument could be made
that linkages of this nature could be done by message passing, if
necessary; an argument could be made that  CL will be so universal
that it would not be necessary to make such linkages at all.  I
have not found these arguments convincing in the past, though in
the perspective of a single CL virtual machine running on many machines,
they might seem better. 

I am unclear as to how many implementations of CL are anticipated, also:
for what machines; 
who will be doing them;
who will be paying for the work;
how much it will cost to get a copy (if CL is done "for profit");
how will maintenance and standardization happen (e.g. under ANSI?);

If these questions have been answered previously, please forgive my
ignorance/impertinence.


The known and suspected implementations for Common Lisp are:

	S-1 Mark IIA, paid for by ONR, done by RPG, GLS, Rod Brooks and others
	SPICELISP, paid for by ARPA, done by SEF, GLS, students, some RPG
	ZETALISP, paid for by Symbolics, done by Symbolics
	VAX Common Lisp, probably paid for by DEC, done by CMU Spice personnel
	Extended addressing 20, probably paid for by DEC, done by Rutgers (Hedrick)
	68000, Burroughs, IBM, Various portable versions done by Utah group,
		paid for by ARPA (hopefully spoken).
	Retrofit to MacLisp by concerned citizens, maybe.
∂21-Jan-82  1746	Earl A. Killian <EAK at MIT-MC> 	SET functions    
Date: 21 January 1982 17:26-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  SET functions
To: Morrison at UTAH-20, RMS at MIT-AI
cc: common-lisp at SU-AI

Well if you're going to propose two changes like that, you might
as well do SETF -> SET, instead of SETF -> SETQ.  It's shorter
and people wouldn't wonder what the Q or F means.

But actually I'm not particularly in favor of eliminating the set
functions, even though I tend to use SETF instead myself, merely
because I don't see how their nonexistence would clean up
anything.

∂21-Jan-82  1803	Richard M. Stallman <RMS at MIT-AI>
Date: 21 January 1982 18:01-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: EAK at MIT-MC
cc: common-lisp at SU-AI

The point is not to get rid of the setting functions, but to
reduce their status in the documentation.  Actually getting rid of
them doesn't accomplish much, as you say, and also is too great
an incompatibility.  (For the same reason, SETF cannot be renamed
to SET, but can be renamed to SETQ).  But moving them all to an
appendix on compatibility and telling most users simply
"to alter anything, use SETF" is a tremendous improvement in
the simplicity of the language as perceived by users, even if
there is no change in the actual system that they use.
(At the same time, any plans to introduce new setting functions
that are not needed for compatibility can be canceled).

∂21-Jan-82  1844	Don Morrison <Morrison at UTAH-20> 
Date: 21 Jan 1982 1939-MST
From: Don Morrison <Morrison at UTAH-20>
To: RMS at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 21-Jan-82 1601-MST

I'm not convinced that drastic renamings (such as SETF => SET) are
impractical.  Just as you move the documentation to a "compatibility
appendix", you move the old semantics to a "compatibility package".
Old code must be run with the reader interning in the MACLISP package
or the Franz LISP package, or whatever.  The only things which must
really change are the programmers -- and I believe the effort of
changing one's thoughts to a conceptually simpler LISP would, in the
long run, save programmers time and effort.

There is, however, the problem of maintenance of old code.  One would
not like to have to remember seventeen dialects of LISP just to
maintain old code.  But I suspect that maintenance would naturally
proceed by rewriting large hunks of code, which would then be done in
the "clean" dialect.  LISP code is not exempt from the usual folklore
that tweaking broken code only makes it worse.  This is just
conjecture; has experience on the LISP Machine shown that old MACLISP
code tends to get rewritten as it needs to change, or does it just get
tweaked, mostly using those historical atrocities left in for MACLISP
compatibility?

It would be a shame to see a standardized Common LISP incorporate the
same sort of historical abominations as those which FORTRAN 77 lives
with.
-------

∂21-Jan-82  2053	George J. Carrette <GJC at MIT-MC> 
Date: 21 January 1982 23:50-EST
From: George J. Carrette <GJC at MIT-MC>
To: Morrison at UTAH-20
cc: RMS at MIT-AI, common-lisp at SU-AI

My experience with running macsyma in maclisp and lispm is that what
happens is that compatibility features are not quite compatible, and
that gross amounts of tweaking, beyond anything possible in
FORTRAN 77, go on. Much of the tweaking takes the form of adding
another layer of abstraction through macros, not using ANY known form
of lisp, but one which is a generalization, and obscure to anyone but
a macsyma-lisp hacker. At the same time the *really* gross old code
gets rewritten, when significant new features are provided, like
Pathnames.

Anyway, in NIL I wanted to get up macsyma as quickly as possible
without grossing out RLB or myself, or overloading NIL with so many
compatibility features, as happened in the Lispmachine. Also there
was that bad-assed T and NIL problem we only talked about a little
at the common-lisp meeting. [However, more severe problems, like the
fact that macsyma would not run with error-checking in CAR/CDR 
had already been fixed by smoking it out on the Lispmachine.]



∂21-Jan-82  1144	Sridharan at RUTGERS (Sri) 	S-1 CommonLisp   
Date: 21 Jan 1982 1435-EST
From: Sridharan at RUTGERS (Sri)
Subject: S-1 CommonLisp
To: rpg at SU-AI, guy.steele at CMU-10A

I have been kicking around an idea to build a multiprocessor aimed at
running some form of Concurrent Lisp as well as my AI language AIMDS.
I came across the S-1 project and it is clear I need to find out about
this project in detail.  Can you arrange to have me receive what
reports and documents are available on this project?

More recently, Hedrick mentioned in a note that there is an effort
to develop Lisp for the S-1.  How exciting!  Can you provide me
some background on this and describe the goals and current status?

My project is an attempt to develop coarse-grain parallelism in
a multiprocessor environment, each processor being of the order of a
Lisp-machine, with a switching element between processors and memories,
with ability for the user/programmer to write ordinary Lisp code,
enhanced in places with necessary declarations and also new primitives
to make it feasible to take advantage of parallelism.  One of the
goals of the project is to support gradual conversion of existing
code to take advantage of available concurrency.

My mailing address is
N.S.Sridharan
Department of Computer Science
Rutgers University, Hill Center
New Brunswick, NJ 08903
-------

∂22-Jan-82  1842	Fahlman at CMU-20C 	Re: adding to kernel
Date: 22 Jan 1982 2140-EST
From: Fahlman at CMU-20C
Subject: Re: adding to kernel
To: Kim.fateman at UCB-C70
cc: common-lisp at SU-AI
In-Reply-To: Your message of 21-Jan-82 0104-EST


The ability to link system calls and compiled routines written in the
barbarous tongues into Common Lisp will be important in some
implementations.  In others, this will be handled by inter-process
message passing (Spice) or by translating everything into Lisp or
Lispish byte-codes (Symbolics).  In any event, it seems clear that
features of this sort must be implementation-dependent packages rather
than parts of the Common Lisp core.

As for what implementations are planned, I know of the following that
are definitely underway: Spice Lisp, S1-NIL, VAX-NIL, and Zetalisp
(Symbolics).  Several other implementations (for Vax, Tops-20, IBM 4300
series, and a portable implementation from the folks at Utah) are being
considered, but it is probably premature to discuss the details of any
of these, since as far as I know none of them are definite as yet.  The
one implementation I can discuss is Spice Lisp.

Spice is a multiple process, multiple language, portable computing
environment for powerful personal machines (i.e. more powerful than the
current generation of micros).  It is being developed by a large group
of people at CMU, with mostly ARPA funding.  Spice Lisp is the Common
Lisp implementation for personal machines running Spice.  Scott Fahlman
and Guy Steele are in charge.  The first implementation is for the Perq
1a with 16K microstore and 1 Mbyte main memory (it will NOT run on the
Perq 1).  We will probably be porting all of the Spice system, including
the Lisp, to the Symbolics 3600 when this machine is available, with
other implementations probably to follow.

The PERQ implementation will probably be distributed and maintained by
3RCC as one of the operating systems for the PERQ; we would hope to
develop similar arrangements with other manufacturers of machines on
which Spice runs, since we at CMU are not set up to do maintenance for
lots of customers ourselves.

Standardization for awhile will (we hope) be a result of adhering to the
Common Lisp Manual; once Common Lisp has had a couple of years to
settle, it might be worth freezing a version and going for ANSI
standardization, but not until then.
-------

∂22-Jan-82  1914	Fahlman at CMU-20C 	Multiple values
Date: 22 Jan 1982 2209-EST
From: Fahlman at CMU-20C
Subject: Multiple values
To: common-lisp at SU-AI


It has now been a week since I suggested flushing the lambda-list
versions of the multiple value catching forms.  Nobody has leapt up to
defend these, so I take it that nobody is as passionate about keeping
these around as I am about flushing them.  Therefore, unless strong
objections appear soon, I propose that we go with the simple Lisp
Machine versions plus M-V-Call in the next version of the manual.  (If,
once the business about lexical binding is resolved, it is clear that
these can easily be implemented as special cases of M-V-Call, we can put
them back in again.)

The CALL construct proposed by Stallman seems very strange and low-level
to me.  Does anyone really use this?  For what?  I wouldn't object to
having this around in a hackers-only package, but I'm not sure random
users ought to mess with it.  Whatever we do with CALL, I would like to
keep M-V-Call as well, as its use seems a good deal clearer without the
spreading and such mixed in.

-- Scott
-------

∂22-Jan-82  2132	Kim.fateman at Berkeley 	Re: adding to kernel
Date: 22 Jan 1982 21:27:03-PST
From: Kim.fateman at Berkeley
To: Fahlman@CMU-20C
Subject: Re: adding to kernel
Cc: common-lisp@su-ai

There is a difference between the "common lisp core" and the
"kernel" of a particular implementation.  The common lisp core
presumably would have a function which obtains the time.  Extended
common lisp might convert the time to Roman numerals.  The kernel
would have to have a function (in most cases, written in something
other than lisp) which obtains the time from the hardware or
operating system.  I believe that the common lisp core should be
delineated, and the extended common lisp (written in common lisp core)
should be mostly identical from system to system.  What I would like
to know, though, is what will be required of the kernel, because it
will enable one to say to a manufacturer, it is impossible to write
a common lisp for this architecture because it lacks (say) a real-time
clock, or does not support (in the UNIX parlance) "raw i/o", or
perhaps multiprocessing...
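A sketch of the layering described here, in Common Lisp; all names are
hypothetical, and the kernel primitive is simply assumed to exist:

```lisp
;; Kernel layer: implementation-specific, usually not written in Lisp.
;; Assumed here as a primitive the system supplies:
;;   (kernel:get-universal-time) => seconds, from hardware or the OS.

;; Common Lisp core: a portable function over the kernel primitive.
(defun get-time ()
  (kernel:get-universal-time))          ; hypothetical kernel call

;; Extended Common Lisp: written entirely in terms of the core.
;; ~@R is the FORMAT directive for Roman numerals (valid for 1..3999).
(defun time-as-roman-numerals ()
  (format nil "~@R" (1+ (mod (get-time) 3999))))
```

The point of the split is that only the first layer need be rewritten
per machine; the manufacturer question then reduces to whether the
kernel primitives can be supplied at all.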

I hope that the results of common lisp discussions become available for
less than the $10k (or more) per cpu that keeps us at Berkeley from
using Scribe.  I have no objection to a maintenance organization, but
I hope copies of relevant programs (etc) are made available in an
unmaintained form for educational institutions or other worthy types.

Do the proprietor(s) of NIL think it is a "common lisp implementation"?
That is, if NIL and CL differ in specifications, will NIL change, or
will NIL be NIL, and a new thing, CL emerge?  If CL is sufficiently
well defined that, for example, it can be written in Franz Lisp with
some C-code added, presumably a CL compatibility package could be
written.  Would that make Franz a "common lisp implementation"?
(I am perfectly happy with the idea of variants of Franz; e.g. users
here have a choice of the CMU top-level or the (raw) default; they
can have a moderately interlisp-like set of functions ("defineq" etc.)
or the default maclisp-ish.)

∂23-Jan-82  0409	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
Date: 23 January 1982 07:07-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  adding to kernel
To: Kim.fateman at UCB-C70
cc: common-lisp at SU-AI, Fahlman at CMU-20C

I don't know the exact delivery time for Symbolics new "L" machine,
nor the exact state of CMU spice-lisp, [which is on the front-burner
now for micro-coded implementation on their own machine no?] with
respect to any possible VAX implementation; but I suspect that of
all the lisp implementations planning to support the COMMON-LISP
standard, MIT's NIL is the closest to release. Can I get some
feedback on this?

As far as bucks go "$$$" gee. CPU's that can run lisp are not cheap
in themselves. However, I don't know anything concrete about the
marketing of NIL. Here is a cute one, when the New Implementation of Lisp,
becomes the Old Implementation of Lisp, then NIL becomes OIL.
However, right now it is still NEW, so you don't have to worry.

Unstated assumptions (so far) in Common-lisp?
[1] Error-checking CAR/CDR by default in compiled code.
[2] Lispm-featurefull debugging in compiled code.

Maybe this need not be part of the standard, but everybody knows that
it is part of the usability and marketability of a modern lisp.

Here is my guess as to what NIL will look like by the time the UNIX
port is made: Virtual Machine written in SCHEME, with the SCHEME compiler
written in NIL producing standard UNIX assembler. NIL written in NIL,
and the common-lisp support written in NIL and common-lisp. A Maclisp
compatibility namespace supported by functions written in NIL.
VM for unix written in Scheme rather than "C" might seem strange to
some, but it comes from a life-long Unix/C hacker around here who
wants to raise the stakes a bit to make it interesting. You know, one
thing for sure around MIT => If it ain't interesting it ain't going to
get done! <= There being so many other things to do, not to even
mention other, possibly commercial organizations.



∂23-Jan-82  0910	RPG  
To:   common-lisp at SU-AI  
MV Gauntlet Picked Up
Ok. I believe that even if the implementation details are grossly different
all constructs that bind should have the same syntax. Thus,
if any MV construct binds, and is called ``-BIND'', ``-LAMBDA'', or
``-LET'', it should behave the same way as anything else that purports
to bind (like LAMBDA).  Since LET and LAMBDA seem similar to most naive
users, too, I would like to see LET and LAMBDA brought into line.

I would like a uniform, consistent language, so I strongly propose
either simplifying LAMBDA to be as simple as Lisp Machine multiple-value-bind
and using Lisp Machine style MV's as Scott suggests, or going to complex
LAMBDA, complex MV-lambda as in the current scheme, and flushing Lisp
Machine Multiple-value-bind. I propose not doing a mixture. 
			-rpg-

∂23-Jan-82  1841	Fahlman at CMU-20C  
Date: 23 Jan 1982 2136-EST
From: Fahlman at CMU-20C
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 23-Jan-82 1210-EST


It seems clear to me that we MUST support two kinds of binding forms: a
simple-syntax form as in PROG and LET, and a more complex form as in
DEFUN and LAMBDA.  (Not to mention odd things like DO and PROGV that are
different but necessary.)  It clearly makes no sense to hair up PROG and
LET with optionals and rest args, since there is no possible use for
these things -- they would just confuse people and be a pain to
implement.  It is also clear that we are not going to abandon optionals
and rest args in DEFUN and LAMBDA in the name of uniformity -- they are
too big a win when you are defining functions that are going to be
called from a lot of different places, not all of them necessarily known
at compile-time.  So I don't really see what RPG is arguing for.  The
issue is not whether to support both a simple and a hairy syntax for
binding forms; the issue is simply which of these we want the
MV-catching forms to be.  And in answering that question, as in many
other places in the language, we must consider not only uniformity as
seen by Lisp theologians, but also implementation cost, runtime
efficiency, and what will be least confusing to the typical user.

-- Scott
-------

∂23-Jan-82  2029	Fahlman at CMU-20C 	Re:  adding to kernel    
Date: 23 Jan 1982 2319-EST
From: Fahlman at CMU-20C
Subject: Re:  adding to kernel
To: GJC at MIT-MC
cc: common-lisp at SU-AI
In-Reply-To: Your message of 23-Jan-82 0707-EST

In reply to GJC's recent message:

It is hard to comment on whether NIL is closer to being released than
other Common Lisp implementations, since you don't give us a time
estimate for NIL, and you don't really explain what you mean by
"released".  I understand that you have something turning over on
various machines at MIT, but it is unclear to me how complete this
version is or how much work has to be done to make it a Common Lisp
superset.  Also, how much manpower do you folks have left?

The PERQ implementation of Spice Lisp is indeed on our front burner.
Unfortunately, we do not yet have an instance of the PERQ 1a processor
upon which to run this.  The PERQ microcode is essentially complete and
has been debugged on an emulator.  The rest of the code, written in
Common Lisp itself, is being debugged on a different emulator.  If we
get the manual settled soon and if 3RCC delivers the 1a soon, we
should have a Spartan but usable Common Lisp up by the start of the
summer.  The Perq 1a will probably not be generally available until
mid-summer, given the delays in getting the prototype together.

By summer's end we should have an Emacs-like editor running, along with
some fairly nice debugging tools.  Of course, the system will be
improving for a couple of years beyond that as additional user amenities
appear.  I have no idea how long it will take 3RCC to start distributing
and supporting this Lisp, if that's what you mean by "release".  Their
customers might force them to move more quickly on this than they
otherwise would, but they have a lot of infrastructure to build -- no
serious Lispers over there at present.

As for your "unstated assumptions":

1. The amount of runtime error checking done by compiled code must be
left up to the various implementations, in general.  A machine like the
Vax will probably do less of this than a microcoded implementation, and
a native-code compiler may well want to give the user a compile-time
choice between some checking and maximum speed.  I think that the white
pages should just say "X is an error" and leave the question of how much
checking is done in compiled code to the various implementors.

2. The question of how (or whether) the user can debug compiled code is
also implementation-dependent, since the runtime representations and
stack formats may differ radically.  In addition, the user interface for
a debugging package will depend on the type of display used, the
conventions of the home system, and other such things, though one can
imagine that the debuggers on similar environments might make an effort
to look the same to the user.  The white pages should probably not
specify any debugging aids at all, or at most should specify a few
standard peeking functions that all implementations can easily support.

I agree that any Common Lisp implementation will need SOME decent debugging
aids before it will be taken seriously, but that does not mean that this
should be a part of the Common Lisp standard.

-- Scott
-------

∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 04:24-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I agree with Fahlman about binding constructs.
I want LAMBDA to be the way it is, and LET to be the way it is,
and certainly not the same.

As for multiple values, if LET is fully extended to do what
SETF can do, then (LET (((VALUES A B C) m-v-returning-form)) ...)
can be used to replace M-V-BIND, just as (SETF (VALUES A B C) ...)
can replace MULTIPLE-VALUES.  I never use MULTIPLE-VALUES any more
because I think that the SETF style is clearer.
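A sketch of the SETF style, assuming SETF of VALUES behaves as RMS
describes, with FLOOR taken to return quotient and remainder as two
values:

```lisp
;; SETF style: assign several variables from one values-returning form.
(let (q r)
  (setf (values q r) (floor 7 2))   ; FLOOR returns 3 and 1
  (list q r))                       ; => (3 1)

;; The extended-LET style proposed above would bind rather than assign;
;; this is hypothetical syntax, not an existing construct:
;; (let (((values q r) (floor 7 2))) (list q r))
```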

∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 06:02-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I would like to clear up a misunderstanding that seems to be
prevalent.  The MIT Lisp machine system, used by Symbolics and LMI, is
probably going to be converted to support Common Lisp (which is the
motivation for my participation in the effort to keep the design of
Common Lisp clean).  Whenever this happens, Common Lisp will be available on
the CADR machine (as found at MIT and as sold by LMI and Symbolics)
and the Symbolics L machine (after that exists), and on the second
generation LMI machine (after that exists).

I can't speak for LMI's opinion of Common Lisp, but if MIT converts,
LMI will certainly do so.  As the main Lisp machine hacker at MIT, I
can say that I like Common Lisp.

It is not certain when either of the two new machines will appear, or
when the Lisp machine system itself will support Common Lisp.  Since
these three events are nearly independent, they could happen in any
order.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:23-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

To clear up another random point: the name "Zetalisp" is not a Symbolics
proprietary name.  It is just a name that has been made up to replace
the ungainly name "Lisp Machine Lisp".  The reason for needing a name is
that I believe that people associate the Lisp Machine with Maclisp,
including all of the bad things that they have traditionally believed
about Maclisp, like that it has a user interface far inferior to that of
Interlisp.

I certainly hope that all of the Lisp Machines everywhere will convert
to Common Lisp together.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:20-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

If I understand what RPG is saying, then I am not convinced by his
point.  I don't think that just because multiple-value-bind takes a
list of variables being bound to values, it HAS to have all the
features that LAMBDA combinations have, in the name of language
simplicity; the inconsistency there just doesn't bother me very much.
It is a very localized inconsistency and I really do not believe it is
going to confuse people much.

However, I still object to RMS's proposal, as I am still opposed to
having "destructuring LET".  I have flamed about this enough in the
past that I will not do it now.  However, having a "destructuring-bind"
form (by some name) that is like LET except that it destructures might
be a reasonable way to let multiple-value-bind work without any
perceived language inconsistency.

∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
Date: 24 January 1982 23:06-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  adding to kernel
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI

    From: Fahlman at CMU-20C
    It is hard to comment on whether NIL is closer to being released than
    other Common Lisp implementations, since you don't give us a time
    estimate for NIL.

Oh. I had announced a release date of JAN 30.  But with the
air-conditioners down for more than a week, that's got to slip to at
least FEB 10.  But FEB 10 is the first week of classes at MIT, so I'll
have JM, GJS, and others on my case to get other stuff working.  Sigh.
By release I mean that it is in a useful state, i.e. people will be able
to run their lisp programs in it. We have two concrete tests though, 
[1] To bring up "LSB".
   [A] This gives us stuff like a full hair FORMAT.
   [B] Martin's parser.
[2] To run Macsyma on the BEGIN, SIN, MATRIX, ALGSYS, DEFINT, ODE2 and 
    HAYAT demos. 

Imagine bringing yourself and a tape to a naked VMS site, and installing
Emacs, a modern lisp, and Macsyma, in that order. You can really
blow away the people who have heard about these things but never
had a chance to use them, especially on their very own machine.
One feeling that makes the hacking worthwhile.

Anyway, when I brought Macsyma over to the Plasma Fusion
Center Alcator Vax, I was doing all the taylor series, integrals and
equation solving they threw at me. Stuff like
INTEGRATE(SIN(X↑2)*EXP(X↑2)*X↑2,X); Then DIFF it, then RATSIMP and TRIGREDUCE
to get back to the starting point.  (Try that on MC and see how many
files get loaded.)  (Sorry, gibberish to non-macsyma-hackers.)
=> So I can say that macsyma is released to MIT sites now. (MIT-LNS too). 
   People can use it and I'll field any bug reports. <=

Point of Confusion: Some people are confused as to what Common-Lisp is.
                    Even people at DEC.

-GJC

∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
Date: 25 Jan 1982 0125-EST
From: Fahlman at CMU-20C
Subject: Sequences
To: common-lisp at SU-AI


I have spent a couple of days mulling over RPG's suggestion for putting
the keywords into a list in functional position.  I thought maybe I
could get used to the unfamiliarity of the syntax and learn to like
this proposal.  Unfortunately, I can't.

I do like Guy's proposal for dropping START/END arguments and also
several of the suggestions that Moon made.  I am trying to merge all
of this into a revised proposal in the next day or two.  Watch this
space.

-- Scott
-------

∂24-Jan-82  2246	Kim.fateman at Berkeley 	NIL/Macsyma    
Date: 24 Jan 1982 22:40:50-PST
From: Kim.fateman at Berkeley
To: gjc@mit-mc
Subject: NIL/Macsyma 
Cc: common-lisp@SU-AI

Since it has been possible to run Macsyma on VMS sites (under Eunice or
its precursor) since April, 1980, (when we dropped off a copy at LCS),
it is not clear to me what GJC's ballyhoo is about.  If the physics
sites are only now getting a partly working Macsyma for VMS, it only
brings to mind the question of whether LCS ever sent out copies of the VMS-
Macsyma we gave them, to other MIT sites.

But getting Maclisp programs up under NIL should not be the benchmark,
nor is it clear what the relationship to common lisp is.
Having macsyma run under common lisp (whatever that will be)
would be very nice, of course,
whether having macsyma run under NIL is a step in that direction or
not.  It might also be nice to see, for example, one of the big interlisp
systems.

∂25-Jan-82  1558	DILL at CMU-20C 	eql => eq?   
Date: 25 Jan 1982 1857-EST
From: DILL at CMU-20C
Subject: eql => eq?
To: common-lisp at SU-AI

Proposal: rename the function "eq" in common lisp to be something like
"si:internal-eq-predicate", and then rename "eql" to "eq".  This would
have several advantages.

 * Simplification by reducing the number of equality tests.

 * Simplification by reducing the number of different versions of
   various predicates that depend on the type of equality test you
   want.

 * Greater machine independence of lisp programs (whether eq and equal
   are the same function for various datatypes is heavily 
   implementation-dependent, while eql is defined to be relatively 
   machine-independent; furthermore, functions like memq in the current
   common lisp proposal make it easier to use eq comparisons than eql).

Possible disadvantages:

 * Do people LIKE having, say, numbers with identical values not be eq?
   If so, they won't like this.

 * Efficiency problems.

I don't believe the first complaint.  If there are no destructive
operations defined for an object, eq and equal ought to do the same
thing.

The second complaint should not be significant in interpreted code,
since overhead of doing a type-dispatch will probably be insignificant
in comparison with, say, finding the right subr and calling it.

In compiled code, taking the time to declare variable types should allow
the compiler to open-code "eq" into address comparisons, if appropriate,
even in the absence of a hairy compiler.  A hairy compiler could do even
better.

Finally, in the case where someone wants efficiency at the price of
tastefulness and machine-independence, the less convenient
implementation-dependent eq could be used.
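A sketch of the distinction at issue; the results marked
implementation-dependent are exactly the cases where the two predicates
diverge:

```lisp
;; EQ is raw pointer comparison.  EQL additionally treats numbers of
;; the same type and value (and identical characters) as equivalent.
(eq  'foo 'foo)           ; => T -- symbols are interned, so EQ suffices
(eql 'foo 'foo)           ; => T
(eql 1.5 1.5)             ; => T, by definition
(eq  1.5 1.5)             ; implementation-dependent: the two floats
                          ; need not be the same object
(eql 100000000 100000000) ; => T
(eq  100000000 100000000) ; implementation-dependent for bignums
```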
-------

∂25-Jan-82  1853	Fahlman at CMU-20C 	Re: eql => eq? 
Date: 25 Jan 1982 2151-EST
From: Fahlman at CMU-20C
Subject: Re: eql => eq?
To: DILL at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 25-Jan-82 1857-EST


I don't think it would be wise to replace EQ with EQL on a wholesale basis.
On microcoded machines, this can be made to win just fine and the added
tastefulness is worth it.  But Common Lisp has to run on vaxen and such as
well, and there the difference can be a factor of three.  In scattered
use, this would not be a problem, but EQ appears in many inner loops.
-- Scott
-------

∂27-Jan-82  1034	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
Date: 27 Jan 1982 1332-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: eql => eq?
To: DILL at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 25-Jan-82 1857-EST

Possibly CL is turning into something so far from normal Lisp that I
can't use my experience with Lisp to judge it.  However in the Lisp
programming that I am used to, I often thought in terms of the actual
data structures I was building, not of course at the bit level, but at
least at the level of pointers.  When doing this sort of programming,
raw comparison of pointers was a conceptual primitive.  Certainly if you
are going to turn Lisp into ADA, which seems the trend in much recent
thinking (not just the CL design effort), EQ will clearly be, as you
say, an internal implementation primitive.  But if anyone wants to
continue to program as I did, then it will be nice to have the real EQ
around.  Now certainly in most cases where EQ is being used to compare
pointers, EQL will work just as well, since these two things differ only
on objects where EQ would not validly be used in the style of
programming I am talking about.  However it is still EQ that is the
conceptual primitive, and I somehow feel better about the language if
when I want to compare pointers I get a primitive that compares
pointers, and not one that tests to see whether what I have is something
that it thinks I should be able to compare and if not does some part of
EQUAL (or is that name out of date now, too?).
-------

∂27-Jan-82  1445	Jon L White <JONL at MIT-MC> 	Multiple mailing lists?  
Date: 27 January 1982 17:27-EST
From: Jon L White <JONL at MIT-MC>
Subject: Multiple mailing lists?
To: common-lisp at SU-AI

Is everyone on this mailing list also on the LISP-FORUM list?
I.e., is there anyone who did not get my note entitled "Two little 
suggestions for macroexpansion" which was just sent out to LISP-FORUM?

∂27-Jan-82  1438	Jon L White <JONL at MIT-MC> 	Two little suggestions for macroexpansion    
Date: 27 January 1982 17:24-EST
From: Jon L White <JONL at MIT-MC>
Subject: Two little suggestions for macroexpansion
To: LISP-FORUM at MIT-MC

Several times in the COMMON LISP discussions, individuals have
proffered a "functional" format to alleviate having lots of
keywords for simple operations: E.g. GLS's suggestion on page 137
of "Decisions on the First Draft Common Lisp Manual", which would
allow one to write 
  ((fposition #'equal x) s 0 7)  for  (position x s 0 7)
  ((fposition #'eq x) s 0 7)     for  (posq x s 0 7)

This format looks similar to something I've wanted for a long time
when macroexpanding, namely, for a form  
	foo = ((<something> . . .) a1 a2) 
then, provided that <something> isn't one of the special words for this 
context [like LAMBDA or (shudder!) LABEL] why not first expand 
(<something> . . .), yielding say <more>, and then try again on the form  
(<more> a1 a2).    Of course, (<something> . . .) may not indicate any 
macros, and <more> will just be eq to it.   The MacLISP function MACROEXPAND 
does do this, but EVAL doesn't call it in this circumstance (rather EVAL does 
a recursive sub-evaluation).

FIRST SUGGESTION:
     In the context of ((<something> . . .) a1 a2),  have EVAL macroexpand 
 the part (<something> . . .) before recursively evaluating it.

  This will have the incompatible effect that
    (defmacro foo () 'LIST)
    ((foo) 1 2)
  no longer causes an error (unbound variable for LIST), but will rather
  first expand into (list 1 2), which then evaluates to (1 2).
  Similarly, the sequence
    (defun foo () 'LIST)
    ((foo) 1 2)
  would now, incompatibly, result in an error.
  [Yes, I'd like to see COMMON LISP flush the aforesaid recursive evaluation, 
   but that's another kettle of worms we don't need to worry about now.]


SECOND SUGGESTION:
    Let FMACRO have special significance for macroexpansion in the context
 ((FMACRO . <fun>) . . .), such that this form is a macro call which is
 expanded by calling <fun> on the whole form.


As a result of these two changes, many of the "functional programming
style" examples could easily be implemented by macros.  E.g.
  (defmacro FPOSITION (predfun arg)
    `(FMACRO . (LAMBDA (FORM) 
		 `(SI:POS-HACKER ,',arg 
				 ,@(cdr form) 
				 ':PREDICATE 
				 ,',predfun))))
where SI:POS-HACKER is a version of POSITION which accepts keyword arguments
to direct the actions, at the right end of the argument list.
Notice how 

    ((fposition #'equal x) a1 a2) 
==>
    ((fmacro . (lambda (form) 
		  `(SI:POS-HACKER X ,@(cdr form) ':PREDICATE #'EQUAL)))
	  a1
	  a2)
==>
    (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL)

If any macroexpansion "cache'ing" is going on, then the original form 
((fposition #'equal x) a1 a2)  will be paired with the final
result (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL) -- e.g., either
by DISPLACEing, or by hashtable'ing such as MACROMEMO in PDP10 MacLISP.

Now unfortunately, this suggestion doesn't completely subsume the 
functional programming style, for it doesn't directly help with the
case mentioned by GLS:
  ((fposition (fnot #'numberp)) s)  for (pos-if-not #'numberp s)
Nor does it provide an easy way to use MAPCAR etc, since
  (MAPCAR (fposition #'equal x) ...)
doesn't have (fposition #'equal x) in the proper context.
[Foo, why not use DOLIST or LOOP anyway?]   Nevertheless, I've had many 
occasions where I wanted such a facility, especially when worrying about 
speed of compiled code.  

Any comments?

∂27-Jan-82  2202	RPG  	MVLet    
To:   common-lisp at SU-AI  

My view of the multiple value issue is that returning multiple values is
more like a function call than like a function return.  One cannot use
multiple values except in those cases where they are caught and spread
into variables via a MVLet or whatever.  Thus, (f (g) (h)) will ignore all
but the first values of g and h in this context.  In both the function
call and multiple value return cases the procedure that is to receive
values does not know how many values to expect in some cases.  In
addition, I believe that it is important that a function, if it can return
more than one value, can return any number it likes, and that the
programmer should be able to capture all of them somehow, even if some
must end up in a list.  The Lisp Machine multiple value scheme cannot do
this.  If we buy that it is important to capture all the values somehow,
then one of two things must happen.  First, the syntax for MVLet has to
allow something like (mvlet (x y (:rest z)) ...)  or (mvlet (x y . z)
...), which is close to the LAMBDA (or at least DEFUN-LAMBDA) syntax,
which means that it is a cognitive confusion if these two binding
descriptions are not the same.  Or, second, we have to have a version
like (mvlet l ...) which binds l to the list of returned values etc. This
latter choice, I think, is a loser.
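The rest-capture behavior RPG wants can be sketched with the M-V-CALL
construct discussed earlier; MVLET itself is hypothetical syntax:

```lisp
;; What (mvlet (x y (:rest z)) <mv-form> ...) might mean, spelled out
;; with multiple-value-call and an ordinary lambda list:
(multiple-value-call
  #'(lambda (x y &rest z) (list x y z))
  (values 1 2 3 4))
;; => (1 2 (3 4)) -- the overflow values land in the list Z, exactly
;; as &rest does for ordinary function arguments.
```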

Therefore, my current stand is that we either (1) go with the decision
we made in Boston at the November meeting, (2) allow only 2 values in
the mv case (this anticipates the plea that it is sure convenient to be
able to return a value and a flag...), or (3) flush multiple values
altogether.  I find the Lisp Machine `solution' annoyingly contrary to
intuition (even more annoying than just allowing 2 values).
			-rpg-

∂28-Jan-82  0901	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
Date: Thursday, 28 January 1982, 11:37-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: MVLet    
To: RPG at SU-AI, common-lisp at SU-AI

(1) Would you please remind me what conclusion we came to at the
November meeting?  My memory is that the issue was left up in the air
and that there was no conclusion.

(2) I think that removing multiple values, or restricting the number,
would be a terrible restriction.  Multiple values are extremely useful;
their lack has been a traditional weakness in Lisp and I'd hate to see
that go on.

(3) In Zetalisp you can always capture all values by using
(multiple-value-list <form>).  Any scheme that has only multiple-value
and multiple-value-bind and not multiple-value-list is clearly a loser;
the Lisp-Machine-like alternative has got to be a proposal that has all
three Zetalisp forms (not necessarily under those names, of course).
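The three forms, sketched with FLOOR (taken here to return quotient and
remainder as two values); the exact Zetalisp syntax of the assignment
form is assumed:

```lisp
;; 1. multiple-value-bind: spread the values into new bindings.
(multiple-value-bind (q r) (floor 7 2)
  (list q r))                        ; => (3 1)

;; 2. multiple-value-list: capture however many values arrive.
(multiple-value-list (floor 7 2))    ; => (3 1)

;; 3. multiple-value: setq-like assignment of existing variables
;;    (Zetalisp name; syntax assumed):
;; (multiple-value (q r) (floor 7 2))
```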

∂24-Jan-82  0127	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 04:24-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I agree with Fahlman about binding constructs.
I want LAMBDA to be the way it is, and LET to be the way it is,
and certainly not the same.

As for multiple values, if LET is fully extended to do what
SETF can do, then (LET (((VALUES A B C) m-v-returning-form)) ...)
can be used to replace M-V-BIND, just as (SETF (VALUES A B C) ...)
can replace MULTIPLE-VALUES.  I never use MULTIPLE-VALUES any more
because I think that the SETF style is clearer.

∂24-Jan-82  0306	Richard M. Stallman <RMS at MIT-AI>
Date: 24 January 1982 06:02-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I would like to clear up a misunderstanding that seems to be
prevalent.  The MIT Lisp machine system, used by Symbolics and LMI, is
probably going to be converted to support Common Lisp (which is the
motivation for my participation in the design effort for Common Lisp
clean).  Whenever this happens, Common Lisp will be available on
the CADR machine (as found at MIT and as sold by LMI and Symbolics)
and the Symbolics L machine (after that exists), and on the second
generation LMI machine (after that exists).

I can't speak for LMI's opinion of Common Lisp, but if MIT converts,
LMI will certainly do so.  As the main Lisp machine hacker at MIT, I
can say that I like Common Lisp.

It is not certain when either of the two new machines will appear, or
when the Lisp machine system itself will support Common Lisp.  Since
these three events are nearly independent, they could happen in any
order.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:23-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

To clear up another random point: the name "Zetalisp" is not a Symbolics
proprietary name.  It is just a name that has been made up to replace
the ungainly name "Lisp Machine Lisp".  The reason for needing a name is
that I belive that people associate the Lisp Machine with Maclisp,
including all of the bad things that they have traditionally belived
about Maclisp, like that it has a user interface far inferior to that of
Interlisp.

I certainly hope that all of the Lisp Machines everywhere will convert
to Common Lisp together.

∂24-Jan-82  1925	Daniel L. Weinreb <dlw at MIT-AI>  
Date: Sunday, 24 January 1982, 22:20-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
To: common-lisp at SU-AI

If I understand what RPG is saying then I think that I am not convinced
by his point.  I don't think that just because multiple-value-bind takes
a list of variables that are being bound to variables means that it HAS
to have all the features that LAMBDA combinations have, in the name of
language simplicity, because I just don't think that the inconsistency
there bothers me very much.  It is a very localized inconsistency and I
really do not belive it is going to confuse people much.

However, I still object to RMS's proposal as am still opposed to having
"destructuring LET".  I have flamed about this enough in the past that I
will not do it now.  However, having a "destructuring-bind" (by some
name) form that is like LET except that it destructures might be a
reasonable solution to providing a way to allow multiple-value-bind work
without any perceived language inconsistency.

∂24-Jan-82  2008	George J. Carrette <GJC at MIT-MC> 	adding to kernel   
Date: 24 January 1982 23:06-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  adding to kernel
To: Fahlman at CMU-20C
cc: common-lisp at SU-AI

    From: Fahlman at CMU-20C
    It is hard to comment on whether NIL is closer to being released than
    other Common Lisp implementations, since you don't give us a time
    estimate for NIL.

Oh. I had announced a release date of JAN 30. But, with the air-conditioner's
down for greater than a week that's got to go to at lease FEB 10. But
FEB 10 is the first week of classes at MIT, so I'll have JM, GJS, and
others on my case to get other stuff working. Sigh.
By release I mean that it is in a useful state, i.e. people will be able
to run their lisp programs in it. We have two concrete tests though, 
[1] To bring up "LSB".
   [A] This gives us stuff like a full hair FORMAT.
   [B] Martin's parser.
[2] TO run Macsyma on the BEGIN, SIN, MATRIX, ALGSYS, DEFINT, ODE2 and 
    HAYAT demos. 

Imagine bringing yourself and a tape to a naked VMS site, and installing
Emacs, a modern lisp, and Macsyma, in that order. You can really
blow away the people who have heard about these things but never
had a chance to use them, especially on their very own machine.
One feeling that makes the hacking worthwhile.

Anyway, when I brought Macsyma over to the Plasma Fusion
Center Alcator Vax, I was doing all the taylor series, integrals and
equation solving they threw at me. Stuff like
INTEGRATE(SIN(X↑2)*EXP(X↑2)*X↑2,X); Then DIFF it, then RATSIMP and TRIGREDUCE
to get back to the starting point.(try that on MC and see how many
files get loaded). (Sorry, gibberish to non-macsyma-hackers.)
=> So I can say that macsyma is released to MIT sites now. (MIT-LNS too). 
   People can use it and I'll field any bug reports. <=

Point of Confusion: Some people are confused as to what Common-Lisp is.
                    Even people at DEC.

-GJC

∂24-Jan-82  2227	Fahlman at CMU-20C 	Sequences 
Date: 25 Jan 1982 0125-EST
From: Fahlman at CMU-20C
Subject: Sequences
To: common-lisp at SU-AI


I have spent a couple of days mulling over RPG's suggestion for putting
the keywords into a list in functional position.  I thought maybe I
could get used to the unfamiliarity of the syntax and learn to like
this proposal.  Unfortunately, I can't.

I do like Guy's proposal for dropping START/END arguments and also
several of the suggestions that Moon made.  I am trying to merge all
of this into a revised proposal in the next day or two.  Watch this
space.

-- Scott
-------

∂24-Jan-82  2246	Kim.fateman at Berkeley 	NIL/Macsyma    
Date: 24 Jan 1982 22:40:50-PST
From: Kim.fateman at Berkeley
To: gjc@mit-mc
Subject: NIL/Macsyma 
Cc: common-lisp@SU-AI

Since it has been possible to run Macsyma on VMS sites (under Eunice or
its precursor) since April, 1980, (when we dropped off a copy at LCS),
it is not clear to me what GJC's ballyhoo is about.  If the physics
sites are only now getting a partly working Macsyma for VMS, it only
brings to mind the question of whether LCS ever sent out copies of the
VMS-Macsyma we gave them to other MIT sites.

But getting Maclisp programs up under NIL should not be the benchmark,
nor is it clear what the relationship to common lisp is.
Having macsyma run under common lisp (whatever that will be)
would be very nice, of course,
whether having macsyma run under NIL is a step in that direction or
not.  It might also be nice to see, for example, one of the big interlisp
systems.

∂25-Jan-82  1436	Hanson at SRI-AI 	NIL and DEC VAX Common LISP
Date: 25 Jan 1982 1436-PST
From: Hanson at SRI-AI
Subject: NIL and DEC VAX Common LISP
To:   rpg at SU-AI
cc:   hanson

Greetings:
	I understand from ARPA that DEC VAX Common Lisp may become a
reality and that you are closely involved.  If that is true, we in the
SRI vision group would like to work closely with you in defining the
specifications so that the resulting language can actually be used for
vision computations with performance and convenience comparable to
Algol-based languages.
	If this is not true, perhaps you can refer me to the people
I should talk with to make sure the mistakes of FRANZLISP are not
repeated in COMMON LISP.
	Thanks,  Andy Hanson  859-4395

ps - Where can we get Common Lisp manuals?
-------

∂25-Jan-82  1558	DILL at CMU-20C 	eql => eq?   
Date: 25 Jan 1982 1857-EST
From: DILL at CMU-20C
Subject: eql => eq?
To: common-lisp at SU-AI

Proposal: rename the function "eq" in common lisp to be something like
"si:internal-eq-predicate", and the rename "eql" to be "eq".  This would
have several advantages.

 * Simplification by reducing the number of equality tests.

 * Simplification by reducing the number of different versions of
   various predicates that depend on the type of equality test you
   want.

 * Greater machine independence of lisp programs (whether eq and equal
   are the same function for various datatypes is heavily 
   implementation-dependent, while eql is defined to be relatively 
   machine-independent; furthermore, functions like memq in the current
   common lisp proposal make it easier to use eq comparisons than eql).

Possible disadvantages:

 * Do people LIKE having, say, numbers with identical values not be eq?
   If so, they won't like this.

 * Efficiency problems.

I don't believe the first complaint.  If there are no destructive
operations defined for an object, eq and equal ought to do the same
thing.

The second complaint should not be significant in interpreted code,
since overhead of doing a type-dispatch will probably be insignificant
in comparison with, say, finding the right subr and calling it.

In compiled code, taking the time to declare variable types should allow
the compiler to open-code "eq" into address comparisons, if appropriate,
even in the absence of a hairy compiler.  A hairy compiler could do even
better.

Finally, in the case where someone wants efficiency at the price of
tastefulness and machine-independence, the less convenient
implementation-dependent eq could be used.
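
A small illustration of the distinction at issue (a sketch; the EQ results
on numbers are implementation-dependent, while the EQL results are as the
current proposal defines them):

```lisp
(eql 3 3)       ;=> T  (same type, same value)
(eql 3.0 3.0)   ;=> T
(eql 3 3.0)     ;=> NIL (different types)
(eq  3.0 3.0)   ;may be T or NIL, depending on the implementation
(eq  'foo 'foo) ;=> T  (symbols are interned, so EQ is reliable here)
```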
-------

∂25-Jan-82  1853	Fahlman at CMU-20C 	Re: eql => eq? 
Date: 25 Jan 1982 2151-EST
From: Fahlman at CMU-20C
Subject: Re: eql => eq?
To: DILL at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 25-Jan-82 1857-EST


I don't think it would be wise to replace EQ with EQL on a wholesale basis.
On microcoded machines, this can be made to win just fine and the added
tastefulness is worth it.  But Common Lisp has to run on vaxen and such as
well, and there the difference can be a factor of three.  In scattered
use, this would not be a problem, but EQ appears in many inner loops.
-- Scott
-------

∂28-Jan-82  0901	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
Date: Thursday, 28 January 1982, 11:37-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: MVLet    
To: RPG at SU-AI, common-lisp at SU-AI

(1) Would you please remind me what conclusion we came to at the
November meeting?  My memory is that the issue was left up in the air
and that there was no conclusion.

(2) I think that removing multiple values, or restricting the number,
would be a terrible restriction.  Multiple values are extremely useful;
their lack has been a traditional weakness in Lisp and I'd hate to see
that go on.

(3) In Zetalisp you can always capture all values by using
(multiple-value-list <form>).  Any scheme that has only multiple-value
and multiple-value-bind and not multiple-value-list is clearly a loser;
the Lisp-Machine-like alternative has got to be a proposal that has all
three Zetalisp forms (not necessarily under those names, of course).

∂28-Jan-82  1235	Fahlman at CMU-20C 	Re: MVLet      
Date: 28 Jan 1982 1522-EST
From: Fahlman at CMU-20C
Subject: Re: MVLet    
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 28-Jan-82 0102-EST


I agree with DLW that we must retain M-V-LIST.  I never meant to exclude
that.

As for RPG's latest blast, I agree with some of his arguments but not
with his conclusions.  First, I think that the way multiple values are
actually used, in the overwhelming majority of cases, is more like a
return than a function call.  You call INTERN or FLOOR or some
user-written function, and you know what values it is going to return,
what each value means, and which ones you want to use.  In the case of
FLOOR, you might want the quotient or the remainder or both.  The old,
simple, Lisp Machine forms give you a simple and convenient way to
handle this common case.  If a function returns two often-used values
plus some others that are arcane and hard to remember, you just catch
the two you want and let the others (however many there are) evaporate.
M-V-LIST is available to programs (tracers for example) that want to
intercept all the values, no matter what.
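
The common case described above might look like this (using the
Zetalisp-style names; the exact Common Lisp names were still under
discussion at this point):

```lisp
;; Catch both values of FLOOR:
(multiple-value-bind (quotient remainder)
    (floor 7 2)
  (list quotient remainder))        ;=> (3 1)

;; Catch only the first value; the remainder simply evaporates:
(let ((q (floor 7 2)))
  q)                                ;=> 3

;; Intercept all the values, however many there are:
(multiple-value-list (floor 7 2))   ;=> (3 1)
```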

Having said that, I agree that there are also some cases where you want
the catching of values to be more like a function call than a return,
since it may be somewhat unpredictable what is going to be bubbling up
from below, and the lambda list with optionals and rests has evolved as
a good way to handle this.  I submit that the cause of uniformity is
best served by actually making these cases be function calls, rather
than faking it.  The proposed M-V-CALL mechanism does exactly this when
given one value-returning "argument".  The proposal to let M-V-CALL
take more than one "argument" form is dangerous, in my view -- it could
easily lead to impenetrable and unmaintainable code -- but if it makes
John McCarthy happy, I'm willing to leave it in, perhaps with a warning
to users not to go overboard with this.
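
The multi-"argument" M-V-CALL behavior at issue can be illustrated like so
(a sketch, with the name as in the then-current proposal):

```lisp
;; M-V-CALL makes catching values an ordinary function call:
;; all the values of each "argument" form become arguments to #'+.
(multiple-value-call #'+ (floor 7 2) (floor 10 3))
;; FLOOR returns 3 and 1, then 3 and 1, so this computes
;; (+ 3 1 3 1) => 8
```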

So I think RPG has made a strong case for needing something like
M-V-CALL, and I propose that M-V-CALL itself is the best form for this.
I am much less convinced by his argument that the multiple value SETQing
and BINDing forms have to be beaten into this same shape or thrown out
altogether.  Simple forms for simple things!

And even if RPG's aesthetic judgement were to prevail, I would still
have the problem that, because they have the semantics of PROGNs and not
of function calls, the Lambda-list versions of these functions would be
extremely painful to implement.

As I see it, if RPG wants to have a Lambda-binding form for value
catching, M-V-CALL gives this to him in a way that is clean and easily
implementable.  If what he wants is NOT to have the simple Lisp Machine
forms included, and to force everything through Lambda-list forms in the
name of uniformity, then we have a real problem.

-- Scott
-------

∂28-Jan-82  1416	Richard M. Stallman <rms at MIT-AI> 	Macro expansion suggestions 
Date: 28 January 1982 17:13-EST
From: Richard M. Stallman <rms at MIT-AI>
Subject: Macro expansion suggestions
To: common-lisp at SU-AI

If (fposition #'equal x) is defined so that when in function position
it "expands" to a function, then (mapcar (fposition ...)) loses
as JONL says, but (mapcar #'(fposition ...)...) can perhaps be
made to win.  If (function (fposition...)) expands itself into
(function (lambda (arg arg...) ((fposition ...) arg arg...)))
it will do the right thing.  The only problem is to determine
how many args are needed, which could be a property of the symbol
fposition, or could appear somewhere in its definition.

Alternatively, the definition of fposition could have two "operations"
defined: one to expand when given an ordinary form with (fposition ...)
as its function, and one to expand when given an expression to apply
(fposition ...) to.

∂28-Jan-82  1914	Howard I. Cannon <HIC at MIT-MC> 	Macro expansion suggestions    
Date: 28 January 1982 19:46-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  Macro expansion suggestions
To: common-lisp at SU-AI


I have sent the following to GLS as a proposal for Lambda Macros in
Common Lisp.  It is implemented on the Lisp Machine, and is installed
in Symbolics system 202 (unreleased), and will probably be in MIT
system 79.

You could easily use them to implement functional programming style,
and they of course work with #' as RMS suggests.

The text is in Bolio input format, sorry.

--------

.section Lambda macros

Lambda macros may appear in functions where LAMBDA would have previously
appeared.  When the compiler or interpreter detects a function whose CAR
is a lambda macro, they "expand" the macro in much the same way that
ordinary Lisp macros are expanded -- the lambda macro is called with the
function as its argument, and is expected to return another function as
its value.  Lambda macros may be accessed with the (ε3:lambda-macroε*
ε2nameε*) function specifier.

.defspec lambda-macro function-spec lambda-list &body body
Analogously with ε3macroε*, defines a lambda macro to be called
ε2function-specε*. ε2lambda-listε* should consist of one variable, which
will be the function that caused the lambda macro to be called.  The
lambda macro must return a function.  For example:

.lisp
(lambda-macro ilisp (x)
  `(lambda (&optional ,@(second x) &rest ignore) . ,(cddr x)))
.end←lisp

would define a lambda macro called ε3ilispε* which would cause the
function to accept arguments like a standard Interlisp function -- all
arguments are optional, and extra arguments are ignored.  A typical call
would be:

.lisp
(fun-with-functional-arg #'(ilisp (x y z) (list x y z)))
.end←lisp

Then, any calls to the functional argument that
ε3fun-with-functional-argε* executes will pass arguments as if the
number of arguments did not matter.
.end←defspec

.defspec deflambda-macro
ε3deflambda-macroε* is like ε3defmacroε*, but defines a lambda macro
instead of a normal macro.
.end←defspec

.defspec deflambda-macro-displace
ε3deflambda-macro-displaceε* is like ε3defmacro-displaceε*, but defines
a lambda macro instead of a normal macro.
.end←defspec

.defspec deffunction function-spec lambda-macro-name lambda-list &body body 
ε3deffunctionε* defines a function with an arbitrary lambda macro
instead of ε3lambdaε*.  It takes arguments like ε3defunε*, except that
the argument immediately following the function specifier is the name of
the lambda macro to be used.  ε3deffunctionε* expands the lambda macro
immediately, so the lambda macro must have been previously defined.

For example:

.lisp
(deffunction some-interlisp-like-function ilisp (x y z)
  (list x y z))
.end←lisp

would define a function called ε3some-interlisp-like-functionε*, that
would use the lambda macro called ε3ilispε*.  Thus, the function would
do no argument-count checking.
.end←defspec

∂27-Jan-82  1633	Jonl at MIT-MC Two little suggestions for macroexpansion
Several times in the COMMON LISP discussions, individuals have
proffered a "functional" format to alleviate having lots of
keywords for simple operations: E.g. GLS's suggestion on page 137
of "Decisions on the First Draft Common Lisp Manual", which would
allow one to write 
  ((fposition #'equal x) s 0 7)  for  (position x s 0 7)
  ((fposition #'eq x) s 0 7)     for  (posq x s 0 7)

This format looks similar to something I've wanted for a long time
when macroexpanding, namely, for a form  
	foo = ((<something> . . .) a1 a2) 
then, provided that <something> isn't one of the special words for this 
context [like LAMBDA or (shudder!) LABEL] why not first expand 
(<something> . . .), yielding say <more>, and then try again on the form  
(<more> a1 a2).    Of course, (<something> . . .) may not indicate any 
macros, and <more> will just be eq to it.   The MacLISP function MACROEXPAND 
does do this, but EVAL doesn't call it in this circumstance (rather EVAL does 
a recursive sub-evaluation).

FIRST SUGGESTION:
     In the context of ((<something> . . .) a1 a2),  have EVAL macroexpand 
 the part (<something> . . .) before recursively evaluating it.

  This will have the incompatible effect that
    (defmacro foo () 'LIST)
    ((foo) 1 2)
  no longer causes an error (unbound variable for LIST), but will rather
  first expand into (list 1 2), which then evaluates to (1 2).
  Similarly, the sequence
    (defun foo () 'LIST)
    ((foo) 1 2)
  would now, incompatibly, result in an error.
  [Yes, I'd like to see COMMON LISP flush the aforesaid recursive evaluation, 
   but that's another kettle of worms we don't need to worry about now.]


SECOND SUGGESTION
    Let FMACRO have special significance for macroexpansion in the context
 ((FMACRO . <fun>) . . .), such that this form is a macro call which is
 expanded by calling <fun> on the whole form.


As a result of these two changes, many of the "functional programming
style" examples could easily be implemented by macros.  E.g.
  (defmacro FPOSITION (predfun arg)
    `(FMACRO . (LAMBDA (FORM) 
		 `(SI:POS-HACKER ,',arg 
				 ,@(cdr form) 
				 ':PREDICATE 
				 ,',predfun))))
where SI:POS-HACKER is a version of POSITION which accepts keyword arguments
to direct the actions, at the right end of the argument list.
Notice how 

    ((fposition #'equal x) a1 a2) 
==>
    ((fmacro . (lambda (form) 
		  `(SI:POS-HACKER X ,@(cdr form) ':PREDICATE #'EQUAL)))
	  a1
	  a2)
==>
    (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL)

If any macroexpansion "cache'ing" is going on, then the original form 
((fposition #'equal x) a1 a2)  will be paired with the final
result (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL) -- e.g., either
by DISPLACEing, or by hashtable'ing such as MACROMEMO in PDP10 MacLISP.

Now unfortunately, this suggestion doesn't completely subsume the 
functional programming style, for it doesn't directly help with the
case mentioned by GLS:
  ((fposition (fnot #'numberp)) s)  for (pos-if-not #'numberp s)
Nor does it provide an easy way to use MAPCAR etc, since
  (MAPCAR (fposition #'equal x) ...)
doesn't have (fposition #'equal x) in the proper context.
[Foo, why not use DOLIST or LOOP anyway?]   Nevertheless, I've had many 
occasions where I wanted such a facility, especially when worrying about 
speed of compiled code.  

Any comments?

∂28-Jan-82  1633	Fahlman at CMU-20C 	Re: Two little suggestions for macroexpansion
Date: 28 Jan 1982 1921-EST
From: Fahlman at CMU-20C
Subject: Re: Two little suggestions for macroexpansion
To: JONL at MIT-MC
cc: LISP-FORUM at MIT-MC
In-Reply-To: Your message of 27-Jan-82 1724-EST


JONL's suggestion looks pretty good to me.  Given this sort of facility,
it would be easier to experiment with functional styles of programming,
and nothing very important is lost in the way of useful error checking,
at least nothing that I can see.

"Experiment" is a key word in the above comment.  I would not oppose the
introduction of such a macro facility into Common Lisp, but I would be
very uncomfortable if a functional-programming style started to pervade
the base language -- I think we need to play with such things for a
couple of years before locking them in.

-- Scott
-------

∂29-Jan-82  0945	DILL at CMU-20C 	Re: eql => eq?    
Date: 29 Jan 1982 1221-EST
From: DILL at CMU-20C
Subject: Re: eql => eq?
To: HEDRICK at RUTGERS
cc: common-lisp at SU-AI
In-Reply-To: Your message of 27-Jan-82 1332-EST

If an object in a Common Lisp is defined to have a particular type of
semantics (basically, you would like it to be an "immediate" object if
you could only implement that efficiently), programmers should not have
to worry about whether it is actually implemented using pointers.  If
you think about your data structures in terms of pointers in the
implementation, I contend that you are thinking about them at the wrong
level (unless you have decided to sacrifice commonality in order to
wring nanoseconds out of your code).  The reason you have to think about
it at this level is that the Lisp dialect you use lets the
implementation shine through when it shouldn't.

With the current Common Lisp definition, users will have to go to extra
effort to write implementation-independent code. For example, if your
implementation makes all numbers (or characters or whatever) that are
EQUAL also EQ, you will have to stop and force yourself to use MEMBER or
MEM instead of MEMQ, because other implementations may use pointer
implementations of numbers (or worse, your program will work for some
numbers and not others, because you are in a maclisp compatibility mode
and numbers less than 519 are immediate but others aren't).  My belief
is that Common Lisp programs should end up being common, unless the user
has made a conscious decision to make his code implementation-dependent.
The only reason to decide against a feature that would promote this is
if it would result in serious performance losses.

Even if an implementation is running on a VAX, it is still possible to
declare data structures (with the proposed "THE" construct, perhaps) so
that the compiler can know to use the internal EQ when possible, or to use a
more specific predicate.  It is also not clear if compiled code for EQL
has to be expensive, depending on how hard it is to determine the type
of a datum -- it doesn't seem totally unreasonable that a single
instruction could determine whether to use the internal EQ (a single
instruction), or the hairier EQL code.

In what way is this "turning Lisp into Ada"?
-------

∂29-Jan-82  1026	Guy.Steele at CMU-10A 	Okay, you hackers
Date: 29 January 1982 1315-EST (Friday)
From: Guy.Steele at CMU-10A
To: Fateman at UCB-C70, gjc at MIT-MC
Subject:  Okay, you hackers
CC: common-lisp at SU-AI
Message-Id: <29Jan82 131549 GS70@CMU-10A>

It would be of great interest to the entire LISP community, now that
MACSYMA is up and running on VAX on two different LISPs, to get some
comparative timings.  There are standard MACSYMA demo files, and MACSYMA
provides for automatic timing.  Could you both please run the set of demo
files GJC mentioned, namely BEGIN, SIN, MATRIX, ALGSYS, DEFINT, ODE2, and
HAYAT, and send the results to RPG@SAIL for analysis?  (You're welcome,
Dick!)
--Guy

∂29-Jan-82  1059	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: eql => eq?  
Date: 29 Jan 1982 1354-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: eql => eq?
To: DILL at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 29-Jan-82 1221-EST

I have gotten two rejoinders to my comments about the conceptual
usefulness of EQ, both of which explained to me that EQ is not useful
for numbers or any other objects which may be immediate in some
implementations and pointers in others.  I am well aware of that.
Clearly if I am interested either in comparing the values of two numbers
or in seeing whether two general objects will look the same when
printed, EQ is not the right thing to use.  But this has been true back
from the days of Lisp 1.5.  I claim however that there are many cases
where I know that what I am dealing with is in fact a pointer, and what
I want is something that simply checks to see whether two objects are
identical.  In this case, I claim that it is muddying the waters
conceptually to use a primitive that checks for certain kinds of objects
and does tests oriented towards seeing whether they look the same when
printed, act the same when multiplied, or something else.  Possibly it
would be sensible to have a primitive that works like EQ for pointers
and gives an error otherwise.  But if what you are trying to do is to
see whether two literal atoms or CONS cells are the same, I can't see
any advantage to something that works like EQ for pointers and does
something else otherwise.  I can even come up with cases where EQ makes
sense for real numbers.  I can well imagine a program where you have two
lists, one of which is a proper subset of the other.   Depending upon
how they were constructed, it might well be the case that if something
from the larger list is a member of the smaller list, it is a member
using EQ, even if the object involved is a real number. I trust that the
following code will always print T, even if X is a real number.
   (SETQ BIG-LIST (CONS X BIG-LIST))
   (SETQ SMALL-LIST (CONS X SMALL-LIST))
   (PRINT (EQ (CAR BIG-LIST) (CAR SMALL-LIST)))
-------

∂29-Jan-82  1146	Guy.Steele at CMU-10A 	MACSYMA timing   
Date: 29 January 1982 1442-EST (Friday)
From: Guy.Steele at CMU-10A
To: George J. Carrette <GJC at MIT-MC> 
Subject:  MACSYMA timing
CC: common-lisp at SU-AI
In-Reply-To:  George J. Carrette's message of 29 Jan 82 13:30-EST
Message-Id: <29Jan82 144201 GS70@CMU-10A>

Well, I understand your reluctance to release timings before the
implementation has been properly tuned; but on the other hand,
looking at the situation in an abstract sort of way, I don't understand
why someone willing to shoot off his mouth and take unsupported pot
shots in a given forum should be unwilling to provide in that same
forum some objective data that might help to douse the flames (and
this goes for people on both sides of the fence).  In short, I merely
meant to suggest a way to prove that the so-called ballyhoo was
worthwhile (not that this is the only way to prove it).
--Guy

∂29-Jan-82  1204	Guy.Steele at CMU-10A 	Re: eql => eq?   
Date: 29 January 1982 1452-EST (Friday)
From: Guy.Steele at CMU-10A
To: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject:  Re: eql => eq?
CC: common-lisp at SU-AI
In-Reply-To:  HEDRICK@RUTGERS's message of 29 Jan 82 13:54-EST
Message-Id: <29Jan82 145243 GS70@CMU-10A>

(DEFUN FOO (X)
  (SETQ BIG-LIST (CONS X BIG-LIST))
  (SETQ SMALL-LIST (CONS X SMALL-LIST))
  (PRINT (EQ (CAR BIG-LIST) (CAR SMALL-LIST))))

(DEFUN BAR (Z) (FOO (*$ Z 2.0)))

Compile this using the MacLISP compiler.  Then (BAR 3.0) reliably
prints NIL, not T.  The reason is that the compiled code for FOO
gets, as its argument X, a pdl number passed to it by BAR.  The code
for FOO happens to choose to make two distinct heap copies of X,
rather than one, and so the cars of the two lists will contain
distinct pointers.
--Guy

∂29-Jan-82  1225	George J. Carrette <GJC at MIT-MC> 	MACSYMA timing
Date: 29 January 1982 15:23-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  MACSYMA timing
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

All I said was that Macsyma was running, and I felt I had to
do that because many people thought that NIL was not a working
language. I get all sorts of heckling from certain people anyway,
so a few extra unsupported pot-shots aren't going to bother me.
Also, I have limited time now to complete a paper on the timing
figures that JM wants me to submit to the conference on lisp
and applicable languages, taking place at CMU right? So you
get the picture.

But, OK, I'll give two timing figures, VAX-780 speed in % of KL-10.

Compiling "M:MAXII;NPARSE >"   48% of KL-10.
INTEGRATE(1/(X↑3-1),X)         12% of KL-10.

Obviously the compiler is the most-used program in NIL, so it has been tuned.
Macsyma has not been tuned.

Note well, I say "Macsyma has not been tuned" not "NIL has not been tuned."
Why? Because NIL has been tuned, lots of design thought by many people,
and lots of work by RWK and RLB to provide fast lisp primitives in the VAX.
It is Macsyma which needs to be tuned for NIL. This may not be very
interesting! Purely source-level hacks. For example, the Franz people
maintain entirely separate versions of large (multi-page)
functions from the core of Macsyma for the purpose
of making Macsyma run fast in Franz.
=> There is nothing wrong with this when it is worth the time saved
   in solving the user's problems. I think for Macsyma it is worth it. <=

The LISPM didn't need special hacks though. This is interesting,
I think...

-gjc

∂29-Jan-82  1324	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
Date: 29 Jan 1982 1620-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re:  Re: eql => eq?
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI
In-Reply-To: Your message of 29-Jan-82 1452-EST

I call that a bug.
-------

∂29-Jan-82  1332	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re:  Re: eql => eq?  
Date: 29 Jan 1982 1627-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re:  Re: eql => eq?
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI
In-Reply-To: Your message of 29-Jan-82 1452-EST

I seem to recall that it was a basic property of Lisp that
  (EQ X (CAR (CONS X Y)))
If your compiler compiles code that does not preserve this property,
the kindest thing I have to say is that it is premature optimization.
-------

∂29-Jan-82  1336	Guy.Steele at CMU-10A 	Re: Re: eql => eq?    
Date: 29 January 1982 1630-EST (Friday)
From: Guy.Steele at CMU-10A
To: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject:  Re: Re: eql => eq?
CC: common-lisp at SU-AI
In-Reply-To:  HEDRICK@RUTGERS's message of 29 Jan 82 16:20-EST
Message-Id: <29Jan82 163020 GS70@CMU-10A>

Well, it is at least a misfeature that SETQ and lambda-binding
do not preserve EQ-ness.  It is precisely for this reason that
the predicate EQL was proposed: this is the strongest equivalence
relation on S-expressions which is preserved by SETQ and binding.
Notice that this definition is in terms of user-level semantics
rather than implementation technique.
It certainly was a great feature that user semantics and implementation
coincided and had simple definitions in EQ in the original LISP.
MacLISP was nudged from this by the great efficiency gains to be had
for numerical code, and it didn't bother too many users.
The Swiss Cheese draft of the Common LISP manual does at least make
all this explicit: see the first page of the Numbers chapter.  The
disclaimer is poorly stated (my fault), but it is there for the nonce.
--Guy

∂29-Jan-82  1654	Richard M. Stallman <RMS at MIT-AI> 	Trying to implement FPOSITION with LAMBDA-MACROs.    
Date: 29 January 1982 19:46-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Trying to implement FPOSITION with LAMBDA-MACROs.
To: HIC at MIT-AI, common-lisp at SU-AI

LAMBDA-MACRO is a good hack but is not exactly what JONL was suggesting.

The idea of FPOSITION is that ((FPOSITION X Y) MORE ARGS)
expands into (FPOSITION-INTERNAL X Y MORE ARGS), and
((FPOSITION) MORE ARGS) into (FPOSITION-INTERNAL NIL NIL MORE ARGS).
In JONL's suggestion, the expander for FPOSITION operates on the
entire form in which the call to the FPOSITION-list appears, not
just to the FPOSITION-list.  This allows FPOSITION to be handled
straightforwardly; but also causes trouble with (FUNCTION (FPOSITION
...)) where lambda-macros automatically work properly.

It is possible to define FPOSITION using lambda-macros by making
(FPOSITION X Y) expand into 
(LAMBDA (&REST ARGS) (FUNCALL* 'FPOSITION-INTERNAL X Y ARGS))
but this does make worse code when used in an internal lambda.
It would also be possible to use an analogous SUBST function
but first SUBST functions have to be made to work with &REST args.
I think I can do this, but are SUBST functions in Common Lisp?
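
The expansion RMS describes could be written as a lambda macro roughly as
follows (a sketch only: FPOSITION-INTERNAL is hypothetical, and FUNCALL*
is the Zetalisp spread-last-argument variant of FUNCALL):

```lisp
(lambda-macro fposition (form)
  ;; FORM is, e.g., (FPOSITION X Y); expand into a closure that
  ;; passes the captured arguments, plus whatever arguments arrive
  ;; at call time, on to the hypothetical FPOSITION-INTERNAL.
  `(lambda (&rest args)
     (funcall* 'fposition-internal ,(second form) ,(third form) args)))
```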

∂29-Jan-82  2149	Kim.fateman at Berkeley 	Okay, you hackers   
Date: 29 Jan 1982 20:31:23-PST
From: Kim.fateman at Berkeley
To: guy.steele@cmu-10a
Subject: Okay, you hackers
Cc: common-lisp@SU-AI

I think that when GJC says that NIL/Macsyma runs the "X" demo, it
is kind of like the dog that plays checkers.  It is
remarkable, not for how well it plays, but for the fact that it plays at all.

(And I believe it is creditable [if] NIL runs Macsyma at all... I
know how hard it is, so don't get me wrong..)
Anyway, the standard timings we have had in the past, updated somewhat:

MC-Macsyma, Vaxima and Lisp Machine timings for DEMO files
(fg genral, fg rats, gen demo, begin demo)
(garbage collection times excluded.)  An earlier version of this
table was prepared and distributed in April, 1980.  The only
column I have changed is the 2nd one.

MC Time	     VAXIMA    	128K lispm     192K lispm       256K lispm
4.119	   11.8   sec.  43.333 sec.     19.183 sec.    16.483 sec.  
2.639	    8.55  sec.  55.916 sec.     16.416 sec.    13.950 sec. 
3.141	   14.3   sec. 231.516 sec.     94.933 sec.    58.166 sec.  
4.251	   13.1   sec. 306.350 sec.    125.666 sec.    90.716 sec. 


(Berkeley VAX 11/780 UNIX (Kim) Jan 29, 1982,  KL-10 MIT-MC ITS April 9, 1980.)
Kim has no FPA, and 2.5meg of memory.  Actually, 2 of these times are
slower than in 1980, 2 are faster. 

Of course, GJC could run these at MIT on his Franz/Vaxima/Unix system, and
then bring up his NIL/VMS system and time them again.

∂29-Jan-82  2235	HIC at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs.  
Date: Friday, 29 January 1982  22:13-EST
From: HIC at SCRC-TENEX
To:   Richard M. Stallman <RMS at MIT-AI>
Cc:   common-lisp at SU-AI
Subject: Trying to implement FPOSITION with LAMBDA-MACROs.

    Date: Friday, 29 January 1982  19:46-EST
    From: Richard M. Stallman <RMS at MIT-AI>
    To:   HIC at MIT-AI, common-lisp at SU-AI
    Re:   Trying to implement FPOSITION with LAMBDA-MACROs.

    LAMBDA-MACRO is a good hack but is not exactly what JONL was suggesting.
Yes, I know.  I think it's the right thing, however.

    The idea of FPOSITION is that ((FPOSITION X Y) MORE ARGS)
    expands into (FPOSITION-INTERNAL X Y MORE ARGS), and
    ((FPOSITION) MORE ARGS) into (FPOSITION-INTERNAL NIL NIL MORE ARGS).
    In JONL's suggestion, the expander for FPOSITION operates on the
    entire form in which the call to the FPOSITION-list appears, not
    just to the FPOSITION-list.  This allows FPOSITION to be handled
    straightforwardly; but also causes trouble with (FUNCTION (FPOSITION
    ...)) where lambda-macros automatically work properly.
Yes, that's right.  If you don't care about #'(FPOSITION ..), then you can have
the lambda macro expand into a real macro which can see the form, so you
can use lambda macros to simulate JONL's behavior quite easily.

    It is possible to define FPOSITION using lambda-macros by making
    (FPOSITION X Y) expand into 
    (LAMBDA (&REST ARGS) (FUNCALL* 'FPOSITION-INTERNAL X Y ARGS))
    but this does make worse code when used in an internal lambda.
    It would also be possible to use an analogous SUBST function
    but first SUBST functions have to be made to work with &REST args.
    I think I can do this, but are SUBST functions in Common Lisp?
Yes, this is what I had in mind.  The fact that this makes worse code
when used as an internal lambda is a bug in the compiler, not an
intrinsic fact of Common-Lisp or of the Lisp Machine.  However, it would
be ok if SUBSTs worked with &REST args too.
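A sketch of the lambda-macro RMS describes, in Lisp Machine style.  The
defining form DEFLAMBDA-MACRO and the spread-funcall FUNCALL* are assumed
names here (defining syntax varied between implementations):

```lisp
;; Hypothetical sketch -- DEFLAMBDA-MACRO and FUNCALL* are assumed names.
;; In function position, (FPOSITION X Y) expands into a closure that
;; passes X, Y, and the runtime arguments through to the real worker.
(deflambda-macro fposition (&optional x y)
  `(lambda (&rest args)
     (funcall* 'fposition-internal ,x ,y args)))

;; So ((fposition a b) more args) behaves like
;; (fposition-internal a b more args), and #'(fposition a b) is an
;; ordinary functional object, usable with MAP and friends.
```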

∂30-Jan-82  0006	MOON at SCRC-TENEX 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs 
Date: Saturday, 30 January 1982  03:00-EST
From: MOON at SCRC-TENEX
To:   Richard M. Stallman <RMS at MIT-AI>
Cc:   common-lisp at SU-AI
Subject: Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs

If SUBSTs aren't in Common Lisp, they certainly should be.  They are
extremely useful and trivial to implement.

∂30-Jan-82  0431	Kent M. Pitman <KMP at MIT-MC> 	Those two little suggestions for macroexpansion 
Date: 30 January 1982 07:26-EST
From: Kent M. Pitman <KMP at MIT-MC>
Subject:  Those two little suggestions for macroexpansion
To: Fahlman at CMU-20C
cc: LISP-FORUM at MIT-MC

    Date: 28 Jan 1982 1921-EST
    From: Fahlman at CMU-20C

    JONL's suggestion looks pretty good to me...
-----
Actually, JONL was just repeating suggestions brought up by GLS and EAK just
over a year ago on LISP-FORUM. I argued then that the recursive EVAL call was
semantically all wrong and not possible to support compatibly between the 
interpreter and compiler ... I won't bore you with a repeat of that discussion.
If you've forgotten it and are interested, it's most easily gettable from the
file "MC: AR1: LSPMAIL; FMACRO >".

∂30-Jan-82  1234	Eric Benson <BENSON at UTAH-20> 	Re: MVLet   
Date: 30 Jan 1982 1332-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: MVLet
To: Common-Lisp at SU-AI

Regarding return of multiple values: "...their lack has been a traditional
weakness in Lisp..."  What other languages have this feature?  Many have
call-by-reference which allows essentially the same functionality, but I
don't know of any which have multiple value returns in anything like the
Common Lisp sense.

I can certainly see the benefit of including them, but the restrictions
placed on them and the dismal syntax for using them counteracts the
intention of their inclusion, namely to increase the clarity of those
functions that have more than one value of interest.  If we were using a
graphical dataflow language they would fit like a glove, without all the
fuss.  The problem arises because each arrangement of arcs passing values
requires either its own special construct or binding the values to
variables.  I'm not suggesting we should throw out the n-in, 1-out nature
of Lisp forms in favor of an n-in, m-out arrangement (at least not right
now!), but rather that the current discussion of multiple values is unlikely to
come to a satisfactory conclusion due to the "tacked-on afterthought"
nature of the current version.  We may feel that it is a useful enough
facility to keep in spite of all this, but it's probably too much to hope
to "do it right".
-------

∂30-Jan-82  1351	RPG  	MVlet    
To:   common-lisp at SU-AI  
Of course, if Scott is only worried about the difficulty of implementing
the full MVlet with hairy syntax, all one has to do is provide MV-LIST
as Dan notes and write MVlet as a simple macro using that and LAMBDA.
That way CONSes, but who said that it had to be implemented well?
				-rpg-
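A minimal sketch of the macro RPG alludes to, assuming MV-LIST captures
all of a form's returned values as a list (MVLET itself is a hypothetical
name here):

```lisp
;; Minimal, consing MVLET: listify the values of FORM and APPLY an
;; anonymous lambda, so the full lambda-list syntax (&OPTIONAL, &REST,
;; defaults) comes along for free.
(defmacro mvlet (lambda-list form &rest body)
  `(apply #'(lambda ,lambda-list ,@body)
          (mv-list ,form)))

;; e.g. (mvlet (q &optional (r 0)) (two-valued-division 7 2)
;;        (list q r))
;; binds Q and R to the two values, defaulting R if only one comes back.
```

This CONSes on every call, which is exactly Fahlman's objection; RPG's
point is that it demonstrates the semantics, not the implementation.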

∂30-Jan-82  1405	Jon L White <JONL at MIT-MC> 	Comparison of "lambda-macros" and my "Two little suggestions ..."
Date: 30 January 1982 16:55-EST
From: Jon L White <JONL at MIT-MC>
Subject: Comparison of "lambda-macros" and my "Two little suggestions ..."
To: KMP at MIT-MC, hic at SCRC-TENEX
cc: LISP-FORUM at MIT-MC, common-lisp at SU-AI

[Apologies for double mailings -- could we agree on a name for a
 mailing list to be kept at SU-AI which would just be those 
 individuals in COMMON-LISP@SU-AI which are not also on LISP-FORUM@MC]

There were two suggestions in my note, and lambda-macros relate
to only one of them, namely the first one:

    FIRST SUGGESTION:
	 In the context of ((<something> . . .) a1 a2),  have EVAL macroexpand 
     the part (<something> . . .) and "try again" before recursively 
     evaluating it. This will have the incompatible effect that
	(defmacro foo () 'LIST)
	((foo) 1 2)
     no longer causes an error (unbound variable for LIST), but will rather
     first expand into (list 1 2), which then evaluates to (1 2).

Note that for clarity, I've added the phrase "try again", meaning to
look at the form and see if it is recognized explicitly as, say, some
special form, or some subr application.

The discussion from last year, which resulted in the name "lambda-macros"
centered around finding a separate (but equal?) mechanism for code-expansion
for non-atomic forms which appear in a function place;  my first suggestion 
is to change EVAL (and compiler if necessary) to call the regular macroexpander
on any form which looks like some kind of function composition, and thus
implement a notion of "Meta-Composition" which is context free.  It would be 
a logical consequence of this notion that eval'ing (FUNCTION (FROTZ 1)) must
first macroexpand (FROTZ 1), so that #'(FPOSITION ...) could work in the 
contexts cited about MAP.  However, it is my second suggestion that would
not work in the context of an APPLY -- it is intended only for the EVAL-
of-a-form context -- and I'm not sure if that has been fully appreciated
since only RMS appears to have alluded to it.

However, I'd like to offer some commentary on why context-free 
"meta-composition" is good for eval, yet why context-free "evaluation" 
is bad:
  1) Context-free "evaluation" is SCHEME.  SCHEME is not bad, but it is
     not LISP either.  For the present, I believe the LISP community wants
     to be able to write functions like:
	(DEFUN SEMI-SORT (LIST)
	  (IF (GREATERP (FIRST LIST) (SECOND LIST))
	      LIST 
	      (LIST (SECOND LIST) (FIRST LIST))))
     Correct interpretation of the last line means doing (FSYMEVAL 'LIST)
     for the instance of LIST in the "function" position, but doing (more
     or less) (SYMEVAL 'LIST) for the others -- i.e., EVAL acts differently
     depending upon whether the context is "function" or "expression-value".
 2) Context-free "Meta-composition" is just source-code re-writing, and
    there is no ambiguity of reference such as occurred with "LIST" in the 
    above example.  Take this example:
	(DEFMACRO GET-SI (STRING)
	  (SETQ STRING (TO-STRING STRING))
	  (INTERN STRING 'SI))
        (DEFUN SEE-IF-NEW-ATOM-LIST (LIST)
	  ((GET-SI "LIST")  LIST  (GET-SI "LIST")))
    Note that the context for (GET-SI "LIST") doesn't matter (sure, there
    are other ways to write equivalent code but . . .)
    Even the following macro definition for GET-SI results in perfectly
    good, unambiguous results:
	(DEFMACRO GET-SI (STRING)
	  `(LAMBDA (X Y) (,(intern (to-string string) 'SI) X Y)))
    For example, assuming that (LAMBDA ...) => #'(LAMBDA ...),
      (SEE-IF-NEW-ATOM-LIST 35)   =>   (35  #'(LAMBDA (X Y) (LIST X Y)))

The latter (bletcherous) example shows a case where a user ** perhaps **
did not intend to use (GET-SI...) anywhere but in function context --
he simply put in some buggy code.   The lambda-macro mechanism would require
a user to state unequivocally that a macro-definition is for precisely one
context;  I'd rather not be encumbered with separate-but-parallel machinery
and documentation -- why not have this sort of restriction on macro usage
contexts be some kind of optional declaration?

Yet my second suggestion involves a form which could not at all be interpreted
in "expression-value" context:
    SECOND SUGGESTION
	Let FMACRO have special significance for macroexpansion in the context
     ((FMACRO . <fun>) . . .), such that this form is a macro call which is
     expanded by calling <fun> on the whole form.
Thus (LIST 3 (FMACRO . <fun>)) would cause an error.  I believe this 
restriction is more akin to that which prevents MACROs from working
with APPLY.
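A concrete (and hypothetical) expander under that convention: EVAL, on
seeing ((FMACRO . SWAP-ARGS) F A B), calls SWAP-ARGS on the whole form
and evaluates the result.

```lisp
;; Hypothetical FMACRO expander: it receives the ENTIRE form,
;; ((FMACRO . SWAP-ARGS) f a b), and rewrites it to (f b a).
(defun swap-args (whole-form)
  (let ((f (cadr whole-form))
        (a (caddr whole-form))
        (b (cadddr whole-form)))
    (list f b a)))

;; ((FMACRO . SWAP-ARGS) CONS 1 2) would thus evaluate as (CONS 2 1).
;; There is no sensible reading of (FMACRO . SWAP-ARGS) in
;; expression-value position -- which is JONL's point about the error in
;; (LIST 3 (FMACRO . <fun>)).
```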

∂30-Jan-82  1446	Jon L White <JONL at MIT-MC> 	The format ((MACRO . f) ...)  
Date: 30 January 1982 17:39-EST
From: Jon L White <JONL at MIT-MC>
Subject: The format ((MACRO . f) ...)
To: common-lisp at SU-AI
cc: LISP-FORUM at MIT-MC


HIC has pointed out that the LISPM interpreter already treats the
format ((MACRO . f) ...) according to my "second suggestion" for
((FMACRO . f) ..);  although I couldn't find this noted in the current
manual, it does work.   I'd be just as happy with ((MACRO . f) ...)  -- my 
only consideration was to avoid a perhaps already used format.  Although the 
LISPM compiler currently barfs on this format, I believe there will be a 
change soon?

The issue of parallel macro formats -- lambda-macros versus
only context-free macros -- is quite independent; although I
have a preference, I'd be happy with either one.

∂30-Jan-82  1742	Fahlman at CMU-20C 	Re: MVlet      
Date: 30 Jan 1982 2039-EST
From: Fahlman at CMU-20C
Subject: Re: MVlet    
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 30-Jan-82 1651-EST


But why choose a form that is hard to implement well and that will
therefore be implemented poorly over one that is easy to implement well?
If we are going to CONS, we may as well throw the MV stuff out
altogether.  Even if implementation were not a problem, I would prefer
the simple syntax.  Does anyone else out there share RPG's view that
the alleged uniformity of the hairy syntax justifies the hair?
-- Scott
-------

∂30-Jan-82  1807	RPG  	MVlet    
To:   common-lisp at SU-AI  
1. What is hard to implement about the MVlet thing that is not
already swamped by the difficulty of having n values on the stack
as you return and throw, and is also largely subsumed by the theory
of function entry?

2. To get any variable number of values back now you have to CONS anyway,
so implementing it `poorly' for the user, but with 
a uniform syntax for all, is better than the user implementing
it poorly himself over and over.

3. If efficiency of the implementation is the issue, and if the
simple cases admit efficiency in the old syntax, the same simple 
cases admit efficiency in the proposed syntax.

4. Here's what happens when a function is called:
	You have a description of the variables and how the
	values that you get will be bound to them depending on how many you get.

  Here's what happens when a function with multiple values returns to
a MVlet:
	You have a description of the variables and how the
	values that you get will be bound to them depending on how many you get.

Because the naive user will think these descriptions are similar, he will
expect the syntax for dealing with them to be similar.

∂30-Jan-82  1935	Guy.Steele at CMU-10A 	Forwarded message
Date: 30 January 1982 2231-EST (Saturday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Forwarded message
CC: feinberg at CMU-20C
Message-Id: <30Jan82 223157 GS70@CMU-10A>


- - - - Begin forwarded message - - - -
Date: 30 January 1982  21:43-EST (Saturday)
From: FEINBERG at CMU-20C
To:   Guy.Steele at CMUA
Subject: Giving in to Maclisp
Via:     CMU-20C; 30 Jan 1982 2149-EST

Howdy!
	I was looking through Decisions.Press and I came upon a 
little section, which I was surprised to see:


        Adopt functions parallel to GETF, PUTF, and REMF, to be
        called GETPR, PUTPR, and REMPR, which operate on symbols.
        These are analogous to GET, PUTPROP, and REMPROP of
        MACLISP, but the arguments to PUTPR are in corrected order.
        (It was agreed that GETPROP, PUTPROP, and REMPROP would be
        better names, but that these should not be used to minimize
        compatibility problems.)

Are we really going to give all the good names away to Maclisp in the
name of "compatibility"?  Compatibility in what way? Is it not clear
that we will have to do extensive modifications to Maclisp to get
Common Lisp running in it anyway? Is it also not clear that Maclisp
programs will also require extensive transformation to run in Common
Lisp? Didn't everyone agree that coming up with a clean language,
even at the expense of compatibility, was most important? I think it
is crucial that we break away from Maclisp braindamage, and not let
it steal good names in the process.  PUTPR is pretty meaningless,
whereas PUTPROP is far more clear.  

						--Chiron
- - - - End forwarded message - - - -

∂30-Jan-82  1952	Fahlman at CMU-20C 	Re: MVlet      
Date: 30 Jan 1982 2244-EST
From: Fahlman at CMU-20C
Subject: Re: MVlet    
To: RPG at SU-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 30-Jan-82 2107-EST


    1. What is hard to implement about the MVlet thing that is not
    already swamped by the difficulty of having n values on the stack
    as you return and throw, and is also largely subsumed by the theory
    of function entry?

Function calling with hairy lambda syntax was an incredible pain to
implement decently, but was worth it.  Having multiple values on the
stack was also a pain to implement, but was also (just barely) worth it.
The proposed M-V-CALL just splices together these two moby pieces of
machinery, so is relatively painless.  In the implementations I am
doing, at least, the proposed lambda-list syntax for the other MV forms
will require a third moby chunk of machinery since it has to do what a
function call does, but it cannot be implemented as a function call
since it differs slightly.

    2. To get any variable number of values back now you have to CONS anyway,
    so implementing it `poorly' for the user, but with 
    a uniform syntax for all, is better than the user implementing
    it poorly himself over and over.

Neither the simple MV forms nor M-V-CALL would cons in my
implementations, except in the case that the functional arg to M-V-CALL
takes a rest arg and there is at least one rest value passed to it.  To
go through M-V-LIST routinely would cons much more, and would make the
multiple value mechanism totally worthless.

    3. If efficiency of the implementation is the issue, and if the
    simple cases admit efficiency in the old syntax, the same simple 
    cases admit efficiency in the proposed syntax.

Yup, it can be implemented efficiently.  My objection is that it's a lot
of extra work (I figure it would take me a full week) and would make the
language uglier as well (in the eye of this beholder).

    4. Here's what happens when a function is called:
    	You have a description of the variables and how the
    	values that you get will be bound to them depending on how many
        you get.

      Here's what happens when a function with multiple values returns to
      a MVlet:
    	You have a description of the variables and how the
    	values that you get will be bound to them depending
        on how many you get.

Here's what really happens:

You know exactly how many values the called form is going to return and
what each value is.  Some of these you want, some you don't.  You
arrange to catch and bind those that are of interest, ignoring the rest.
Defaults and rest args simply aren't meaningful if you know how many
values are coming back.

In the rare case of a called form that is returning an unpredictable
number of args (the case that RPG erroneously takes as typical), you use
M-V-CALL and get the full lambda binding machinery, or you use M-V-LIST
and grovel the args yourself, or you let the called form return a list
in the first place.  I would guess that such unpredictable cases occur
in less than 1% of all multiple-value calls, and the above-listed
mechanisms handle that 1% quite well.

OK, we need to settle this.  If most of the rest of you share RPG's
taste in this, I will shut up and do the extra work to implement the
lambda forms, rather than walk out.  If RPG is alone or nearly alone in
his view of what is tasteful, I would hope that he would give in
gracefully.  I assume that punting multiples altogether or limiting them
to two values would please no one.

-- Scott
-------

∂30-Jan-82  2002	Fahlman at CMU-20C 	GETPR
Date: 30 Jan 1982 2256-EST
From: Fahlman at CMU-20C
Subject: GETPR
To: feinberg at CMU-20C
cc: common-lisp at SU-AI


I think that Feinberg underestimates the value of retaining Maclisp
compatibility in commonly-used functions, other things being equal.

On the other hand, I agree that GETPR and friends are pretty ugly.  If I
understand the proposal, GETPR is identical to the present GET, and
REMPR is identical to REMPROP.  Only PUTPR is different.  How about
going with GET, REMPROP, and PUT in new code, where PUT is like PUTPROP,
but with the new argument order?  Then PUTPROP could be phased out
gradually, with a minimum of hassle.  (Instead of PUT we could use
SETPROP, but I like PUT better.)

-- Scott
-------
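The name shuffle in concrete terms.  PUT here is the name Fahlman
proposes; the "corrected" argument order is assumed to parallel GETF's
symbol-indicator-value order:

```lisp
(putprop 'foo 'bar 'color)   ; Maclisp order: symbol, VALUE, indicator
(put     'foo 'color 'bar)   ; proposed PUT:  symbol, indicator, value
(get     'foo 'color)        ; => BAR under either scheme
```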

∂30-Jan-82  2201	Richard M. Stallman <RMS at MIT-AI>
Date: 31 January 1982 00:57-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

I vote for GET and PUT rather than GETPR and PUTPR.

Fahlman is not alone in thinking that it is cleaner not to
have M-V forms that contain &-keywords.

∂31-Jan-82  1116	Daniel L. Weinreb <dlw at MIT-AI> 	GETPR
Date: Sunday, 31 January 1982, 14:15-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: GETPR
To: Fahlman at CMU-20C, feinberg at CMU-20C
Cc: common-lisp at SU-AI

Would you please go back and read the message I sent a little while ago?
I believe that it makes more sense to FIRST define a policy about Maclisp
compatibility and THEN make the specific decisions based on that
proposal.  I don't want to waste time thinking about the GET thing before
we have such a policy.

∂01-Feb-82  0752	Jon L White <JONL at MIT-MC> 	Incredible co-incidence about the format ((MACRO . f) ...)  
Date: 1 February 1982 10:47-EST
From: Jon L White <JONL at MIT-MC>
Subject: Incredible co-incidence about the format ((MACRO . f) ...)
To: common-lisp at SU-AI
cc: LISP-FORUM at MIT-MC


One of my previous messages seemed to imply that ((MACRO . f) ...)
on the LISPM fulfills the intent of my second suggestion -- apparently
there is a completely unforeseen consequence of the fact that
   (FSYMEVAL 'FOO) => (MACRO . <foofun>)
when FOO is defined as a macro, such that the interpreter "makes it work".
However, MACROEXPAND knows nothing about this format, which is probably
why the compiler can't handle it; also such action isn't documented
anywhere.
 
Thus I believe it to be merely an accidental co-incidence that the
interpreter does anything at all meaningful with this format.   My
"second suggestion" now is to institutionalize this "accident"; it
certainly would make it easier to experiment with a pseudo-functional
programming style, and it obviously hasn't been used for any other
meaning.

∂01-Feb-82  0939	HIC at SCRC-TENEX 	Incredible co-incidence about the format ((MACRO . f) ...)   
Date: Monday, 1 February 1982  11:38-EST
From: HIC at SCRC-TENEX
To:   Jon L White <JONL at MIT-MC>
Cc:   common-lisp at SU-AI, LISP-FORUM at MIT-MC
Subject: Incredible co-incidence about the format ((MACRO . f) ...)

    Date: Monday, 1 February 1982  10:47-EST
    From: Jon L White <JONL at MIT-MC>
    To:   common-lisp at SU-AI
    cc:   LISP-FORUM at MIT-MC
    Re:   Incredible co-incidence about the format ((MACRO . f) ...)

    One of my previous messages seemed to imply that ((MACRO . f) ...)
    on the LISPM fulfills the intent of my second suggestion -- apparently
    there is a completely unforeseen consequence of the fact that
       (FSYMEVAL 'FOO) => (MACRO . <foofun>)
    when FOO is defined as a macro, such that the interpreter "makes it work".
    However, MACROEXPAND knows nothing about this format, which is probably
    why the compiler can't handle it; also such action isn't documented
    anywhere.

Of course MACROEXPAND knows about it (but not the version you looked
at).  I discovered this BUG (yes, BUG, I admit it, the LISPM had a
bug) in about 2 minutes of testing this feature, after I told the
world I thought it would work, and fixed it in about another two
minutes.
     
    Thus I believe it to be merely an accidental co-incidence that the
    interpreter does anything at all meaningful with this format.   My
    "second suggestion" now is to institutionalize this "accident"; it
    certainly would make it easier to experiment with a pseudo-functional
    programming style, and it obviously hasn't been used for any other
    meaning.

JONL, you seem very eager to make this be your proposal -- so be it.
I don't care.  However, it works on the Lisp Machine (it was a BUG
when it didn't work) to have (MACRO . foo) in the CAR of a form, and
thus it works to have a lambda macro expand into this.

Of course, Lambda Macros are the right way to experiment with the
functional programming style -- I think it's wrong to rely on seeing
the whole form (I almost KNOW it's wrong...).  In any case, the Lisp
Machine now has these.

∂01-Feb-82  1014	Kim.fateman at Berkeley 	GETPR and compatibility  
Date: 1 Feb 1982 10:11:13-PST
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: GETPR and compatibility

There are (at least) two kinds of compatibility worth comparing.

1. One, which I believe is very hard to do,
probably not worthwhile, and probably not
in the line of CL, is the kind which
would allow one to take an arbitrary maclisp (say) file, read it into
a CL implementation, and run it, without ever even telling the CL
system, hey, this file is maclisp.  And when you prettyprint or debug one of
those functions, it looks pretty much like what you read in, and did
not suffer "macro←replacement←itis".

2. The second type is to put in the file, or establish somehow,
#.(enter maclisp←mode)  ;; or whatever, followed by 
<random maclisp stuff>
#.(enter common←lisp←mode)  ;; etc.

The reader/evaluator would know about maclisp. There
are (at least) two ways of handling this 
  a:  any maclisp construct (e.g. get) would be macro-replaced by
the corresponding CL thing (e.g. getprop or whatever); arguments would
be reordered as necessary.  I think transor does this, though generally
in the direction non-interlisp ==> interlisp.  The original maclisp
would be hard to examine from within CL, since it was destroyed on read-in
(by read, eval or whatever made the changes). (Examination by looking
at the file or some verbatim copy would be possible).  This makes
debugging in native maclisp, hard.
  b: wrap around each uniquely maclisp construction (perhaps invisibly) 
(evaluate←as←maclisp  <whatever>).  This would preserve prettyprinting,
and other things.  Functions which behave identically would presumably
not need such a wrapper, though interactions would be hard to manage.

I think 2a is what makes most sense, and is how Franz lisp 
handles some things which are, for example, in interlisp, but not in Franz.
The presumption is that you would take an interlisp (or maclisp)
file and translate it into CL, and at that point abandon the original
dialect.  In view of this, re-using the names seems quite possible,
once the conversion is done.
  In point of fact, what some people may do is handle CL this way.
That is, translate it into  another dialect, which, for whatever
reason, seems more appropriate.  Thus, an Xlisp chauvinist
might simply write an Xlispifier for CL. The Xlispifier for CL
would be written in Xlisp, and consist of the translation package
and (probably) a support package of CL functions.  Depending on
whether you are in CL-reading-mode or XL-reading-mode, you would
get one or the other "getprop".
  Are such "implementations of CL"  "correct"?  Come to think of
it, how would one determine if one is looking at an implementation
of CL?

∂01-Feb-82  1034	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	a proposal about compatibility 
Date:  1 Feb 1982 1326-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: a proposal about compatibility
To: common-lisp at SU-AI

I would like to propose that CL be a dialect of Lisp.  A reasonable
definition of Lisp seems to be the following:
  - all functions defined in the "Lisp 1.5 Programmer's Manual",
	McCarthy et al., 1962, other than those that are system- or
	implementation-dependent 
  - all functions on whose definitions Maclisp and Interlisp agree
I propose that CL should not redefine any names from these two sets,
except in ways that are upwards-compatible.
-------

∂01-Feb-82  1039	Daniel L. Weinreb <DLW at MIT-AI> 	Re: MVLet      
Date: 1 February 1982 13:32-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
Subject: Re: MVLet    
To: common-lisp at SU-AI

    Regarding return of multiple values: "...their lack has been a traditional
    weakness in Lisp..."  What other languages have this feature?  Many have
    call-by-reference which allows essentially the same functionality, but I
    don't know of any which have multiple value returns in anything like the
    Common Lisp sense.
Many of them have call-by-reference, which allows essentially the same
functionality.  Indeed, few of them have multiple value returns in the
Lisp sense, although the general idea is around, and was included in at
least some of the proposals for "DOD-1" (they're sometimes called "val out"
parameters).  Lisp is neither call-by-value nor call-by-reference exactly,
so a direct comparison is difficult.  My point was that there is a
pretty good way to return many things in the call-by-reference paradigm;
it is used to good advantage by Pascal and PL/1 programs, and Lisp
programmers who want to do analogous things have traditionally been up
the creek.

    We may feel that it is a useful enough facility to keep in spite of all
    this, but it's probably too much to hope to "do it right".
When we added multiple values to the Lisp Machine years ago, we decided that
we couldn't "do it right", but it was a useful enough facility to keep in
spite of all this.  I still think so, and it applies to Common Lisp for the
same reasons.

∂01-Feb-82  2315	Earl A. Killian <EAK at MIT-MC> 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs   
Date: 1 February 1982 19:09-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
To: MOON at SCRC-TENEX
cc: common-lisp at SU-AI

I don't want SUBSTs in Common Lisp; I want the real thing, i.e.
inline functions.  They can be implemented easily in any
implementation by replacing the function name with its lambda
expression (this isn't quite true, because of free variables, but
that's not really that hard to deal with in a compiler).  Now the
issue is simply efficiency.  Since Common Lisp has routinely
chosen cleanliness when efficiency can be dealt with by the
compiler (as it is in the S-1 compiler), then I see no reason to
have ugly SUBSTs.

∂01-Feb-82  2315	FEINBERG at CMU-20C 	Compatibility With Maclisp   
Date: 1 February 1982  16:35-EST (Monday)
From: FEINBERG at CMU-20C
To:   Daniel L. Weinreb <dlw at MIT-AI>
Cc:   common-lisp at SU-AI, Fahlman at CMU-20C
Subject: Compatibility With Maclisp

Howdy!
	I agree with you, we must have a consistent policy concerning
maintaining compatibility with Maclisp.  I propose that Common Lisp
learn from the mistakes of Maclisp, not repeat them.  This policy
means that Common Lisp is free to use clear and meaningful names for
its functions, even if they conflict with Maclisp function names.
Yes, some names must be kept for historical purposes (CAR, CDR and
CONS to name a few), but my view of Common Lisp is that it is in fact
a new language, and should not be constrained to live in the #+MACLISP
world.  I think if Common Lisp software becomes useful enough, PDP-10
people will either make a Common Lisp implementation, they will make a
mechanical translator, or they will retrofit Maclisp to run Common
Lisp.  Common Lisp should either be upward compatible with Maclisp or
compatibility should take a back seat to a good language.  I think
Common Lisp has justifiably moved far enough away from Maclisp that
the former can no longer be accomplished, so the latter is the only
reasonable choice.  Being half upward compatible only creates more
confusion.

∂01-Feb-82  2319	Earl A. Killian <EAK at MIT-MC> 	GET/PUT names    
Date: 1 February 1982 19:32-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  GET/PUT names
To: common-lisp at SU-AI

I don't like the name GET for property lists.  GET is a verb, and
therefore doesn't sound very applicative to me.  I prefer Lisp
function names to refer to what they do, not how they do it.
Thus I'd like something like PROPERTY-VALUE, PROPERTY, or just
PROP (depending on how important a short name is) instead of GET.
PUTPROP would be SET-PROPERTY-VALUE, SET-PROPERTY, or SET-PROP,
though I'd personally use SETF instead:
	(SETF (PROP S 'X) Y)

∂01-Feb-82  2319	Howard I. Cannon <HIC at MIT-MC> 	The right way   
Date: 1 February 1982 20:13-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  The right way
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

    Date: 1 February 1982 1650-EST (Monday)
    From: Guy.Steele at CMU-10A
    To:   HIC at MIT-AI
    cc:   common-lisp at SU-AI
    Re:   The right way

    I think I take slight exception at the remark

        Of course, Lambda Macros are the right way to experiment with the
        functional programming style...

    It may be a right way, but surely not the only one.  It seems to me
    that actually using functions (rather than macros) also leads to a
    functional programming style.  Lambda macros may be faster in some
    implementations for some purposes.  However, they do not fulfill all
    purposes (as has already been noted: (MAPCAR (FPOSITION ...) ...)).

Sigh...it's so easy to be misinterpreted in mail.  Of course, that meant
"Of these two approaches,..."  I'm sorry I wasn't explicit enough.

However, now it's my turn to take "slight exception" (which wasn't so
slight on your part that you didn't bother to send a note):

Have we accepted the Scheme approach of LAMBDA as a "self-evaling" form?
If not, then I don't see why you expect (MAPCAR (FPOSITION ...) ...)
to work where (MAPCAR (LAMBDA ...) ...) wouldn't.  Actually, that's
part of the point of Lambda macros -- they work nicely when flagged
by #'.  If you want functions called, then have the lambda macro
turn into a function call.  I think writing #' is a useful marker and
serves to avoid other crocks in the implementation (e.g. evaling the
car of a form, and using the result as the function.  I thought we
had basically punted that idea a while ago.)

If, however, we do accept (LAMBDA ...) as a valid form that self-evaluates 
(or whatever), then I might propose changing lambda macros to be called
in normal functional position, or just go to the scheme of not distinguishing
between lambda and regular macros.

∂01-Feb-82  2321	Jon L White <JONL at MIT-MC> 	MacLISP name compatibility, and return values of update functions
Date: 1 February 1982 16:26-EST
From: Jon L White <JONL at MIT-MC>
Subject: MacLISP name compatibility, and return values of update functions
To: common-lisp at SU-AI

	
[I meant to CC this to common-lisp earlier -- was just sent to Weinreb.]

    Date: Sunday, 31 January 1982, 14:15-EST
    From: Daniel L. Weinreb <dlw at MIT-AI>
    To: Fahlman at CMU-20C, feinberg at CMU-20C
    Would you please go back and read the message I sent a little while ago?
    I belive that it makes more sense to FIRST define a policy about Maclisp
    compatibility and THEN make the specific decisions based on that
    proposal. . . 
Uh, what msg -- I've looked through my mail file for a modest distance, and
don't seem to find anything in your msgs to common-lisp that this might refer 
to.  I thought we had the general notion of not usurping MacLISP names, unless
EXTREMELY good cause could be shown.  For example,
 1) (good cause) The names for type-specific (and "modular") arithmetic 
    were usurped by LISPM/SPICE-LISP for the generic arithmetic  (i.e., 
    "+" instead of "PLUS" for generic, and nothing for modular-fixnum). 
    Although I don't like this incompatibility, I can see the point about 
    using the obvious name for the case that will appear literally tens of
    thousands of times in our code.
 2) (bad cause) LISPM "PRINT" returns a gratuitously-incompatible value.
    There is discussion on this point, with my observation that when it was
    first implemented very few LISPM people were aware of the 1975 change
    to MacLISP (in fact, probably only Ira Goldstein noticed it at all!)
    Yet no one has offered any estimate of the magnitude of the effects of 
    leaving undefined the value of side-effecting and/or updating functions;  
    presumably SETQ would have a defined value, and RPLACA/D also for 
    backwards compatibility, but what about SETF?
Actually the SETF question introduces the ambiguity of which of the
two possible values to return.  Take for example VSET:  Should (VSET V I X) 
return V, by analogy with RPLACA, or should it return X by analogy with SETQ?
Whatever is decided for update functions in general affects SETF in some 
possibly conflicting way.  For this reason alone, RMS's suggestion to have 
SETF be the only updator (except for SETQ and RPLACA/RPLACD ??) makes some 
sense; presumably then we could afford to leave the value of SETF undefined.

∂01-Feb-82  2322	Jon L White <JONL at MIT-MC> 	MVLet hair, and RPG's suggestion   
Date: 1 February 1982 16:36-EST
From: Jon L White <JONL at MIT-MC>
Subject: MVLet hair, and RPG's suggestion
To: common-lisp at SU-AI

    Date: 19 Jan 1982 1551-PST
    From: Dick Gabriel <RPG at SU-AI>
    To:   common-lisp at SU-AI  
    I would like to make the following suggestion regarding the
    strategy for designing Common Lisp. . . .
    We should separate the kernel from the Lisp based portions of the system
    and design the kernel first. Lambda-grovelling, multiple values,
    and basic data structures seem kernel.
    The reason that we should do this is so that the many man-years of effort
    to implement a Common Lisp can be done in parallel with the design of
    less critical things. 
I'm sure it will be impossible to agree completely on a "kernel", but
some approach like this *must* be taken, or there'll never be any code
written in Common-Lisp at all, much less the code which implements the
various features.  Regarding hairy forms of Multiple-value things, 
I believe I voted to have both forms, because the current LISPM set
is generally useful, even if not completely parallel with Multiple-argument 
syntax; also it is small enough and useful enough to "put it in right now"
and strive for the hairy versions at a later time.
  Couldn't we go on record at least as favoring the style which permits
the duality of concept (i.e., whatever syntax works for receiving multiple
arguments also works for receiving multiple values), but noting that
we can't guarantee anything more than the several LISPM functions for
the next three years?  I'd sure hate to see this become an eclectic
kitchen sink merely because the 5-10 people who will be involved in
Common-Lisp compiler-writing didn't want to take the day or so apiece
over the next three years to write the value side of the value/argument
receiving code.

∂02-Feb-82  0002	Guy.Steele at CMU-10A 	The right way    
Date:  1 February 1982 1650-EST (Monday)
From: Guy.Steele at CMU-10A
To: HIC at MIT-AI
Subject:  The right way
CC: common-lisp at SU-AI
In-Reply-To:  HIC@SCRC-TENEX's message of 1 Feb 82 11:38-EST
Message-Id: <01Feb82 165054 GS70@CMU-10A>

I think I take slight exception at the remark

    Of course, Lambda Macros are the right way to experiment with the
    functional programming style...

It may be a right way, but surely not the only one.  It seems to me
that actually using functions (rather than macros) also leads to a
functional programming style.  Lambda macros may be faster in some
implementations for some purposes.  However, they do not fulfill all
purposes (as has already been noted: (MAPCAR (FPOSITION ...) ...)).

∂02-Feb-82  0110	Richard M. Stallman <RMS at MIT-AI>
Date: 1 February 1982 17:51-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

It seems that the proposal to use GET and PUT for property functions
is leading to a discussion of whether it is ok to reuse Maclisp
names with different meanings.

Perhaps that topic does need to be discussed, but there is no such
problem with using GET and PUT instead of GETPR and PUTPR.
GET would be compatible with Maclisp (except for disembodied plists),
and PUT is not used in Maclisp.

Let's not get bogged down in wrangling about the bigger issue
of clean definitions vs compatibility with Maclisp as long as we
can solve the individual issues in ways that meet both goals.

∂02-Feb-82  0116	David A. Moon <Moon at SCRC-TENEX at MIT-AI> 	Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
Date: Monday, 1 February 1982, 23:54-EST
From: David A. Moon <Moon at SCRC-TENEX at MIT-AI>
Subject: Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
To: Earl A. Killian <EAK at MIT-MC>
Cc: common-lisp at SU-AI
In-reply-to: The message of 1 Feb 82 19:09-EST from Earl A. Killian <EAK at MIT-MC>

    Date: 1 February 1982 19:09-EST
    From: Earl A. Killian <EAK at MIT-MC>
    Subject:  Trying to implement FPOSITION with LAMBDA-MACROs and SUBSTs
    To: MOON at SCRC-TENEX
    cc: common-lisp at SU-AI

    I don't want SUBSTs in Common Lisp, I want the real thing, ie.
    inline functions...
In the future I will try to remember, when I suggest that something should
exist in Common Lisp, to say explicitly that it should not have bugs in it.

∂02-Feb-82  1005	Daniel L. Weinreb <DLW at MIT-AI>  
Date: 2 February 1982 12:25-EST
From: Daniel L. Weinreb <DLW at MIT-AI>
To: RMS at MIT-AI
cc: common-lisp at SU-AI

While we may not need to decide on a Maclisp compatibility policy for the
particular proposal you discussed, we do need to worry about whether, for
example, we should refrain from renaming PUTPROP to PUT even though the
change is upward-compatible, because some of us might think that "CL is not
a dialect of Lisp" if we drift that far; there might also be other proposals
about Maclisp compatibility that would affect the one you mention regardless
of its upward-compatibility.

But what is much more important is that there are other issues that will be
affected strongly by our policy, and if we put this off now then it will be
a long time indeed before we see a coherent and accepted CL definition.  We
don't have forever; if this takes too long we will all get bored and forget
about it.  Furthermore, if we come up with a policy later, we'll have to go
back and change some earlier decisions, or else decide that the policy
won't really be followed.  I think we have to get this taken care of
immediately.

∂02-Feb-82  1211	Eric Benson <BENSON at UTAH-20> 	Re: MacLISP name compatibility, and return values of update functions   
Date:  2 Feb 1982 1204-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: MacLISP name compatibility, and return values of update functions
To: JONL at MIT-MC, common-lisp at SU-AI
In-Reply-To: Your message of 1-Feb-82 1426-MST

We had a long discussion about SETF here at Utah for our implementation and
decided that RPLACA and RPLACD are really the wrong things to use for this.
Every other SETF-type function returns (depending on how you look at it)
the value of the RHS of the assignment (the second argument) or the updated
value of the LHS (the first argument).  This has been the case in most
languages where the value of an assignment is defined, for variables, array
elements or structure elements.  The correct thing to use for
(SETF (CAR X) Y)
is
(PROGN (RPLACA X Y) (CAR X))
or the equivalent.  It appears that the value of SETF was undefined in
LISPM just because of this one case.  Perhaps it is just more apparent when
one uses Algol syntax, i.e.  CAR(X) := CDR(Y) := Z; that this is the
obvious way to define the value of SETF.
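
Benson's expansion can be sketched directly (illustrative only; SETF-CAR is a
hypothetical name, and the variable-capture issue is ignored for clarity):

```lisp
;; Sketch: one way (SETF (CAR X) Y) could expand so that the whole
;; form returns the stored value, as Benson suggests.  The LET avoids
;; evaluating the value form twice.
(defmacro setf-car (pair value)          ; hypothetical name
  `(let ((v ,value))
     (rplaca ,pair v)
     v))
```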
-------

∂02-Feb-82  1304	FEINBERG at CMU-20C 	a proposal about compatibility    
Date: 2 February 1982  15:59-EST (Tuesday)
From: FEINBERG at CMU-20C
To:   HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility), DLW at AI
Cc:   common-lisp at SU-AI
Subject: a proposal about compatibility

Howdy!
	Could you provide some rationale for your proposal? Are you
claiming that it is necessary to include Lisp 1.5 and the intersection
of Maclisp and Interlisp in Common Lisp before it can be truly called
a dialect of Lisp? 

	I agree with DLW, it is rather important to settle the issue
of Maclisp compatibility soon.

∂02-Feb-82  1321	Masinter at PARC-MAXC 	Re: MacLISP name compatibility, and return values of update functions   
Date: 2 Feb 1982 13:20 PST
From: Masinter at PARC-MAXC
Subject: Re: MacLISP name compatibility, and return values of update functions
In-reply-to: BENSON's message of 2 Feb 1982 1204-MST
To: common-lisp at SU-AI

The Interlisp equivalent of SETF, "change", is defined in that way. It turns out
that the translation of (change (CAR X) Y) is (CAR (RPLACA X Y)). The
compiler normally optimizes out extra CAR/CDR's when not in value context.
RPLACA is retained for compatibility.
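
The Interlisp translation works because RPLACA returns the pair it smashed,
so taking that pair's CAR recovers the value just stored (a sketch):

```lisp
;; Sketch: why (CAR (RPLACA X Y)) yields the stored value.
;; RPLACA returns the pair with its car replaced, so CAR of the
;; result is the new value.
(let ((x (cons 1 2)))
  (car (rplaca x 99)))   ; evaluates to 99
```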


Larry

∂02-Feb-82  1337	Masinter at PARC-MAXC 	SUBST vs INLINE, consistent compilation   
Date: 2 Feb 1982 13:34 PST
From: Masinter at PARC-MAXC
Subject: SUBST vs INLINE, consistent compilation
To: Common-Lisp@SU-AI
cc: Masinter

I think there is some rationale both for SUBST-type macros and for INLINE.

SUBST macros are quite important for cases where the semantics of
lambda-binding is not wanted, e.g., where (use your favorite syntax):

(DEFSUBST SWAP (X Y)
    (SETQ Y (PROG1 X (SETQ X Y]

This isn't a real example, but the idea is that sometimes a simple substitution
expresses what you want to do more elegantly than the equivalent

(DEFMACRO SWAP X
	`(SETQ ,(CADDR X) (PROG1 ,(CADR X) (SETQ ,(CADR X) ,(CADDR X]

These are definitely not doable with inlines. (I am not entirely sure they can be 
correctly implemented with SUBST-macros either.)
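
For comparison, a sketch of the same SWAP as an ordinary two-argument
DEFMACRO (illustrative only; it produces the same SETQ/PROG1 expansion as
the DEFSUBST once the commas are added):

```lisp
;; Sketch: SWAP written as a plain two-argument macro rather than a
;; DEFSUBST.  It substitutes the argument forms into the body directly.
(defmacro swap (x y)
  `(setq ,y (prog1 ,x (setq ,x ,y))))
```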

-----------------

There is a more important issue which is being skirted in these various
discussions, and that is the one of consistent compilation: when is it
necessary to recompile a function in order to preserve the equivalence of
semantics of compiled and interpreted code. There are some simple situations
where it is clear:
	The source for the function changed
	The source for some macros used by the function changed

There are other situations where it is not at all clear:
	The function used a macro which accessed a data structure which
	has changed.

Tracing the actual data structures used by a macro is quite difficult. It is not
at all difficult for subst and inline macros, though, because the expansion of
the macro depends only on the macro-body and the body of the macro
invocation.

I think the important issue for Common Lisp is: what is the policy on consistent
compilation?

Larry

∂02-Feb-82  1417	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: a proposal about compatibility  
Date:  2 Feb 1982 1714-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: a proposal about compatibility
To: FEINBERG at CMU-20C
cc: DLW at MIT-AI, common-lisp at SU-AI
In-Reply-To: Your message of 2-Feb-82 1559-EST

This is a response to a couple of requests to justify my comments.  Based
on one of these, I feel it necessary to say that nothing in this message
(nor in the previous one) should be taken to be sarcasm.  I am trying to
speak as directly as possible.  I find it odd when people take me as
being sarcastic when I start with the assumption that CL should be a
dialect of Lisp, and then give what I think is a fairly conservative
explanation of what I think that should mean.  However once I get into
the mode of looking for sarcasm, I see how easy it is to interpret
things that way.  Almost any of the statements I make below could be
taken as sarcasm.  It depends upon what expression you imagine being on
my face.  The rest of this message was typed with a deadpan expression.

I thought what I said was that if CL used a name in the set I mentioned,
that the use should be consistent with the old use.  I didn't say that
CL should in fact implement all of the old functions, although I would
not be opposed to such a suggestion.  But what I actually said was that
CL shouldn't use the old names to mean different things.

As for justification, consider the following points:
  - now and then we might like to transport code from one major family
	to another, i.e. not just Maclisp to CL, etc., but Interlisp to
	CL.  I realize this wouldn't be possible with code of some
	types, but I think at least some of our users do write what I
	would call "vanilla Lisp", i.e. Lisp that uses mostly common
	functions that they expect to be present in any Lisp system.  I
	admit that such transportation is not going to be easy under any
	circumstance and for that reason will not be all that common,
	but we should not make it more complicated than necessary.
  - I would like to be able to teach students Lisp, and then have them
	be able to use what they learned even if they end up using a
	different implementation.  Again, some reorientation is
	obviously going to be needed when they move to another
	implementation, but it would be nice not to have things that
	look like they ought to be the same, and aren't.  Further, it
	would be helpful for there to be enough similarity that we can
	continue to have textbooks describe Lisp.
  - I find myself having to deal with several dialects.  Of course I am
	probably a bit unusual, in that I am supporting users, rather
	than implementing systems.  Presumably most of the users will
	spend their time working on one system.  But I would like for
	the most common functions to do more or less the same thing in
	in all of these systems.
  - Now and then we write papers, journal articles, etc.  It would be
	helpful for these to be readable by people in other Lisp
	communities.
-------

∂02-Feb-82  1539	Richard M. Stallman <RMS at MIT-AI> 	No policy is a good policy  
Date: 2 February 1982 18:22-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: No policy is a good policy
To: Common-lisp at SU-AI

Common Lisp is an attempt to compromise between several goals:
cleanliness, utility, efficiency and compatibility both between
implementations and with Maclisp.  On any given issue, it is usually
possible to find a "right" solution which may meet most of these goals
well and meet the others poorly but tolerably.  Which goals have to be
sacrificed differs from case to case.

For example, issue A may offer a clean, useful and efficient solution
which is incompatible, but in ways that are tolerable.  The other
solutions might be more compatible but worse in general.  Issue B may
offer a fully upward compatible solution which is very useful and fast
when implemented, which we may believe justifies being messy.  If we
are willing to consider each issue separately and sacrifice different
goals on each, the problem is easy.  But if we decide to make a global
choice of how much incompatibility we want, how much cleanliness we
want, etc., then probably whichever way we decide we will be unable to
use both the best solution for A and the best solution for B.  The
language becomes worse because it has been designed dogmatically.

Essentially the effect of having a global policy is to link issues A
and B, which could otherwise be considered separately.  The combined
problem is much harder than either one.  For example, if someone found a new
analogy between ways of designing the sequence function and ways of
designing read syntaxes for sequences, it might quite likely match
feasible designs for one with problematical designs for the other.
Then two problems which are proving enough work to get agreement on
individually would turn into one completely intractable problem.

It is very important to finish Common Lisp reasonably quickly, if the
effort is to be useful.  The study of philosophy of language design is
a worthy field but a difficult one.  There are many more years of
work to be done in it.  If we make solving this field part of the plan
for designing Common Lisp, we will not be finished in time to do the
job that Common Lisp was intended for: to enable users of different
Maclisp descendants to write portable programs.

∂02-Feb-82  1926	DILL at CMU-20C 	upward compatibility   
Date:  2 Feb 1982 2225-EST
From: DILL at CMU-20C
Subject: upward compatibility
To: common-lisp at SU-AI

I believe that compatibility with other lisp dialects should be a
consideration in the design of Common Lisp, but it should absolutely have
less priority than considerations of portability, taste, and efficiency.
It is possible that this won't leave a whole lot of room for upward
compatibility.

If Common Lisp manages to be a high-quality, widely implemented common
language, the user community will end up being much larger than that of
any existing lisp dialect.  Imposing misfeatures on those users because
a much smaller community of users has gotten used to those features
doesn't make sense.

I also don't see why it is more important to maintain compatibility with
Maclisp than with other dialects.
-------

∂02-Feb-82  2148	RPG  	MVLet    
To:   common-lisp at SU-AI  
Scott pointed out to me that the MVCall construct can take
a general LAMBDA expression, complete with hairy LAMBDA list
syntax. Thus one can write:

		(MV-CALL #'(LAMBDA (FOO BAR (:REST ZTESCH)) ...)
			 (BAZOLA))

Which is virtually the same as:

	(MVLET (FOO BAR (:REST ZTESCH)) (BAZOLA) ...)

but the above MV-Call syntax strikes me as superior (using LAMBDAs for
LAMBDA-like things).

Therefore, I will go along with Scott's LISPM syntax + MVCALL.
			-rpg-

∂02-Feb-82  2223	Richard M. Stallman <RMS at MIT-AI>
Date: 3 February 1982 01:06-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI, dill at CMU-20C

The reason it is important to be compatible pretty much
with Maclisp is that that means being compatible with the
existing Lisp machine system, and that is very important
to all the Lisp machine users.  And to Lisp machine
system maintainers too.  It is fine if old Maclisp functions
get dropped from the definition of Common Lisp, and replaced
with cleaner ways of doing things: the Lisp machine can implement
the new way while continuing to support the old one, Common Lisp or no.
But making old Maclisp functions do new things that are fundamentally
incompatible will cause a great deal of trouble.

The purpose of the Common Lisp project was to unify Maclisp dialects.
The narrowness of the purpose is all that gives it a chance of success.
It may be an interesting project to design a totally new Lisp dialect,
but you have no chance of getting this many people to agree on a design
if you remove the constraints.

∂02-Feb-82  2337	David A. Moon <MOON at MIT-MC> 	upward compatibility   
Date: 3 February 1982 02:36-EST
From: David A. Moon <MOON at MIT-MC>
Subject: upward compatibility
To: common-lisp at SU-AI

I agree with RMS (for once).  Common Lisp should be made a good language,
but designing "pie in the sky" will simply result in there never being
a Common Lisp.  This is not a case of the Lisp Machine people being
recalcitrant and attempting to impose their own view of the world, but
simply that there is no chance of this large a group agreeing on anything
if there are no constraints.  I think the Lisp Machine people have already
shown far more tolerance and willingness to compromise than anyone would ever
have the right to expect.

∂03-Feb-82  1622	Earl A. Killian <EAK at MIT-MC> 	SUBST vs INLINE, consistent compilation   
Date: 3 February 1982 19:20-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  SUBST vs INLINE, consistent compilation
To: Masinter at PARC-MAXC
cc: Common-Lisp at SU-AI

In Common Lisp the macro definition of SWAP would be the same of
as your SUBST, except for some commas (i.e. defmacro handles
normal argument lists).  I don't think Common Lisp needs subst
as another way of defining macros.  Inline functions are,
however, useful.

∂04-Feb-82  1513	Jon L White <JONL at MIT-MC> 	"exceptions" possibly based on misconception; and EVAL strikes again  
Date: 4 February 1982 18:04-EST
From: Jon L White <JONL at MIT-MC>
Subject: "exceptions" possibly based on misconception; and EVAL strikes again
To: Hic at SCRC-TENEX, Guy.Steele at CMU-10A
cc: common-lisp at SU-AI


The several "exceptions" just taken about implementing functional programming 
may be in part due to a misconception taken from RMS's remark

    Date: 29 January 1982 19:46-EST
    From: Richard M. Stallman <RMS at MIT-AI>
    Subject: Trying to implement FPOSITION with LAMBDA-MACROs.
    . . . 
    The idea of FPOSITION is that ((FPOSITION X Y) MORE ARGS)
    expands into (FPOSITION-INTERNAL X Y MORE ARGS), and . . . 
    In JONL's suggestion, the expander for FPOSITION operates on the
    entire form in which the call to the FPOSITION-list appears, not
    just to the FPOSITION-list.

This isn't right -- in my suggestion, the expander for FPOSITION would 
operate only on (FPOSITION X Y), which *could* then produce something like 
(MACRO . <another-fun>); and it would be  <another-fun>  which would get 
the "entire form in which the call to the FPOSITION-list appears"

HIC is certainly justified in saying that something is wrong, but it looked
to me (and maybe to Guy) as though he was saying alternatives to lambda-macros
were wrong.  However, this side-diversion into a misconception has detracted 
from the main part of my "first suggestion", namely to fix the misdesign in 
EVAL whereby it totally evaluates a non-atomic function position before trying
any macro-expansion. 

    Date: 1 February 1982 20:13-EST
    From: Howard I. Cannon <HIC at MIT-MC>
    Subject:  The right way
    To: Guy.Steele at CMU-10A
    . . . 
    If, however, we do accept (LAMBDA ...) as a valid form that self-evaluates 
    (or whatever), then I might propose changing lambda macros to be called
    in normal functional position, or just go to the scheme of not 
    distinguishing between lambda and regular macros.

So how about it?  Regardless of the lambda-macro question, or the style
of functional programming, let EVAL take

   ((MUMBLE ...) A1 ... A2)  into  `(,(macroexpand '(MUMBLE ...)) A1 ... A2)

and try its cycle again.  Only after (macroexpand '(MUMBLE ...)) fails to
produce something discernibly a function would the nefarious "evaluation"
come up for consideration.

[P.S. -- this isn't the old (STATUS PUNT) question -- that only applied to
 forms which had, from the beginning, an atomic-function position.]
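
The proposed EVAL behavior can be sketched as a fragment of a hypothetical
evaluator (EVAL1 and APPLY-FN are invented stand-ins for the evaluator's
internals; this illustrates only the dispatch order, not a definitive
implementation):

```lisp
;; Sketch of the dispatch JONL proposes for a combination whose function
;; position is non-atomic: try macro-expansion first, and only "evaluate"
;; the function position as a last resort.
(defun eval-combination (form)
  (let ((fn (car form))
        (args (cdr form)))
    (cond ((atom fn)                          ; ordinary atomic function
           (apply-fn fn (mapcar #'eval1 args)))
          ((not (eq (macroexpand fn) fn))     ; (MUMBLE ...) expanded to
           (eval1 (cons (macroexpand fn) args))) ; something; try the cycle again
          (t                                  ; nothing discernibly a function:
           (apply-fn (eval1 fn)               ; only now the nefarious
                     (mapcar #'eval1 args)))))) ; "evaluation" comes up
```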

∂04-Feb-82  2047	Howard I. Cannon <HIC at MIT-MC> 	"exceptions" possibly based on misconception; and EVAL strikes again   
Date: 4 February 1982 23:45-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  "exceptions" possibly based on misconception; and EVAL strikes again
To: JONL at MIT-MC
cc: common-lisp at SU-AI, Guy.Steele at CMU-10A

        If, however, we do accept (LAMBDA ...) as a valid form that self-evaluates 
        (or whatever), then I might propose changing lambda macros to be called
        in normal functional position, or just go to the scheme of not 
        distinguishing between lambda and regular macros.

    So how about it?  Regardless of the lambda-macro question, or the style
    of functional programming, let EVAL take

       ((MUMBLE ...) A1 ... A2)  into  `(,(macroexpand '(MUMBLE ...)) A1 ... A2)

Since, in my first note, I said "If, however, we do accept (LAMBDA ...) as a
valid form that...", and we aren't, I am strenuously against this suggestion.

∂05-Feb-82  2247	Fahlman at CMU-20C 	Maclisp compatibility    
Date:  6 Feb 1982 0141-EST
From: Fahlman at CMU-20C
Subject: Maclisp compatibility
To: common-lisp at SU-AI


I would like to second RMS's views about Maclisp compatibility: there are
many goals to be traded off here, and any rigid set of guidelines is
going to do more harm than good.  Early in the effort the following
general principles were agreed upon by those working on Common Lisp at
the time:

1. Common Lisp will not be a strict superset of Maclisp.  There are some
things that need to be changed, even at the price of incompatibility.
If it comes down to a clear choice between making Common Lisp better
and doing what Maclisp does, we make Common Lisp better.

2. Despite point 1, we should be compatible with Maclisp and Lisp
Machine Lisp unless there is a good reason not to be.  Functions added
or subtracted are relatively innocuous, but incompatible changes to
existing functions should only be made with good reason and after
careful deliberation.  Common Lisp started as a Maclisp derivative, and
we intend to move over much code and many users from the Maclisp
world.  The easier we make that task, the better it is for all of us.

3. If possible, consistent with points 1 and 2, we should not do
anything that screws people moving over from Interlisp.  The same holds
for the lesser-used Lisps, but with correspondingly less emphasis.  I
think that Lisp 1.5 should get no special treatment here: all of its
important features show up in Maclisp, and the ones that have changed or
dropped away have done so for good reason.

-- Scott
-------

∂06-Feb-82  1200	Daniel L. Weinreb <dlw at MIT-AI> 	Maclisp compatibility    
Date: Saturday, 6 February 1982, 14:56-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Maclisp compatibility
To: Fahlman at CMU-20C, common-lisp at SU-AI

Your message is exactly what I wanted to see.  This is just as much of a
policy as I think we need.  I didn't want any more rigid guidelines than
that; I just wanted a set of principles that we all agree upon.

Not everybody on the mailing list seems to agree with your set here.  I
do, by the way, but clearly HEDRICK does not.  I hope the official
referee will figure out what to do about this.  Guy?

∂06-Feb-82  1212	Daniel L. Weinreb <dlw at MIT-AI> 	Return values of SETF    
Date: Saturday, 6 February 1982, 15:12-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Return values of SETF
To: common-lisp at SU-AI

I'm pretty much convinced by Masinter's mail.  SETF should be defined to
return the value that it stores.  SETF is really too important a form to
work in an explicitly undefined method, and compiler optimizations
and/or special-purpose setting functions (that exist only so that SETF
can turn into them) are well worth it to keep SETF from having to have
crummy "undefined" behavior.  (Not having any kind of up-to-date Common
Lisp manual, I have no idea how or if it is currently defined.)

∂06-Feb-82  1232	Daniel L. Weinreb <dlw at MIT-AI> 	MVLet     
Date: Saturday, 6 February 1982, 15:25-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: MVLet    
To: RPG at SU-AI, common-lisp at SU-AI

I see your point.  I agree; given this insight, I am happy with the Lispm
syntax plus MVCALL.  There is one thing that I'd like to see improved,
if possible.  In the example:

		(MV-CALL #'(LAMBDA (FOO BAR (:REST ZTESCH)) ...)
			 (BAZOLA))

the order of events is that BAZOLA happens first, and the body of the
function happens second.  This has the same problem that
lambda-combinations had; LET was introduced to solve the problem.  If
anyone can figure out something that solves this problem for MV-CALL
without any other ill effects, I'd like to know about it.  One
possibility is to simply switch the order of the two subforms; what do
people think about that?

However, I'm not trying to be a troublemaker.  If nobody comes up with a
widely-liked improvement, I will be happy to accept the proposal as it
stands.

∂06-Feb-82  1251	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Maclisp compatibility 
Date:  6 Feb 1982 1547-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Maclisp compatibility
To: dlw at MIT-AI
cc: Fahlman at CMU-20C, common-lisp at SU-AI
In-Reply-To: Your message of 6-Feb-82 1506-EST

No, I think the approach suggested by the folks at CMU is fine.
-------

∂06-Feb-82  1416	Eric Benson <BENSON at UTAH-20> 	Re: Maclisp compatibility  
Date:  6 Feb 1982 1513-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: Maclisp compatibility
To: Fahlman at CMU-20C, common-lisp at SU-AI
In-Reply-To: Your message of 5-Feb-82 2341-MST

"Lisp 1.5 should get no special treatment here: all of its important features
show up in Maclisp, and the ones that have changed or dropped away have done
so for good reason."

I am curious about one feature of Lisp 1.5 (and also Standard Lisp) which was
dropped from Maclisp.  I am referring to the Flag/FlagP property list functions.
I realize that Put(Symbol, Indicator, T) can serve the same function, but I
can't see any good reason why the others should have been dropped.  In an
obvious implementation of property lists Put/Get can use dotted pairs and
Flag/FlagP use atoms, making the property list itself sort of a corrupted
association list.  Maclisp and its descendants seem to use a flat list of
alternating indicators and values.  It isn't clear to me what advantage this
representation gives over the a-list.  Were Flag and FlagP dropped as a
streamlining effort, or what?
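
The two representations being contrasted can be sketched as follows
(illustrative only; PLIST-GET and FLAGP are ad hoc names for this sketch):

```lisp
;; Sketch: GET over a flat plist (IND1 VAL1 IND2 VAL2 ...), the
;; representation Maclisp and its descendants use.
(defun plist-get (plist indicator)
  (cond ((null plist) nil)
        ((eq (car plist) indicator) (cadr plist))
        (t (plist-get (cddr plist) indicator))))

;; Sketch: FLAGP over the "corrupted association list" Benson describes,
;; where a bare atom marks a flag and a dotted pair holds a property.
(defun flagp (plist flag)
  (cond ((null plist) nil)
        ((eq (car plist) flag) t)   ; bare atom: the flag is present
        (t (flagp (cdr plist) flag))))
```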
-------

∂06-Feb-82  1429	Howard I. Cannon <HIC at MIT-MC> 	Return values of SETF
Date: 6 February 1982 17:23-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  Return values of SETF
To: common-lisp at SU-AI
cc: dlw at MIT-AI

I strongly agree.  I have always thought it a screw that SETF did not return
a value like SETQ.  It sometimes makes for more compact, readable, and convenient
coding.

∂06-Feb-82  2031	Fahlman at CMU-20C 	Value of SETF  
Date:  6 Feb 1982 2328-EST
From: Fahlman at CMU-20C
Subject: Value of SETF
To: common-lisp at SU-AI


Just for the record, I am also persuaded by Masinter's arguments for
having SETF return the value that it stores, assuming that RPLACA and
RPLACD are the only forms that want to do something else.  It would
cause no particular problems in the Spice implementation to add two new
primitives that are like RPLACA and RPLACD but return the values, and
the additional uniformity would be well worth it.

-- Scott
-------

∂06-Feb-82  2102	Fahlman at CMU-20C 	Re: MVLet      
Date:  6 Feb 1982 2354-EST
From: Fahlman at CMU-20C
Subject: Re: MVLet    
To: dlw at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 6-Feb-82 1536-EST


DLW's suggestion that we switch the order of arguments to M-V-CALL, so
that the function comes after the argument forms, does not look very
attractive if you allow more than one argument form.  This would be the
universally reviled situation in which a single required argument comes
after a rest arg.

As currently proposed, with the function to be called as the first arg,
M-V-CALL exactly parallels the format of FUNCALL.  (The difference, of
course, is that M-V-CALL uses all of the values returned by each of the
argument forms, while FUNCALL accepts only one value from each argument
form.)
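
The parallel can be made concrete with a two-valued form (a sketch using the
proposed M-V-CALL name, and assuming FLOOR returns its quotient and remainder
as two values):

```lisp
;; FUNCALL takes one value from each argument form;
;; M-V-CALL takes all of them.
(funcall  #'+ (floor 7 2))   ; + sees only the quotient: 3
(m-v-call #'+ (floor 7 2))   ; + sees both 3 and 1, quotient and remainder
```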

-- Scott
-------

∂07-Feb-82  0129	Richard Greenblatt <RG at MIT-AI>  
Date: 7 February 1982 04:26-EST
From: Richard Greenblatt <RG at MIT-AI>
To: common-lisp at SU-AI

Re compatibility, etc
  It's getting really hard to keep track of
where things "officially" stand.   Hopefully,
the grosser of the suggestions that go whizzing
by on this mailing list are getting flushed,
but I have this uneasy feeling that one
of these days I will turn around and find
there has been "agreement" to change something
really fundamental like EQ.
  Somewhere there should be a clear and current summary
of "Proposed Changes which would change
the world."  What I'm talking about here are cases
where large bodies of code can reasonably be
expected to be affected, or changes or extensions
to time-honored central concepts like MEMBER or LAMBDA.
  It would be nice to have summaries from time to time
on the new frobs (like this MV-LET thing) that are proposed
but that is somewhat less urgent.

∂07-Feb-82  0851	Fahlman at CMU-20C  
Date:  7 Feb 1982 1149-EST
From: Fahlman at CMU-20C
To: RG at MIT-AI
cc: common-lisp at SU-AI
In-Reply-To: Your message of 7-Feb-82 0426-EST


I feel sure that no really incompatible changes will become "official"
without another round of explicit proposal and feedback, though the
group has grown so large and diverse that we can no longer expect
unanimity on all issues -- we will have to be content with the emergence
of substantial consensus, especially among those people representing
major implementation efforts.  Of course, there is a weaker form of
"acceptance" in which a proposal seems to have been accepted by all
parties and therefore becomes the current working hypothesis, pending an
official round of feedback.

-- Scott
-------

∂07-Feb-82  2234	David A. Moon <Moon at MIT-MC> 	Flags in property lists
Date: Monday, 8 February 1982, 01:31-EST
From: David A. Moon <Moon at MIT-MC>
Subject: Flags in property lists
To: Eric Benson <BENSON at UTAH-20>
Cc: common-lisp at SU-AI

Flat property lists can be stored more efficiently than pair lists
in Lisp with cdr-coding.  That isn't why Maclisp dropped them, of
course; probably Maclisp dropped them because they are a crock and
because they make GET a little slower, which slows down the
interpreter in a system like Maclisp that stores function definitions
on the property list.

∂08-Feb-82  0749	Daniel L. Weinreb <DLW at MIT-MC> 	mv-call   
Date: 8 February 1982 10:48-EST
From: Daniel L. Weinreb <DLW at MIT-MC>
Subject: mv-call
To: common-lisp at SU-AI

I guess my real disagreement with mv-call is that I don't like to see it
used with more than one form.  I have explained before that the mv-call
with many forms has the effect of concatenating together the returned
values of many forms, which is something that I cannot possibly imagine
wanting to do, given the way we use multiple values in practice today.  (I
CAN see it as useful in a completely different programming style that is so
far unexplored, but this is a standardization effort, not a language
experiment, and so I don't think that's relevant.)  This was my original
objection to mv-call.

RPG's message about mv-call shows how you can use it with only one form to
get the effect of the new-style lambda-binding multiple-value forms, and
that looked attractive.  But I still don't like the mv-call form when used
with more than one form.

I do not for one moment buy the "analogy with funcall" argument.  I think
of funcall as a function.  It takes arguments and does something with them,
namely, apply the first to the rest.  mv-call is most certainly not a
function: it is a special form.  I think that in all important ways,
what it does is different in kind and spirit from funcall.  Now, I realize
that this is a matter of personal philosophy, and you may simply not feel
this way.

Anyway, I still don't want to make trouble.  So while I'd prefer having
mv-call only work with one form, and then to have the order of its subforms
reversed, I'll go along with the existing proposal if nobody supports me.

∂08-Feb-82  0752	Daniel L. Weinreb <DLW at MIT-MC>  
Date: 8 February 1982 10:51-EST
From: Daniel L. Weinreb <DLW at MIT-MC>
To: common-lisp at SU-AI

I agree with RG, even after hearing Scott's reply.  I would like to
see, in the next manual, a section prominently placed that summarizes
fundamental incompatibilities with Maclisp and changes in philosophy,
especially those that are not things that are already in Zetalisp.
For those people who have not been following Common Lisp closely,
and even for people like me who are following sort of closely, it would
be extremely valuable to be able to see these things without poring
over the entire manual.

∂08-Feb-82  1256	Guy.Steele at CMU-10A 	Flat property lists   
Date:  8 February 1982 1546-EST (Monday)
From: Guy.Steele at CMU-10A
To: benson at utah-20
Subject:  Flat property lists
CC: common-lisp at SU-AI
Message-Id: <08Feb82 154637 GS70@CMU-10A>

LISP 1.5 used flat property lists (see LISP 1.5 Programmer's Manual,
page 59).  Indeed, Standard LISP is the first I know of that did *not*
use flat property lists.  Whence came this interesting change, after all?
--Guy

∂08-Feb-82  1304	Guy.Steele at CMU-10A 	The "Official" Rules  
Date:  8 February 1982 1559-EST (Monday)
From: Guy.Steele at CMU-10A
To: rg at MIT-AI
Subject:  The "Official" Rules
CC: common-lisp at SU-AI
Message-Id: <08Feb82 155937 GS70@CMU-10A>

Well, I don't know what the official rules are, but my understanding
was that my present job is simply to make the revisions decided
upon in November, and when that revised document comes out we'll have
another round of discussion.  This is not to say that the discussion
going on now is useless.  I am carefully saving it all in a file for
future collation.  It is just that I thought I was not authorized to
make any changes on the basis of current discussion, but only on what
was agreed upon in November.  So everyone should rest assured that a
clearly labelled document like the previous "Discussion" document
will be announced before any other "official" changes are made.

(Meanwhile, I have a great idea for eliminating LAMBDA from the language
by using combinators...)
--Guy

∂08-Feb-82  1410	Eric Benson <BENSON at UTAH-20> 	Re:  Flat property lists   
Date:  8 Feb 1982 1504-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re:  Flat property lists
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI
In-Reply-To: Your message of 8-Feb-82 1346-MST

I think I finally figured out what's going on.  Indeed every Lisp dialect I
can find a manual for in my office describes property lists as flat lists
of alternating indicators and values.  The dialects which do have flags
(Lisp 1.5 and Lisp/360) appear to just throw them in as atoms in the flat
list.  This obviously leads to severe problems in synchronizing the search
down the list!  Perhaps this is the origin of Moon's (unsupported) claim
that flags are a crock.  Flags are not a crock, but the way they were
implemented certainly was!  This must have led to their elimination in more
recent dialects, such as Stanford Lisp 1.6, Maclisp and Interlisp.
Standard Lisp included flags, but recent implementations have used a more
reasonable implementation for them, by making the p-list resemble an a-list
except for the atomic flags.  Even without flags, an a-list seems like a
more obvious implementation to me, since it reflects the structure of the
data.  There is NO cost difference in space or speed (excluding cdr-coding)
between a flat list and an a-list if flags are not included.  The presence
of flags on the list requires a CONSP test for each indicator comparison
which would otherwise be unnecessary.
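A sketch of the Standard Lisp style Benson describes, with the per-entry CONSP test (function names hypothetical):

```lisp
;; Pairs carry properties; bare atoms are flags:
;;   ((color . red) myflag (size . 3))

(defun std-get (plist indicator)
  (dolist (entry plist)
    (when (and (consp entry)               ; the extra test the flags force
               (eq (car entry) indicator))
      (return (cdr entry)))))

(defun std-flagp (plist flag)
  (dolist (entry plist)
    (when (eq entry flag)                  ; flags are the atomic entries
      (return t))))
```

Without the atomic flags, the CONSP test could be dropped and the loop would match a plain ASSOC.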

Much of the above is speculation.  Lisp historians please step forward and
correct me.
-------

∂08-Feb-82  1424	Don Morrison <Morrison at UTAH-20> 	Re:  Flat property lists
Date:  8 Feb 1982 1519-MST
From: Don Morrison <Morrison at UTAH-20>
Subject: Re:  Flat property lists
To: Guy.Steele at CMU-10A
cc: benson at UTAH-20, common-lisp at SU-AI
In-Reply-To: Your message of 8-Feb-82 1346-MST

Stanford LISP 1.6 (which predates "Standard" LISP) used a-lists
instead of flat property lists.  See the manual by Quam and Diffie
(SAILON 28.7), section 3.1.  

It was also mentioned a message or two ago that even in implementations
without cdr-coding  flat  property  lists are  more  efficient.   Would
someone explain to me why?   If we assume that  cars and cdrs cost  the
same and do not have flags (Stanford LISP 1.6 does not have flags) then
I see no difference in  cost.  And certainly the a-list  implementation
is a bit more perspicuous. There's  got to be a reason besides  inertia
why nearly all LISPs use flat property lists.  But in any case,  Common
LISP has no  business telling  implementers how  to implement  property
lists -- simply explain the semantics of PutProp, GetProp, and RemProp,
or whatever they end up being called and leave it to the implementer to
use a  flat  list, a-list,  hash-table,  or,  if he  insists,  a  flat,
randomly ordered list of triples.  It should make no difference to  the
Common LISP definition. 
-------

∂08-Feb-82  1453	Richard M. Stallman <RMS at MIT-AI>
Date: 8 February 1982 16:56-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

In my opinion, the distinction between functions and special
forms is not very important, and Mv-call really is like funcall.

∂19-Feb-82  1656	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Revised sequence proposal 
Date: 19 Feb 1982 1713-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: common-lisp at SU-AI
Subject: Revised sequence proposal
Message-ID: <820118171315FAHLMAN@CMU-20C>

At long last, my revised revised proposal for sequence functions is
ready for public perusal and comment.  Sorry for the delay -- I've been
buried by other things and this revision was not as trivial to prepare
as I had expected -- several false starts.

The proposal is in the files <FAHLMAN>NNSEQ.PRESS and <FAHLMAN>NNSEQ.DOC
on CMU-20C.

-- Scott
   --------

∂20-Feb-82  1845	Scott.Fahlman at CMU-10A 	Revised sequence proposal    
Date: 20 February 1982 2145-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: common-lisp at su-ai
Subject:  Revised sequence proposal
Message-Id: <20Feb82 214553 SF50@CMU-10A>


...is also on CMUA as TEMP:NNSEQ.PRE[C380SF50] and also .DOC.  It might
be easier for some folks to FTP from there.
-- Scott

∂21-Feb-82  2357	MOON at SCRC-TENEX 	Fahlman's new new sequence proposal, and an issue of policy 
Date: Monday, 22 February 1982  02:50-EST
From: MOON at SCRC-TENEX
To:   common-lisp at sail
Subject: Fahlman's new new sequence proposal, and an issue of policy

CMU-20C:<FAHLMAN>NNSEQ.DOC seems to be a reasonable proposal; let's accept
it and move on to something else.  A couple nits to pick:

I don't understand the type restrictions for CONCAT.  Is (vector fixnum) a
subtype of (vector t)?  Is (vector (mod 256.)) a subtype of (vector t)?
Presumably all 3 of these types require different open-coded access
operations on VAXes, so if CONCAT allows them to be concatenated without
explicit coercions then the type restriction is inutile.  I would suggest
flushing the type restrictions but retaining the output-type specifier.
After all, the overhead is only a type dispatch for each argument; the
inner loops can be open-coded on machines where that is useful.  The
alternative seems to be to have implementation-dependent type restrictions,
something we seem to have decided to avoid totally.
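What Moon suggests (keep the output-type specifier, drop the per-argument restrictions) is roughly the behavior the eventual CONCATENATE adopted; a sketch:

```lisp
;; Arguments of mixed sequence types; the output-type specifier
;; alone determines what gets built:
(concatenate 'string "foo" '(#\b #\a #\r))   ; => "foobar"
(concatenate 'list "ab" #(1 2))              ; => (#\a #\b 1 2)
(concatenate 'vector '(1) #(2 3))            ; => #(1 2 3)
```

The type dispatch happens once per argument; the copying loops can still be open-coded per representation.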

mumble-IF-NOT is equally as useful as mumble-IF, if you look at how they
are used.  This is because the predicate argument is rarely a lambda, but
is typically some pre-defined function, and most predicates do not come in
complementary versions.  (Myself, I invariably write such things with
LOOP, so I don't have a personal axe to grind.)
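Moon's point in a sketch: few predicates come in complementary pairs, so without the -IF-NOT variants the caller must wrap a lambda:

```lisp
;; Keep the numbers:
(remove-if-not #'numberp '(a 1 b 2))                   ; => (1 2)
;; Without -IF-NOT, the same thing needs an explicit complement:
(remove-if (lambda (x) (not (numberp x))) '(a 1 b 2))  ; => (1 2)
```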

REMOVE should take :start/:end (perhaps the omission of these is just a
typo).


A possible other thing to move on to: It's pretty clear that the more
advanced things like the error system, LOOP, the package system, and
possibly the file system aren't going to be reasonable to standardize on
for some time (say, until the summer).  As far as packages go, let's say
that there are keywords whose names start with a colon and leave the rest
for later; keywords are the only part of packages that is really pervasive
in the language.  As far as errors go, let's adopt the names of the
error-reporting functions in the new Lisp machine error system and leave
the details of error-handling for a later time.  I'd like to move down to
some lower-level things.  Also I'm getting extremely tired of the large
ratio of hot air to visible results.  There are two things that are
important to realize:  We don't need to define a complete and comprehensive
Lisp system as the first iteration of Common Lisp for it to be useful.  If
the Common Lisp effort doesn't show some fruit soon people are going to
start dropping out.

We should finish defining the real basics like the function-calling
mechanism, evaluation, types, and the declaration mechanism.  Then we ought
to work on defining a kernel subset of the language in terms of which the
rest can be written (not necessarily efficiently); the Common Lisp
implementation in terms of itself may not actually be used directly and
completely by any implementation, but will provide a valuable form of
executable documentation as well as an important aid to bringing up of new
implementations.  Then some people should be delegated to write such code.
Doing this will also force out any fuzzy thinking in the basic low-level
stuff.

This is, in fact, exactly the way the Lisp machine system is structured.
The only problem is that it wasn't done formally and could certainly
benefit from rethinking now that we have 7 years of experience in building
Lisp systems this way behind us.  From what I know of VAX NIL, Spice Lisp,
and S-1 NIL, they are all structured this way also.

Note also that this kernel must include not only things that are in the
language, but some basic tools which ought not to have to be continuously
reinvented; for example the putative declaration system we are assuming
will exist and solve some of our problems, macro-writing tools, a
code-walking tool (which the new syntax for LOOP, for one, will implicitly
assume exists).

∂22-Feb-82  0729	Griss at UTAH-20 (Martin.Griss)    
Date: 22 Feb 1982 0820-MST
From: Griss at UTAH-20 (Martin.Griss)
To: MOON at SCRC-TENEX
cc: Griss
In-Reply-To: Your message of 22-Feb-82 0113-MST
Remailed-date: 22 Feb 1982 0827-MST
Remailed-from: Griss at UTAH-20 (Martin.Griss)
Remailed-to: common-lisp at SU-AI

Re: Moon's comment on middle-level code as "working" documentation. That is
exactly the route we have been following for PSL at Utah; in the process of
defining and porting our Versions 2 and 3 systems from 20 to VAX to Apollo
domain, a lot of details have been discussed and issues identified.
In order for us to become involved and for others to begin some sort of
implementation, a serious start has to be made on these modules.

We certainly would like to use PSL as a starting point for a common lisp
implementation, and this would only happen when LISP sources and firm
agreement on some modules have been reached. We have hopes of PSL running on
DEC-20, VAX, 68000, 360/370 and CRAY sometime later in the year, and would
be delighted to have PSL as a significant sub-set of Common LISP, if not
more. But right now, there is not much to do.

Martin Griss
-------


∂08-Feb-82  1222	Hanson at SRI-AI 	common Lisp 
Date:  8 Feb 1982 1220-PST
From: Hanson at SRI-AI
Subject: common Lisp
To:   rpg at SU-AI
cc:   hanson

	I would indeed like to influence Common Lisp, if it is not
too late, and if any of the deficiencies of FranzLisp are about to
be repeated.  There are a number of people here in the Vision group
who have various ideas and experiences with other Lisps that I can
try and stand up for.
	As I am pretty much stuck with FranzLisp on the Image Understanding
Testbed, there are a number of things that concern me which may or may not
have been considered in Common Lisp.  Among them are:
	* Sufficient IO flexibility to give you redirection from devices
to files (easy in Franz due to Unix's treating devices as files, possible
problems in other environments)
	* Single character IO to allow the construction of Command-completion
monitors in Lisp, etc. (Impossible without special Hackery in Franz since
it always waits for a line feed before transmitting a line.)
	* An integrated extensible screen editor like our current VAX/Unix/emacs
or like the Lisp Machine editor.  Fanciness of the raw environment is not
a virtue. Let the extensibility take care of that.
	* USER-ORIENTED STRING MANIPULATION utilities.  Franz is a total loser
here - after a certain number of (implode (car (aexploden foo)))'s one begins
to lose one's sense of humor.
	* FLOATING POINT COMPUTATION that is as fast as the machine can go.
The VAX is pretty slow as it is, without having Lisp overhead in the way
when you want to do a convolution on a 1000x1000 picture file.
	* SPECIAL TWO-DIM DATA STRUCTURES allowing very fast access and
arithmetic, both 8-bit, 16-bit, and short and long floating point, for
such things as image processing, edge operators, convolutions.  I don't
know what you would do here, but possibly special matrix multiplying
SW down at the bottom level would be a start - one needs all kinds of
matrix arithmetic primitives to work analogously to the string primitives.
	Also, I've heard it said that special text primitives are also
desirable to write an efficient EMACS in Lisp.
	* DYNAMIC LINKING OF FOREIGN LANGUAGES.  You should be able to
do for almost anything what Franzlisp does for C, but with some far
better mechanism for passing things like strings BACK UP to Lisp (not
possible without hackery in Franz).  We want to be able to use Lisp
as an executive to run programs and maybe even subroutines written in
any major language on the VAX.

	-That's all I can think of for now, except maybe a device-independent
interactive Graphics package.  Some of us would be delighted to get together
and talk again as soon as you think it might be productive for the future
of Common Lisp.
	--Andy Hanson 415-859-4395  HANSON@SRI-AI
-------

∂28-Feb-82  1158	Scott E. Fahlman <FAHLMAN at CMU-20C> 	T and NIL  
Date: 28 Feb 1982 1500-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: common-lisp at SU-AI
Subject: T and NIL
Message-ID: <820127150012FAHLMAN@CMU-20C>


OK, folks, the time has come.  We have to decide what Common Lisp is
going to do about the things that have traditionally been called T and
NIL before we go on any farther.  Up until now, we have deferred this
issue in the hope that people's positions would soften and that their
commitment to Common Lisp would increase over time, but we can't leave
this hanging any longer.  Almost any decision would be better than no
decision.

It is clear that this is an issue about which reasonable people can
differ (and about which unreasonable people can also differ).  I think
that most of us, if we were designing a Lisp totally from scratch, would
use something other than the symbols T and NIL as the markers for truth,
falsity, and list-emptiness.  Most of us have written code in which we
try to bind T as a random local, only to be reminded that this is
illegal.  Most of us have been disgusted at the prospect of taking the
CAR and CDR of the symbol NIL, but the advantages of being able to CDR
off the end of a list, in some situations, are undeniable.
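The undeniable advantage in question is the guard-free idiom that (CAR NIL) = (CDR NIL) = NIL permits (sketch):

```lisp
;; With CAR and CDR of NIL defined as NIL, pairwise list access
;; needs no end-of-list guard:
(defun second-or-nil (l)
  (car (cdr l)))

(second-or-nil '(a b c))  ; => B
(second-or-nil '(a))      ; => NIL, quietly
(second-or-nil '())       ; => NIL, not an error
```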

On the other hand, the traditional Maclisp solution works, is used in
lots of code, and feels natural to lots of Lisp programmers.  Should we
let mere aesthetics (and arguable aesthetics at that) push us into
changing such a fundamental feature of the language?  At the least, this
requires doing a query-replace over all code being imported from
Maclisp; at worst, it may break things in subtle ways.

What it comes down to is a question of the relative value that each
group places on compatibility versus the desire to fix all of the things
that Maclisp did wrong.  The Lisp Machine people have opted for
compatibility on this issue, and have lots of code and lots of users
committed to the old style.  The VAX NIL people have opted for change,
with the introduction of special empty-list and truth objects.  They too
have working code embodying their decision, and are loath to change.
The Spice Lisp group has gone with an empty-list object, but uses the
traditional T for truth.

What we need is some solution that is at least minimally acceptable to
all concerned.  It would be a real shame if anyone seceded from the
Common Lisp effort over so silly an issue, especially if all it comes
down to is refusing to do a moby query-replace.  However, in my opinion,
it would be even more of a shame if we left all of this up to the
individual implementors and tried to produce a language manual that
doesn't take a stand one way or the other.  Such a manual is guaranteed
to be confusing, and it is something that Common Lisp would have to live
with for many years, long after the present mixture of people and
projects has become irrelevant.  Either solution, on either of these
issues, is preferable to straddling the fence and having to say that
predicates return some "unspecified value guaranteed to be non-null" or
words to that effect.

On the T issue, the proposals are as follows:

1. Truth is represented by the symbol T, whose only special property is
that its value is permanently bound to T.

2. Truth (and also special input values to certain functions?) is
represented by a special truthity object, not a symbol.  This object is
represented externally as #T, and it presumably evaluates to itself.  In
this proposal, T is just another symbol with no special properties.

2A. Like proposal 2, but the symbol T is permanently bound to #T, so
that existing code with constructs like (RETURN T) doesn't break.

3. Implementors are free to choose between 1 and 2A.  Predicates are
documented as returning something non-null, but it is not specified what
this is.  It is not clear what to do about the T in CASE statements or
as the indicator of a default terminal stream.

I think this case is pretty clear.  As far as I can tell, everyone wants
to go with option 1 except JONL and perhaps some others associated with
VAX NIL, who already have code that uses #T.  Option 2 would allow us to
use T as a normal variable, which would be nice, but would break large
amounts of existing code.  Option 2A would break much less code, but if
T is going to be bound permanently to something, it is hard to see a
good reason not just to bind it to T.  Option 3 is the sort of ugly
compromise I discussed above.

If, as it appears, VAX NIL could be converted to using T with only a day
or so of effort, I think that they should agree to do this.  It would be
best to do this now, before VAX NIL has a large user community.  If there
are deeper issues involved than just having some code and not wanting to
change, a clear explanation of the VAX NIL position would be helpful.

The situation with respect to NIL is more complex.  The proposals are as
follows:

1. Go with the Maclisp solution.  Both "NIL" and "()" read in as the
symbol NIL.  NIL is permanently bound to itself, and is unique among
symbols in that you can take its CAR and CDR, getting NIL in either
case.  In other respects, NIL is a normal symbol: it has a property
list, can be defined as a function (Ugh!) and so on.  SYMBOLP, ATOM, and
NULL of NIL are T; CONSP of NIL is NIL; LISTP of NIL is controversial,
but probably should be T.

2. Go with the solution in the Swiss Cheese edition.  There is a
separate null object, sometimes called "the empty list", that is written
"()".  This object is used by predicates and conditionals to represent
false, and it is also the end-of-list marker.  () evaluates to itself,
and you can take the CAR and CDR of it, getting () in either case.
NULL, ATOM, and LISTP of () are T; CONSP and SYMBOLP of () are ().
Under this proposal, the symbol NIL is a normal symbol in all respects
except that its value is permanently bound to ().

3. Allow implementors to choose either 1 or 2.  For this to work, we
must require that the null object, whatever it is, prints as "()", at
least as an option.  Users must not represent the null object as 'NIL,
though NIL, (), and '() are all OK in contexts that evaluate them.  The
user can count on ATOM and NULL of () to be T, and CONSP of () to be ().
SYMBOLP of () is officially undefined.  LISTP of () should be defined to
be T, so that one can test for CAR-ability and CDR-ability using this.

VAX NIL and Spice Lisp have gone with option 2; the Lisp Machine people
have stayed with option 1, and have expressed their disinclination to
convert to option 2.  Most of us in the Spice Lisp group were suspicious
of option 2 at first, but accepted it as a political compromise; now the
majority of our group has come to like this scheme, quite apart from
issues of inertia.  I would point out that option 2 breaks very little
existing code, since you can say things like (RETURN NIL) quite freely.
Code written under this scheme looks almost like code written for
Maclisp -- a big effort to change one's style is not necessary.  It is
necessary, however, to go through old code and convert any instances of
'NIL to NIL, and to locate any occurrences of NIL in contexts that
implicitly quote it.  Option 3 is another one of those ugly compromises
that I believe we should avoid.  My own view is that I would prefer
either option 1 or 2, with whatever one-time inconvenience that would
imply for someone, to the long-term confusion of option 3.

I propose that the Lisp Machine people and other proponents of option 1
should carefully consider option 2 and what it would take to convert to
that scheme.  It is not as bad as it looks at first glance.  If you are
willing to convert, that would be the ideal solution.  If not, I can
state that Spice Lisp would be willing to revert to option 1 rather than
cause a major schism; I cannot, of course, speak for the VAX NIL people
or any other groups on this.

Let me repeat that we must decide this issue as soon as possible.  We
have made a lot of progress on the multiple value and sequence issues,
but until we have settled T and NIL, we can hardly claim that the
language specification is stabilizing.  It would be awfully nice to have
this issue more or less settled before we mass-produce the next edition
of the manual.

-- Scott
   --------

∂28-Feb-82  1342	Scott E. Fahlman <FAHLMAN at CMU-20C> 	T and NIL addendum   
Date: 28 Feb 1982 1640-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: common-lisp at SU-AI
Subject: T and NIL addendum
Message-ID: <820127164026FAHLMAN@CMU-20C>

It occurs to me that my earlier note discusses the T and NIL issue
primarily in terms of the positions taken by the Lisp Machine and
VAX NIL communities.  The reason for this, of course, is that these two
groups have taken strong and incompatible positions that somehow have to
be resolved if we are to keep them both in the Common Lisp camp.  I did
not mean to imply that we are uninterested in the views and problems of
other implementations, existing or planned, or of random kibitzers for
that matter.

-- Scott
   --------

∂28-Feb-82  1524	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
Date: 28 February 1982 18:23-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  T and NIL.
To: COMMON-LISP at SU-AI

#+FLAME-MODE '|
Existing bodies of working code are absolutely no consideration for
the NIL implementors in this particular issue. I guess this might be
obvious, but why should we be so callously radical? It is simply
that we have reason to believe that existing code which depends
on the present relationship between list and symbol semantics
and predicate semantics, and which will not run as-is in NIL, is
exceedingly easy to find and fix. We also believe that the
existing lisp 1.5 semantics are inadvertently overloaded,
implying that GET, PUTPROP, SYMEVAL, and other symbol primitives
may be used on the return value of predicates and the empty list, and
needlessly implying that evaluation-semantics need not reflect
datatype-semantics. |


In bringing up Macsyma and other originally pdp10-maclisp code in NIL,
I have found it much easier to deal with the predicate-issue than with
the fact that CAR and CDR do error-checking. Well, the CAR/CDR problem
had already been "smoked" out of Macsyma by the Lispmachine. There
was no need to do any QUERY-REPLACE, and no subtle bugs.
(Non-trivial amounts of LISPMACHINE code were also snarfed for use in NIL,
 although Copyright issues [NIL wants to be public domain] may force
 a rewrite of these parts. The only Lispmachine code which depended on
 () being a symbol explicitly said so in a comment, since probably the
 author felt "funny" about it in the first place.)

There was only one line of Macsyma which legally depended on the return values
of predicates other than their TRUTHITY or FALSITY. There were a few more
lines of Macsyma which depended illegally on the return value of
predicates. These were situations where GET, PUTPROP, and REMPROP
were being used on the return value of predicate-like functions,
e.g. using REMPROP on the return value of the "CDR-ASSQ" idiom, using
GET on the return value of GET. In good-old "bare" pdp-10 maclisp
with only one program running in it, this is not a problem, but
=> On the lispmachine, which has a large environment and many usages of
   property lists, it can be very dangerous for programs to unwittingly
   share the property lists of global symbols T and NIL. <=
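The illegal idiom GJC describes looks like this sketch (symbol and property names hypothetical); when the inner lookup returns NIL, the outer GET quietly consults the property list of the global symbol NIL:

```lisp
(setf (get 'x 'pointer) nil)        ; a lookup that "fails"

;; GET of a GET: legal only because predicate-like functions happen
;; to return symbols.  Here it silently reads NIL's own plist, which
;; every program in a shared image sees:
(get (get 'x 'pointer) 'some-ind)   ; => NIL, taken from the symbol NIL
```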

The other part of the picture is that we know we can write code
which doesn't have things like #T in it, and which will run in
COMMON-LISP regardless of what COMMON-LISP does.

-gjc

∂28-Feb-82  1700	Kim.fateman at Berkeley 	smoking things out of macsyma 
Date: 28 Feb 1982 16:35:58-PST
From: Kim.fateman at Berkeley
To: COMMON-LISP@SU-AI, GJC@MIT-MC
Subject: smoking things out of macsyma


I really doubt that all problems are simple
to smoke out;  in fact, I suspect that there are still places
where the Lisp Machine version of Macsyma fails for mysterious
reasons.  These may be totally unrelated to T vs #T or NIL vs (),
but I do not see how GJC can be so confident.

For example, when we brought Macsyma up on the VAX, (after it
had allegedly been brought up on a CADR) we found
places where property lists were found by computing CAR of atoms;
we found a number of cases of (not)working-by-accident functions whose 
non-functionality was noticed only when run on the VAX with a modest
amount of additional error checking. (e.g. programs which should
have bombed out just chugged along on the pdp-10).

GJC claims there is (was?) only one line of Macsyma which legally
depends on other-than truthity of a predicate. I believe this is
false, but in any case, a  proof of his claim would require rather 
extensive analysis. Whichever way this decision goes (about NIL or ()),
I would be leery of making too much of GJC's note for supporting evidence.

∂28-Feb-82  1803	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Re:  T and NIL. 
Date: 28 Feb 1982 2105-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: GJC at MIT-MC, COMMON-LISP at SU-AI
Subject: Re:  T and NIL.
Message-ID: <820127210537FAHLMAN@CMU-20C>
Regarding: Message from George J. Carrette <GJC at MIT-MC>
              of 28-Feb-82 1823-EST

I am not sure that I completely understand all of your (GJC's) recent
message.  Some of the phrases you use ("the predicate-issue", for
example, and some uses of "illegal") might be read in several ways.  I
want to be very sure that I understand your views.  Is the following a
reasonable summary, or am I misreading you:

1. The VAX NIL group's preference for separate truth and
empty-list/false objects is not primarily due to your investment in
existing code, but rather because you are concerned about the unwisdom
of overloading the symbols T and NIL.

2. On the basis of your experience in porting large programs from
Maclisp to NIL, you report that very few things have to be changed and
that it is very easy to find them all.

3. If, nevertheless, the Common Lisp community decides to go with the
traditional Maclisp use of T and NIL as symbols, you will be able to
live with that decision.

-- Scott
   --------

∂28-Feb-82  2102	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
Date: 1 March 1982 00:02-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  T and NIL.
To: FAHLMAN at CMU-20C
cc: COMMON-LISP at SU-AI

1. Right, not much VAX-NIL code written in LISP depends on this T and NIL issue.
2. Right, no query-replace was needed, no subtle bugs lurking due to this.
   I did make a readtable for Macsyma so that NIL read in as ().
3. Here I meant that the "T and NIL" thing is not an important
   TRANSPORTABILITY issue. Code which does not depend on the overloading
   will indeed run. But building the overloading into NIL at this point
   will cost something. I'm not sure it is worth it.


∂28-Feb-82  2333	George J. Carrette <GJC at MIT-MC> 	Take the hint.
Date: 1 March 1982 02:33-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: Take the hint.
To: Kim.fateman at UCB-C70
cc: COMMON-LISP at SU-AI

I really wish you wouldn't use the COMMON-LISP mailing
list for sales-pitches about how much better your Franz
implementation of Macsyma is than the Lispmachine implementation,
especially when it comes down to such blatant mud-slinging
as saying that you "suspect that there are still places
where the Lisp Machine version of Macsyma fails for mysterious
reasons."

Just because GJC mentions the magic word Macsyma doesn't mean you
have to take it as a cue to flame. What you said had nothing
to do with the concerns of COMMON-LISP. Who do you think cares about
what you "suspect" about the Lispm?


∂01-Mar-82  1356	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: T and NIL   
Date:  1 Mar 1982 1211-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: T and NIL
To: FAHLMAN at CMU-20C
cc: common-lisp at SU-AI
In-Reply-To: Your message of 28-Feb-82 1500-EST

If you are calling for a vote, here are mine.

On truth:  1, 2A, 2, 3.  As long as you are going to say that everything
non-NIL (non-()?) is true, it seems completely pointless to add a new
data-type to represent truth.

On emptiness:  1, 2, 3.

I feel very strongly about the undesirability of allowing differences
among implementations.  I feel less strongly about the undesirability
of changing T and NIL to #T and ().

Mostly, I simply don't understand the reason for changing NIL and T.  I
thought the goal of CL was to make changes only when there is some
reason for them.  The only reason I can figure out is that people find
it inelegant that T and NIL look like symbols but don't quite work like
normal symbols.  However it seems at least as inelegant to add a new data
type for each of them.  Particularly when the most likely proposals
leave NIL and T so they can't be rebound, thus not really solving the
problem of having NIL and T be odd.

By the way, I have another issue that is going to sound trivial at
first, but which may not turn out to be:  Does anyone care about whether
Lisp code can be discussed verbally?  How are you going to read #T and
() aloud (e.g. in class, or when helping a user over the phone)?  I
claim the best pronunciation of () is probably the funny two-toned bleep
used by the Star Trek communicators, but I am unsure how to get it in
class.  In fact, if you end up with 2A and 2, which seem the most likely
"compromises", people are going to end up reading #T and () as "t" and
"nil".  That is fine as long as no one actually uses T and NIL as if
they were normal atoms.  But if they do, imagine talking (or thinking)
about a program that had a list (NIL () () NIL).

By the way, if you do decide to use proposal 1 for NIL, please consider
disallowing NIL as a function.  It seems that it is going to be worse
for us to allow NIL as a function than to implement property lists or
other attributes.
-------

∂01-Mar-82  2031	Richard M. Stallman <RMS at MIT-AI> 	Pronouncing ()    
Date: 1 March 1982 23:30-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: Pronouncing ()
To: common-lisp at SU-AI

If () becomes different from NIL, there will not be any particular
reason to use the symbol NIL.  Old code will still have NILs that are
evaluated, but in those places, NIL will be equivalent to ().

So there will rarely be a need to distinguish between the symbol NIL
and ().  It will be no more frequent than having to distinguish
between LIST-OF-A and (A) or between TWO and 2.  When the problem does
come up, it will not be insuperable, just a nuisance, like the other
two problems.

Alternatively, we might pronounce () as "empty" or "false".

∂01-Mar-82  2124	Richard M. Stallman <RMS at MIT-AI> 	() and T.    
Date: 1 March 1982 23:43-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: () and T.
To: common-lisp at SU-AI

I believe that () should be distinguished from NIL because
it is good if every data type is either all true or all false.
I don't like having one symbol be false and others true.

Another good result from distinguishing between () and NIL is
that the empty list can be LISTP.

For these reasons, I think that the Lisp machine should convert
to option 2 for NIL.

The situation for T is different.  Neither of those advantages
has a parallel for the case of T and #T.  It really doesn't matter
what non-() object is returned by predicates that want only to return
non-falsity, so the symbol T is as good as any.  There is no reason
to have #T as distinct from T.  However, option 3 is not really ugly.
Since one non-() value is as good as another, there is no great need
to specify which value the implementation must use.  I prefer option 1,
but I think option 3 is nearly as good.

Meanwhile, let's have the predicates SYMBOLP, NUMBERP, STRINGP and CONSP
return their arguments, to indicate truth.  This makes possible the
construction
  (RANDOM-FUNCTION (OR (SYMBOLP expression) default))
where default might eval to a default symbol or might be a call to ERROR.
To do this now, we must write
  (RANDOM-FUNCTION (LET ((TEM expression))
		     (IF (SYMBOLP TEM) TEM
		       default)))
LISTP should probably return its argument when it is a non-() list.
(LISTP ()) should return some non-() list, also.
ATOM should return its argument if that is not ().
(ATOM ()) should return T.  Then ATOM's value is always an atom.

The general principle is: if a predicate FOO-P is true if given
falsehood as an argument, FOO-P should always return an object
of which FOO-P is true.
If, on the other hand, FOO-P is false when given falsehood as an
argument, then FOO-P should always return its argument to
indicate truth.

These two principles can be applied whether or not () and NIL
are the same.  If applied, they minimize the issue about T and #T.
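
These two principles can be sketched concretely.  The definitions below are
illustrative only (the starred names are hypothetical, not part of any
proposal):

  ;; SYMBOLP is false of the false object, so an argument-returning
  ;; version can simply hand back its argument to indicate truth.
  (DEFUN SYMBOLP* (X)
    (IF (SYMBOLP X) X ()))

  ;; ATOM is true of the false object, so it must instead always
  ;; return some object of which ATOM is itself true.
  (DEFUN ATOM* (X)
    (COND ((NULL X) T)          ; (ATOM* ()) => T, and T is an atom
          ((ATOM X) X)          ; any other atom is its own truth value
          (T ())))              ; conses are not atoms

With such a SYMBOLP*, the motivating example reads
  (RANDOM-FUNCTION (OR (SYMBOLP* expression) default))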

∂02-Mar-82  1233	Jon L White <JONL at MIT-MC> 	NIL versus (), and more about predicates.    
Date: 2 March 1982 14:28-EST
From: Jon L White <JONL at MIT-MC>
Subject: NIL versus (), and more about predicates.
To: Fahlman at CMU-10A
cc: common-lisp at SU-AI


NIL and ()

  RMS just raised several important points about why it would
  be worth the effort to distinguish the empty list from the symbol 
  NIL.  Some years ago when the NIL effort addressed this question,
  we felt that despite **potential** losing cases, there would be
  almost no effort involved in getting existing MacLISP code to
  work merely by binding NIL at top level to ().   GJC's comments
  (flaming aside) seem to indicate that the effect of this radical
  change on existing code is indeed infinitesimal;  the major problem 
  is convincing the unconvinced hacker of this fact.  I've informally
  polled a number of LISPMachine users at MIT over the last year on 
  this issue, and the majority response is that the NIL/() thing is 
  unimportant, or at most an annoyance -- it pales entirely when compared 
  to the advantages of a **stable** system (hmmm, LISPM still changing!).

Return value of Predicates:

  However, we didn't feel that it would be so easy to get around
  the fact that the function NULL is routinely used both to test for the 
  nullist, and for checking predicate values.  That seems to imply that 
  the nullist will still have to do for boolean falsity in the LISP
  world.

  Boolean truthity could be any non-null object, and #T is merely a 
  way of printing it.  As long as #T reads in as the canonical truth 
  value, then there is no problem with existing NIL code, for I don't 
  believe anyone (except in a couple of malice aforethought cases) 
  explicitly tries to distinguish #T from other non-null objects.  
  Certainly, we all could live with a decision to have #T read in as T.
  But note that if #T isn't unique, then there is the old problem, as 
  with NIL and () in MacLISP now, that two formats are acceptable for 
  read-in, but only one can be canonically chosen for printout;  it would 
  thus be *possible* for a program to get confused if it were being 
  transported from an environment where the distinction wasn't made into 
  one where it was made.

  Most "random" predicates in PDP10 MacLISP (i.e., predicates that
  don't really return any useful value other than non-false) return the 
  value of the atom *:TRUTH, rather than a quoted constant, so that it is 
  possible to emulate NIL merely by setq'ing this atom.

  At the Common-LISP meeting last November, my only strong position
  was that it would be unwise *at this point in time* to commit "random" 
  predicates to return a specific non-false value (such as the symbol T).  
  The reason is simply that such a decision effectively closes out the 
  possibility of ever getting a truthity different from the symbol T -- not 
  that there is existing code depending on #T.  Had the original designers 
  of LISP been a little more forward-looking (and hindsight is always better 
  than foresight!) they would have provided one predicate to test for nullist,
  and another for "false";  even if one particular datum implements both,
  it would encourage more "structure" to programs.   I certainly don't feel 
  that the nullist/"false" merger can be so easily ignored as the nullist/NIL 
  merger.

  The T case for CASE/SELECT is unique anyway -- the T there is unlike
  the T in cond clauses, since it is not evaluated.  This problem
  would come up regardless of what the truthity value is (i.e., the
  problem of the symbols T and OTHERWISE being special-cased by CASE).

∂02-Mar-82  1322	Jon L White <JONL at MIT-MC> 	NOT and NULL: addendum to previous note 
Date: 2 March 1982 16:17-EST
From: Jon L White <JONL at MIT-MC>
Subject: NOT and NULL: addendum to previous note
To: Common-Lisp at SU-AI

The merging of the functionality of NOT and NULL makes it
mechanically impossible to separate out the usages of null
as the nullist from those usages as "false";  this merger,
of course, was almost demanded by the lack of a "false" 
distinct from null.   In fact, both names, NOT and NULL,
have probably been around since antiquity, but there have never
been two separate functionalities.

∂02-Mar-82  1322	George J. Carrette <GJC at MIT-MC> 	T and NIL.    
Date: 2 March 1982 15:59-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  T and NIL.
To: FAHLMAN at CMU-20C
cc: COMMON-LISP at SU-AI

To give some perspective, the things to change in the existing NIL code
to support the T & NIL symbol overloading from lisp 1.5 are:
[1] Change the type code of the () object to be of type SYMBOL.
[2] Change every primitive which acts on symbols to have a
    special-case check like (DEFUN PLIST (X) (%PLIST (SYMBOL-FILTER X))),
    where: (DEFUN SYMBOL-FILTER (X)
	     (IF (SYMBOLP X)
		 (IF (NULL X) *NIL-SURROGATE* X)
                 (SYMBOL-FILTER (WRONG-TYPE-ARG-ERROR "Not a symbol" X))))
    and %PLIST is the usual open-compiled structure reference.
[3] Make the usual changes in the evaluator, special case for T and NIL.
[4] Make #T read in as T.
[5] Recompile and reassemble the world.

So you can see that it isn't all that much work, won't slow things
down too much (mainly the evaluator), and won't make things any bigger.
Larger changes, such as changing the calling sequence of most things
in the virtual machine, have been made in NIL in the recent past.

Remember though, this will be in lisp, and not hidden away in microcode,
so users will be able to see the funny stuff going on. It won't be
as if the semantics were built-in to the hardware or engraved on
stones brought down from a mountain top.

Obviously the other more localized way of changing this is to redefine
CAR and CDR to work on the symbol NIL. Totally out of the question
in a non-microcoded implementation.


∂02-Mar-82  1406	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	I think I am missing something 
Date:  2 Mar 1982 1658-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: I think I am missing something
To: fahlman at CMU-20C
cc: common-lisp at SU-AI

In the last couple of days, I have been seeing lots of potentially
useful discussion on how difficult it is to change various programs or
dialects to fit various conventions.  However I was also interested to
see why one would want to change from the hallowed definitions of T and
NIL in the first place. One of the messages yesterday had what seemed at
first to be a good justification, and at least one person has made
comments in passing today that seem to indicate they were thinking the
same thing.  But I have a problem with it. The justification, as I
understand it, is that currently NIL is overloaded, and thus leads to
ambiguities.  The most common one is when you say (GET 'FOO), you get
back NIL, and you don't know whether this means there is no FOO
property, or there is one and its value is NIL.  I agree that this is
annoying.  However as I understand the proposal, () is going to be used
for both the empty list and Boolean false.  If so, I don't understand
how this resolves the ambiguity.  As far as I can see, the new symbol
NIL is going to be useless, except that it will help old code such as
(RETURN NIL) to work. Basically everybody is now going to use () where
they used to use NIL. As far as I can see, the same ambiguity is going
to be there.  Under the new system, FOO is just as likely to have a
value of () as it was to have a value of NIL under the old system, so I
still can't tell what is going on if (GET 'FOO) returns ().  Even if you
separate the two functions, and have a () and a #FALSE (the canonical
object indicating falsity), something that would break *very* large
amounts of code, I would think there would be a reasonable number of
applications where properties would have Boolean values.  So (GET 'FOO)
would still sometimes return #FALSE.

∂03-Mar-82  1158	Eric Benson <BENSON at UTAH-20> 	The truth value returned by predicates    
Date:  3 Mar 1982 1228-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: The truth value returned by predicates
To: Common-Lisp at SU-AI

It seems to me that, except for those predicates like MEMBER which return a
specific value, the implementation should be allowed to return any handy
non-false value.  This is inconsequential for microcoded implementations,
but could save a great deal in "stock hardware" versions.  Whether or not
more predicates should return useful values, as Stallman suggests, is a
different matter.  My feeling is "why not?" since programmers are free to
use this feature or not, as they see fit.  I think that it might lead to
obscure code, but I wouldn't force my opinion on others if it doesn't
infringe on me.  For the same reason, I think either option 1 or 2 for
NIL/() is reasonable.  In fact, most opinions on this matter seem to be "I
prefer X but I can live with Y."  Although I think () is cleaner, I'm
inclined to agree with Hedrick that it's not that much cleaner.  It truly
pains me to go for the conservative option, but I just don't think there's
enough to gain by changing.
-------

∂03-Mar-82  1753	Richard M. Stallman <RMS at MIT-AI>
Date: 3 March 1982 20:33-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

Hedrick is correct in saying that distinguishing () from NIL
does not make it possible to distinguish between "no property"
and a property whose value is false, with GET.  However, I think
his message seemed to imply a significance for this fact which it does
not have.

As long as we want GET to return the value of the property, unaltered
(as opposed to returning a list containing the object, for example),
and as long as we want any object at all to be allowed as a value
of a property, then it is impossible to find anything that GET
can return in order to indicate unambiguously that there is no property.

I don't think this is relevant to the question of NIL and ().
The reasons why I think it would be good to distinguish the two
have nothing to do with GET.

It is convenient that the empty list and false are the same.  I do not
think, even aside from compatibility, that these should have been
distinguished.  The reasons that apply to NIL vs () have no
analog for the empty list vs false.

∂04-Mar-82  1846	Earl A. Killian <EAK at MIT-MC> 	T and NIL   
Date: 4 March 1982 19:01-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  T and NIL
To: FAHLMAN at CMU-20C
cc: common-lisp at SU-AI

If you're taking a poll, I prefer 2 and then 3 on the NIL issue.
The T issue I can't get too excited about.  Whatever is decided
for T, perhaps the implementation that NIL uses should be
encouraged, if it allows experimentation with a separate data type
truth value simply by setting symbols T and *:TRUTH.

∂04-Mar-82  1846	Earl A. Killian <EAK at MIT-MC> 	Fahlman's new new sequence proposal, and an issue of policy   
Date: 4 March 1982 19:08-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Fahlman's new new sequence proposal, and an issue of policy
To: MOON at SCRC-TENEX
cc: common-lisp at SU-AI

    Date: Monday, 22 February 1982  02:50-EST
    From: MOON at SCRC-TENEX

    mumble-IF-NOT is equally as useful as mumble-IF, if you look at how they
    are used.  This is because the predicate argument is rarely a lambda, but
    is typically some pre-defined function, and most predicates do not come in
    complementary versions.  (Myself, I invariably write such things with
    LOOP, so I don't have a personal axe to grind.)

Another possibility is to define a function composition operator.
Then you'd do
	(mumble-IF ... (COMPOSE #'NOT #'SYMBOLP) ...)
instead of
	(mumble-IF ... (LAMBDA (X) (NOT (SYMBOLP X))) ...)
This is nicer because it avoids introducing the extra name X.
(Maybe the #'s wouldn't be needed?)

∂05-Mar-82  0101	Richard M. Stallman <RMS at MIT-AI> 	COMPOSE 
Date: 5 March 1982 02:27-EST
From: Richard M. Stallman <RMS at MIT-AI>
Subject: COMPOSE
To: common-lisp at SU-AI

COMPOSE can be defined as a lambda macro, I think.
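
For concreteness, a two-argument COMPOSE could also be written as an
ordinary function returning a closure (a sketch only, assuming lexical
closures are available):

  (DEFUN COMPOSE (F G)
    #'(LAMBDA (X) (FUNCALL F (FUNCALL G X))))

so that (FUNCALL (COMPOSE #'NOT #'SYMBOLP) X) computes (NOT (SYMBOLP X)).
A lambda-macro version could expand into the LAMBDA directly and avoid
consing a closure at run time.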

∂05-Mar-82  0902	Jon L White <JONL at MIT-MC> 	What are you missing?  and "patching"  ATOM and LISTP  
Date: 5 March 1982 12:01-EST
From: Jon L White <JONL at MIT-MC>
Subject: What are you missing?  and "patching"  ATOM and LISTP
To: HEDRICK at RUTGERS
cc: common-lisp at SU-AI

  Date:  2 Mar 1982 1658-EST
  From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
  Subject: I think I am missing something
  . . .  
The reasons for distinguishing NIL from () aren't related to the GET 
problem mentioned in your note;  RMS mentioned this too in his note 
of Mar 3.   In fact, since Common-Lisp will have multiple-values, the 
only sensible solution for GET (and others like it, such as the HASH-GET 
I have in a hashing package) is to return two values, the second of which 
tells whether or not the flag/attribute was found.
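
Such a two-valued GET might look like this (a sketch only; GET-WITH-FLAG is
a hypothetical name, and PLIST is assumed to fetch the property list):

  ;; Returns two values: the property value (or ()), and a flag that
  ;; is T exactly when the indicator was present.  A property whose
  ;; value happens to be () is thus distinguishable from no property.
  (DEFUN GET-WITH-FLAG (SYMBOL INDICATOR)
    (DO ((L (PLIST SYMBOL) (CDDR L)))
        ((NULL L) (VALUES () ()))
      (IF (EQ (CAR L) INDICATOR)
          (RETURN (VALUES (CADR L) T)))))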

A more important aspect is the potential uniformity of functions which
act on lists -- there needn't be a split of code, one way to handle
non-null lists, and the other way to handle null (e.g. CAR and CDR).
In fact, I think RMS's statement of the problem on Mar 1 is quite succinct, 
and bears repeating here:
    Date: 1 March 1982 23:43-EST
    From: Richard M. Stallman <RMS at MIT-AI>
    Subject: () and T.
    I believe that () should be distinguished from NIL because
    it is good if every data type is either all true or all false.
    I don't like having one symbol be false and others true.
    Another good result from distinguishing between () and NIL is
    that the empty list can be LISTP. . . . 

However, even though it would be reasonable for CONSP to return its argument
when "true", I don't believe there is any advantage in having predicates like 
ATOM and LISTP try to return some "buggered" value for null.  There has to 
be some kind of discontinuity for any predicate which attempts to return its 
argument when "true", but which is "true" for the "false" datum;  that 
discontinuity is as bad as CAR and CDR being applicable to one special symbol 
(namely NIL).  The limiting case in this line of reasoning is the predicate 
NOT -- how could it return its argument?  Patching ATOM and LISTP for the 
argument of "false" makes as much sense to me as patching NOT.


∂05-Mar-82  0910	Jon L White <JONL at MIT-MC> 	How useful will a liberated T and NIL be?    
Date: 5 March 1982 12:09-EST
From: Jon L White <JONL at MIT-MC>
Subject: How useful will a liberated T and NIL be?
To: Hedrick at RUTGERS
cc: common-lisp at SU-AI


The following point is somewhat subsidiary to your main point in the
note of Mar 2;  but it is an issue worth facing now, and one which I 
don't believe has hit the mails yet (although it has had some verbal
discussion here):
    Date:  2 Mar 1982 1658-EST
    From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
    Subject: I think I am missing something
    . . .   As far as I can see, the new symbol
    NIL is going to be useless, except that it will help old code such as
    (RETURN NIL) to work. 
As to the prospect that the symbol NIL (and the symbol T if Fahlman's
option 2 or 2A on "truthity" is taken) will become useless due to being
globally bound to null (and to #T for T): such binding is relevant 
only to old code.   New code is free to bind those symbols at will, so long 
as the new code doesn't try to call old code with **dynamic** rebindings of 
NIL and/or T.  I believe we will have local declarations in Common-Lisp, and 
a "correct" evaluator (vis-a-vis local variables), so code like
  (DEFUN FOO (PRED F T)
    (DECLARE (LOCAL F T))
    (COND (F (NULL PRED))
	  (T PRED)
	  (#T () )))
will be totally isolated from the effects of the global binding of T.


∂05-Mar-82  1129	MASINTER at PARC-MAXC 	NIL and T   
Date:  5 MAR 1982 1129-PST
From: MASINTER at PARC-MAXC
Subject: NIL and T
To:   Common-Lisp at SU-AI

Divergences in Common-Lisp from common practice in the major dialects
of Lisp in use today should be made for good reason.

The stronger the divergence, the better the reasons need to be.
The strength of the divergence can be measured by the amount of
impact a change can potentially have on an old program: 
 little or no impact (e.g., adding new functions)
 mechanically convertible (e.g., changing order of arguments)
 mechanically detectable (e.g., removing functions in favor of others)
 not mechanically detectable (e.g., changing the type of the empty list).


Good reasons can come under several categories: uniformity, 
ease of efficient implementation, usefulness of the feature,
and aesthetics.

Aesthetic arguments can be general ("I like it") or specific
("the following program is 'cleaner'").


I think that changing NIL and T requires very strong reasons.
Most of the arguments for the change have been in terms of
general aesthetics. I do not believe there are strong arguments
for this divergence: the number of situations in which programs
become clearer is vanishingly small, and not nearly enough to
justify this source of confusion to most anyone who has used
most any dialect of Lisp in the last ten years.

Larry

∂05-Mar-82  1308	Kim.fateman at Berkeley 	aesthetics, NIL and T    
Date: 5 Mar 1982 12:52:43-PST
From: Kim.fateman at Berkeley
To: common-lisp@su-ai
Subject: aesthetics, NIL and T

Although the discussion would not lead one to believe this, I suspect
that at least some of the motivation is based on implementation
strategy.  That is, if NIL is an atom, and can have a property list,
then it cannot (perhaps) be stored in "location 0" of read-only memory
(or whatever hack was used to make (cdr nil) = nil).
This kind of consideration (though maybe not exactly this), would eventually
come to the surface, and unless people face up to questions like how
much does it really cost in implementation inconvenience and run-time
efficiency, we are whistling in the dark .  I reject the argument that
has been advanced
that it costs nothing in some dialects,
unless other strategies for the same machine are compared.  In
some sense, you could say that "bignum arithmetic" costs nothing in
certain lisps  "because it is done all the time anyway"! Ditto for
some kinds of debugging info.

∂05-Mar-82  2045	George J. Carrette <GJC at MIT-MC> 	I won't die if (SYMBOLP (NOT 'FOO)) => T, but really now...
Date: 5 March 1982 23:45-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: I won't die if (SYMBOLP (NOT 'FOO)) => T, but really now...
To: MASINTER at PARC-MAXC
cc: COMMON-LISP at SU-AI

I have to admit that "divergences in Common-Lisp from common practice
in the major dialects in use today" doesn't concern me too much.  
Aren't there great differences among the lisps in fundamental areas, 
such as function calling? [E.G. The Interlisp feature such that user-defined
functions do not take well-defined numbers of arguments.]

The kind of thing that concerns me is the sapping away of productivity
caused by continuous changes in a given language, and having to
continuously deal with multiple changing languages when supporting
large programming systems in those lisp dialects. I know that given a
reasonable lisp language I can write the macrology that will make it
look pretty much the way I want it to look, and stability in the
language aids in this, because then I wouldn't have to spend a lot of
effort continuously maintaining the macrolibraries.

The aesthetic considerations are then very important. For example, the more
operator overloading which is built-in to a language, and the
more things in a language which have no logical reason to be in
it other than "history," the greater the difficulty of doing the
customization of the language. Considerations of sparseness, uniformity, 
and orthogonality are aesthetics, and are known to be important in
language design.

Also, what *is* the source of confusion for a person who has
programmed in lisp for ten years? Have you seen the change in
programming style which happened in MIT Lisps in the last three or four
years, let alone ten? Have you observed the difference between the
lisp appearing in Patrick Winston's book, versus what is
COMMON-PRACTICE in the LispMachine world? Have you seen what Gerry
Sussman has been teaching to six-hundred MIT undergraduates a year?
How could one possibly be worried then about "operator retraining
considerations" over such a trivial item as the empty list not being a
symbol? My gosh, haven't you heard that COMMON-LISP is going for
lexical-scoping in a big way? What about "operator retraining" for that?

-gjc

∂05-Mar-82  2312	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Lexical Scoping 
Date: 6 Mar 1982 0211-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: GJC at MIT-MC, common-lisp at SU-AI
Subject: Lexical Scoping
Message-ID: <820205021146FAHLMAN@CMU-20C>
Regarding: Message from George J. Carrette <GJC at MIT-MC>
              of 5-Mar-82 2345-EST

Before you all panic over GJC's comment and go running off on yet
another tangent, why don't we wait and see what Guy proposes on the
lexical-scoping issue.  I suspect that it won't be super-radical.

The debate among two or three people is interesting, but I would really
like to hear from anyone else out there who has a strong opinion on
the T/NIL issue.  Are there any Lisp Machine people besides RMS who
care about this?  Even if you are standing pat on the position you
stated before, it would be useful to get a confirmation of that.

-- Scott
   --------

∂06-Mar-82  1117	Alan Bawden <ALAN at MIT-MC> 	What I still think about T and NIL
Date: 6 March 1982 14:17-EST
From: Alan Bawden <ALAN at MIT-MC>
Subject: What I still think about T and NIL
To: common-lisp at SU-AI
cc: FAHLMAN at CMU-20C

    Date: 6 Mar 1982 0211-EST
    From: Scott E. Fahlman <FAHLMAN at CMU-20C>

    The debate among two or three people is interesting, but I would really
    like to hear from anyone else out there who has a strong opinion on
    the T/NIL issue.  Are there any Lisp Machine people besides RMS who
    care about this?  Even if you are standing pat on the position you
    stated before, it would be useful to get a confirmation of that.

I must have started to send a message about this T and NIL issue at least 5
times now, but each time I stop myself because I cannot imagine that it will
change anybody's mind about anything.  But since you ask, I still feel that

∂06-Mar-82  1141	Alan Bawden <ALAN at MIT-MC> 	What I still think about T and NIL
Date: 6 March 1982 14:41-EST
From: Alan Bawden <ALAN at MIT-MC>
Subject: What I still think about T and NIL
To: common-lisp at SU-AI
cc: FAHLMAN at CMU-20C

Sorry about the fragment I just sent to you all.  I tried to stop it, but
COMSAT is quicker than I am.

I must have started to send a message about this T and NIL issue at least 5
times now, but each time I stop myself because I cannot imagine that it will
change anybody's mind about anything.  (You might not have even gotten this one if I
hadn't accidentally sent a piece of it.)  But since you ask, I still feel that
the idea of changing the usage of T and NIL is a total waste of everybody's
time.  The current discussion seems unlikely to resolve anything and finding it
in my mailbox every day is just rubbing me in the wrong direction.  I don't see
where the morality and cleanliness of () even comes close to justifying its
incompatibility, and I seem to remember that Common Lisp was supposed to be
more about compatibility than morality.

∂06-Mar-82  1326	Howard I. Cannon <HIC at MIT-MC> 	T/NIL 
Date: 6 March 1982 16:26-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  T/NIL
To: FAHLMAN at CMU-20C
cc: GJC at MIT-MC, common-lisp at SU-AI

I am still violently against changing it from what we have now.

I don't remember the numbers, but NIL should be a symbol, and false,
and the empty list, and CAR/CDRable, and T should be canonical truth.

∂06-Mar-82  1351	Eric Benson <BENSON at UTAH-20> 	CAR of NIL  
Date:  6 Mar 1982 1446-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: CAR of NIL
To: Common-Lisp at SU-AI

I can understand why one would want to be able to take the CDR of NIL, but
why in the world should CAR of NIL be defined?  That seems like it's just
making sloppy programming safe.  Why is NIL more sensible for the CAR of NIL
than any other random value?  Please excuse the tirade, I was just getting used
to the idea of the CDR of NIL being NIL.
-------

∂06-Mar-82  1429	KIM.jkf@Berkeley (John Foderaro) 	t and nil  
Date: 6-Mar-82 14:15:22-PST (Sat)
From: KIM.jkf@Berkeley (John Foderaro)
Subject: t and nil
Via: KIM.BerkNet (V3.73 [1/5/82]); 6-Mar-82 14:15:22-PST (Sat)
To: fahlman@cmu-20c
Cc: common-lisp@su-ai

  I see no reason to change the current meanings of t and nil.  I consider
the fact that nil is the empty list and represents false to be one of the 
major features of the language and definitely not a bug.  I've read
over the many letters on the subject and I still don't understand
what the benefit of () and #t are?  I would like to see lots and lots
of concrete examples where using () and #t improve the code.  If the
proponents of the change can't provide such examples, then they are 
attempting to solve a non-problem.
  Aesthetically, I much prefer 'nil' to () [just as I prefer (a b c)
to (a . (b . (c . nil))) ]

  I hope that the  common-lisp committee goes back to the task
of describing a common subset of existing lisp dialects for the 
purpose of improving lisp software portability.  The lisp language
works, there is lots of software to prove it.  Please
leave lisp alone.


∂06-Mar-82  1911	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: CAR of NIL  
Date:  6 Mar 1982 2208-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: CAR of NIL
To: BENSON at UTAH-20
cc: Common-Lisp at SU-AI
In-Reply-To: Your message of 6-Mar-82 1646-EST

The usefulness of CAR and CDR of NIL may depend upon the dialect.  In
Interlisp and R/UCI Lisp it allows one to pretend that data structures,
including function calls, have components that they do not in fact have.
E.g. at least in R/UCI Lisp, optional arguments fall out automatically
from this convention.  Suppose FOO has two arguments, but you call (FOO
A).  When the interpreter or compiler attempts to find the second
argument, they get NIL, because (CADDR '(FOO A)) is NIL under the (CAR
NIL) = NIL, (CDR NIL) = NIL rule.  This has the effect of making NIL an
automatic default value.  In practice this works most of the time, and
avoids a fair amount of hair in implementing real default values and
optional args.  Similar things can be done with user data structures.
It seems fairly clear to me that if (CDR NIL) is to be NIL, (CAR NIL)
must be also, since typically what you really want is that (CADR NIL),
(CADDR NIL), etc., should be NIL. Whether all of this is as important in
MAClisp is less clear.  MAClisp allows explicit declaration of optional
arguments, and if they are not declared, then presumably we want to
treat missing args as errors.  Similarly, Common Lisp will have much
more flexible record structures than the old R/UCI Lisp did (though
Interlisp of course has similar features). It seems to me that if people
write programs using the modern structuring concepts available in Common
Lisp, CAR and CDR NIL will again not be necessary for user data
structures.  Thus as an attempt to find errors as soon as possible, one
might prefer it to be considered an error. It is my impression that CAR
and CDR NIL are being suggested to help compatibility with existing
implementations, and that *VERY* large amounts of code depend upon
it.  One would probably not do it if designing from scratch.

-------
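
[Hedrick's optional-argument mechanism can be sketched in a few lines.
FOO is a hypothetical two-argument function, and the behavior shown
holds only under the (CAR NIL) = NIL, (CDR NIL) = NIL rule he describes:]

```lisp
;; A call form supplying only the first of FOO's two arguments:
(SETQ FORM '(FOO A))

(CADR FORM)      ; => A    -- the first argument
(CDDR FORM)      ; => NIL  -- the argument list is exhausted here
(CADDR FORM)     ; => NIL  -- only because (CAR NIL) = NIL

;; The interpreter's argument-spreading loop thus needs no special
;; check for missing arguments: CADDR of the call simply yields NIL,
;; which becomes the automatic default value for FOO's second argument.
```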

∂06-Mar-82  2306	JMC  
Count me against (car nil) and (cdr nil).

∂06-Mar-82  2314	Eric Benson <BENSON at UTAH-20> 	Re: CAR of NIL   
Date:  7 Mar 1982 0010-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: CAR of NIL
To: HEDRICK at RUTGERS
cc: Common-Lisp at SU-AI
In-Reply-To: Your message of 6-Mar-82 2008-MST

Thanks.  I figured there was a semi-sensible-if-archaic explanation for it.
If the thing has to have a CAR as well as a CDR, I guess I'll change my
vote from NIL to ().  From an implementor's standpoint, it's not too tough
for the CDR of NIL to be NIL; just put the value cell of an ID record in
the same position as the CDR cell in a pair record.  It's rather slim
grounds for choosing the layout, but these things tend to be rather
arbitrary anyway.  If it has to have 2 fields dedicated to NIL, things get
hairier.  One could put the property list cell of an ID in the CAR
position, but then of course NIL's real property list has to go somewhere
else, and we need special code in property list accessing for NIL.  If it
has to be special-cased, there's probably a more intelligent way to do it.
I'd rather have a separate data type that looks like a pair, even if it
means losing one more precious tag.
-------

∂07-Mar-82  0923	Daniel L. Weinreb <dlw at MIT-AI> 	Re: CAR of NIL 
Date: 7 March 1982 12:17-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Re: CAR of NIL
To: BENSON at UTAH-20
cc: Common-Lisp at SU-AI

I'd like to point out that the justification you give for your vote is
purely in terms of estimated implementation difficulty.

∂07-Mar-82  1111	Eric Benson <BENSON at UTAH-20> 	Re: CAR of NIL   
Date:  7 Mar 1982 1209-MST
From: Eric Benson <BENSON at UTAH-20>
Subject: Re: CAR of NIL
To: dlw at MIT-AI
cc: Common-Lisp at SU-AI
In-Reply-To: Your message of 7-Mar-82 1017-MST

True enough.  Standard Lisp defines () as NIL, but its CAR and CDR are
illegal.  I don't see the conversion to () as a great effort, mainly just
a matter of finding cases of 'NIL.  Since I don't have an ideological axe to
grind, I see the issue as the cost of converting old code vs. the cost to
new implementations of overloading NIL.
-------

∂07-Mar-82  1609	FEINBERG at CMU-20C 	() vs NIL
Date: 7 March 1982  19:11-EST (Sunday)
From: FEINBERG at CMU-20C
To:   Common-Lisp at SU-AI
Subject: () vs NIL

Howdy!
	I am strongly in favor of proposal #2, () should be the
representation of the empty list and falsehood.  The symbol NIL would
be permanently bound to () for compatibility purposes.  Any reasonable
Maclisp code would still work fine wrt. this change.  Certainly people
converting Maclisp code have much more dramatic changes to deal with,
like forward slash turning into backward slash (/ => \).  Unless
someone can come up with some reasonable code which would break with
this change, I would claim that compatibility is not an issue here,
and so we should go with what seems to me as a better way to represent
the empty list and false.  Is there any reason why people are against
this, aside from inertia?

∂07-Mar-82  2121	Richard M. Stallman <RMS at MIT-AI>
Date: 8 March 1982 00:10-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

When Maclisp was changed to make the car and cdr of nil be nil,
it was partly to facilitate transporting Interlisp code,
but mostly because people thought it was an improvement.
I've found it saves me great numbers of explicit tests of nullness.
I don't think that any other improvements in data structure facilities
eliminate the usefulness of this.  I still appreciate it on the Lisp machine
despite the presence of defstruct and flavors.

∂08-Mar-82  0835	Jon L White <JONL at MIT-MC> 	Divergence
Date: 8 March 1982 11:29-EST
From: Jon L White <JONL at MIT-MC>
Subject: Divergence
To: Masinter at PARC-MAXC
cc: common-lisp at SU-AI


You raise an extremely important point;  the slow evolution of Lisp
which has taken place over the past 10 years has been mostly "conservative"
(i.e., upward-compatible including bugs and misfeatures).  The several
"radical" departures from basic Lisp failed to get wide acceptance for just
that reason -- e.g., XLISP at Rutgers and MDL here at MIT.

    Date:  5 MAR 1982 1129-PST
    From: MASINTER at PARC-MAXC
    Divergences in Common-Lisp from common practice in the major dialects
    of Lisp in use today should be made for good reason.
    The stronger the divergence, the better the reasons need to be.
    The strength of the divergence can be measured by the amount of
    impact a change can potentially have on an old program: 
     little or no impact (e.g., adding new functions)
     mechanically convertible (e.g., changing order of arguments)
     mechanically detectable (e.g., removing functions in favor of others)
     not mechanically detectable (e.g., changing the type of the empty list).
    . . . 

However, I'd like to remind the community that COMMON-LISP was never
intended to be merely the merger of all existing MacLISP-like dialects.
Our original goal was to define a stable subset which all these
implementations could support, and which would serve as a fairly complete
medium for writing transportable code.  Note the important items: 
	stability
	transportability  (both "stock" and special-purpose hardware)
	completeness      (for user, not necessarily for implementor)
	good new features
Each implementation has to "give a little" for this to be a cooperative 
venture; I certainly hope that no one group would be refractory to
another group's issues.

Previous notes from RMS and myself tried to make the case, as succinctly
as possible, for () vs NIL;  these arguments may be better appreciated 
by a relative newcomer to Lisp [and it is the future generations who will 
benefit from the "fixes" applied now].  I believe that many in the current
user/implementor community *** who have already adapted themselves to the 
various warts and wrinkles of Lisp *** have overestimated the cost of
the NIL/() change and underestimated the impact of the "warts" on future 
generations.  

My note titled "How useful will a liberated T and NIL be?"
attempts to show that only the worst malice-aforethought cases will
cause problems, despite the potential loophole for failure at
mechanical conversion.  As Benson put it, probably the only place
where the "compatibility" approach [i.e., setq'ing NIL to ()] may
fail is in instances of "'NIL", and similar constructs.

∂08-Mar-82  1904	<Guy.Steele at CMU-10A>  	There's a market out there...
Date:  8 March 1982 2203-EST (Monday)
From: <Guy.Steele at CMU-10A> 
To: bug-macsyma at MIT-MC, common-lisp at SU-AI
Subject:  There's a market out there...

From today's Pittsburgh Press:

  Dear Consumer Reports:  After 35 years, I'm studying algebra again.
Can you recommend a calculator out of the many available that would be
reasonably suited to solving algebra problems?
  I already have several calculators that are suited to basic arithmetic
but not much more.

  Dear Reader:  Nearly all calculators do arithmetic: You plug in the numbers
and you get an answer.
  Most of them will not do algebra.  They will not factor; they will not
solve equations.  Only some special programmable calculators have those
algebraic capabilities.

--Guy

∂10-Mar-82  2021	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Vectors and Arrays   
Date: 10 Mar 1982 2318-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: common-lisp at SU-AI
Subject: Vectors and Arrays
Message-ID: <820209231809FAHLMAN@CMU-20C>


There is yet another rather fundamental issue that we need to discuss:
how to handle vectors and arrays.  It is not clear to me what, if
anything, was decided at the November meeting.  There is a line in the
"Decisions" document indicating that henceforth vector is to be a
subtype of array, but this could mean any number of things, some of them
reasonable and some of them not.  Let me try briefly to spell out the
issues as I see them and to propose a possible solution.

First, let me explain the rationale for making vectors and arrays
distinct types in the Swiss Cheese edition.

In non-microcoded implementations (which Common Lisp MUST accommodate if
it is to be at all common), it is important to have a very simple vector
data type for quick access.  Features that are only used occasionally
and that make access more expensive should not be included in the
simplest kind of vector.  That means leaving out fill pointers and the
ability to expand vectors after they are allocated, since the latter
requires an extra level of indirection or some sort of forwarding
pointer.  These simple vectors are referenced with VREF and VSET, which
tip off the compiler that the vector is going to be a simple one.
Bit-vectors and strings must also be of this simple form, for the same
reason: they need to be accessed very efficiently.

Given a vector data type, it is very straightforward to build arrays
into the system.  An array is simply a record structure (built from a
vector of type T) that contains slots for the data vector (or string),
the number of indices, the range of each index, a fill pointer, perhaps
some header slots, and so on.  The actual data is in a second vector.
Arrays are inherently more expensive to reference (using AREF and ASET),
and it seems to me that this is where you want to put the frills.  The
extra level of indirection makes it possible to expand the array by
allocating a new data vector; the expanded array (meaning the header
vector) is EQ to the original.  A fill pointer adds negligible expense
here, and makes sense since the array is able to grow.  (To provide fill
pointers without growability is pretty ugly.)

So, the original proposal, as reflected in Swiss Cheese, was that
vectors and arrays be separate types, even if the array is 1-D.  The
difference is that arrays can be expanded and can have fill pointers and
headers, while vectors cannot.  Strings and bit-vectors would be
vectors; if you want the added hair, you create a 1-D array of bits or
of characters.  VREF only works on vectors and can therefore be
open-coded efficiently; there is no reason why AREF should not work on
both types, but the array operations that depend on the fancy features
will only work on arrays.

The problem is that the Lisp Machine people have already done this the
opposite way: they implement arrays as the fundamental entity, complete
with headers, fill pointers, displacement, and growability.  There is no
simpler or cheaper form of vector-like object available on their system.
(I think that this is a questionable decision, even for a microcoded
system, but this is not the forum in which to debate that.  The fact
remains that their view of arrays is woven all through Zetalisp
and they evidently do not want to change it.)

Now, if the Lisp Machine people really wanted to, they could easily
implement the simpler kind of vector using their arrays.  There would
simply be a header bit in certain 1-D arrays that marks these as vectors;
arrays so marked would not be growable and could not be given headers or
fill pointers.  Those things that we would have called vectors in the
original scheme would have this bit set, and true arrays would not.

I was not at the November meeting, but I gather that the Lisp Machine
folks rejected this suggestion -- why do extra work just to break
certain features that you are already paying for and that, in the case
of strings, are already being used in some places?  The position stated
by Moon was that there would be no distinction between vectors and any
other 1-D arrays in Zetalisp.  However, if we simply merge these types
throughout Common Lisp, the non-microcoded implementations are screwed.

Could everyone live with the following proposal?

1. Vector is a subtype of Array.  String and Bit-Vector are subtypes of
Vector.

2. AREF and ASET work for all arrays (including the subtypes listed
above).  The generic sequence operators work for all of the above, but
only for 1-D arrays.  (I believe that the proposal to treat multi-D
arrays as sequences was voted down.)

3. VREF and VSET work only for vectors, including Strings and
Bit-Vectors.

4. We need a new predicate (call it "EXTENSIBLEP"?).  If an array is
extensible, then one can grow it, give it a fill pointer, displace it,
etc.

5. In the Common Lisp spec, we say that vectors and their subtypes
(including strings) are not, in general, extensible.  The arrays
created by MAKE-ARRAY are extensible, at least as a default.  Thus, in
vanilla Common Lisp, users could choose between fast, simple vectors and
strings and the slower, extensible 1-D arrays.

6. Implementations (including Zetalisp) will be free to make vectors
extensible.  In such implementations, all arrays would be extensible and
there would be no difference between vectors and 1-D arrays.
Implementations that take this step would be upward-compatible supersets
of Common Lisp.  Code from vanilla implementations can be ported to
Zetalisp without change, and nothing will break; the converse is not
true, of course.  This is just one of the ways in which Zetalisp is a
superset, so we haven't really given anything up by allowing this
flexibility.

7. It would be nice if the superset implementations provided a
"compatibility mode" switch which would signal a (correctable) runtime
error if a vector is used in an extensible way.  One could turn this on
in order to debug code that is meant to be portable to all Common Lisp
implementations.  This, of course, is optional.
   --------
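
[As a rough sketch of the two-level scheme Fahlman describes: the slot
names, MY-AREF, and MY-GROW below are illustrative only, not proposed
names; VREF, VSET, MAKE-VECTOR, and VECTOR-LENGTH are the simple-vector
primitives under discussion.]

```lisp
;; An array as a record structure whose data lives in a second,
;; simple vector.  Growing the array replaces the data vector;
;; the header -- and hence the array object itself -- stays EQ.
(DEFSTRUCT ARRAY-HEADER
  DATA-VECTOR        ; simple vector (or string) holding the elements
  RANK               ; number of indices
  DIMS               ; range of each index
  FILL-POINTER)      ; active length

(DEFUN MY-AREF (A I)                     ; one extra indirection vs. VREF
  (VREF (ARRAY-HEADER-DATA-VECTOR A) I))

(DEFUN MY-GROW (A NEW-SIZE)
  (LET ((OLD (ARRAY-HEADER-DATA-VECTOR A))
        (NEW (MAKE-VECTOR NEW-SIZE)))
    (DOTIMES (I (VECTOR-LENGTH OLD))
      (VSET NEW I (VREF OLD I)))
    (SETF (ARRAY-HEADER-DATA-VECTOR A) NEW)
    A))                                  ; same object, larger data vector
```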

∂10-Mar-82  2129	Griss at UTAH-20 (Martin.Griss) 	Re: Vectors and Arrays
Date: 10 Mar 1982 2225-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Re: Vectors and Arrays
To: FAHLMAN at CMU-20C
cc: Griss
In-Reply-To: Your message of 10-Mar-82 2118-MST
Remailed-date: 10 Mar 1982 2227-MST
Remailed-from: Griss at UTAH-20 (Martin.Griss)
Remailed-to: common-lisp at SU-AI

Seems a pity not to have the VECTORS as the basic, "efficient" type, and build
arrays as proposed; the other model of "compatibility" just to avoid change to ZetaLISP
seems the wrong approach; once again it adds and "institutionalizes" the
large variety of alternatives that tend to make the task of defining and
implementing a simple kernel more difficult.
-------

∂10-Mar-82  2350	MOON at SCRC-TENEX 	Vectors and Arrays--briefly   
Date: Thursday, 11 March 1982  02:38-EST
From: MOON at SCRC-TENEX
To:   Scott E. Fahlman <FAHLMAN at CMU-20C>
Cc:   common-lisp at SU-AI
Subject: Vectors and Arrays--briefly

In the Lisp machine (both of them), those arrays that have as few
features as vectors do are implemented as efficiently as vectors
could be.  Thus there would be no advantage to adding vectors, and
all the usual disadvantages of having more than one way of doing
something.  The important difference between Lisp computers and
Fortran computers is that on the former it costs nothing for ASET
to check at run time whether it is accessing a simple array or
a complex one, while on the latter this decision must be made at
compile time.  Hence vectors.  Since vectors add nothing to the
language on our machine, we would prefer to keep whatever is put in
for them as unobtrusive as possible in order to avoid confusing our
users with unnecessary multiple ways of doing the same thing.  Of
course, we are willing to put in functions to allow portability
to and from implementations that can't get along without vectors.

A second issue is that there are very few programs that use strings
for anything more than you can do with them in Pascal (i.e. print
them out) that would be portable to implementations that did not
permit strings with fill-pointers.  The important point here is that
it needs to be possible to create an object with a fill-pointer on
which the string-specific functions can operate.  This could be
done either by making those functions accept arrays or by making
vectors have fill-pointers.  This was discussed at the November
meeting; if my memory is operating correctly the people with
non-microcoded implementations (the only ones who care) opted for
making vectors have fill-pointers on the theory that it would be
more efficient than the alternative.  I believe it is the case that
it is really the string-specific functions we are talking about here,
not just the generic sequence functions.

To address the proposal: 1 and 2 are okay.  It is inconvenient to
enforce 3 in compiled code on the Lisp machine, since we would have
to add new instructions solely for this purpose.  It's unclear what
4 means (but presumably if it was clarified there would be no problem
in implementing it, other than the possibility that vectors might
become less efficient than arrays on the Lisp machine because of the
need to find a place in the array representation to remember that
they are vectors).  5 is okay except that the portable subset really
needs strings with fill-pointers (extensibility is also desirable,
but very much less important).  6 and 7 are okay (but see 3).

To me the important issue is that to the user of a Lisp machine,
vectors and VREF are not something he has to worry about except
under the heading of writing portable code.

∂11-Mar-82  1829	Richard M. Stallman <RMS at MIT-AI>
Date: 11 March 1982 20:13-EST
From: Richard M. Stallman <RMS at MIT-AI>
To: common-lisp at SU-AI

The distinction between vectors and arrays is only a compromise
for the sake of old-fashioned architectures.  It is much less
clean than having only one kind of object.  It is ok for the
Lisp machine to accommodate to this compromise by defining
a few extra function names, which will be synonyms of existing
functions on the Lisp machine, but would be different from those
existing functions in other implementations.  But it would be
bad to implement any actual distinction between "vectors"
and "arrays".

∂12-Mar-82  0825	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Re: Vectors and Arrays    
Date: 12 Mar 1982 1117-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: DLW at MIT-AI
cc: common-lisp at SU-AI
Subject: Re: Vectors and Arrays
Message-ID: <820211111735FAHLMAN@CMU-20C>
Regarding: Message from Daniel L. Weinreb <DLW at MIT-AI>
              of 11-Mar-82 1216-EST

This is basically a re-send of a message I sent yesterday, attempting to
clarify the Vector/Array proposal.  Either I or the mailer seems to
have messed up, so I'm trying again.  If you did get the earlier
message, I apologize for the redundancy.

To clarify the proposal: The Common Lisp spec would require that VREF
and VSET work for all vectors.  If they also work for other kinds of
arrays in Zetalisp (i.e. they just translate into AREF and ASET), that
would be OK -- another way in which Zetalisp is a superset.  As with the
business about extensibility, it would be nice to have a compatibility
mode in which VREF would complain about non-vector args, but this is not
essential.  Note also that Zetalisp users could continue to write all
their code using AREF/ASET instead of VREF/VSET; if they port this code
to a "Fortran machine" it would still work, but would not be optimally
fast.

The whole aim of the proposal is to allow Zetalisp to continue to build
arrays their way, while not imposing inefficiency on non-microcoded
implementations.  So we would definitely provide accessing and modifying
primitives for getting at fill-pointers and the like.  Legal Common Lisp
code would not get at such things by looking in slot 27 of the array
header vector, or whatever.

I would not be violently opposed to requiring all vectors (including
strings) to have a fill pointer.  This would cost one extra word per
vector, but the total overhead would be small.  It would not really cost
extra time per access, since we would just bounds-check against the
fill-pointer instead of the allocated length.  If a compiler wants to
provide (as an option, not the default) a maximum-speed vector access
without bounds checking, it could still do so, and would run roughly the
same set of risks.  (Probably this is unwise in any event.)  So the cost
of fill pointers is really not so bad.  The reason we left them out was
because it seemed that providing a fill-pointer in a non-growable vector
was not a useful or clean thing to do.  And allowing vectors to grow
really is a significant added expense without forwarding pointers in the
hardware.

Do the Zetalisp folks really want fill pointers in non-growable strings,
or would it be better to go with mostly simple strings, with character
arrays around for when you want an elastic editor buffer or something?

-- Scott
   --------
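
[The bounds-check argument above can be made concrete.  CHECKED-VREF is
an illustrative name; VECTOR-ACTIVE-LENGTH is taken as the reader for
the fill-pointer bound:]

```lisp
;; Checking against the fill pointer instead of the allocated length
;; costs the same single comparison, so carrying a fill pointer in
;; every vector adds no time per access.
(DEFUN CHECKED-VREF (V I)
  (COND ((AND (NOT (MINUSP I))
              (< I (VECTOR-ACTIVE-LENGTH V)))   ; bound = fill pointer
         (VREF V I))
        (T (ERROR "VREF index out of bounds"))))
```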

∂12-Mar-82  1035	MOON at SCRC-TENEX 	Re: Vectors and Arrays   
Date: Friday, 12 March 1982  13:11-EST
From: MOON at SCRC-TENEX
To:   Scott E. Fahlman <FAHLMAN at CMU-20C>
Cc:   common-lisp at SU-AI
Subject: Re: Vectors and Arrays

Yes, we want fill-pointers in non-growable strings.  I think I said this
in my message anyway.  Actually it only takes about 15 seconds to figure
out how to have two kinds of vectors, one with fill pointers and one
without, while still being able to open-code VREF, VSET, and
VECTOR-ACTIVE-LENGTH in one instruction (VECTOR-LENGTH, on the other hand,
would have to check which kind of vector it was given).  So the extra
storage is not an issue in any case.

∂14-Mar-82  1152	Symbolics Technical Staff 	The T and NIL issues   
Date: Sunday, 14 March 1982  14:40-EST
From: Symbolics Technical Staff
Reply-to: Moon@SCRC-TENEX@MIT-MC
To:   Common-Lisp at SU-AI
Subject: The T and NIL issues

I'm sorry this message has been so long delayed; my time has been
completely occupied with other projects recently.

We have had some internal discussions about the T and NIL issues.  If we
were designing a completely new language, we would certainly rethink these,
as well as the many other warts (or beauty marks) in Lisp.  (We might not
necessarily change them, but we would certainly rethink them.)  However,
the advantages to be gained by changing T and NIL now are quite small
compared to the costs of conversion.  The only resolution to these issues
that Symbolics can accept is to retain the status quo.

To summarize the status quo:  NIL is a symbol, the empty list, and the
distinguished "false" value.  SYMBOLP, ATOM, and LISTP are true of it;
CONSP is not.  CAR, CDR, and EVAL of NIL are NIL.  NIL may not be used
as a function nor as a variable.  NIL has a property list.  T is a symbol
and the default "true" value used by predicates that are not semi-predicates
(i.e. that don't return "meaningful" values when they are true.)  EVAL of
T is T.  T may not be used as a variable.  T is a keyword recognized by
certain functions, such as FORMAT.
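
[At top level, the status quo summarized above reads as follows, each
form shown with the value the summary implies:]

```lisp
(SYMBOLP NIL)   ; => T
(ATOM NIL)      ; => T
(LISTP NIL)     ; => T
(CONSP NIL)     ; => NIL
(CAR NIL)       ; => NIL
(CDR NIL)       ; => NIL
(EVAL NIL)      ; => NIL
(EVAL T)        ; => T
(PLIST NIL)     ; legal -- NIL has a property list
```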

The behavior of LISTP is a change to the status quo which we agreed to long
ago, and would have implemented long ago if we weren't waiting for Common
Lisp before making any incompatible changes.  The status quo is that NIL
has a property list; however, this point is probably open to negotiation if
anyone feels strongly that the property-list functions should error when
given NIL.

The use of T as a syntactic keyword in CASEQ and SELECTQ should not be
carried over into their Common Lisp replacement, CASE.  It is based on a
misunderstanding of the convention about T in COND and certainly adds
nothing to the understandability of the language.

T and NIL are just like the hundreds of other reserved words in Lisp,
dozens of which are reserved as variables, most of the rest as functions.
Any particular program that wants to use these names for ordinary symbols
rather than the special reserved ones can easily do so through the use of
packages.  There should be a package option in the portable package system
by which the reserved NIL can be made to print as "()" rather than
"GLOBAL:NIL" when desired.

∂14-Mar-82  1334	Earl A. Killian <EAK at MIT-MC> 	The T and NIL issues  
Date: 14 March 1982 16:34-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  The T and NIL issues
To: Moon at SCRC-TENEX
cc: Common-Lisp at SU-AI

There is certainly one advantage to having () not be a symbol for
Common Lisp (though not for the Lisp Machine), and that's
implementation and efficiency.  The last time this came up, DLW
pointed out that having the CAR and CDR of a symbol be () was
only an implementation detail, as if that made it unimportant.
Now I understand that many Common Lisp decisions have given
implementation a back seat to aesthetics, but here's a case where
most people (except HIC) think the aesthetics call for the change
(the usual argument against the change is compatibility, not
aesthetics -- you even said that in a completely new language, you
would rethink them).

You said "The only resolution to these issues that Symbolics can
accept is to retain the status quo", but you didn't say why.
Why?  If compatibility is the only reason, then why isn't the
reader hack of NIL => () acceptable?  I just don't believe many
programs depend on (SYMBOLP NIL).

What if others don't want to kludge up their implementation, and
so the only thing they can accept is a change in the status quo?

∂14-Mar-82  1816	Daniel L. Weinreb <dlw at MIT-AI> 	Re: Vectors and Arrays   
Date: Sunday, 14 March 1982, 18:27-EST
From: Daniel L. Weinreb <dlw at MIT-AI>
Subject: Re: Vectors and Arrays
To: FAHLMAN at CMU-20C
Cc: common-lisp at SU-AI

What I especially wanted to see clarified was this: you said that arrays
could be thought of as being implemented as a vector, one of whose
elements is another, internal vector, that holds the real values of the
elements.  Are you proposing that there be a primitive to access this
internal vector?  Such a primitive might be hard to implement if arrays
are not really implemented the way you said.  (I'm not saying we can't
do it; I don't know for sure whether it's very hard or not.  I just
wanted to know what you were proposing.)

∂14-Mar-82  1831	Jon L White <JONL at MIT-MC> 	The T and NIL issues (and etc.)    
Date: 14 March 1982 21:31-EST
From: Jon L White <JONL at MIT-MC>
Subject:  The T and NIL issues (and etc.)
To: moon at SCRC-TENEX
cc: common-lisp at SU-AI


The msg of the following dateline certainly describes well the status
quo in MacLISP (both PDP10 and LISPM), as well as pointing out that
T is special-cased in CASE clauses.  
    Date: Sunday, 14 March 1982  14:40-EST
    From: Symbolics Technical Staff
    Reply-to: Moon@SCRC-TENEX@MIT-MC
    To:   Common-Lisp at SU-AI
    Subject: The T and NIL issues
    . . . 
But as EAK says, there  is no reasoning given, beyond the authors' 
personal preference, for retaining the "wart" of NIL = ().

One comment from that msg deserves special attention:
    T and NIL are just like the hundreds of other reserved words in Lisp,
    dozens of which are reserved as variables, most of the rest as functions.
    . . . 
Why should even dozens of user-visible variables be reserved?  This is one 
of the strongest complaints against LISP heard around some MIT quarters --
that it has become too hairy, and the presence of the LISPManual doesn't
help any.  And again, even if there be many "reserved" names for functions, 
the separability of function-cell/value-cell makes this irrelevant to the 
T/NIL issue.  

Perhaps the package system could "hide" more of the systemic 
function/variables, but why should it come up now?  The notion of 
lexically-scoped variables, as mentioned in my note
    Date: 5 March 1982 12:09-EST
    From: Jon L White <JONL at MIT-MC>
    Subject: How useful will a liberated T and NIL be?
    To: Hedrick at RUTGERS
indicates that the variable T (and indeed NIL too) can be fully useful, 
even if its global value serves in its present "status quo" capacity.  
E.g., in
  (DEFUN FOO (PRED F T)
    (DECLARE (LOCAL F T))
    (COND (F (NULL PRED))
	  (T PRED)
	  (#T () )))
the local declaration will totally isolate "T" from the effects of any
global binding.

∂14-Mar-82  1947	George J. Carrette <GJC at MIT-MC> 	T and NIL
Date: 14 March 1982 22:48-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: T and NIL
To: EAK at MIT-MC
cc: common-lisp at SU-AI

Efficiency really isn't an issue here because it is very easy to get
CAR and CDR of a symbol NIL to be NIL. Take VAX-NIL, for instance:
symbols have two value-cells, so it's easy to make CAR access one of
the cells and CDR the other. One could even arrange to have the symbol
structure reside across a page boundary, so the CAR/CDR cells would be
on a read-only-page, and the function cells, PLIST, and PNAME would be
on a read-write-page. There would be an average of one instruction more
executed in the error-checking version of CAR and CDR. For the benefit
of other lisps I would recommend that the function cell be pure too, though.

However, it is interesting that the overloading *was* relatively costly
in terms of codesize for various open-coded primitives in Maclisp.
It doubled the number of instructions for TYPEP, tripled it for PLIST,
and added 50% for SYMBOLP. Of course there was a time not very long ago,
(see the "Interim Lisp Manual" AI MEMO by JONL) when the 18 bit address
space of the pdp-10 was said to be more than anyone could want.


∂14-Mar-82  2046	Jon L White <JONL at MIT-MC> 	Why Vectors? and taking a cue from SYSLISP   
Date: 14 March 1982 23:08-EST
From: Jon L White <JONL at MIT-MC>
Subject:  Why Vectors? and taking a cue from SYSLISP
To: fahlman at CMU-10A
cc: COMMON-LISP at SU-AI


This note is probably the only comment I'll make during this round
(via mail) on the "Vectors and Arrays" issue.  There is so much in 
the mails on it now that I think we should have had more face-to-face 
discussions, preferably by a small representative group which could
present its recommendations.

Originally, the NIL proposal wanted a distinction between
ARRAYs, with potentially hairy access methods, and simple
linear index-accessed data, which we broke down into three 
prominent cases of VECTORs of "Q"s, STRINGs of characters, 
and BITStrings of random data.  The function names VREF,
CHAR, and BIT/NIBBLE are merely access methods, for there is 
a certain amount of "mediation" that has to be done upon 
access of a sequence with packed elements.  Admittedly, this 
distinction is non-productive when micro-code can select the 
right access method at runtime (based on some internal structure 
of the datum), but it is critical for efficient open-compilation
on stock hardware.  So much for history and rationale.

Some of the discussion on the point now seems to be centered
around just exactly how these data structures will be implemented,
and what consequences that might have for the micro-coded case.
E.g., do we need two kinds of VECTORs?  I don't think so, but in 
order to implement vectors to have the "growability" property it may 
be better to drop the data format of the existing NIL implementations
(where the length count is stored in the word preceding the data).
For instance, if vectors (all kinds: Q, character, and bit) are 
implemented as a fixed word header with a count/active component and 
an address component then the following advantages(+)/disadvantages(-) 
can be seen:
  1+) Normal user code has type-safe operations on structured data
      (at least in interpreter and close-compiled code)
  2+) "system" type code can just extract the address part, and
      deal with the data words almost as if the code were written 
      in a machine-independent systems language (like "C"?).  I think
      the SYSLISP approach of the UTAH people may be somewhat like this.
  3-) Access to an individual element, by the normal user-level functions,
      is slower by one memory reference;  but this may be of lesser
      importance if most "heavy" usage is through system functions like
      STRING-REVERSE-SEARCH.  There is also room for optimization
      by clever compilers to bypass most of the "extra" time.
  4-) use of "addresses", rather than typed data is a loophole in
      the memory integrity of the system;  but who needs to protect
      the system programmer from himself anyway.
  5+) hardware forwarding pointers wouldn't be necessary to make
      growability and sharability work -- they work by updating the
      address and length components of the vector header;  true, there 
      would not be full compatibility with forwarding-pointer 
      implementations (installing a new "address" part loses some
      updates that wouldn't be lost under forwarding pointers), but
      at least NSUBSTRING and many others could work properly.
  6-) without micro-code, it would probably be a loss to permit random
      addresses (read, locatives) into the middle of vectors; thus
      sharability would probably require a little extra work somewhere so 
      that the GC wouldn't get fouled up.  Shared data may need to be
      identified.  This can be worked out.
  7+) even "bibop" implementations with generally-non-relocating GC can 
      implement these kinds of vectors (that is, with "headers") without 
      much trouble.
  8+) it will be easier to deal with chunks of memory allocated by a
      host (non-Lisp) operating system this way;  e.g. a page, whereby
      any "header" for the page need not appear at any fixed offset
      from the beginning of the page.
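The header-word representation described above might be modeled in Lisp itself as follows. This is only a toy sketch: VEC-HEADER and VEC-GROW are illustrative names, and a real implementation would hold a raw address and word count rather than a Lisp array.

```lisp
;; A toy model of the fixed-header vector: one header object holding a
;; count/active component and an "address" component (here, a Lisp array
;; standing in for a raw data block).
(defstruct (vec-header (:constructor make-vec-header (count address)))
  count      ; number of active elements
  address)   ; the data block itself

;; Growing updates the header in place, so every reference through the
;; header sees the new data -- the effect of point 5+ above, with no
;; hardware forwarding pointers needed.
(defun vec-grow (v new-size)
  (let ((new (make-array new-size)))
    (dotimes (i (min (vec-header-count v) new-size))
      (setf (aref new i) (aref (vec-header-address v) i)))
    (setf (vec-header-address v) new
          (vec-header-count v) new-size)
    v))
```

Under this model, user-level access goes through the header (point 1+), while "system" code may extract the address component and work on the data block directly (point 2+).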

As far as I can see, retaining the NIL idea of having header information 
stored at a fixed offset from the first address primarily alleviates point 3 
above.  It also permits two kinds of vectors (one with and one without 
header information) to be implemented so that the same open-coded accessing 
sequence will work for both.  I think we may be going down a wrong track by 
adhering to this design, which is leading us to two kinds of vectors.   
The SYSLISP approach, with possibly additional "system" function names
for the various access methods, should be an attractive alternative.
[DLW's note of
    Date: Sunday, 14 March 1982, 18:27-EST
    From: Daniel L. Weinreb <dlw at MIT-AI>
    Subject: Re: Vectors and Arrays
    To: FAHLMAN at CMU-20C
seems to indicate his confusion about the state of this question -- it
does need to be cleared up.]

∂14-Mar-82  2141	Scott E. Fahlman <FAHLMAN at CMU-20C> 	Re: Vectors and Arrays    
Date: 15 Mar 1982 0037-EST
From: Scott E. Fahlman <FAHLMAN at CMU-20C>
To: dlw at MIT-AI
cc: common-lisp at SU-AI
Subject: Re: Vectors and Arrays
Message-ID: <820214003722FAHLMAN@CMU-20C>
Regarding: Message from Daniel L. Weinreb <dlw at MIT-AI>
              of 14-Mar-82 1842-EST

No, I am not proposing that there be primitives to access the data vector
of an array in user-level Common Lisp.  An implementation might, of course,
provide this as a non-portable hack for use in writing system-level stuff.
At the user level, the only way to get at the data in an array is through
AREF.

-- Scott
   --------

∂17-Mar-82  1846	Kim.fateman at Berkeley 	arithmetic
Date: 17 Mar 1982 17:00:32-PST
From: Kim.fateman at Berkeley
To: steele@cmu-10
Subject: arithmetic
Cc: common-lisp@su-ai

Major argument against providing log(-1) = #c(0 3.14...):
(etc)

It violates log(a*b) = log(a)+log(b), which most
people expect to hold on the real numbers.  You may argue that
by asking for log of a negative number the user was asking for it,
yet it is more likely than not that this came up through a programming
error, or perhaps roundoff error.  The option of computing
log(-1+0*i) (or perhaps clog(-1)) is naturally open.

I strongly suggest that rational arithmetic 
be both canonical (2/4 converted to 1/2) and REQUIRED to support 1/0, -1/0, and 0/0.
Given that the gcd(x,0) is x, there is
almost no checking needed for these peculiar numbers, representing
+inf, -inf, and undefined.  Rules like 1/inf -> 0 fall through free.

The only "extra" check is that if the denominator of a sum turns
out to be 0, one has to figure out if the answer is 1/0, -1/0, or 0/0.
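The gcd-based canonicalization Fateman describes can be sketched as follows. This is an illustration only (MAKE-RAT is a hypothetical name, and conses stand in for a real rational representation): dividing through by the gcd normalizes 2/4 to 1/2, and since (gcd x 0) = x, the special numbers 1/0, -1/0, and 0/0 fall out with no extra checking.

```lisp
;; Hypothetical sketch: canonicalize a rational NUM/DEN.
;; (gcd x 0) = x, so 1/0, -1/0, and 0/0 need no special cases.
(defun make-rat (num den)
  (let ((g (gcd num den)))
    (if (zerop g)
        (cons 0 0)                       ; 0/0: undefined
        (let ((n (/ num g)) (d (/ den g)))
          (if (minusp d)                 ; keep the sign in the numerator
              (cons (- n) (- d))
              (cons n d))))))
```

E.g. (make-rat 2 4) canonicalizes to 1/2, while (make-rat 1 0) and (make-rat 0 0) pass straight through as +inf and undefined.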

Similar ideas for +-inf and und hold for IEEE-format numbers.

I have a set of programs which implement (in Franz) a common-lisp-like
i/o scheme, rational numbers, DEC-D flonums, integers, arbitrary-precision
floating point (macsyma "bigfloat"),
and  complex numbers (of any mixture of these, eg.  #c(3.0 1/2)). 
In the works is an interval arithmetic package, and a trap-handler.
There is also a compiler package in the works so that (+ ....) is
compiled with appropriate efficiency in the context of
appropriate declarations. 

I would be glad to share these programs with anyone who cares to
look at the stuff.

The important transcendental functions are implemented for real
arguments of flonum and bigfloat. 

Q: What did you have in mind for, for example, sqrt(rational)?
(what is the "required coercion"?)

∂18-Mar-82  0936	Don Morrison <Morrison at UTAH-20> 	Re: arithmetic
Date: 18 Mar 1982 1035-MST
From: Don Morrison <Morrison at UTAH-20>
Subject: Re: arithmetic
To: Kim.fateman at UCB-C70
cc: common-lisp at SU-AI
In-Reply-To: Your message of 17-Mar-82 1800-MST

Would it not make more sense to have 1/0, -1/0, and 0/0 print as something
which says infinity, -infinity, and undefined (e.g. #INF, #-INF, #UNDEF (I
know these aren't good choices, but you get the idea)).  There is still
nothing to prevent the implementer from representing them internally as
1/0,-1/0, and 0/0 and having everything fall through nicely; readers and
printers just have to be a little cleverer.
-------

∂18-Mar-82  1137	MOON at SCRC-TENEX 	complex log    
Date: Thursday, 18 March 1982  14:23-EST
From: MOON at SCRC-TENEX
to:   common-lisp at su-ai
Subject: complex log

On issue 81 the November meeting voted for D.  I think the people at the
meeting didn't really understand the issues, and Fateman's message of
yesterday reinforces my belief that C is the only satisfactory choice.
This implies that complex numbers with 0 imaginary part don't normalize
to real numbers.  This is probably a good idea anyway, since complex
numbers are (usually) flonums, so zero isn't well-defined.  We don't
normalize flonums with 0 fraction part to integers.

∂18-Mar-82  1432	CSVAX.fateman at Berkeley 	INF vs 1/0   
Date: 18 Mar 1982 14:05:30-PST
From: CSVAX.fateman at Berkeley
To: Morrison@UTAH-20
Subject: INF vs 1/0
Cc: common-lisp@su-ai



Basically, reading and writing these guys either way is no big deal.
There are representations of infinity in several floating point formats
(IEEE single, double, extended), which are printed as #[s INF] etc.
in the simple read/print package I have.  #[r INF] would be consistent,
though eliminating some of the syntax  (the CL manual does not have the
[] stuff) may make numeric type info hard to determine.  I do not like
to use unbounded lookahead scanners.  (Think about reading an atom which
looks like a 2000 digit bignum, but then turns out to be something else
on the 2001st character).


Undefined numeric objects ("Not A Number") in the IEEE stuff are much
stickier.  Presumably there is some information encoded in the number
that should be presented (e.g. how the object was produced.)

∂24-Mar-82  2102	Guy.Steele at CMU-10A 	T and NIL   
Date: 24 March 1982 2357-EST (Wednesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  T and NIL


As nearly as I can tell, the arguments about changing NIL to () may be
divided into these categories (I realize that I may have omitted some
arguments--please don't deluge me with repetitions of things I have left
out here):

Aesthetics.
   Pro: NIL is ugly, especially as the empty list.
   Con: () is ugly, especially as logical falseness.

Convenience.
   Pro: Predicates such as SYMBOLP can usefully return the argument.
   Con:	If you change it then the empty list and false don't have property
	lists.

Compatibility.
   Con: Old code may be incompatible and may not be mechanically convertible.
	There is a large investment in old code.
		[I cannot resist noting here that the usual cycle of life
		continues: the radicals of 1975 are today's conservatives.]
   Pro: A small amount of anecdotal evidence indicates that old code that
	actually does depend on the empty list or falseness being a symbol
	has a bug lurking in it.

Inertia.
   Con: LISP has always used NIL, and people are used to it.
   Pro: It isn't difficult to get used to ().  NIL is not the only project
	to have tried it;
	the Spice LISP project has used () for over a year and has found
	it quite comfortable.
   Con: Nevertheless, many people remain unconvinced of this, and this
	may serve as a significant barrier to getting people to try
	Common Lisp.

Implementation.
   Pro: In non-microcoded implementations, it is difficult to make
	CAR and CDR, SYMBOLP, and symbol-manipulating functions all
	be as efficient in compiled code as they might be if NIL and ()
	were distinct objects.

Ad hominem.
   [I will not dignify these arguments by repeating them here.]

Different people weigh these categories differently in importance.
I happen to lay great weight on aesthetics (the Pro side), convenience,
and implementation, and much less on compatibility and inertia.

Someone has also pointed out that the argument from implementation would
disappear if CAR and CDR of NIL were no longer permitted.  This strikes
me as quite perceptive and reasonable.  However, I am quite certain that
hundreds of *correct* programs now depend on this, as opposed to the
programs (whose very existence is still doubtful to me) that, correctly
or otherwise, depend on () being the symbol NIL.

Therefore I remain convinced that making the empty list *not* be a symbol
is technically and aesthetically the better choice.


HOWEVER, the primary purpose of Common LISP is not to be maximally
elegant, nor to be technically perfect, nor yet to be implementable with
maximal ease, although these are laudable aims and are important
secondary goals of Common LISP.

	The primary goal of Common LISP is to be Common.

If so trivial and stupid an issue as () versus NIL will defeat efforts to
achieve this primary goal; and, which is more important, if inertia and
unfamiliarity might prevent new implementors from adopting Common LISP;
then I must yield.  I speak for myself, the Spice LISP project, and the
new DEC-sponsored VAX Common LISP project: we will all yield on this issue
and endorse the traditional role of NIL as symbol, falseness, and empty
lists, for the sake of preserving the commonality of Common LISP.

Similar remarks apply to T and #T; for the sake of commonality, #T ought
not be a part of Common LISP (but neither should Common LISP usurp it).

This issue must be settled soon; many outside people think that because
we haven't settled this apparently fundamental matter therefore Common
LISP is nowhere close to convergence.  Moreover, *any* decision is better
than trying to straddle the fence.

In any event, something has to go into the next draft of the manual,
pending what I hope will be a final resolution of this issue at the next
face-to-face meeting.  Since every major project (with the possible
exception of Vax NIL?) is now willing to go along with the use of the
symbol NIL as the false value and empty-list and with the use of the
symbol T as the standard truth value, this seems to be the only
reasonable choice.

--Guy

∂29-Mar-82  1037	Guy.Steele at CMU-10A 	NIL and ()  
Date: 29 March 1982 1307-EST (Monday)
From: Guy.Steele at CMU-10A
To: McDermott at Yale
Subject:  NIL and ()
CC: COMMON-LISP at SU-AI

    Date:    29-Mar-82 0923-EST
    From:    Drew McDermott <Mcdermott at YALE>
    I agree with everything you said in your message to Rees (forwarded
    to me), especially the judgement that CARing and CDRing the empty
    list is more important than whether NIL is identical to it.  What
    I am wondering is how the voting has gone?  How heavy is the majority
    in favor of the old way of doing things?  Who are they?  It seems 
    a shame for them to be able to exploit the willingness to yield of
    those on the correct side of this issue.
    -------

Drew,
   First it must of course be admitted that "correctness" is here at least
partly a matter of judgement.  Given that, I can report on what I believe
to be the latest sentiments of various groups.  In favor of the
traditional ()=NIL are the LISP Machine community (except for RMS),
the Standard LISP folks at Utah, and the Rutgers crowd.  In favor of ()
and NIL being separate (but willing to yield) are Spice LISP at CMU, S-1 NIL,
and DEC's Common LISP project.  The VAX NIL project is in favor of
separating () and NIL, but I don't know whether they are willing to compromise,
as I have not yet heard from them.
--Guy

∂30-Mar-82  0109	George J. Carrette <GJC at MIT-MC> 	NIL and () in VAX NIL.  
Date: 30 March 1982 03:55-EST
From: George J. Carrette <GJC at MIT-MC>
Subject: NIL and () in VAX NIL.
To: Guy.Steele at CMU-10A
cc: McDermott at YALE, common-lisp at SU-AI

I would quote John Caldwell Calhoun (by the way, Yale class of 1804)
here, except that it could lead to unwanted associations with other
losing causes, so instead I'll labour the obvious. If the COMMON-LISP
manual is a winning and presentable document then the NIL and () issue
couldn't possibly cause VAX NIL to secede.


∂06-Apr-82  1337	The Technical Staff of Lawrence Livermore National Laboratory <CL at S1-A> 	T, NIL, ()    
Date: 06 Apr 1982 1021-PST
From: The Technical Staff of Lawrence Livermore National Laboratory <CL at S1-A>
Subject: T, NIL, ()
To:   common-lisp at SU-AI
Reply-To: rpg  

This is to confirm that S-1 Lisp is in agreement with the statements
of Guy Steele on the subject of T, NIL, and (), and though it would be
nice to improve the clarity and elegance of Common Lisp, we will forgo
such to remain common. It is unfortunate that Symbolics finds it impossible
to compromise; however, we find no problem with their technical position.

What is next on the agenda? Another meeting? More manual writing? Perhaps
Steele would like to farm out some writing to `volunteers'?

∂20-Apr-82  1457	RPG   via S1-A 	Test
To:   common-lisp at SU-AI  
This is a test of the Common Lisp mailing list.
			-rpg-

∂20-May-82  1316	FEINBERG at CMU-20C 	DOSTRING 
Date: 20 May 1982  16:12-EDT (Thursday)
From: FEINBERG at CMU-20C
To:   Common-Lisp at SU-AI
Subject: DOSTRING

Howdy!
	Dostring was a very useful iteration construct, and I request
that it be put back into the manual.  I know that there is dotimes,
but I am much more interested in the characters of the string, not the
index into it.  It is very inefficient to keep accessing the nth
character of a string, and a hassle to lambda-bind it, when there was
such a perfect construct for dealing with all this before.  I realize
we can't keep all the type specific functions, but this one seems
especially useful.

∂02-Jun-82  1338	Guy.Steele at CMU-10A 	Keyword-style sequence functions
Date:  2 June 1982 1625-EDT (Wednesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Keyword-style sequence functions

Folks,
  At the November meeting there was a commission to produce
three parallel chapters on the sequence functions.  I'm going nuts
trying to get them properly coordinated in the manual; it would
be a lot easier if I could just know which one is right and do it
that way.
  As I recall, there was a fair amount of sentiment in favor of
Fahlman's version of the keyword-oriented proposal, and no serious
objections.  As a quick summary, here is a three-way comparison
of the schemes as proposed:
	;; Cross product
(remove 4 '(1 2 4 1 3 4 5))			=> (1 2 1 3 5)
(remove 4 '(1 2 4 1 3 4 5) 1)			=> (1 2 1 3 4 5)
(remove-from-end 4 '(1 2 4 1 3 4 5) 1)		=> (1 2 4 1 3 5)
(rem #'> 3 '(1 2 4 1 3 4 5))			=> (4 3 4 5)
(rem-if #'oddp '(1 2 4 1 3 4 5))		=> (2 4 4)
(rem-from-end-if #'evenp '(1 2 4 1 3 4 5) 1)	=> (1 2 4 1 3 5)
	;; Functional
(remove 4 '(1 2 4 1 3 4 5))			=> (1 2 1 3 5)
(remove 4 '(1 2 4 1 3 4 5) 1)			=> (1 2 1 3 4 5)
(remove-from-end 4 '(1 2 4 1 3 4 5) 1)		=> (1 2 4 1 3 5)
((fremove #'< 3) '(1 2 4 1 3 4 5))		=> (4 3 4 5)
((fremove #'oddp) '(1 2 4 1 3 4 5))		=> (2 4 4)
((fremove-from-end #'evenp) '(1 2 4 1 3 4 5) 1)	=> (1 2 4 1 3 5)
	;; Keyword
(remove 4 '(1 2 4 1 3 4 5))			=> (1 2 1 3 5)
(remove 4 '(1 2 4 1 3 4 5) :count 1)		=> (1 2 1 3 4 5)
(remove 4 '(1 2 4 1 3 4 5) :count 1 :from-end t)=> (1 2 4 1 3 5)
(remove 3 '(1 2 4 1 3 4 5) :test #'>)		=> (4 3 4 5)
(remove-if #'oddp '(1 2 4 1 3 4 5))		=> (2 4 4)
(remove-if '(1 2 4 1 3 4 5) :count 1 :from-end t :test #'evenp)	=> (1 2 4 1 3 5)

Remember that, as a rule, for each basic operation the cross-product
version has ten variant functions ({equal,eql,eq,if,if-not}x{-,from-end}),
the functional version has four variants ({-,f}x{-,from-end}),
and the keyword version has three variants ({-,if,if-not}).
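For concreteness, the keyword-style behavior might be sketched for lists only as below. This is a toy model (REMOVE-SKETCH is an illustrative name; the real proposal covers all sequence types, and keywords such as :start and :end are omitted):

```lisp
;; Toy sketch of keyword-style REMOVE over lists only.
(defun remove-sketch (item list &key (test #'eql) count from-end)
  (let ((result '())
        (removed 0))
    (dolist (x (if from-end (reverse list) list))
      (if (and (funcall test item x)
               (or (null count) (< removed count)))
          (incf removed)                 ; drop this element
          (push x result)))              ; keep this element
    (if from-end result (nreverse result))))
```

With this sketch, (remove-sketch 4 '(1 2 4 1 3 4 5) :count 1 :from-end t) drops only the last 4, matching the keyword examples above.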

What I want to know is, is everyone willing to tentatively agree on
the keyword-style sequence functions?  If so, I can get the next version
out faster, with less work.

If anyone seriously and strongly objects, please let me know as soon
as possible.
--Guy

∂04-Jun-82  0022	MOON at SCRC-TENEX 	Keyword-style sequence functions   
Date: Friday, 4 June 1982  03:06-EDT
From: MOON at SCRC-TENEX
To:   Guy.Steele at CMU-10A
Cc:   common-lisp at SU-AI
Subject: Keyword-style sequence functions

I'll take the keyword-style ones, as long as this line of your message
    (remove-if '(1 2 4 1 3 4 5) :count 1 :from-end t :test #'evenp)	=> (1 2 4 1 3 5)
is really a typo for
    (remove-if #'evenp '(1 2 4 1 3 4 5) ':count 1 ':from-end t)	=> (1 2 4 1 3 5)

∂04-Jun-82  0942	Guy.Steele at CMU-10A 	Bug in message about sequence fns    
Date:  4 June 1982 1214-EDT (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Bug in message about sequence fns
In-Reply-To:  Richard M. Stallman@MIT-AI's message of 3 Jun 82 23:45-EST

Thanks go to RMS for noticing a bug in my last message.  The last
example for the keyword-style functions should not be
(remove-if '(1 2 4 1 3 4 5) :count 1 :from-end t :test #'evenp)
but should instead be
(remove-if #'evenp '(1 2 4 1 3 4 5) :count 1 :from-end t)

I wasn't paying attention when I fixed another bug, resulting
in this bug.
--Guy

∂11-Jun-82  1933	Quux 	Proposed new FORMAT operator: ~U("units")   
Date: 11 June 1982 2233-EDT (Friday)
From: Quux
To: bug-lisp at MIT-AI, bug-lispm at MIT-AI, common-lisp at SU-AI
Subject:  Proposed new FORMAT operator: ~U("units")
Sender: Guy.Steele at CMU-10A
Reply-To: Guy.Steele at CMU-10A

Here's a krevitch that will really snork your flads.  ~U swallows
an argument, which should be a floating-point number (an integer or
ratio may be floated first).  The argument is then scaled by 10↑(3*K)
for some integer K, so that it lies in [1.0,1000.0).  If this
K is suitably small, then the scaled number is printed, then a space,
then a metric-system prefix.  If not, then the number is printed
in exponential notation, then a space.  With a :, prints the short prefix.
Examples:
 (FORMAT () "~Umeters, ~Uliters, ~:Um, ~:UHz" 50300.0 6.0 .013 1.0e7)
  =>  "50.3 kilometers, 6.0 liters, 13.0 mm, 10.0 MHz"

And you thought ~R was bad!
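The scaling step described above can be sketched as follows. This is a toy model, not a real FORMAT extension: SCALE-UNITS is an illustrative name, the prefix table is abbreviated, and the : (short-prefix) and @ (binary) flags are ignored.

```lisp
;; Toy sketch of the ~U scaling: pick K so that x/10^(3K) lies in
;; [1.0,1000.0); if K is in range, print the scaled number, a space,
;; and a metric prefix, else fall back to exponential notation.
(defun scale-units (x)
  (let* ((prefixes '((-2 . "micro") (-1 . "milli") (0 . "")
                     (1 . "kilo") (2 . "mega") (3 . "giga")))
         (k (floor (log x 1000))))
    (let ((entry (assoc k prefixes)))
      (if entry
          (format nil "~S ~A" (/ x (expt 1000.0 k)) (cdr entry))
          (format nil "~E " x)))))      ; K out of range
```

E.g. 50300.0 scales with K=1 to 50.3 and the prefix "kilo", and .013 with K=-1 to 13.0 and "milli", as in the examples above. (The sketch assumes a positive argument; LOG of zero or a negative number would need separate handling.)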

∂12-Jun-82  0819	Quux 	More on ~U (short) 
Date: 12 June 1982 1119-EDT (Saturday)
From: Quux
To: bug-lisp at MIT-AI, bug-lispm at MIT-AI, common-lisp at SU-AI
Subject:  More on ~U (short)
Sender: Guy.Steele at CMU-10A
Reply-To: Guy.Steele at CMU-10A

I forgot to mention that the @ flag should cause scaling by powers of 2↑10
instead of 10↑3:  (format () "~Ubits, ~:Ub, ~@Ubits, ~:@Ub" 65536 65536 65536 65536)
   =>  "65.536 kilobits, 65.536 Kb, 64.0 kilobits, 64.0 Kb"
--Q

∂18-Jun-82  1924	Guy.Steele at CMU-10A 	Suggested feature from EAK 
Date: 17 June 1982 1421-EDT (Thursday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Suggested feature from EAK


- - - - Begin forwarded message - - - -
Mail-From: ARPANET host MIT-MC received by CMU-10A at 16-Jun-82 21:08:16-EDT
Date: 16 June 1982 20:27-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject: Common Lisp feature
To: Guy Steele at CMU-10A

From experience trying to get things to work in several different
dialects (or just different operating systems), I think that it
is absolutely imperative that there be a simple way to load
packages (I don't mean the lispm sense) that you depend on, if
they're not already present.  Having to do this by hand with
eval-when, status feature, load, etc. etc. is very painful, very
error prone, and rarely portable (you usually at least have to
add additional conditionals for each new system).

How about
	(REQUIRE name)
which is (compile load eval) and by whatever means locally
appropriate, insures that the features specified by name are
present (probably by loading a fasl file from an implementation
specific directory if name isn't on a features list).  This may
want to be a macro so that name need not be quoted.

It's possible that REQUIRE could be extended to load different
things at compiled and load times (e.g. if you only need
declarations at compile time), but I don't care myself.
- - - - End forwarded message - - - -
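EAK's proposal might be sketched as below. This is hypothetical: *LOADED-FEATURES* and FEATURE-PATHNAME are invented names, and a real version would honor (compile load eval) via EVAL-WHEN and the implementation's own features list.

```lisp
;; Hypothetical sketch of REQUIRE: load a feature's file only if the
;; feature isn't already present.
(defvar *loaded-features* '())

(defmacro require (name)
  `(unless (member ',name *loaded-features*)
     (load (feature-pathname ',name))   ; implementation-specific lookup
     (push ',name *loaded-features*)))
```

The macro form lets one write (REQUIRE MUMBLE) without quoting, as EAK suggests.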

∂18-Jun-82  2237	JonL at PARC-MAXC 	Re: Suggested feature from EAK 
Date: 18 Jun 1982 22:38 PDT
From: JonL at PARC-MAXC
Subject: Re: Suggested feature from EAK
In-reply-to: Guy.Steele's message of 17 June 1982 1421-EDT (Thursday)
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

Certainly something like this is necessary.  (I must say that I'm impressed
with the facilities for doing such things in InterLisp --  DECLARE: likely
was the precursor of EVAL-WHEN.)   EAK's conception of REQUIRE
seems to be a step in the right direction, and a couple of relevant points
from past MacLisp experience are worth noting: 

  1)                 VERSION NUMBERING
        A few years ago when Bob Kerns and I hacked on this problem, we
     felt that the "requirement" should be for some specific, named feature,
     as opposed to the required loading of some file.  (EAK may have been
     in on those discussions back then).   True, most of our "requirements" 
     were for file loadings (it's certainly easy to make a "feature" synonymous
     with the extent of some file of functions), but not all were like that.  
     There is a very fuzzy distinction between the MacLisp "features" list, 
     and the trick of putting a VERSION property on a (major component
     part of the) file name to indicate that the file is loaded.  
        But a typical "feature" our code often wanted was, say, "file
     EXTBAS loaded, with version number greater than <n>";  thus we'd make
     some dumped system, and then load in a file which may (or may not)
     require re-loading file EXTBAS in order to get a version greater than the 
     one resident in the dump.  Simple file loading doesn't fit that case.
        Xerox's RemoteProcedureCall protocol specifies a kind of "handshaking"
     between caller and callee as to both the program "name" and permissible
     version numbers.

  2)                FEATURE SETS 
          The facility that Kerns subsequently developed attempted to 
     "relativize" a set of features so that a cross-compiler could access the
     "features" in the object (target) environment without damaging those 
     in the (current) compilation environment.  (This was called SHARPC 
     on the MIT-MC NILCOM directory, since it was carefully integrated 
     with the "sharp-sign" reader macro).  I might add that "cross-compilation"
     doesn't mean only from one machine-type to another -- it's an appropriate
     scenario any time the object environment is expected to differ in some
     relevant way.   Software updating is such a case -- e.g. compiling with
     version 85 of "feature" <mumble>, for expected use in a  system with 
     version 86 of <mumble> loaded.   I believe there was a suggestion left
     outstanding from last fall that CommonLisp  adopt a feature set facility 
     like the one in the VAX/NIL (a slightly re-worked version of Kerns's
     original one).

  3)               LOADCOMP
        Another trick from the InterLisp world: there are several varieties of
     "load" functions, two of which might be appropriate for EAK's suggestion.
      2a) LOAD is more or less the standard one which just gobbles down
          the whole file in the equivalent of a Read-Eval loop
      2b) LOADCOMP gobbles down only the things that the compiler would
          evaluate, if it were actually compiling the file;  the idea is to get
          macros etc that appear under (EVAL-WHEN (COMPILE ...) ...)
          Thus when a file is being compiled it can cause the declarations etc
          from another to be snarfed; in actual use, LOADCOMP can be (and is)
          called by functions which prepare some particular environment, 
          and not just by (EVAL-WHEN (COMPILE) ...) expressions in files.  
          [Since InterLisp files generally have a "file map" stored on them, it's
           possible to omit reading any of the top-level DEFUNs; thus this
           really isn't as slow as it might at first seem.]     

∂19-Jun-82  1230	David A. Moon <Moon at SCRC-TENEX at MIT-AI> 	Proposed new FORMAT operator: ~U("units")   
Date: Saturday, 19 June 1982, 15:08-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-AI>
Subject: Proposed new FORMAT operator: ~U("units")
To: Guy.Steele at CMU-10A
Cc: bug-lisp at MIT-AI, bug-lispm at MIT-AI, common-lisp at SU-AI
In-reply-to: The message of 11 Jun 82 22:33-EDT from Quux

Tilde yourself!  I think this is a little too specialized to go into FORMAT.

∂02-Jul-82  1005	Guy.Steele at CMU-10A 	SIGNUM function  
Date:  2 July 1982 1303-EDT (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  SIGNUM function

    
    Date:  Wednesday, 30 June 1982, 17:47-EDT
    From:  Alan Bawden <Alan at SCRC-TENEX>
    Subject:  SIGNUM function in Common Lisp

    Someone just asked for a SIGN function in LispMachine Lisp.  It seems
    like an obvious enough omission in the language, so I started to
    implement it for him.  I noticed that Common Lisp specifies that this
    function should be called "SIGNUM".  Is there a good reason for this?
    Why not call it "SIGN" since that is what people are used to calling it
    (in the non-complex case at least)?

I called it "SIGNUM" because that is what most mathematicians call it.
See any good mathematical dictionary.  (Note, too, that the name of the
ACM special interest group on numerical mathematics is SIGNUM, a fine
inside joke.)  However, people in other areas (such as applied mathematics
and engineering) do call it "SIGN".  The standard abbreviation is SGN(X),
with SG(X) apparently a less preferred alternative.

As for programming-language tradition, here are some results:
*  PASCAL, ADA, SAIL, and MAD (?) have no sign-related function.
*  PL/I, BLISS, ALGOL 60, and ALGOL 68 call it "SIGN".
*  SIMSCRIPT II calls it "SIGN.F".
*  BASIC calls it SGN.
*  APL calls it "signum" in documentation, but in code the multiplication
   sign is used as a unary operator to denote it.  (Interestingly, such
   an operator was not defined in Iverson's original book, "A Programming
   Language", but he does note that the "sign function" can be defined
   as (x>0)-(x<0).  Recall that < and > are 0/1-valued.  I haven't tracked
   down exactly when it got introduced as a primitive, and how it came
   to be called "signum" in the APL community.)
*  FORTRAN has a function called SIGN, but it doesn't mean the sign
   function -- it means "transfer of sign".  SIGN(A,B) = A*sgn(B),
   but undefined if B=0.

I chose "SIGNUM" for Common LISP for compatibility with APL and mathematical
terminology, and also to prevent confusion with FORTRAN, whose SIGN function
takes two arguments.  I don't feel strongly about the name.  I observe,
however, that if the extension to complex numbers is retained, then
compatibility with APL, the only other language to make this useful
extension, may be in order.  (The signum function on complex numbers
is elsewhere also called the "unit" or "unit-vector" function for
obvious reasons.  It is called "unit" in Chris van Wyk's IDEAL language
for picture-drawing.)
--Guy

∂02-Jul-82  1738	MOON at SCRC-TENEX 	SIGN or SIGNUM 
Date: Friday, 2 July 1982  20:12-EDT
From: MOON at SCRC-TENEX
To:   common-lisp at sail
Subject: SIGN or SIGNUM

Seems to me the truly APL-compatible thing would be for SIGN
with one argument to be the APL unary × and with two arguments
to be the Fortran SIGN transfer function.

∂07-Jul-82  1339	Earl A. Killian            <Killian at MIT-MULTICS> 	combining sin and sind
Date:     7 July 1982 1332-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  combining sin and sind
To:       Common-Lisp at SU-AI

Instead of having both sin and sind (arguments in radians and degrees)
respectively, how about defining sin as
          (defun sin (x &optional (y radians)) ...)
Where the second optional argument specifies the units in "cycles".
You'd use 2*pi for radians (the default), and 2*pi/360 for degrees.  To
get the simplicity of sind, you'd define the variable degrees to be
2*pi/360 and write (sin x degrees).
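A sketch of this proposal in Python (names are mine; the optional argument is read as the number of units per full cycle, so 2*pi selects radians and 360 selects degrees):

```python
import math

def sin_in(x, units_per_cycle=2 * math.pi):
    # Convert x from the given units to radians, then take the sine.
    # units_per_cycle = 2*pi -> radians (the default); 360 -> degrees.
    return math.sin(x * 2 * math.pi / units_per_cycle)
```

For example, (sin 90 degrees) would correspond to sin_in(90, 360).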

∂07-Jul-82  1406	Earl A. Killian            <Killian at MIT-MULTICS> 	user type names  
Date:     7 July 1982 1310-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  user type names
To:       Common-Lisp at SU-AI

My very rough draft manual does not specify any way for a user to define
a type name in terms of the primitive types, which seems like a serious
omission.  Perhaps this has already been fixed?  If not, I propose
          (DEFTYPE name (args ...) expansion)
E.g. instead of building in unsigned-byte, you could do
          (deftype unsigned-byte (s) (integer 0 (- (expt 2 s) 1)))
The need for this should be obvious, even though it doesn't exist in
Lisp now.  Basically Common Lisp is going to force you to specify types
more often than older Lisps if you want efficiency, so you need a way of
abbreviating things for brevity, clarity, and maintainability.  I'd hate
to have to write
          (map #'+ (the (vector (integer 0 (- (expt 2 32) 1)) 64) x)
                   (the (vector (integer 0 (- (expt 2 32) 1)) 64) y))
I can barely find the actual vectors being used!

This also allows you to define lots of the builtin types yourself, which
seems more elegant than singling out signed-byte as worthy of inclusion.
Also, it provides a facility that exists in languages such as Pascal.

Now, how would you implement deftypes?  A macro mechanism seems like the
appropriate thing.  E.g. when the interpreter or compiler finds a type
expression it can't grok, it would do
          (funcall (get (car expr) 'type) expr)
and use the returned frob as the type.
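The proposed macro mechanism can be sketched as a registry of expander functions, mimicking the (funcall (get (car expr) 'type) expr) dispatch. This Python toy (all names mine, purely illustrative) represents type expressions as tuples with the type name at the head:

```python
# Registry mapping a type name to an expander function, playing the
# role of the 'type property on the symbol's plist.
TYPE_EXPANDERS = {}

def deftype(name):
    # Decorator standing in for DEFTYPE: records the expander.
    def register(expander):
        TYPE_EXPANDERS[name] = expander
        return expander
    return register

def expand_type(expr):
    # When the head names a defined type, call its expander on the
    # whole expression and recur, in case the expansion is itself
    # a defined type; otherwise the expression is already primitive.
    if isinstance(expr, tuple) and expr[0] in TYPE_EXPANDERS:
        return expand_type(TYPE_EXPANDERS[expr[0]](expr))
    return expr

@deftype('unsigned-byte')
def _unsigned_byte(expr):
    # (unsigned-byte s) -> (integer 0 (- (expt 2 s) 1))
    s = expr[1]
    return ('integer', 0, 2 ** s - 1)
```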

∂07-Jul-82  1444	Earl A. Killian            <Killian at MIT-MULTICS> 	trunc  
Date:     7 July 1982 1420-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  trunc
To:       Common-Lisp at SU-AI

Warning: the definition of trunc in Common Lisp is not the same as an
integer divide instruction on most machines (except the S-1).  The
difference occurs when the divisor is negative.  For example, (trunc 5
-2) is defined to be the same as (trunc (/ 5 -2)) = (trunc -2.5) = -2,
whereas most machines divide such that the sign of the remainder is the
same as the sign of the dividend (aka numerator), which gives -3 for
5/-2.

Implementors should make sure that they do the appropriate testing
(ugh), unless someone wants to propose kludging the definition.

∂07-Jul-82  1753	Earl A. Killian <EAK at MIT-MC> 	combining sin and sind
Date: 7 July 1982 18:32-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  combining sin and sind
To: Common-Lisp at SU-AI

I meant 360, not 2*pi/360 in my previous message.

∂07-Jul-82  1945	Guy.Steele at CMU-10A 	Comment on HAULONG    
Date:  7 July 1982 2244-EDT (Wednesday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Comment on HAULONG

What do people think of the following suggested change?  I suspect
MacLISP HAULONG was defined as it was because internally it used
sign-magnitude representation.  EAK's suggestion is more appropriate
for two's-complement, and the LOGxxx functions implicitly assume
that as a model.

- - - - Begin forwarded message - - - -
Mail-From: ARPANET host MIT-Multics received by CMU-10A at 7-Jul-82 17:03:29-EDT
Date:     7 July 1982 1352-pdt
From:     Earl A. Killian            <Killian at MIT-MULTICS>
Subject:  haulong
To:       Guy Steele at CMUa

I think the definition in the manual for haulong:

ceiling(log2(abs(integer)+1))

is poor.  Better would be

if integer < 0 then ceiling(log2(-integer)) else ceiling(log2(integer+1))

I know of no non-conditional expression for this haulong (if you should
ever discover one, please let me know).  The only numbers that this
matters for are -2↑N.  Amusingly enough, I found this exact bug in the two
compilers I've worked on (i.e. they thought it took 9 bits instead of 8
to store a -256..255).
- - - - End forwarded message - - - -
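EAK's corrected definition can be checked directly. A Python sketch (my naming; it anticipates the "integer-length" name suggested later in this thread): for nonnegative n, ceiling(log2(n+1)) is exactly Python's int.bit_length(), and for negative n, ceiling(log2(-n)) equals (-n-1).bit_length().

```python
def integer_length(n):
    # EAK's corrected HAULONG:
    #   n >= 0:  ceiling(log2(n + 1))   -- bits to hold 0..n
    #   n <  0:  ceiling(log2(-n))      -- so -256 needs 8, not 9
    return n.bit_length() if n >= 0 else (-n - 1).bit_length()
```

Note integer_length(-256) is 8, exhibiting exactly the off-by-one that the manual's ceiling(log2(abs(n)+1)) definition (and the two buggy compilers) would get wrong.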

∂07-Jul-82  1951	Guy.Steele at CMU-10A 	Re: trunc   
Date:  7 July 1982 2250-EDT (Wednesday)
From: Guy.Steele at CMU-10A
To: Earl A. Killian <Killian at MIT-MULTICS>
Subject:  Re: trunc
CC: common-lisp at SU-AI
In-Reply-To:  Earl A. Killian@MIT-MULTICS's message of 7 Jul 82 16:20-EST

No, EAK, I think there's a bug in your complaint.  Indeed most machines
divide so that sign of remainder equals sign of dividend.  So 5/-2 must
yield a remainder of 1, not -1.  To do that the quotient must be -2, not -3.
(Recall that dividend = quotient*divisor + remainder, so 5 = (-2)*(-2) + 1.)
So TRUNC does indeed match standard machine division.
--Guy
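Guy's arithmetic is easy to verify mechanically. A small Python check (illustrative only; note Python's own // operator floors rather than truncates, so math.trunc of the true quotient is used for the TRUNC convention):

```python
import math

dividend, divisor = 5, -2

# TRUNC convention: round the quotient toward zero.
q = math.trunc(dividend / divisor)   # trunc(-2.5) -> -2
r = dividend - q * divisor           # 5 - (-2)*(-2) = 1

# Flooring convention (what Python's // and % actually do):
qf, rf = dividend // divisor, dividend % divisor
```

The truncating remainder r is 1, whose sign matches the dividend, just as Guy says; the flooring quotient qf is -3 with remainder -1, which is the other convention EAK had in mind.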

∂07-Jul-82  2020	Scott E. Fahlman <Fahlman at Cmu-20c> 	Comment on HAULONG   
Date: Wednesday, 7 July 1982  23:14-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Guy.Steele at CMU-10A
Cc:   common-lisp at SU-AI
Subject: Comment on HAULONG


EAK's suggestion for Haulong looks good to me.
-- Scott

∂08-Jul-82  1034	Guy.Steele at CMU-10A 	HAULONG
Date:  8 July 1982 1320-EDT (Thursday)
From: Guy.Steele at CMU-10A
To: David.Dill at CMU-10A
Subject:  HAULONG
CC: common-lisp at SU-AI

    Date:  8 July 1982 0038-EDT (Thursday)
    From: David.Dill at CMU-10A (L170DD60)
    
    Isn't this a dumb name?

Yes, it is -- but it's traditional, from MacLISP.  Maybe if its
definition is "fixed" then its name should be also?  (But I happen
to like it as it is.)
--Guy


∂08-Jul-82  1723	Earl A. Killian <EAK at MIT-MC> 	HAULONG
Date: 8 July 1982 20:24-EDT
From: Earl A. Killian <EAK at MIT-MC>
Subject:  HAULONG
To: Guy.Steele at CMU-10A
cc: common-lisp at SU-AI

I think names like this really ought to be changed.  Obviously
you can't rename important functions for aesthetics, but for
obscure ones like this, a cleanup is in order.

integer-length?  precision?

Also, how about bit-count instead of count-bits?  It's less
imperative and more descriptive.

∂08-Jul-82  1749	Kim.fateman at Berkeley 	Re:  HAULONG   
Date: 8 Jul 1982 17:41:12-PDT
From: Kim.fateman at Berkeley
To: EAK@MIT-MC, Guy.Steele@CMU-10A
Subject: Re:  HAULONG
Cc: common-lisp@SU-AI

I would think ceillog2  (ceiling of base-2 logarithm)  would be a
good basis for a name, if that is, in fact, what it does.

You know the function in maclisp which pulls off the n high bits
(or -n low bits)  is called HAIPART...

∂09-Jul-82  1450	Guy.Steele at CMU-10A 	Meeting?    
Date:  9 July 1982 1748-EDT (Friday)
From: Guy.Steele at CMU-10A
To: common-lisp at SU-AI
Subject:  Meeting?

[Sorry if this is a duplication, but an extra notice can't hurt,
especially if it is the only one!]

Inasmuch as lots of LISP people will be in Pittsburgh the week of
the LISP and AAAI conferences, it has been suggested that another
Common LISP meeting be held at C-MU on Saturday, August 22, 1982.
Preparatory to that I will strive mightily to get draft copies of
the Common LISP manual with all the latest revisions to people as
soon as possible, along with a summary of outstanding issues that
must be resolved.  Is this agreeable to everyone?  Please tell me
whether or not you expect to be able to attend.
--Thanks,
  Guy

∂09-Jul-82  2047	Scott E. Fahlman <Fahlman at Cmu-20c> 	Meeting?   
Date: Friday, 9 July 1982  23:39-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   Guy.Steele at CMU-10A
Cc:   common-lisp at SU-AI
Subject: Meeting?


Guy,
My corporeal manifestation will be there.  My essence may well be
elsewhere.
-- Scott