I defined a macro at-least, a generalized version of the operator or. (at-least expr1 ... exprn) should return true if expr2 ... exprn evaluate to true at least expr1 times. Also, evaluation should be "lazy", i.e. once it is discovered that true has already occurred expr1 times, no additional evaluations should be performed, as is typical for or. The test should be:
Code:
;; Both Newlisp and Common Lisp
(let ((x 1) (y 2) (z 3) (n 3))
  (print (at-least n
           (at-least (- n 1) (= x 7) (= y 2) (= z 3))
           (at-least (- n n) nil nil nil nil)
           (at-least (* 1 z) 1 (= 2 2) (let ((z 100))
                                         (= z 1000))))))
-> nil
Changing 1000 to 100 gives -> true
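For readers who run neither Lisp, here is a rough Python sketch of the intended semantics (my own illustration, not from the original post; the name at_least and the wrapping of expressions in thunks are assumptions made to imitate lazy evaluation in Python):

```python
def at_least(n, *thunks):
    """Return True if at least n of the thunks return a truthy value.

    Thunks are called left to right, and calling stops as soon as the
    required count is reached -- lazy, like `or`.
    """
    remaining = n
    for thunk in thunks:
        if remaining <= 0:
            return True          # enough successes already; stay lazy
        if thunk():
            remaining -= 1
    return remaining <= 0

# Mirror of the Lisp test: x = 1, y = 2, z = 3, n = 3
x, y, z, n = 1, 2, 3, 3
result = at_least(
    n,
    lambda: at_least(n - 1, lambda: x == 7, lambda: y == 2, lambda: z == 3),
    lambda: at_least(n - n, lambda: None, lambda: None, lambda: None, lambda: None),
    lambda: at_least(1 * z, lambda: 1, lambda: 2 == 2, lambda: (lambda z=100: z == 1000)()),
)
print(result)  # False, matching the Lisp result of nil
```

As in the Lisp test, the innermost (= z 1000) check fails under the inner binding z = 100, so the third sub-expression falls one success short and the outer call returns False.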
Code:
;;Newlisp
(define-macro (at-least atn)
  (let ((aten (eval atn)))
    (doargs (ati (zero? aten))
      (when (eval ati)
        (dec aten)))
    (zero? aten)))
Code:
;;Common Lisp
(defmacro at-least (n &rest es &aux (nsym (gensym)) (carsym (gensym)))
  (if (null es)
      nil
      `(let* ((,nsym ,n) (,carsym ,(car es)))
         (cond ((zerop ,nsym) t)
               ((= ,nsym 1) (or ,carsym ,@`,(cdr es)))
               (t (at-least (if ,carsym (1- ,nsym) ,nsym) ,@`,(cdr es)))))))
To be fair, the CL definition is safer, i.e. it is harder to shoot oneself in the foot. This problem can be routinely fixed in Newlisp by using Newlisp's lexical-scope features (contexts), or by applying techniques I described on my blog; in this case, applying a naming convention once the code is written and tested solves the problem completely. For example,
Code:
;;Newlisp
(define-macro (at-least at-least_n)
  (let ((at-least_en (eval at-least_n)))
    (doargs (at-least_i (zero? at-least_en))
      (when (eval at-least_i)
        (dec at-least_en)))
    (zero? at-least_en)))
Besides the obvious technical inferiority of not being first class, the main problem with the Common Lisp macro is its complexity. Cavallaro is obviously a talented and experienced programmer, able to write complex code. His macro has 57 tokens vs. my 18 tokens, so it is more than three times longer. Relatively advanced techniques, like gensym and list splicing (,@), are used. I can safely say, from my experience, that not many people are able, or motivated, to write such code. Rainer Joswig wrote a shorter macro, 31 tokens, but it addresses a simpler version of the problem. Still, his code was more complicated than the Newlisp version. The Newlisp macro, by contrast, is just one loop - nothing remotely advanced here.
Are there any reasons one might prefer CL macros over Newlisp macros? Yes, if the use of at-least is simple, i.e. at-least appears only in expressions like those in my test, but never in anything like (map at-least ...), (apply at-least ...) or (eval (at-least ...)). In such situations, CL macros allow compilation, and compiled code can run significantly faster. This is the main reason Common Lispers avoid eval. The Newlisp code presented here, even if it were, theoretically, compiled (Newlisp actually has no compiler), wouldn't run that fast. However, if the code does contain such expressions, then the Newlisp version is either the only one possible, or it allows faster evaluation. Yes, in some cases interpreters are faster than compilers. And if time is not critical, the Newlisp version is just simpler.
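The first-class point can also be made concrete outside Lisp (again my own hypothetical illustration, not code from the post): because Newlisp's at-least is an ordinary runtime value, the equivalent of (map at-least ...) works, whereas a compiled Common Lisp macro has no value to pass around. In Python, where functions are first class, the same pattern looks like this:

```python
def at_least(n, *thunks):
    # Same lazy counting sketch as before: stop once n thunks are truthy.
    remaining = n
    for thunk in thunks:
        if remaining <= 0:
            return True
        if thunk():
            remaining -= 1
    return remaining <= 0

# Because at_least is a first-class value, it can be handed to
# higher-order constructs -- the analogue of (map at-least ...),
# which is exactly what a CL macro cannot do.
checks = (lambda: True, lambda: False, lambda: True)
results = [at_least(k, *checks) for k in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```

With two truthy checks out of three, every threshold up to 2 succeeds and the threshold of 3 fails, which is why mapping over thresholds yields three Trues and one False.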
---
I'll publish this on my blog, but I thought it might be of interest to those on this forum who collect first impressions of Newlisp as well.