development release newLISP 9.3.1

• bug fixes
• new 'read' hooks into internal source reader
files and change notes: http://newlisp.org/downloads/development/
no binary installers for this development release
Lutz
ps: I am out of the country for 2 weeks and will not be able to get on the internet on a daily schedule to read email, give support, etc.
cormullion wrote: Looks fascinating... Will this function allow me to convert newLISP code into a sequence of tokens (e.g. like my attempt at tokenizing, tokenizer.lsp)? E.g. for formatting source code...?

Call me naive, but does parse without the str-break argument not do this?
With newLISP you can grow your lists from the right side!
Jeff wrote: No, parse is more like split-by-pattern.

Only if a pattern is given. If no pattern is given, it splits by newLISP lexical rules:
> (parse "(sin (+ 1 1.5))")
("(" "sin" "(" "+" "1" "1.5" ")" ")")
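For comparison, here is a quick sketch of the split-by-pattern behaviour Jeff describes, using the str-break argument; the comma-separated string is my own example, not from the thread:

> (parse "one,two,three" ",")
("one" "two" "three")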
With newLISP you can grow your lists from the right side!
One of the problems with parse is that it doesn't preserve comments. So for pretty-printing or formatting it can be considered a destructive function... :)
And parse turns quotes into braces, too.
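To illustrate the comment point, here is a snippet of my own (not from the thread); the output is what I would expect from the lexical-rules behaviour shown above, with the comment simply dropped:

> (parse "(+ 1 2) ; add them")
("(" "+" "1" "2" ")")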
There was an issue with colons I seem to remember:
(parse {(define (fred:jim) (println fred jim))})
If you are good with regular expressions, you could use the second syntax of 'parse' or the first syntax of 'find-all' with regular expressions to tokenize, or a combination of both. Only parse without any options behaves as cormullion describes. But 'find-all' is probably better suited, because there the regular expression describes the token itself and not the space in-between, as 'parse' does.
Lutz
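A minimal sketch of the 'find-all' approach described above; the regular expression is my own illustration, not from the thread. It matches a parenthesis, a double-quoted string, or any run of characters that is neither whitespace nor a parenthesis:

> (find-all {\(|\)|"[^"]*"|[^\s()]+} {(sin (+ 1 1.5))})
("(" "sin" "(" "+" "1" "1.5" ")" ")")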