Lisp Implementations

Walter van Roggen
Artificial Intelligence Technology Group
Digital Equipment Corporation
290 Donald Lynch Blvd, Marlborough, MA 01752, DLBS-2/B10
[email protected]

At last some benchmark numbers are available for several machines and implementations. I've tried to get the latest numbers for each implementation. Unfortunately, some vendors are less than enthusiastic about releasing any numbers at all, so they are either not listed here or their numbers were taken from reliable sources. If you want to see numbers for other implementations, contact the vendors and ask them to send their latest for publication here.

The benchmark times are in CPU seconds. The geometric mean listed at the bottom of each column is the mean of the ratios of the individual times for each implementation compared to the times for VAX LISP V2.2 on a VAX-11/780. [I had to choose something as the standard, and most vendors compare with the VAX-11/780 at one time or another, despite its age.] I believe all the numbers were taken on machines with sufficient memory to avoid significant paging. For VAX LISP we've found that the "knee" in the curve of the graph relating paging to memory size is less than 1 megabyte for the benchmarks as a whole.

• The Symbolics implementation was running release 7.1.
• The TI implementation was running release 3.0.
• The Lucid Common Lisp for the VAX implementation was running V1.0 on VMS V4.2.
• The VAX LISP implementation was running V2.2 on VMS V4.6.
• The Sun implementation was Sun Common Lisp 2.1, running on 4.2bsd release 3.2.
• The Apollo implementation was DOMAIN/Common Lisp.
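As an illustration of the figure of merit used in the tables below, the geometric mean of per-benchmark ratios can be sketched in a few lines of Scheme. This snippet is not from the column; `geo-mean` is a name of my choosing.

```scheme
;; Illustrative sketch (not from the column): the per-machine figure of
;; merit is the geometric mean of the ratios (VAX-11/780 time / this
;; machine's time), so values above 1.0 indicate a machine faster than
;; the reference VAX-11/780.
(define (geo-mean ratios)
  (exp (/ (apply + (map log ratios)) (length ratios))))

;; Example with made-up ratios: a machine twice as fast on one benchmark
;; and half as fast on another averages out to roughly 1.0:
;; (geo-mean '(2.0 0.5))  ; => approximately 1.0
```

Using ratios rather than raw times keeps any single long-running benchmark (such as Triang) from dominating the summary.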
                 Symbolics  TI Explorer  MicroVAX-II  MicroVAX-II
Benchmark        3650       I            Lucid CL     VAX LISP
Tak                0.45       1.15         0.99         1.16
Stak               2.31       5.21         3.68         4.70
Ctak               5.47       2.42         3.78         9.91
Takl               4.81       8.79         3.69         7.12
Takr               0.45       1.18         1.50         2.51
Boyer              8.63      19.56        28.00        44.27
Browse            11.17      45.70        49.44        52.80
Destru             1.25       2.97         5.26         4.08
Trav-Init          6.59      15.70        11.53        14.69
Trav-Run          36.69      94.52        76.78       129.00
Deriv              2.68       5.69        11.59        12.28
DDeriv             2.55       5.98        15.36        21.24
Div2-Iter          1.07       2.49         3.92         4.56
Div2-Recur         2.02       3.53         4.97         6.80
FFT                2.59      19.45       103.52        35.58
Puzzle             6.20      25.28        27.09        23.38
Triang           136.77     330.90       256.79       435.00
Fprint               na       2.57         3.77         5.03
Fread                na       6.66         8.64         6.00
Tprint               na         na         4.62         2.68
Frpoly-r-15        2.56       5.23           na        16.84
Frpoly-r2-15      15.04      10.86       109.05        39.90
Frpoly-r3-15       2.53       6.99        17.02        23.80
Geo mean           2.87       1.26         0.83         0.74

Table 1: Common Lisp benchmarks for Symbolics, TI Explorer, and DEC MicroVAX systems

               Sun                       Apollo
Benchmark      3/160    3/60   3/260     3000    580T    4000
Tak             0.44    0.36    0.24     0.59    0.37    0.29
Stak            2.38    1.82    1.12     3.00    1.57    1.32
Ctak            1.70    1.32    0.90     2.99    2.07    1.18
Takl            2.26    1.78    1.26     2.85    1.69    1.32
Takr            0.70    0.56    0.38     0.92    0.55    0.43
Boyer          12.54    9.62    6.48    18.41   13.02    9.89
Browse         19.00   18.14    9.92    32.20   22.77   18.09
Destru          1.58    1.24    0.88     2.56    1.79    1.46
Trav-Init       4.62    3.60    2.58     5.82    4.07    2.97
Trav-Run       37.94   29.72   23.32    45.72   38.43   29.16
Deriv           3.80    2.80    1.98     7.61    5.13    4.81
DDeriv          5.18    3.86    2.62    10.43    7.06    6.47
Div2-Iter       0.98    0.74    0.58     2.68    2.01    1.94
Div2-Recur      1.44    1.16    0.86     3.39    2.82    2.50
FFT             4.62    3.72    3.88     6.29    2.57    2.90
Puzzle          5.06    4.14    2.92     6.60    3.77    3.02
Triang        137.78  107.96   72.92   178.13   95.44   78.46
Fprint          1.24    0.96    0.82     1.94    1.33    1.03
Fread           3.90    3.00    2.12     4.24    3.09    2.38
Tprint          1.94    1.38    0.98     2.20    1.52    1.19
Frpoly-r-15     3.90    2.92    1.94     5.43    4.15    2.84
Frpoly-r2-15   44.90   35.12   24.88    23.30   17.50   13.96
Frpoly-r3-15    7.65    5.86    4.28    11.79    8.52    6.40
Geo mean        2.33    2.98    4.18     1.63    2.44    3.04

Table 2: Common Lisp benchmarks for Apollo and Sun systems

               DEC VAX
Benchmark      MicroVAX-II    3600  11/780  11/785    8650    8700
Tak               1.16        0.45    1.23    0.97    0.25    0.21
Stak              4.70        1.00    3.04    2.36    0.63    0.48
Ctak              9.91        2.67    6.64    4.80    1.32    1.34
Takl              7.12        2.02    5.49    4.12    1.14    0.99
Takr              2.51        0.87    2.37    1.42    0.41    0.28
Boyer            44.27       12.32   28.72   18.66    5.63    4.77
Browse           52.80       13.50   32.57   21.96    6.13    6.33
Destru            4.08        1.45    3.81    2.78    0.65    0.73
Trav-Init        14.69        4.89   11.49    7.39    2.33    2.22
Trav-Run        129.00       33.38   95.84   69.63   19.60   17.61
Deriv            12.28        3.75    8.82    6.20    1.73    1.70
DDeriv           21.24        5.42   12.36    8.04    2.51    2.14
Div2-Iter         4.56        1.30    3.26    2.34    0.64    0.55
Div2-Recur        6.80        2.08    5.72    3.77    1.10    0.89
FFT              35.58        9.27   23.08   15.60    4.57    4.31
Puzzle           23.38        8.01   19.43   13.04    3.78    3.65
Triang          435.00      101.49  271.35  191.67   50.86   49.04
Fprint            5.03        1.80    3.97    2.66    0.77    0.77
Fread             6.00        2.28    5.37    2.63    0.89    0.82
Tprint            2.68        0.90    1.69    1.09    0.31    0.33
Frpoly-r-15      16.84        5.68   10.84    6.72    2.46    2.13
Frpoly-r2-15     39.90       14.38   33.73   21.11    6.26    6.10
Frpoly-r3-15     23.80        7.99   14.00    8.56    3.02    2.67
Geo mean          0.74        2.42    1.00    1.49    5.13    5.63

Table 3: Common Lisp benchmarks for various VAX systems

Will Clinger provides the following information on a Scheme version of the above benchmarks:

Gabriel benchmark timings for MacScheme+Toolsmith Version 1.0.

Machines:

• Macintosh II with 5 Mby RAM, 40 Mby internal hard disk; Finder 6.0b2, System 4.1; disk cache turned off
• Macintosh Plus with 1 Mby RAM, 2 Mby QuickDrive (external RAMdisk); Finder 5.5, System 4.1; disk cache turned off

Optimization levels:

• opt=2 Native code.
• opt=1 Interpreted byte code. (The default.)
• opt=0 Unoptimized byte code. (Best for debugging.)

Notes on the implementation:

Values are represented as tagged pointers, with the tag in the high bits. The tag bits are masked off on each access in anticipation of future Macintoshes, though masking is unnecessary with current hardware. Fixnums are represented in a 30-bit two's complement immediate format. All arithmetic is generic. All flonums are boxed.
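As a side note not in the original column, the 30-bit two's complement immediate format pins down the fixnum range. A small Scheme sketch (the names below are mine, not MacScheme's):

```scheme
;; Illustrative only: the range implied by a 30-bit two's-complement
;; fixnum representation. MacScheme's own limit names are not given
;; in the column.
(define fixnum-bits 30)
(define smallest-fixnum (- (expt 2 (- fixnum-bits 1))))     ; => -536870912
(define largest-fixnum (- (expt 2 (- fixnum-bits 1)) 1))    ; =>  536870911
```

Integers outside this range fall back to the generic (boxed) arithmetic path.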
The compiler algorithm and abstract machine architecture used at optimization level 0 have been proved correct relative to a denotational semantics for Scheme. Higher levels are based on this correctness proof but add a few optimizations that have not been subjected to formal proof.

Multitasking is supported through a programmable interrupt system and first class continuations. As in PC Scheme, continuation frames are always allocated on a stack, but are copied into the heap on each task switch and on each call to call-with-current-continuation. Unlike PC Scheme, continuations in the heap are invoked without being copied back into the stack. The CTAK benchmark (as modified for Scheme) tests this mechanism.

All benchmarks use generic arithmetic. All benchmarks perform full tag checking and bounds checks.

Notes on individual benchmarks:

• CPSTAK: Continuation-passing version of TAK. An excellent test of first class procedures and tail recursion, both of which are used heavily in Scheme.
• CTAK: In Scheme this is a very heavy test of first class continuations (call-with-current-continuation). The results are not comparable with results for other dialects.
• TAKR: The Macintosh Plus timings show the effect of self-recursion optimization. The Macintosh II timings show the additional effect of the 68020's 256-byte on-chip instruction cache.
• FFT: Version 1.0 does not use the Macintosh II's floating point coprocessor.
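For readers who don't have the Gabriel suite at hand, the flavor of TAK and its continuation-passing variant CPSTAK can be sketched in Scheme roughly as follows. This is a paraphrase of the well-known benchmarks, not code reproduced from this column:

```scheme
;; TAK: a heavily recursive integer benchmark from the Gabriel suite.
(define (tak x y z)
  (if (not (< y x))
      z
      (tak (tak (- x 1) y z)
           (tak (- y 1) z x)
           (tak (- z 1) x y))))

;; CPSTAK: the same computation in continuation-passing style, so every
;; call is a tail call and each pending computation is an explicit closure.
;; This is what makes it a good test of first class procedures.
(define (cpstak x y z)
  (define (tak* x y z k)
    (if (not (< y x))
        (k z)
        (tak* (- x 1) y z
              (lambda (v1)
                (tak* (- y 1) z x
                      (lambda (v2)
                        (tak* (- z 1) x y
                              (lambda (v3)
                                (tak* v1 v2 v3 k)))))))))
  (tak* x y z (lambda (a) a)))

;; With the standard arguments, (tak 18 12 6) and (cpstak 18 12 6)
;; both evaluate to 7.
```

CTAK goes one step further, replacing the explicit closures with call-with-current-continuation, which is why it stresses the stack-to-heap continuation copying described above.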
