Arc Forum
2 points by sacado 6106 days ago

For 1, that's not what I mean. If + is defined this way:

  (def + args
    (if (all [isa _ 'num] args)
          (apply int+ args)
        (all [isa _ 'string] args)
          (apply str+ args)
        (all [isa _ 'cons] args)
          (apply cons+ args)
        (err "oops")))
then we have a formal definition for + that we don't have right now. (As a side note, this is close to the actual implementation of + defined in ac.scm.) Without it, how can we know whether + works on strings or lists? Does it work on chars or not?
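For instance, with that definition in hand (and assuming the int+/str+/cons+ axioms behave as their names suggest), such questions get definite answers just by reading the clauses:

  (+ 1 2 3)        ; all args are 'num    -> (int+ 1 2 3)
  (+ "a" "bc")     ; all args are 'string -> (str+ "a" "bc")
  (+ '(1) '(2 3))  ; all args are 'cons   -> (cons+ '(1) '(2 3))
  (+ #\a #\b)      ; chars match no clause -> (err "oops")

So according to this definition, + does not work on chars; whether the real + does is exactly the kind of thing you can't tell without opening the black box.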

The goal is not to get rid of, say, -, * and / and implement them in Arc. Their meaning is obvious and no one needs to see how they are implemented. This is not a math class. But this is not true of +: by opening the black box it currently is, we get:

- the possibility of alternative implementations of the Arc compiler, since most of the language is defined in itself. Imagine I want to work on a Forth implementation to put Arc code into fridges or TVs; or pg decides it's time to move to a C implementation to get a super-fast Arc. The smaller this set of axioms, the easier it is to write compatible compilers and to avoid bugs from one version to the next.

- the possibility to say "I now want to use + for hash tables, let's add it, how is that implemented ?",

- the possibility to say "I don't need + to work for strings or chars or anything, I just want numeric addition because I'm crunching numbers ; +int is the function in need".

Incidentally, if I want to write int+ today, the only way to do so is something like (def int+ (a b) (- a (- b))), and we both agree that's a bad idea.
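(The trick does generalize to any number of arguments, assuming arc.arc's reduce, but that only underlines how roundabout it is when int+ should simply be an axiom:

  (def int+ args
    (reduce (fn (a b) (- a (- b))) args))

Every pair is combined with a - (-b), i.e. a + b restricted to numbers.)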