1 point by sacado 6081 days ago

OK, so I started doing it in mzscheme. It shouldn't be done in pure Arc, for the following reasons:

- compiling Arc code is better done on the compiler (Scheme) side than on the Arc side

- that way, I can get rid of n+ et al., as they never really get called from Arc code

- manipulating what is generated by the 'ac function is easier than manipulating raw Arc code: 'ac does macro expansion and translates ssyntax into Scheme syntax.

In practice, so far, I have added a function inside 'ac that wraps the latter's result. This function explores and modifies the code generated by 'ac. Every time it sees a 'lambda, it takes its args and body and generates an actual lambda that looks like the Arc code I wrote here: http://arclanguage.org/item?id=5216 .

So, 2 hash tables are generated for each lambda. Well, as you can guess, the code eats as much memory as you can imagine, particularly if you arco the code in arc.arc (which should be done anyway, if only to accelerate loops). Now, I'm quite sure the solution based on hash tables is a dead end.
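Roughly, the hash-table scheme amounts to something like the following. This is only a Python sketch of the idea, not the actual mzscheme code; the names (wrap, specialize) are made up for illustration:

```python
# Sketch of the hash-table dispatch idea: each wrapped lambda carries its
# own cache mapping argument-type signatures to a specialized version of
# the function. Every new signature adds an entry, which is why the
# tables keep growing as more code is arco'd.

def specialize(fn, sig):
    """Stand-in for the real code generator: would emit code specialized
    for this type signature. Here it just returns fn unchanged."""
    return fn

def wrap(fn):
    cache = {}  # one table per lambda: type signature -> specialized code
    def dispatch(*args):
        sig = tuple(type(a) for a in args)
        if sig not in cache:
            cache[sig] = specialize(fn, sig)  # generate and remember
        return cache[sig](*args)
    dispatch.cache = cache  # exposed only so the growth is visible
    return dispatch

add = wrap(lambda a, b: a + b)
add(1, 2)      # first (int, int) call: a new entry is generated
add("a", "b")  # (str, str): another entry
```

Every wrapped lambda owns such a table, so the per-function memory cost accumulates across the whole program.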

Maybe I should do otherwise: instead of using hash tables, I could make the function's code grow every time a new type is applied to it:

  (if (type-is-the-new-type)
    (call new-generated-code-for-this-type)
    (call the-previously-generated-code))
I don't know if this would work better; in any case, I will probably not work on this today. Writing macros generating macros generating lambdas analysing other lambdas to generate other lambdas is an unfailing source of headaches. And bugs too.
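The grow-the-code alternative can be sketched the same way: instead of a table lookup, each new type signature wraps the previous dispatcher in one more type test, so a call walks a chain of ifs exactly like the pseudocode above. Again a Python sketch with made-up names, not the real implementation:

```python
# Sketch of the "growing code" dispatch: unknown signatures fall through
# to a generator that extends the chain with one more type test.

def specialize(fn, sig):
    # Stand-in for generating type-specific code; returns fn unchanged.
    return fn

def wrap(fn):
    state = {"dispatch": None}  # current head of the if-chain

    def fallback(*args):
        # Unknown signature: generate code for it, then grow the chain.
        sig = tuple(type(a) for a in args)
        specialized = specialize(fn, sig)
        previous = state["dispatch"]
        def link(*a):
            if tuple(type(x) for x in a) == sig:
                return specialized(*a)  # new-generated-code-for-this-type
            return previous(*a)         # the-previously-generated-code
        state["dispatch"] = link
        return specialized(*args)

    state["dispatch"] = fallback
    def call(*args):
        return state["dispatch"](*args)
    return call

add = wrap(lambda a, b: a + b)
add(1, 2)      # grows the chain with an (int, int) test
add("a", "b")  # grows it again with a (str, str) test
```

This is essentially how a polymorphic inline cache grows: a fast path for already-seen types, with a fall-through to the generator for new ones, and no per-call hash lookup.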