You're trying to access the first element of the first element of '(x). You can't access the first element of a symbol....
---
(+ car ((eval (car '(x)))) 2)
You're trying to add three things together here:
- The car function
- ((eval (car '(x)))), which is a roundabout way of saying (x) here
- 2
---
car x
Here you've entered two commands at once: One is "car", which evaluates to the car function, and the other is "x", which evaluates to 5. They're both printed to the console, but "arc>" is printed before each command is read, so the output looks quirky.
>> Also, please just copy-paste the session from your terminal in future.
I see, I will.
>> If it's easier we can go over more examples interactively over chat somewhere. Let me know if you want to try that; you'll see my email if you click on my username above.
Cool, that's more fun for sure :)
>> ... (the other points) ...
Fortunately I got those things. Also, it seems like the prompt is doing (pr (eval MyPrg))
> "seems like the prompt is doing (pr (eval MyPrg))"
Exactly! Lisp and similar languages are said to have a read-eval-print loop or REPL. The interpreter Reads an expression from the prompt, Eval's it, and then Prints the result.
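In Python terms (just an analogy; a real Lisp REPL is of course more involved), the loop is roughly:

```python
# A minimal read-eval-print loop sketched in Python (an analogy --
# not Arc's actual implementation).
def repl_step(line):
    result = eval(line)   # Eval the expression that was Read
    print(result)         # Print the result back to the user
    return result

# Simulate a user typing two expressions at the prompt:
for expr in ["1 + 2", "max(3, 5)"]:
    repl_step(expr)
```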
'(x) is a cons cell whose car is the symbol x, and whose cdr is nil. Your assignment updates this cons; after the assignment its car is 2 and its cdr is nil. This is hard to see because there's no way to access this cons after the fact. So here's a similar example:
arc> (= x 5) ; initialize
5
arc> (= l '(x))
(x)
arc> (= (car l) 2)
2
arc> l
(2)
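The same shared-structure mutation happens in most languages; here is the session translated into Python (an analogy only, with the symbol x stood in by a string):

```python
# Python analogy for the Arc session above: `l` holds a mutable list,
# and assigning through it updates the structure in place.
x = 5
l = ["x"]    # like (= l '(x)): a one-element list holding the symbol x
l[0] = 2     # like (= (car l) 2): mutate the first cell in place
print(l)     # the same list now shows the new car: [2]
```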
Does this make sense? Feel free to ask for more clarification.
The = macro is set up to special-case certain function names, such as car. The point is to let us treat certain compound expressions as though they were assignable variables:
arc> (= l '(x))
(x)
arc> l
(x)
arc> (= (car l) 2)
2
arc> (car l)
2
For a more technical explanation, = is a macro.
Evaluation in Arc has two stages: First, macros expand. Then the expanded code executes, doing function calls and branching and so on.
After one step of macro expansion, (= (car l) 2) looks something like this:
arc> (macex1 '(= (car l) 2))
(do (atwith (gs1822 l gs1823 2) ((fn (val) (scar gs1822 val)) gs1823)))
That's a lot of cruft, but what's important is (scar gs1822 val). For almost all purposes, (= (car l) 2) is equivalent to (scar l 2), just easier to read.
The effect of (scar l 2) is that it modifies l by changing its car to 2, and then it returns 2. That is, it implements exactly the behavior we want from (= (car l) 2).
>> The = macro is set up to special-case certain function names, such as car.
Here we are. :D Yup, sure. It sounds just terribly bad.
Same for:
arc> (+ (eval (car '(x))) 2)
7
AND the fact that we can't assign a value to x with a derivative of that expression - that expression being good in itself.
But well. I'll continue the tutorial, because I've only read half of it - and I just got blocked on the hashtables, the true purpose of my questions here.
Why - I'm looking for a semantic reason, not a syntactic one - can't we get a hashtable like this (or something of the kind, of course):
(map = (x y z) '(1, 2, 3.14))
Please note that map should map, eh. And there is no reason it doesn't. At least in your answer - by the way, thank you for it; I'll save a lot of time by studying it.
I'm pretty sure we can do absolutely everything with symbols, evaluation (and non-evaluation), calls, lists, bindings and some generic flow controls and some things I'm missing...
After that we can bind a combination of those things to a symbol like 'map or 'my-super-hashtable-über-easy-to-use and save LOCs, it's not a problem.
If I'm missing something, a true reason that, for example, map can't build a map - that is, bind symbols to values and return the whole in a list the same way we can ________ symbols to values and return the whole in a list - you're gonna be my new best friend :)
To be clear: either I'm missing a terribly obvious syntax to do this with map, or there is a semantic reason I'm missing, or it's a problem.
You might also be interested to know that (= tbl.k v) expands to (sref tbl v k).
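For comparison, the "pair names with values into a table" operation being asked about is a one-liner in many languages; in Python (an analogy, not an Arc answer):

```python
# Build a table by pairing each key with the value at the same position.
keys = ["x", "y", "z"]
vals = [1, 2, 3.14]
tbl = dict(zip(keys, vals))   # like mapping "bind" over two lists
print(tbl["z"])
```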
---
"the fact we can't assign a value to x with a derivative of that expression - that expression being good in itself."
Yeah, it would be kinda nice if (= (eval 'x) 2) worked. It could expand to (eval `(assign x (',(fn () 2)))). In fact, the defset utility lets us add that behavior ourselves if we want it.
>> In fact, the defset utility lets us add that behavior ourselves if we want it.
At this stage, I can also go back to C :)
Do you get what I mean? Arc, if I may, should be a language which is damn hard consistent, at least on the basic concepts, and which uses a kind of self-generated strategy to reduce LOCs, not hard-coded pseudo-concepts like most languages.
Ok. I will maybe try to write an implementation by myself with C.
"Arc, if I may, should be a language which is damn hard consistent, at least on the basic concepts, and which uses a kind of self-generated strategy to reduce LOCs, not hard-coded pseudo-concepts like most languages."
You might be surprised to hear this, but many of us here like Arc for exactly this reason. It inherits its simplicity and consistency from Scheme.
However, Arc's main improvement upon Scheme is the fact that it uses some quirky abbreviations. These abbreviations do take the form of "hard coded pseudo-concepts."
I might have been wrong to inform you about these abbreviations when you were just starting to learn Arc. If you can, forget about (= ...) and (a:b ...) and stuff, and then see if you like the language any better. ;)
By the way, it also leads to: may a symbol be unbound (= bound to nothing)? I think not. I think that a symbol should be bound to a self-assignment function, and then we might do
> (x 1)
x bound to 1
And then, a hashtable would be:
> '((x 1) (y 2) (z 3))
blabla
By the way, the quote here, which is required, seems to be the beginning of an absolutely beautiful system - the way one binds x, y and z before retrieving a value from such a hashtable might be the start of such dynamic evaluations/calls... And of course, one might not bind x, y and z before such an expression.
EDIT: so there should be kind of bound and unbound hashtables. Sounds good. I might be wrong.
Also, =, which does not evaluate its first argument - which is pretty OK (other functions we write do the same on their arguments) - is able to bind something new to x.
Oh and
> (x 1 2)
x bound to '(1 2)
which is a good start for the consistency of the functions.
I'm looking for something like that when I'm working on something like arc.
For the fun, I'm gonna make a few tries at implementing such a language. Let's see if I can do a little bit better - more atomic/consistent/powerful - in the basic concepts.
One of my other concerns is to generalize terms. And the spaces idea looks like a pretty good idea: a name space, a symbol/bind space, a type space, an algebraic space, a call space, etc. Each term having something in those.
So I can bind things to 3.14 or to "hello world" and so I can call the function bound to 4 with ((+ 2 2)).
I'm not sure it will work at all, i.e., consistency/usability.
Also, macros are very useful because of the scope flexibility they offer (the only other thing they offer is über-tweaking of the code text): we love to use so-called global/context variables, and we certainly need more flexibility for that.
I think that there is a misconception in the programming world behind the fact that scopes are defined by functions. I think - in fact I'm sure, because I've already done it - you can define scopes independently of everything else. And then you can call a function which will use variables from the call context, arguments becoming true arguments, not context vars. OO tried to solve that, but with just the wrong idea.
I will use {} for scopes.
So let's try with the bind rules I've defined earlier:
(def y '2)
{
  (def x '1)
  (def MyFn '((pr x) (pr y)))
  (MyFn)
  {
    (x '10) ; let's say it's possible in the call space since 1 is not a list
    (def y '3.14)
    (MyFn)
  }
  (MyFn)
}
> 1
> 2
> 10
> 3.14
> 10
> 2
I don't see any problem in that, except there is no argument in my function. I've not found something I like for the args yet, but it will come.
Note that:
(MyFn '({(def x '"blabla") (pr x) (pr y)}))
is possible too. I don't see any problem in that. The rule is: scopes are independent of everything.
Interesting idea to have functions not create a new scope by default. But it would make it too easy to create dynamically-scoped variables.
(def foo() {
  (def a 34)
  (bar)})
(def bar()
  a)
Here a acts dynamically scoped. I think it's very valuable to have a way to say "this variable is lexically scoped", meaning that functions called within the scope can't access it.
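To make the lookup rule concrete, here is a tiny Python model (hypothetical; it represents the call stack as a list of scope frames):

```python
# Hypothetical model: dynamic scope searches every frame currently on
# the call stack, newest first.
scopes = [{}]                     # the global frame

def lookup(name):
    for frame in reversed(scopes):
        if name in frame:
            return frame[name]
    raise NameError(name)

def foo():
    scopes.append({"a": 34})      # like (def a 34) inside foo
    try:
        return bar()
    finally:
        scopes.pop()              # foo's frame goes away on return

def bar():
    # Under dynamic scope, bar finds foo's `a` on the stack even though
    # bar's own source never defines it.
    return lookup("a")

print(foo())
```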
Ok, let's do that, if I've understood well. That's cool :)
>> I think it's very valuable to have a way to say "this variable is lexically scoped", meaning that functions called within the scope can't access it.
Do you mean can't access it in the sense of C++ private members, or kind of can't use it?
In the case of can't use it:
Why would a function evaluate a variable which does not exist from its point of view? i.e., compilation error
I've found something for the inner zap.
{
  (def MyFn '( (= MyFn '(3.14)) (1) ) )
  (pr (MyFn))
  (pr (MyFn))
}
> 1
> 3.14
:)
EDIT: you have edited your text, I need to re-evaluate it. But unfortunately, I have to sleep now :D
Let's continue tomorrow :) Thank you, that's pretty interesting :)
Yeah, sorry I got rid of the let from my code example. I thought I should follow your example more closely. Was that the change you noticed?
I think it would be really hard to implement let to have lexical scope. To do so you'd have to delete some bindings from the current scope before each function call. In that case functions modify the scopes going into them, sometimes deleting bindings and sometimes not. Seems confusing.
---
I don't follow your distinction between access and use.
"Why would a function evaluate a variable which does not exist from its point of view?"
Primarily because it assumes some implicit 'global' bindings. Like function names:
Much of the power of lisp derives from having just a few powerful concepts; function names are symbols just like any other and you can shadow their bindings like anything else.
Even aside from functions, codebases tend to have variables that are implicitly accessed without being passed in as arguments. Implicit variables can be either global or in packages. If two subsystems have common names and you make a stray call between them, it can get hard to debug.
---
I don't understand your code example; can you edit it to add indentation? Just put two spaces at the start of every line of code and it'll preserve indentation. (http://arclanguage.org/formatdoc)
There seem to be two schools of thought around debugging today. The first is to minimize debugging by use of types, like in Java or Haskell. The second is to embrace debugging as an eternal fact of life, and to ease things by making code super lightweight and easy to change.
Both approaches are valid; combining them doesn't seem to work well. The combination of having no safety net at compile time but forcing the programmer to get his program right the very first try -- this seems unrealistic.
PG's style seems to be akin to sketching (http://paulgraham.com/hp.html; search for 'For a long time'). That implicitly assumes you're always making mistakes and constantly redoing code. My version of that is to add unit tests. That way I ensure I'm always making new mistakes.
I'd say both approaches you're talking about are all about failing fast, and that unit tests are a way to shove errors up to compile time manually, by running some arbitrary code after each compile. Languages that let the programmer run certain kinds of code at compile time anyway (like a type system or a macroexpander) have other options for where to shove these errors, though they may not always make sense there.
Conversely, they may not make sense in unit tests: If we want to know that a program behaves a certain way for all inputs, that might be easy to check with a static analysis but difficult (or effectively impossible) to check using example code.
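A toy illustration of treating unit tests as checks run right after each build (Python; the function is made up): the asserts only cover the inputs we thought of, whereas a static analysis could speak about all inputs.

```python
# A "unit test" is just arbitrary code run after each build; a failure
# here stops us as early as a type error would.
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

# Example-based checks: they only cover these specific inputs.
assert clamp(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0
assert clamp(99, 0, 10) == 10
print("all checks passed")
```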
---
"The combination of having no safety net at compile time but forcing the programmer to get his program right the very first try -- this seems unrealistic."
I'd say Arc is a demonstration of this option. XD I thought the whole point of Arc being for sufficiently smart programmers was that no guard rails would be erected to save programmers from their own buggy programs.
---
Anyway, if a language designer is trying to make a language that's easy to debug, static types and unit tests are hardly the only options. Here's a more exhaustive selection:
- Reject obviously buggy programs as being semantically meaningless. This could be any kind of error discovered by semantic analysis, including parse errors and type errors.
- Give the programmer tools to view the complexity of the program in intermediate stages as it simplifies. Step debuggers do this for imperative languages. Other languages may have bigger challenges thanks to staging (like macroexpansion) or notions of "effect" that feature simultaneous, time-sensitive, or tentative behavior, for instance.
- Create rich visualizations of the program's potential behavior. We discussed Bret Victor's demonstrations of this recently (though I didn't participate, lol): http://arclanguage.org/item?id=15966
- Collapse the edit-debug cycle so that diagnostic information is continuously visible as the programmer works. Again, this is something Bret Victor champions with a very visual approach. IDEs also provide this kind of information in the form of highlighting compile time errors.
- Give the running program extra functionality that exposes details of the codebase that would normally be hidden. If a program runs with a REPL or step debugger attached, this can be easy. (Also, a programmer can easily pursue this option in lots of languages by manually inserting these interaction points, whether they're as simple as printing to the console or as complicated as a live level editor.)
- Provide tools that write satisfactory code on the programmer's behalf. IDEs do this interactively, especially in languages where sophisticated static analysis can be performed. Compilers do this to whole programs.
- Provide abstraction mechanisms for the programmer to use, so that a single bug doesn't have to be repeated throughout the codebase.
- Provide the programmer with an obvious way to write their own sophisticated debugging tools. A static analysis library might help here, for instance. An extensible static analysis framework, such as a type system, can also help.
- Provide the programmer with an obvious way to write and run unit tests.
- Simply encourage the programmer to hang in there.
You don't hear people say of Arc, "it worked the first time I wrote it." That's more Haskell's claim to fame.
The dichotomy I'm drawing isn't (in this case) about how much you empower the user but how you view debugging as an activity. I claim that Haskellers would like you to reason through the correctness of a program before it ever runs. They consider debugging to be waste. I consider it to be an essential part of the workflow.
The points on the state space that you enumerate are totally valid; I was just thinking at a coarser granularity. All your options with the word 'debug' (at least) belong in my second category.
Perhaps what's confusing is the word 'debugging' with all its negative connotations. I should say instead, relying on watching the program run while you build vs relying just on abstract pre-runtime properties. It's the old philosophical dichotomy of finding truth by reason vs the senses.
By fixing some mistakes I've made, I can go forward.
I think I'm able to eliminate the def and have a working evaluation/call system.
Let's say, we can have symbols and lists of symbols only. Symbols can be bound to another symbol or list.
For numbers and integers, the arithmetic functions work on the symbols as if they were numbers or integers. I don't see any problem in that, i.e., lambda calculus.
Also, keep the previous scope system.
Evaluation. An evaluation of a symbol gives its bound symbol or list.
If one evaluates a list, it's a call.
And now, the calls.
We can call everything. A call on a symbol binds the symbol to the following argument, or to a list of the following arguments. If the symbol hasn't been called before in the current scope, it defines a new symbol in the scope.
And if one calls a list, it's a function call.
So the previous code looks like this now:
('y '2)
{
  ('x '1)
  ('MyFn '((pr x) (pr y)))
  (MyFn)
  {
    ('x '10) ; no problem with this anymore
    ('y '3.14)
    (MyFn)
  }
  (MyFn)
}
> 1
> 2
> 10
> 3.14
> 10
> 2
What we can see now is that, everything ends up with a '.
That's why I would like to explore the opposite strategy: a ' in front of what I want to evaluate.
It gives:
(y 2)
{
  (x 1)
  ('pr x)
  (MyFn (('pr 'x) ('pr 'y)))
  ('MyFn)
  {
    (x 10)
    (y 3.14)
    ('MyFn)
  }
  ('MyFn)
}
> x
> 1
> 2
> 10
> 3.14
> 10
> 2
I would like to put a star (like in C) instead of a ' for evaluation, but I didn't succeed.
That's a lot like PicoLisp. In PicoLisp, functions are just lists:
: (de foo (X Y) # Define the function 'foo'
(* (+ X Y) (+ X Y)) )
-> foo
: (foo 2 3) # Call the function 'foo'
-> 25
: foo # Get the VAL of the symbol 'foo'
-> ((X Y) (* (+ X Y) (+ X Y)))
Unfortunately, this approach means not having lexical scope. If any function has a parameter named * or + and it calls foo, foo's behavior might be surprising. Worse, you can't use lambdas to encapsulate state! (Or other context...)
With dynamic scope, you might as well define every function at the top level; local function syntax is only useful for code organization.
In some cases, dynamic scope can be useful for certain variables (typically configuration variables), but it's actually very easy to simulate dynamic scope in a non-concurrent program; just change a global variable and reset it afterwards.
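The "change a global variable and reset it afterwards" trick looks like this in Python (a sketch; the names are hypothetical):

```python
# Simulating a dynamically scoped configuration variable in a
# non-concurrent program: set a global, call, then restore it.
verbose = False   # a "configuration variable" with a global default

def report():
    return "noisy" if verbose else "quiet"

def with_verbose(f):
    global verbose
    saved = verbose        # remember the outer value
    verbose = True         # "dynamically bind" it for the extent of the call
    try:
        return f()
    finally:
        verbose = saved    # reset afterwards, even on error

print(report())               # quiet
print(with_verbose(report))   # noisy -- report sees the dynamic binding
print(report())               # quiet again
```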
---
"I would like to put a star (like in C) instead of a ' for evaluation, but I didn't succeed."
>> That's a lot like PicoLisp. In PicoLisp, functions are just lists:
Functions are lists of instructions/operations you can call and re-call. In every language of the world - meaning there is no reason they should be treated as a special case or with a special type.
The true concept is the call.
>> Unfortunately, this approach means not having lexical scope. If any function has a parameter named * or + and it calls foo, foo's behavior might be surprising.
That's about bad programming. Just know what you're doing.
>> Worse, you can't use lambdas to encapsulate state! (Or other context...)
I'm gonna look at those lambdas. Thx
>> With dynamic scope, you might as well define every function at the top level; local function syntax is only useful for code organization.
That's not a question of code organization. That is a question of sense. If you define your functions at the top level because you can do it, you'll need a debugger and a default scope system based on lexical scope. Believe me :)
So with dynamic scope, you just have a functions system which plays its role: the possibility to repeat code. And a macro system which plays its role: the possibility to - over - tweak the source text in a way which has nothing to do with programming in itself. Functions and macros should be orthogonal concepts. That's the meaning of a concept: something which is orthogonal to every other concepts in the system.
>> In some cases, dynamic scope can be useful for certain variables (typically configuration variables), but it's actually very easy to simulate dynamic scope in a non-concurrent program; just change a global variable and reset it afterwards.
The fact that Object Oriented programming exists tells you you're wrong here.
>> That's because asterisks (*) create italics on this forum
"Functions are lists of instructions/operations you can call and re-call. In every languages of the world."
I'm going to nitpick your use of "list" there. There's no reason effects need to be short actions in sequence. We might want to apply effects continuously, in parallel, in combination, in reverse, under supervised control, or distributed on machines under multiple people's (sometimes untrustworthy) administration. I doubt there's one definition of "effect" that will satisfy everyone, but that doesn't mean we should settle for the same old imperative effects in every single language. :)
I'm also going to nitpick the premise that a function is something "you can call and re-call." It can be useful to write a function that you only call once... if you intend to define it more than once. And sometimes it's useful to enforce that a function that can only be invoked once, perhaps for security; languages can potentially help us express properties like that.
---
"That's about bad programming. Just know what you're doing."
If I write a library with (de foo (X Y) (* (+ X Y) (+ X Y))) in it, would you say I should document the fact that it isn't compatible with programs that use + and * as local variables? Fair enough.
However, suppose we were given a language that had foo in it, and it behaved strangely whenever we used + or * as a local variable. Outrageous! That's a "hard coded pseudo-concept"! :-p We should ditch that language and build a new one with more consistent and orthogonal principles.
Alas, that language is exactly what we're using as soon as we define (de foo (X Y) (* (+ X Y) (+ X Y))).
---
"If you define your functions at the top level because you can do it, you'll need a debugguer and a default scope system based on lexical scope."
No need for a debugger if you're a good programmer, right? :-p And I'm not sure what you mean by needing lexical scope, since we're assuming that we've given up lexical scope already.
But I forgot, one downside to defining functions at the top level is that you need to give them names (global names). Maybe this is related to what you mean.
---
"Functions and macros should be orthogonal concepts."
Who's saying otherwise?
---
"The fact that Object Oriented programming exists tells you you're wrong here."
The fact that OO exists is irrelevant. My point is that Arc's global scope is enough to achieve dynamic scope in a non-concurrent program. Who cares what other languages do?
(Incidentally, most popular OO languages also have global scope--static fields--which allows exactly the same kind of dynamic scope technique.)
I'm a little lost :) Are you in favor of lexical scope or against it?
The argument that lexical scopes are entangled with our notion of functions, so let's drop them since they're not an orthogonal concept -- that seems internally consistent and worth thinking about.
Oh sorry if I've not been clear: I'm in favor of dynamic scope :)
>> The argument that lexical scopes are entangled with our notion of functions, so let's drop them since they're not an orthogonal concept
Exactly. In the fun exploration of an ultimate language for good programming, name conflicts should not drive the language design at all.
Programmers should manage their name spaces with care. Also, having a tool for this, like namespaces, is not a problem. It even seems pretty good, and it fixes everything.
"Programmers should manage their namespaces with care."
Totally. I think I'm closer to your point of view than anybody here (http://arclanguage.org/item?id=15587, footnote 1; http://arclanguage.org/item?id=12777). I've gradually moved to the dark side in several ways: I no longer care about hygiene[1] or making macros easy to compile[2]. But I still hold on to lexical scope for reasons I can't fully articulate. If unit tests are as great as I say they are, do we need lexical scope? Without them changes may break seemingly distant, unrelated code. Something to think about.
>> If unit tests are as great as I say they are, do we need lexical scope?
Very very interesting. Unit testing... This is such an engineering concept. Why not build it into the language with meta tags (I don't know if it's possible at all)?
>> Without them changes may break seemingly distant, unrelated code. Something to think about.
Let's try the fun of an extreme code expansion language without any compromise :)
Some of you were right, there is a real problem with names/scopes I hadn't expected. But I have the answer to everything :)
Libraries.
What are libraries? They are application foundations. In other words, applications are built on top of libraries.
So let's make it as it should.
A library is a function which takes as argument another library or an end application.
Let loadlast be a function which binds to a symbol the eval of the last instruction of a file. And let's use the Arc evaluation syntax.
App.ext:
////////////// app.ext ////////////////////
(loadlast '"lib1.ext" 'MyLib1)
(loadlast '"lib2.ext" 'MyLib2)
(loadlast '"lib3.ext" 'MyLib3)
(= 'MyApp
'(*put your application here*))
(MyLib1 '(MyLib2 '(MyLib3 MyApp))) ; This launches the whole thing
MyApp ; that makes MyApp a lib. MyApp works with MyLib1, MyLib2 and MyLib3, and thus must be embedded at least on top of a stack which contains them.
Lib1.ext:
///////////// lib1.ext ////////////////////
{
  *blabla*
  {
    arg1 ; it evaluates (MyLib2 '(MyLib3 MyApp)), which can now use lib1 via the dynamic scope system
  }
  *blabla*
}
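The stacking idea can be sketched in Python (hypothetical names; each "library" is just a function that takes the next layer as an argument):

```python
# Sketch of "a library is a function that takes the next layer as an
# argument"; each layer sets up its context, then runs what's above it.
def my_lib3(inner):
    return "lib3(" + inner() + ")"

def my_lib2(inner):
    return "lib2(" + inner() + ")"

def my_lib1(inner):
    return "lib1(" + inner() + ")"

def my_app():
    return "app"

# like (MyLib1 '(MyLib2 '(MyLib3 MyApp))): each layer runs the next
result = my_lib1(lambda: my_lib2(lambda: my_lib3(my_app)))
print(result)   # lib1(lib2(lib3(app)))
```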