Arc Forum
2 points by akkartik 4606 days ago | link | parent

Interesting idea to have functions not create a new scope by default. But it would make it too easy to create dynamically-scoped variables.

  (def foo() {
    (def a 34)
    (bar)})

  (def bar()
    a)

Here a acts dynamically scoped. I think it's very valuable to have a way to say "this variable is lexically scoped", meaning that functions called within the scope can't access it.


1 point by FredBrach 4606 days ago | link

Ok, let's do that, if I've understood correctly. That's cool :)

>> I think it's very valuable to have a way to say "this variable is lexically scoped", meaning that functions called within the scope can't access it.

Do you mean can't access it in the sense of C++ private members, or more like can't use it at all?

In the case of can't use it: why would a function evaluate a variable which does not exist from its point of view? That is, it would be a compilation error.

I've found a solution for the inner zap:

  {
    (def MyFn '( (= MyFn '(3.14)) (1) ) )
    (pr (MyFn))   ; first call: prints 1, and the call rebinds MyFn
    (pr (MyFn))   ; second call: prints 3.14
  }

  > 1
  > 3.14

:)

EDIT: you have edited your text, so I need to re-evaluate it. But unfortunately, I have to sleep now :D Let's continue tomorrow :) Thank you, this is pretty interesting :)

-----

1 point by akkartik 4606 days ago | link

Yeah, sorry, I got rid of the let from my code example; I thought I should follow your example more closely. Was that the change you noticed?

I think it would be really hard to implement let with lexical scope. To do so you'd have to delete some bindings from the current scope before each function call. Calls would then modify the scope on their way in, sometimes deleting bindings and sometimes not. That seems confusing.
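
For instance, suppose let were the lexical-scope marker. A hypothetical sketch in your syntax:

  (def foo() {
    (let a 34     ; a is lexically scoped, so its binding must be
      (bar))})    ; stripped from the scope that bar receives

  (def bar()
    a)            ; error: a is not visible here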

---

I don't follow your distinction between access and use...

"Why would a function evaluate a variable which does not exist from its point of view?"

Primarily because it assumes some implicit 'global' bindings. Like function names:

  (def foo() {
    (def list '(1 2 3))
    (bar)})

  (def bar()
    (list 1 2)) ; error

Much of the power of lisp derives from having just a few powerful concepts; function names are symbols just like any other and you can shadow their bindings like anything else.

Even aside from functions, codebases tend to have variables that are implicitly accessed without being passed in as arguments. Implicit variables can be either global or in packages. If two subsystems have common names and you make a stray call between them, it can get hard to debug.
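
For example, a hypothetical sketch (render, log, and buf are names I'm making up, under the same no-new-scope semantics as above):

  (def render() {
    (def buf "rendering")   ; meant to be private to render
    (log)                   ; stray call into another subsystem
    (pr buf)})              ; prints "logged", not "rendering"

  (def log()
    (= buf "logged"))       ; silently clobbers render's buf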

---

I don't quite follow your code example; can you walk me through what it's supposed to do? (By the way, putting two spaces at the start of every line of code preserves indentation on the forum: http://arclanguage.org/formatdoc)

-----

1 point by FredBrach 4604 days ago | link

>> Primarily because it assumes some implicit 'global' bindings. Like function names:

Nothing in that should drive the design of a language.

>> Implicit variables can be either global or in packages.

This is what I'm trying to improve on above.

>> it can get hard to debug.

Debugging is about bad programming.

-----

1 point by akkartik 4603 days ago | link

> Debugging is about bad programming.

If so, I'm a terrible programmer :)

There seem to be two schools of thought around debugging today. The first is to minimize debugging by use of types, like in Java or Haskell. The second is to embrace debugging as an eternal fact of life, and to ease things by making code super lightweight and easy to change.

Both approaches are valid; combining them doesn't seem to work well. The combination of having no safety net at compile time but forcing the programmer to get his program right the very first try -- this seems unrealistic.

PG's style seems to be akin to sketching (http://paulgraham.com/hp.html; search for 'For a long time'). That implicitly assumes you're always making mistakes and constantly redoing code. My version of that is to add unit tests. That way I ensure I'm always making new mistakes.
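
A minimal sketch of what I mean, in plain Arc (check is just a name I'm making up):

  (mac check (expr)
    `(if ,expr
         (prn "ok:   " ',expr)
         (prn "FAIL: " ',expr)))

  (check (is (+ 2 2) 4))   ; prints ok:   (is (+ 2 2) 4)
  (check (is (+ 2 2) 5))   ; prints FAIL: (is (+ 2 2) 5)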

-----

1 point by rocketnia 4603 days ago | link

I'd say both approaches you're talking about are all about failing fast, and that unit tests are a way to shove errors up to compile time manually, by running some arbitrary code after each compile. Languages that let the programmer run certain kinds of code at compile time anyway (like a type system or a macroexpander) have other options for where to shove these errors, though they may not always make sense there.
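
A toy sketch of that last option in Arc -- static-check is a made-up macro whose test runs during macroexpansion rather than at runtime:

  (mac static-check (expr)
    (unless (eval expr)              ; evaluated while the form is
      (err "static-check failed"))   ; being expanded, not when it runs
    nil)

  (static-check (is (len '(a b)) 2))   ; a failure here would error
                                       ; before the program starts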

Conversely, they may not make sense in unit tests: If we want to know that a program behaves a certain way for all inputs, that might be easy to check with a static analysis but difficult (or effectively impossible) to check using example code.

---

"The combination of having no safety net at compile time but forcing the programmer to get his program right the very first try -- this seems unrealistic."

I'd say Arc is a demonstration of this option. XD I thought the whole point of Arc being for sufficiently smart programmers was that no guard rails would be erected to save programmers from their own buggy programs.

---

Anyway, if a language designer is trying to make a language that's easy to debug, static types and unit tests are hardly the only options. Here's a more exhaustive selection:

- Reject obviously buggy programs as being semantically meaningless. This could be any kind of error discovered by semantic analysis, including parse errors and type errors.

- Give the programmer tools to view the complexity of the program in intermediate stages as it simplifies. Step debuggers do this for imperative languages. Other languages may have bigger challenges thanks to staging (like macroexpansion) or notions of "effect" that feature simultaneous, time-sensitive, or tentative behavior, for instance.

- Create rich visualizations of the program's potential behavior. We discussed Bret Victor's demonstrations of this recently (though I didn't participate, lol): http://arclanguage.org/item?id=15966

- Collapse the edit-debug cycle so that diagnostic information is continuously visible as the programmer works. Again, this is something Bret Victor champions with a very visual approach. IDEs also provide this kind of information in the form of highlighting compile time errors.

- Give the running program extra functionality that exposes details of the codebase that would normally be hidden. If a program runs with a REPL or step debugger attached, this can be easy. (Also, a programmer can easily pursue this option in lots of languages by manually inserting these interaction points, whether they're as simple as printing to the console or as complicated as a live level editor. See the sketch after this list.)

- Provide tools that write satisfactory code on the programmer's behalf. IDEs do this interactively, especially in languages where sophisticated static analysis can be performed. Compilers do this to whole programs.

- Provide abstraction mechanisms for the programmer to use, so that a single bug doesn't have to be repeated throughout the codebase.

- Provide the programmer with an obvious way to write their own sophisticated debugging tools. A static analysis library might help here, for instance. An extensible static analysis framework, such as a type system, can also help.

- Provide the programmer with an obvious way to write and run unit tests.

- Simply encourage the programmer to hang in there.
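
For instance, here's a tiny Arc sketch of one of those print-to-the-console interaction points (dbg is a name I'm making up):

  (mac dbg (expr)
    `(let result ,expr
       (prn "dbg: " ',expr " => " result)
       result))

  (dbg (* 6 7))   ; prints dbg: (* 6 7) => 42, then returns 42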

-----

1 point by akkartik 4603 days ago | link

"I'd say Arc is a demonstration of this option."

You don't hear people say of Arc, "it worked the first time I wrote it." That's more Haskell's claim to fame.

The dichotomy I'm drawing isn't (in this case) about how much you empower the user but how you view debugging as an activity. I claim that Haskellers would like you to reason through the correctness of a program before it ever runs. They consider debugging to be waste. I consider it to be an essential part of the workflow.

The points on the state space that you enumerate are totally valid; I was just thinking at a coarser granularity. All your options with the word 'debug' (at least) belong in my second category.

Perhaps what's confusing is the word 'debugging' with all its negative connotations. I should say instead, relying on watching the program run while you build vs relying just on abstract pre-runtime properties. It's the old philosophical dichotomy of finding truth by reason vs the senses.

-----