Still no type-checking or memory-safety, but we now have local variables.
http://akkartik.name/post/mu-2019-2
https://github.com/akkartik/mu
It would be really nice to be able to avoid null pointers by construction. But providing opt-in null pointers would require option types.
Option types can be seen as a special case of sum types (tagged unions) but without needing an explicit definition for each unique type. I like Ceylon's generalization, which lets one use types like int|bool.
One interesting idea here is that anonymous unions are to sum types as tuples are to product types. The only wrinkle: it seems natural to refer to the variants of an anonymous union by type (you can't have int|int), but tuples by position ((int, int) is a common use case).
I'm also thinking about Rich Hickey's criticism of Haskell: that it should be possible to pass an int to a function that expects an int|bool. That requires checking types based on their structure rather than their names.
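TypeScript's anonymous unions happen to behave this way already; a rough sketch of the idea (using number/boolean in place of int/bool, since those are TypeScript's names):

```typescript
// Anonymous union: variants are distinguished by type, not by name.
// (number | number just collapses to number, mirroring how int|int is impossible.)
function describe(x: number | boolean): string {
  // Variants are picked out by structure, here via a typeof check.
  if (typeof x === "number") return "number " + x;
  return "boolean " + x;
}

// Tuples are the anonymous product: fields are referenced by position,
// so [number, number] is fine even though number|number is not.
const point: [number, number] = [3, 4];

// A plain number can be passed where number|boolean is expected --
// the union is checked structurally, with no wrapping or tagging.
const a = describe(42);        // "number 42"
const b = describe(true);      // "boolean true"
const c = point[0] + point[1]; // 7
```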
But I'm reluctant to permit passing a point3D to a function expecting a point2D just because its member names are a superset. Perhaps structure should only be checked for anonymous types.
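For contrast, TypeScript does permit exactly this member-superset coercion even for named types; a small sketch (the Point2D/Point3D names are illustrative, not from any real library):

```typescript
type Point2D = { x: number; y: number };
type Point3D = { x: number; y: number; z: number };

function norm2(p: Point2D): number {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

const p3: Point3D = { x: 3, y: 4, z: 5 };
// Accepted: Point3D's members are a superset of Point2D's, so
// structural subtyping silently ignores z. This is the coercion
// the post argues should be reserved for anonymous types.
const n = norm2(p3); // 5
```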
Should we be able to pass in anonymous types anywhere the language expects a type? In members of user-defined types? Any constraints seem surprising.
By now we're well into the territory of features that I'm not sure will see much adoption. I include them just because I want to provide clean concepts without surprising limitations.
I'm curious to hear where others would draw the line: how much of this seems reasonable, and how much is excessive architecture astronautics?
e.g. Foo and Bar can be automatically coerced in:
type Foo = A int * B boolean
type Bar = A int * B boolean
Still no type-checking or memory-safety, but we can now write arbitrary programs using int variables.
There's still no 'var' keyword, so we can't define local variables yet. But that's not insurmountable; just pass in extra arguments for any space you want on the stack 😀
result <- factorial n 0 0 0
Progress has been slow over the holiday season because I've been working on a paper about Mu for https://2020.programming-conference.org/home/salon-2020
But functions can now return outputs.
fn foo a: int -> result/eax: int {
  result <- copy a
  increment result
}
Sources for the memory-safe language, now at 5kLoC.
Caveats: no checking yet, only int types supported.
Design of the type system, particularly defining local variables.
I just built a treeshaker for SubX programs. To my pleasant surprise it took just about 2 hours.
https://github.com/akkartik/mu/blob/master/tools/treeshake.cc
I don't trust it yet, but it does yield some stats:
https://github.com/akkartik/mu/blob/master/linux/stats.txt
Notes:
a) Binary sizes, in spite of being just tens of KB, are quite bloated by tests and unused library functions. They can usually be shrunk to ~20% of their original size. Programming in machine code naturally focuses on the essence.
b) 75% of the LoC in the sources are tests. Verbose!
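The core of such a treeshaker is just reachability over the call graph; a minimal sketch in TypeScript (the graph shape, entry-point name, and function names here are made up for illustration, not SubX's actual format):

```typescript
// Mark phase of a treeshaker: keep only functions reachable from the
// entry point; everything else (tests, unused library code) is dropped.
function treeshake(
  callGraph: Map<string, string[]>, // function -> functions it calls
  entry: string
): Set<string> {
  const live = new Set<string>([entry]);
  const worklist = [entry];
  while (worklist.length > 0) {
    const fn = worklist.pop()!;
    for (const callee of callGraph.get(fn) ?? []) {
      if (!live.has(callee)) {
        live.add(callee);
        worklist.push(callee);
      }
    }
  }
  return live;
}

// Example: the test and the unused helper are unreachable from Entry.
const graph = new Map([
  ["Entry", ["print", "parse"]],
  ["parse", ["print"]],
  ["test-parse", ["parse"]], // a test: calls parse, but nothing calls it
  ["unused-helper", []],
]);
const live = treeshake(graph, "Entry"); // {Entry, print, parse}
```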
In the last 2 weeks I went from being able to translate:
fn foo {
}
to:
fn foo n: int {
  increment n
}
That required 2k LoC. So it seems fair to say that things are still in the "black triangles" phase. And there are still gaping holes. Variable declarations don't really exist yet. (I can just parse them.)
One voice in my head (the one often active when interacting in this forum) whispers that if only I had better tools the process could have been shortened.
Another voice in my head whispers that I'm stupid for taking so long to figure out something somebody else would putatively find obvious. ("If deleting no-op nodes in a dependency graph causes nodes to fire before they're ready, that means some edges are being spuriously cut.") Or maybe I'm just rusty, because I haven't worked with graphs in the 12 years since finishing grad school.
But the dominant voice in my head is just elation, the flush of insight, of having tamed a small portion of the wilderness around me and inside my own head. And it wouldn't have happened without struggling for a while with the wilderness, no matter what tools I had. A big portion of today was spent trying to visualize graphs and finding them too large for my tools to handle. So I had to resort to progressively more and more precise tools. Text-mode scalpels over graphical assistants. And that process of going beyond what my regular tools can handle is a key characteristic of going out into the wilderness. When tools fail, the only thing left is to try something, see what happens, and think. No improvement in tools can substitute for the experience of having gone beyond your tools, over and over again.
There's a famous saying that insights come to the prepared mind. It's easy to read and watch Bret Victor and imagine that we are in the insight delivery business. But we're really in the mind preparing business.
Not much to report this week. Last week I implemented the instruction increment x when x is on the stack. This week I did x <- increment when x is in a register.
(In Mu, registers written to are return values, while memory locations written to are just passed in by reference.)
The good news: I now have a core data structure in place for code-generating different instructions, and this includes static dispatch.