Data lives exactly as long as the lexical scope that created it.
Outer scopes can never retain references to inner allocations.
There is no GC.
No traditional Rust-style borrow checker.
No hidden lifetimes.
No implicit reference counting.
When a scope exits, everything allocated inside it is freed deterministically.
---
Here’s the basic idea in code:
fn handler() {
    let user = load_user()      // task-scoped allocation
    CACHE.set(user)             // compile error: escape from inner scope
    CACHE.set(user.clone())     // explicit escape
}
If data needs to escape a scope, it must be cloned or moved explicitly. The compiler enforces these boundaries at compile time; there are no runtime lifetime checks.
Memory management becomes a structural invariant. Instead of the runtime tracking lifetimes, the program structure makes misuse unrepresentable.
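For something concrete, here's a rough analogy in today's Rust (just a sketch, not the proposed language — Rust gets there via its borrow checker rather than pure scope structure): a value owned by an inner scope is dropped when the scope exits, and the only way out is an explicit clone or move.

use std::string::String;

fn main() {
    let mut cache: Vec<String> = Vec::new();
    {
        let user = String::from("alice"); // scope-local allocation
        // Storing a borrow of `user` in the longer-lived `cache` is exactly
        // the escape the model rejects; in Rust it fails the borrow check.
        cache.push(user.clone()); // explicit escape: the clone crosses the boundary
    } // `user` is dropped (freed) deterministically here
    println!("{:?}", cache); // ["alice"]
}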
Concurrency follows the same containment rules.
fn fetch_all(ids: [Id]) -> Result<[User]> {
    parallel {
        let users = fetch_users(ids)?
        let prefs = fetch_prefs(ids)?
    }
    merge(users, prefs)
}
If any branch fails, the entire parallel scope is cancelled and all allocations inside it are freed deterministically. This is structured concurrency in the literal sense: when a parallel scope exits (success or failure), its memory is cleaned up automatically.
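Rust's scoped threads give a rough feel for this (again only an analogy): tasks spawned inside std::thread::scope are guaranteed to finish before the scope returns, so nothing they borrow can outlive it. Note that std joins siblings rather than cancelling them on failure, so the cancellation semantics described above are strictly stronger.

use std::thread;

fn fetch_all(ids: &[u64]) -> (Vec<String>, Vec<String>) {
    thread::scope(|s| {
        // Both tasks may borrow `ids` because the scope guarantees
        // they are joined before `fetch_all` returns.
        let users = s.spawn(|| ids.iter().map(|id| format!("user-{id}")).collect::<Vec<_>>());
        let prefs = s.spawn(|| ids.iter().map(|id| format!("prefs-{id}")).collect::<Vec<_>>());
        // A panic in either branch propagates out of the scope, and all
        // scope-local state is dropped on the way out.
        (users.join().unwrap(), prefs.join().unwrap())
    })
}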
Failure and retry are also explicit control flow, not exceptional states:
let result = restart {
    process_request(req)?
}
A restart discards the entire scope and retries from a clean slate. No partial state.
No manual cleanup logic.
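In ordinary Rust, restart looks roughly like a retry loop where each attempt owns all of its state (a sketch; the max_attempts bound is my addition so the loop terminates):

fn restart<T, E>(max_attempts: usize, mut attempt: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut last_err = None;
    for _ in 0..max_attempts {
        match attempt() {
            Ok(value) => return Ok(value),
            // Everything the attempt allocated has already been dropped
            // by the time we get here; the next attempt starts clean.
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.expect("max_attempts must be at least 1"))
}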
---
Why I think this is meaningfully different:
The model is built around containment rather than entropy: instead of letting shared state accumulate over time, certain unsafe states are prevented not by convention or discipline, but by structure.
This eliminates:
* Implicit lifetimes and hidden memory management
* Memory leaks and dangling pointers (the scope is the owner)
* Shared mutable state across unrelated lifetimes
If data must live longer than a scope, that fact must be made explicit in the code.
---
What I’m trying to learn at this stage:
1. Scalability. Can this work for long-running, high-performance servers without falling back to GC or pervasive reference counting?
2. Effect isolation. How should I/O and side effects interact with scope-based retries or cancellation?
3. Generational handles. Can generational handles replace traditional borrowing without excessive overhead? (A rough sketch of the idea follows this list.)
4. Failure modes. Where does this model break down compared to Rust, Go, or Erlang?
5. Usability. What common patterns become impossible, and are those useful constraints or deal-breakers?
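For context on point 3, here is a minimal illustration of what I mean by a generational handle (Rust, purely illustrative): a handle carries an index plus a generation counter, and freeing a slot bumps the generation, so stale handles are detected rather than dangling.

struct Handle {
    index: usize,
    generation: u32,
}

struct Slots<T> {
    slots: Vec<(u32, Option<T>)>, // (current generation, value)
}

impl<T> Slots<T> {
    fn get(&self, h: &Handle) -> Option<&T> {
        let (generation, value) = self.slots.get(h.index)?;
        // A stale handle (its slot was freed and reused) simply misses.
        if *generation == h.generation { value.as_ref() } else { None }
    }

    fn remove(&mut self, h: &Handle) -> Option<T> {
        let (generation, value) = self.slots.get_mut(h.index)?;
        if *generation != h.generation { return None; }
        *generation += 1; // invalidate all outstanding handles to this slot
        value.take()
    }
}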
---
Some additional ideas under the hood, still exploratory:
* Structured concurrency with epoch-style management (no global atomics)
* Strictly pinned execution zones per core, with lock-free allocation (a rough bump-allocator sketch follows this list)
* Crash-only retries, where failure always discards the entire scope
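As a gesture at the second bullet, here's a toy bump arena (illustrative only; per-core pinning and the real allocator design are out of scope): allocation is a plain pointer bump with no atomics, and the whole zone is reclaimed in one reset when its scope ends.

struct BumpArena {
    buf: Vec<u8>,
    used: usize,
}

impl BumpArena {
    fn new(capacity: usize) -> Self {
        Self { buf: vec![0; capacity], used: 0 }
    }

    // Lock-free by construction: the arena is owned by a single
    // core/zone, so allocation is just a pointer bump.
    fn alloc(&mut self, len: usize) -> Option<&mut [u8]> {
        let start = self.used;
        let end = start.checked_add(len)?;
        if end > self.buf.len() {
            return None; // zone exhausted; no hidden fallback to a global heap
        }
        self.used = end;
        Some(&mut self.buf[start..end])
    }

    // Scope exit: everything allocated in the zone is freed at once.
    fn reset(&mut self) {
        self.used = 0;
    }
}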
---
But the core question comes first:
Can a strictly scope-contained memory model like this actually work in practice, without quietly reintroducing GC or traditional lifetime machinery?
NOTE: This isn’t meant as “Rust but different” or nostalgia for old systems.
It’s an attempt to explore a fundamentally different way of thinking about memory and concurrency.
I’d love critical feedback on where this holds up — and where it collapses.
Thanks for reading.
- I once worked for about a decade with a body of server-side C code that was written like this. Almost every data structure was either statically allocated at startup or on the stack. I inherited the codebase and kept the original style, once I'd got my head around it.
Positives were that it made the code very easy to reason about, and my impression was that it made it reliable - ownership of data was mostly obvious, and it was hard to (for example) mistakenly use a data structure after it had been freed. Memory usage under load was very predictable.
Downsides were that data structures (such as string buffers) had to be sized for the max use-case, and code changes had to be hammered into a basically hierarchical data model. It was also hard to incorporate third-party library code - leading to it having its own http and smtp handling, which wasn't great. Some of that might be a consequence of the choice of base language though.
- I'm not sure this needs to be its own language.
In C/C++, this can be done by just not using malloc() or new.
You can get an awfully long way in C with only stack variables (or even no variables, functional style). You can get a little bit further with variable length arrays, and alloca() added to the mix.
With C++, you have the choice of stack, or raw new/delete, or unique_ptr, or shared_ptr / reference counting. I think this "multi-paradigm" approach works pretty well, but of course it's complicated, and lots of people mess it up.
I think, with well-designed C/C++, 90+% of things can be on the stack, and dynamic allocation can be very much the exception.
I've been switching back and forth across C/C++/Java for the past few months. The more I think about it, the more ridiculous/pathological the Java approach seems: every object is dynamically allocated, and it's impossible to create an object anywhere but the heap.
I think the main problem is kind of a human one, that people see/learn about dynamic allocation/shared_ptr etc. and it becomes a hammer and everything looks like a nail, and they forget the prospect of just using stack variables, or more generally doing the simplest thing that will work.
Maybe some kind of language where doing dumb things is an error would be good. For example, in C++, if you do new and delete in the same scope, it's an error because it could have been a stack variable, just like unreachable code is an error in Java.
- Very interesting! I suggest following up on a Rust core-devs forum, as there might be a higher concentration of people capable of giving feedback there.
- J has some of this approach, but it was made mostly for math, so it isn't optimized for CRUD apps.
- Great work! I look forward to the responses.