
async/await/suspend/resume #6025

Open
Tracked by #11899
andrewrk opened this issue Aug 11, 2020 · 63 comments
Labels
enhancement Solving this issue will likely involve adding new logic or components to the codebase. frontend Tokenization, parsing, AstGen, Sema, and Liveness.
Milestone

Comments

@andrewrk
Member

andrewrk commented Aug 11, 2020

This is a sub-task of #89.

@andrewrk andrewrk added enhancement Solving this issue will likely involve adding new logic or components to the codebase. frontend Tokenization, parsing, AstGen, Sema, and Liveness. labels Aug 11, 2020
@andrewrk andrewrk added this to the 0.7.0 milestone Aug 11, 2020
@andrewrk andrewrk modified the milestones: 0.7.0, 0.8.0 Oct 13, 2020
@andrewrk andrewrk modified the milestones: 0.8.0, 0.9.0 Jun 4, 2021
@andrewrk andrewrk modified the milestones: 0.9.0, 0.10.0 Nov 21, 2021
@Vexu Vexu mentioned this issue Jun 20, 2022
7 tasks
@Vexu Vexu modified the milestones: 0.10.0, 0.11.0 Aug 6, 2022
@linkpy

linkpy commented Nov 10, 2022

I'm willing to try implementing async/await/suspend/resume for stage2, as I require them for a project I'm working on.

The issue is that I don't really know where to start.
It seems like AstGen supports them.
Sema doesn't (it calls failWithUseOfAsync, so here I know where I need to work), so I'll start with that.

The AIR only has async_call, async_call_alloc, suspend_begin, and suspend_end instructions. By looking at stage1, it seems like the await/suspend/resume instructions are missing. Should I try to just add the instructions and replace the calls to failWithUseOfAsync by looking at how stage1 implements them?

Furthermore, is async implemented in a similar way in stage2 as in stage1? (Basically, is stage1 a good representation of how stage2 implements and uses frames, async calls, suspends, resumes, etc.?)

Edit: I've been using the stage2-async branch, assuming that's where the async development is being done.

andrewrk added a commit that referenced this issue Dec 1, 2022
This commit removes async/await/suspend/resume from the language
reference, as that feature does not yet work in the self-hosted
compiler.

We will be regressing this feature temporarily. Users of these language
features should stick with 0.10.x with the `-fstage1` flag until they
are restored.

See tracking issue #6025.
@kuon
Contributor

kuon commented Dec 6, 2022

I've been following the WASI development and it seems to be going great! That being said, I am currently working on a new project and I am using some specific stage2 features. I am not using async yet, but I'd love to introduce it soon. Can you provide a very rough estimate of when this is planned to be merged in master? It is just for general planning (no pressure). Cheers!

francisbouvier added a commit to lightpanda-io/zig-js-runtime that referenced this issue Jan 12, 2023
*Async JS*

For now only callback style is handled (Promises is planned later).

We use a persistent handle on the v8 JS callback, called after retrieving the
event from the kernel, as the parent JS function has finished and its
local handles have therefore already been garbage collected by v8.

*Event Loop*

We do not use the event loop provided in the Zig stdlib but instead Tigerbeetle
IO (https://github.com/tigerbeetledb/tigerbeetle/tree/main/src/io).
The main reason is to have a strictly single-threaded event loop, see
ziglang/zig#1908.
In addition, the design of Tigerbeetle IO, based on io_uring (for Linux,
with a wrapper around kqueue for macOS), seems to be the right direction for IO.

Our loop provides callback-style native APIs. Async/await-style native
APIs are not planned until the Zig self-hosted compiler (stage2) supports
concurrency features (see ziglang/zig#6025).

Signed-off-by: Francis Bouvier <[email protected]>
mlugg added a commit to mlugg/zig that referenced this issue Mar 17, 2023
There are now very few stage1 cases remaining:
* `cases/compile_errors/stage1/obj/*` currently don't work correctly on
  stage2. There are 6 of these, and most of them are probably fairly
  simple to fix.
* `cases/compile_errors/async/*` and all remaining `safety/*` depend on
  async; see ziglang#6025.

Resolves: ziglang#14849
mlugg added a commit to mlugg/zig that referenced this issue Mar 17, 2023
mlugg added a commit to mlugg/zig that referenced this issue Mar 19, 2023
mlugg added a commit to mlugg/zig that referenced this issue Mar 19, 2023
andrewrk pushed a commit that referenced this issue Mar 20, 2023
truemedian pushed a commit to truemedian/zig that referenced this issue Mar 30, 2023
@Vexu Vexu removed this from the 0.11.0 milestone Apr 23, 2023
@brymer-meneses

I really like the async_func().await syntax in rust. It's so much cleaner than having to do something like

var some_value = (await (await async_func()).some_other_thing()).finally()
var some_value = async_func().await.some_other_thing().await.finally()

Would it be possible to adopt this kind of syntax?

@applejag

applejag commented Jun 5, 2024

I really like the async_func().await syntax in rust. It's so much cleaner than having to do something like

var some_value = (await (await async_func()).some_other_thing()).finally()
var some_value = async_func().await.some_other_thing().await.finally()

Would it be possible to adopt this kind of syntax?

The original async syntax zig had would not be affected by this, as explained in this blog post from 2020: https://kristoff.it/blog/zig-colorblind-async-await/

Don't know if the syntax idea has changed, but I really liked that Zig just flipped the async/await function call usage, so that the common case of calling a non-async function and calling an async function has the same syntax.

const some_value = async_func().some_other_thing().finally()

// Equivalent to:
const frame = async async_func()
const other_frame = async (await frame).some_other_thing()
const some_value = (await other_frame).finally()

// Equivalent to:
const some_value = (await async (await async async_func()).some_other_thing()).finally()

Though, in this reversed case where you want to grab the async frame, maybe it could then be a field named async on the async function and an await field on the frame, just to get rid of the wrapping parentheses:

const frame = async_func.async()
const other_frame = frame.await().some_other_thing.async()
const some_value = other_frame.await().finally()

// Equivalent to:
const some_value = async_func.async().await().some_other_thing.async().await().finally()

@mlugg
Member

mlugg commented Jun 5, 2024

That syntax has not changed, and (if async is re-implemented) will not change, because it's required for colorless async. So, yes, we don't need await to be postfix for convenient chaining, because you just write async_func().async_method().synchronous_method().another_async_method() and everything works.

@revskill10

revskill10 commented Jul 2, 2024

I think you could make async an actual function, to transform a sync function into an async one.

const asyncfn = std.async(syncfn);
const result = asyncfn();

@xphoenix

xphoenix commented Aug 5, 2024

Good day, any ETA for this?

@wooster0

wooster0 commented Aug 5, 2024

https://github.com/ziglang/zig/wiki/FAQ#what-is-the-status-of-async-in-zig

@sirweixiao

This comment was marked as off-topic.

@mlugg
Member

mlugg commented Oct 12, 2024

Please don't add noise like this to the issue tracker.

  • The fate of async in Zig is undecided; it may return, if we determine we can do it well. See the FAQ entry linked above.
  • Zig is not C++; the design constraints are incredibly different. In particular, Zig has a focus on simplicity and minimalism which is in stark contrast to C++'s design.
  • More generally, the direction other languages take, particularly those such as C++, does not have any effect on us as a language.
  • The statement "coroutines are the trend of the future" is a non-point which cannot be used to justify a complex language feature.

If you can provide a particular reason you think Zig should retain async functionalities (or not) -- especially a concrete use case -- then feel free to give it. Otherwise, rest assured that the core team will get to this issue with time.

@sirweixiao

This comment was marked as spam.

@sirweixiao

This comment was marked as off-topic.

@JiaJiaJiang

JiaJiaJiang commented Oct 13, 2024

Hello everyone, I have a problem; I don't know if it is suitable for this topic.
I read the Thread.zig source code and found that, to create a child thread in Node.js wasm, I need to define the thread-spawn method in the importObject of the WebAssembly.instantiate method. Zig's wasm thread implementation will call this method to create a thread.
However, creating a worker in js is an asynchronous operation, so if I create a new worker in this method, it cannot wait for the thread to be created; the wasm process will continue to execute, and of course it will not be able to join the child thread normally (because it has not even been created yet).
Since the relevant zig operations are currently synchronous, the wasm process continues to occupy the host js process, so the js process cannot complete the creation of the child thread until the wasm code has finished executing.
If there were a way to allow the wasm (zig) process to actively return execution to the js process when creating a thread, and then return to the wasm process after js completes the creation, then this problem would be solved. Otherwise, it seems that there is no good way to use multithreading in Node.js wasm without changing the zig code to fit it (maybe the process needs to be split into several parts and called separately from js).

@kuon
Contributor

kuon commented Oct 13, 2024

@JiaJiaJiang I am not sure if this will fix your issue. But I hacked something that can turn async JS calls into sync zig calls.

On zig side, have something like this:

extern fn send_recv(
    buf: [*]const u8,
    buf_len: usize,
    result: [*]u8,
    result_len: *usize,
) u8;

then, on the JS side (in your WASM thread that you spawn in a web worker), bind a function like this one:

    return function (buf_ptr, buf_len, result_ptr, result_len_ptr) {
        // instance is created with something like: WebAssembly.instantiate(...).instance
        const mem = get_memory_buffer() // i.e. instance.exports.memory.buffer
        const view = get_memory_view() // i.e. new DataView(instance.exports.memory.buffer)
        const ctx = get_shared_context() // see below
        const data = new Uint8Array(mem, buf_ptr, buf_len)

        ctx.lock()
        ctx.write(data)
        ctx.client_notify()
        ctx.unlock()

        ctx.wait_for_server()
        ctx.lock()
        const result = ctx.read()
        ctx.unlock()

        const result_len = view.getUint32(result_len_ptr, true)

        if (result.length === 0) {
            return 1 // error code for zig
        }

        if (result.length > result_len) {
            return 2 // error code for zig
        }

        view.setUint32(result_len_ptr, result.length, true)
        const dest = new Uint8Array(mem, result_ptr, result.length)
        dest.set(result)

        return 0 // success
    }

In another web worker, do something like this:

    const step = async function () {
        const ctx = get_shared_context() // send the same context to both workers
        if (ctx.wait_for_client(10) !== true) {
            step()
            return
        }
        ctx.lock()

        const request = ctx.read()

        // process request.buffer; you can pass JSON commands, function names... encode it the way you like
        const response = await whatever_process_request(request) // this is where the magic happens, as it turns an async call into a sync call
        ctx.write(new Uint8Array(response))

        ctx.server_notify()
        ctx.unlock()
        step()
    }

    step() // this starts an infinite loop

A shared context is something I threw together to synchronize two threads:

export default function SharedContext(buffer) {
    if (!buffer) {
        throw new Error("Buffer must be a shared buffer")
    }
    const META = new Int32Array(buffer, 0, 4)

    const LOCK = 0
    const CLIENT_NOTIFY = 1
    const SERVER_NOTIFY = 2
    const BUF_LEN = 3

    // LOCK values
    const UNLOCKED = 0
    const LOCKED = 1

    // NOTIFY values
    const OFF = 0
    const ON = 1

    const DATA = new Uint8Array(buffer, 16) // start at offset 16

    function write(buf) {
        if (buf.length > DATA.length) {
            return 1
        }

        DATA.set(buf, 0)
        Atomics.store(META, BUF_LEN, buf.length)

        return 0
    }

    function writeU32(n) {
        const buf = new Uint8Array(4)
        new DataView(buf.buffer).setUint32(0, n, true)
        return write(buf)
    }

    function lock() {
        while (true) {
            Atomics.wait(META, LOCK, LOCKED)
            if (
                Atomics.compareExchange(META, LOCK, UNLOCKED, LOCKED) ===
                UNLOCKED
            ) {
                Atomics.notify(META, LOCK)
                break
            }
        }
    }

    function unlock() {
        Atomics.store(META, LOCK, UNLOCKED)
        Atomics.notify(META, LOCK)
    }

    function read() {
        const len = Atomics.load(META, BUF_LEN)
        return DATA.slice(0, len)
    }

    function readU32() {
        const buf = read()
        return new DataView(buf.buffer).getUint32(0, true)
    }

    function client_notify() {
        Atomics.store(META, CLIENT_NOTIFY, ON)
        Atomics.notify(META, CLIENT_NOTIFY)
    }

    function server_notify() {
        Atomics.store(META, SERVER_NOTIFY, ON)
        Atomics.notify(META, SERVER_NOTIFY)
    }

    function wait_for_client(timeout) {
        if (Atomics.wait(META, CLIENT_NOTIFY, OFF, timeout) === "timed-out") {
            return false
        }

        Atomics.store(META, CLIENT_NOTIFY, OFF)

        return true
    }

    function wait_for_server(timeout) {
        if (Atomics.wait(META, SERVER_NOTIFY, OFF, timeout) === "timed-out") {
            return false
        }

        Atomics.store(META, SERVER_NOTIFY, OFF)

        return true
    }

    return {
        buffer,
        lock,
        unlock,
        write,
        read,
        client_notify,
        server_notify,
        wait_for_client,
        wait_for_server,
    }
}

Create it like this: shared_context = SharedContext(new SharedArrayBuffer(1024 * 1024))

This is something I threw together to unblock my project; I didn't analyze the performance, but it works well enough.
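As an aside for readers who want to poke at the Atomics handshake these snippets rely on, here is a minimal single-file sketch (an editor's illustration, not kuon's code; Node.js assumed, and META/DATA merely mirror the layout used above). One side writes a request and raises a notify flag; the other side observes the flag and reads the data back. The blocking paths are deliberately not exercised.

```javascript
// Sketch of the Atomics handshake underlying the SharedContext above
// (assumption: Node.js, where TextEncoder/TextDecoder are globals).
// Layout mirrors the snippet: four Int32 meta slots, data from offset 16.
const sab = new SharedArrayBuffer(64)
const META = new Int32Array(sab, 0, 4)
const DATA = new Uint8Array(sab, 16)

const CLIENT_NOTIFY = 1
const BUF_LEN = 3

// "client" side: write a request, record its length, raise the flag
const request = new TextEncoder().encode("hello")
DATA.set(request, 0)
Atomics.store(META, BUF_LEN, request.length)
Atomics.store(META, CLIENT_NOTIFY, 1)
Atomics.notify(META, CLIENT_NOTIFY)

// "server" side: since the flag is already 1, Atomics.wait returns
// "not-equal" immediately instead of blocking
const waited = Atomics.wait(META, CLIENT_NOTIFY, 0, 10)
const len = Atomics.load(META, BUF_LEN)
const received = new TextDecoder().decode(DATA.slice(0, len))
console.log(waited, received) // "not-equal" "hello"
```

In the real setup the two halves run in different workers sharing the same SharedArrayBuffer, so the "server" side's Atomics.wait actually blocks until the client notifies.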

@JiaJiaJiang

JiaJiaJiang commented Oct 14, 2024

@kuon Thank you for your reply.
I tried to understand your code. Does it implement calling a method in another thread and using an atomic lock to wait for the result?
In my case, the problem occurs before the worker is created. The caller (in wasm) and the called async method (in js) are actually in two different contexts of one thread. So if there is no way for me to actively switch from the middle of the wasm process to js to let it complete the event loop, then js cannot execute any asynchronous code.

@mlugg
Member

mlugg commented Oct 14, 2024

Please have this discussion in a Zig community instead. This issue exists to track the implementation of Zig's async/await feature in the self-hosted compiler. The issue tracker isn't for questions.

@kuon
Contributor

kuon commented Oct 14, 2024

@mlugg I disagree that this discussion is not relevant to this thread. I think it provides good insights into real-world usage and can help prioritize this issue and decide how it should be implemented. I use zig in a fairly large and popular app through WASM, and I was able to work around the missing async with this hack. It literally made the project possible. It shifted my priorities from "Zig needs async asap" to other things.

Deciding if async should come back, and in what exact form, is a very important topic for zig, and all pertinent arguments in favor or against should be weighed. If we can give developers a temporary working solution and unblock their workflow, I think it can have a big impact on their reception of zig.

With that said, I agree that the issue tracker should not be used for a ping/pong kind of discussion, as the essence of the issue can become highly diluted, and I am sorry if my participation made it go that way.

@DiXaS

DiXaS commented Oct 31, 2024

I'm sorry, I'm not an expert in asynchronous programming, but tell me: why is it impossible to add a runtime like Go's, with green threads, behind something like const cor = @import("coroutine")? (This would solve the problem of colored functions.)

@aep

aep commented Nov 5, 2024

@noonien

Could you look into stackful continuations and effects instead?

I intended to propose this a while back but was discouraged because it seemed fundamentally incompatible with what everyone else wants.

I think any sensible proposal would need a good answer for how this works with JavaScript. The current answer is that LLVM coroutines work, so that's just the path of least resistance.

@He-Pin

He-Pin commented Dec 24, 2024

Two-color functions :) Will this problem force all library consumers to use async + await?

@ThatDreche

ThatDreche commented Dec 29, 2024

Hello everyone. I don't know whether this is helpful (I should get more into the community in general), but here are some ideas I had for a kernel project. (I don't know whether this is standard or whether it works at all in practice, but I can't see a problem right now. JS async is a mystery to me, but if it introduces function colours, is it realistic that it would not in zig, when they should be compatible? That's what I grasp from some of the early discussion here.)

The idea is essentially that async functions work like this: there is no difference between async and non-async functions, but there are some functions dedicated to dealing with call stacks: allocate, prepare, resume, await (note: no suspend).

In the design, the most important part is resume. It works, in some pseudo-high-level-assembler-alike syntax, as follows:

resume(new_stack):
    push (all registers)
    push .reenter
    mov sp -> old_stack
    mov new_stack -> sp
    ret # Will always jump to .reenter, but in a different stack, so could also be left out (alongside the push)
.reenter:
    pop (all registers)
    return old_stack

This means, one will always need a stack to jump into. When wanting to call an “async” function, after allocating a stack, one would push all registers (prepare the stack) so that .reenter would place every parameter in the correct register and “resume” the function from the beginning (aka. calling it), as soon as it gets resumed, which may be way later. Now, one has a stack frame one can resume into to start the function.

A modification to the above resume function to support functions actually terminating: when the function being resumed into has already terminated, the stack will simply not get swapped at all (resume will do nothing). It would be enough to save this in the returned stack pointer, as that would be the reentry point. (Note: it is undefined behaviour to use an old pointer to a stack - this could also be changed by reserving the uppermost (my stack grows downward - perhaps too much x86) pointer on the stack for the real stack address and then dereferencing once more - but this undefined behaviour is comparable to use-after-free, in my imagination.)

await then would essentially resume the stack as many times as needed for it to return, and then take the return value from the stack (which would also be saved somewhere there). Here lies another problem: one can simply "ignore" an error by deallocating the stack without ever resuming it again. Although, this poses the question of whether this is ignoring the error or keeping the stack suspended for an infinite amount of time. And if it did not terminate yet and no error has happened yet - does it count if an error would happen at some point? And when it did already terminate and simply was not awaited yet - it is in a state where it is virtually waiting to return the error (in the sense of being busy with passing the error up the stack) - would it be OK to interrupt the process there, although the error has already happened?

So, in short, the following are the problems I can see:

  • One may never continue an async function at all, which, when one would combine stack deallocation with awaiting the stack (in user code), would be a memory leak, which zig does nothing to protect against (except in testing, but one may write a special testing implementation of preparing and awaiting a stack which keeps track of this). But it could actually be meaningful not to continue a noreturn function this way.
  • It would be undefined behaviour if the same stack pointer gets resumed twice (which could be fixed by storing the actual stack pointer at the top)
  • This still does not fix the missing support of external tools (LLVM, debuggers, …)
  • Probably minor: One needs to design an interface around stacks, dealing with allocators always returning the lower address while stacks may grow downward

But it would also open up new possibilities, the following being the main possibilities I can think of:

  • It would be up to user code to implement a scheduler - a function which essentially manages a list of processes (stacks) to resume. (Depending on the implementation, it may use some global mutable variable and gets called on one of the stacks of a process and determines the next stack to resume or may have its own stack which thus gets implicitly passed to the processes - so, resuming the scheduler would essentially be suspend).
  • Calling a function in a new stack can be implemented using this system as a function (instead of a builtin).
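As a loose analogy only (an editor's sketch in JavaScript, not the proposed mechanism and not real stack switching), generators show the same resume-until-done shape that the await described above would have: resume the suspended frame repeatedly until it reports completion, then take the return value.

```javascript
// A hand-rolled "await": drive a suspended frame (here a JS generator)
// until it terminates, then pull out its return value. This mirrors the
// "resume the stack until it returns" loop described above.
function* asyncLikeTask() {
    yield // suspend point 1
    yield // suspend point 2
    return 42
}

function awaitFrame(frame) {
    let step = frame.next() // first resume: runs up to the first suspend
    while (!step.done) {
        step = frame.next() // keep resuming until the frame terminates
    }
    return step.value // the "return value saved on the stack"
}

console.log(awaitFrame(asyncLikeTask())) // 42
```

The scheduler described above would sit where the while loop is, choosing which frame to resume next instead of always resuming the same one.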

@He-Pin

He-Pin commented Dec 30, 2024

I would like to suggest that async function support can live outside the language. There is no magic, and I don't think Zig can do better than other languages within the limits of the async function contract.

@ThatDreche

But I have figured out one more problem: async return needs a stack to continue on. Since asynchronous functions will always have a valid stack to return to, this could be saved somewhere on that stack, but I also have another, admittedly even stranger idea (other than separating asynchronous from synchronous functions): one could only allow noreturn functions as asynchronous functions and then leave it up to the code to implement async return and await. The disadvantage is that calling such a function gets more difficult; the advantage is that one could also implement Python-like yield. But one point which will probably always separate synchronous from asynchronous functions is the fact that an asynchronous function will receive a stack it can resume into as an implicit or explicit parameter.

And I think it has to be built into zig directly, because writing that resume code in an asm block would technically be undefined behaviour (intentional failure to list all clobbers, if you will; an alternative would be to change the wording of the specification).

So, do what you want with that idea, but these are my thoughts on this functionality.

@ethindp

ethindp commented Dec 30, 2024

I'm pretty sure the entire point of Zig's idea of implementing async/await is that the coloring problem won't exist.

@ThatDreche

ThatDreche commented Dec 30, 2024

Well, I have not figured out how to call an async function in js, but I assume the colouring comes exactly from the fact that I simply cannot figure this out. With the solution I wrote, it would be clear how to call an asynchronous function from a synchronous one (create a stack and resume it as often as required for it to terminate, i.e. to write the return value to a result location; yes, all of this can be done from a synchronous function). So you can clearly call an asynchronous function from a synchronous one, and if that is too much labour: write a library for it.

But for the noreturn variant, there should be a possibility to clean up stuff somehow. In principle, when you put the stuff needing to be cleaned up into its own scope, you can use defer, and if that's not possible, then you have to give the cleanup information to the function that also cleans up the stack somehow (e.g. via shared state). For better zig support, it would be optimal to have the stack save a pointer to a cleanup function which, as part of stack deinit, would get called there. (Assuming this will be supported, although this would be a good reason against it.)

@omentic

omentic commented Dec 30, 2024

Hi @ThatDreche. I'm not familiar with Zig async and am mostly following this issue out of curiosity, but I am deeply familiar with effects handlers systems which offer answers to some of your questions. I think you would find the paper Lexical Effect Handlers, Directly particularly interesting.

One may never continue an async function at all, which, when one would combine stack deallocation with awaiting the stack (in user code), would be a memory leak, which zig does nothing to protect against (except in testing, but one may write a special testing implementation of preparing and awaiting a stack which keeps track of this). But it could actually be meaningful not to continue a noreturn function this way.

If I understand what you mean by this, I think this would be a problem with the implementation of the scheduler, right?

It would be undefined behaviour if the same stack pointer gets resumed twice (which could be fixed by storing the actual stack pointer at the top)

This is generally a problem in the literature. Most implementations (incl. the one above) deal with this dynamically (and WasmFX traps), but it is also possible to solve it with a type system treating stacks as affine values (resumed no more than once). You could also keep it as undefined behavior, for sure.

This still does not fix the missing support of external tools (LLVM, debuggers, …)

Yeah, debuggers for sure. If you provide little "bare" pieces of assembly as primitives for your stack switching operations like the paper linked above does, I think LLVM should be alright though.

And if it did not terminate yet and no error has happened yet - does it count when an error would happen somewhen?

You might be interested in looking into effects handlers more broadly, this is kind of their beauty - they allow for implementing all non-local control flow in the same system, in user code, providing compositional semantics for-free. Not sure if they're the right fit for Zig specifically. They're conceptually still a bit of a nightmare (because of how powerful they are), though languages like Effekt have been making strides on that front.

@ethindp

ethindp commented Dec 30, 2024

I mean, with respect to LLVM coroutines, hasn't this already been discussed on this issue as to why they simply are insufficient?

@ThatDreche

One may never continue an async function at all, which, when one would combine stack deallocation with awaiting the stack (in user code), would be a memory leak, which zig does nothing to protect against (except in testing, but one may write a special testing implementation of preparing and awaiting a stack which keeps track of this). But it could actually be meaningful not to continue a noreturn function this way.

If I understand what you mean by this, I think this would be a problem with the implementation of the scheduler, right?

No, as the scheduler only does cooperative scheduling. This means the called function will resume the scheduler at some point. It only means the scheduler itself could decide the function is not worth running any more and leave it as it is.

It would be undefined behaviour if the same stack pointer gets resumed twice (which could be fixed by storing the actual stack pointer at the top)

This is generally a problem in the literature. Most implementations (incl. the one above) deal with this dynamically (and WasmFX traps) but it is also possible to solve this by a types system treating stacks as affine values (resumed no more than once). You could also keep it as undefined behavior for sure.

Assuming I understand what you mean, this is not really a problem at all. Really, in my comments above, one could almost completely replace "problem" with "design decision which has to be made before implementing this". In this case, it is the design decision on how to handle the fact that the used part of the stack may grow or shrink, so the bottom may be at different locations at different interrupts (or however this would be called in cooperative multitasking), and whether the current pointer should be managed by code or by the language.

@Himujjal

Himujjal commented Feb 6, 2025

Use the repassi/zigcoro library for the latest versions of Zig if anybody wants to emulate the functionality until async/await is here in Zig. It's more than enough for now.
