
Access violation when upgrading to anchor 0.25 on account init #2070

Closed
andreihrs opened this issue Jul 19, 2022 · 4 comments · Fixed by #2313

Comments

andreihrs (Contributor) commented Jul 19, 2022

I updated a program to Anchor 0.25.0 and Solana 1.10.29 and hit a new build error that wasn't present in 0.24.2 for the same instruction:

    anchor_lang::Accounts>::try_accounts Stack offset of 4896 exceeded max offset of 4096 by 800 bytes, please minimize large stack variables

I managed to get around this error by boxing new accounts and deleting some clones in the instruction handler.

However, in my BPF tests I then ran into a new error:

    2022-07-18T14:38:14.921391000Z DEBUG solana_runtime::message_processor::stable_log] Program failed to complete: Access violation in stack frame 5 at address 0x200005e10 of size 8 by instruction #80252

I commented out all the code inside the instruction handler and left only the context. I managed to isolate the problem down to just three accounts: the signer, the system program, and a (boxed) account that I meant to initialize.

    #[derive(Accounts)]
    pub struct InitializeOracleMappings<'info> { // hypothetical name; the struct's opening line was omitted in the original report
        #[account(mut)]
        pub admin_authority: Signer<'info>,
        #[account(init, payer = admin_authority, space = 8 + std::mem::size_of::<OracleMappings>())]
        pub oracle_mappings: Box<Account<'info, OracleMappings>>,
        pub system_program: Program<'info, System>,
    }
    #[account]
    #[derive(Debug)]
    pub struct OracleMappings {
        pub _placeholder0: Pubkey,
        // Validated pyth accounts
        pub pyth_1_price_info: Pubkey,
        // pub pyth_2_price_info: Pubkey,
        // pub pyth_3_price_info: Pubkey,
        // pub pyth_4_price_info: Pubkey,
        // pub pyth_5_price_info: Pubkey,
        // pub pyth_6_price_info: Pubkey,
        // pub pyth_7_price_info: Pubkey,
        pub price_pk: Pubkey,
        // All reserved fields now total 124 u64s
        pub _reserved: [u64; 64],
        pub _reserved2: [u64; 32],
        pub _reserved3: [u64; 28],
    }

If I comment out the six pyth pubkeys, the code works as expected. This account is deployed on mainnet without those pubkeys commented out, so it should have worked.

When I ran `cargo build-bpf --dump`, the VM bytecode pointed to the same function call that the stack overflow was pointing to (the `try_accounts` function). I'm wondering whether the new `map_err` introduced in #1800 is causing this breaking change.

Also, when we initialize an account with `init`, is the deserialized version of the account first created on the stack and then copied into the Box allocation?

I managed to create a new repo where the error is reproduced:
https://github.com/andreihrs/anchor-error-reprod

The BPF test can be run with:

    RUST_MIN_STACK=83886080 cargo test-bpf --manifest-path ./programs/error_reprod/Cargo.toml --test tests -- test -- --exact --nocapture

andreihrs (Contributor, Author) commented Jul 19, 2022

Update: I tested locally with the #1800 changes removed from the Anchor build, and it works as expected; the error no longer appears. FYI @paul-schaaf

jeffhu1 commented Jul 26, 2022

Seconded, would also like to get this merged!

solserer-labs commented

I seem to have the same issue after upgrading to 0.25.

mrmizz commented Sep 24, 2022

Here's an isolated commit that reproduces the same error: adding a new account with the `init` annotation blows the stack.
bigtimetapin/somos-crowd@cb48cbe
