[Feature Request] Allow accessing the filesystem in scripts and bundle/expose the fs modules. #306
Please include the fs modules (fs, fs-extra) in the distribution and allow accessing the filesystem. This is needed to generate attachments from files in pre-request scripts. |
I'm interested in this too. I believe Anoop mentioned in this other thread that it's not yet allowed for security reasons, but that the goal would be to allow users to explicitly enable access. My use case is that in my pre-request script, I want to read a token that is stored/updated in a specific file on my filesystem to make HTTP calls to a Vault instance, to then retrieve secrets for use in fetching OAuth tokens that I can use in my requests. If there are any updates on this, please let us know. If you're comfortable opening this up to a contribution, I'd be happy to investigate and make an attempt at a PR. Edit: I tried to implement a workaround locally, but ran into issues using the |
Could it be secure to access files inside collections? I mean, having a module that only has access to files stored in the same path as the .bru files.
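A sketch of the containment check such a module would need (a hypothetical helper, not an existing Bruno API), assuming Node's path module:

```javascript
const path = require('path');

// Allow access only to locations inside the collection root,
// rejecting '../' escapes and absolute paths that leave it.
function isInsideCollection(collectionRoot, target) {
  const resolved = path.resolve(collectionRoot, target);
  const relative = path.relative(collectionRoot, resolved);
  return relative !== '' && !relative.startsWith('..') && !path.isAbsolute(relative);
}
```
|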
There are two kinds of access that we need to consider when executing a script inside vm2. We decided to allow network access for scripts by default, since it is a really common thing that people who use Bruno need. We decided to disable filesystem access by default; it can be enabled in the collection config:

```json
{
  "filesystemAccess": {
    "allow": true
  }
}
```
Yes, we will have to build a wrapper around |
Just got this completed. Cutting a new release now. |
@helloanoop Now we need this supported in the CLI :) Thanks! |
@Rzpeg Can you share your script? You can replace the URLs and other things for privacy. Your script would be super helpful for community members who want to do file uploads using Bruno. |
@helloanoop Sure, here it goes:
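A minimal stand-in sketch of the approach, assuming Bruno's req.setHeader/req.setBody script API and a hypothetical local file name:

```javascript
const fs = require('fs');
const path = require('path');

// Hypothetical file to upload, resolved relative to the collection root.
const filePath = path.join(bru.cwd(), 'upload.bin');
const fileData = fs.readFileSync(filePath);

req.setHeader('Content-Type', 'application/octet-stream');
req.setBody(fileData);
```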
|
@helloanoop Edit: Just did a test run on CLI 0.12.0, and it's not supported yet. |
Great job @Rzpeg. I have tried this, changing "application/octet-stream" to "multipart/form-data", but it does not work. |
This is because the multipart/form-data content type has a different body structure/specification. You need to define boundaries and construct the request in the proper manner:
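For reference, a minimal sketch of constructing such a body with the form-data package (assuming it is available to the script and that Bruno's req.setHeader/req.setBody API is in play):

```javascript
const FormData = require('form-data');
const fs = require('fs');

const form = new FormData();
// Append the file as a buffer so the whole body can be serialized up front.
form.append('file', fs.readFileSync('/path/to/file.bin'), {
  filename: 'file.bin',
  contentType: 'application/octet-stream'
});

// getHeaders() supplies the generated multipart boundary;
// getBuffer() serializes the parts using that boundary.
req.setHeader('Content-Type', form.getHeaders()['content-type']);
req.setBody(form.getBuffer());
```
|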
Hello, I got here via this issue. Please tell me if I should rather open a new ticket, but I found this thread quite relevant. My problem:
What is the current plan on resolving issues like this? Maybe a list of whitelisted packages from Node could be included in |
Yes, I agree with the whitelisting approach. Also curious to know which tool you were using before Bruno :) And how you were managing secrets before. |
@helloanoop whitelisting is a neat idea. |
2 hrs :) |
That's... At least 2 days quicker than anticipated. :) |
Awesome! In all honesty, my current workflow is the good old copy/paste, which I'm trying to get away from. I am currently using Insomnia a bit, on and off. There is an Azure Key Vault plugin that I didn't get to work last time I tried (my fault, no doubt). I stumbled across Bruno today and got curious whether I could get it working 😄 |
@jonasgheer @Rzpeg I have published the Bru CLI. It has support for whitelisting and filesystem access. There is a breaking change in the config:

```json
{
  "version": "1",
  "name": "bruno-testbench",
  "type": "collection",
  "scripts": {
    "moduleWhitelist": [
      "crypto"
    ],
    "filesystemAccess": {
      "allow": true
    }
  }
}
```

Let me know if you face any further issues. Also @jonasgheer, once you get Azure Key Vault working, please share your script for community reference in Scriptmania (#385). Let's keep this ticket open; need to update docs.
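With "crypto" whitelisted as above, a script can then require it as usual; a small sketch:

```javascript
const crypto = require('crypto');

// e.g. compute a payload digest in a pre-request script
const digest = crypto.createHash('sha256').update('payload').digest('hex');
console.log(digest);
```
|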
Oh wow, that was quick! Unfortunately I'm still facing some issues. After whitelisting "os" and "crypto" I ran into this error:
Granted, the module requires that you have the Azure CLI installed and that you are logged in. So I'm not even sure if it is really possible to have it run in a somewhat "safe" manner? I'm not sure if these sorts of modules more or less require full-blown host access or not 🤔 I'm flailing a bit in the dark here. My limited knowledge on both |
I might be making some progress here, hold your horses 😄 |
I have identified some problems. I'll start with the biggest one, which sadly might make this whole thing a no-go. @azure/keyvault-secrets pulls in 3 libraries, amongst a bunch of others:
These packages all have a line of code in them calling
I found a related issue in the vm2 repo (it links further to other related issues). As far as I have gathered, the conclusion is that the internal Node modules are not "real" but proxied (I'm on thin ice here), and as a result you are not allowed to extend other objects using them. This is, as far as I can tell, "Known issue" nr 2 in the readme:
I guess this might be something to pick up again after the project has moved to isolated-vm? Unless I have completely missed the mark here and there is an easier solution right in front of me that I can't think of 😅 |
Yes, that makes sense. Now that the CLI route didn't work, I recommend trying the API route. @jonasgheer Can you take a stab at it? |
Makes sense @helloanoop, let me have a look at this... |
@DivyMohan14 I confirm that writing indeed works:

```javascript
const fs = require('fs');
const path = require('path');

const filepath = path.join(bru.cwd(), 'log.txt');
fs.writeFileSync(filepath, JSON.stringify(res.getBody()));
```

@DivyMohan14 When you have some time, please see if you are able to implement streaming event listening on the res object. We should then be able to use something like this:

```javascript
let responseData = '';
res.on('data', (chunk) => {
  responseData += chunk;
});
res.on('end', () => {
  try {
    let jsonData = JSON.parse(responseData);
    // write to file
  } catch (error) {
    // ignore or log parse errors
  }
});
```
|
I did the setup that @Xceron might have, and yeah, indeed the zip file write does not work as expected, and as you said @helloanoop, the normal write works as expected. The problem here does not seem to be with the filesystem or the write access; it looks to be some issue with the response configuration. |
Looks like I found the issue. It is related to axios returning binary data; dealing with it seems to be a problem, and an easier solution is to get the response as an arrayBuffer instead. I wrote logic in prepare request to check if the Content-Type is application/zip and, if so, set responseType to arrayBuffer in the axios config. A good resource for the issue: link here. @helloanoop please have a look at the PR. @Xceron can you try running the server from my branch in the meantime to check if this solves the issue?
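A minimal sketch of that idea (an assumed shape, not the actual PR code):

```javascript
// Decide the axios responseType up front: binary content such as
// application/zip should arrive as an ArrayBuffer rather than be
// decoded into a string.
function prepareRequest(requestConfig) {
  const headers = requestConfig.headers || {};
  if ((headers['Content-Type'] || '').includes('application/zip')) {
    requestConfig.responseType = 'arraybuffer';
  }
  return requestConfig;
}
```
|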
@DivyMohan14 thank you for the quick PR! |
@Xceron Btw, in the request headers did you mention the Content-Type? My zip file stream mock setup:
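A stand-in sketch of such a mock (assumed, using Express): a server that streams a zip with chunked transfer encoding, which is the scenario discussed below:

```javascript
const express = require('express');
const fs = require('fs');

const app = express();

app.get('/download', (req, res) => {
  res.setHeader('Content-Type', 'application/zip');
  // Piping a stream sets no Content-Length, so Node falls back to
  // Transfer-Encoding: chunked.
  fs.createReadStream('sample.zip').pipe(res);
});

app.listen(3000);
```
|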
The server is a non-public Spring Boot (Java) server, so I cannot share it, sadly. The headers of the response are:
I will try to set up your mock server to test. I'm not a JS guy, so it'll take some time. Will report back if I manage to pinpoint the problem. |
Cool, no issues. In the PR, if you noticed, I was using 'Content-Type' as the identifier. So in Bruno, I believe, in the header you are setting |
Don't know whether I can follow you, so let me explain: My request is a file upload (which I do using the pre-request script shared here), which takes multiple files as input and uploads them to a server. The server uses those files, does some things, and sends back a .zip. The files which the server receives are correct and work perfectly fine, so the request side of Bruno works without any problem. The server then sends back the .zip and sets |
You run the script you mentioned here after you get the response from the server, right? It said post-response script? I will share my bru file here...
Can you also share your bru file if possible? It will be easier to debug. |
|
Yeah, can you please copy this headers section from my bru file into yours and then try once? |
Just tried that, still running into the error. |
In the response section, are you still seeing the response in binary format? If yes, then it is a problem; it should have changed to an array buffer if you are running from my branch. |
No, with the headers set I just run into the error and get no response. |
Got it, thanks, let me check. One thing: maybe the |
Can you share the response headers @Xceron? I will check that. |
|
@Xceron as you can see, if the transfer-encoding is chunked, then the server will not return the content-length. |
If I just remove the check, the request goes through and I see binary output in the response, but the written zip is still corrupt. This happens both with headers set and with headers not set. |
Cool, thanks. Can you help me with some questions?
|
The server uses Spring's InputStreamResource to send the read file in chunks, so it streams it; I don't get any buffer. Could this be related to the file size? The .zip files in my tests are tiny (~3 KB) and the requests finish instantly. |
Hmm, interesting. I also set up a server which streams zip files, but the same setup seems to be working fine for me. |
@Xceron when trying to run your code I get an error: "Error invoking remote method 'send-http-request': VMError: Cannot find module 'form-data'". Any idea what is wrong? |
Hi everyone, I am facing the same error; any idea how to resolve it? I get the following error when trying to pass a zip file:
This is my script:
bruno.json:
package.json:
Any idea how to solve this? |
Just faced the same issue. Seems you're missing the line
in your package.json. Thanks #1346
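A minimal sketch of the dependency entry, assuming the missing module is form-data (the module named in the error above):

```json
{
  "dependencies": {
    "form-data": "^4.0.0"
  }
}
```
|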