Param --docker-volume-basedir in sam local start-lambda is not used as expected #3520
Comments
So sorry you encountered a behaviour which at least doesn't provide a smooth UX! Thanks for reporting the issue, we will be investigating it further.
@ssenchenko is @aantillonl correct in his assumption that the […]?
I also wish I could run […]

I don't know what the standard dev workflow is supposed to be. Should I develop against functions deployed to the cloud? Or should I install the SAM CLI outside my primary dev container, in the host machine's WSL, and then run the local commands outside the devcontainer? That would mean I would also have to install cdk, file watchers, etc. outside as well, since I want to restart the local API on Lambda code changes. That feels wrong: I want to come to a new device, open the repo in VS Code, have it spin up its dev container with sam, cdk, tsc, nodemon, etc. installed, and just work, without having to worry about installing the SAM CLI, CDK CLI, etc. somewhere on the host machine, and without opening terminals outside VS Code on the host. The only things I want to have to install on the host machine are Docker and VS Code. Everything else, like the SAM CLI and CDK CLI, should in my view be a project dependency available via the dev container.

I did try to use […]

It would be great if this could be somehow prioritised. I've found multiple existing issues with people trying to use […]
I have encountered the same issue.
I'm encountering this exact issue.
Also have this issue. Based on some tutorials out there, it does seem as though this feature might have worked in previous versions. Can also confirm setting it via env var […]

I have found a temporary fix which is ugly af but seems to work for me. It might not work for you VS Code dev container folks, as I think it might do something funny with paths. Essentially, instead of mounting your code wherever you like in your SAM container, you must mount it in the SAM container using the same path as the files on the host.

Example docker-compose.yml:
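(The original compose file was not preserved in this thread; below is a minimal sketch of the idea. The service name, image, and project path are placeholders.)

```yaml
services:
  sam:
    # Placeholder image: assumes the SAM CLI and Docker CLI are installed in it
    image: my-sam-dev-image
    volumes:
      # Give the container access to the host Docker daemon
      - /var/run/docker.sock:/var/run/docker.sock
      # Mount the project at the SAME absolute path as on the host, so the
      # paths SAM asks Docker to mount also exist on the host
      - /home/user/stock-checker:/home/user/stock-checker
    working_dir: /home/user/stock-checker
```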
I recall previously seeing that SAM doesn't support Docker-in-Docker, so I don't know if I'm getting confused or if this is a recent development. Hope this helps!
I have the same issue. I tried a couple of versions, and if I remember correctly it was 1.18.0 or 1.19.0 that broke the functionality. The mentioned versions only support Node.js 12, so running aws-sam-cli with newer Node.js versions in a Docker container is currently impossible. Looks like the issue is not getting a lot of attention, so I have started thinking about migrating my lambdas to the Serverless Framework.
I haven't run SAM for a while, but it used to work for the "docker IN docker" case; it did not work for the "docker FROM docker" case, though (because of messed-up paths). I gave up and stopped using sam for local dev at all. Locally I'm running my lambdas through express and adapting its req to the event format => working breakpoints! (without needing to modify the path mapping from local to remote with a new asset hash after each change), pleasant DX. For my simple case it's enough. Instead of the Serverless Framework I would probably try https://docs.sst.dev/live-lambda-development, which relays requests to local code through a websocket. Since I dislike Serverless Framework YAML, cdk is better.
Just wanted to share for anyone running into this issue because they wanted to use devcontainers in VS Code along with […]

As others have said, the only real way around the issue is a bit ugly: essentially, make the directory path within your Docker container the same as the path where your code lives on your host machine. Thus, when sam goes to look for the code it needs to mount, the path it resolves also exists on the host. I was able to get my .devcontainer to work with:

```json
"workspaceFolder": "${localWorkspaceFolder}",
"workspaceMount": "source=${localWorkspaceFolder},target=${localWorkspaceFolder},type=bind,consistency=cached",
```

With the above configuration, the host path matches the container path and sam-cli is able to launch properly locally. Hope this helps someone, cheers.
This issue is fixed by this PR and has been released in 1.57.0. I'm closing this issue. Thanks.
Description:
Consider a scenario where Docker is running on a remote machine (or the SAM CLI is itself running in a container).
Based on the docs for `--docker-volume-basedir`: if I understand things correctly (which very likely may not be the case), running `sam local start-lambda --docker-volume-basedir /path/to/code/in/docker/machine` should take the contents of the specified path on the Docker host and mount them into the Lambda container, since that path exists on the Docker host machine, not on the local machine.

However, in the current implementation, it seems that `--docker-volume-basedir` is ignored in favor of the path containing the template file when looking for the code to be mounted into the Lambda container.

Steps to reproduce:
Suppose you have the env variable `DOCKER_HOST=tcp://docker_host:2375`, which tells the CLI to use Docker from a remote host (this could also be defined with the parameter `--container-host`), and you have mounted the source code onto the Docker host as specified in the docs, to a directory such as `/home/docker-host/stock-checker`.

Now, you have a SAM app such as the Stock Trader sample in your current directory, e.g. `/home/my-work-dir/stock-checker/`, and you start lambda locally and execute a function with:
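(The exact commands from the original report were not preserved here; the following is a sketch of the kind of invocation meant, reusing the paths and function name from this example.)

```bash
# Point the SAM CLI at the remote Docker daemon
export DOCKER_HOST=tcp://docker_host:2375

# Start the local Lambda endpoint, telling SAM where the code lives on the Docker host
sam local start-lambda --docker-volume-basedir /home/docker-host/stock-checker

# In another terminal, invoke a function against the local endpoint (default port 3001)
aws lambda invoke --function-name StockCheckerFunction \
  --endpoint-url http://127.0.0.1:3001 out.json
```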
Observed result:
You will see the log:

```
Mounting /home/my-work-dir/stock-checker/.aws-sam/build/StockCheckerFunction as /var/task:ro,delegated inside runtime container
```

This fails because that path does not exist on the Docker host; the resulting exception is that the handler module cannot be loaded.
Expected result:
The mounting path should be the one specified by the parameter `--docker-volume-basedir`, and the log should say:

```
Mounting /home/docker-host/stock-checker/.aws-sam/build/StockCheckerFunction as /var/task:ro,delegated inside runtime container
```

This is a valid path on the Docker host, so the source code would be correctly mounted into the Lambda container.
Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
`sam --version`: 1.36.0
Suggestion
The path to be mounted is determined by a few factors:

- `SamFunctionProvider()` is created without a value for `use_raw_codeuri`, which defaults to `False`; this in turn makes the created Function's `codeuri` always hold the absolute path resolved against the template directory.
- `resolve_code_path` then returns that `codeuri` unchanged, both because it is already an absolute path (as shown in the previous point) and because calling `os.path.join(cwd, codeuri)` still returns just the value of `codeuri` when it is already absolute, completely disregarding `cwd`, which at this point holds the value passed to `--docker-volume-basedir` (see the short illustration below).
As a possible solution, I have tried adding a check for `docker_volume_basedir` in invoke_context.py#L195 in order to set a value for `use_raw_codeuri` (a sketch follows below). This in turn makes the resulting Function's `codeuri` field a relative path, and thus makes `resolve_code_path` return a path that uses the value of `cwd` (which is the same as `docker_volume_basedir`).

If this solution seems feasible and fits your overall plan/vision, I'd like to chip in with a PR, or if you have any plans for this issue in the future I'd like to contribute as well.
Note: doing so breaks 4 test cases; I need to review further how/if it is possible to implement this suggestion safely.