
Param --docker-volume-basedir in sam local start-lambda is not used as expected #3520

Closed
aantillonl opened this issue Dec 7, 2021 · 11 comments

@aantillonl

aantillonl commented Dec 7, 2021

Description:

Consider a scenario where Docker is running on a remote machine (or SAM CLI is running on a container itself).

Based on the docs, which for --docker-volume-basedir say:

If Docker is running on a remote machine, you must mount the path where the AWS SAM file exists on the Docker machine, and modify this value to match the remote machine.

If I understand things correctly (which may very well not be the case), running sam local start-lambda --docker-volume-basedir /path/to/code/in/docker/machine should take the contents of the specified path on the Docker host and mount them into the Lambda Container, since the path exists on the Docker host machine, not on the local machine.

However, in the current implementation, it seems that --docker-volume-basedir is ignored in favor of the path containing the template file when looking for the code to be mounted in the Lambda Container.

Steps to reproduce:

Suppose you have the env variable DOCKER_HOST=tcp://docker_host:2375, which tells the CLI to use Docker on a remote host (this could also be defined with the parameter --container-host), and you have mounted the source code on the Docker host as specified in the docs, to a directory e.g. /home/docker-host/stock-checker.

Now, you have a sam app such as the Stock Trader sample in your current directory, e.g. /home/my-work-dir/stock-checker/, and you start lambda locally and execute a function with:

$ sam local start-lambda  --debug --docker-volume-basedir /home/docker-host/stock-checker/.aws-sam/build

$ aws lambda invoke --endpoint http://localhost:3001 --function-name StockCheckerFunction response.json

Observed result:

You will see the log:
Mounting /home/my-work-dir/stock-checker/.aws-sam/build/StockCheckerFunction as /var/task:ro,delegated inside runtime container

Which will fail because that path does not exist in the Docker host. The resulting exception is that the handler module cannot be loaded.

Expected result:

The mounting path should be the one specified by the parameter --docker-volume-basedir, and the log should say

Mounting /home/docker-host/stock-checker/.aws-sam/build/StockCheckerFunction as /var/task:ro,delegated inside runtime container

Which is a valid path in the Docker host, and the source code is correctly mounted into the Lambda Container.

Additional environment details (Ex: Windows, Mac, Amazon Linux etc)

  1. OS: Ubuntu
  2. sam --version: 1.36.0
  3. AWS region: eu-west-1


Suggestion

The path to be mounted is determined by a few factors:

  • First, when a Function object is created, its codeuri is set to an absolute path resolved against the directory where the template file exists.
    • This happens as a consequence of the following: invoke_context.py#L195 calls SamFunctionProvider() without a value for use_raw_codeuri, which defaults to False, which in turn makes the created Function's codeuri always an absolute path relative to the template directory.
  • Second, when the mounting path for the Lambda Container is evaluated in local_lambda.py#L184, it follows into codeuri.py#L44, which returns codeuri unchanged because it is already an absolute path (as shown in the previous point): calling os.path.join(cwd, codeuri) would still return just the value of codeuri, since it is absolute already, completely disregarding cwd, which at this point holds the value passed to --docker-volume-basedir (see the sketch below this list).
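
To illustrate the second point, here is a minimal Python sketch (a simplified stand-in, not the actual samcli implementation) using the paths from the repro above; because codeuri is already absolute, os.path.join simply returns it and cwd is dropped:

import os

def resolve_code_path(cwd, codeuri):
    # Simplified stand-in for samcli's resolve_code_path: os.path.join drops
    # cwd entirely whenever codeuri is already an absolute path.
    return os.path.normpath(os.path.join(cwd, codeuri))

# codeuri was resolved against the template directory, so it is absolute:
codeuri = "/home/my-work-dir/stock-checker/.aws-sam/build/StockCheckerFunction"
# cwd holds the value passed to --docker-volume-basedir:
cwd = "/home/docker-host/stock-checker/.aws-sam/build"

print(resolve_code_path(cwd, codeuri))
# -> /home/my-work-dir/stock-checker/.aws-sam/build/StockCheckerFunction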

As a possible solution, I have tried adding a check for docker_volume_basedir in invoke_context.py#L195, like this:

self._function_provider = SamFunctionProvider(self._stacks, use_raw_codeuri=bool(self._docker_volume_basedir))

In order to set a value for use_raw_codeuri, which will in turn make the resulting Function's codeuri field a relative path, and thus make resolve_code_path return a path that utilizes the value of cwd (same as docker_volume_basedir).
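
As a rough illustration of the intended effect (assuming the built template keeps a relative CodeUri such as StockCheckerFunction, which is an assumption on my part), the same join would then pick up cwd:

import os

# With use_raw_codeuri=True the Function keeps the relative CodeUri from the
# built template, so joining it with cwd (--docker-volume-basedir) yields a
# path that actually exists on the Docker host.
codeuri = "StockCheckerFunction"
cwd = "/home/docker-host/stock-checker/.aws-sam/build"

print(os.path.normpath(os.path.join(cwd, codeuri)))
# -> /home/docker-host/stock-checker/.aws-sam/build/StockCheckerFunction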

If this solution seems feasible and fits your overall plan/vision, I'd like to chip in with a PR, or if you have any plans for this issue in the future I'd like to contribute as well.

Note: doing so messes up 4 test cases; I need to review further how/if it is possible to implement this suggestion safely.

@ssenchenko added the stage/needs-investigation (Requires a deeper investigation) label on Dec 9, 2021
@ssenchenko
Contributor

So sorry you encountered behaviour that, at the very least, doesn't provide a smooth UX! Thanks for reporting the issue; we will be investigating it further.
We appreciate your suggestions a lot.

@Ustice

Ustice commented Jan 25, 2022

@ssenchenko is @aantillonl correct in his assumption that the --docker-volume-basedir parameter is broken, or does it do something else entirely? There doesn't seem to be much in the way of documentation around it. It appears that it is currently impossible to run sam local start-api inside of a container, when binding to the host docker server. (Or more specifically, it is impossible to map the bound volumes on the host system to the container that sam spins up.)

@mataslib

mataslib commented Feb 11, 2022

I also wish I could run sam local start-api from within a vscode dev container (a docker container), with the docker-from-docker feature enabled in devcontainer.json (so the docker command is available in the container console and sam spawns sibling containers by forwarding to the host docker):

// devcontainer.json

"features": {
  "docker-from-docker": {
      "version": "latest",
      "moby": true
  }
}

I don't know what the standard dev workflow is. Should I develop against functions deployed to the cloud? Or should I install the SAM CLI outside my primary dev container, inside the host machine's WSL, and then run local commands outside the devcontainer? That would mean I would also have to install cdk, file watchers etc. outside, since I want to restart the local api on lambda code changes.

I feel that's wrong: I want to come to a new device, open the repo in vscode, have it spin up its dev container with sam, cdk, tsc, nodemon etc. installed, and just work, without having to worry about installing the sam cli, cdk cli etc. somewhere outside on the host machine, and without opening terminals outside vscode on the host machine. The only things I want to have to install on the host machine are Docker & vscode. All other things like the sam cli and cdk cli should, in my view, be project dependencies available via the dev container.

I did try to use --docker-volume-basedir "/home/mataslib/projects/bpcode_aws/aws", which I think should help me achieve my goal (giving sam the correct host path instead of the container path it infers; docker commands are run in the host context, so it needs host paths, not the container's). It seems to have no effect at all. The logs still show paths of the devcontainer, not the host path I specified via --docker-volume-basedir: Mounting /workspaces/aws/cdk.out/asset.1df0f3958007f4f46ef43fc61ea4f60f32fda6d92f5135649a922252ed0eb6e1 as /var/task:ro,delegated inside runtime container. It should be something like /home/mataslib/projects/bpcode_aws/aws/cdk.out/asset.1df0f3958007f4f46ef43fc61ea4f60f32fda6d92f5135649a922252ed0eb6e1 instead.

It would be great if this could somehow be prioritised. I've found multiple existing issues with people trying to use sam inside a docker container without success. I think it is a common scenario, and I'm surprised it is not possible as of now, or is it? I don't know the internals, but I think that passing a base path shouldn't be a big deal/effort, and it would increase DX greatly.

@lukiwlosek

I have encountered the same issue.

@sentros

sentros commented May 31, 2022

I'm encountering this exact issue

@blwsh

blwsh commented Sep 2, 2022

Also have this issue. Based on some tutorials out there, it does seem as though this feature might have worked in previous versions. Can also confirm setting via env var SAM_DOCKER_VOLUME_BASEDIR doesn't seem to work.

I have found a temporary fix which is ugly af but seems to work for me. It might not work for you VS Code dev container folks as I think it might do something funny with paths.

Essentially, instead of mounting your code wherever you like in your SAM container, you must mount it in the SAM container using the same path as the files on the host.

Example docker-compose.yml

services:
  sam:
    image: my-sam-docker-image # If anyone knows of an official docker image with SAM, CDK and CLI please let me know :)
    volumes:
      - .:$PWD
    working_dir: $PWD
    entrypoint: >
      sam local start-api 
        --template path/to/StackStack.template.json 
        --container-host host.docker.internal
        --skip-pull-image
        --host 0.0.0.0
        --port 3000

I recall previously seeing that SAM doesn't support docker in docker, so I don't know if I'm getting confused or if this is a recent development.

Hope this helps!

@pt-ossi

pt-ossi commented Sep 2, 2022

I have the same issue.

I tried a couple of versions, and if I remember correctly it was 1.18.0 or 1.19.0 that broke the functionality. The mentioned versions only support Node.js 12, so running aws-sam-cli with newer Node.js versions in a Docker container is currently impossible.

Looks like the issue is not getting a lot of attention, so I have started thinking about migrating my lambdas to Serverless Framework.

@mataslib

mataslib commented Sep 2, 2022

I recall previously seeing SAM doesn't support docker in docker so don't know if I'm getting confused or if this is a recent development

I haven't run SAM for a while, but it used to work for the "docker IN docker" case; however, it did not work for the "docker FROM docker" case (because of the messed-up paths).

I gave up and stopped using sam for local dev at all. Locally I'm running my lambdas through express and adapting its req to the event format => working breakpoints! (without needing to modify the local-to-remote path mapping with a new asset hash after each change), pleasant DX. For my simple case it's enough.

Instead of the Serverless Framework I would probably try https://docs.sst.dev/live-lambda-development, which relays requests to local code through a websocket. Since I dislike the Serverless Framework yaml, cdk is better.

@ruubelly

ruubelly commented Sep 4, 2022

Just wanted to share for anyone running into this issue because they want to use devcontainers in vscode along with sam and do some sam local invokes. I was coming across the following error when I didn't have things set up correctly:

docker.errors.APIError: 500 Server Error: Internal Server Error ("b'Mounts denied: \nThe path /xyz/123/.aws-sam/build/FunctionName is not shared from the host and is not known to Docker.\nYou can configure shared paths from Docker -> Preferences... -> Resources -> File Sharing.\nSee https://docs.docker.com/desktop/mac for more info.'")

As others have said, the only real way around the issue is a bit ugly: essentially, make the directory path within your docker container the same as the path where your code is on your host machine. That way, when sam goes to look for the .aws-sam folder as it mounts it onto the sibling docker container, it is able to find it within the host filesystem.

I was able to get my .devcontainer to work with sam local invoke by setting the following in my .devcontainer/devcontainer.json

    "workspaceFolder": "${localWorkspaceFolder}",
    "workspaceMount": "source=${localWorkspaceFolder},target=${localWorkspaceFolder},type=bind,consistency=cached",
  

With the above configuration, the host path matches the container path and sam-cli is able to launch properly locally. Hope this helps someone, cheers.

@super132
Contributor

super132 commented Jan 5, 2023

This issue is fixed by this PR, and the fix has been released since 1.57.0. I'm closing this issue. Thanks.

@super132 closed this as completed on Jan 5, 2023
@github-actions
Contributor

github-actions bot commented Jan 5, 2023

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.
