
feat: add documentation and options for multi-task arguments #3989

Merged

merged 3 commits into devel from multi-task-args on Jul 18, 2024

Conversation

@njzjz njzjz (Member) commented Jul 17, 2024

Summary by CodeRabbit

  • New Features

    • Introduced multi-task training support, including new parameters for enhanced flexibility and customization.
    • Added documentation for multi-task specific parameters and usage examples.
  • Documentation

    • Updated multi-task training section with detailed instructions and code snippets.
  • Chores

    • Added a new entry point for multi-task functionality in configuration files.

coderabbitai bot (Contributor) commented Jul 17, 2024

Walkthrough

A new parameter fold_subdoc was added to multiple functions in argcheck.py, enhancing their flexibility and customization. The documentation was updated to include multi-task specific parameters, and a new entry point for multi-tasking was added to pyproject.toml. This set of changes focuses on improving the multi-task training capabilities and documentation for the deepmd library.
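The `fold_subdoc` idea (a flag threaded through nested argument definitions so their sub-documentation is collapsed when rendered) can be illustrated with a minimal standalone model. The `Node` class below is a hypothetical stand-in for a dargs-style `Argument`, not the library's actual API:

```python
# Minimal illustration of threading a fold_subdoc flag through nested
# documentation generation. "Node" is a hypothetical stand-in for a
# dargs-style Argument; it is not the real dargs API.
class Node:
    def __init__(self, name, doc, children=(), fold_subdoc=False):
        self.name = name
        self.doc = doc
        self.children = list(children)
        self.fold_subdoc = fold_subdoc

    def gen_doc(self, depth=0):
        indent = "  " * depth
        lines = [f"{indent}{self.name}: {self.doc}"]
        if not self.fold_subdoc:  # folded nodes hide their children
            for child in self.children:
                lines.extend(child.gen_doc(depth + 1))
        return lines


lr = Node(
    "learning_rate",
    "The definition of learning rate",
    children=[Node("scale_by_worker", "How to alter learning rate")],
)
print(len(lr.gen_doc()))  # children expanded
lr.fold_subdoc = True
print(len(lr.gen_doc()))  # sub-documentation folded away
```

Folding keeps generated pages compact in multi-task setups, where the same argument tree would otherwise be repeated once per model branch.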

Changes

| File | Change Summary |
| --- | --- |
| deepmd/utils/argcheck.py | Added `fold_subdoc` and `multi_task` arguments to various functions to increase flexibility. |
| doc/train/multi-task-training-pt.md | Added a section on multi-task training parameters and a code snippet for generating arguments. |
| pyproject.toml | Introduced a new entry point for multi-task training in the dpgui section. |
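For reference, a dpgui entry point in pyproject.toml generally takes the shape of the fragment below. The exact key and target function added by this PR are not shown in this summary, so the names here are illustrative only:

```toml
[project.entry-points."dpgui"]
# Hypothetical entry exposing multi-task argument generation to DP-GUI;
# the real key and function names in the PR may differ from this sketch.
"DeePMD-kit Multi-task" = "deepmd.utils.argcheck:gen_args_multi_task"
```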


@coderabbitai coderabbitai bot (Contributor) left a comment
Actionable comments posted: 3

Outside diff range, codebase verification and nitpick comments (4)
deepmd/utils/argcheck_nvnmd.py (1)

Line range hint 7-72: Approve the addition of fold_subdoc, but ensure comprehensive documentation.

The introduction of the fold_subdoc parameter is correctly implemented. Ensure that the documentation for this function is updated to explain the purpose and usage of fold_subdoc.

Would you like me to help update the documentation or open a GitHub issue for this task?
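A docstring entry for the new parameter could look like the sketch below (numpydoc style; the wording and the placeholder body are illustrative, not the actual deepmd implementation):

```python
def nvnmd_args(fold_subdoc: bool = False):
    """Generate nvnmd arguments.

    Parameters
    ----------
    fold_subdoc : bool, default=False
        If True, fold the sub-documentation of this argument when
        rendering docs, keeping generated pages compact (useful for
        multi-task setups where the argument tree is repeated).
    """
    return fold_subdoc  # placeholder body for this sketch
```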

deepmd/utils/argcheck.py (2)

Line range hint 1808-1826: Refactor suggestion: Consolidate argument defaults and descriptions.

The function learning_rate_args has been modified to include a new parameter fold_subdoc. This parameter controls whether the documentation should be folded, which is useful for multi-task settings where argument structures might be complex.

However, the default values and descriptions for the arguments within this function are scattered and could be consolidated for better readability and maintainability.

def learning_rate_args(fold_subdoc: bool = False) -> Argument:
    doc_scale_by_worker = (
        "When parallel training or batch size scaled, how to alter learning rate. "
        "Valid values are `linear`(default), `sqrt` or `none`."
    )
    doc_lr = "The definition of learning rate"
    return Argument(
        "learning_rate",
        dict,
        [
            Argument(
                "scale_by_worker",
                str,
                optional=True,
                default="linear",
                doc=doc_scale_by_worker,
            )
        ],
        [learning_rate_variant_type_args()],
        optional=True,
        doc=doc_lr,
        fold_subdoc=fold_subdoc,
    )

Line range hint 2582-2600: Documentation generation function needs clarification.

The function gen_doc has been modified to include a new multi_task parameter, which is used to generate documentation tailored for multi-task training setups. However, the function's implementation could be clearer in how it handles the make_anchor and make_link parameters, especially since their interaction isn't well-documented.

def gen_doc(*, make_anchor=True, make_link=True, multi_task=False, **kwargs) -> str:
    if make_link:
        make_anchor = True
    ptr = []
    for ii in gen_args(multi_task=multi_task):
        ptr.append(ii.gen_doc(make_anchor=make_anchor, make_link=make_link, **kwargs))

    key_words = []
    for ii in "\n\n".join(ptr).split("\n"):
        if "argument path" in ii:
            key_words.append(ii.split(":")[1].replace("`", "").strip())
    # ptr.insert(0, make_index(key_words))

    return "\n\n".join(ptr)
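The key-word extraction loop in `gen_doc` can be exercised in isolation on a small sample; the doc text below is made up for illustration:

```python
# Standalone run of the "argument path" extraction used in gen_doc:
# pick out lines mentioning an argument path and strip the backticks.
sample_doc = "\n".join([
    "learning_rate:",
    "    argument path: `learning_rate`",
    "    type: dict, optional",
    "    argument path: `learning_rate/scale_by_worker`",
])

key_words = []
for line in sample_doc.split("\n"):
    if "argument path" in line:
        key_words.append(line.split(":")[1].replace("`", "").strip())

print(key_words)  # the extracted argument paths, backticks removed
```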
deepmd/main.py (1)

514-518: Missing test cases for the --multi-task argument.

The --multi-task argument has been added to the parsers_doc parser in deepmd/main.py, but there are no corresponding test cases found in the repository to verify its functionality.

  • Please add test cases for the --multi-task argument to ensure its proper integration and functionality within the CLI.
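A minimal test along the lines the bot requests might look like the sketch below. The parser construction mirrors the standard argparse pattern for a boolean flag; the sub-command wiring is an assumption, not deepmd's actual test suite:

```python
import argparse


def build_doc_parser():
    # Hypothetical reconstruction of the doc-train-input sub-parser:
    # a boolean --multi-task flag defaulting to False.
    parser = argparse.ArgumentParser(prog="dp doc-train-input")
    parser.add_argument(
        "--multi-task",
        action="store_true",
        default=False,
        help="Generate documentation for multi-task training arguments.",
    )
    return parser


def test_multi_task_flag():
    parser = build_doc_parser()
    # argparse maps "--multi-task" to the attribute name "multi_task".
    assert parser.parse_args([]).multi_task is False
    assert parser.parse_args(["--multi-task"]).multi_task is True


test_multi_task_flag()
```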
Analysis chain

New CLI argument --multi-task added effectively.

The addition of the --multi-task argument to the parsers_doc parser is implemented correctly, using standard practices for command-line interfaces.

Please ensure the functionality integrates well with the existing command-line interface and that it triggers the expected behavior when used.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the integration and functionality of the `--multi-task` argument in the CLI.

# Test: Search for the argument usage in the CLI tests. Expect: At least one test case using this argument.
rg '--multi-task' --type python -A 5 --files-with-matches 'tests/'

Length of output: 513


Script:

#!/bin/bash
# Description: Verify the integration and functionality of the `--multi-task` argument in the CLI.

# Test: Search for the argument usage in the CLI tests. Expect: At least one test case using this argument.
rg '--multi-task' tests/ -A 5

Length of output: 478


Script:

#!/bin/bash
# Description: Verify the integration and functionality of the `--multi-task` argument in the CLI.

# Test: Search for the argument usage in the CLI tests. Expect: At least one test case using this argument.
rg -e '--multi-task' tests/ -A 5

Length of output: 79


Script:

#!/bin/bash
# Description: Verify the integration and functionality of the `--multi-task` argument in the CLI.

# Test: Search for the argument usage in all Python files across the entire repository. Expect: At least one test case using this argument.
rg -e '--multi-task' -t py -A 5

Length of output: 351
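The early script attempts above fail partly because a search pattern beginning with `--` is parsed as a command-line option; passing it explicitly with `-e` (as the final script does) marks it as a pattern. The same rule applies to grep, shown in this small sketch:

```shell
# Create a sample file containing the flag text.
printf 'parser.add_argument("--multi-task", action="store_true")\n' \
    > /tmp/multi_task_sample.py

# grep '--multi-task' FILE would be rejected: the pattern is read as an
# unknown long option. Passing it with -e marks it as a pattern instead.
grep -e '--multi-task' /tmp/multi_task_sample.py
```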

Resolved review comments on:

  • deepmd/utils/argcheck.py
  • deepmd/entrypoints/doc.py
  • source/tests/common/test_doc_train_input.py
codecov bot commented Jul 17, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.82%. Comparing base (0c0878e) to head (b0f949a).
Report is 107 commits behind head on devel.

Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #3989      +/-   ##
==========================================
- Coverage   82.83%   82.82%   -0.01%     
==========================================
  Files         522      522              
  Lines       50876    50881       +5     
  Branches     3019     3015       -4     
==========================================
+ Hits        42142    42144       +2     
  Misses       7799     7799              
- Partials      935      938       +3     

☔ View full report in Codecov by Sentry.

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

Resolved review comment on doc/train/multi-task-training-pt.md
@njzjz njzjz requested review from iProzd and wanghan-iapcm July 17, 2024 20:09
@njzjz njzjz marked this pull request as ready for review July 17, 2024 20:09
njzjz added a commit to njzjz/dargs that referenced this pull request Jul 17, 2024
njzjz added a commit to deepmodeling/dargs that referenced this pull request Jul 17, 2024
inspired by deepmodeling/deepmd-kit#3989

## Summary by CodeRabbit

- **Documentation**
  - Added a code snippet for writing Markdown files using MyST-Parser in the documentation.
@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue Jul 18, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jul 18, 2024
@njzjz njzjz added this pull request to the merge queue Jul 18, 2024
Merged via the queue into deepmodeling:devel with commit 24d151a Jul 18, 2024
61 checks passed
@njzjz njzjz deleted the multi-task-args branch July 18, 2024 16:12
mtaillefumier pushed a commit to mtaillefumier/deepmd-kit that referenced this pull request Sep 18, 2024
…eling#3989)

Signed-off-by: Jinzhe Zeng <[email protected]>