
[video] Streaming AutoML Classification #2313

Merged 8 commits into master on Aug 26, 2019

Conversation

anguillanneuf (Member) commented Aug 2, 2019

I had to hardcode project_name and model_id in the test for this sample. @dizcology @nnegrey Is there a better way to do it? For instance, use environment variables for them?

@googlebot added the "cla: yes" label (This human has signed the Contributor License Agreement.) on Aug 2, 2019
@anguillanneuf added the "do not merge" label (Indicates a pull request not ready for merge, due to either quality or timing.) and removed the "cla: yes" label on Aug 2, 2019
@googlebot added the "cla: yes" label on Aug 2, 2019
anguillanneuf (Member, Author) commented Aug 2, 2019

@beccasaurus I just added some skeleton code. Would you kindly comment on function names and region tag names?

@anguillanneuf removed the "do not merge" label on Aug 24, 2019
@anguillanneuf changed the title from "Video Intelligence Beta - Streaming/Live Streaming support for AutoML Custom Models" to "[video]: Streaming AutoML Classification" on Aug 24, 2019
@anguillanneuf changed the title from "[video]: Streaming AutoML Classification" to "[video] Streaming AutoML Classification" on Aug 24, 2019

@pytest.mark.slow
def test_streaming_automl_classification(capsys, in_file):
    model_id = 'VCN6363999689846554624'
Contributor:

@anguillanneuf, responding here, but for model_id I'm not sure if there is a better way.

But for project_id you can typically use:

import os
os.environ['GCLOUD_PROJECT']
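
For context, a minimal sketch of what the test setup could look like with both values read from the environment (AUTOML_CLASSIFICATION_MODEL_ID is a hypothetical variable name, used only for illustration):

import os

# GCLOUD_PROJECT is commonly set in the sample test environment.
project_id = os.environ['GCLOUD_PROJECT']
# Hypothetical variable name; falls back to the currently hardcoded model ID.
model_id = os.environ.get(
    'AUTOML_CLASSIFICATION_MODEL_ID', 'VCN6363999689846554624')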

Member Author:

Done!

Contributor:

This still shows a literal model_id here; I don't see it updated to use an environment variable.

leahecole (Collaborator):

Sorry I'm late to the party here, but I just noticed that in_file isn't defined in this test; it's defined in test_track_objects_gcs(). Should its definition also be added to this function so that we follow the practice of making sure tests can run alone and/or in any order? @anguillanneuf

Member Author:

Thanks Leah for catching this! I had meant to use 'video_path' instead of 'in_file' because 'video_path' can be used as a fixture. Addressing it now.
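
For reference, a rough sketch of the video_path fixture idea; the fixture body and scope are assumptions, and the resources/cat.mp4 path is taken from the README excerpt below:

import os

import pytest

RESOURCES = os.path.join(os.path.dirname(__file__), 'resources')


@pytest.fixture(scope='session')
def video_path():
    # Local test video checked into the sample's resources/ directory.
    return os.path.join(RESOURCES, 'cat.mp4')


@pytest.mark.slow
def test_streaming_automl_classification(capsys, video_path):
    ...  # call the sample with video_path and check the captured output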

@@ -36,6 +36,9 @@

python beta_snippets.py streaming-annotation-storage resources/cat.mp4 \
gs://mybucket/myfolder

python beta_snippets.py streaming-automl-classification \
resources/cat.mp4 projects/myproject/location/mylocation/model/mymodel
Contributor:

Nit:

Thoughts on changing:
projects/myproject/location/mylocation/model/mymodel
To:
projects/[PROJECT_ID]/location/[LOCATION]/model/[MODEL_NAME]

Or something like the above to help show which things need to be changed?

Member Author:

Updated. [LOCATION] needs to be hard coded to us-central1 because it is the only region that supports this feature at the moment.

nnegrey (Contributor) left a comment:

LGTM

Added a few comments. The only thing I think needs fixing (or could be left as is) is the license header; the rest are just my own opinion, not required.

from google.cloud.videointelligence_v1p3beta1 import enums

# path = 'path_to_file'
# model_path = 'projects/project_id/locations/location_id/models/model_id'
Contributor:

Could we ask for project_id and model_id to be passed in separately?

This is how we traditionally do samples which require a project path :)

Python has helper methods for creating these long model resource paths, e.g.

client.model_path("project id", "us-central1", "model id")

We have pre-existing examples for samples that require project ID (in "resource paths" like this) for: Product Search and Translation (v3 translation has Glossary objects with resource paths)

See Product Search example:
https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/vision/cloud-client/product_search/product_set_management.py#L39

Excerpt:

# [START vision_product_search_create_product_set]
def create_product_set(
        project_id, location, product_set_id, product_set_display_name):
    """Create a product set.
    Args:
        project_id: Id of the project.
        location: A compute region name.
        product_set_id: Id of the product set.
        product_set_display_name: Display name of the product set.
    """
    client = vision.ProductSearchClient()

    # A resource that represents Google Cloud Platform location.
    location_path = client.location_path(
        project=project_id, location=location)

    # Create a product set with the product set specification in the region.
    product_set = vision.types.ProductSet(
            display_name=product_set_display_name)

    # The response is the product set with `name` populated.
    response = client.create_product_set(
        parent=location_path,
        product_set=product_set,
        product_set_id=product_set_id)

    # Display the product set information.
    print('Product set name: {}'.format(response.name))
# [END vision_product_search_create_product_set]

Member Author:

The video intelligence client is missing a method that constructs the model path. How about I use string formatting here instead?

['SERVICE_ADDRESS',
 '_INTERFACE_NAME',
 '__class__',
 '__delattr__',
 '__dict__',
 '__dir__',
 '__doc__',
 '__eq__',
 '__format__',
 '__ge__',
 '__getattribute__',
 '__gt__',
 '__hash__',
 '__init__',
 '__init_subclass__',
 '__le__',
 '__lt__',
 '__module__',
 '__ne__',
 '__new__',
 '__reduce__',
 '__reduce_ex__',
 '__repr__',
 '__setattr__',
 '__sizeof__',
 '__str__',
 '__subclasshook__',
 '__weakref__',
 '_client_info',
 '_inner_api_calls',
 '_method_configs',
 'enums',
 'from_service_account_file',
 'from_service_account_json',
 'streaming_annotate_video',
 'transport']
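
For context, a minimal sketch of the string-formatting workaround described above; the function and argument names are hypothetical, and us-central1 follows the earlier note that it is the only region supporting this feature:

def get_model_path(project_id, model_id, location='us-central1'):
    # The v1p3beta1 video client exposes no model_path() helper (see the
    # dir() output above), so the resource name is built manually.
    return 'projects/{}/locations/{}/models/{}'.format(
        project_id, location, model_id)

# Example: get_model_path('my-project', 'VCN1234567890')
# -> 'projects/my-project/locations/us-central1/models/VCN1234567890'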

beccasaurus (Contributor) left a comment:

Almost LGTM

    1. Update the model ID to come from an environment variable (from Noah's comment; the code doesn't show the change yet)
    2. Have the method accept project_id and model_id parameters and use client.model_path to build the full resource path

@anguillanneuf anguillanneuf merged commit 03840af into master Aug 26, 2019
@anguillanneuf anguillanneuf deleted the live_stream_automl branch August 26, 2019 21:15
leahecole (Collaborator):

🎉

busunkim96 pushed a commit to busunkim96/python-videointelligence that referenced this pull request May 20, 2020
…/python-docs-samples#2313)

* Video Intelligence Beta - Streaming/Live Streaming support for AutoML custom models

* add test skeleton

* skeleton

* more skeleton code

* update sample: update video codec/test/model_id/etc.

* lint

* mask project id

* Noah's and Rebecca's suggestions
danoscarmike pushed a commit to googleapis/python-videointelligence that referenced this pull request Sep 30, 2020
dizcology pushed a commit that referenced this pull request Sep 11, 2023
leahecole pushed a commit that referenced this pull request Sep 15, 2023
parthea pushed a commit to googleapis/google-cloud-python that referenced this pull request Sep 22, 2023