From eb6f4d155623c3b6c698e53d27f3d509fd1f4f9a Mon Sep 17 00:00:00 2001
From: "gcf-owl-bot[bot]" <78513119+gcf-owl-bot[bot]@users.noreply.github.com>
Date: Thu, 15 Dec 2022 18:13:33 -0500
Subject: [PATCH] feat: Added new fields to facilitate debugging (#465)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* chore: update to gapic-generator-python 1.5.0

feat: add support for `google.cloud..__version__`

PiperOrigin-RevId: 484665853
Source-Link: https://github.com/googleapis/googleapis/commit/8eb249a19db926c2fbc4ecf1dc09c0e521a88b22
Source-Link: https://github.com/googleapis/googleapis-gen/commit/c8aa327b5f478865fc3fd91e3c2768e54e26ad44
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiYzhhYTMyN2I1ZjQ3ODg2NWZjM2ZkOTFlM2MyNzY4ZTU0ZTI2YWQ0NCJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* update version in gapic_version.py

* chore: Update to gapic-generator-python 1.6.0

feat(python): Add typing to proto.Message based class attributes
feat(python): Snippetgen handling of repeated enum field

PiperOrigin-RevId: 487326846
Source-Link: https://github.com/googleapis/googleapis/commit/da380c77bb87ba0f752baf07605dd1db30e1f7e1
Source-Link: https://github.com/googleapis/googleapis-gen/commit/61ef5762ee6731a0cbbfea22fd0eecee51ab1c8e
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiNjFlZjU3NjJlZTY3MzFhMGNiYmZlYTIyZmQwZWVjZWU1MWFiMWM4ZSJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* feat: new APIs added to reflect updates to the filestore service

- Add ENTERPRISE Tier
- Add snapshot APIs: RevertInstance, ListSnapshots, CreateSnapshot, DeleteSnapshot, UpdateSnapshot
- Add multi-share APIs: ListShares, GetShare, CreateShare, DeleteShare, UpdateShare
- Add ConnectMode to NetworkConfig (for Private Service Access support)
- New status codes (SUSPENDED/SUSPENDING, REVERTING/RESUMING)
- Add SuspensionReason (for KMS-related suspension)
- Add new fields to Instance information: max_capacity_gb, capacity_step_size_gb, max_share_count, capacity_gb, multi_share_enabled

PiperOrigin-RevId: 487492758
Source-Link: https://github.com/googleapis/googleapis/commit/5be5981f50322cf0c7388595e0f31ac5d0693469
Source-Link: https://github.com/googleapis/googleapis-gen/commit/ab0e217f560cc2c1afc11441c2eab6b6950efd2b
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiYWIwZTIxN2Y1NjBjYzJjMWFmYzExNDQxYzJlYWI2YjY5NTBlZmQyYiJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* chore: Update gapic-generator-python to v1.6.1

PiperOrigin-RevId: 488036204
Source-Link: https://github.com/googleapis/googleapis/commit/08f275f5c1c0d99056e1cb68376323414459ee19
Source-Link: https://github.com/googleapis/googleapis-gen/commit/555c0945e60649e38739ae64bc45719cdf72178f
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiNTU1YzA5NDVlNjA2NDllMzg3MzlhZTY0YmM0NTcxOWNkZjcyMTc4ZiJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* feat: Added new fields to facilitate debugging

* Added a new field to the Speech response proto to give more information about whether or not biasing was applied (e.g. whether biasing application timed out).

* Added request_id to the Speech response protos.

PiperOrigin-RevId: 492276727
Source-Link: https://github.com/googleapis/googleapis/commit/4c253358b1d4add3bf74707d5f58d44e044c5da8
Source-Link: https://github.com/googleapis/googleapis-gen/commit/f15b9aca7ac2bd40b20e6715188732d08fc7fe21
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiZjE1YjlhY2E3YWMyYmQ0MGIyMGU2NzE1MTg4NzMyZDA4ZmM3ZmUyMSJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* chore: use templated setup.py and owlbot.py

* fix(deps): Require google-api-core >=1.34.0, >=2.11.0

fix: Drop usage of pkg_resources

fix: Fix timeout default values

docs(samples): Snippetgen should call await on the operation coroutine before calling result

PiperOrigin-RevId: 493260409
Source-Link: https://github.com/googleapis/googleapis/commit/fea43879f83a8d0dacc9353b3f75f8f46d37162f
Source-Link: https://github.com/googleapis/googleapis-gen/commit/387b7344c7529ee44be84e613b19a820508c612b
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiMzg3YjczNDRjNzUyOWVlNDRiZTg0ZTYxM2IxOWE4MjA1MDhjNjEyYiJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* feat: Added new fields to facilitate debugging

* Added a new field to the Speech response proto to give more information about whether or not biasing was applied (e.g. whether biasing application timed out).

* Added request_id to the Speech response protos.

PiperOrigin-RevId: 493311906
Source-Link: https://github.com/googleapis/googleapis/commit/c9b244b4f64f3841be796762e6f2c5f219c443f8
Source-Link: https://github.com/googleapis/googleapis-gen/commit/d63ac840dec854ee7acab7b52b15deaf819eae07
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiZDYzYWM4NDBkZWM4NTRlZTdhY2FiN2I1MmIxNWRlYWY4MTllYWUwNyJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* add gapic_version.py

* fix build

Co-authored-by: Owl Bot
Co-authored-by: Anthonios Partheniou
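To make the debugging change above concrete, here is a minimal sketch (illustrative only, not part of the diff below) of reading the new fields from a v1 RecognizeResponse. Only request_id is named in the commit message; speech_adaptation_info and its adaptation_timeout / timeout_message attributes are assumed names for how the biasing information is surfaced, and the GCS URI is a placeholder.

from google.cloud import speech


def recognize_with_debug_info(gcs_uri: str) -> speech.RecognizeResponse:
    """Recognize audio and print the new debugging fields on the response."""
    client = speech.SpeechClient()

    config = speech.RecognitionConfig(language_code="en-US")
    audio = speech.RecognitionAudio(uri=gcs_uri)

    response = client.recognize(config=config, audio=audio)

    # request_id is named in the commit message; handy when filing support issues.
    print("Request ID: {}".format(response.request_id))

    # Assumed field names for the biasing information added by this change.
    adaptation_info = response.speech_adaptation_info
    if adaptation_info.adaptation_timeout:
        print("Biasing timed out: {}".format(adaptation_info.timeout_message))

    for result in response.results:
        print("Transcript: {}".format(result.alternatives[0].transcript))

    return response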
---
 ...ppet_metadata_google.cloud.speech.v1.json} | 3 +-
 ...tadata_google.cloud.speech.v1p1beta1.json} | 3 +-
 ...ppet_metadata_google.cloud.speech.v2.json} | 7 ++--
 ...ted_speech_long_running_recognize_async.py | 2 +-
 ...ted_speech_long_running_recognize_async.py | 2 +-
 ..._generated_speech_batch_recognize_async.py | 2 +-
 ...erated_speech_create_custom_class_async.py | 2 +-
 ...enerated_speech_create_phrase_set_async.py | 2 +-
 ...enerated_speech_create_recognizer_async.py | 2 +-
 ...erated_speech_delete_custom_class_async.py | 2 +-
 ...enerated_speech_delete_phrase_set_async.py | 2 +-
 ...enerated_speech_delete_recognizer_async.py | 2 +-
 ...ated_speech_undelete_custom_class_async.py | 2 +-
 ...erated_speech_undelete_phrase_set_async.py | 2 +-
 ...erated_speech_undelete_recognizer_async.py | 2 +-
 ...erated_speech_update_custom_class_async.py | 2 +-
 ...enerated_speech_update_phrase_set_async.py | 2 +-
 ...enerated_speech_update_recognizer_async.py | 2 +-
 speech/microphone/noxfile.py | 15 ++++----
 .../adaptation_v2_custom_class_reference.py | 14 ++++++--
 ...aptation_v2_custom_class_reference_test.py | 10 ++++--
 .../adaptation_v2_inline_custom_class.py | 4 ++-
 .../adaptation_v2_inline_phrase_set.py | 2 ++
 .../adaptation_v2_phrase_set_reference.py | 9 +++--
 speech/snippets/beta_snippets.py | 36 +++++++++----------
 speech/snippets/create_recognizer.py | 2 ++
 speech/snippets/noxfile.py | 15 ++++----
 speech/snippets/profanity_filter.py | 2 +-
 speech/snippets/quickstart_v2.py | 2 ++
speech/snippets/speech_adaptation_beta.py | 2 +- speech/snippets/speech_quickstart_beta.py | 2 +- speech/snippets/speech_to_storage_beta.py | 2 +- speech/snippets/transcribe.py | 4 +-- speech/snippets/transcribe_async_file.py | 2 +- speech/snippets/transcribe_async_gcs.py | 4 ++- speech/snippets/transcribe_file_v2.py | 2 ++ speech/snippets/transcribe_gcs_v2.py | 2 ++ speech/snippets/transcribe_model_selection.py | 4 +-- speech/snippets/transcribe_multichannel.py | 8 ++--- speech/snippets/transcribe_streaming.py | 2 +- speech/snippets/transcribe_streaming_v2.py | 2 ++ ...nscribe_streaming_voice_activity_events.py | 2 ++ ...cribe_streaming_voice_activity_timeouts.py | 2 ++ 43 files changed, 119 insertions(+), 75 deletions(-) rename speech/generated_samples/{snippet_metadata_speech_v1.json => snippet_metadata_google.cloud.speech.v1.json} (99%) rename speech/generated_samples/{snippet_metadata_speech_v1p1beta1.json => snippet_metadata_google.cloud.speech.v1p1beta1.json} (99%) rename speech/generated_samples/{snippet_metadata_speech_v2.json => snippet_metadata_google.cloud.speech.v2.json} (99%) diff --git a/speech/generated_samples/snippet_metadata_speech_v1.json b/speech/generated_samples/snippet_metadata_google.cloud.speech.v1.json similarity index 99% rename from speech/generated_samples/snippet_metadata_speech_v1.json rename to speech/generated_samples/snippet_metadata_google.cloud.speech.v1.json index fef347d79dea..c793efe0e3c8 100644 --- a/speech/generated_samples/snippet_metadata_speech_v1.json +++ b/speech/generated_samples/snippet_metadata_google.cloud.speech.v1.json @@ -7,7 +7,8 @@ } ], "language": "PYTHON", - "name": "google-cloud-speech" + "name": "google-cloud-speech", + "version": "0.1.0" }, "snippets": [ { diff --git a/speech/generated_samples/snippet_metadata_speech_v1p1beta1.json b/speech/generated_samples/snippet_metadata_google.cloud.speech.v1p1beta1.json similarity index 99% rename from speech/generated_samples/snippet_metadata_speech_v1p1beta1.json rename to speech/generated_samples/snippet_metadata_google.cloud.speech.v1p1beta1.json index 4d8a8b90c001..b51d437c94b0 100644 --- a/speech/generated_samples/snippet_metadata_speech_v1p1beta1.json +++ b/speech/generated_samples/snippet_metadata_google.cloud.speech.v1p1beta1.json @@ -7,7 +7,8 @@ } ], "language": "PYTHON", - "name": "google-cloud-speech" + "name": "google-cloud-speech", + "version": "0.1.0" }, "snippets": [ { diff --git a/speech/generated_samples/snippet_metadata_speech_v2.json b/speech/generated_samples/snippet_metadata_google.cloud.speech.v2.json similarity index 99% rename from speech/generated_samples/snippet_metadata_speech_v2.json rename to speech/generated_samples/snippet_metadata_google.cloud.speech.v2.json index cd0f56ae8397..cdf74909cdd7 100644 --- a/speech/generated_samples/snippet_metadata_speech_v2.json +++ b/speech/generated_samples/snippet_metadata_google.cloud.speech.v2.json @@ -7,7 +7,8 @@ } ], "language": "PYTHON", - "name": "google-cloud-speech" + "name": "google-cloud-speech", + "version": "0.1.0" }, "snippets": [ { @@ -46,7 +47,7 @@ }, { "name": "files", - "type": "Sequence[google.cloud.speech_v2.types.BatchRecognizeFileMetadata]" + "type": "MutableSequence[google.cloud.speech_v2.types.BatchRecognizeFileMetadata]" }, { "name": "retry", @@ -138,7 +139,7 @@ }, { "name": "files", - "type": "Sequence[google.cloud.speech_v2.types.BatchRecognizeFileMetadata]" + "type": "MutableSequence[google.cloud.speech_v2.types.BatchRecognizeFileMetadata]" }, { "name": "retry", diff --git 
a/speech/generated_samples/speech_v1_generated_speech_long_running_recognize_async.py b/speech/generated_samples/speech_v1_generated_speech_long_running_recognize_async.py index 98af31d837b9..c216ad43c2a5 100644 --- a/speech/generated_samples/speech_v1_generated_speech_long_running_recognize_async.py +++ b/speech/generated_samples/speech_v1_generated_speech_long_running_recognize_async.py @@ -55,7 +55,7 @@ async def sample_long_running_recognize(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v1p1beta1_generated_speech_long_running_recognize_async.py b/speech/generated_samples/speech_v1p1beta1_generated_speech_long_running_recognize_async.py index cd291d9eff05..2ccda75e3391 100644 --- a/speech/generated_samples/speech_v1p1beta1_generated_speech_long_running_recognize_async.py +++ b/speech/generated_samples/speech_v1p1beta1_generated_speech_long_running_recognize_async.py @@ -55,7 +55,7 @@ async def sample_long_running_recognize(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_batch_recognize_async.py b/speech/generated_samples/speech_v2_generated_speech_batch_recognize_async.py index 64de219fe419..7421e58eb6fb 100644 --- a/speech/generated_samples/speech_v2_generated_speech_batch_recognize_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_batch_recognize_async.py @@ -48,7 +48,7 @@ async def sample_batch_recognize(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_create_custom_class_async.py b/speech/generated_samples/speech_v2_generated_speech_create_custom_class_async.py index cad6b2f12694..ac34d16b2dae 100644 --- a/speech/generated_samples/speech_v2_generated_speech_create_custom_class_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_create_custom_class_async.py @@ -48,7 +48,7 @@ async def sample_create_custom_class(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_create_phrase_set_async.py b/speech/generated_samples/speech_v2_generated_speech_create_phrase_set_async.py index d2932bee3778..90721b21c251 100644 --- a/speech/generated_samples/speech_v2_generated_speech_create_phrase_set_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_create_phrase_set_async.py @@ -48,7 +48,7 @@ async def sample_create_phrase_set(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_create_recognizer_async.py b/speech/generated_samples/speech_v2_generated_speech_create_recognizer_async.py index eb62638eb3a9..03dbbce76642 100644 --- a/speech/generated_samples/speech_v2_generated_speech_create_recognizer_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_create_recognizer_async.py @@ -53,7 +53,7 @@ async def sample_create_recognizer(): print("Waiting for operation to 
complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_delete_custom_class_async.py b/speech/generated_samples/speech_v2_generated_speech_delete_custom_class_async.py index 64dce73c81da..e166f4ca6efe 100644 --- a/speech/generated_samples/speech_v2_generated_speech_delete_custom_class_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_delete_custom_class_async.py @@ -48,7 +48,7 @@ async def sample_delete_custom_class(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_delete_phrase_set_async.py b/speech/generated_samples/speech_v2_generated_speech_delete_phrase_set_async.py index d5f1c64bfd9a..14fd5d671893 100644 --- a/speech/generated_samples/speech_v2_generated_speech_delete_phrase_set_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_delete_phrase_set_async.py @@ -48,7 +48,7 @@ async def sample_delete_phrase_set(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_delete_recognizer_async.py b/speech/generated_samples/speech_v2_generated_speech_delete_recognizer_async.py index 8de3bedd71bb..1ef850a84285 100644 --- a/speech/generated_samples/speech_v2_generated_speech_delete_recognizer_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_delete_recognizer_async.py @@ -48,7 +48,7 @@ async def sample_delete_recognizer(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_undelete_custom_class_async.py b/speech/generated_samples/speech_v2_generated_speech_undelete_custom_class_async.py index 2ae43d7f72d3..3ae9ff321b90 100644 --- a/speech/generated_samples/speech_v2_generated_speech_undelete_custom_class_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_undelete_custom_class_async.py @@ -48,7 +48,7 @@ async def sample_undelete_custom_class(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_undelete_phrase_set_async.py b/speech/generated_samples/speech_v2_generated_speech_undelete_phrase_set_async.py index 77808995b1c8..5b06559b2330 100644 --- a/speech/generated_samples/speech_v2_generated_speech_undelete_phrase_set_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_undelete_phrase_set_async.py @@ -48,7 +48,7 @@ async def sample_undelete_phrase_set(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_undelete_recognizer_async.py b/speech/generated_samples/speech_v2_generated_speech_undelete_recognizer_async.py index eb58be11bd8f..2754c91b4639 100644 --- a/speech/generated_samples/speech_v2_generated_speech_undelete_recognizer_async.py +++ 
b/speech/generated_samples/speech_v2_generated_speech_undelete_recognizer_async.py @@ -48,7 +48,7 @@ async def sample_undelete_recognizer(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_update_custom_class_async.py b/speech/generated_samples/speech_v2_generated_speech_update_custom_class_async.py index 959038979403..ec13c2f1ee32 100644 --- a/speech/generated_samples/speech_v2_generated_speech_update_custom_class_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_update_custom_class_async.py @@ -47,7 +47,7 @@ async def sample_update_custom_class(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_update_phrase_set_async.py b/speech/generated_samples/speech_v2_generated_speech_update_phrase_set_async.py index 02ee5d96efc0..0f14f7088b26 100644 --- a/speech/generated_samples/speech_v2_generated_speech_update_phrase_set_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_update_phrase_set_async.py @@ -47,7 +47,7 @@ async def sample_update_phrase_set(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/generated_samples/speech_v2_generated_speech_update_recognizer_async.py b/speech/generated_samples/speech_v2_generated_speech_update_recognizer_async.py index d29f724c271f..cc2f894c61aa 100644 --- a/speech/generated_samples/speech_v2_generated_speech_update_recognizer_async.py +++ b/speech/generated_samples/speech_v2_generated_speech_update_recognizer_async.py @@ -52,7 +52,7 @@ async def sample_update_recognizer(): print("Waiting for operation to complete...") - response = await operation.result() + response = (await operation).result() # Handle the response print(response) diff --git a/speech/microphone/noxfile.py b/speech/microphone/noxfile.py index f5c32b22789b..e8283c38d4a0 100644 --- a/speech/microphone/noxfile.py +++ b/speech/microphone/noxfile.py @@ -160,6 +160,7 @@ def blacken(session: nox.sessions.Session) -> None: # format = isort + black # + @nox.session def format(session: nox.sessions.Session) -> None: """ @@ -187,7 +188,9 @@ def _session_tests( session: nox.sessions.Session, post_install: Callable = None ) -> None: # check for presence of tests - test_list = glob.glob("**/*_test.py", recursive=True) + glob.glob("**/test_*.py", recursive=True) + test_list = glob.glob("**/*_test.py", recursive=True) + glob.glob( + "**/test_*.py", recursive=True + ) test_list.extend(glob.glob("**/tests", recursive=True)) if len(test_list) == 0: @@ -209,9 +212,7 @@ def _session_tests( if os.path.exists("requirements-test.txt"): if os.path.exists("constraints-test.txt"): - session.install( - "-r", "requirements-test.txt", "-c", "constraints-test.txt" - ) + session.install("-r", "requirements-test.txt", "-c", "constraints-test.txt") else: session.install("-r", "requirements-test.txt") with open("requirements-test.txt") as rtfile: @@ -224,9 +225,9 @@ def _session_tests( post_install(session) if "pytest-parallel" in packages: - concurrent_args.extend(['--workers', 'auto', '--tests-per-worker', 'auto']) + concurrent_args.extend(["--workers", "auto", "--tests-per-worker", "auto"]) elif 
"pytest-xdist" in packages: - concurrent_args.extend(['-n', 'auto']) + concurrent_args.extend(["-n", "auto"]) session.run( "pytest", @@ -256,7 +257,7 @@ def py(session: nox.sessions.Session) -> None: def _get_repo_root() -> Optional[str]: - """ Returns the root folder of the project. """ + """Returns the root folder of the project.""" # Get root of this repository. Assume we don't have directories nested deeper than 10 items. p = Path(os.getcwd()) for i in range(10): diff --git a/speech/snippets/adaptation_v2_custom_class_reference.py b/speech/snippets/adaptation_v2_custom_class_reference.py index 7cae68c071f2..6185818d7dbe 100644 --- a/speech/snippets/adaptation_v2_custom_class_reference.py +++ b/speech/snippets/adaptation_v2_custom_class_reference.py @@ -20,7 +20,9 @@ from google.cloud.speech_v2.types import cloud_speech -def adaptation_v2_custom_class_reference(project_id, recognizer_id, phrase_set_id, custom_class_id, audio_file): +def adaptation_v2_custom_class_reference( + project_id, recognizer_id, phrase_set_id, custom_class_id, audio_file +): # Instantiates a client client = SpeechClient() @@ -44,7 +46,8 @@ def adaptation_v2_custom_class_reference(project_id, recognizer_id, phrase_set_i request = cloud_speech.CreateCustomClassRequest( parent=f"projects/{project_id}/locations/global", custom_class_id=custom_class_id, - custom_class=cloud_speech.CustomClass(items=[{"value": "fare"}])) + custom_class=cloud_speech.CustomClass(items=[{"value": "fare"}]), + ) operation = client.create_custom_class(request=request) custom_class = operation.result() @@ -53,7 +56,10 @@ def adaptation_v2_custom_class_reference(project_id, recognizer_id, phrase_set_i request = cloud_speech.CreatePhraseSetRequest( parent=f"projects/{project_id}/locations/global", phrase_set_id=phrase_set_id, - phrase_set=cloud_speech.PhraseSet(phrases=[{"value": f"${{{custom_class.name}}}", "boost": 20}])) + phrase_set=cloud_speech.PhraseSet( + phrases=[{"value": f"${{{custom_class.name}}}", "boost": 20}] + ), + ) operation = client.create_phrase_set(request=request) phrase_set = operation.result() @@ -81,6 +87,8 @@ def adaptation_v2_custom_class_reference(project_id, recognizer_id, phrase_set_i print("Transcript: {}".format(result.alternatives[0].transcript)) return response + + # [END speech_adaptation_v2_custom_class_reference] diff --git a/speech/snippets/adaptation_v2_custom_class_reference_test.py b/speech/snippets/adaptation_v2_custom_class_reference_test.py index b869f4405519..618424fa4c96 100644 --- a/speech/snippets/adaptation_v2_custom_class_reference_test.py +++ b/speech/snippets/adaptation_v2_custom_class_reference_test.py @@ -47,8 +47,14 @@ def test_adaptation_v2_custom_class_reference(capsys): recognizer_id = "recognizer-" + str(uuid4()) phrase_set_id = "phrase-set-" + str(uuid4()) custom_class_id = "custom-class-" + str(uuid4()) - response = adaptation_v2_custom_class_reference.adaptation_v2_custom_class_reference( - project_id, recognizer_id, phrase_set_id, custom_class_id, os.path.join(RESOURCES, "fair.wav") + response = ( + adaptation_v2_custom_class_reference.adaptation_v2_custom_class_reference( + project_id, + recognizer_id, + phrase_set_id, + custom_class_id, + os.path.join(RESOURCES, "fair.wav"), + ) ) assert re.search( diff --git a/speech/snippets/adaptation_v2_inline_custom_class.py b/speech/snippets/adaptation_v2_inline_custom_class.py index 3c362fc35c39..77991fd9d571 100644 --- a/speech/snippets/adaptation_v2_inline_custom_class.py +++ b/speech/snippets/adaptation_v2_inline_custom_class.py @@ -49,7 
+49,7 @@ def adaptation_v2_inline_custom_class(project_id, recognizer_id, audio_file): inline_phrase_set=phrase_set ) ], - custom_classes=[custom_class] + custom_classes=[custom_class], ) config = cloud_speech.RecognitionConfig( auto_decoding_config={}, adaptation=adaptation @@ -66,6 +66,8 @@ def adaptation_v2_inline_custom_class(project_id, recognizer_id, audio_file): print("Transcript: {}".format(result.alternatives[0].transcript)) return response + + # [END speech_adaptation_v2_inline_custom_class] diff --git a/speech/snippets/adaptation_v2_inline_phrase_set.py b/speech/snippets/adaptation_v2_inline_phrase_set.py index e6bd581e1317..a51e468f3141 100644 --- a/speech/snippets/adaptation_v2_inline_phrase_set.py +++ b/speech/snippets/adaptation_v2_inline_phrase_set.py @@ -64,6 +64,8 @@ def adaptation_v2_inline_phrase_set(project_id, recognizer_id, audio_file): print("Transcript: {}".format(result.alternatives[0].transcript)) return response + + # [END speech_adaptation_v2_inline_phrase_set] diff --git a/speech/snippets/adaptation_v2_phrase_set_reference.py b/speech/snippets/adaptation_v2_phrase_set_reference.py index ceb728557a1c..ae479db0ac83 100644 --- a/speech/snippets/adaptation_v2_phrase_set_reference.py +++ b/speech/snippets/adaptation_v2_phrase_set_reference.py @@ -20,7 +20,9 @@ from google.cloud.speech_v2.types import cloud_speech -def adaptation_v2_phrase_set_reference(project_id, recognizer_id, phrase_set_id, audio_file): +def adaptation_v2_phrase_set_reference( + project_id, recognizer_id, phrase_set_id, audio_file +): # Instantiates a client client = SpeechClient() @@ -44,7 +46,8 @@ def adaptation_v2_phrase_set_reference(project_id, recognizer_id, phrase_set_id, request = cloud_speech.CreatePhraseSetRequest( parent=f"projects/{project_id}/locations/global", phrase_set_id=phrase_set_id, - phrase_set=cloud_speech.PhraseSet(phrases=[{"value": "fare", "boost": 10}])) + phrase_set=cloud_speech.PhraseSet(phrases=[{"value": "fare", "boost": 10}]), + ) operation = client.create_phrase_set(request=request) phrase_set = operation.result() @@ -72,6 +75,8 @@ def adaptation_v2_phrase_set_reference(project_id, recognizer_id, phrase_set_id, print("Transcript: {}".format(result.alternatives[0].transcript)) return response + + # [END speech_adaptation_v2_phrase_set_reference] diff --git a/speech/snippets/beta_snippets.py b/speech/snippets/beta_snippets.py index 51702e2bf8dc..1360dabeed55 100644 --- a/speech/snippets/beta_snippets.py +++ b/speech/snippets/beta_snippets.py @@ -59,8 +59,8 @@ def transcribe_file_with_enhanced_model(): for i, result in enumerate(response.results): alternative = result.alternatives[0] print("-" * 20) - print(u"First alternative of result {}".format(i)) - print(u"Transcript: {}".format(alternative.transcript)) + print("First alternative of result {}".format(i)) + print("Transcript: {}".format(alternative.transcript)) # [END speech_transcribe_enhanced_model_beta] @@ -108,8 +108,8 @@ def transcribe_file_with_metadata(): for i, result in enumerate(response.results): alternative = result.alternatives[0] print("-" * 20) - print(u"First alternative of result {}".format(i)) - print(u"Transcript: {}".format(alternative.transcript)) + print("First alternative of result {}".format(i)) + print("Transcript: {}".format(alternative.transcript)) # [END speech_transcribe_recognition_metadata_beta] @@ -139,8 +139,8 @@ def transcribe_file_with_auto_punctuation(): for i, result in enumerate(response.results): alternative = result.alternatives[0] print("-" * 20) - print(u"First 
alternative of result {}".format(i)) - print(u"Transcript: {}".format(alternative.transcript)) + print("First alternative of result {}".format(i)) + print("Transcript: {}".format(alternative.transcript)) # [END speech_transcribe_auto_punctuation_beta] @@ -159,9 +159,9 @@ def transcribe_file_with_diarization(): audio = speech.RecognitionAudio(content=content) diarization_config = speech.SpeakerDiarizationConfig( - enable_speaker_diarization=True, - min_speaker_count=2, - max_speaker_count=10, + enable_speaker_diarization=True, + min_speaker_count=2, + max_speaker_count=10, ) config = speech.RecognitionConfig( @@ -185,7 +185,7 @@ def transcribe_file_with_diarization(): # Printing out the output: for word_info in words_info: print( - u"word: '{}', speaker_tag: {}".format(word_info.word, word_info.speaker_tag) + "word: '{}', speaker_tag: {}".format(word_info.word, word_info.speaker_tag) ) # [END speech_transcribe_diarization_beta] @@ -219,8 +219,8 @@ def transcribe_file_with_multichannel(): alternative = result.alternatives[0] print("-" * 20) print("First alternative of result {}".format(i)) - print(u"Transcript: {}".format(alternative.transcript)) - print(u"Channel Tag: {}".format(result.channel_tag)) + print("Transcript: {}".format(alternative.transcript)) + print("Channel Tag: {}".format(result.channel_tag)) # [END speech_transcribe_multichannel_beta] @@ -255,8 +255,8 @@ def transcribe_file_with_multilanguage(): for i, result in enumerate(response.results): alternative = result.alternatives[0] print("-" * 20) - print(u"First alternative of result {}: {}".format(i, alternative)) - print(u"Transcript: {}".format(alternative.transcript)) + print("First alternative of result {}: {}".format(i, alternative)) + print("Transcript: {}".format(alternative.transcript)) # [END speech_transcribe_multilanguage_beta] @@ -288,9 +288,9 @@ def transcribe_file_with_word_level_confidence(): alternative = result.alternatives[0] print("-" * 20) print("First alternative of result {}".format(i)) - print(u"Transcript: {}".format(alternative.transcript)) + print("Transcript: {}".format(alternative.transcript)) print( - u"First Word and Confidence: ({}, {})".format( + "First Word and Confidence: ({}, {})".format( alternative.words[0].word, alternative.words[0].confidence ) ) @@ -326,8 +326,8 @@ def transcribe_file_with_spoken_punctuation_end_emojis(): for i, result in enumerate(response.results): alternative = result.alternatives[0] print("-" * 20) - print(u"First alternative of result {}".format(i)) - print(u"Transcript: {}".format(alternative.transcript)) + print("First alternative of result {}".format(i)) + print("Transcript: {}".format(alternative.transcript)) # [END speech_transcribe_spoken_punctuation_emojis_beta] diff --git a/speech/snippets/create_recognizer.py b/speech/snippets/create_recognizer.py index 986e7c5cd0ca..43a3efcbe14d 100644 --- a/speech/snippets/create_recognizer.py +++ b/speech/snippets/create_recognizer.py @@ -35,6 +35,8 @@ def create_recognizer(project_id, recognizer_id): print("Created Recognizer:", recognizer.name) return recognizer + + # [END speech_create_recognizer] diff --git a/speech/snippets/noxfile.py b/speech/snippets/noxfile.py index f5c32b22789b..e8283c38d4a0 100644 --- a/speech/snippets/noxfile.py +++ b/speech/snippets/noxfile.py @@ -160,6 +160,7 @@ def blacken(session: nox.sessions.Session) -> None: # format = isort + black # + @nox.session def format(session: nox.sessions.Session) -> None: """ @@ -187,7 +188,9 @@ def _session_tests( session: nox.sessions.Session, post_install: 
Callable = None ) -> None: # check for presence of tests - test_list = glob.glob("**/*_test.py", recursive=True) + glob.glob("**/test_*.py", recursive=True) + test_list = glob.glob("**/*_test.py", recursive=True) + glob.glob( + "**/test_*.py", recursive=True + ) test_list.extend(glob.glob("**/tests", recursive=True)) if len(test_list) == 0: @@ -209,9 +212,7 @@ def _session_tests( if os.path.exists("requirements-test.txt"): if os.path.exists("constraints-test.txt"): - session.install( - "-r", "requirements-test.txt", "-c", "constraints-test.txt" - ) + session.install("-r", "requirements-test.txt", "-c", "constraints-test.txt") else: session.install("-r", "requirements-test.txt") with open("requirements-test.txt") as rtfile: @@ -224,9 +225,9 @@ def _session_tests( post_install(session) if "pytest-parallel" in packages: - concurrent_args.extend(['--workers', 'auto', '--tests-per-worker', 'auto']) + concurrent_args.extend(["--workers", "auto", "--tests-per-worker", "auto"]) elif "pytest-xdist" in packages: - concurrent_args.extend(['-n', 'auto']) + concurrent_args.extend(["-n", "auto"]) session.run( "pytest", @@ -256,7 +257,7 @@ def py(session: nox.sessions.Session) -> None: def _get_repo_root() -> Optional[str]: - """ Returns the root folder of the project. """ + """Returns the root folder of the project.""" # Get root of this repository. Assume we don't have directories nested deeper than 10 items. p = Path(os.getcwd()) for i in range(10): diff --git a/speech/snippets/profanity_filter.py b/speech/snippets/profanity_filter.py index c588691776b2..0cf2b5e960c7 100644 --- a/speech/snippets/profanity_filter.py +++ b/speech/snippets/profanity_filter.py @@ -39,7 +39,7 @@ def sync_recognize_with_profanity_filter_gcs(gcs_uri): for i, result in enumerate(response.results): alternative = result.alternatives[0] - print(u"Transcript: {}".format(alternative.transcript)) + print("Transcript: {}".format(alternative.transcript)) # [END speech_recognize_with_profanity_filter_gcs] diff --git a/speech/snippets/quickstart_v2.py b/speech/snippets/quickstart_v2.py index d045c42c4655..6ba58ef7ec8d 100644 --- a/speech/snippets/quickstart_v2.py +++ b/speech/snippets/quickstart_v2.py @@ -53,6 +53,8 @@ def quickstart_v2(project_id, recognizer_id, audio_file): print("Transcript: {}".format(result.alternatives[0].transcript)) return response + + # [END speech_quickstart_v2] diff --git a/speech/snippets/speech_adaptation_beta.py b/speech/snippets/speech_adaptation_beta.py index 8543519a9fd4..bf254b27e13e 100644 --- a/speech/snippets/speech_adaptation_beta.py +++ b/speech/snippets/speech_adaptation_beta.py @@ -76,7 +76,7 @@ def sample_recognize(storage_uri, phrase): for result in response.results: # First alternative is the most probable result alternative = result.alternatives[0] - print(u"Transcript: {}".format(alternative.transcript)) + print("Transcript: {}".format(alternative.transcript)) # [END speech_adaptation_beta] return response diff --git a/speech/snippets/speech_quickstart_beta.py b/speech/snippets/speech_quickstart_beta.py index d40e6d32f1c8..b507ac633a32 100644 --- a/speech/snippets/speech_quickstart_beta.py +++ b/speech/snippets/speech_quickstart_beta.py @@ -61,7 +61,7 @@ def sample_recognize(storage_uri): for result in response.results: # First alternative is the most probable result alternative = result.alternatives[0] - print(u"Transcript: {}".format(alternative.transcript)) + print("Transcript: {}".format(alternative.transcript)) # [END speech_quickstart_beta] return response diff --git 
a/speech/snippets/speech_to_storage_beta.py b/speech/snippets/speech_to_storage_beta.py index 549388285f6e..5b0e4a102d4b 100644 --- a/speech/snippets/speech_to_storage_beta.py +++ b/speech/snippets/speech_to_storage_beta.py @@ -81,5 +81,5 @@ def export_transcript_to_storage_beta( print(f"Transcript: {result.alternatives[0].transcript}") print(f"Confidence: {result.alternatives[0].confidence}") -# [END speech_transcribe_with_speech_to_storage_beta] + # [END speech_transcribe_with_speech_to_storage_beta] return storage_transcript.results diff --git a/speech/snippets/transcribe.py b/speech/snippets/transcribe.py index 9243c7963978..3d45a82841a2 100644 --- a/speech/snippets/transcribe.py +++ b/speech/snippets/transcribe.py @@ -54,7 +54,7 @@ def transcribe_file(speech_file): # them to get the transcripts for the entire audio file. for result in response.results: # The first alternative is the most likely one for this portion. - print(u"Transcript: {}".format(result.alternatives[0].transcript)) + print("Transcript: {}".format(result.alternatives[0].transcript)) # [END speech_python_migration_sync_response] @@ -83,7 +83,7 @@ def transcribe_gcs(gcs_uri): # them to get the transcripts for the entire audio file. for result in response.results: # The first alternative is the most likely one for this portion. - print(u"Transcript: {}".format(result.alternatives[0].transcript)) + print("Transcript: {}".format(result.alternatives[0].transcript)) # [END speech_transcribe_sync_gcs] diff --git a/speech/snippets/transcribe_async_file.py b/speech/snippets/transcribe_async_file.py index db5ed5c37b73..e35073121229 100644 --- a/speech/snippets/transcribe_async_file.py +++ b/speech/snippets/transcribe_async_file.py @@ -52,7 +52,7 @@ def transcribe_file(speech_file): # them to get the transcripts for the entire audio file. for result in response.results: # The first alternative is the most likely one for this portion. - print(u"Transcript: {}".format(result.alternatives[0].transcript)) + print("Transcript: {}".format(result.alternatives[0].transcript)) print("Confidence: {}".format(result.alternatives[0].confidence)) # [END speech_python_migration_async_response] diff --git a/speech/snippets/transcribe_async_gcs.py b/speech/snippets/transcribe_async_gcs.py index 727b91512d2e..cb8207bacbbc 100644 --- a/speech/snippets/transcribe_async_gcs.py +++ b/speech/snippets/transcribe_async_gcs.py @@ -40,6 +40,8 @@ def transcribe_gcs(gcs_uri): # them to get the transcripts for the entire audio file. for result in response.results: # The first alternative is the most likely one for this portion. 
- print(u"Transcript: {}".format(result.alternatives[0].transcript)) + print("Transcript: {}".format(result.alternatives[0].transcript)) print("Confidence: {}".format(result.alternatives[0].confidence)) + + # [END speech_transcribe_async_gcs] diff --git a/speech/snippets/transcribe_file_v2.py b/speech/snippets/transcribe_file_v2.py index ef923051e2be..92bdf53cc478 100644 --- a/speech/snippets/transcribe_file_v2.py +++ b/speech/snippets/transcribe_file_v2.py @@ -53,6 +53,8 @@ def transcribe_file_v2(project_id, recognizer_id, audio_file): print("Transcript: {}".format(result.alternatives[0].transcript)) return response + + # [END speech_transcribe_file_v2] diff --git a/speech/snippets/transcribe_gcs_v2.py b/speech/snippets/transcribe_gcs_v2.py index 0d9bdefe668a..592822fb9af6 100644 --- a/speech/snippets/transcribe_gcs_v2.py +++ b/speech/snippets/transcribe_gcs_v2.py @@ -47,6 +47,8 @@ def transcribe_gcs_v2(project_id, recognizer_id, gcs_uri): print("Transcript: {}".format(result.alternatives[0].transcript)) return response + + # [END speech_transcribe_gcs_v2] diff --git a/speech/snippets/transcribe_model_selection.py b/speech/snippets/transcribe_model_selection.py index 76db3c9cd731..6749853570af 100644 --- a/speech/snippets/transcribe_model_selection.py +++ b/speech/snippets/transcribe_model_selection.py @@ -53,7 +53,7 @@ def transcribe_model_selection(speech_file, model): alternative = result.alternatives[0] print("-" * 20) print("First alternative of result {}".format(i)) - print(u"Transcript: {}".format(alternative.transcript)) + print("Transcript: {}".format(alternative.transcript)) # [END speech_transcribe_model_selection] @@ -85,7 +85,7 @@ def transcribe_model_selection_gcs(gcs_uri, model): alternative = result.alternatives[0] print("-" * 20) print("First alternative of result {}".format(i)) - print(u"Transcript: {}".format(alternative.transcript)) + print("Transcript: {}".format(alternative.transcript)) # [END speech_transcribe_model_selection_gcs] diff --git a/speech/snippets/transcribe_multichannel.py b/speech/snippets/transcribe_multichannel.py index 245738553111..34f498219fe4 100644 --- a/speech/snippets/transcribe_multichannel.py +++ b/speech/snippets/transcribe_multichannel.py @@ -52,8 +52,8 @@ def transcribe_file_with_multichannel(speech_file): alternative = result.alternatives[0] print("-" * 20) print("First alternative of result {}".format(i)) - print(u"Transcript: {}".format(alternative.transcript)) - print(u"Channel Tag: {}".format(result.channel_tag)) + print("Transcript: {}".format(alternative.transcript)) + print("Channel Tag: {}".format(result.channel_tag)) # [END speech_transcribe_multichannel] @@ -81,8 +81,8 @@ def transcribe_gcs_with_multichannel(gcs_uri): alternative = result.alternatives[0] print("-" * 20) print("First alternative of result {}".format(i)) - print(u"Transcript: {}".format(alternative.transcript)) - print(u"Channel Tag: {}".format(result.channel_tag)) + print("Transcript: {}".format(alternative.transcript)) + print("Channel Tag: {}".format(result.channel_tag)) # [END speech_transcribe_multichannel_gcs] diff --git a/speech/snippets/transcribe_streaming.py b/speech/snippets/transcribe_streaming.py index 979478a6aae5..5b9abedc77b3 100644 --- a/speech/snippets/transcribe_streaming.py +++ b/speech/snippets/transcribe_streaming.py @@ -69,7 +69,7 @@ def transcribe_streaming(stream_file): # The alternatives are ordered from most likely to least. 
for alternative in alternatives: print("Confidence: {}".format(alternative.confidence)) - print(u"Transcript: {}".format(alternative.transcript)) + print("Transcript: {}".format(alternative.transcript)) # [END speech_python_migration_streaming_response] diff --git a/speech/snippets/transcribe_streaming_v2.py b/speech/snippets/transcribe_streaming_v2.py index d6f3fa57d991..1cba1206de12 100644 --- a/speech/snippets/transcribe_streaming_v2.py +++ b/speech/snippets/transcribe_streaming_v2.py @@ -74,6 +74,8 @@ def requests(config, audio): print("Transcript: {}".format(result.alternatives[0].transcript)) return responses + + # [END speech_transcribe_streaming_v2] diff --git a/speech/snippets/transcribe_streaming_voice_activity_events.py b/speech/snippets/transcribe_streaming_voice_activity_events.py index 50689433669a..65d6976eb0c0 100644 --- a/speech/snippets/transcribe_streaming_voice_activity_events.py +++ b/speech/snippets/transcribe_streaming_voice_activity_events.py @@ -92,6 +92,8 @@ def requests(config, audio): print("Transcript: {}".format(result.alternatives[0].transcript)) return responses + + # [END speech_transcribe_streaming_voice_activity_events] diff --git a/speech/snippets/transcribe_streaming_voice_activity_timeouts.py b/speech/snippets/transcribe_streaming_voice_activity_timeouts.py index 6b6bdfef03b0..3c68ef987c3a 100644 --- a/speech/snippets/transcribe_streaming_voice_activity_timeouts.py +++ b/speech/snippets/transcribe_streaming_voice_activity_timeouts.py @@ -107,6 +107,8 @@ def requests(config, audio): print("Transcript: {}".format(result.alternatives[0].transcript)) return responses + + # [END speech_transcribe_streaming_voice_activity_timeouts]
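To illustrate the `google.cloud..__version__` support and the pkg_resources removal noted in the commit message above (again illustrative, not part of the diff), here is a minimal sketch of reading the installed library version at runtime. That the elided package path resolves to `google.cloud.speech` here, and that the distribution is named `google-cloud-speech`, are assumptions.

import importlib.metadata

from google.cloud import speech

# Version attribute populated from the generated gapic_version.py (assumed module path).
print("google.cloud.speech.__version__: {}".format(speech.__version__))

# Standard-library replacement for the dropped pkg_resources lookup.
print("Installed distribution: {}".format(importlib.metadata.version("google-cloud-speech")))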