Add enable_streaming_engine argument to google_dataflow_job (hashicorp#4585)

* Add enable_streaming_engine argument to google_dataflow_job

This should address hashicorp/terraform-provider-google#8649

* address PR feedback

Signed-off-by: Modular Magician <[email protected]>
modular-magician committed Mar 11, 2021
1 parent 384384c commit 0d4c5ab
Showing 4 changed files with 13 additions and 1 deletion.
3 changes: 3 additions & 0 deletions .changelog/4585.txt
@@ -0,0 +1,3 @@
```release-note:enhancement
dataflow: added `enable_streaming_engine` argument
```
2 changes: 1 addition & 1 deletion google-beta/resource_dataflow_flex_template_job_test.go
@@ -8,7 +8,7 @@ import (
 
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
-	"google.golang.org/api/compute/v1"
+	compute "google.golang.org/api/compute/v1"
 )
 
 func TestAccDataflowFlexTemplateJob_basic(t *testing.T) {
7 changes: 7 additions & 0 deletions google-beta/resource_dataflow_job.go
@@ -198,6 +198,12 @@ func resourceDataflowJob() *schema.Resource {
 				Computed:    true,
 				Description: `The unique ID of this job.`,
 			},
+
+			"enable_streaming_engine": {
+				Type:        schema.TypeBool,
+				Optional:    true,
+				Description: `Indicates if the job should use the streaming engine feature.`,
+			},
 		},
 		UseJSONNumber: true,
 	}
@@ -540,6 +546,7 @@ func resourceDataflowJobSetupEnv(d *schema.ResourceData, config *Config) (datafl
 		MachineType:           d.Get("machine_type").(string),
 		KmsKeyName:            d.Get("kms_key_name").(string),
 		IpConfiguration:       d.Get("ip_configuration").(string),
+		EnableStreamingEngine: d.Get("enable_streaming_engine").(bool),
 		AdditionalUserLabels:  labels,
 		Zone:                  zone,
 		AdditionalExperiments: additionalExperiments,
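The `d.Get("enable_streaming_engine").(bool)` line above relies on the SDK's zero-value behavior: an `Optional` `schema.TypeBool` field that the user never set still returns `false` (not `nil`), so the type assertion cannot panic. A minimal sketch of that behavior, using a hypothetical stand-in for `schema.ResourceData` (not the real SDK type):

```go
package main

import "fmt"

// fakeResourceData is a hypothetical stand-in for schema.ResourceData,
// used only to illustrate the zero-value contract of Get.
type fakeResourceData struct {
	values map[string]interface{}
}

// Get returns the stored value, or the field's zero value when the key
// was never set in the configuration (here: false for a bool field).
func (d *fakeResourceData) Get(key string) interface{} {
	if v, ok := d.values[key]; ok {
		return v
	}
	return false // zero value for a schema.TypeBool field
}

func main() {
	// Config that omits the optional enable_streaming_engine argument.
	d := &fakeResourceData{values: map[string]interface{}{
		"machine_type": "n1-standard-1",
	}}
	// The assertion succeeds and yields the zero value, false.
	enabled := d.Get("enable_streaming_engine").(bool)
	fmt.Println(enabled) // false
}
```

This is why the new field can be wired straight into `RuntimeEnvironment` without an explicit "was it set" check.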
2 changes: 2 additions & 0 deletions website/docs/r/dataflow_job.html.markdown
@@ -43,6 +43,7 @@ resource "google_dataflow_job" "pubsub_stream" {
   name              = "tf-test-dataflow-job1"
   template_gcs_path = "gs://my-bucket/templates/template_file"
   temp_gcs_location = "gs://my-bucket/tmp_dir"
+  enable_streaming_engine = true
   parameters = {
     inputFilePattern = "${google_storage_bucket.bucket1.url}/*.json"
     outputTopic      = google_pubsub_topic.topic.id
@@ -90,6 +91,7 @@ The following arguments are supported:
 * `kms_key_name` - (Optional) The name for the Cloud KMS key for the job. Key format is: `projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY`
 * `ip_configuration` - (Optional) The configuration for VM IPs. Options are `"WORKER_IP_PUBLIC"` or `"WORKER_IP_PRIVATE"`.
 * `additional_experiments` - (Optional) List of experiments that should be used by the job. An example value is `["enable_stackdriver_agent_metrics"]`.
+* `enable_streaming_engine` - (Optional) Enable/disable the use of [Streaming Engine](https://cloud.google.com/dataflow/docs/guides/deploying-a-pipeline#streaming-engine) for the job. Note that Streaming Engine is enabled by default for pipelines developed against the Beam SDK for Python v2.21.0 or later when using Python 3.

## Attributes Reference

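For reference, a minimal standalone usage sketch of the new argument (bucket paths and the job name are placeholders, not values from this commit):

```hcl
resource "google_dataflow_job" "streaming_example" {
  name                    = "example-streaming-job"
  template_gcs_path       = "gs://my-bucket/templates/template_file"
  temp_gcs_location       = "gs://my-bucket/tmp_dir"
  enable_streaming_engine = true # argument added by this commit
}
```

Omitting `enable_streaming_engine` leaves the field unset (`false`), preserving the provider's previous behavior for existing configurations.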
