compute_disk_snapshot Weekly Schedule Detects Drift on Every Plan/Apply #155
Comments
Are you using …
Good call, edited accordingly
Which value are you actually using, though? Because I notice that the plan was attempting to change …
Also, it looks like an upstream issue: hashicorp/terraform-provider-google#8460
I noticed that during testing as well, and updated first to 06:00 and then to 23:00, where I still experienced the permadiff issues.
I was able to resolve this by creating a module that uses the same snapshot_schedule variable format but a different snapshot policy resource:

# Build the region into the proper format
locals {
  region = "https://www.googleapis.com/compute/v1/projects/${var.project_id}/regions/${var.region}"
}

# Build the weekly_schedule locals from the snapshot_schedule variable if it contains a weekly schedule
locals {
  weekly_schedule            = var.snapshot_schedule.weekly_schedule != null
  weekly_schedule_start_time = var.snapshot_schedule.weekly_schedule == null ? "" : var.snapshot_schedule.weekly_schedule.day_of_weeks["start_time"]
  weekly_schedule_day        = var.snapshot_schedule.weekly_schedule == null ? "" : var.snapshot_schedule.weekly_schedule.day_of_weeks["day"]
}

resource "null_resource" "module_depends_on" {
  triggers = {
    value = length(var.module_depends_on)
  }
}

# If weekly_schedule is not needed, build the snapshot policy for daily or hourly schedules using dynamic blocks
resource "google_compute_resource_policy" "dynamic_snapshot_policy" {
  count   = local.weekly_schedule ? 0 : 1
  name    = "${var.name}-snapshot"
  project = var.project_id
  region  = local.region

  snapshot_schedule_policy {
    retention_policy {
      max_retention_days    = var.snapshot_retention_policy.max_retention_days
      on_source_disk_delete = var.snapshot_retention_policy.on_source_disk_delete
    }

    schedule {
      dynamic "daily_schedule" {
        for_each = var.snapshot_schedule.daily_schedule == null ? [] : [var.snapshot_schedule.daily_schedule]
        content {
          days_in_cycle = daily_schedule.value.days_in_cycle
          start_time    = daily_schedule.value.start_time
        }
      }

      dynamic "hourly_schedule" {
        for_each = var.snapshot_schedule.hourly_schedule == null ? [] : [var.snapshot_schedule.hourly_schedule]
        content {
          hours_in_cycle = hourly_schedule.value["hours_in_cycle"]
          start_time     = hourly_schedule.value["start_time"]
        }
      }
    }

    snapshot_properties {
      labels = {
        name = var.enable_vss ? "windows" : "linux"
      }
      storage_locations = ["us"]
      guest_flush       = var.enable_vss
    }
  }

  depends_on = [null_resource.module_depends_on]
}

# If a weekly schedule is declared, build the snapshot policy using the local weekly variables
resource "google_compute_resource_policy" "weekly_snapshot_policy" {
  count   = local.weekly_schedule ? 1 : 0
  name    = "${var.name}-snapshot"
  project = var.project_id
  region  = local.region

  snapshot_schedule_policy {
    retention_policy {
      max_retention_days    = var.snapshot_retention_policy.max_retention_days
      on_source_disk_delete = var.snapshot_retention_policy.on_source_disk_delete
    }

    schedule {
      weekly_schedule {
        day_of_weeks {
          start_time = local.weekly_schedule_start_time
          day        = local.weekly_schedule_day
        }
      }
    }

    snapshot_properties {
      labels = {
        name = var.enable_vss ? "windows" : "linux"
      }
      storage_locations = ["us"]
      guest_flush       = var.enable_vss
    }
  }

  depends_on = [null_resource.module_depends_on]
}

# Determine which schedule resource should drive the attachment
locals {
  snapshot_policy_name = local.weekly_schedule ? google_compute_resource_policy.weekly_snapshot_policy[0].name : google_compute_resource_policy.dynamic_snapshot_policy[0].name
}

# Attach the policy to the disks from the disks variable
resource "google_compute_disk_resource_policy_attachment" "attachment" {
  for_each = toset(var.disks)

  name    = local.snapshot_policy_name
  project = element(split("/", each.key), index(split("/", each.key), "projects") + 1)
  disk    = element(split("/", each.key), index(split("/", each.key), "disks") + 1)
  zone    = element(split("/", each.key), index(split("/", each.key), "zones") + 1)

  depends_on = [null_resource.module_depends_on, google_compute_resource_policy.dynamic_snapshot_policy, google_compute_resource_policy.weekly_snapshot_policy]
}
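For reference, a call to that wrapper module might look like the sketch below. The module source path, project, disk self-link, and schedule values are placeholders, not values from this issue; the variable shapes simply mirror the ones consumed by the code above.

module "weekly_disk_snapshots" {
  # Hypothetical inputs; only the variable shapes are taken from the wrapper module above.
  source     = "./modules/disk_snapshot_policy"
  project_id = "my-project"
  region     = "us-central1"
  name       = "app-disks"
  enable_vss = false

  snapshot_schedule = {
    daily_schedule  = null
    hourly_schedule = null
    weekly_schedule = {
      day_of_weeks = {
        day        = "MONDAY"
        start_time = "23:00"
      }
    }
  }

  snapshot_retention_policy = {
    max_retention_days    = 14
    on_source_disk_delete = "KEEP_AUTO_SNAPSHOTS"
  }

  # Disk self-links are parsed by the attachment resource to extract project, zone, and disk name
  disks             = ["projects/my-project/zones/us-central1-a/disks/app-disk-0"]
  module_depends_on = []
}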
Confirmed that improperly formatted time was the cause of the issues despite my best efforts at reading plan outputs - closing issue.
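For anyone hitting the same permadiff: the thread above suggests the schedule API normalizes start_time, so anything other than a zero-padded 24-hour HH:MM string (for example "4:00" instead of "04:00") can be rewritten server-side and then reported as drift on the next plan. A minimal sketch of a correctly formatted value, assuming the same day_of_weeks map shape used by the wrapper module above (day and time are placeholders):

# Assumed-correct format: zero-padded, 24-hour HH:MM start_time.
# A value such as "4:00" appears to be normalized by the API and then shows as drift.
snapshot_schedule = {
  daily_schedule  = null
  hourly_schedule = null
  weekly_schedule = {
    day_of_weeks = {
      day        = "SUNDAY"
      start_time = "04:00"
    }
  }
}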
Terraform Version
0.13.5
Tested with 0.13.6 and 0.14.8
Terraform Configuration Files
Expected Behavior
Terraform should deploy the snapshot schedule and attach it to the disks declared in the variable - which it does. On subsequent terraform apply runs there should be no drift detected, and the snapshot schedule policy and attachments should not prevent the apply from completing.
Unexpected Behavior
When deployed with appropriate variables, the weekly snapshot schedule is successfully deployed; however, the next terraform apply or plan detects drift in the snapshot schedule like:
If running an apply, Terraform tries to edit the schedule but errors out:
Steps to recreate