From 53815166616da1beb6407f90108011b3af66200e Mon Sep 17 00:00:00 2001
From: "Marcel S. Henselin"
Date: Fri, 19 Dec 2025 08:56:46 +0100
Subject: [PATCH] feat: mssql alpha instance (#2)

* fix: remove unused attribute types and functions from backup models
* fix: update API client references to use sqlserverflexalpha package
* fix: update package references to use sqlserverflexalpha and modify user data source model
* fix: add sqlserverflexalpha user data source to provider
* fix: add sqlserverflexalpha user resource and update related functionality
* chore: add stackit_sqlserverflexalpha_user resource and instance_id variable
* fix: refactor sqlserverflexalpha user resource and enhance schema with status and default_database

---------

Co-authored-by: Andre Harms
Co-authored-by: Marcel S. Henselin
---
 .copywrite.hcl | 24 +
 .github/actions/build/action.yaml | 2 +
 .github/docs/contribution-guide/resource.go | 2 +
 .github/docs/contribution-guide/utils/util.go | 2 +
 .github/workflows/ci.yaml | 2 +-
 .goreleaser.yaml | 2 +-
 docs/data-sources/affinity_group.md | 39 --
 docs/data-sources/cdn_custom_domain.md | 50 --
 docs/data-sources/cdn_distribution.md | 84 ---
 docs/data-sources/dns_record_set.md | 43 --
 docs/data-sources/dns_zone.md | 54 --
 docs/data-sources/git.md | 43 --
 docs/data-sources/iaas_project.md | 36 --
 docs/data-sources/image.md | 73 ---
 docs/data-sources/image_v2.md | 161 ------
 docs/data-sources/key_pair.md | 33 --
 docs/data-sources/kms_key.md | 45 --
 docs/data-sources/kms_keyring.md | 38 --
 docs/data-sources/kms_wrapping_key.md | 47 --
 docs/data-sources/loadbalancer.md | 174 ------
 docs/data-sources/logme_credential.md | 39 --
 docs/data-sources/logme_instance.md | 70 ---
 docs/data-sources/machine_type.md | 77 ---
 docs/data-sources/mariadb_credential.md | 41 --
 docs/data-sources/mariadb_instance.md | 56 --
 docs/data-sources/mongodbflex_instance.md | 76 ---
 docs/data-sources/mongodbflex_user.md | 43 --
 docs/data-sources/network.md | 55 --
 docs/data-sources/network_area.md | 49 --
 docs/data-sources/network_area_region.md | 57 --
 docs/data-sources/network_area_route.md | 58 --
 docs/data-sources/network_interface.md | 47 --
 docs/data-sources/objectstorage_bucket.md | 38 --
 docs/data-sources/objectstorage_credential.md | 40 --
 .../objectstorage_credentials_group.md | 38 --
 docs/data-sources/observability_alertgroup.md | 47 --
 docs/data-sources/observability_instance.md | 158 ------
 .../observability_logalertgroup.md | 47 --
 .../observability_scrapeconfig.md | 67 ---
 docs/data-sources/opensearch_credential.md | 41 --
 docs/data-sources/opensearch_instance.md | 62 ---
 docs/data-sources/postgresflex_database.md | 40 --
 docs/data-sources/postgresflex_instance.md | 62 ---
 docs/data-sources/postgresflex_user.md | 42 --
 docs/data-sources/public_ip.md | 39 --
 docs/data-sources/public_ip_ranges.md | 49 --
 docs/data-sources/rabbitmq_credential.md | 44 --
 docs/data-sources/rabbitmq_instance.md | 61 ---
 docs/data-sources/redis_credential.md | 41 --
 docs/data-sources/redis_instance.md | 70 ---
 docs/data-sources/resourcemanager_folder.md | 36 --
 docs/data-sources/resourcemanager_project.md | 37 --
 docs/data-sources/routing_table.md | 48 --
 docs/data-sources/routing_table_route.md | 65 ---
 docs/data-sources/routing_table_routes.md | 71 ---
 docs/data-sources/routing_tables.md | 54 --
 docs/data-sources/scf_organization.md | 43 --
 docs/data-sources/scf_organization_manager.md | 41 --
 docs/data-sources/scf_platform.md | 40 --
 docs/data-sources/secretsmanager_instance.md | 34 --
 docs/data-sources/secretsmanager_user.md | 37 --
 docs/data-sources/security_group.md | 40 --
 docs/data-sources/security_group_rule.md | 72 ---
 docs/data-sources/server.md | 57 --
 docs/data-sources/server_backup_schedule.md | 54 --
 docs/data-sources/server_backup_schedules.md | 60 ---
 docs/data-sources/server_update_schedule.md | 45 --
 docs/data-sources/server_update_schedules.md | 51 --
 docs/data-sources/service_account.md | 33 --
 docs/data-sources/ske_cluster.md | 153 ------
 docs/data-sources/sqlserverflex_instance.md | 72 ---
 ...lex_user.md => sqlserverflexalpha_user.md} | 12 +-
 docs/data-sources/volume.md | 52 --
 docs/ephemeral-resources/access_token.md | 73 ---
 docs/guides/aws_provider_s3_stackit.md | 91 ----
 docs/guides/kubernetes_provider_ske.md | 83 ---
 docs/guides/opting_into_beta_resources.md | 35 --
 docs/guides/scf_cloudfoundry.md | 251 ---------
 docs/guides/ske_kube_state_metric_alerts.md | 267 ----------
 docs/guides/ske_log_alerts.md | 199 -------
 docs/guides/stackit_cdn_with_custom_domain.md | 255 ---------
 docs/guides/stackit_org_service_account.md | 15 -
 .../using_loadbalancer_with_observability.md | 163 ------
 docs/guides/vault_secrets_manager.md | 83 ---
 docs/index.md | 12 +-
 docs/resources/affinity_group.md | 115 ----
 ...horization_organization_role_assignment.md | 43 --
 .../authorization_project_role_assignment.md | 43 --
 docs/resources/cdn_custom_domain.md | 65 ---
 docs/resources/cdn_distribution.md | 107 ----
 docs/resources/dns_record_set.md | 55 --
 docs/resources/dns_zone.md | 66 ---
 docs/resources/git.md | 61 ---
 docs/resources/image.md | 90 ----
 docs/resources/key_pair.md | 87 ---
 docs/resources/kms_key.md | 51 --
 docs/resources/kms_keyring.md | 42 --
 docs/resources/kms_wrapping_key.md | 50 --
 docs/resources/loadbalancer.md | 377 -------------
 .../loadbalancer_observability_credential.md | 47 --
 docs/resources/logme_credential.md | 44 --
 docs/resources/logme_instance.md | 84 ---
 docs/resources/mariadb_credential.md | 46 --
 docs/resources/mariadb_instance.md | 70 ---
 docs/resources/modelserving_token.md | 71 ---
 docs/resources/mongodbflex_instance.md | 105 ----
 docs/resources/mongodbflex_user.md | 53 --
 docs/resources/network.md | 90 ----
 docs/resources/network_area.md | 64 ---
 docs/resources/network_area_region.md | 77 ---
 docs/resources/network_area_route.md | 114 ----
 docs/resources/network_interface.md | 54 --
 docs/resources/objectstorage_bucket.md | 44 --
 docs/resources/objectstorage_credential.md | 48 --
 .../objectstorage_credentials_group.md | 44 --
 docs/resources/observability_alertgroup.md | 86 ---
 docs/resources/observability_credential.md | 39 --
 docs/resources/observability_instance.md | 187 -------
 docs/resources/observability_logalertgroup.md | 86 ---
 docs/resources/observability_scrapeconfig.md | 91 ----
 docs/resources/opensearch_credential.md | 46 --
 docs/resources/opensearch_instance.md | 76 ---
 docs/resources/postgresflex_database.md | 47 --
 docs/resources/postgresflex_user.md | 51 --
 ...tance.md => postgresflexalpha_instance.md} | 31 +-
 docs/resources/public_ip.md | 48 --
 docs/resources/public_ip_associate.md | 50 --
 docs/resources/rabbitmq_credential.md | 49 --
 docs/resources/rabbitmq_instance.md | 78 ---
 docs/resources/redis_credential.md | 46 --
 docs/resources/redis_instance.md | 87 ---
 docs/resources/resourcemanager_folder.md | 61 ---
 docs/resources/resourcemanager_project.md | 58 --
 docs/resources/routing_table.md | 56 --
 docs/resources/routing_table_route.md | 84 ---
 docs/resources/scf_organization.md | 57 --
 docs/resources/scf_organization_manager.md | 49 --
 docs/resources/secretsmanager_instance.md | 44 --
 docs/resources/secretsmanager_user.md | 45 --
 docs/resources/security_group.md | 49 --
 docs/resources/security_group_rule.md | 87 ---
 docs/resources/server.md | 441 ----------------
 docs/resources/server_backup_schedule.md | 70 ---
 .../server_network_interface_attach.md | 44 --
 .../server_service_account_attach.md | 44 --
 docs/resources/server_update_schedule.md | 54 --
 docs/resources/server_volume_attach.md | 44 --
 docs/resources/service_account.md | 39 --
 .../resources/service_account_access_token.md | 85 ---
 docs/resources/service_account_key.md | 79 ---
 docs/resources/ske_cluster.md | 204 -------
 docs/resources/ske_kubeconfig.md | 47 --
 docs/resources/sqlserverflex_instance.md | 95 ----
 ...lex_user.md => sqlserverflexalpha_user.md} | 12 +-
 docs/resources/volume.md | 63 ---
 .../stackit_affinity_group/data-source.tf | 4 -
 .../stackit_cdn_custom_domain/data-source.tf | 6 -
 .../stackit_cdn_distribution/data-source.tf | 5 -
 .../stackit_dns_record_set/data-source.tf | 5 -
 .../stackit_dns_zone/data-source.tf | 4 -
 .../data-sources/stackit_git/data-source.tf | 4 -
 .../stackit_iaas_project/data-source.tf | 3 -
 .../data-sources/stackit_image/data-source.tf | 4 -
 .../stackit_image_v2/data-source.tf | 28 -
 .../stackit_key_pair/data-source.tf | 3 -
 .../stackit_kms_key/data-source.tf | 5 -
 .../stackit_kms_keyring/data-source.tf | 4 -
 .../stackit_kms_wrapping_key/data-source.tf | 5 -
 .../stackit_loadbalancer/data-source.tf | 4 -
 .../stackit_logme_credential/data-source.tf | 5 -
 .../stackit_logme_instance/data-source.tf | 4 -
 .../stackit_machine_type/data-source.tf | 21 -
 .../stackit_mariadb_credential/data-source.tf | 5 -
 .../data-source.tf | 4 -
 .../stackit_network/data-source.tf | 4 -
 .../stackit_network_area/data-source.tf | 4 -
 .../data-source.tf | 4 -
 .../stackit_network_area_route/data-source.tf | 5 -
 .../stackit_network_interface/data-source.tf | 5 -
 .../data-source.tf | 4 -
 .../data-source.tf | 5 -
 .../data-source.tf | 4 -
 .../data-source.tf | 4 -
 .../data-source.tf | 5 -
 .../data-source.tf | 5 -
 .../data-source.tf | 5 -
 .../data-source.tf | 4 -
 .../data-source.tf | 4 -
 .../stackit_public_ip/data-source.tf | 4 -
 .../stackit_public_ip_ranges/data-source.tf | 17 -
 .../data-source.tf | 5 -
 .../stackit_rabbitmq_instance/data-source.tf | 4 -
 .../stackit_redis_credential/data-source.tf | 5 -
 .../stackit_redis_instance/data-source.tf | 4 -
 .../data-source.tf | 3 -
 .../data-source.tf | 4 -
 .../stackit_routing_table/data-source.tf | 5 -
 .../data-source.tf | 6 -
 .../data-source.tf | 5 -
 .../stackit_routing_tables/data-source.tf | 4 -
 .../stackit_scf_organization/data-source.tf | 4 -
 .../data-source.tf | 4 -
 .../stackit_scf_platform/data-source.tf | 4 -
 .../data-source.tf | 4 -
 .../data-source.tf | 5 -
 .../stackit_security_group/data-source.tf | 4 -
 .../data-source.tf | 5 -
 .../stackit_server/data-source.tf | 4 -
 .../data-source.tf | 5 -
 .../data-source.tf | 4 -
 .../data-source.tf | 5 -
 .../data-source.tf | 4 -
 .../stackit_service_account/data-source.tf | 4 -
 .../stackit_ske_cluster/data-source.tf | 4 -
 .../data-source.tf | 4 -
 .../stackit_sqlserverflex_user/data-source.tf | 5 -
 .../stackit_volume/data-source.tf | 4 -
 .../data-source.tf | 4 +-
 .../data-source.tf | 5 +-
 .../data-source.tf | 4 +-
 .../data-source.tf | 4 +-
 .../data-source.tf | 4 +-
 .../ephemeral-resource.tf | 44 --
 examples/provider/provider.tf | 10 +-
 .../stackit_affinity_group/resource.tf | 11 -
 .../resource.tf | 11 -
 .../resource.tf | 11 -
 .../stackit_cdn_custom_domain/resource.tf | 15 -
 .../stackit_cdn_distribution/resource.tf | 24 -
 .../stackit_dns_record_set/resource.tf | 14 -
 .../resources/stackit_dns_zone/resource.tf | 16 -
 examples/resources/stackit_git/resource.tf | 19 -
 examples/resources/stackit_image/resource.tf | 20 -
 .../resources/stackit_key_pair/resource.tf | 11 -
 .../resources/stackit_kms_key/resource.tf | 8 -
 .../resources/stackit_kms_keyring/resource.tf | 5 -
 .../stackit_kms_wrapping_key/resource.tf | 8 -
 .../stackit_loadbalancer/resource.tf | 204 -------
 .../resource.tf | 12 -
 .../stackit_logme_credential/resource.tf | 10 -
 .../stackit_logme_instance/resource.tf | 15 -
 .../stackit_mariadb_credential/resource.tf | 10 -
 .../stackit_mariadb_instance/resource.tf | 15 -
 .../stackit_mongodbflex_instance/resource.tf | 27 -
 .../stackit_mongodbflex_user/resource.tf | 13 -
 .../resources/stackit_network/resource.tf | 33 --
 .../stackit_network_area/resource.tf | 13 -
 .../stackit_network_area_region/resource.tf | 18 -
 .../stackit_network_area_route/resource.tf | 21 -
 .../stackit_network_interface/resource.tf | 12 -
 .../stackit_objectstorage_bucket/resource.tf | 10 -
 .../resource.tf | 11 -
 .../resource.tf | 10 -
 .../resource.tf | 38 --
 .../resource.tf | 5 -
 .../resource.tf | 17 -
 .../resource.tf | 38 --
 .../resource.tf | 23 -
 .../stackit_opensearch_credential/resource.tf | 10 -
 .../stackit_opensearch_instance/resource.tf | 15 -
 .../resources/stackit_public_ip/resource.tf | 13 -
 .../stackit_public_ip_associate/resource.tf | 11 -
 .../stackit_rabbitmq_credential/resource.tf | 10 -
 .../stackit_rabbitmq_instance/resource.tf | 18 -
 .../stackit_redis_credential/resource.tf | 10 -
 .../stackit_redis_instance/resource.tf | 18 -
 .../resource.tf | 24 -
 .../resource.tf | 17 -
 .../stackit_routing_table/resource.tf | 14 -
 .../stackit_routing_table_route/resource.tf | 22 -
 .../stackit_scf_organization/resource.tf | 18 -
 .../resource.tf | 11 -
 .../resource.tf | 11 -
 .../stackit_secretsmanager_user/resource.tf | 12 -
 .../stackit_security_group/resource.tf | 13 -
 .../stackit_security_group_rule/resource.tf | 20 -
 examples/resources/stackit_server/resource.tf | 27 -
 .../resource.tf | 18 -
 .../resource.tf | 11 -
 .../resource.tf | 11 -
 .../resource.tf | 14 -
 .../stackit_server_volume_attach/resource.tf | 11 -
 .../stackit_service_account/resource.tf | 10 -
 .../resources/stackit_ske_cluster/resource.tf | 27 -
 .../stackit_ske_kubeconfig/resource.tf | 8 -
 examples/resources/stackit_volume/resource.tf | 15 -
 .../resource.tf | 6 +-
 .../resource.tf | 6 +-
 .../resource.tf | 6 +-
 .../resource.tf | 6 +-
 .../resource.tf | 6 +-
 go.mod | 54 ++
 go.sum | 498 ++++++++++++++++++
 golang-ci.yaml | 2 +
 main.go | 2 +
 pkg/postgresflexalpha/api_default_test.go | 2 +
 pkg/postgresflexalpha/wait/wait.go | 2 +
 pkg/postgresflexalpha/wait/wait_test.go | 2 +
 pkg/sqlserverflexalpha/api_default_test.go | 2 +
 .../model_get_backup_response.go | 97 ----
 pkg/sqlserverflexalpha/model_list_backup.go | 104 ----
 pkg/sqlserverflexalpha/wait/wait.go | 2 +
 pkg/sqlserverflexalpha/wait/wait_test.go | 2 +
 sample/main.tf | 2 +
 sample/providers.tf | 2 +
 sample/tf.sh | 2 +
 sample/tofu.sh | 2 +
 sample/user.tf | 8 +
 sample/variables.tf.example | 4 +
 scripts/check-docs.sh | 2 +
 scripts/lint-golangci-lint.sh | 2 +
 scripts/project.sh | 2 +
 scripts/replace.sh | 2 +
 scripts/tfplugindocs.sh | 6 +-
 stackit/internal/conversion/conversion.go | 2 +
 .../internal/conversion/conversion_test.go | 2 +
 stackit/internal/core/core.go | 2 +
 stackit/internal/core/core_test.go | 2 +
 stackit/internal/features/beta.go | 2 +
 stackit/internal/features/beta_test.go | 2 +
 stackit/internal/features/experiments.go | 2 +
 stackit/internal/features/experiments_test.go | 2 +
 .../authorization/authorization_acc_test.go | 114 ----
 .../authorization/roleassignments/resource.go | 370 -------------
 .../testfiles/double-definition.tf | 6 -
 .../authorization/testfiles/invalid-role.tf | 6 -
 .../testfiles/organization-role.tf | 6 -
 .../authorization/testfiles/prerequisites.tf | 10 -
 .../authorization/testfiles/project-owner.tf | 6 -
 .../services/authorization/utils/util.go | 29 -
 .../services/authorization/utils/util_test.go | 93 ----
 ...ce.go.bak_test.go => resource_test.go.bak} | 2 +
 .../postgresflexalpha/instance/resource.go | 2 +
 .../instance/resource_test.go | 2 +
 ...or_unknown_if_flavor_unchanged_modifier.go | 2 +
 .../postgresflex_acc_test.go | 2 +
 .../postgresflexalpha/user/datasource.go | 2 +
 .../postgresflexalpha/user/datasource_test.go | 2 +
 .../postgresflexalpha/user/resource.go | 2 +
 .../postgresflexalpha/user/resource_test.go | 2 +
 .../services/postgresflexalpha/utils/util.go | 2 +
 .../postgresflexalpha/utils/util_test.go | 2 +
 .../sqlserverflexalpha/instance/datasource.go | 2 +
 .../sqlserverflexalpha/instance/resource.go | 2 +
 .../instance/resource_test.go | 2 +
 .../sqlserverflex_acc_test.go | 2 +
 .../testdata/resource-max.tf | 2 +
 .../testdata/resource-min.tf | 2 +
 .../sqlserverflexalpha/user/datasource.go | 131 +++--
 .../user/datasource_test.go | 141 ++---
 .../sqlserverflexalpha/user/resource.go | 215 +++---
 .../sqlserverflexalpha/user/resource_test.go | 326 ++++++------
 .../services/sqlserverflexalpha/utils/util.go | 25 +-
 .../sqlserverflexalpha/utils/util_test.go | 39 +-
 stackit/internal/testutil/testutil.go | 2 +
 stackit/internal/testutil/testutil_test.go | 2 +
 stackit/internal/utils/attributes.go | 2 +
 stackit/internal/utils/attributes_test.go | 2 +
 stackit/internal/utils/headers.go | 2 +
 stackit/internal/utils/headers_test.go | 2 +
 stackit/internal/utils/regions.go | 2 +
 stackit/internal/utils/regions_test.go | 2 +
 .../utils/use_state_for_unknown_if.go | 2 +
 .../utils/use_state_for_unknown_if_test.go | 2 +
 stackit/internal/utils/utils.go | 2 +
 stackit/internal/utils/utils_test.go | 2 +
 stackit/internal/validate/validate.go | 2 +
 stackit/internal/validate/validate_test.go | 2 +
 stackit/provider.go | 124 ++++-
 stackit/provider_acc_test.go | 2 +
 stackit/testdata/provider-all-attributes.tf | 2 +
 stackit/testdata/provider-credentials.tf | 2 +
 .../testdata/provider-invalid-attribute.tf | 2 +
 .../guides/aws_provider_s3_stackit.md.tmpl | 91 ----
 .../guides/kubernetes_provider_ske.md.tmpl | 83 ---
 .../guides/opting_into_beta_resources.md.tmpl | 35 --
 templates/guides/scf_cloudfoundry.md.tmpl | 251 ---------
 .../ske_kube_state_metric_alerts.md.tmpl | 267 ----------
 templates/guides/ske_log_alerts.md.tmpl | 199 -------
 .../stackit_cdn_with_custom_domain.md.tmpl | 255 ---------
 .../stackit_org_service_account.md.tmpl | 15 -
 ...ng_loadbalancer_with_observability.md.tmpl | 163 ------
 .../guides/vault_secrets_manager.md.tmpl | 83 ---
 .../resources/network_area_route.md.tmpl | 54 --
 tools/tools.go | 12 +
 385 files changed, 1431 insertions(+), 14841 deletions(-)
 create mode 100644 .copywrite.hcl
 delete mode 100644 docs/data-sources/affinity_group.md
 delete mode 100644 docs/data-sources/cdn_custom_domain.md
 delete mode 100644 docs/data-sources/cdn_distribution.md
 delete mode 100644 docs/data-sources/dns_record_set.md
 delete mode 100644 docs/data-sources/dns_zone.md
 delete mode 100644 docs/data-sources/git.md
 delete mode 100644 docs/data-sources/iaas_project.md
 delete mode 100644 docs/data-sources/image.md
 delete mode 100644 docs/data-sources/image_v2.md
 delete mode 100644 docs/data-sources/key_pair.md
 delete mode 100644 docs/data-sources/kms_key.md
 delete mode 100644 docs/data-sources/kms_keyring.md
 delete mode 100644 docs/data-sources/kms_wrapping_key.md
 delete mode 100644 docs/data-sources/loadbalancer.md
 delete mode 100644 docs/data-sources/logme_credential.md
 delete mode 100644 docs/data-sources/logme_instance.md
 delete mode 100644 docs/data-sources/machine_type.md
 delete mode 100644 docs/data-sources/mariadb_credential.md
 delete mode 100644 docs/data-sources/mariadb_instance.md
 delete mode 100644 docs/data-sources/mongodbflex_instance.md
 delete mode 100644 docs/data-sources/mongodbflex_user.md
 delete mode 100644 docs/data-sources/network.md
 delete mode 100644 docs/data-sources/network_area.md
 delete mode 100644 docs/data-sources/network_area_region.md
 delete mode 100644 docs/data-sources/network_area_route.md
 delete mode 100644 docs/data-sources/network_interface.md
 delete mode 100644 docs/data-sources/objectstorage_bucket.md
 delete mode 100644 docs/data-sources/objectstorage_credential.md
 delete mode 100644 docs/data-sources/objectstorage_credentials_group.md
 delete mode 100644 docs/data-sources/observability_alertgroup.md
 delete mode 100644 docs/data-sources/observability_instance.md
 delete mode 100644 docs/data-sources/observability_logalertgroup.md
 delete mode 100644 docs/data-sources/observability_scrapeconfig.md
 delete mode 100644 docs/data-sources/opensearch_credential.md
 delete mode 100644 docs/data-sources/opensearch_instance.md
 delete mode 100644 docs/data-sources/postgresflex_database.md
 delete mode 100644 docs/data-sources/postgresflex_instance.md
 delete mode 100644 docs/data-sources/postgresflex_user.md
 delete mode 100644 docs/data-sources/public_ip.md
 delete mode 100644 docs/data-sources/public_ip_ranges.md
 delete mode 100644 docs/data-sources/rabbitmq_credential.md
 delete mode 100644 docs/data-sources/rabbitmq_instance.md
 delete mode 100644 docs/data-sources/redis_credential.md
 delete mode 100644 docs/data-sources/redis_instance.md
 delete mode 100644 docs/data-sources/resourcemanager_folder.md
 delete mode 100644 docs/data-sources/resourcemanager_project.md
 delete mode 100644 docs/data-sources/routing_table.md
 delete mode 100644 docs/data-sources/routing_table_route.md
 delete mode 100644 docs/data-sources/routing_table_routes.md
 delete mode 100644 docs/data-sources/routing_tables.md
 delete mode 100644 docs/data-sources/scf_organization.md
 delete mode 100644 docs/data-sources/scf_organization_manager.md
 delete mode 100644 docs/data-sources/scf_platform.md
 delete mode 100644 docs/data-sources/secretsmanager_instance.md
 delete mode 100644 docs/data-sources/secretsmanager_user.md
 delete mode 100644 docs/data-sources/security_group.md
 delete mode 100644 docs/data-sources/security_group_rule.md
 delete mode 100644 docs/data-sources/server.md
 delete mode 100644 docs/data-sources/server_backup_schedule.md
 delete mode 100644 docs/data-sources/server_backup_schedules.md
 delete mode 100644 docs/data-sources/server_update_schedule.md
 delete mode 100644 docs/data-sources/server_update_schedules.md
 delete mode 100644 docs/data-sources/service_account.md
 delete mode 100644 docs/data-sources/ske_cluster.md
 delete mode 100644 docs/data-sources/sqlserverflex_instance.md
 rename docs/data-sources/{sqlserverflex_user.md => sqlserverflexalpha_user.md} (77%)
 delete mode 100644 docs/data-sources/volume.md
 delete mode 100644 docs/ephemeral-resources/access_token.md
 delete mode 100644 docs/guides/aws_provider_s3_stackit.md
 delete mode 100644 docs/guides/kubernetes_provider_ske.md
 delete mode 100644 docs/guides/opting_into_beta_resources.md
 delete mode 100644 docs/guides/scf_cloudfoundry.md
 delete mode 100644 docs/guides/ske_kube_state_metric_alerts.md
 delete mode 100644 docs/guides/ske_log_alerts.md
 delete mode 100644 docs/guides/stackit_cdn_with_custom_domain.md
 delete mode 100644 docs/guides/stackit_org_service_account.md
 delete mode 100644 docs/guides/using_loadbalancer_with_observability.md
 delete mode 100644 docs/guides/vault_secrets_manager.md
 delete mode 100644 docs/resources/affinity_group.md
 delete mode 100644 docs/resources/authorization_organization_role_assignment.md
 delete mode 100644 docs/resources/authorization_project_role_assignment.md
 delete mode 100644 docs/resources/cdn_custom_domain.md
 delete mode 100644 docs/resources/cdn_distribution.md
 delete mode 100644 docs/resources/dns_record_set.md
 delete mode 100644 docs/resources/dns_zone.md
 delete mode 100644 docs/resources/git.md
 delete mode 100644 docs/resources/image.md
 delete mode 100644 docs/resources/key_pair.md
 delete mode 100644 docs/resources/kms_key.md
 delete mode 100644 docs/resources/kms_keyring.md
 delete mode 100644 docs/resources/kms_wrapping_key.md
 delete mode 100644 docs/resources/loadbalancer.md
 delete mode 100644 docs/resources/loadbalancer_observability_credential.md
 delete mode 100644 docs/resources/logme_credential.md
 delete mode 100644 docs/resources/logme_instance.md
 delete mode 100644 docs/resources/mariadb_credential.md
 delete mode 100644 docs/resources/mariadb_instance.md
 delete mode 100644 docs/resources/modelserving_token.md
 delete mode 100644 docs/resources/mongodbflex_instance.md
 delete mode 100644 docs/resources/mongodbflex_user.md
 delete mode 100644 docs/resources/network.md
 delete mode 100644 docs/resources/network_area.md
 delete mode 100644 docs/resources/network_area_region.md
 delete mode 100644 docs/resources/network_area_route.md
 delete mode 100644 docs/resources/network_interface.md
 delete mode 100644 docs/resources/objectstorage_bucket.md
 delete mode 100644 docs/resources/objectstorage_credential.md
 delete mode 100644 docs/resources/objectstorage_credentials_group.md
 delete mode 100644 docs/resources/observability_alertgroup.md
 delete mode 100644 docs/resources/observability_credential.md
 delete mode 100644 docs/resources/observability_instance.md
 delete mode 100644 docs/resources/observability_logalertgroup.md
 delete mode 100644 docs/resources/observability_scrapeconfig.md
 delete mode 100644 docs/resources/opensearch_credential.md
 delete mode 100644 docs/resources/opensearch_instance.md
 delete mode 100644 docs/resources/postgresflex_database.md
 delete mode 100644 docs/resources/postgresflex_user.md
 rename docs/resources/{postgresflex_instance.md => postgresflexalpha_instance.md} (69%)
 delete mode 100644 docs/resources/public_ip.md
 delete mode 100644 docs/resources/public_ip_associate.md
 delete mode 100644 docs/resources/rabbitmq_credential.md
 delete mode 100644 docs/resources/rabbitmq_instance.md
 delete mode 100644 docs/resources/redis_credential.md
 delete mode 100644 docs/resources/redis_instance.md
 delete mode 100644 docs/resources/resourcemanager_folder.md
 delete mode 100644 docs/resources/resourcemanager_project.md
 delete mode 100644 docs/resources/routing_table.md
 delete mode 100644 docs/resources/routing_table_route.md
 delete mode 100644 docs/resources/scf_organization.md
 delete mode 100644 docs/resources/scf_organization_manager.md
 delete mode 100644 docs/resources/secretsmanager_instance.md
 delete mode 100644 docs/resources/secretsmanager_user.md
 delete mode 100644 docs/resources/security_group.md
 delete mode 100644 docs/resources/security_group_rule.md
 delete mode 100644 docs/resources/server.md
 delete mode 100644 docs/resources/server_backup_schedule.md
 delete mode 100644 docs/resources/server_network_interface_attach.md
 delete mode 100644 docs/resources/server_service_account_attach.md
 delete mode 100644 docs/resources/server_update_schedule.md
 delete mode 100644 docs/resources/server_volume_attach.md
 delete mode 100644 docs/resources/service_account.md
 delete mode 100644 docs/resources/service_account_access_token.md
 delete mode 100644 docs/resources/service_account_key.md
 delete mode 100644 docs/resources/ske_cluster.md
 delete mode 100644 docs/resources/ske_kubeconfig.md
 delete mode 100644 docs/resources/sqlserverflex_instance.md
 rename docs/resources/{sqlserverflex_user.md => sqlserverflexalpha_user.md} (80%)
 delete mode 100644 docs/resources/volume.md
 delete mode 100644 examples/data-sources/stackit_affinity_group/data-source.tf
 delete mode 100644 examples/data-sources/stackit_cdn_custom_domain/data-source.tf
 delete mode 100644 examples/data-sources/stackit_cdn_distribution/data-source.tf
 delete mode 100644 examples/data-sources/stackit_dns_record_set/data-source.tf
 delete mode 100644 examples/data-sources/stackit_dns_zone/data-source.tf
 delete mode 100644 examples/data-sources/stackit_git/data-source.tf
 delete mode 100644 examples/data-sources/stackit_iaas_project/data-source.tf
 delete mode 100644 examples/data-sources/stackit_image/data-source.tf
 delete mode 100644 examples/data-sources/stackit_image_v2/data-source.tf
 delete mode 100644 examples/data-sources/stackit_key_pair/data-source.tf
 delete mode 100644 examples/data-sources/stackit_kms_key/data-source.tf
 delete mode 100644 examples/data-sources/stackit_kms_keyring/data-source.tf
 delete mode 100644 examples/data-sources/stackit_kms_wrapping_key/data-source.tf
 delete mode 100644 examples/data-sources/stackit_loadbalancer/data-source.tf
 delete mode 100644 examples/data-sources/stackit_logme_credential/data-source.tf
 delete mode 100644 examples/data-sources/stackit_logme_instance/data-source.tf
 delete mode 100644 examples/data-sources/stackit_machine_type/data-source.tf
 delete mode 100644 examples/data-sources/stackit_mariadb_credential/data-source.tf
 delete mode 100644 examples/data-sources/stackit_mongodbflex_instance/data-source.tf
 delete mode 100644 examples/data-sources/stackit_network/data-source.tf
 delete mode 100644 examples/data-sources/stackit_network_area/data-source.tf
 delete mode 100644 examples/data-sources/stackit_network_area_region/data-source.tf
 delete mode 100644 examples/data-sources/stackit_network_area_route/data-source.tf
 delete mode 100644 examples/data-sources/stackit_network_interface/data-source.tf
 delete mode 100644 examples/data-sources/stackit_objectstorage_bucket/data-source.tf
 delete mode 100644 examples/data-sources/stackit_objectstorage_credential/data-source.tf
 delete mode 100644 examples/data-sources/stackit_objectstorage_credentials_group/data-source.tf
 delete mode 100644 examples/data-sources/stackit_observability_instance/data-source.tf
 delete mode 100644 examples/data-sources/stackit_observability_logalertgroup/data-source.tf
 delete mode 100644 examples/data-sources/stackit_observability_scrapeconfig/data-source.tf
 delete mode 100644 examples/data-sources/stackit_opensearch_credential/data-source.tf
 delete mode 100644 examples/data-sources/stackit_opensearch_instance/data-source.tf
 delete mode 100644 examples/data-sources/stackit_postgresflex_instance/data-source.tf
 delete mode 100644 examples/data-sources/stackit_public_ip/data-source.tf
 delete mode 100644 examples/data-sources/stackit_public_ip_ranges/data-source.tf
 delete mode 100644 examples/data-sources/stackit_rabbitmq_credential/data-source.tf
 delete mode 100644 examples/data-sources/stackit_rabbitmq_instance/data-source.tf
 delete mode 100644 examples/data-sources/stackit_redis_credential/data-source.tf
 delete mode 100644 examples/data-sources/stackit_redis_instance/data-source.tf
 delete mode 100644 examples/data-sources/stackit_resourcemanager_folder/data-source.tf
 delete mode 100644 examples/data-sources/stackit_resourcemanager_project/data-source.tf
 delete mode 100644 examples/data-sources/stackit_routing_table/data-source.tf
 delete mode 100644 examples/data-sources/stackit_routing_table_route/data-source.tf
 delete mode 100644 examples/data-sources/stackit_routing_table_routes/data-source.tf
 delete mode 100644 examples/data-sources/stackit_routing_tables/data-source.tf
 delete mode 100644 examples/data-sources/stackit_scf_organization/data-source.tf
 delete mode 100644 examples/data-sources/stackit_scf_organization_manager/data-source.tf
 delete mode 100644 examples/data-sources/stackit_scf_platform/data-source.tf
 delete mode 100644 examples/data-sources/stackit_secretsmanager_instance/data-source.tf
 delete mode 100644 examples/data-sources/stackit_secretsmanager_user/data-source.tf
 delete mode 100644 examples/data-sources/stackit_security_group/data-source.tf
 delete mode 100644 examples/data-sources/stackit_security_group_rule/data-source.tf
 delete mode 100644 examples/data-sources/stackit_server/data-source.tf
 delete mode 100644 examples/data-sources/stackit_server_backup_schedule/data-source.tf
 delete mode 100644 examples/data-sources/stackit_server_backup_schedules/data-source.tf
 delete mode 100644 examples/data-sources/stackit_server_update_schedule/data-source.tf
 delete mode 100644 examples/data-sources/stackit_server_update_schedules/data-source.tf
 delete mode 100644 examples/data-sources/stackit_service_account/data-source.tf
 delete mode 100644 examples/data-sources/stackit_ske_cluster/data-source.tf
 delete mode 100644 examples/data-sources/stackit_sqlserverflex_instance/data-source.tf
 delete mode 100644 examples/data-sources/stackit_sqlserverflex_user/data-source.tf
 delete mode 100644 examples/data-sources/stackit_volume/data-source.tf
 rename examples/data-sources/{stackit_postgresflex_database => stackitprivatepreview_postgresflexalpha_database}/data-source.tf (64%)
 rename examples/data-sources/{stackit_observability_alertgroup => stackitprivatepreview_postgresflexalpha_instance}/data-source.tf (54%)
 rename examples/data-sources/{stackit_mongodbflex_user => stackitprivatepreview_postgresflexalpha_user}/data-source.tf (65%)
 rename examples/data-sources/{stackit_mariadb_instance => stackitprivatepreview_sqlserverflexalpha_instance}/data-source.tf (54%)
 rename examples/data-sources/{stackit_postgresflex_user => stackitprivatepreview_sqlserverflexalpha_user}/data-source.tf (64%)
 delete mode 100644 examples/ephemeral-resources/stackit_access_token/ephemeral-resource.tf
 delete mode 100644 examples/resources/stackit_affinity_group/resource.tf
 delete mode 100644 examples/resources/stackit_authorization_organization_role_assignment/resource.tf
 delete mode 100644 examples/resources/stackit_authorization_project_role_assignment/resource.tf
 delete mode 100644 examples/resources/stackit_cdn_custom_domain/resource.tf
 delete mode 100644 examples/resources/stackit_cdn_distribution/resource.tf
 delete mode 100644 examples/resources/stackit_dns_record_set/resource.tf
 delete mode 100644 examples/resources/stackit_dns_zone/resource.tf
 delete mode 100644 examples/resources/stackit_git/resource.tf
 delete mode 100644 examples/resources/stackit_image/resource.tf
 delete mode 100644 examples/resources/stackit_key_pair/resource.tf
 delete mode 100644 examples/resources/stackit_kms_key/resource.tf
 delete mode 100644 examples/resources/stackit_kms_keyring/resource.tf
 delete mode 100644 examples/resources/stackit_kms_wrapping_key/resource.tf
 delete mode 100644 examples/resources/stackit_loadbalancer/resource.tf
 delete mode 100644 examples/resources/stackit_loadbalancer_observability_credential/resource.tf
 delete mode 100644 examples/resources/stackit_logme_credential/resource.tf
 delete mode 100644 examples/resources/stackit_logme_instance/resource.tf
 delete mode 100644 examples/resources/stackit_mariadb_credential/resource.tf
 delete mode 100644 examples/resources/stackit_mariadb_instance/resource.tf
 delete mode 100644 examples/resources/stackit_mongodbflex_instance/resource.tf
 delete mode 100644 examples/resources/stackit_mongodbflex_user/resource.tf
 delete mode 100644 examples/resources/stackit_network/resource.tf
 delete mode 100644 examples/resources/stackit_network_area/resource.tf
 delete mode 100644 examples/resources/stackit_network_area_region/resource.tf
 delete mode 100644 examples/resources/stackit_network_area_route/resource.tf
 delete mode 100644 examples/resources/stackit_network_interface/resource.tf
 delete mode 100644 examples/resources/stackit_objectstorage_bucket/resource.tf
 delete mode 100644 examples/resources/stackit_objectstorage_credential/resource.tf
 delete mode 100644 examples/resources/stackit_objectstorage_credentials_group/resource.tf
 delete mode 100644 examples/resources/stackit_observability_alertgroup/resource.tf
 delete mode 100644 examples/resources/stackit_observability_credential/resource.tf
 delete mode 100644 examples/resources/stackit_observability_instance/resource.tf
 delete mode 100644 examples/resources/stackit_observability_logalertgroup/resource.tf
 delete mode 100644 examples/resources/stackit_observability_scrapeconfig/resource.tf
 delete mode 100644 examples/resources/stackit_opensearch_credential/resource.tf
 delete mode 100644 examples/resources/stackit_opensearch_instance/resource.tf
 delete mode 100644 examples/resources/stackit_public_ip/resource.tf
 delete mode 100644 examples/resources/stackit_public_ip_associate/resource.tf
 delete mode 100644 examples/resources/stackit_rabbitmq_credential/resource.tf
 delete mode 100644 examples/resources/stackit_rabbitmq_instance/resource.tf
 delete mode 100644 examples/resources/stackit_redis_credential/resource.tf
 delete mode 100644 examples/resources/stackit_redis_instance/resource.tf
 delete mode 100644 examples/resources/stackit_resourcemanager_folder/resource.tf
 delete mode 100644 examples/resources/stackit_resourcemanager_project/resource.tf
 delete mode 100644 examples/resources/stackit_routing_table/resource.tf
 delete mode 100644 examples/resources/stackit_routing_table_route/resource.tf
 delete mode 100644 examples/resources/stackit_scf_organization/resource.tf
 delete mode 100644 examples/resources/stackit_scf_organization_manager/resource.tf
 delete mode 100644 examples/resources/stackit_secretsmanager_instance/resource.tf
 delete mode 100644 examples/resources/stackit_secretsmanager_user/resource.tf
 delete mode 100644 examples/resources/stackit_security_group/resource.tf
 delete mode 100644 examples/resources/stackit_security_group_rule/resource.tf
 delete mode 100644 examples/resources/stackit_server/resource.tf
 delete mode 100644 examples/resources/stackit_server_backup_schedule/resource.tf
 delete mode 100644 examples/resources/stackit_server_network_interface_attach/resource.tf
 delete mode 100644 examples/resources/stackit_server_service_account_attach/resource.tf
 delete mode 100644 examples/resources/stackit_server_update_schedule/resource.tf
 delete mode 100644 examples/resources/stackit_server_volume_attach/resource.tf
 delete mode 100644 examples/resources/stackit_service_account/resource.tf
 delete mode 100644 examples/resources/stackit_ske_cluster/resource.tf
 delete mode 100644 examples/resources/stackit_ske_kubeconfig/resource.tf
 delete mode 100644 examples/resources/stackit_volume/resource.tf
 rename examples/resources/{stackit_postgresflex_database => stackitprivatepreview_postgresflexalpha_database}/resource.tf (68%)
 rename examples/resources/{stackit_postgresflex_instance => stackitprivatepreview_postgresflexalpha_instance}/resource.tf (74%)
 rename examples/resources/{stackit_postgresflex_user => stackitprivatepreview_postgresflexalpha_user}/resource.tf (68%)
 rename examples/resources/{stackit_sqlserverflex_instance => stackitprivatepreview_sqlserverflexalpha_instance}/resource.tf (73%)
 rename examples/resources/{stackit_sqlserverflex_user => stackitprivatepreview_sqlserverflexalpha_user}/resource.tf (67%)
 create mode 100644 sample/user.tf
 delete mode 100644 stackit/internal/services/authorization/authorization_acc_test.go
 delete mode 100644 stackit/internal/services/authorization/roleassignments/resource.go
 delete mode 100644 stackit/internal/services/authorization/testfiles/double-definition.tf
 delete mode 100644 stackit/internal/services/authorization/testfiles/invalid-role.tf
 delete mode 100644 stackit/internal/services/authorization/testfiles/organization-role.tf
delete mode 100644 stackit/internal/services/authorization/testfiles/prerequisites.tf delete mode 100644 stackit/internal/services/authorization/testfiles/project-owner.tf delete mode 100644 stackit/internal/services/authorization/utils/util.go delete mode 100644 stackit/internal/services/authorization/utils/util_test.go rename stackit/internal/services/postgresflexalpha/database/{resource.go.bak_test.go => resource_test.go.bak} (99%) delete mode 100644 templates/guides/aws_provider_s3_stackit.md.tmpl delete mode 100644 templates/guides/kubernetes_provider_ske.md.tmpl delete mode 100644 templates/guides/opting_into_beta_resources.md.tmpl delete mode 100644 templates/guides/scf_cloudfoundry.md.tmpl delete mode 100644 templates/guides/ske_kube_state_metric_alerts.md.tmpl delete mode 100644 templates/guides/ske_log_alerts.md.tmpl delete mode 100644 templates/guides/stackit_cdn_with_custom_domain.md.tmpl delete mode 100644 templates/guides/stackit_org_service_account.md.tmpl delete mode 100644 templates/guides/using_loadbalancer_with_observability.md.tmpl delete mode 100644 templates/guides/vault_secrets_manager.md.tmpl delete mode 100644 templates/resources/network_area_route.md.tmpl create mode 100644 tools/tools.go diff --git a/.copywrite.hcl b/.copywrite.hcl new file mode 100644 index 00000000..b26d46f2 --- /dev/null +++ b/.copywrite.hcl @@ -0,0 +1,24 @@ +# NOTE: This file is for HashiCorp specific licensing automation and can be deleted after creating a new repo with this template. 
+schema_version = 1
+
+project {
+  license        = "Apache-2.0"
+  copyright_year = 2025
+
+  header_ignore = [
+    # internal catalog metadata (prose)
+    "META.d/**/*.yaml",
+
+    # examples used within documentation (prose)
+    "examples/**",
+
+    # GitHub issue template configuration
+    ".github/ISSUE_TEMPLATE/*.yml",
+
+    # golangci-lint tooling configuration
+    ".golangci.yml",
+
+    # GoReleaser tooling configuration
+    ".goreleaser.yml",
+  ]
+}
diff --git a/.github/actions/build/action.yaml b/.github/actions/build/action.yaml
index 3601b23f..c5e5becd 100644
--- a/.github/actions/build/action.yaml
+++ b/.github/actions/build/action.yaml
@@ -1,3 +1,5 @@
+# Copyright (c) STACKIT
+
 name: Build
 description: "Build pipeline"
 inputs:
diff --git a/.github/docs/contribution-guide/resource.go b/.github/docs/contribution-guide/resource.go
index d12f50a2..6044a91c 100644
--- a/.github/docs/contribution-guide/resource.go
+++ b/.github/docs/contribution-guide/resource.go
@@ -1,3 +1,5 @@
+// Copyright (c) STACKIT
+
 package foo
 
 import (
diff --git a/.github/docs/contribution-guide/utils/util.go b/.github/docs/contribution-guide/utils/util.go
index bc58a48d..61ee7257 100644
--- a/.github/docs/contribution-guide/utils/util.go
+++ b/.github/docs/contribution-guide/utils/util.go
@@ -1,3 +1,5 @@
+// Copyright (c) STACKIT
+
 package utils
 
 import (
diff --git a/.github/workflows/ci.yaml b/.github/workflows/ci.yaml
index ae3117ea..c9902955 100644
--- a/.github/workflows/ci.yaml
+++ b/.github/workflows/ci.yaml
@@ -8,7 +8,7 @@ on:
     - main
 
 env:
-  GO_VERSION: "1.24"
+  GO_VERSION: "1.25"
   CODE_COVERAGE_FILE_NAME: "coverage.out" # must be the same as in Makefile
   CODE_COVERAGE_ARTIFACT_NAME: "code-coverage"
diff --git a/.goreleaser.yaml b/.goreleaser.yaml
index 6483bfb0..55baab60 100644
--- a/.goreleaser.yaml
+++ b/.goreleaser.yaml
@@ -1,4 +1,4 @@
-# Copyright (c) HashiCorp, Inc.
+# Copyright (c) STACKIT
 # SPDX-License-Identifier: MPL-2.0
 
 # Visit https://goreleaser.com for documentation on how to customize this
diff --git a/docs/data-sources/affinity_group.md b/docs/data-sources/affinity_group.md
deleted file mode 100644
index 63fc0629..00000000
--- a/docs/data-sources/affinity_group.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "stackit_affinity_group Data Source - stackit"
-subcategory: ""
-description: |-
-  Affinity Group schema. Must have a region specified in the provider configuration.
----
-
-# stackit_affinity_group (Data Source)
-
-Affinity Group schema. Must have a `region` specified in the provider configuration.
-
-## Example Usage
-
-```terraform
-data "stackit_affinity_group" "example" {
-  project_id        = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  affinity_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `affinity_group_id` (String) The affinity group ID.
-- `project_id` (String) STACKIT Project ID to which the affinity group is associated.
-
-### Optional
-
-- `region` (String) The resource region. If not defined, the provider region is used.
-
-### Read-Only
-
-- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`region`,`affinity_group_id`".
-- `members` (List of String) Affinity Group schema. Must have a `region` specified in the provider configuration.
-- `name` (String) The name of the affinity group.
-- `policy` (String) The policy of the affinity group.
diff --git a/docs/data-sources/cdn_custom_domain.md b/docs/data-sources/cdn_custom_domain.md
deleted file mode 100644
index 071e4dad..00000000
--- a/docs/data-sources/cdn_custom_domain.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "stackit_cdn_custom_domain Data Source - stackit"
-subcategory: ""
-description: |-
-  CDN distribution data source schema.
-  ~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources.
----
-
-# stackit_cdn_custom_domain (Data Source)
-
-CDN distribution data source schema.
-
-~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources.
-
-## Example Usage
-
-```terraform
-data "stackit_cdn_custom_domain" "example" {
-  project_id      = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  distribution_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  name            = "https://xxx.xxx"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `distribution_id` (String) CDN distribution ID
-- `name` (String)
-- `project_id` (String) STACKIT project ID associated with the distribution
-
-### Optional
-
-- `certificate` (Attributes) The TLS certificate for the custom domain. If omitted, a managed certificate will be used. If the block is specified, a custom certificate is used. (see [below for nested schema](#nestedatt--certificate))
-
-### Read-Only
-
-- `errors` (List of String) List of distribution errors
-- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`distribution_id`".
-- `status` (String) Status of the distribution
-
-
-### Nested Schema for `certificate`
-
-Read-Only:
-
-- `version` (Number) A version identifier for the certificate. Required for custom certificates. The certificate will be updated if this field is changed.
diff --git a/docs/data-sources/cdn_distribution.md b/docs/data-sources/cdn_distribution.md
deleted file mode 100644
index 4c8618e4..00000000
--- a/docs/data-sources/cdn_distribution.md
+++ /dev/null
@@ -1,84 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "stackit_cdn_distribution Data Source - stackit"
-subcategory: ""
-description: |-
-  CDN distribution data source schema.
-  ~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources.
----
-
-# stackit_cdn_distribution (Data Source)
-
-CDN distribution data source schema.
-
-~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources.
-
-## Example Usage
-
-```terraform
-data "stackit_cdn_distribution" "example" {
-  project_id      = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  distribution_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `distribution_id` (String) STACKIT project ID associated with the distribution
-- `project_id` (String) STACKIT project ID associated with the distribution
-
-### Read-Only
-
-- `config` (Attributes) The distribution configuration (see [below for nested schema](#nestedatt--config))
-- `created_at` (String) Time when the distribution was created
-- `domains` (Attributes List) List of configured domains for the distribution (see [below for nested schema](#nestedatt--domains))
-- `errors` (List of String) List of distribution errors
-- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`distribution_id`".
-- `status` (String) Status of the distribution
-- `updated_at` (String) Time when the distribution was last updated
-
-
-### Nested Schema for `config`
-
-Optional:
-
-- `blocked_countries` (List of String) The configured countries where distribution of content is blocked
-
-Read-Only:
-
-- `backend` (Attributes) The configured backend for the distribution (see [below for nested schema](#nestedatt--config--backend))
-- `optimizer` (Attributes) Configuration for the Image Optimizer. This is a paid feature that automatically optimizes images to reduce their file size for faster delivery, leading to improved website performance and a better user experience. (see [below for nested schema](#nestedatt--config--optimizer))
-- `regions` (List of String) The configured regions where content will be hosted
-
-
-### Nested Schema for `config.backend`
-
-Read-Only:
-
-- `geofencing` (Map of List of String) A map of URLs to a list of countries where content is allowed.
-- `origin_request_headers` (Map of String) The configured origin request headers for the backend
-- `origin_url` (String) The configured backend type for the distribution
-- `type` (String) The configured backend type. Possible values are: `http`.
-
-
-
-### Nested Schema for `config.optimizer`
-
-Read-Only:
-
-- `enabled` (Boolean)
-
-
-
-
-### Nested Schema for `domains`
-
-Read-Only:
-
-- `errors` (List of String) List of domain errors
-- `name` (String) The name of the domain
-- `status` (String) The status of the domain
-- `type` (String) The type of the domain. Each distribution has one domain of type "managed", and domains of type "custom" may be additionally created by the user
diff --git a/docs/data-sources/dns_record_set.md b/docs/data-sources/dns_record_set.md
deleted file mode 100644
index 6566491f..00000000
--- a/docs/data-sources/dns_record_set.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "stackit_dns_record_set Data Source - stackit"
-subcategory: ""
-description: |-
-  DNS Record Set Resource schema.
----
-
-# stackit_dns_record_set (Data Source)
-
-DNS Record Set Resource schema.
-
-## Example Usage
-
-```terraform
-data "stackit_dns_record_set" "example" {
-  project_id    = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  zone_id       = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  record_set_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `project_id` (String) STACKIT project ID to which the dns record set is associated.
-- `record_set_id` (String) The rr set id.
-- `zone_id` (String) The zone ID to which is dns record set is associated.
-
-### Read-Only
-
-- `active` (Boolean) Specifies if the record set is active or not.
-- `comment` (String) Comment.
-- `error` (String) Error shows error in case create/update/delete failed.
-- `fqdn` (String) Fully qualified domain name (FQDN) of the record set.
-- `id` (String) Terraform's internal data source. ID. It is structured as "`project_id`,`zone_id`,`record_set_id`".
-- `name` (String) Name of the record which should be a valid domain according to rfc1035 Section 2.3.4. E.g. `example.com`
-- `records` (List of String) Records.
-- `state` (String) Record set state.
-- `ttl` (Number) Time to live. E.g. 3600
-- `type` (String) The record set type. E.g. `A` or `CNAME`
diff --git a/docs/data-sources/dns_zone.md b/docs/data-sources/dns_zone.md
deleted file mode 100644
index 1b7a5cec..00000000
--- a/docs/data-sources/dns_zone.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "stackit_dns_zone Data Source - stackit"
-subcategory: ""
-description: |-
-  DNS Zone resource schema.
----
-
-# stackit_dns_zone (Data Source)
-
-DNS Zone resource schema.
-
-## Example Usage
-
-```terraform
-data "stackit_dns_zone" "example" {
-  project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  zone_id    = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `project_id` (String) STACKIT project ID to which the dns zone is associated.
-
-### Optional
-
-- `dns_name` (String) The zone name. E.g. `example.com`
-- `zone_id` (String) The zone ID.
-
-### Read-Only
-
-- `acl` (String) The access control list.
-- `active` (Boolean)
-- `contact_email` (String) A contact e-mail for the zone.
-- `default_ttl` (Number) Default time to live.
-- `description` (String) Description of the zone.
-- `expire_time` (Number) Expire time.
-- `id` (String) Terraform's internal data source. ID. It is structured as "`project_id`,`zone_id`".
-- `is_reverse_zone` (Boolean) Specifies, if the zone is a reverse zone or not.
-- `name` (String) The user given name of the zone.
-- `negative_cache` (Number) Negative caching.
-- `primaries` (List of String) Primary name server for secondary zone.
-- `primary_name_server` (String) Primary name server. FQDN.
-- `record_count` (Number) Record count how many records are in the zone.
-- `refresh_time` (Number) Refresh time.
-- `retry_time` (Number) Retry time.
-- `serial_number` (Number) Serial number.
-- `state` (String) Zone state.
-- `type` (String) Zone type.
-- `visibility` (String) Visibility of the zone.
diff --git a/docs/data-sources/git.md b/docs/data-sources/git.md
deleted file mode 100644
index a2be6b18..00000000
--- a/docs/data-sources/git.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "stackit_git Data Source - stackit"
-subcategory: ""
-description: |-
-  Git Instance datasource schema.
-  ~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources.
----
-
-# stackit_git (Data Source)
-
-Git Instance datasource schema.
-
-~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources.
-
-## Example Usage
-
-```terraform
-data "stackit_git" "git" {
-  project_id  = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `instance_id` (String) ID linked to the git instance.
-- `project_id` (String) STACKIT project ID to which the git instance is associated.
-
-### Read-Only
-
-- `acl` (List of String) Restricted ACL for instance access.
-- `consumed_disk` (String) How many bytes of disk space is consumed.
-- `consumed_object_storage` (String) How many bytes of Object Storage is consumed.
-- `created` (String) Instance creation timestamp in RFC3339 format.
-- `flavor` (String) Instance flavor. If not provided, defaults to git-100. For a list of available flavors, refer to our API documentation: `https://docs.api.stackit.cloud/documentation/git/version/v1beta`
-- `id` (String) Terraform's internal resource ID, structured as "`project_id`,`instance_id`".
-- `name` (String) Unique name linked to the git instance.
-- `url` (String) Url linked to the git instance.
-- `version` (String) Version linked to the git instance.
diff --git a/docs/data-sources/iaas_project.md b/docs/data-sources/iaas_project.md
deleted file mode 100644
index 19aea853..00000000
--- a/docs/data-sources/iaas_project.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "stackit_iaas_project Data Source - stackit"
-subcategory: ""
-description: |-
-  Project details. Must have a region specified in the provider configuration.
----
-
-# stackit_iaas_project (Data Source)
-
-Project details. Must have a `region` specified in the provider configuration.
-
-## Example Usage
-
-```terraform
-data "stackit_iaas_project" "example" {
-  project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `project_id` (String) STACKIT project ID.
-
-### Read-Only
-
-- `area_id` (String) The area ID to which the project belongs to.
-- `created_at` (String) Date-time when the project was created.
-- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`".
-- `internet_access` (Boolean) Specifies if the project has internet_access
-- `state` (String, Deprecated) Specifies the status of the project.
-- `status` (String) Specifies the status of the project.
-- `updated_at` (String) Date-time when the project was last updated.
diff --git a/docs/data-sources/image.md b/docs/data-sources/image.md
deleted file mode 100644
index 34fa0c35..00000000
--- a/docs/data-sources/image.md
+++ /dev/null
@@ -1,73 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "stackit_image Data Source - stackit"
-subcategory: ""
-description: |-
-  Image datasource schema. Must have a region specified in the provider configuration.
----
-
-# stackit_image (Data Source)
-
-Image datasource schema. Must have a `region` specified in the provider configuration.
-
-## Example Usage
-
-```terraform
-data "stackit_image" "example" {
-  project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  image_id   = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `image_id` (String) The image ID.
-- `project_id` (String) STACKIT project ID to which the image is associated.
-
-### Optional
-
-- `region` (String) The resource region. If not defined, the provider region is used.
-
-### Read-Only
-
-- `checksum` (Attributes) Representation of an image checksum. (see [below for nested schema](#nestedatt--checksum))
-- `config` (Attributes) Properties to set hardware and scheduling settings for an image. (see [below for nested schema](#nestedatt--config))
-- `disk_format` (String) The disk format of the image.
-- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`image_id`".
-- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container
-- `min_disk_size` (Number) The minimum disk size of the image in GB.
-- `min_ram` (Number) The minimum RAM of the image in MB.
-- `name` (String) The name of the image.
-- `protected` (Boolean) Whether the image is protected.
-- `scope` (String) The scope of the image.
-
-
-### Nested Schema for `checksum`
-
-Read-Only:
-
-- `algorithm` (String) Algorithm for the checksum of the image data.
-- `digest` (String) Hexdigest of the checksum of the image data.
-
-
-
-### Nested Schema for `config`
-
-Read-Only:
-
-- `boot_menu` (Boolean) Enables the BIOS bootmenu.
-- `cdrom_bus` (String) Sets CDROM bus controller type.
-- `disk_bus` (String) Sets Disk bus controller type.
-- `nic_model` (String) Sets virtual network interface model.
-- `operating_system` (String) Enables operating system specific optimizations.
-- `operating_system_distro` (String) Operating system distribution.
-- `operating_system_version` (String) Version of the operating system.
-- `rescue_bus` (String) Sets the device bus when the image is used as a rescue image.
-- `rescue_device` (String) Sets the device when the image is used as a rescue image.
-- `secure_boot` (Boolean) Enables Secure Boot.
-- `uefi` (Boolean) Enables UEFI boot.
-- `video_model` (String) Sets Graphic device model.
-- `virtio_scsi` (Boolean) Enables the use of VirtIO SCSI to provide block device access. By default instances use VirtIO Block.
diff --git a/docs/data-sources/image_v2.md b/docs/data-sources/image_v2.md
deleted file mode 100644
index b417f17b..00000000
--- a/docs/data-sources/image_v2.md
+++ /dev/null
@@ -1,161 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "stackit_image_v2 Data Source - stackit"
-subcategory: ""
-description: |-
-  Image datasource schema. Must have a region specified in the provider configuration.
-  ~> Important: When using the name, name_regex, or filter attributes to select images dynamically, be aware that image IDs may change frequently. Each OS patch or update results in a new unique image ID. If this data source is used to populate fields like boot_volume.source_id in a server resource, it may cause Terraform to detect changes and recreate the associated resource.
-  To avoid unintended updates or resource replacements:
-  Prefer using a static image_id to pin a specific image version.
-  If you accept automatic image updates but wish to suppress resource changes, use a lifecycle block to ignore relevant changes. For example:
-
-  resource "stackit_server" "example" {
-    boot_volume = {
-      size        = 64
-      source_type = "image"
-      source_id   = data.stackit_image.latest.id
-    }
-
-    lifecycle {
-      ignore_changes = [boot_volume[0].source_id]
-    }
-  }
-
-  You can also list available images using the STACKIT CLI https://github.com/stackitcloud/stackit-cli:
-
-  stackit image list
-
-  ~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources.
----
-
-# stackit_image_v2 (Data Source)
-
-Image datasource schema. Must have a `region` specified in the provider configuration.
-
-~> Important: When using the `name`, `name_regex`, or `filter` attributes to select images dynamically, be aware that image IDs may change frequently. Each OS patch or update results in a new unique image ID. If this data source is used to populate fields like `boot_volume.source_id` in a server resource, it may cause Terraform to detect changes and recreate the associated resource.
-
-To avoid unintended updates or resource replacements:
-
-  - Prefer using a static `image_id` to pin a specific image version.
-  - If you accept automatic image updates but wish to suppress resource changes, use a `lifecycle` block to ignore relevant changes. For example:
-
-```hcl
-resource "stackit_server" "example" {
-  boot_volume = {
-    size        = 64
-    source_type = "image"
-    source_id   = data.stackit_image.latest.id
-  }
-
-  lifecycle {
-    ignore_changes = [boot_volume[0].source_id]
-  }
-}
-```
-
-You can also list available images using the [STACKIT CLI](https://github.com/stackitcloud/stackit-cli):
-
-```bash
-stackit image list
-```
-
-~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources.
-
-## Example Usage
-
-```terraform
-data "stackit_image_v2" "default" {
-  project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  image_id   = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
-
-data "stackit_image_v2" "name_match" {
-  project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  name       = "Ubuntu 22.04"
-}
-
-data "stackit_image_v2" "name_regex_latest" {
-  project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  name_regex = "^Ubuntu .*"
-}
-
-data "stackit_image_v2" "name_regex_oldest" {
-  project_id     = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  name_regex     = "^Ubuntu .*"
-  sort_ascending = true
-}
-
-data "stackit_image_v2" "filter_distro_version" {
-  project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  filter = {
-    distro  = "debian"
-    version = "11"
-  }
-}
-```
-
-
-## Schema
-
-### Required
-
-- `project_id` (String) STACKIT project ID to which the image is associated.
-
-### Optional
-
-- `filter` (Attributes) Additional filtering options based on image properties. Can be used independently or in conjunction with `name` or `name_regex`. (see [below for nested schema](#nestedatt--filter))
-- `image_id` (String) Image ID to fetch directly
-- `name` (String) Exact image name to match. Optionally applies a `filter` block to further refine results in case multiple images share the same name. The first match is returned, optionally sorted by name in ascending order. Cannot be used together with `name_regex`.
-- `name_regex` (String) Regular expression to match against image names. Optionally applies a `filter` block to narrow down results when multiple image names match the regex. The first match is returned, optionally sorted by name in ascending order. Cannot be used together with `name`.
-- `region` (String) The resource region. If not defined, the provider region is used.
-- `sort_ascending` (Boolean) If set to `true`, images are sorted in ascending lexicographical order by image name (such as `Ubuntu 18.04`, `Ubuntu 20.04`, `Ubuntu 22.04`) before selecting the first match. Defaults to `false` (descending such as `Ubuntu 22.04`, `Ubuntu 20.04`, `Ubuntu 18.04`).
-
-### Read-Only
-
-- `checksum` (Attributes) Representation of an image checksum. (see [below for nested schema](#nestedatt--checksum))
-- `config` (Attributes) Properties to set hardware and scheduling settings for an image. (see [below for nested schema](#nestedatt--config))
-- `disk_format` (String) The disk format of the image.
-- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`image_id`".
-- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container
-- `min_disk_size` (Number) The minimum disk size of the image in GB.
-- `min_ram` (Number) The minimum RAM of the image in MB.
-- `protected` (Boolean) Whether the image is protected.
-- `scope` (String) The scope of the image.
-
-
-### Nested Schema for `filter`
-
-Optional:
-
-- `distro` (String) Filter images by operating system distribution. For example: `ubuntu`, `ubuntu-arm64`, `debian`, `rhel`, etc.
-- `os` (String) Filter images by operating system type, such as `linux` or `windows`.
-- `secure_boot` (Boolean) Filter images with Secure Boot support. Set to `true` to match images that support Secure Boot.
-- `uefi` (Boolean) Filter images based on UEFI support. Set to `true` to match images that support UEFI.
-- `version` (String) Filter images by OS distribution version, such as `22.04`, `11`, or `9.1`.
-
-
-
-### Nested Schema for `checksum`
-
-Read-Only:
-
-- `algorithm` (String) Algorithm for the checksum of the image data.
-- `digest` (String) Hexdigest of the checksum of the image data.
-
-
-
-### Nested Schema for `config`
-
-Read-Only:
-
-- `boot_menu` (Boolean) Enables the BIOS bootmenu.
-- `cdrom_bus` (String) Sets CDROM bus controller type.
-- `disk_bus` (String) Sets Disk bus controller type.
-- `nic_model` (String) Sets virtual network interface model.
-- `operating_system` (String) Enables operating system specific optimizations.
-- `operating_system_distro` (String) Operating system distribution.
-- `operating_system_version` (String) Version of the operating system.
-- `rescue_bus` (String) Sets the device bus when the image is used as a rescue image.
-- `rescue_device` (String) Sets the device when the image is used as a rescue image.
-- `secure_boot` (Boolean) Enables Secure Boot.
-- `uefi` (Boolean) Enables UEFI boot.
-- `video_model` (String) Sets Graphic device model.
-- `virtio_scsi` (Boolean) Enables the use of VirtIO SCSI to provide block device access. By default instances use VirtIO Block.
diff --git a/docs/data-sources/key_pair.md b/docs/data-sources/key_pair.md
deleted file mode 100644
index 6000df6e..00000000
--- a/docs/data-sources/key_pair.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "stackit_key_pair Data Source - stackit"
-subcategory: ""
-description: |-
-  Key pair resource schema. Must have a region specified in the provider configuration.
----
-
-# stackit_key_pair (Data Source)
-
-Key pair resource schema. Must have a `region` specified in the provider configuration.
- -## Example Usage - -```terraform -data "stackit_key_pair" "example" { - name = "example-key-pair-name" -} -``` - - -## Schema - -### Required - -- `name` (String) The name of the SSH key pair. - -### Read-Only - -- `fingerprint` (String) The fingerprint of the public SSH key. -- `id` (String) Terraform's internal resource ID. It takes the value of the key pair "`name`". -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container. -- `public_key` (String) A string representation of the public SSH key. E.g., `ssh-rsa ` or `ssh-ed25519 `. diff --git a/docs/data-sources/kms_key.md b/docs/data-sources/kms_key.md deleted file mode 100644 index d853c151..00000000 --- a/docs/data-sources/kms_key.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_kms_key Data Source - stackit" -subcategory: "" -description: |- - KMS Key datasource schema. Uses the default_region specified in the provider configuration as a fallback in case no region is defined on datasource level. ---- - -# stackit_kms_key (Data Source) - -KMS Key datasource schema. Uses the `default_region` specified in the provider configuration as a fallback in case no `region` is defined on datasource level. - -## Example Usage - -```terraform -data "stackit_kms_key" "key" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - keyring_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - key_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `key_id` (String) The ID of the key -- `keyring_id` (String) The ID of the associated key ring -- `project_id` (String) STACKIT project ID to which the key is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `access_scope` (String) The access scope of the key. Default is `PUBLIC`. Possible values are: `PUBLIC`, `SNA`. 
-- `algorithm` (String) The encryption algorithm that the key will use to encrypt data. Possible values are: `aes_256_gcm`, `rsa_2048_oaep_sha256`, `rsa_3072_oaep_sha256`, `rsa_4096_oaep_sha256`, `rsa_4096_oaep_sha512`, `hmac_sha256`, `hmac_sha384`, `hmac_sha512`, `ecdsa_p256_sha256`, `ecdsa_p384_sha384`, `ecdsa_p521_sha512`. -- `description` (String) A user chosen description to distinguish multiple keys -- `display_name` (String) The display name to distinguish multiple keys -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`keyring_id`,`key_id`". -- `import_only` (Boolean) States whether versions can be created or only imported. -- `protection` (String) The underlying system that is responsible for protecting the key material. Possible values are: `software`. -- `purpose` (String) The purpose for which the key will be used. Possible values are: `symmetric_encrypt_decrypt`, `asymmetric_encrypt_decrypt`, `message_authentication_code`, `asymmetric_sign_verify`. diff --git a/docs/data-sources/kms_keyring.md b/docs/data-sources/kms_keyring.md deleted file mode 100644 index 6b820194..00000000 --- a/docs/data-sources/kms_keyring.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_kms_keyring Data Source - stackit" -subcategory: "" -description: |- - KMS Keyring datasource schema. Uses the default_region specified in the provider configuration as a fallback in case no region is defined on datasource level. ---- - -# stackit_kms_keyring (Data Source) - -KMS Keyring datasource schema. Uses the `default_region` specified in the provider configuration as a fallback in case no `region` is defined on datasource level. 
- -## Example Usage - -```terraform -data "stackit_kms_keyring" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - keyring_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `keyring_id` (String) An auto generated unique id which identifies the keyring. -- `project_id` (String) STACKIT project ID to which the keyring is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `description` (String) A user chosen description to distinguish multiple keyrings. -- `display_name` (String) The display name to distinguish multiple keyrings. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`keyring_id`". diff --git a/docs/data-sources/kms_wrapping_key.md b/docs/data-sources/kms_wrapping_key.md deleted file mode 100644 index 83e66a93..00000000 --- a/docs/data-sources/kms_wrapping_key.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_kms_wrapping_key Data Source - stackit" -subcategory: "" -description: |- - KMS wrapping key datasource schema. ---- - -# stackit_kms_wrapping_key (Data Source) - -KMS wrapping key datasource schema. - -## Example Usage - -```terraform -data "stackit_kms_wrapping_key" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - keyring_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - wrapping_key_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `keyring_id` (String) The ID of the associated keyring -- `project_id` (String) STACKIT project ID to which the keyring is associated. -- `wrapping_key_id` (String) The ID of the wrapping key - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `access_scope` (String) The access scope of the key. Default is `PUBLIC`. 
Possible values are: `PUBLIC`, `SNA`. -- `algorithm` (String) The wrapping algorithm used to wrap the key to import. Possible values are: `rsa_2048_oaep_sha256`, `rsa_3072_oaep_sha256`, `rsa_4096_oaep_sha256`, `rsa_4096_oaep_sha512`, `rsa_2048_oaep_sha256_aes_256_key_wrap`, `rsa_3072_oaep_sha256_aes_256_key_wrap`, `rsa_4096_oaep_sha256_aes_256_key_wrap`, `rsa_4096_oaep_sha512_aes_256_key_wrap`. -- `created_at` (String) The date and time the creation of the wrapping key was triggered. -- `description` (String) A user chosen description to distinguish multiple wrapping keys. -- `display_name` (String) The display name to distinguish multiple wrapping keys. -- `expires_at` (String) The date and time the wrapping key will expire. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`keyring_id`,`wrapping_key_id`". -- `protection` (String) The underlying system that is responsible for protecting the key material. Possible values are: `software`. -- `public_key` (String) The public key of the wrapping key. -- `purpose` (String) The purpose for which the key will be used. Possible values are: `wrap_symmetric_key`, `wrap_asymmetric_key`. diff --git a/docs/data-sources/loadbalancer.md b/docs/data-sources/loadbalancer.md deleted file mode 100644 index e6b9f411..00000000 --- a/docs/data-sources/loadbalancer.md +++ /dev/null @@ -1,174 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_loadbalancer Data Source - stackit" -subcategory: "" -description: |- - Load Balancer data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_loadbalancer (Data Source) - -Load Balancer data source schema. Must have a `region` specified in the provider configuration. 
- -## Example Usage - -```terraform -data "stackit_loadbalancer" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-load-balancer" -} -``` - - -## Schema - -### Required - -- `name` (String) Load balancer name. -- `project_id` (String) STACKIT project ID to which the Load Balancer is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `disable_security_group_assignment` (Boolean) If set to true, this will disable the automatic assignment of a security group to the load balancer's targets. This option is primarily used to allow targets that are not within the load balancer's own network or SNA (STACKIT Network area). When this is enabled, you are fully responsible for ensuring network connectivity to the targets, including managing all routing and security group rules manually. This setting cannot be changed after the load balancer is created. -- `external_address` (String) External Load Balancer IP address where this Load Balancer is exposed. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`name`". -- `listeners` (Attributes List) List of all listeners which will accept traffic. Limited to 20. (see [below for nested schema](#nestedatt--listeners)) -- `networks` (Attributes List) List of networks that listeners and targets reside in. (see [below for nested schema](#nestedatt--networks)) -- `options` (Attributes) Defines any optional functionality you want to have enabled on your load balancer. (see [below for nested schema](#nestedatt--options)) -- `plan_id` (String) The service plan ID. If not defined, the default service plan is `p10`. Possible values are: `p10`, `p50`, `p250`, `p750`. -- `private_address` (String) Transient private Load Balancer IP address. It can change any time. -- `security_group_id` (String) The ID of the egress security group assigned to the Load Balancer's internal machines.
This ID is essential for allowing traffic from the Load Balancer to targets in different networks or STACKIT Network areas (SNA). To enable this, create a security group rule for your target VMs and set the `remote_security_group_id` of that rule to this value. This is typically used when `disable_security_group_assignment` is set to `true`. -- `target_pools` (Attributes List) List of all target pools which will be used in the Load Balancer. Limited to 20. (see [below for nested schema](#nestedatt--target_pools)) - - -### Nested Schema for `listeners` - -Optional: - -- `server_name_indicators` (Attributes List) A list of domain names to match in order to pass TLS traffic to the target pool in the current listener (see [below for nested schema](#nestedatt--listeners--server_name_indicators)) - -Read-Only: - -- `display_name` (String) -- `port` (Number) Port number where we listen for traffic. -- `protocol` (String) Protocol is the highest network protocol we understand to load balance. -- `target_pool` (String) Reference target pool by target pool name. -- `tcp` (Attributes) Options that are specific to the TCP protocol. (see [below for nested schema](#nestedatt--listeners--tcp)) -- `udp` (Attributes) Options that are specific to the UDP protocol. (see [below for nested schema](#nestedatt--listeners--udp)) - - -### Nested Schema for `listeners.server_name_indicators` - -Optional: - -- `name` (String) A domain name to match in order to pass TLS traffic to the target pool in the current listener - - - -### Nested Schema for `listeners.tcp` - -Read-Only: - -- `idle_timeout` (String) Time after which an idle connection is closed. The default value is set to 5 minutes, and the maximum value is one hour. - - - -### Nested Schema for `listeners.udp` - -Read-Only: - -- `idle_timeout` (String) Time after which an idle session is closed. The default value is set to 1 minute, and the maximum value is 2 minutes. 
- - - - -### Nested Schema for `networks` - -Read-Only: - -- `network_id` (String) Openstack network ID. -- `role` (String) The role defines how the load balancer is using the network. - - - -### Nested Schema for `options` - -Read-Only: - -- `acl` (Set of String) Load Balancer is accessible only from an IP address in this range. -- `observability` (Attributes) We offer Load Balancer metrics observability via ARGUS or external solutions. (see [below for nested schema](#nestedatt--options--observability)) -- `private_network_only` (Boolean) If true, Load Balancer is accessible only via a private network IP address. - - -### Nested Schema for `options.observability` - -Read-Only: - -- `logs` (Attributes) Observability logs configuration. (see [below for nested schema](#nestedatt--options--observability--logs)) -- `metrics` (Attributes) Observability metrics configuration. (see [below for nested schema](#nestedatt--options--observability--metrics)) - - -### Nested Schema for `options.observability.logs` - -Read-Only: - -- `credentials_ref` (String) Credentials reference for logs. -- `push_url` (String) Push URL for logs. - - - -### Nested Schema for `options.observability.metrics` - -Read-Only: - -- `credentials_ref` (String) Credentials reference for metrics. -- `push_url` (String) Push URL for metrics. - - - - - -### Nested Schema for `target_pools` - -Optional: - -- `session_persistence` (Attributes) Here you can set up various session persistence options, so far only "`use_source_ip_address`" is supported. (see [below for nested schema](#nestedatt--target_pools--session_persistence)) - -Read-Only: - -- `active_health_check` (Attributes) (see [below for nested schema](#nestedatt--target_pools--active_health_check)) -- `name` (String) Target pool name. -- `target_port` (Number) Identical port number where each target listens for traffic. -- `targets` (Attributes List) List of all targets which will be used in the pool. Limited to 1000.
(see [below for nested schema](#nestedatt--target_pools--targets)) - - -### Nested Schema for `target_pools.session_persistence` - -Optional: - -- `use_source_ip_address` (Boolean) If true then all connections from one source IP address are redirected to the same target. This setting changes the load balancing algorithm to Maglev. - - - -### Nested Schema for `target_pools.active_health_check` - -Read-Only: - -- `healthy_threshold` (Number) Healthy threshold of the health checking. -- `interval` (String) Interval duration of health checking in seconds. -- `interval_jitter` (String) Interval duration threshold of the health checking in seconds. -- `timeout` (String) Active health checking timeout duration in seconds. -- `unhealthy_threshold` (Number) Unhealthy threshold of the health checking. - - - -### Nested Schema for `target_pools.targets` - -Read-Only: - -- `display_name` (String) Target display name -- `ip` (String) Target IP diff --git a/docs/data-sources/logme_credential.md b/docs/data-sources/logme_credential.md deleted file mode 100644 index 147cd6cc..00000000 --- a/docs/data-sources/logme_credential.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_logme_credential Data Source - stackit" -subcategory: "" -description: |- - LogMe credential data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_logme_credential (Data Source) - -LogMe credential data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_logme_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `credential_id` (String) The credential's ID. -- `instance_id` (String) ID of the LogMe instance. 
-- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Read-Only - -- `host` (String) -- `id` (String) Terraform's internal data source identifier. It is structured as "`project_id`,`instance_id`,`credential_id`". -- `password` (String, Sensitive) -- `port` (Number) -- `uri` (String, Sensitive) -- `username` (String) diff --git a/docs/data-sources/logme_instance.md b/docs/data-sources/logme_instance.md deleted file mode 100644 index bd29872c..00000000 --- a/docs/data-sources/logme_instance.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_logme_instance Data Source - stackit" -subcategory: "" -description: |- - LogMe instance data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_logme_instance (Data Source) - -LogMe instance data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_logme_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the LogMe instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. - -### Read-Only - -- `cf_guid` (String) -- `cf_organization_guid` (String) -- `cf_space_guid` (String) -- `dashboard_url` (String) -- `id` (String) Terraform's internal data source identifier. It is structured as "`project_id`,`instance_id`". -- `image_url` (String) -- `name` (String) Instance name. -- `parameters` (Attributes) (see [below for nested schema](#nestedatt--parameters)) -- `plan_id` (String) The selected plan ID. -- `plan_name` (String) The selected plan name. -- `version` (String) The service version. - - -### Nested Schema for `parameters` - -Read-Only: - -- `enable_monitoring` (Boolean) Enable monitoring.
-- `fluentd_tcp` (Number) -- `fluentd_tls` (Number) -- `fluentd_tls_ciphers` (String) -- `fluentd_tls_max_version` (String) -- `fluentd_tls_min_version` (String) -- `fluentd_tls_version` (String) -- `fluentd_udp` (Number) -- `graphite` (String) If set, monitoring with Graphite will be enabled. Expects the host and port where the Graphite metrics should be sent to (host:port). -- `ism_deletion_after` (String) Combination of an integer and a timerange when an index will be considered "old" and can be deleted. Possible values for the timerange are `s`, `m`, `h` and `d`. -- `ism_jitter` (Number) Jitter of the execution time. -- `ism_job_interval` (Number) -- `java_heapspace` (Number) The amount of memory (in MB) allocated as heap by the JVM for OpenSearch. -- `java_maxmetaspace` (Number) The amount of memory (in MB) used by the JVM to store metadata for OpenSearch. -- `max_disk_threshold` (Number) The maximum disk threshold in MB. If the disk usage exceeds this threshold, the instance will be stopped. -- `metrics_frequency` (Number) The frequency in seconds at which metrics are emitted. -- `metrics_prefix` (String) The prefix for the metrics. Could be useful when using Graphite monitoring to prefix the metrics with a certain value, like an API key. -- `monitoring_instance_id` (String) The ID of the STACKIT monitoring instance. -- `opensearch_tls_ciphers` (List of String) -- `opensearch_tls_protocols` (List of String) -- `sgw_acl` (String) Comma separated list of IP networks in CIDR notation which are allowed to access this instance. -- `syslog` (List of String) List of syslog servers to send logs to.
diff --git a/docs/data-sources/machine_type.md b/docs/data-sources/machine_type.md deleted file mode 100644 index 7a200ae0..00000000 --- a/docs/data-sources/machine_type.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_machine_type Data Source - stackit" -subcategory: "" -description: |- - Machine type data source. - ~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources. ---- - -# stackit_machine_type (Data Source) - -Machine type data source. - -~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources. - -## Example Usage - -```terraform -data "stackit_machine_type" "two_vcpus_filter" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - filter = "vcpus==2" -} - -data "stackit_machine_type" "filter_sorted_ascending_false" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - filter = "vcpus >= 2 && ram >= 2048" - sort_ascending = false -} - -data "stackit_machine_type" "intel_icelake_generic_filter" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - filter = "extraSpecs.cpu==\"intel-icelake-generic\" && vcpus == 2" -} - -# returns warning -data "stackit_machine_type" "no_match" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - filter = "vcpus == 99" -} -``` - - -## Schema - -### Required - -- `filter` (String) Expr-lang filter for filtering machine types. 
- -Examples: -- vcpus == 2 -- ram >= 2048 -- extraSpecs.cpu == "intel-icelake-generic" -- extraSpecs.cpu == "intel-icelake-generic" && vcpus == 2 - -Syntax reference: https://expr-lang.org/docs/language-definition - -You can also list available machine-types using the [STACKIT CLI](https://github.com/stackitcloud/stackit-cli): - -```bash -stackit server machine-type list -``` -- `project_id` (String) STACKIT Project ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. -- `sort_ascending` (Boolean) Sort machine types by name ascending (`true`) or descending (`false`). Defaults to `false`. - -### Read-Only - -- `description` (String) Machine type description. -- `disk` (Number) Disk size in GB. -- `extra_specs` (Map of String) Extra specs (e.g., CPU type, overcommit ratio). -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`name`". -- `name` (String) Name of the machine type (e.g. 's1.2'). -- `ram` (Number) RAM size in MB. -- `vcpus` (Number) Number of vCPUs. diff --git a/docs/data-sources/mariadb_credential.md b/docs/data-sources/mariadb_credential.md deleted file mode 100644 index dc5de136..00000000 --- a/docs/data-sources/mariadb_credential.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_mariadb_credential Data Source - stackit" -subcategory: "" -description: |- - MariaDB credential data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_mariadb_credential (Data Source) - -MariaDB credential data source schema. Must have a `region` specified in the provider configuration.
- -## Example Usage - -```terraform -data "stackit_mariadb_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `credential_id` (String) The credential's ID. -- `instance_id` (String) ID of the MariaDB instance. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Read-Only - -- `host` (String) -- `hosts` (List of String) -- `id` (String) Terraform's internal data source identifier. It is structured as "`project_id`,`instance_id`,`credential_id`". -- `name` (String) -- `password` (String, Sensitive) -- `port` (Number) -- `uri` (String, Sensitive) -- `username` (String) diff --git a/docs/data-sources/mariadb_instance.md b/docs/data-sources/mariadb_instance.md deleted file mode 100644 index be2553cc..00000000 --- a/docs/data-sources/mariadb_instance.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_mariadb_instance Data Source - stackit" -subcategory: "" -description: |- - MariaDB instance data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_mariadb_instance (Data Source) - -MariaDB instance data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_mariadb_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the MariaDB instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. - -### Read-Only - -- `cf_guid` (String) -- `cf_organization_guid` (String) -- `cf_space_guid` (String) -- `dashboard_url` (String) -- `id` (String) Terraform's internal data source identifier.
It is structured as "`project_id`,`instance_id`". -- `image_url` (String) -- `name` (String) Instance name. -- `parameters` (Attributes) (see [below for nested schema](#nestedatt--parameters)) -- `plan_id` (String) The selected plan ID. -- `plan_name` (String) The selected plan name. -- `version` (String) The service version. - - -### Nested Schema for `parameters` - -Read-Only: - -- `enable_monitoring` (Boolean) Enable monitoring. -- `graphite` (String) -- `max_disk_threshold` (Number) The maximum disk threshold in MB. If the disk usage exceeds this threshold, the instance will be stopped. -- `metrics_frequency` (Number) The frequency in seconds at which metrics are emitted. -- `metrics_prefix` (String) The prefix for the metrics. Could be useful when using Graphite monitoring to prefix the metrics with a certain value, like an API key. -- `monitoring_instance_id` (String) The ID of the STACKIT monitoring instance. -- `sgw_acl` (String) Comma separated list of IP networks in CIDR notation which are allowed to access this instance. -- `syslog` (List of String) List of syslog servers to send logs to. diff --git a/docs/data-sources/mongodbflex_instance.md b/docs/data-sources/mongodbflex_instance.md deleted file mode 100644 index 47ad14ae..00000000 --- a/docs/data-sources/mongodbflex_instance.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_mongodbflex_instance Data Source - stackit" -subcategory: "" -description: |- - MongoDB Flex instance data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_mongodbflex_instance (Data Source) - -MongoDB Flex instance data source schema. Must have a `region` specified in the provider configuration.
- -## Example Usage - -```terraform -data "stackit_mongodbflex_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the MongoDB Flex instance. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `acl` (List of String) The Access Control List (ACL) for the MongoDB Flex instance. -- `backup_schedule` (String) The backup schedule. Should follow the cron scheduling system format (e.g. "0 0 * * *"). -- `flavor` (Attributes) (see [below for nested schema](#nestedatt--flavor)) -- `id` (String) Terraform's internal data source ID. It is structured as "`project_id`,`region`,`instance_id`". -- `name` (String) Instance name. -- `options` (Attributes) Custom parameters for the MongoDB Flex instance. (see [below for nested schema](#nestedatt--options)) -- `replicas` (Number) -- `storage` (Attributes) (see [below for nested schema](#nestedatt--storage)) -- `version` (String) - - -### Nested Schema for `flavor` - -Read-Only: - -- `cpu` (Number) -- `description` (String) -- `id` (String) -- `ram` (Number) - - - -### Nested Schema for `options` - -Read-Only: - -- `daily_snapshot_retention_days` (Number) The number of days that daily backups will be retained. -- `monthly_snapshot_retention_months` (Number) The number of months that monthly backups will be retained. -- `point_in_time_window_hours` (Number) The number of hours back in time the point-in-time recovery feature will be able to recover. -- `snapshot_retention_days` (Number) The number of days that continuous backups (controlled via the `backup_schedule`) will be retained. -- `type` (String) Type of the MongoDB Flex instance. 
-- `weekly_snapshot_retention_weeks` (Number) The number of weeks that weekly backups will be retained. - - - -### Nested Schema for `storage` - -Read-Only: - -- `class` (String) -- `size` (Number) diff --git a/docs/data-sources/mongodbflex_user.md b/docs/data-sources/mongodbflex_user.md deleted file mode 100644 index 4489321f..00000000 --- a/docs/data-sources/mongodbflex_user.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_mongodbflex_user Data Source - stackit" -subcategory: "" -description: |- - MongoDB Flex user data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_mongodbflex_user (Data Source) - -MongoDB Flex user data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_mongodbflex_user" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - user_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the MongoDB Flex instance. -- `project_id` (String) STACKIT project ID to which the instance is associated. -- `user_id` (String) User ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `database` (String) -- `host` (String) -- `id` (String) Terraform's internal data source ID. It is structured as "`project_id`,`region`,`instance_id`,`user_id`".
-- `port` (Number) -- `roles` (Set of String) -- `username` (String) diff --git a/docs/data-sources/network.md b/docs/data-sources/network.md deleted file mode 100644 index dba50f60..00000000 --- a/docs/data-sources/network.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_network Data Source - stackit" -subcategory: "" -description: |- - Network resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_network (Data Source) - -Network resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_network" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `network_id` (String) The network ID. -- `project_id` (String) STACKIT project ID to which the network is associated. - -### Optional - -- `region` (String) Can only be used when experimental "network" is set. This is likely going to undergo significant changes or be removed in the future. -The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`network_id`". -- `ipv4_gateway` (String) The IPv4 gateway of a network. If not specified, the first IP of the network will be assigned as the gateway. -- `ipv4_nameservers` (List of String) The IPv4 nameservers of the network. -- `ipv4_prefix` (String, Deprecated) The IPv4 prefix of the network (CIDR). -- `ipv4_prefix_length` (Number) The IPv4 prefix length of the network. -- `ipv4_prefixes` (List of String) The IPv4 prefixes of the network. -- `ipv6_gateway` (String) The IPv6 gateway of a network. If not specified, the first IP of the network will be assigned as the gateway. -- `ipv6_nameservers` (List of String) The IPv6 nameservers of the network. 
-- `ipv6_prefix` (String, Deprecated) The IPv6 prefix of the network (CIDR). -- `ipv6_prefix_length` (Number) The IPv6 prefix length of the network. -- `ipv6_prefixes` (List of String) The IPv6 prefixes of the network. -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `name` (String) The name of the network. -- `nameservers` (List of String, Deprecated) The nameservers of the network. This field is deprecated and will be removed soon, use `ipv4_nameservers` to configure the nameservers for IPv4. -- `prefixes` (List of String, Deprecated) The prefixes of the network. This field is deprecated and will be removed soon, use `ipv4_prefixes` to read the prefixes of the IPv4 networks. -- `public_ip` (String) The public IP of the network. -- `routed` (Boolean) Shows if the network is routed and therefore accessible from other networks. -- `routing_table_id` (String) Can only be used when experimental "network" is set. This is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. -The ID of the routing table associated with the network. diff --git a/docs/data-sources/network_area.md b/docs/data-sources/network_area.md deleted file mode 100644 index 86590676..00000000 --- a/docs/data-sources/network_area.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_network_area Data Source - stackit" -subcategory: "" -description: |- - Network area datasource schema. Must have a region specified in the provider configuration. ---- - -# stackit_network_area (Data Source) - -Network area datasource schema. Must have a `region` specified in the provider configuration. 
- -## Example Usage - -```terraform -data "stackit_network_area" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `network_area_id` (String) The network area ID. -- `organization_id` (String) STACKIT organization ID to which the network area is associated. - -### Read-Only - -- `default_nameservers` (List of String, Deprecated) List of DNS Servers/Nameservers. -- `default_prefix_length` (Number, Deprecated) The default prefix length for networks in the network area. -- `id` (String) Terraform's internal resource ID. It is structured as "`organization_id`,`network_area_id`". -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `max_prefix_length` (Number, Deprecated) The maximal prefix length for networks in the network area. -- `min_prefix_length` (Number, Deprecated) The minimal prefix length for networks in the network area. -- `name` (String) The name of the network area. -- `network_ranges` (Attributes List, Deprecated) List of Network ranges. (see [below for nested schema](#nestedatt--network_ranges)) -- `project_count` (Number) The amount of projects currently referencing this area. -- `transfer_network` (String, Deprecated) Classless Inter-Domain Routing (CIDR). - - -### Nested Schema for `network_ranges` - -Read-Only: - -- `network_range_id` (String) -- `prefix` (String) diff --git a/docs/data-sources/network_area_region.md b/docs/data-sources/network_area_region.md deleted file mode 100644 index 09ac1be3..00000000 --- a/docs/data-sources/network_area_region.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_network_area_region Data Source - stackit" -subcategory: "" -description: |- - Network area region data source schema. 
---- - -# stackit_network_area_region (Data Source) - -Network area region data source schema. - -## Example Usage - -```terraform -data "stackit_network_area_region" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `network_area_id` (String) The network area ID. -- `organization_id` (String) STACKIT organization ID to which the network area is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`organization_id`,`network_area_id`,`region`". -- `ipv4` (Attributes) The regional IPv4 config of a network area. (see [below for nested schema](#nestedatt--ipv4)) - - -### Nested Schema for `ipv4` - -Read-Only: - -- `default_nameservers` (List of String) List of DNS Servers/Nameservers. -- `default_prefix_length` (Number) The default prefix length for networks in the network area. -- `max_prefix_length` (Number) The maximal prefix length for networks in the network area. -- `min_prefix_length` (Number) The minimal prefix length for networks in the network area. -- `network_ranges` (Attributes List) List of Network ranges. (see [below for nested schema](#nestedatt--ipv4--network_ranges)) -- `transfer_network` (String) IPv4 Classless Inter-Domain Routing (CIDR). - - -### Nested Schema for `ipv4.network_ranges` - -Read-Only: - -- `network_range_id` (String) -- `prefix` (String) Classless Inter-Domain Routing (CIDR). 
diff --git a/docs/data-sources/network_area_route.md b/docs/data-sources/network_area_route.md deleted file mode 100644 index e17027d5..00000000 --- a/docs/data-sources/network_area_route.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_network_area_route Data Source - stackit" -subcategory: "" -description: |- - Network area route data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_network_area_route (Data Source) - -Network area route data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_network_area_route" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_route_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `network_area_id` (String) The network area ID to which the network area route is associated. -- `network_area_route_id` (String) The network area route ID. -- `organization_id` (String) STACKIT organization ID to which the network area is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `destination` (Attributes) Destination of the route. (see [below for nested schema](#nestedatt--destination)) -- `id` (String) Terraform's internal data source ID. It is structured as "`organization_id`,`region`,`network_area_id`,`network_area_route_id`". -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `next_hop` (Attributes) Next hop destination. (see [below for nested schema](#nestedatt--next_hop)) - - -### Nested Schema for `destination` - -Read-Only: - -- `type` (String) CIDR type. Possible values are: `cidrv4`, `cidrv6`. -- `value` (String) A CIDR string.
- - - -### Nested Schema for `next_hop` - -Read-Only: - -- `type` (String) Type of the next hop. Possible values are: `blackhole`, `internet`, `ipv4`, `ipv6`. -- `value` (String) Either IPv4 or IPv6 (not set for blackhole and internet). diff --git a/docs/data-sources/network_interface.md b/docs/data-sources/network_interface.md deleted file mode 100644 index 77e5d6ef..00000000 --- a/docs/data-sources/network_interface.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_network_interface Data Source - stackit" -subcategory: "" -description: |- - Network interface datasource schema. Must have a region specified in the provider configuration. ---- - -# stackit_network_interface (Data Source) - -Network interface datasource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_network_interface" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_interface_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `network_id` (String) The network ID to which the network interface is associated. -- `network_interface_id` (String) The network interface ID. -- `project_id` (String) STACKIT project ID to which the network interface is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `allowed_addresses` (List of String) The list of CIDR (Classless Inter-Domain Routing) notations. -- `device` (String) The device UUID of the network interface. -- `id` (String) Terraform's internal data source ID. It is structured as "`project_id`,`region`,`network_id`,`network_interface_id`". -- `ipv4` (String) The IPv4 address. -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a network interface. 
-- `mac` (String) The MAC address of the network interface. -- `name` (String) The name of the network interface. -- `security` (Boolean) The Network Interface Security. If set to false, then no security groups will apply to this network interface. -- `security_group_ids` (List of String) The list of security group UUIDs. If security is set to false, setting this field will lead to an error. -- `type` (String) Type of the network interface. Possible values are: `server`, `metadata`, `gateway`. diff --git a/docs/data-sources/objectstorage_bucket.md b/docs/data-sources/objectstorage_bucket.md deleted file mode 100644 index bdf4fee2..00000000 --- a/docs/data-sources/objectstorage_bucket.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_objectstorage_bucket Data Source - stackit" -subcategory: "" -description: |- - ObjectStorage bucket data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_objectstorage_bucket (Data Source) - -ObjectStorage bucket data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_objectstorage_bucket" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-name" -} -``` - - -## Schema - -### Required - -- `name` (String) The bucket name. It must be DNS-conformant. -- `project_id` (String) STACKIT Project ID to which the bucket is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal data source identifier. It is structured as "`project_id`,`region`,`name`".
-- `url_path_style` (String) -- `url_virtual_hosted_style` (String) diff --git a/docs/data-sources/objectstorage_credential.md b/docs/data-sources/objectstorage_credential.md deleted file mode 100644 index e7bfb035..00000000 --- a/docs/data-sources/objectstorage_credential.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_objectstorage_credential Data Source - stackit" -subcategory: "" -description: |- - ObjectStorage credential data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_objectstorage_credential (Data Source) - -ObjectStorage credential data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_objectstorage_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credentials_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `credential_id` (String) The credential ID. -- `credentials_group_id` (String) The credential group ID. -- `project_id` (String) STACKIT Project ID to which the credential group is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `expiration_timestamp` (String) -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`region`,`credentials_group_id`,`credential_id`". 
-- `name` (String) diff --git a/docs/data-sources/objectstorage_credentials_group.md b/docs/data-sources/objectstorage_credentials_group.md deleted file mode 100644 index e5934d3b..00000000 --- a/docs/data-sources/objectstorage_credentials_group.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_objectstorage_credentials_group Data Source - stackit" -subcategory: "" -description: |- - ObjectStorage credentials group data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_objectstorage_credentials_group (Data Source) - -ObjectStorage credentials group data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_objectstorage_credentials_group" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credentials_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `credentials_group_id` (String) The credentials group ID. -- `project_id` (String) Object Storage Project ID to which the credentials group is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal data source identifier. It is structured as "`project_id`,`region`,`credentials_group_id`". -- `name` (String) The credentials group's display name. 
-- `urn` (String) Credentials group uniform resource name (URN) diff --git a/docs/data-sources/observability_alertgroup.md b/docs/data-sources/observability_alertgroup.md deleted file mode 100644 index 9fa930a6..00000000 --- a/docs/data-sources/observability_alertgroup.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_observability_alertgroup Data Source - stackit" -subcategory: "" -description: |- - Observability alert group datasource schema. Used to create alerts based on metrics (Thanos). Must have a region specified in the provider configuration. ---- - -# stackit_observability_alertgroup (Data Source) - -Observability alert group datasource schema. Used to create alerts based on metrics (Thanos). Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_observability_alertgroup" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-alert-group" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) Observability instance ID to which the alert group is associated. -- `name` (String) The name of the alert group. Is the identifier and must be unique in the group. -- `project_id` (String) STACKIT project ID to which the alert group is associated. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`,`name`". -- `interval` (String) Specifies the frequency at which rules within the group are evaluated. The interval must be at least 60 seconds and defaults to 60 seconds if not set. Supported formats include hours, minutes, and seconds, either singly or in combination. Examples of valid formats are: '5h30m40s', '5h', '5h30m', '60m', and '60s'. 
-- `rules` (Attributes List) (see [below for nested schema](#nestedatt--rules)) - - -### Nested Schema for `rules` - -Read-Only: - -- `alert` (String) The name of the alert rule. Is the identifier and must be unique in the group. -- `annotations` (Map of String) A map of key:value. Annotations to add or overwrite for each alert -- `expression` (String) The PromQL expression to evaluate. Every evaluation cycle this is evaluated at the current time, and all resultant time series become pending/firing alerts. -- `for` (String) Alerts are considered firing once they have been returned for this long. Alerts which have not yet fired for long enough are considered pending. Default is 0s -- `labels` (Map of String) A map of key:value. Labels to add or overwrite for each alert diff --git a/docs/data-sources/observability_instance.md b/docs/data-sources/observability_instance.md deleted file mode 100644 index bd6b3fd2..00000000 --- a/docs/data-sources/observability_instance.md +++ /dev/null @@ -1,158 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_observability_instance Data Source - stackit" -subcategory: "" -description: |- - Observability instance data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_observability_instance (Data Source) - -Observability instance data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) The Observability instance ID. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Read-Only - -- `acl` (Set of String) The access control list for this instance. Each entry is an IP address range that is permitted to access, in CIDR notation. 
-- `alert_config` (Attributes) Alert configuration for the instance. (see [below for nested schema](#nestedatt--alert_config)) -- `alerting_url` (String) Specifies Alerting URL. -- `dashboard_url` (String) Specifies Observability instance dashboard URL. -- `grafana_initial_admin_password` (String, Sensitive) Specifies an initial Grafana admin password. -- `grafana_initial_admin_user` (String) Specifies an initial Grafana admin username. -- `grafana_public_read_access` (Boolean) If true, anyone can access Grafana dashboards without logging in. -- `grafana_url` (String) Specifies Grafana URL. -- `id` (String) Terraform's internal data source ID. It is structured as "`project_id`,`instance_id`". -- `is_updatable` (Boolean) Specifies if the instance can be updated. -- `jaeger_traces_url` (String) -- `jaeger_ui_url` (String) -- `logs_push_url` (String) Specifies URL for pushing logs. -- `logs_retention_days` (Number) Specifies for how many days the logs are kept. Default is set to `7`. -- `logs_url` (String) Specifies Logs URL. -- `metrics_push_url` (String) Specifies URL for pushing metrics. -- `metrics_retention_days` (Number) Specifies for how many days the raw metrics are kept. Default is set to `90`. -- `metrics_retention_days_1h_downsampling` (Number) Specifies for how many days the 1h downsampled metrics are kept. Must be less than the value of the 5m downsampling retention. Default is set to `90`. -- `metrics_retention_days_5m_downsampling` (Number) Specifies for how many days the 5m downsampled metrics are kept. Must be less than the value of the general retention. Default is set to `90`. -- `metrics_url` (String) Specifies metrics URL. -- `name` (String) The name of the Observability instance. -- `otlp_traces_url` (String) -- `parameters` (Map of String) Additional parameters. -- `plan_id` (String) The Observability plan ID. -- `plan_name` (String) Specifies the Observability plan. E.g. `Observability-Monitoring-Medium-EU01`.
-- `targets_url` (String) Specifies Targets URL. -- `traces_retention_days` (Number) Specifies for how many days the traces are kept. Default is set to `7`. -- `zipkin_spans_url` (String) - - -### Nested Schema for `alert_config` - -Read-Only: - -- `global` (Attributes) Global configuration for the alerts. (see [below for nested schema](#nestedatt--alert_config--global)) -- `receivers` (Attributes List) List of alert receivers. (see [below for nested schema](#nestedatt--alert_config--receivers)) -- `route` (Attributes) The route for the alert. (see [below for nested schema](#nestedatt--alert_config--route)) - - -### Nested Schema for `alert_config.global` - -Read-Only: - -- `opsgenie_api_key` (String, Sensitive) The API key for OpsGenie. -- `opsgenie_api_url` (String) The host to send OpsGenie API requests to. Must be a valid URL -- `resolve_timeout` (String) The default value used by alertmanager if the alert does not include EndsAt. After this time passes, it can declare the alert as resolved if it has not been updated. This has no impact on alerts from Prometheus, as they always include EndsAt. -- `smtp_auth_identity` (String) SMTP authentication information. Must be a valid email address -- `smtp_auth_password` (String, Sensitive) SMTP Auth using LOGIN and PLAIN. -- `smtp_auth_username` (String) SMTP Auth using CRAM-MD5, LOGIN and PLAIN. If empty, Alertmanager doesn't authenticate to the SMTP server. -- `smtp_from` (String) The default SMTP From header field. Must be a valid email address -- `smtp_smart_host` (String) The default SMTP smarthost used for sending emails, including port number. Port number usually is 25, or 587 for SMTP over TLS (sometimes referred to as STARTTLS). - - - -### Nested Schema for `alert_config.receivers` - -Read-Only: - -- `email_configs` (Attributes List) List of email configurations. (see [below for nested schema](#nestedatt--alert_config--receivers--email_configs)) -- `name` (String) Name of the receiver. 
-- `opsgenie_configs` (Attributes List) List of OpsGenie configurations. (see [below for nested schema](#nestedatt--alert_config--receivers--opsgenie_configs)) -- `webhooks_configs` (Attributes List) List of Webhooks configurations. (see [below for nested schema](#nestedatt--alert_config--receivers--webhooks_configs)) - - -### Nested Schema for `alert_config.receivers.email_configs` - -Read-Only: - -- `auth_identity` (String) SMTP authentication information. Must be a valid email address -- `auth_password` (String, Sensitive) SMTP authentication password. -- `auth_username` (String) SMTP authentication username. -- `from` (String) The sender email address. Must be a valid email address -- `send_resolved` (Boolean) Whether to notify about resolved alerts. -- `smart_host` (String) The SMTP host through which emails are sent. -- `to` (String) The email address to send notifications to. Must be a valid email address - - - -### Nested Schema for `alert_config.receivers.opsgenie_configs` - -Read-Only: - -- `api_key` (String) The API key for OpsGenie. -- `api_url` (String) The host to send OpsGenie API requests to. Must be a valid URL -- `priority` (String) Priority of the alert. Possible values are: `P1`, `P2`, `P3`, `P4`, `P5`. -- `send_resolved` (Boolean) Whether to notify about resolved alerts. -- `tags` (String) Comma separated list of tags attached to the notifications. - - - -### Nested Schema for `alert_config.receivers.webhooks_configs` - -Read-Only: - -- `google_chat` (Boolean) Google Chat webhooks require special handling, set this to true if the webhook is for Google Chat. -- `ms_teams` (Boolean) Microsoft Teams webhooks require special handling, set this to true if the webhook is for Microsoft Teams. -- `send_resolved` (Boolean) Whether to notify about resolved alerts. -- `url` (String, Sensitive) The endpoint to send HTTP POST requests to. 
Must be a valid URL - - - - -### Nested Schema for `alert_config.route` - -Read-Only: - -- `group_by` (List of String) The labels by which incoming alerts are grouped together. For example, multiple alerts coming in for cluster=A and alertname=LatencyHigh would be batched into a single group. To aggregate by all possible labels use the special value '...' as the sole label name, for example: group_by: ['...']. This effectively disables aggregation entirely, passing through all alerts as-is. This is unlikely to be what you want, unless you have a very low alert volume or your upstream notification system performs its own grouping. -- `group_interval` (String) How long to wait before sending a notification about new alerts that are added to a group of alerts for which an initial notification has already been sent. (Usually ~5m or more.) -- `group_wait` (String) How long to initially wait to send a notification for a group of alerts. Allows to wait for an inhibiting alert to arrive or collect more initial alerts for the same group. (Usually ~0s to few minutes.) . -- `receiver` (String) The name of the receiver to route the alerts to. -- `repeat_interval` (String) How long to wait before sending a notification again if it has already been sent successfully for an alert. (Usually ~3h or more). -- `routes` (Attributes List) List of child routes. (see [below for nested schema](#nestedatt--alert_config--route--routes)) - - -### Nested Schema for `alert_config.route.routes` - -Read-Only: - -- `continue` (Boolean) Whether an alert should continue matching subsequent sibling nodes. -- `group_by` (List of String) The labels by which incoming alerts are grouped together. For example, multiple alerts coming in for cluster=A and alertname=LatencyHigh would be batched into a single group. To aggregate by all possible labels use the special value '...' as the sole label name, for example: group_by: ['...']. 
This effectively disables aggregation entirely, passing through all alerts as-is. This is unlikely to be what you want, unless you have a very low alert volume or your upstream notification system performs its own grouping. -- `group_interval` (String) How long to wait before sending a notification about new alerts that are added to a group of alerts for which an initial notification has already been sent. (Usually ~5m or more.) -- `group_wait` (String) How long to initially wait to send a notification for a group of alerts. Allows to wait for an inhibiting alert to arrive or collect more initial alerts for the same group. (Usually ~0s to few minutes.) -- `match` (Map of String, Deprecated) A set of equality matchers an alert has to fulfill to match the node. This field is deprecated and will be removed after 10th March 2026, use `matchers` in the `routes` instead -- `match_regex` (Map of String, Deprecated) A set of regex-matchers an alert has to fulfill to match the node. This field is deprecated and will be removed after 10th March 2026, use `matchers` in the `routes` instead -- `matchers` (List of String) A list of matchers that an alert has to fulfill to match the node. A matcher is a string with a syntax inspired by PromQL and OpenMetrics. -- `receiver` (String) The name of the receiver to route the alerts to. -- `repeat_interval` (String) How long to wait before sending a notification again if it has already been sent successfully for an alert. (Usually ~3h or more). diff --git a/docs/data-sources/observability_logalertgroup.md b/docs/data-sources/observability_logalertgroup.md deleted file mode 100644 index a2f7a132..00000000 --- a/docs/data-sources/observability_logalertgroup.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_observability_logalertgroup Data Source - stackit" -subcategory: "" -description: |- - Observability log alert group datasource schema. 
Used to create alerts based on logs (Loki). Must have a region specified in the provider configuration. ---- - -# stackit_observability_logalertgroup (Data Source) - -Observability log alert group datasource schema. Used to create alerts based on logs (Loki). Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_observability_logalertgroup" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-log-alert-group" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) Observability instance ID to which the log alert group is associated. -- `name` (String) The name of the log alert group. Is the identifier and must be unique in the group. -- `project_id` (String) STACKIT project ID to which the log alert group is associated. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`,`name`". -- `interval` (String) Specifies the frequency at which rules within the group are evaluated. The interval must be at least 60 seconds and defaults to 60 seconds if not set. Supported formats include hours, minutes, and seconds, either singly or in combination. Examples of valid formats are: '5h30m40s', '5h', '5h30m', '60m', and '60s'. -- `rules` (Attributes List) (see [below for nested schema](#nestedatt--rules)) - - -### Nested Schema for `rules` - -Read-Only: - -- `alert` (String) The name of the alert rule. Is the identifier and must be unique in the group. -- `annotations` (Map of String) A map of key:value. Annotations to add or overwrite for each alert -- `expression` (String) The LogQL expression to evaluate. Every evaluation cycle this is evaluated at the current time, and all resultant time series become pending/firing alerts. -- `for` (String) Alerts are considered firing once they have been returned for this long. 
Alerts which have not yet fired for long enough are considered pending. Default is 0s. -- `labels` (Map of String) A map of key:value. Labels to add or overwrite for each alert diff --git a/docs/data-sources/observability_scrapeconfig.md b/docs/data-sources/observability_scrapeconfig.md deleted file mode 100644 index 575ab6b5..00000000 --- a/docs/data-sources/observability_scrapeconfig.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_observability_scrapeconfig Data Source - stackit" -subcategory: "" -description: |- - Observability scrape config data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_observability_scrapeconfig (Data Source) - -Observability scrape config data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_observability_scrapeconfig" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) Observability instance ID to which the scraping job is associated. -- `name` (String) Specifies the name of the scraping job. -- `project_id` (String) STACKIT project ID to which the scraping job is associated. - -### Read-Only - -- `basic_auth` (Attributes) A basic authentication block. (see [below for nested schema](#nestedatt--basic_auth)) -- `id` (String) Terraform's internal data source ID. It is structured as "`project_id`,`instance_id`,`name`". -- `metrics_path` (String) Specifies the job scraping URL path. -- `saml2` (Attributes) A SAML2 configuration block. (see [below for nested schema](#nestedatt--saml2)) -- `sample_limit` (Number) Specifies the scrape sample limit. -- `scheme` (String) Specifies the HTTP scheme. -- `scrape_interval` (String) Specifies the scrape interval as duration string.
-- `scrape_timeout` (String) Specifies the scrape timeout as duration string. -- `targets` (Attributes List) The targets list (specified by the static config). (see [below for nested schema](#nestedatt--targets)) - - -### Nested Schema for `basic_auth` - -Read-Only: - -- `password` (String, Sensitive) Specifies basic auth password. -- `username` (String) Specifies basic auth username. - - - -### Nested Schema for `saml2` - -Read-Only: - -- `enable_url_parameters` (Boolean) Specifies if URL parameters are enabled - - - -### Nested Schema for `targets` - -Read-Only: - -- `labels` (Map of String) Specifies labels. -- `urls` (List of String) Specifies target URLs. diff --git a/docs/data-sources/opensearch_credential.md b/docs/data-sources/opensearch_credential.md deleted file mode 100644 index c4ab336c..00000000 --- a/docs/data-sources/opensearch_credential.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_opensearch_credential Data Source - stackit" -subcategory: "" -description: |- - OpenSearch credential data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_opensearch_credential (Data Source) - -OpenSearch credential data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_opensearch_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `credential_id` (String) The credential's ID. -- `instance_id` (String) ID of the OpenSearch instance. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Read-Only - -- `host` (String) -- `hosts` (List of String) -- `id` (String) Terraform's internal data source identifier.
It is structured as "`project_id`,`instance_id`,`credential_id`". -- `password` (String, Sensitive) -- `port` (Number) -- `scheme` (String) -- `uri` (String, Sensitive) -- `username` (String) diff --git a/docs/data-sources/opensearch_instance.md b/docs/data-sources/opensearch_instance.md deleted file mode 100644 index 8d3782ea..00000000 --- a/docs/data-sources/opensearch_instance.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_opensearch_instance Data Source - stackit" -subcategory: "" -description: |- - OpenSearch instance data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_opensearch_instance (Data Source) - -OpenSearch instance data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_opensearch_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the OpenSearch instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. - -### Read-Only - -- `cf_guid` (String) -- `cf_organization_guid` (String) -- `cf_space_guid` (String) -- `dashboard_url` (String) -- `id` (String) Terraform's internal data source identifier. It is structured as "`project_id`,`instance_id`". -- `image_url` (String) -- `name` (String) Instance name. -- `parameters` (Attributes) (see [below for nested schema](#nestedatt--parameters)) -- `plan_id` (String) The selected plan ID. -- `plan_name` (String) The selected plan name. -- `version` (String) The service version. - - -### Nested Schema for `parameters` - -Read-Only: - -- `enable_monitoring` (Boolean) Enable monitoring. -- `graphite` (String) If set, monitoring with Graphite will be enabled.
Expects the host and port where the Graphite metrics should be sent to (host:port). -- `java_garbage_collector` (String) The garbage collector to use for OpenSearch. -- `java_heapspace` (Number) The amount of memory (in MB) allocated as heap by the JVM for OpenSearch. -- `java_maxmetaspace` (Number) The amount of memory (in MB) used by the JVM to store metadata for OpenSearch. -- `max_disk_threshold` (Number) The maximum disk threshold in MB. If the disk usage exceeds this threshold, the instance will be stopped. -- `metrics_frequency` (Number) The frequency in seconds at which metrics are emitted (in seconds). -- `metrics_prefix` (String) The prefix for the metrics. Could be useful when using Graphite monitoring to prefix the metrics with a certain value, like an API key. -- `monitoring_instance_id` (String) The ID of the STACKIT monitoring instance. -- `plugins` (List of String) List of plugins to install. Must be a supported plugin name. The plugins `repository-s3` and `repository-azure` are enabled by default and cannot be disabled. -- `sgw_acl` (String) Comma separated list of IP networks in CIDR notation which are allowed to access this instance. -- `syslog` (List of String) List of syslog servers to send logs to. -- `tls_ciphers` (List of String) List of TLS ciphers to use. -- `tls_protocols` (String) The TLS protocol to use. diff --git a/docs/data-sources/postgresflex_database.md b/docs/data-sources/postgresflex_database.md deleted file mode 100644 index d69ca39c..00000000 --- a/docs/data-sources/postgresflex_database.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_postgresflex_database Data Source - stackit" -subcategory: "" -description: |- - Postgres Flex database resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_postgresflex_database (Data Source) - -Postgres Flex database resource schema. 
Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_postgresflex_database" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `database_id` (String) Database ID. -- `instance_id` (String) ID of the Postgres Flex instance. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`instance_id`,`database_id`". -- `name` (String) Database name. -- `owner` (String) Username of the database owner. diff --git a/docs/data-sources/postgresflex_instance.md b/docs/data-sources/postgresflex_instance.md deleted file mode 100644 index 91f4136a..00000000 --- a/docs/data-sources/postgresflex_instance.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_postgresflex_instance Data Source - stackit" -subcategory: "" -description: |- - Postgres Flex instance data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_postgresflex_instance (Data Source) - -Postgres Flex instance data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_postgresflex_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the PostgresFlex instance. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Optional - -- `region` (String) The resource region. 
If not defined, the provider region is used. - -### Read-Only - -- `acl` (List of String) The Access Control List (ACL) for the PostgresFlex instance. -- `backup_schedule` (String) -- `flavor` (Attributes) (see [below for nested schema](#nestedatt--flavor)) -- `id` (String) Terraform's internal data source. ID. It is structured as "`project_id`,`region`,`instance_id`". -- `name` (String) Instance name. -- `replicas` (Number) -- `storage` (Attributes) (see [below for nested schema](#nestedatt--storage)) -- `version` (String) - - -### Nested Schema for `flavor` - -Read-Only: - -- `cpu` (Number) -- `description` (String) -- `id` (String) -- `ram` (Number) - - - -### Nested Schema for `storage` - -Read-Only: - -- `class` (String) -- `size` (Number) diff --git a/docs/data-sources/postgresflex_user.md b/docs/data-sources/postgresflex_user.md deleted file mode 100644 index 5e91aeba..00000000 --- a/docs/data-sources/postgresflex_user.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_postgresflex_user Data Source - stackit" -subcategory: "" -description: |- - Postgres Flex user data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_postgresflex_user (Data Source) - -Postgres Flex user data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_postgresflex_user" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - user_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the PostgresFlex instance. -- `project_id` (String) STACKIT project ID to which the instance is associated. -- `user_id` (String) User ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. 
- -### Read-Only - -- `host` (String) -- `id` (String) Terraform's internal data source. ID. It is structured as "`project_id`,`region`,`instance_id`,`user_id`". -- `port` (Number) -- `roles` (Set of String) -- `username` (String) diff --git a/docs/data-sources/public_ip.md b/docs/data-sources/public_ip.md deleted file mode 100644 index 1f104878..00000000 --- a/docs/data-sources/public_ip.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_public_ip Data Source - stackit" -subcategory: "" -description: |- - Public IP resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_public_ip (Data Source) - -Public IP resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_public_ip" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - public_ip_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `project_id` (String) STACKIT project ID to which the public IP is associated. -- `public_ip_id` (String) The public IP ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal datasource ID. It is structured as "`project_id`,`region`,`public_ip_id`". -- `ip` (String) The IP address. -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `network_interface_id` (String) Associates the public IP with a network interface or a virtual IP (ID). 
diff --git a/docs/data-sources/public_ip_ranges.md b/docs/data-sources/public_ip_ranges.md deleted file mode 100644 index 5a9da3a0..00000000 --- a/docs/data-sources/public_ip_ranges.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_public_ip_ranges Data Source - stackit" -subcategory: "" -description: |- - A list of all public IP ranges that STACKIT uses. ---- - -# stackit_public_ip_ranges (Data Source) - -A list of all public IP ranges that STACKIT uses. - -## Example Usage - -```terraform -data "stackit_public_ip_ranges" "example" {} - -# example usage: allow stackit services and customer vpn cidr to access observability apis -locals { - vpn_cidrs = ["X.X.X.X/32", "X.X.X.X/24"] -} - -resource "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - plan_name = "Observability-Monitoring-Medium-EU01" - # Allow all stackit services and customer vpn cidr to access observability apis - acl = concat(data.stackit_public_ip_ranges.example.cidr_list, local.vpn_cidrs) - metrics_retention_days = 90 - metrics_retention_days_5m_downsampling = 90 - metrics_retention_days_1h_downsampling = 90 -} -``` - - -## Schema - -### Read-Only - -- `cidr_list` (List of String) A list of IP range strings (CIDRs) extracted from the public_ip_ranges for easy consumption. -- `id` (String) Terraform's internal resource ID. It takes the values of "`public_ip_ranges.*.cidr`". -- `public_ip_ranges` (Attributes List) A list of all public IP ranges. 
(see [below for nested schema](#nestedatt--public_ip_ranges)) - - -### Nested Schema for `public_ip_ranges` - -Read-Only: - -- `cidr` (String) Classless Inter-Domain Routing (CIDR) diff --git a/docs/data-sources/rabbitmq_credential.md b/docs/data-sources/rabbitmq_credential.md deleted file mode 100644 index a95165ad..00000000 --- a/docs/data-sources/rabbitmq_credential.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_rabbitmq_credential Data Source - stackit" -subcategory: "" -description: |- - RabbitMQ credential data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_rabbitmq_credential (Data Source) - -RabbitMQ credential data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_rabbitmq_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `credential_id` (String) The credential's ID. -- `instance_id` (String) ID of the RabbitMQ instance. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Read-Only - -- `host` (String) -- `hosts` (List of String) -- `http_api_uri` (String) -- `http_api_uris` (List of String) -- `id` (String) Terraform's internal data source. identifier. It is structured as "`project_id`,`instance_id`,`credential_id`". 
-- `management` (String) -- `password` (String, Sensitive) -- `port` (Number) -- `uri` (String, Sensitive) -- `uris` (List of String) -- `username` (String) diff --git a/docs/data-sources/rabbitmq_instance.md b/docs/data-sources/rabbitmq_instance.md deleted file mode 100644 index c6205f65..00000000 --- a/docs/data-sources/rabbitmq_instance.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_rabbitmq_instance Data Source - stackit" -subcategory: "" -description: |- - RabbitMQ instance data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_rabbitmq_instance (Data Source) - -RabbitMQ instance data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_rabbitmq_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the RabbitMQ instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. - -### Read-Only - -- `cf_guid` (String) -- `cf_organization_guid` (String) -- `cf_space_guid` (String) -- `dashboard_url` (String) -- `id` (String) Terraform's internal data source. identifier. It is structured as "`project_id`,`instance_id`". -- `image_url` (String) -- `name` (String) Instance name. -- `parameters` (Attributes) (see [below for nested schema](#nestedatt--parameters)) -- `plan_id` (String) The selected plan ID. -- `plan_name` (String) The selected plan name. -- `version` (String) The service version. - - -### Nested Schema for `parameters` - -Read-Only: - -- `consumer_timeout` (Number) The timeout in milliseconds for the consumer. -- `enable_monitoring` (Boolean) Enable monitoring. -- `graphite` (String) Graphite server URL (host and port). If set, monitoring with Graphite will be enabled. 
-- `max_disk_threshold` (Number) The maximum disk threshold in MB. If the disk usage exceeds this threshold, the instance will be stopped. -- `metrics_frequency` (Number) The frequency in seconds at which metrics are emitted. -- `metrics_prefix` (String) The prefix for the metrics. Could be useful when using Graphite monitoring to prefix the metrics with a certain value, like an API key -- `monitoring_instance_id` (String) The ID of the STACKIT monitoring instance. -- `plugins` (List of String) List of plugins to install. Must be a supported plugin name. -- `roles` (List of String) List of roles to assign to the instance. -- `sgw_acl` (String) Comma separated list of IP networks in CIDR notation which are allowed to access this instance. -- `syslog` (List of String) List of syslog servers to send logs to. -- `tls_ciphers` (List of String) List of TLS ciphers to use. -- `tls_protocols` (String) TLS protocol to use. diff --git a/docs/data-sources/redis_credential.md b/docs/data-sources/redis_credential.md deleted file mode 100644 index bab579b7..00000000 --- a/docs/data-sources/redis_credential.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_redis_credential Data Source - stackit" -subcategory: "" -description: |- - Redis credential data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_redis_credential (Data Source) - -Redis credential data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_redis_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `credential_id` (String) The credential's ID. -- `instance_id` (String) ID of the Redis instance. 
-- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Read-Only - -- `host` (String) -- `hosts` (List of String) -- `id` (String) Terraform's internal data source. identifier. It is structured as "`project_id`,`instance_id`,`credential_id`". -- `load_balanced_host` (String) -- `password` (String, Sensitive) -- `port` (Number) -- `uri` (String, Sensitive) Connection URI. -- `username` (String) diff --git a/docs/data-sources/redis_instance.md b/docs/data-sources/redis_instance.md deleted file mode 100644 index 32ead269..00000000 --- a/docs/data-sources/redis_instance.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_redis_instance Data Source - stackit" -subcategory: "" -description: |- - Redis instance data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_redis_instance (Data Source) - -Redis instance data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_redis_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the Redis instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. - -### Read-Only - -- `cf_guid` (String) -- `cf_organization_guid` (String) -- `cf_space_guid` (String) -- `dashboard_url` (String) -- `id` (String) Terraform's internal data source. identifier. It is structured as "`project_id`,`instance_id`". -- `image_url` (String) -- `name` (String) Instance name. -- `parameters` (Attributes) (see [below for nested schema](#nestedatt--parameters)) -- `plan_id` (String) The selected plan ID. -- `plan_name` (String) The selected plan name. -- `version` (String) The service version. 
- - -### Nested Schema for `parameters` - -Read-Only: - -- `down_after_milliseconds` (Number) The number of milliseconds after which the instance is considered down. -- `enable_monitoring` (Boolean) Enable monitoring. -- `failover_timeout` (Number) The failover timeout in milliseconds. -- `graphite` (String) Graphite server URL (host and port). If set, monitoring with Graphite will be enabled. -- `lazyfree_lazy_eviction` (String) The lazy eviction enablement (yes or no). -- `lazyfree_lazy_expire` (String) The lazy expire enablement (yes or no). -- `lua_time_limit` (Number) The Lua time limit. -- `max_disk_threshold` (Number) The maximum disk threshold in MB. If the disk usage exceeds this threshold, the instance will be stopped. -- `maxclients` (Number) The maximum number of clients. -- `maxmemory_policy` (String) The policy to handle the maximum memory (volatile-lru, noeviction, etc). -- `maxmemory_samples` (Number) The maximum memory samples. -- `metrics_frequency` (Number) The frequency in seconds at which metrics are emitted. -- `metrics_prefix` (String) The prefix for the metrics. Could be useful when using Graphite monitoring to prefix the metrics with a certain value, like an API key -- `min_replicas_max_lag` (Number) The minimum replicas maximum lag. -- `monitoring_instance_id` (String) The ID of the STACKIT monitoring instance. -- `notify_keyspace_events` (String) The notify keyspace events. -- `sgw_acl` (String) Comma separated list of IP networks in CIDR notation which are allowed to access this instance. -- `snapshot` (String) The snapshot configuration. -- `syslog` (List of String) List of syslog servers to send logs to. -- `tls_ciphers` (List of String) List of TLS ciphers to use. -- `tls_ciphersuites` (String) TLS cipher suites to use. -- `tls_protocols` (String) TLS protocol to use. 
diff --git a/docs/data-sources/resourcemanager_folder.md b/docs/data-sources/resourcemanager_folder.md deleted file mode 100644 index e7abfbe9..00000000 --- a/docs/data-sources/resourcemanager_folder.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_resourcemanager_folder Data Source - stackit" -subcategory: "" -description: |- - Resource Manager folder data source schema. To identify the folder, you need to provide the container_id. ---- - -# stackit_resourcemanager_folder (Data Source) - -Resource Manager folder data source schema. To identify the folder, you need to provide the container_id. - -## Example Usage - -```terraform -data "stackit_resourcemanager_folder" "example" { - container_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `container_id` (String) Folder container ID. Globally unique, user-friendly identifier. - -### Read-Only - -- `creation_time` (String) Date-time at which the folder was created. -- `folder_id` (String) Folder UUID identifier. Globally unique folder identifier -- `id` (String) Terraform's internal resource ID. It is structured as "`container_id`". -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container. A label key must match the regex [A-ZÄÜÖa-zäüöß0-9_-]{1,64}. A label value must match the regex ^$|[A-ZÄÜÖa-zäüöß0-9_-]{1,64}. -- `name` (String) The name of the folder. -- `parent_container_id` (String) Parent resource identifier. Both container ID (user-friendly) and UUID are supported. -- `update_time` (String) Date-time at which the folder was last modified. 
diff --git a/docs/data-sources/resourcemanager_project.md b/docs/data-sources/resourcemanager_project.md deleted file mode 100644 index 6b9ff69c..00000000 --- a/docs/data-sources/resourcemanager_project.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_resourcemanager_project Data Source - stackit" -subcategory: "" -description: |- - Resource Manager project data source schema. To identify the project, you need to provider either project_id or container_id. If you provide both, project_id will be used. ---- - -# stackit_resourcemanager_project (Data Source) - -Resource Manager project data source schema. To identify the project, you need to provider either project_id or container_id. If you provide both, project_id will be used. - -## Example Usage - -```terraform -data "stackit_resourcemanager_project" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - container_id = "example-container-abc123" -} -``` - - -## Schema - -### Optional - -- `container_id` (String) Project container ID. Globally unique, user-friendly identifier. -- `project_id` (String) Project UUID identifier. This is the ID that can be used in most of the other resources to identify the project. - -### Read-Only - -- `creation_time` (String) Date-time at which the project was created. -- `id` (String) Terraform's internal data source. ID. It is structured as "`container_id`". -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container. A label key must match the regex [A-ZÄÜÖa-zäüöß0-9_-]{1,64}. A label value must match the regex ^$|[A-ZÄÜÖa-zäüöß0-9_-]{1,64} -- `name` (String) Project name. -- `parent_container_id` (String) Parent resource identifier. Both container ID (user-friendly) and UUID are supported -- `update_time` (String) Date-time at which the project was last modified. 
diff --git a/docs/data-sources/routing_table.md b/docs/data-sources/routing_table.md deleted file mode 100644 index 68044569..00000000 --- a/docs/data-sources/routing_table.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_routing_table Data Source - stackit" -subcategory: "" -description: |- - Routing table datasource schema. Must have a region specified in the provider configuration. - ~> This datasource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. ---- - -# stackit_routing_table (Data Source) - -Routing table datasource schema. Must have a `region` specified in the provider configuration. - -~> This datasource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. - -## Example Usage - -```terraform -data "stackit_routing_table" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - routing_table_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `network_area_id` (String) The network area ID to which the routing table is associated. -- `organization_id` (String) STACKIT organization ID to which the routing table is associated. -- `routing_table_id` (String) The routing tables ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `created_at` (String) Date-time when the routing table was created -- `default` (Boolean) When true this is the default routing table for this network area. It can't be deleted and is used if the user does not specify it otherwise. -- `description` (String) Description of the routing table. -- `id` (String) Terraform's internal datasource ID. 
It is structured as "`organization_id`,`region`,`network_area_id`,`routing_table_id`". -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `name` (String) The name of the routing table. -- `system_routes` (Boolean) This controls whether the routes for project-to-project communication are created automatically or not. -- `updated_at` (String) Date-time when the routing table was updated diff --git a/docs/data-sources/routing_table_route.md b/docs/data-sources/routing_table_route.md deleted file mode 100644 index 7344a87d..00000000 --- a/docs/data-sources/routing_table_route.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_routing_table_route Data Source - stackit" -subcategory: "" -description: |- - Routing table route datasource schema. Must have a region specified in the provider configuration. - ~> This datasource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. ---- - -# stackit_routing_table_route (Data Source) - -Routing table route datasource schema. Must have a `region` specified in the provider configuration. - -~> This datasource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. - -## Example Usage - -```terraform -data "stackit_routing_table_route" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - routing_table_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - route_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `network_area_id` (String) The network area ID to which the routing table is associated. -- `organization_id` (String) STACKIT organization ID to which the routing table is associated. 
-- `route_id` (String) Route ID. -- `routing_table_id` (String) The routing tables ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `created_at` (String) Date-time when the route was created -- `destination` (Attributes) Destination of the route. (see [below for nested schema](#nestedatt--destination)) -- `id` (String) Terraform's internal datasource ID. It is structured as "`organization_id`,`region`,`network_area_id`,`routing_table_id`,`route_id`". -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `next_hop` (Attributes) Next hop destination. (see [below for nested schema](#nestedatt--next_hop)) -- `updated_at` (String) Date-time when the route was updated - - -### Nested Schema for `destination` - -Read-Only: - -- `type` (String) CIDRV type. Possible values are: `cidrv4`, `cidrv6`. Only `cidrv4` is supported during experimental stage. -- `value` (String) An CIDR string. - - - -### Nested Schema for `next_hop` - -Read-Only: - -- `type` (String) Type of the next hop. Possible values are: `blackhole`, `internet`, `ipv4`, `ipv6`. -- `value` (String) Either IPv4 or IPv6 (not set for blackhole and internet). Only IPv4 supported during experimental stage. diff --git a/docs/data-sources/routing_table_routes.md b/docs/data-sources/routing_table_routes.md deleted file mode 100644 index 2215de81..00000000 --- a/docs/data-sources/routing_table_routes.md +++ /dev/null @@ -1,71 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_routing_table_routes Data Source - stackit" -subcategory: "" -description: |- - Routing table routes datasource schema. Must have a region specified in the provider configuration. - ~> This datasource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. 
---- - -# stackit_routing_table_routes (Data Source) - -Routing table routes datasource schema. Must have a `region` specified in the provider configuration. - -~> This datasource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. - -## Example Usage - -```terraform -data "stackit_routing_table_routes" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - routing_table_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `network_area_id` (String) The network area ID to which the routing table is associated. -- `organization_id` (String) STACKIT organization ID to which the routing table is associated. -- `routing_table_id` (String) The routing tables ID. - -### Optional - -- `region` (String) The datasource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal datasource ID. It is structured as "`organization_id`,`region`,`network_area_id`,`routing_table_id`,`route_id`". -- `routes` (Attributes List) List of routes. (see [below for nested schema](#nestedatt--routes)) - - -### Nested Schema for `routes` - -Read-Only: - -- `created_at` (String) Date-time when the route was created -- `destination` (Attributes) Destination of the route. (see [below for nested schema](#nestedatt--routes--destination)) -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `next_hop` (Attributes) Next hop destination. (see [below for nested schema](#nestedatt--routes--next_hop)) -- `route_id` (String) Route ID. -- `updated_at` (String) Date-time when the route was updated - - -### Nested Schema for `routes.destination` - -Read-Only: - -- `type` (String) CIDRV type. Possible values are: `cidrv4`, `cidrv6`. Only `cidrv4` is supported during experimental stage. 
-- `value` (String) An CIDR string. - - - -### Nested Schema for `routes.next_hop` - -Read-Only: - -- `type` (String) Type of the next hop. Possible values are: `blackhole`, `internet`, `ipv4`, `ipv6`. -- `value` (String) Either IPv4 or IPv6 (not set for blackhole and internet). Only IPv4 supported during experimental stage. diff --git a/docs/data-sources/routing_tables.md b/docs/data-sources/routing_tables.md deleted file mode 100644 index 26eac9c3..00000000 --- a/docs/data-sources/routing_tables.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_routing_tables Data Source - stackit" -subcategory: "" -description: |- - Routing table datasource schema. Must have a region specified in the provider configuration. - ~> This datasource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. ---- - -# stackit_routing_tables (Data Source) - -Routing table datasource schema. Must have a `region` specified in the provider configuration. - -~> This datasource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. - -## Example Usage - -```terraform -data "stackit_routing_tables" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `network_area_id` (String) The network area ID to which the routing table is associated. -- `organization_id` (String) STACKIT organization ID to which the routing table is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal datasource ID. It is structured as "`organization_id`,`region`,`network_area_id`". 
-- `items` (Attributes List) List of routing tables. (see [below for nested schema](#nestedatt--items)) - - -### Nested Schema for `items` - -Read-Only: - -- `created_at` (String) Date-time when the routing table was created -- `default` (Boolean) When true this is the default routing table for this network area. It can't be deleted and is used if the user does not specify it otherwise. -- `description` (String) Description of the routing table. -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `name` (String) The name of the routing table. -- `routing_table_id` (String) The routing tables ID. -- `system_routes` (Boolean) This controls whether the routes for project-to-project communication are created automatically or not. -- `updated_at` (String) Date-time when the routing table was updated diff --git a/docs/data-sources/scf_organization.md b/docs/data-sources/scf_organization.md deleted file mode 100644 index 43be29ff..00000000 --- a/docs/data-sources/scf_organization.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_scf_organization Data Source - stackit" -subcategory: "" -description: |- - STACKIT Cloud Foundry organization datasource schema. Must have a region specified in the provider configuration. ---- - -# stackit_scf_organization (Data Source) - -STACKIT Cloud Foundry organization datasource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_scf_organization" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - org_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `org_id` (String) The ID of the Cloud Foundry Organization -- `project_id` (String) The ID of the project associated with the organization - -### Optional - -- `region` (String) The resource region. 
If not defined, the provider region is used - -### Read-Only - -- `created_at` (String) The time when the organization was created -- `id` (String) Terraform's internal resource ID, structured as "`project_id`,`region`,`org_id`". -- `name` (String) The name of the organization -- `platform_id` (String) The ID of the platform associated with the organization -- `quota_id` (String) The ID of the quota associated with the organization -- `status` (String) The status of the organization (e.g., deleting, delete_failed) -- `suspended` (Boolean) A boolean indicating whether the organization is suspended -- `updated_at` (String) The time when the organization was last updated diff --git a/docs/data-sources/scf_organization_manager.md b/docs/data-sources/scf_organization_manager.md deleted file mode 100644 index 92e1ffa7..00000000 --- a/docs/data-sources/scf_organization_manager.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_scf_organization_manager Data Source - stackit" -subcategory: "" -description: |- - STACKIT Cloud Foundry organization manager datasource schema. ---- - -# stackit_scf_organization_manager (Data Source) - -STACKIT Cloud Foundry organization manager datasource schema. - -## Example Usage - -```terraform -data "stackit_scf_organization_manager" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - org_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `org_id` (String) The ID of the Cloud Foundry Organization -- `project_id` (String) The ID of the project associated with the organization of the organization manager - -### Optional - -- `region` (String) The region where the organization of the organization manager is located. 
If not defined, the provider region is used - -### Read-Only - -- `created_at` (String) The time when the organization manager was created -- `id` (String) Terraform's internal resource ID, structured as "`project_id`,`region`,`org_id`,`user_id`". -- `platform_id` (String) The ID of the platform associated with the organization of the organization manager -- `updated_at` (String) The time when the organization manager was last updated -- `user_id` (String) The ID of the organization manager user -- `username` (String) An auto-generated organization manager user name diff --git a/docs/data-sources/scf_platform.md b/docs/data-sources/scf_platform.md deleted file mode 100644 index eddbe3ba..00000000 --- a/docs/data-sources/scf_platform.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_scf_platform Data Source - stackit" -subcategory: "" -description: |- - STACKIT Cloud Foundry Platform datasource schema. ---- - -# stackit_scf_platform (Data Source) - -STACKIT Cloud Foundry Platform datasource schema. - -## Example Usage - -```terraform -data "stackit_scf_platform" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - platform_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `platform_id` (String) The unique id of the platform -- `project_id` (String) The ID of the project associated with the platform - -### Optional - -- `region` (String) The region where the platform is located. If not defined, the provider region is used - -### Read-Only - -- `api_url` (String) The CF API Url of the platform -- `console_url` (String) The Stratos URL of the platform -- `display_name` (String) The name of the platform -- `id` (String) Terraform's internal resource ID, structured as "`project_id`,`region`,`platform_id`". 
-- `system_id` (String) The ID of the platform System diff --git a/docs/data-sources/secretsmanager_instance.md b/docs/data-sources/secretsmanager_instance.md deleted file mode 100644 index 7f8e903e..00000000 --- a/docs/data-sources/secretsmanager_instance.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_secretsmanager_instance Data Source - stackit" -subcategory: "" -description: |- - Secrets Manager instance data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_secretsmanager_instance (Data Source) - -Secrets Manager instance data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_secretsmanager_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the Secrets Manager instance. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Read-Only - -- `acls` (Set of String) The access control list for this instance. Each entry is an IP or IP range that is permitted to access, in CIDR notation -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`". -- `name` (String) Instance name. diff --git a/docs/data-sources/secretsmanager_user.md b/docs/data-sources/secretsmanager_user.md deleted file mode 100644 index e3a14d02..00000000 --- a/docs/data-sources/secretsmanager_user.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_secretsmanager_user Data Source - stackit" -subcategory: "" -description: |- - Secrets Manager user data source schema. Must have a region specified in the provider configuration. 
---- - -# stackit_secretsmanager_user (Data Source) - -Secrets Manager user data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_secretsmanager_user" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - user_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the Secrets Manager instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. -- `user_id` (String) The user's ID. - -### Read-Only - -- `description` (String) A user-chosen description to differentiate between multiple users. Can't be changed after creation. -- `id` (String) Terraform's internal data source identifier. It is structured as "`project_id`,`instance_id`,`user_id`". -- `username` (String) An auto-generated user name. -- `write_enabled` (Boolean) If true, the user has write access to the secrets engine. diff --git a/docs/data-sources/security_group.md b/docs/data-sources/security_group.md deleted file mode 100644 index 28c38fa7..00000000 --- a/docs/data-sources/security_group.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_security_group Data Source - stackit" -subcategory: "" -description: |- - Security group datasource schema. Must have a region specified in the provider configuration. ---- - -# stackit_security_group (Data Source) - -Security group datasource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_security_group" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - security_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `project_id` (String) STACKIT project ID to which the security group is associated. 
-- `security_group_id` (String) The security group ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `description` (String) The description of the security group. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`security_group_id`". -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `name` (String) The name of the security group. -- `stateful` (Boolean) Configures if a security group is stateful or stateless. There can only be one type of security groups per network interface/server. diff --git a/docs/data-sources/security_group_rule.md b/docs/data-sources/security_group_rule.md deleted file mode 100644 index d5871bd2..00000000 --- a/docs/data-sources/security_group_rule.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_security_group_rule Data Source - stackit" -subcategory: "" -description: |- - Security group datasource schema. Must have a region specified in the provider configuration. ---- - -# stackit_security_group_rule (Data Source) - -Security group datasource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_security_group_rule" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - security_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - security_group_rule_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `project_id` (String) STACKIT project ID to which the security group rule is associated. -- `security_group_id` (String) The security group ID. -- `security_group_rule_id` (String) The security group rule ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. 
- -### Read-Only - -- `description` (String) The description of the security group rule. -- `direction` (String) The direction of the traffic which the rule should match. Possible values are: `ingress`, `egress`. -- `ether_type` (String) The ethertype which the rule should match. -- `icmp_parameters` (Attributes) ICMP Parameters. (see [below for nested schema](#nestedatt--icmp_parameters)) -- `id` (String) Terraform's internal datasource ID. It is structured as "`project_id`,`region`,`security_group_id`,`security_group_rule_id`". -- `ip_range` (String) The remote IP range which the rule should match. -- `port_range` (Attributes) The range of ports. (see [below for nested schema](#nestedatt--port_range)) -- `protocol` (Attributes) The internet protocol which the rule should match. (see [below for nested schema](#nestedatt--protocol)) -- `remote_security_group_id` (String) The remote security group which the rule should match. - - -### Nested Schema for `icmp_parameters` - -Read-Only: - -- `code` (Number) ICMP code. Can be set if the protocol is ICMP. -- `type` (Number) ICMP type. Can be set if the protocol is ICMP. - - - -### Nested Schema for `port_range` - -Read-Only: - -- `max` (Number) The maximum port number. Should be greater or equal to the minimum. -- `min` (Number) The minimum port number. Should be less or equal to the maximum. - - - -### Nested Schema for `protocol` - -Read-Only: - -- `name` (String) The protocol name which the rule should match. -- `number` (Number) The protocol number which the rule should match. diff --git a/docs/data-sources/server.md b/docs/data-sources/server.md deleted file mode 100644 index 631a4ff9..00000000 --- a/docs/data-sources/server.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_server Data Source - stackit" -subcategory: "" -description: |- - Server datasource schema. 
Must have a region specified in the provider configuration. ---- - -# stackit_server (Data Source) - -Server datasource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_server" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `project_id` (String) STACKIT project ID to which the server is associated. -- `server_id` (String) The server ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `affinity_group` (String) The affinity group the server is assigned to. -- `availability_zone` (String) The availability zone of the server. -- `boot_volume` (Attributes) The boot volume for the server (see [below for nested schema](#nestedatt--boot_volume)) -- `created_at` (String) Date-time when the server was created -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`server_id`". -- `image_id` (String) The image ID to be used for an ephemeral disk on the server. -- `keypair_name` (String) The name of the keypair used during server creation. -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `launched_at` (String) Date-time when the server was launched -- `machine_type` (String) Name of the type of the machine for the server. Possible values are documented in [Virtual machine flavors](https://docs.stackit.cloud/products/compute-engine/server/basics/machine-types/) -- `name` (String) The name of the server. -- `network_interfaces` (List of String) The IDs of network interfaces which should be attached to the server. Updating it will recreate the server. -- `updated_at` (String) Date-time when the server was updated -- `user_data` (String) User data that is passed via cloud-init to the server. 
- - -### Nested Schema for `boot_volume` - -Read-Only: - -- `delete_on_termination` (Boolean) Delete the volume during the termination of the server. -- `id` (String) The ID of the boot volume diff --git a/docs/data-sources/server_backup_schedule.md b/docs/data-sources/server_backup_schedule.md deleted file mode 100644 index 16126086..00000000 --- a/docs/data-sources/server_backup_schedule.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_server_backup_schedule Data Source - stackit" -subcategory: "" -description: |- - Server backup schedule datasource schema. Must have a region specified in the provider configuration. - ~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources. ---- - -# stackit_server_backup_schedule (Data Source) - -Server backup schedule datasource schema. Must have a `region` specified in the provider configuration. - -~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources. - -## Example Usage - -```terraform -data "stackit_server_backup_schedule" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - backup_schedule_id = xxxxx -} -``` - - -## Schema - -### Required - -- `backup_schedule_id` (Number) Backup schedule ID. -- `project_id` (String) STACKIT Project ID to which the server is associated. -- `server_id` (String) Server ID for the backup schedule. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. 
- -### Read-Only - -- `backup_properties` (Attributes) Backup schedule details for the backups. (see [below for nested schema](#nestedatt--backup_properties)) -- `enabled` (Boolean) Is the backup schedule enabled or disabled. -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`server_id`,`backup_schedule_id`". -- `name` (String) The schedule name. -- `rrule` (String) Backup schedule described in `rrule` (recurrence rule) format. - - -### Nested Schema for `backup_properties` - -Read-Only: - -- `name` (String) -- `retention_period` (Number) -- `volume_ids` (List of String) diff --git a/docs/data-sources/server_backup_schedules.md b/docs/data-sources/server_backup_schedules.md deleted file mode 100644 index 44c21612..00000000 --- a/docs/data-sources/server_backup_schedules.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_server_backup_schedules Data Source - stackit" -subcategory: "" -description: |- - Server backup schedules datasource schema. Must have a region specified in the provider configuration. - ~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources. ---- - -# stackit_server_backup_schedules (Data Source) - -Server backup schedules datasource schema. Must have a `region` specified in the provider configuration. - -~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources. 
- -## Example Usage - -```terraform -data "stackit_server_backup_schedules" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `project_id` (String) STACKIT Project ID (UUID) to which the server is associated. -- `server_id` (String) Server ID (UUID) to which the backup schedule is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal data source identifier. It is structured as "`project_id`,`server_id`". -- `items` (Attributes List) (see [below for nested schema](#nestedatt--items)) - - -### Nested Schema for `items` - -Read-Only: - -- `backup_properties` (Attributes) Backup schedule details for the backups. (see [below for nested schema](#nestedatt--items--backup_properties)) -- `backup_schedule_id` (Number) -- `enabled` (Boolean) Is the backup schedule enabled or disabled. -- `name` (String) The backup schedule name. -- `rrule` (String) Backup schedule described in `rrule` (recurrence rule) format. - - -### Nested Schema for `items.backup_properties` - -Read-Only: - -- `name` (String) -- `retention_period` (Number) -- `volume_ids` (List of String) diff --git a/docs/data-sources/server_update_schedule.md b/docs/data-sources/server_update_schedule.md deleted file mode 100644 index ff4a0c4a..00000000 --- a/docs/data-sources/server_update_schedule.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_server_update_schedule Data Source - stackit" -subcategory: "" -description: |- - Server update schedule datasource schema. Must have a region specified in the provider configuration. - ~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. 
See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources. ---- - -# stackit_server_update_schedule (Data Source) - -Server update schedule datasource schema. Must have a `region` specified in the provider configuration. - -~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources. - -## Example Usage - -```terraform -data "stackit_server_update_schedule" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - update_schedule_id = xxxxx -} -``` - - -## Schema - -### Required - -- `project_id` (String) STACKIT Project ID to which the server is associated. -- `server_id` (String) Server ID for the update schedule. -- `update_schedule_id` (Number) Update schedule ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `enabled` (Boolean) Is the update schedule enabled or disabled. -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`region`,`server_id`,`update_schedule_id`". -- `maintenance_window` (Number) Maintenance window [1..24]. -- `name` (String) The schedule name. -- `rrule` (String) Update schedule described in `rrule` (recurrence rule) format. 
diff --git a/docs/data-sources/server_update_schedules.md b/docs/data-sources/server_update_schedules.md deleted file mode 100644 index 2ccfe2b5..00000000 --- a/docs/data-sources/server_update_schedules.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_server_update_schedules Data Source - stackit" -subcategory: "" -description: |- - Server update schedules datasource schema. Must have a region specified in the provider configuration. - ~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources. ---- - -# stackit_server_update_schedules (Data Source) - -Server update schedules datasource schema. Must have a `region` specified in the provider configuration. - -~> This datasource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources. - -## Example Usage - -```terraform -data "stackit_server_update_schedules" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `project_id` (String) STACKIT Project ID (UUID) to which the server is associated. -- `server_id` (String) Server ID (UUID) to which the update schedule is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal data source identifier. It is structured as "`project_id`,`region`,`server_id`". 
-- `items` (Attributes List) (see [below for nested schema](#nestedatt--items)) - - -### Nested Schema for `items` - -Read-Only: - -- `enabled` (Boolean) Is the update schedule enabled or disabled. -- `maintenance_window` (Number) Maintenance window [1..24]. -- `name` (String) The update schedule name. -- `rrule` (String) Update schedule described in `rrule` (recurrence rule) format. -- `update_schedule_id` (Number) diff --git a/docs/data-sources/service_account.md b/docs/data-sources/service_account.md deleted file mode 100644 index 3811009a..00000000 --- a/docs/data-sources/service_account.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_service_account Data Source - stackit" -subcategory: "" -description: |- - Service account data source schema. ---- - -# stackit_service_account (Data Source) - -Service account data source schema. - -## Example Usage - -```terraform -data "stackit_service_account" "sa" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - email = "sa01-8565oq1@sa.stackit.cloud" -} -``` - - -## Schema - -### Required - -- `email` (String) Email of the service account. -- `project_id` (String) STACKIT project ID to which the service account is associated. - -### Read-Only - -- `id` (String) Terraform's internal resource ID, structured as "`project_id`,`email`". -- `name` (String) Name of the service account. diff --git a/docs/data-sources/ske_cluster.md b/docs/data-sources/ske_cluster.md deleted file mode 100644 index 755198d3..00000000 --- a/docs/data-sources/ske_cluster.md +++ /dev/null @@ -1,153 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_ske_cluster Data Source - stackit" -subcategory: "" -description: |- - SKE Cluster data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_ske_cluster (Data Source) - -SKE Cluster data source schema. 
Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_ske_cluster" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-name" -} -``` - - -## Schema - -### Required - -- `name` (String) The cluster name. -- `project_id` (String) STACKIT project ID to which the cluster is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `egress_address_ranges` (List of String) The outgoing network ranges (in CIDR notation) of traffic originating from workload on the cluster. -- `extensions` (Attributes) A single extensions block as defined below (see [below for nested schema](#nestedatt--extensions)) -- `hibernations` (Attributes List) One or more hibernation block as defined below. (see [below for nested schema](#nestedatt--hibernations)) -- `id` (String) Terraform's internal data source. ID. It is structured as "`project_id`,`name`". -- `kubernetes_version_min` (String) The minimum Kubernetes version, this field is always nil. SKE automatically updates the cluster Kubernetes version if you have set `maintenance.enable_kubernetes_version_updates` to true or if there is a mandatory update, as described in [General information for Kubernetes & OS updates](https://docs.stackit.cloud/products/runtime/kubernetes-engine/basics/version-updates/). To get the current kubernetes version being used for your cluster, use the `kubernetes_version_used` field. -- `kubernetes_version_used` (String) Full Kubernetes version used. For example, if `1.22` was selected, this value may result to `1.22.15` -- `maintenance` (Attributes) A single maintenance block as defined below (see [below for nested schema](#nestedatt--maintenance)) -- `network` (Attributes) Network block as defined below. (see [below for nested schema](#nestedatt--network)) -- `node_pools` (Attributes List) One or more `node_pool` block as defined below. 
(see [below for nested schema](#nestedatt--node_pools)) -- `pod_address_ranges` (List of String) The network ranges (in CIDR notation) used by pods of the cluster. - - -### Nested Schema for `extensions` - -Read-Only: - -- `acl` (Attributes) Cluster access control configuration (see [below for nested schema](#nestedatt--extensions--acl)) -- `argus` (Attributes, Deprecated) A single argus block as defined below. This field is deprecated and will be removed 06 January 2026. (see [below for nested schema](#nestedatt--extensions--argus)) -- `dns` (Attributes) DNS extension configuration (see [below for nested schema](#nestedatt--extensions--dns)) -- `observability` (Attributes) A single observability block as defined below. (see [below for nested schema](#nestedatt--extensions--observability)) - - -### Nested Schema for `extensions.acl` - -Read-Only: - -- `allowed_cidrs` (List of String) Specify a list of CIDRs to whitelist -- `enabled` (Boolean) Is ACL enabled? - - - -### Nested Schema for `extensions.argus` - -Read-Only: - -- `argus_instance_id` (String) Instance ID of argus -- `enabled` (Boolean) Flag to enable/disable argus extensions. - - - -### Nested Schema for `extensions.dns` - -Read-Only: - -- `enabled` (Boolean) Flag to enable/disable DNS extensions -- `zones` (List of String) Specify a list of domain filters for externalDNS (e.g., `foo.runs.onstackit.cloud`) - - - -### Nested Schema for `extensions.observability` - -Read-Only: - -- `enabled` (Boolean) Flag to enable/disable Observability extensions. -- `instance_id` (String) Observability instance ID to choose which Observability instance is used. Required when enabled is set to `true`. - - - - -### Nested Schema for `hibernations` - -Read-Only: - -- `end` (String) End time of hibernation, in crontab syntax. -- `start` (String) Start time of cluster hibernation in crontab syntax. -- `timezone` (String) Timezone name corresponding to a file in the IANA Time Zone database. 
- - - -### Nested Schema for `maintenance` - -Read-Only: - -- `enable_kubernetes_version_updates` (Boolean) Flag to enable/disable auto-updates of the Kubernetes version. -- `enable_machine_image_version_updates` (Boolean) Flag to enable/disable auto-updates of the OS image version. -- `end` (String) Date time for maintenance window end. -- `start` (String) Date time for maintenance window start. - - - -### Nested Schema for `network` - -Read-Only: - -- `id` (String) ID of the STACKIT Network Area (SNA) network into which the cluster will be deployed. - - - -### Nested Schema for `node_pools` - -Read-Only: - -- `allow_system_components` (Boolean) Allow system components to run on this node pool. -- `availability_zones` (List of String) Specify a list of availability zones. -- `cri` (String) Specifies the container runtime. -- `labels` (Map of String) Labels to add to each node. -- `machine_type` (String) The machine type. -- `max_surge` (Number) The maximum number of nodes upgraded simultaneously. -- `max_unavailable` (Number) The maximum number of nodes unavailable during upgrades. -- `maximum` (Number) Maximum number of nodes in the pool. -- `minimum` (Number) Minimum number of nodes in the pool. -- `name` (String) Specifies the name of the node pool. -- `os_name` (String) The name of the OS image. -- `os_version` (String) The OS image version. -- `os_version_min` (String) The minimum OS image version, this field is always nil. SKE automatically updates the cluster Kubernetes version if you have set `maintenance.enable_kubernetes_version_updates` to true or if there is a mandatory update, as described in [General information for Kubernetes & OS updates](https://docs.stackit.cloud/products/runtime/kubernetes-engine/basics/version-updates/). To get the current OS image version being used for the node pool, use the read-only `os_version_used` field. -- `os_version_used` (String) Full OS image version used. 
For example, if 3815.2 was set in `os_version_min`, this value may result to 3815.2.2. SKE automatically updates the cluster Kubernetes version if you have set `maintenance.enable_kubernetes_version_updates` to true or if there is a mandatory update, as described in [General information for Kubernetes & OS updates](https://docs.stackit.cloud/products/runtime/kubernetes-engine/basics/version-updates/). -- `taints` (Attributes List) Specifies a taint list as defined below. (see [below for nested schema](#nestedatt--node_pools--taints)) -- `volume_size` (Number) The volume size in GB. -- `volume_type` (String) Specifies the volume type. - - -### Nested Schema for `node_pools.taints` - -Read-Only: - -- `effect` (String) The taint effect. -- `key` (String) Taint key to be applied to a node. -- `value` (String) Taint value corresponding to the taint key. diff --git a/docs/data-sources/sqlserverflex_instance.md b/docs/data-sources/sqlserverflex_instance.md deleted file mode 100644 index b13f91fa..00000000 --- a/docs/data-sources/sqlserverflex_instance.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_sqlserverflex_instance Data Source - stackit" -subcategory: "" -description: |- - SQLServer Flex instance data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_sqlserverflex_instance (Data Source) - -SQLServer Flex instance data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_sqlserverflex_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the SQLServer Flex instance. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Optional - -- `region` (String) The resource region. 
If not defined, the provider region is used. - -### Read-Only - -- `acl` (List of String) The Access Control List (ACL) for the SQLServer Flex instance. -- `backup_schedule` (String) The backup schedule. Should follow the cron scheduling system format (e.g. "0 0 * * *"). -- `flavor` (Attributes) (see [below for nested schema](#nestedatt--flavor)) -- `id` (String) Terraform's internal data source ID. It is structured as "`project_id`,`region`,`instance_id`". -- `name` (String) Instance name. -- `options` (Attributes) Custom parameters for the SQLServer Flex instance. (see [below for nested schema](#nestedatt--options)) -- `replicas` (Number) -- `storage` (Attributes) (see [below for nested schema](#nestedatt--storage)) -- `version` (String) - - -### Nested Schema for `flavor` - -Read-Only: - -- `cpu` (Number) -- `description` (String) -- `id` (String) -- `ram` (Number) - - - -### Nested Schema for `options` - -Read-Only: - -- `edition` (String) -- `retention_days` (Number) - - - -### Nested Schema for `storage` - -Read-Only: - -- `class` (String) -- `size` (Number) diff --git a/docs/data-sources/sqlserverflex_user.md b/docs/data-sources/sqlserverflexalpha_user.md similarity index 77% rename from docs/data-sources/sqlserverflex_user.md rename to docs/data-sources/sqlserverflexalpha_user.md index 7b1dcef4..5e646af1 100644 --- a/docs/data-sources/sqlserverflex_user.md +++ b/docs/data-sources/sqlserverflexalpha_user.md @@ -1,19 +1,21 @@ --- # generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_sqlserverflex_user Data Source - stackit" +page_title: "stackitprivatepreview_sqlserverflexalpha_user Data Source - stackitprivatepreview" subcategory: "" description: |- SQLServer Flex user data source schema. Must have a region specified in the provider configuration. --- -# stackit_sqlserverflex_user (Data Source) +# stackitprivatepreview_sqlserverflexalpha_user (Data Source) SQLServer Flex user data source schema.
Must have a `region` specified in the provider configuration. ## Example Usage ```terraform -data "stackit_sqlserverflex_user" "example" { +# Copyright (c) STACKIT + +data "stackitprivatepreview_sqlserverflexalpha_user" "example" { project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" user_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" @@ -27,7 +29,7 @@ data "stackit_sqlserverflex_user" "example" { - `instance_id` (String) ID of the SQLServer Flex instance. - `project_id` (String) STACKIT project ID to which the instance is associated. -- `user_id` (String) User ID. +- `user_id` (Number) User ID. ### Optional @@ -35,8 +37,10 @@ data "stackit_sqlserverflex_user" "example" { ### Read-Only +- `default_database` (String) - `host` (String) - `id` (String) Terraform's internal data source ID. It is structured as "`project_id`,`region`,`instance_id`,`user_id`". - `port` (Number) - `roles` (Set of String) Database access levels for the user. +- `status` (String) - `username` (String) Username of the SQLServer Flex instance. diff --git a/docs/data-sources/volume.md b/docs/data-sources/volume.md deleted file mode 100644 index 1b1e4064..00000000 --- a/docs/data-sources/volume.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_volume Data Source - stackit" -subcategory: "" -description: |- - Volume data source schema. Must have a region specified in the provider configuration. ---- - -# stackit_volume (Data Source) - -Volume data source schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -data "stackit_volume" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - volume_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} -``` - - -## Schema - -### Required - -- `project_id` (String) STACKIT project ID to which the volume is associated. -- `volume_id` (String) The volume ID.
- -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `availability_zone` (String) The availability zone of the volume. -- `description` (String) The description of the volume. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`volume_id`". -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container. -- `name` (String) The name of the volume. -- `performance_class` (String) The performance class of the volume. Possible values are documented in [Service plans BlockStorage](https://docs.stackit.cloud/products/storage/block-storage/basics/service-plans/#currently-available-service-plans-performance-classes). -- `server_id` (String) The server ID of the server to which the volume is attached. -- `size` (Number) The size of the volume in GB. It can only be updated to a larger value than the current size. -- `source` (Attributes) The source of the volume. It can be either a volume, an image, a snapshot or a backup (see [below for nested schema](#nestedatt--source)) - - -### Nested Schema for `source` - -Read-Only: - -- `id` (String) The ID of the source, e.g. an image ID. -- `type` (String) The type of the source. Possible values are: `volume`, `image`, `snapshot`, `backup`. diff --git a/docs/ephemeral-resources/access_token.md b/docs/ephemeral-resources/access_token.md deleted file mode 100644 index b45fd715..00000000 --- a/docs/ephemeral-resources/access_token.md +++ /dev/null @@ -1,73 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_access_token Ephemeral Resource - stackit" -subcategory: "" -description: |- - Ephemeral resource that generates a short-lived STACKIT access token (JWT) using a service account key. A new token is generated each time the resource is evaluated, and it remains consistent for the duration of a Terraform operation.
If a private key is not explicitly provided, the provider attempts to extract it from the service account key instead. Access tokens generated from service account keys expire after 60 minutes. - ~> Service account key credentials must be configured either in the STACKIT provider configuration or via environment variables (see example below). If any other authentication method is configured, this ephemeral resource will fail with an error. - ~> This ephemeral resource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt in to beta resources. ---- - -# stackit_access_token (Ephemeral Resource) - -Ephemeral resource that generates a short-lived STACKIT access token (JWT) using a service account key. A new token is generated each time the resource is evaluated, and it remains consistent for the duration of a Terraform operation. If a private key is not explicitly provided, the provider attempts to extract it from the service account key instead. Access tokens generated from service account keys expire after 60 minutes. - -~> Service account key credentials must be configured either in the STACKIT provider configuration or via environment variables (see example below). If any other authentication method is configured, this ephemeral resource will fail with an error. - -~> This ephemeral resource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt in to beta resources.
- -## Example Usage - -```terraform -provider "stackit" { - default_region = "eu01" - service_account_key_path = "/path/to/sa_key.json" - enable_beta_resources = true -} - -ephemeral "stackit_access_token" "example" {} - -locals { - stackit_api_base_url = "https://iaas.api.stackit.cloud" - public_ip_path = "/v2/projects/${var.project_id}/regions/${var.region}/public-ips" - - public_ip_payload = { - labels = { - key = "value" - } - } -} - -# Docs: https://registry.terraform.io/providers/Mastercard/restapi/latest -provider "restapi" { - uri = local.stackit_api_base_url - write_returns_object = true - - headers = { - Authorization = "Bearer ${ephemeral.stackit_access_token.example.access_token}" - "Content-Type" = "application/json" - } - - create_method = "POST" - update_method = "PATCH" - destroy_method = "DELETE" -} - -resource "restapi_object" "public_ip_restapi" { - path = local.public_ip_path - data = jsonencode(local.public_ip_payload) - - id_attribute = "id" - read_method = "GET" - create_method = "POST" - update_method = "PATCH" - destroy_method = "DELETE" -} -``` - - -## Schema - -### Read-Only - -- `access_token` (String, Sensitive) JWT access token for STACKIT API authentication. diff --git a/docs/guides/aws_provider_s3_stackit.md b/docs/guides/aws_provider_s3_stackit.md deleted file mode 100644 index b57cacb5..00000000 --- a/docs/guides/aws_provider_s3_stackit.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -page_title: "Using AWS Provider for STACKIT Object Storage (S3 compatible)" ---- -# Using AWS Provider for STACKIT Object Storage (S3 compatible) - -## Overview - -This guide outlines the process of utilizing the [AWS Terraform Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) alongside the STACKIT provider to create and manage STACKIT Object Storage (S3 compatible) resources. - -## Steps - -1. **Configure STACKIT Provider** - - First, configure the STACKIT provider to connect to the STACKIT services.
- - ```hcl - provider "stackit" { - default_region = "eu01" - } - ``` - -2. **Define STACKIT Object Storage Bucket** - - Create a STACKIT Object Storage Bucket and obtain credentials for the AWS provider. - - ```hcl - resource "stackit_objectstorage_bucket" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - } - - resource "stackit_objectstorage_credentials_group" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-credentials-group" - } - - resource "stackit_objectstorage_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credentials_group_id = stackit_objectstorage_credentials_group.example.credentials_group_id - expiration_timestamp = "2027-01-02T03:04:05Z" - } - ``` - -3. **Configure AWS Provider** - - Configure the AWS provider to connect to the STACKIT Object Storage bucket. - - ```hcl - provider "aws" { - region = "eu01" - skip_credentials_validation = true - skip_region_validation = true - skip_requesting_account_id = true - - access_key = stackit_objectstorage_credential.example.access_key - secret_key = stackit_objectstorage_credential.example.secret_access_key - - endpoints { - s3 = "https://object.storage.eu01.onstackit.cloud" - } - } - ``` - -4. **Use the Provider to Manage Objects or Policies** - - ```hcl - resource "aws_s3_object" "test_file" { - bucket = stackit_objectstorage_bucket.example.name - key = "hello_world.txt" - source = "files/hello_world.txt" - content_type = "text/plain" - etag = filemd5("files/hello_world.txt") - } - - resource "aws_s3_bucket_policy" "allow_public_read_access" { - bucket = stackit_objectstorage_bucket.example.name - policy = < The environment variable takes precedence over the provider configuration option. This means that if the `STACKIT_TF_ENABLE_BETA_RESOURCES` environment variable is set to a valid value (`"true"` or `"false"`), it will override the `enable_beta_resources` option specified in the provider configuration. 
\ No newline at end of file diff --git a/docs/guides/scf_cloudfoundry.md b/docs/guides/scf_cloudfoundry.md deleted file mode 100644 index b468cbe1..00000000 --- a/docs/guides/scf_cloudfoundry.md +++ /dev/null @@ -1,251 +0,0 @@ ---- -page_title: "How to provision Cloud Foundry using Terraform" ---- -# How to provision Cloud Foundry using Terraform - -## Objective - -This tutorial demonstrates how to provision Cloud Foundry resources by -integrating the STACKIT Terraform provider with the Cloud Foundry Terraform -provider. The STACKIT Terraform provider will create a managed Cloud Foundry -organization and set up a technical "org manager" user with -`organization_manager` permissions. These credentials, along with the Cloud -Foundry API URL (retrieved dynamically from a platform data resource), are -passed to the Cloud Foundry Terraform provider to manage resources within the -new organization. - -### Output - -This configuration creates a Cloud Foundry organization, mirroring the structure -created via the portal. It sets up three distinct spaces: `dev`, `qa`, and -`prod`. The configuration assigns a specified user the `organization_manager` -and `organization_user` roles at the organization level, and the -`space_developer` role in each space. - -### Scope - -This tutorial covers the interaction between the STACKIT Terraform provider and -the Cloud Foundry Terraform provider. It assumes you are familiar with: - -- Setting up a STACKIT project and configuring the STACKIT Terraform provider - with a service account (see the general STACKIT documentation for details). -- Basic Terraform concepts, such as variables and locals. - -This document does not cover foundational topics or every feature of the Cloud -Foundry Terraform provider.
- -### Example configuration - -The following Terraform configuration provisions a Cloud Foundry organization -and related resources using the STACKIT Terraform provider and the Cloud Foundry -Terraform provider: - -``` -terraform { - required_providers { - stackit = { - source = "stackitcloud/stackit" - } - cloudfoundry = { - source = "cloudfoundry/cloudfoundry" - } - } -} - -variable "project_id" { - type = string - description = "Id of the Project" -} - -variable "org_name" { - type = string - description = "Name of the Organization" -} - -variable "admin_email" { - type = string - description = "Users who are granted permissions" -} - -provider "stackit" { - default_region = "eu01" -} - -resource "stackit_scf_organization" "scf_org" { - name = var.org_name - project_id = var.project_id -} - -data "stackit_scf_platform" "scf_platform" { - project_id = var.project_id - platform_id = stackit_scf_organization.scf_org.platform_id -} - -resource "stackit_scf_organization_manager" "scf_manager" { - project_id = var.project_id - org_id = stackit_scf_organization.scf_org.org_id -} - -provider "cloudfoundry" { - api_url = data.stackit_scf_platform.scf_platform.api_url - user = stackit_scf_organization_manager.scf_manager.username - password = stackit_scf_organization_manager.scf_manager.password -} - -locals { - spaces = ["dev", "qa", "prod"] -} - -resource "cloudfoundry_org_role" "org_user" { - username = var.admin_email - type = "organization_user" - org = stackit_scf_organization.scf_org.org_id -} - -resource "cloudfoundry_org_role" "org_manager" { - username = var.admin_email - type = "organization_manager" - org = stackit_scf_organization.scf_org.org_id -} - -resource "cloudfoundry_space" "spaces" { - for_each = toset(local.spaces) - name = each.key - org = stackit_scf_organization.scf_org.org_id -} - -resource "cloudfoundry_space_role" "space_developer" { - for_each = toset(local.spaces) - username = var.admin_email - type = "space_developer" - depends_on = 
[cloudfoundry_org_role.org_user] - space = cloudfoundry_space.spaces[each.key].id -} -``` - -## Explanation of configuration - -### STACKIT provider configuration - -``` -provider "stackit" { - default_region = "eu01" -} -``` - -The STACKIT Cloud Foundry Application Programming Interface (SCF API) is -regionalized. Each region operates independently. Set `default_region` in the -provider configuration to specify the region for all resources, unless you -override it for individual resources. You must also provide access data for the -relevant STACKIT project for the provider to function. - -For more details, see the -[STACKIT Terraform Provider documentation](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs). - -### stackit_scf_organization.scf_org resource - -``` -resource "stackit_scf_organization" "scf_org" { - name = var.org_name - project_id = var.project_id -} -``` - -This resource provisions a Cloud Foundry organization, which acts as the -foundational container in the Cloud Foundry environment. Each Cloud Foundry -provider configuration is scoped to a specific organization. The organization’s -name, defined by a variable, must be unique across the platform. The -organization is created within a designated STACKIT project, which requires the -STACKIT provider to be configured with the necessary permissions for that -project. - -### stackit_scf_organization_manager.scf_manager resource - -``` -resource "stackit_scf_organization_manager" "scf_manager" { - project_id = var.project_id - org_id = stackit_scf_organization.scf_org.org_id -} -``` - -This resource creates a technical user in the Cloud Foundry organization with -the `organization_manager` permission. The user is linked to the organization and -is automatically deleted when the organization is removed.
- -### stackit_scf_platform.scf_platform data source - -``` -data "stackit_scf_platform" "scf_platform" { - project_id = var.project_id - platform_id = stackit_scf_organization.scf_org.platform_id -} -``` - -This data source retrieves properties of the Cloud Foundry platform where the -organization is provisioned. It does not create resources, but provides -information about the existing platform. - -### Cloud Foundry provider configuration - -``` -provider "cloudfoundry" { - api_url = data.stackit_scf_platform.scf_platform.api_url - user = stackit_scf_organization_manager.scf_manager.username - password = stackit_scf_organization_manager.scf_manager.password -} -``` - -The Cloud Foundry provider is configured to manage resources in the new -organization. The provider uses the API URL from the `stackit_scf_platform` data -source and authenticates using the credentials of the technical user created by -the `stackit_scf_organization_manager` resource. - -For more information, see the -[Cloud Foundry Terraform Provider documentation](https://registry.terraform.io/providers/cloudfoundry/cloudfoundry/latest/docs). - -## Deploy resources - -Follow these steps to initialize your environment and provision Cloud Foundry -resources using Terraform. - -### Initialize Terraform - -Run the following command to initialize the working directory and download the -required provider plugins: - -``` -terraform init -``` - -### Create the organization manager user - -Run this command to provision the organization and technical user needed to -initialize the Cloud Foundry Terraform provider. This step is required only -during the initial setup. For later changes, you do not need the `-target` flag.
- -``` -terraform apply -target stackit_scf_organization_manager.scf_manager -``` - -### Apply the full configuration - -Run this command to provision all resources defined in your Terraform -configuration within the Cloud Foundry organization: - -``` -terraform apply -``` - -## Verify the deployment - -Verify that your Cloud Foundry resources are provisioned correctly. Use the -following Cloud Foundry CLI commands to check applications, services, and -routes: - -- `cf apps` -- `cf services` -- `cf routes` - -For more information, see the -[Cloud Foundry documentation](https://docs.cloudfoundry.org/) and the -[Cloud Foundry CLI Reference Guide](https://cli.cloudfoundry.org/). \ No newline at end of file diff --git a/docs/guides/ske_kube_state_metric_alerts.md b/docs/guides/ske_kube_state_metric_alerts.md deleted file mode 100644 index 22c2b4ce..00000000 --- a/docs/guides/ske_kube_state_metric_alerts.md +++ /dev/null @@ -1,267 +0,0 @@ ---- -page_title: "Alerting with Kube-State-Metrics in STACKIT Observability" ---- -# Alerting with Kube-State-Metrics in STACKIT Observability - -## Overview - -This guide explains how to configure the STACKIT Observability product to send alerts using metrics gathered from kube-state-metrics. - -1. **Set Up Providers** - - Begin by configuring the STACKIT and Kubernetes providers to connect to the STACKIT services. 
- - ```hcl - provider "stackit" { - default_region = "eu01" - } - - provider "kubernetes" { - host = yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.server - client_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-certificate-data) - client_key = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-key-data) - cluster_ca_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.certificate-authority-data) - } - - provider "helm" { - kubernetes { - host = yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.server - client_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-certificate-data) - client_key = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-key-data) - cluster_ca_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.certificate-authority-data) - } - } - ``` - -2. **Create SKE Cluster and Kubeconfig Resource** - - Set up a STACKIT SKE Cluster and generate the associated kubeconfig resource. 
- - ```hcl - resource "stackit_ske_cluster" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - kubernetes_version_min = "1.31" - node_pools = [ - { - name = "standard" - machine_type = "c1.4" - minimum = "3" - maximum = "9" - max_surge = "3" - availability_zones = ["eu01-1", "eu01-2", "eu01-3"] - os_version_min = "4081.2.1" - os_name = "flatcar" - volume_size = 32 - volume_type = "storage_premium_perf6" - } - ] - maintenance = { - enable_kubernetes_version_updates = true - enable_machine_image_version_updates = true - start = "01:00:00Z" - end = "02:00:00Z" - } - } - - resource "stackit_ske_kubeconfig" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - cluster_name = stackit_ske_cluster.example.name - refresh = true - } - ``` - -3. **Create Observability Instance and Credentials** - - Establish a STACKIT Observability instance and its credentials to handle alerts. - - ```hcl - locals { - alert_config = { - route = { - receiver = "EmailStackit", - repeat_interval = "1m", - continue = true - } - receivers = [ - { - name = "EmailStackit", - email_configs = [ - { - to = "" - } - ] - } - ] - } - } - - resource "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - plan_name = "Observability-Large-EU01" - alert_config = local.alert_config - } - - resource "stackit_observability_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = stackit_observability_instance.example.instance_id - } - ``` - -4. **Install Prometheus Operator** - - Use the Prometheus Helm chart to install kube-state-metrics and transfer metrics to the STACKIT Observability instance. Customize the helm values as needed for your deployment. 
- - ```yaml - # helm values - # save as prom-values.tftpl - prometheus: - enabled: true - agentMode: true - prometheusSpec: - enableRemoteWriteReceiver: true - scrapeInterval: 60s - evaluationInterval: 60s - replicas: 1 - storageSpec: - volumeClaimTemplate: - spec: - storageClassName: premium-perf4-stackit - accessModes: ['ReadWriteOnce'] - resources: - requests: - storage: 80Gi - remoteWrite: - - url: ${metrics_push_url} - queueConfig: - batchSendDeadline: '5s' - # both values need to be configured according to your observability plan - capacity: 30000 - maxSamplesPerSend: 3000 - writeRelabelConfigs: - - sourceLabels: ['__name__'] - regex: 'apiserver_.*|etcd_.*|prober_.*|storage_.*|workqueue_(work|queue)_duration_seconds_bucket|kube_pod_tolerations|kubelet_.*|kubernetes_feature_enabled|instance_scrape_target_status' - action: 'drop' - - sourceLabels: ['namespace'] - regex: 'example' - action: 'keep' - basicAuth: - username: - key: username - name: ${secret_name} - password: - key: password - name: ${secret_name} - - grafana: - enabled: false - - defaultRules: - create: false - - alertmanager: - enabled: false - - nodeExporter: - enabled: true - - kube-state-metrics: - enabled: true - customResourceState: - enabled: true - collectors: - - deployments - - pods - ``` - - ```hcl - resource "kubernetes_namespace" "monitoring" { - metadata { - name = "monitoring" - } - } - - resource "kubernetes_secret" "argus_prometheus_authorization" { - metadata { - name = "argus-prometheus-credentials" - namespace = kubernetes_namespace.monitoring.metadata[0].name - } - - data = { - username = stackit_observability_credential.example.username - password = stackit_observability_credential.example.password - } - } - - resource "helm_release" "prometheus_operator" { - name = "prometheus-operator" - repository = "https://prometheus-community.github.io/helm-charts" - chart = "kube-prometheus-stack" - version = "60.1.0" - namespace = kubernetes_namespace.monitoring.metadata[0].name - - 
values = [ - templatefile("prom-values.tftpl", { - metrics_push_url = stackit_observability_instance.example.metrics_push_url - secret_name = kubernetes_secret.argus_prometheus_authorization.metadata[0].name - }) - ] - } - ``` - -5. **Create Alert Group** - - Define an alert group with a rule to notify when a pod is running in the "example" namespace. - - ```hcl - resource "stackit_observability_alertgroup" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = stackit_observability_instance.example.instance_id - name = "TestAlertGroup" - interval = "2h" - rules = [ - { - alert = "SimplePodCheck" - expression = "sum(kube_pod_status_phase{phase=\"Running\", namespace=\"example\"}) > 0" - for = "60s" - labels = { - severity = "critical" - }, - annotations = { - summary = "Test Alert is working" - description = "Test Alert" - } - }, - ] - } - ``` - -6. **Deploy Test Pod** - - Deploy a test pod; doing so should trigger an email notification, as the deployment satisfies the conditions defined in the alert group rule. In a real-world scenario, you would typically configure alerts to monitor pods for error states instead. - - ```hcl - resource "kubernetes_namespace" "example" { - metadata { - name = "example" - } - } - - resource "kubernetes_pod" "example" { - metadata { - name = "nginx" - namespace = kubernetes_namespace.example.metadata[0].name - labels = { - app = "nginx" - } - } - - spec { - container { - image = "nginx:latest" - name = "nginx" - } - } - } - ``` \ No newline at end of file diff --git a/docs/guides/ske_log_alerts.md b/docs/guides/ske_log_alerts.md deleted file mode 100644 index 60498b05..00000000 --- a/docs/guides/ske_log_alerts.md +++ /dev/null @@ -1,199 +0,0 @@ ---- -page_title: "SKE Log Alerts with STACKIT Observability" ---- -# SKE Log Alerts with STACKIT Observability - -## Overview - -This guide walks you through setting up log-based alerting in STACKIT Observability using Promtail to ship Kubernetes logs. - -1. 
**Set Up Providers** - - Begin by configuring the STACKIT and Kubernetes providers to connect to the STACKIT services. - - ```hcl - provider "stackit" { - default_region = "eu01" - } - - provider "kubernetes" { - host = yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.server - client_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-certificate-data) - client_key = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-key-data) - cluster_ca_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.certificate-authority-data) - } - - provider "helm" { - kubernetes { - host = yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.server - client_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-certificate-data) - client_key = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-key-data) - cluster_ca_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.certificate-authority-data) - } - } - ``` - -2. **Create SKE Cluster and Kubeconfig Resource** - - Set up a STACKIT SKE Cluster and generate the associated kubeconfig resource.
- - ```hcl - resource "stackit_ske_cluster" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - kubernetes_version_min = "1.31" - node_pools = [ - { - name = "standard" - machine_type = "c1.4" - minimum = "3" - maximum = "9" - max_surge = "3" - availability_zones = ["eu01-1", "eu01-2", "eu01-3"] - os_version_min = "4081.2.1" - os_name = "flatcar" - volume_size = 32 - volume_type = "storage_premium_perf6" - } - ] - maintenance = { - enable_kubernetes_version_updates = true - enable_machine_image_version_updates = true - start = "01:00:00Z" - end = "02:00:00Z" - } - } - - resource "stackit_ske_kubeconfig" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - cluster_name = stackit_ske_cluster.example.name - refresh = true - } - ``` - -3. **Create Observability Instance and Credentials** - - Establish a STACKIT Observability instance and its credentials to handle alerts. - - ```hcl - locals { - alert_config = { - route = { - receiver = "EmailStackit", - repeat_interval = "1m", - continue = true - } - receivers = [ - { - name = "EmailStackit", - email_configs = [ - { - to = "" - } - ] - } - ] - } - } - - resource "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - plan_name = "Observability-Large-EU01" - alert_config = local.alert_config - } - - resource "stackit_observability_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = stackit_observability_instance.example.instance_id - } - ``` - -4. **Install Promtail** - - Deploy Promtail via Helm to collect logs and forward them to the STACKIT Observability Loki endpoint. 
- - ```hcl - resource "helm_release" "promtail" { - name = "promtail" - repository = "https://grafana.github.io/helm-charts" - chart = "promtail" - namespace = kubernetes_namespace.monitoring.metadata.0.name - version = "6.16.4" - - values = [ - <<-EOF - config: - clients: - # To find the Loki push URL, navigate to the observability instance in the portal and select the API tab. - - url: "https://${stackit_observability_credential.example.username}:${stackit_observability_credential.example.password}@/instances/${stackit_observability_instance.example.instance_id}/loki/api/v1/push" - EOF - ] - } - ``` - -5. **Create Alert Group** - - Create a log alert that triggers when a specific pod logs an error message. - - ```hcl - resource "stackit_observability_logalertgroup" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = stackit_observability_instance.example.instance_id - name = "TestLogAlertGroup" - interval = "1m" - rules = [ - { - alert = "SimplePodLogAlertCheck" - expression = "sum(rate({namespace=\"example\", pod=\"logger\"} |= \"Simulated error message\" [1m])) > 0" - for = "60s" - labels = { - severity = "critical" - }, - annotations = { - summary : "Test Log Alert is working" - description : "Test Log Alert" - }, - }, - ] - } - ``` - -6. **Deploy Test Pod** - - Launch a pod that emits simulated error logs. This should trigger the alert if everything is set up correctly. 
- -```hcl - resource "kubernetes_namespace" "example" { - metadata { - name = "example" - } - } - - resource "kubernetes_pod" "logger" { - metadata { - name = "logger" - namespace = kubernetes_namespace.example.metadata[0].name - labels = { - app = "logger" - } - } - - spec { - container { - name = "logger" - image = "bash" - command = [ - "bash", - "-c", - <<-EOF - while true; do - sleep 10 - echo "Simulated error message" >&2 - done - EOF - ] - } - } - } - ``` \ No newline at end of file diff --git a/docs/guides/stackit_cdn_with_custom_domain.md b/docs/guides/stackit_cdn_with_custom_domain.md deleted file mode 100644 index 1fd9cbdb..00000000 --- a/docs/guides/stackit_cdn_with_custom_domain.md +++ /dev/null @@ -1,255 +0,0 @@ ---- -page_title: "Using STACKIT CDN to serve static files from an HTTP Origin" ---- - -# Using STACKIT CDN to serve static files from an HTTP Origin - -This guide will walk you through the process of setting up a STACKIT CDN distribution to serve static files from a -generic HTTP origin using Terraform. This is a common use case for developers who want to deliver content with low -latency and high data transfer speeds. - ---- - -## Prerequisites - -Before you begin, make sure you have the following: - -* A **STACKIT project** and a user account with the necessary permissions for the CDN. -* A **Service Account Key**: you can read about creating one here: [Create a Service Account Key](https://docs.stackit.cloud/platform/access-and-identity/service-accounts/how-tos/manage-service-account-keys/) - ---- - -## Step 1: Configure the Terraform Provider - -First, you need to configure the STACKIT provider in your Terraform configuration. Create a file named `main.tf` and add -the following code. This block tells Terraform to download and use the STACKIT provider. - -```terraform -terraform { - required_providers { - stackit = { - source = "stackitcloud/stackit" - } - } -} - -variable "service_account_key" { - type = string - description = "Your STACKIT service account key."
- sensitive = true - default = "path/to/sa-key.json" -} - -variable "project_id" { - type = string - default = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # Your project ID -} - -provider "stackit" { - # The STACKIT provider is configured using the defined variables. - default_region = "eu01" - service_account_key_path = var.service_account_key -} - -``` - -## Step 2: Create the DNS Zone - -The first resource you'll create is the DNS zone, which will manage the records for your domain. - -```terraform -resource "stackit_dns_zone" "example_zone" { - project_id = var.project_id - name = "My DNS zone" - dns_name = "myapp.runs.onstackit.cloud" - contact_email = "aa@bb.ccc" - type = "primary" -} -``` - -## Step 3: Create the CDN Distribution - -Next, define the CDN distribution. This is the core service that will cache and serve your content from its origin. - -```terraform -resource "stackit_cdn_distribution" "example_distribution" { - project_id = var.project_id - - config = { - # Define the backend configuration - backend = { - type = "http" - - # Replace with the URL of your HTTP origin - origin_url = "https://your-origin-server.com" - } - - # The regions where content will be hosted - regions = ["EU", "US", "ASIA", "AF", "SA"] - blocked_countries = [] - } - -} -``` - -## Step 4: Create the DNS CNAME Record - -Finally, create the **CNAME record** to point your custom domain to the CDN. This step must come after the CDN is -created because it needs the CDN's unique domain name as its target. - -```terraform -resource "stackit_dns_record_set" "cname_record" { - project_id = stackit_dns_zone.example_zone.project_id - zone_id = stackit_dns_zone.example_zone.zone_id - - # This is the custom domain name which will be added to your zone - name = "cdn" - type = "CNAME" - ttl = 3600 - - # Points to the CDN distribution's unique domain. - # Notice the added dot at the end of the domain name to point to a FQDN. 
- records = ["${stackit_cdn_distribution.example_distribution.domains[0].name}."] -} - -``` - -This record directs traffic from your custom domain to the STACKIT CDN infrastructure. - -## Step 5: Add a Custom Domain to the CDN - -To provide a user-friendly URL, associate a custom domain (like `cdn.myapp.runs.onstackit.cloud`) with your -distribution. - -```terraform -resource "stackit_cdn_custom_domain" "example_custom_domain" { - project_id = stackit_cdn_distribution.example_distribution.project_id - distribution_id = stackit_cdn_distribution.example_distribution.distribution_id - - # Creates "cdn.myapp.runs.onstackit.cloud" dynamically - name = "${stackit_dns_record_set.cname_record.name}.${stackit_dns_zone.example_zone.dns_name}" -} - -``` - -This resource links the subdomain you created in the previous step to the CDN distribution. - -## Complete Terraform Configuration - -Here is the complete `main.tf` file, which follows the logical order of operations. - -```terraform -# This configuration file sets up a complete STACKIT CDN distribution -# with a custom domain managed by STACKIT DNS. - -# ----------------------------------------------------------------------------- -# PROVIDER CONFIGURATION -# ----------------------------------------------------------------------------- - -terraform { - required_providers { - stackit = { - source = "stackitcloud/stackit" - } - } -} - -variable "service_account_key" { - type = string - description = "Your STACKIT service account key." - sensitive = true - default = "path/to/sa-key.json" -} - -variable "project_id" { - type = string - description = "Your STACKIT project ID." - default = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -provider "stackit" { - # The STACKIT provider is configured using the defined variables. 
- default_region = "eu01" - service_account_key_path = var.service_account_key -} - -# ----------------------------------------------------------------------------- -# DNS ZONE RESOURCE -# ----------------------------------------------------------------------------- -# The DNS zone manages all records for your domain. -# It's the first resource to be created. -# ----------------------------------------------------------------------------- - -resource "stackit_dns_zone" "example_zone" { - project_id = var.project_id - name = "My DNS zone" - dns_name = "myapp.runs.onstackit.cloud" - contact_email = "aa@bb.ccc" - type = "primary" -} - -# ----------------------------------------------------------------------------- -# CDN DISTRIBUTION RESOURCE -# ----------------------------------------------------------------------------- -# This resource defines the CDN, its origin, and caching regions. -# ----------------------------------------------------------------------------- - -resource "stackit_cdn_distribution" "example_distribution" { - project_id = var.project_id - - config = { - # Define the backend configuration - backend = { - type = "http" - - # Replace with the URL of your HTTP origin - origin_url = "https://your-origin-server.com" - } - - # The regions where content will be hosted - regions = ["EU", "US", "ASIA", "AF", "SA"] - blocked_countries = [] - } -} - -# ----------------------------------------------------------------------------- -# CUSTOM DOMAIN AND DNS RECORD -# ----------------------------------------------------------------------------- -# These resources link your CDN to a user-friendly custom domain and create -# the necessary DNS record to route traffic. 
-# ----------------------------------------------------------------------------- - -resource "stackit_dns_record_set" "cname_record" { - project_id = stackit_dns_zone.example_zone.project_id - zone_id = stackit_dns_zone.example_zone.zone_id - # This is the custom domain name which will be added to your zone - name = "cdn" - type = "CNAME" - ttl = 3600 - # Points to the CDN distribution's unique domain. - # The dot at the end makes it a fully qualified domain name (FQDN). - records = ["${stackit_cdn_distribution.example_distribution.domains[0].name}."] - -} - -resource "stackit_cdn_custom_domain" "example_custom_domain" { - project_id = stackit_cdn_distribution.example_distribution.project_id - distribution_id = stackit_cdn_distribution.example_distribution.distribution_id - - # Creates "cdn.myapp.runs.onstackit.cloud" dynamically - name = "${stackit_dns_record_set.cname_record.name}.${stackit_dns_zone.example_zone.dns_name}" -} - -# ----------------------------------------------------------------------------- -# OUTPUTS -# ----------------------------------------------------------------------------- -# This output will display the final custom URL after `terraform apply` is run. -# ----------------------------------------------------------------------------- - -output "custom_cdn_url" { - description = "The final custom domain URL for the CDN distribution." 
- value = "https://${stackit_cdn_custom_domain.example_custom_domain.name}" -} - -``` diff --git a/docs/guides/stackit_org_service_account.md b/docs/guides/stackit_org_service_account.md deleted file mode 100644 index e75ad7ef..00000000 --- a/docs/guides/stackit_org_service_account.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -page_title: "Creating projects in empty organization via Terraform" ---- -# Creating projects in empty organization via Terraform - -Consider the following situation: You're starting with an empty STACKIT organization and want to create projects -in this organization using the `stackit_resourcemanager_project` resource. Unfortunately it's not possible to create -a service account on organization level which can be used for authentication in the STACKIT Terraform provider. -The following steps will help you to get started: - -1. Using the STACKIT portal, create a dummy project in your organization which will hold your service account, let's name it e.g. "dummy-service-account-project". -2. In this "dummy-service-account-project", create a service account. Create and save a service account key to use for authentication for the STACKIT Terraform provider later as described in the docs. Now copy the e-mail address of the service account you just created. -3. Here comes the important part: Navigate to your organization, open it and select "Access". Click on the "Grant access" button and paste the e-mail address of your service account. Be careful to grant the service account enough permissions to create projects in your organization, e.g. by assigning the "owner" role to it. 
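Once the service account has been granted access on the organization, the steps above can be sketched in Terraform. The key path, organization container ID, and owner e-mail below are placeholders (not values from this guide); attribute names follow the `stackit_resourcemanager_project` resource:

```hcl
# Authenticate with the key of the service account that lives in the
# "dummy-service-account-project" (path is a placeholder).
provider "stackit" {
  default_region           = "eu01"
  service_account_key_path = "path/to/sa-key.json"
}

# Create a new project in the organization. This only succeeds if the
# service account was granted sufficient permissions (e.g. "owner") on
# the organization. Parent container ID and owner e-mail are placeholders.
resource "stackit_resourcemanager_project" "example" {
  parent_container_id = "my-organization-container-id"
  name                = "example-project"
  owner_email         = "aa@bb.ccc"
}
```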
- -*This problem was brought up initially in [this](https://github.com/stackitcloud/terraform-provider-stackit/issues/855) issue on GitHub.* diff --git a/docs/guides/using_loadbalancer_with_observability.md b/docs/guides/using_loadbalancer_with_observability.md deleted file mode 100644 index a6bc9703..00000000 --- a/docs/guides/using_loadbalancer_with_observability.md +++ /dev/null @@ -1,163 +0,0 @@ ---- -page_title: "Using the STACKIT Loadbalancer together with STACKIT Observability" ---- -# Using the STACKIT Loadbalancer together with STACKIT Observability - -## Overview - -This guide explains how to configure the STACKIT Loadbalancer product to send metrics and logs to a STACKIT Observability instance. - -1. **Set Up Providers** - - Begin by configuring the STACKIT provider to connect to the STACKIT services. - - ```hcl - provider "stackit" { - default_region = "eu01" - } - ``` - -2. **Create an Observability instance** - - Establish a STACKIT Observability instance and its credentials. - - ```hcl - resource "stackit_observability_instance" "observability01" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - plan_name = "Observability-Monitoring-Medium-EU01" - acl = ["0.0.0.0/0"] - metrics_retention_days = 90 - metrics_retention_days_5m_downsampling = 90 - metrics_retention_days_1h_downsampling = 90 - } - - resource "stackit_observability_credential" "observability01-credential" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = stackit_observability_instance.observability01.instance_id - } - ``` - -3. **Create STACKIT Loadbalancer credentials reference** - - Create STACKIT Loadbalancer observability credentials, which will be referenced by the STACKIT Loadbalancer resource.
- - ```hcl - resource "stackit_loadbalancer_observability_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - display_name = "example-credentials" - username = stackit_observability_credential.observability01-credential.username - password = stackit_observability_credential.observability01-credential.password - } - ``` - -4. **Create the STACKIT Loadbalancer** - - ```hcl - # Create a network - resource "stackit_network" "example_network" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-network" - ipv4_nameservers = ["8.8.8.8"] - ipv4_prefix = "192.168.0.0/25" - labels = { - "key" = "value" - } - routed = true - } - - # Create a network interface - resource "stackit_network_interface" "nic" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_id = stackit_network.example_network.network_id - } - - # Create a public IP for the load balancer - resource "stackit_public_ip" "public-ip" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - lifecycle { - ignore_changes = [network_interface_id] - } - } - - # Create a key pair for accessing the server instance - resource "stackit_key_pair" "keypair" { - name = "example-key-pair" - # set the path of your public key file here - public_key = chomp(file("/home/bob/.ssh/id_ed25519.pub")) - } - - # Create a server instance - resource "stackit_server" "boot-from-image" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "59838a89-51b1-4892-b57f-b3caf598ee2f" // Ubuntu 24.04 - } - availability_zone = "eu01-1" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - network_interfaces = [ - stackit_network_interface.nic.network_interface_id - ] - } - - # Create a load balancer - resource "stackit_loadbalancer" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-load-balancer" - target_pools = [ - { - name = 
"example-target-pool" - target_port = 80 - targets = [ - { - display_name = stackit_server.boot-from-image.name - ip = stackit_network_interface.nic.ipv4 - } - ] - active_health_check = { - healthy_threshold = 10 - interval = "3s" - interval_jitter = "3s" - timeout = "3s" - unhealthy_threshold = 10 - } - } - ] - listeners = [ - { - display_name = "example-listener" - port = 80 - protocol = "PROTOCOL_TCP" - target_pool = "example-target-pool" - } - ] - networks = [ - { - network_id = stackit_network.example_network.network_id - role = "ROLE_LISTENERS_AND_TARGETS" - } - ] - external_address = stackit_public_ip.public-ip.ip - options = { - private_network_only = false - observability = { - logs = { - # uses the load balancer credential from the last step - credentials_ref = stackit_loadbalancer_observability_credential.example.credentials_ref - # uses the observability instance from step 2 - push_url = stackit_observability_instance.observability01.logs_push_url - } - metrics = { - # uses the load balancer credential from the last step - credentials_ref = stackit_loadbalancer_observability_credential.example.credentials_ref - # uses the observability instance from step 2 - push_url = stackit_observability_instance.observability01.metrics_push_url - } - } - } - } - ``` diff --git a/docs/guides/vault_secrets_manager.md b/docs/guides/vault_secrets_manager.md deleted file mode 100644 index d97b0533..00000000 --- a/docs/guides/vault_secrets_manager.md +++ /dev/null @@ -1,83 +0,0 @@ ---- -page_title: "Using Vault Provider with STACKIT Secrets Manager" ---- -# Using Vault Provider with STACKIT Secrets Manager - -## Overview - -This guide outlines the process of utilizing the [HashiCorp Vault provider](https://registry.terraform.io/providers/hashicorp/vault) alongside the STACKIT provider to write secrets in the STACKIT Secrets Manager. The guide focuses on secrets from STACKIT Cloud resources but can be adapted for any secret. - -## Steps - -1.
**Configure STACKIT Provider** - - ```hcl - provider "stackit" { - default_region = "eu01" - } - ``` - -2. **Create STACKIT Secrets Manager Instance** - - ```hcl - resource "stackit_secretsmanager_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - } - ``` - -3. **Define STACKIT Secrets Manager User** - - ```hcl - resource "stackit_secretsmanager_user" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = stackit_secretsmanager_instance.example.instance_id - description = "Example user" - write_enabled = true - } - ``` - -4. **Configure Vault Provider** - - ```hcl - provider "vault" { - address = "https://prod.sm.eu01.stackit.cloud" - skip_child_token = true - - auth_login_userpass { - username = stackit_secretsmanager_user.example.username - password = stackit_secretsmanager_user.example.password - } - } - ``` - -5. **Define Terraform Resource (Example: Observability Monitoring Instance)** - - ```hcl - resource "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - plan_name = "Observability-Monitoring-Medium-EU01" - } - ``` - -6. **Store Secret in Vault** - - ```hcl - resource "vault_kv_secret_v2" "example" { - mount = stackit_secretsmanager_instance.example.instance_id - name = "my-secret" - cas = 1 - delete_all_versions = true - data_json = jsonencode( - { - grafana_password = stackit_observability_instance.example.grafana_initial_admin_password, - other_secret = ..., - } - ) - } - ``` - -## Note - -This example can be adapted for various resources within the provider, as well as any other secret the user wants to set in the Secrets Manager instance. Adapting this example means replacing the Observability Monitoring Grafana password with the appropriate value.
\ No newline at end of file diff --git a/docs/index.md b/docs/index.md index ce090ead..c835f932 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,31 +1,33 @@ -# STACKIT Terraform Provider +# STACKITPRIVATEPREVIEW Terraform Provider The STACKIT Terraform provider is the official Terraform provider to integrate all the resources developed by [STACKIT](https://www.stackit.de/en/). ## Example Usage ```terraform -provider "stackit" { +# Copyright (c) STACKIT + +provider "stackitprivatepreview" { default_region = "eu01" } # Authentication # Token flow (scheduled for deprecation and will be removed on December 17, 2025) -provider "stackit" { +provider "stackitprivatepreview" { default_region = "eu01" service_account_token = var.service_account_token } # Key flow -provider "stackit" { +provider "stackitprivatepreview" { default_region = "eu01" service_account_key = var.service_account_key private_key = var.private_key } # Key flow (using path) -provider "stackit" { +provider "stackitprivatepreview" { default_region = "eu01" service_account_key_path = var.service_account_key_path private_key_path = var.private_key_path diff --git a/docs/resources/affinity_group.md b/docs/resources/affinity_group.md deleted file mode 100644 index 3d8b8351..00000000 --- a/docs/resources/affinity_group.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_affinity_group Resource - stackit" -subcategory: "" -description: |- - Affinity Group schema. 
- Usage with server - - resource "stackit_affinity_group" "affinity-group" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-key-pair" - policy = "soft-affinity" - } - - resource "stackit_server" "example-server" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - affinity_group = stackit_affinity_group.affinity-group.affinity_group_id - availability_zone = "eu01-1" - machine_type = "g2i.1" - } - - - Policies - - hard-affinity- All servers launched in this group will be hosted on the same compute node. - hard-anti-affinity- All servers launched in this group will be - hosted on different compute nodes. - soft-affinity- All servers launched in this group will be hosted - on the same compute node if possible, but if not possible they still will be scheduled instead of failure. - soft-anti-affinity- All servers launched in this group will be hosted on different compute nodes if possible, - but if not possible they still will be scheduled instead of failure. ---- - -# stackit_affinity_group (Resource) - -Affinity Group schema. - - - -### Usage with server -```terraform -resource "stackit_affinity_group" "affinity-group" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-key-pair" - policy = "soft-affinity" -} - -resource "stackit_server" "example-server" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - affinity_group = stackit_affinity_group.affinity-group.affinity_group_id - availability_zone = "eu01-1" - machine_type = "g2i.1" -} - -``` - -### Policies - -* `hard-affinity`- All servers launched in this group will be hosted on the same compute node. 
- -* `hard-anti-affinity`- All servers launched in this group will be - hosted on different compute nodes. - -* `soft-affinity`- All servers launched in this group will be hosted - on the same compute node if possible, but if not possible they still will be scheduled instead of failure. - -* `soft-anti-affinity`- All servers launched in this group will be hosted on different compute nodes if possible, - but if not possible they still will be scheduled instead of failure. - -## Example Usage - -```terraform -resource "stackit_affinity_group" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-affinity-group-name" - policy = "hard-anti-affinity" -} - -# Only use the import statement, if you want to import an existing affinity group -import { - to = stackit_affinity_group.import-example - id = "${var.project_id},${var.region},${var.affinity_group_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) The name of the affinity group. -- `policy` (String) The policy of the affinity group. -- `project_id` (String) STACKIT Project ID to which the affinity group is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `affinity_group_id` (String) The affinity group ID. -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`region`,`affinity_group_id`". -- `members` (List of String) The servers that are part of the affinity group. 
diff --git a/docs/resources/authorization_organization_role_assignment.md b/docs/resources/authorization_organization_role_assignment.md deleted file mode 100644 index 3d8e0a27..00000000 --- a/docs/resources/authorization_organization_role_assignment.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_authorization_organization_role_assignment Resource - stackit" -subcategory: "" -description: |- - organization Role Assignment resource schema. - ~> This resource is part of the iam experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. ---- - -# stackit_authorization_organization_role_assignment (Resource) - -organization Role Assignment resource schema. - -~> This resource is part of the iam experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. - -## Example Usage - -```terraform -resource "stackit_authorization_organization_role_assignment" "example" { - resource_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - role = "owner" - subject = "john.doe@stackit.cloud" -} - -# Only use the import statement, if you want to import an existing organization role assignment -import { - to = stackit_authorization_organization_role_assignment.import-example - id = "${var.organization_id},${var.org_role_assignment_role},${var.org_role_assignment_subject}" -} -``` - - -## Schema - -### Required - -- `resource_id` (String) organization Resource to assign the role to. -- `role` (String) Role to be assigned -- `subject` (String) Identifier of user, service account or client. Usually email address or name in case of clients - -### Read-Only - -- `id` (String) Terraform's internal resource identifier. It is structured as "[resource_id],[role],[subject]". 
diff --git a/docs/resources/authorization_project_role_assignment.md b/docs/resources/authorization_project_role_assignment.md deleted file mode 100644 index 14164421..00000000 --- a/docs/resources/authorization_project_role_assignment.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_authorization_project_role_assignment Resource - stackit" -subcategory: "" -description: |- - project Role Assignment resource schema. - ~> This resource is part of the iam experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. ---- - -# stackit_authorization_project_role_assignment (Resource) - -project Role Assignment resource schema. - -~> This resource is part of the iam experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. - -## Example Usage - -```terraform -resource "stackit_authorization_project_role_assignment" "example" { - resource_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - role = "owner" - subject = "john.doe@stackit.cloud" -} - -# Only use the import statement, if you want to import an existing project role assignment -import { - to = stackit_authorization_project_role_assignment.import-example - id = "${var.project_id},${var.project_role_assignment_role},${var.project_role_assignment_subject}" -} -``` - - -## Schema - -### Required - -- `resource_id` (String) project Resource to assign the role to. -- `role` (String) Role to be assigned -- `subject` (String) Identifier of user, service account or client. Usually email address or name in case of clients - -### Read-Only - -- `id` (String) Terraform's internal resource identifier. It is structured as "[resource_id],[role],[subject]". 
diff --git a/docs/resources/cdn_custom_domain.md b/docs/resources/cdn_custom_domain.md deleted file mode 100644 index 0a535c6b..00000000 --- a/docs/resources/cdn_custom_domain.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_cdn_custom_domain Resource - stackit" -subcategory: "" -description: |- - CDN custom domain resource schema. - ~> This resource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources. ---- - -# stackit_cdn_custom_domain (Resource) - -CDN custom domain resource schema. - -~> This resource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources. - -## Example Usage - -```terraform -resource "stackit_cdn_custom_domain" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - distribution_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "https://xxx.xxx" - certificate = { - certificate = "-----BEGIN CERTIFICATE-----\nY2VydGlmaWNhdGVfZGF0YQ==\n-----END CERTIFICATE-----" - private_key = "-----BEGIN RSA PRIVATE KEY-----\nY2VydGlmaWNhdGVfZGF0YQ==\n-----END RSA PRIVATE KEY-----" - } -} - -# Only use the import statement, if you want to import an existing cdn custom domain -import { - to = stackit_cdn_custom_domain.import-example - id = "${var.project_id},${var.distribution_id},${var.custom_domain_name}" -} -``` - - -## Schema - -### Required - -- `distribution_id` (String) CDN distribution ID -- `name` (String) -- `project_id` (String) STACKIT project ID associated with the distribution - -### Optional - -- `certificate` (Attributes) The TLS certificate for the custom domain.
If omitted, a managed certificate will be used. If the block is specified, a custom certificate is used. (see [below for nested schema](#nestedatt--certificate)) - -### Read-Only - -- `errors` (List of String) List of distribution errors -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`distribution_id`". -- `status` (String) Status of the distribution - - -### Nested Schema for `certificate` - -Optional: - -- `certificate` (String, Sensitive) The PEM-encoded TLS certificate. Required for custom certificates. -- `private_key` (String, Sensitive) The PEM-encoded private key for the certificate. Required for custom certificates. The certificate will be updated if this field is changed. - -Read-Only: - -- `version` (Number) A version identifier for the certificate. diff --git a/docs/resources/cdn_distribution.md b/docs/resources/cdn_distribution.md deleted file mode 100644 index 66338f0c..00000000 --- a/docs/resources/cdn_distribution.md +++ /dev/null @@ -1,107 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_cdn_distribution Resource - stackit" -subcategory: "" -description: |- - CDN distribution resource schema. - ~> This resource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources. ---- - -# stackit_cdn_distribution (Resource) - -CDN distribution resource schema. - -~> This resource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources.
- -## Example Usage - -```terraform -resource "stackit_cdn_distribution" "example_distribution" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - config = { - backend = { - type = "http" - origin_url = "https://mybackend.onstackit.cloud" - geofencing = { - "https://mybackend.onstackit.cloud" = ["DE"] - } - } - regions = ["EU", "US", "ASIA", "AF", "SA"] - blocked_countries = ["DE", "AT", "CH"] - - optimizer = { - enabled = true - } - } -} - -# Only use the import statement, if you want to import an existing cdn distribution -import { - to = stackit_cdn_distribution.import-example - id = "${var.project_id},${var.distribution_id}" -} -``` - - -## Schema - -### Required - -- `config` (Attributes) The distribution configuration (see [below for nested schema](#nestedatt--config)) -- `project_id` (String) STACKIT project ID associated with the distribution - -### Read-Only - -- `created_at` (String) Time when the distribution was created -- `distribution_id` (String) CDN distribution ID -- `domains` (Attributes List) List of configured domains for the distribution (see [below for nested schema](#nestedatt--domains)) -- `errors` (List of String) List of distribution errors -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`distribution_id`". -- `status` (String) Status of the distribution -- `updated_at` (String) Time when the distribution was last updated - - -### Nested Schema for `config` - -Required: - -- `backend` (Attributes) The configured backend for the distribution (see [below for nested schema](#nestedatt--config--backend)) -- `regions` (List of String) The configured regions where content will be hosted - -Optional: - -- `blocked_countries` (List of String) The configured countries where distribution of content is blocked -- `optimizer` (Attributes) Configuration for the Image Optimizer. 
This is a paid feature that automatically optimizes images to reduce their file size for faster delivery, leading to improved website performance and a better user experience. (see [below for nested schema](#nestedatt--config--optimizer)) - - -### Nested Schema for `config.backend` - -Required: - -- `origin_url` (String) The origin URL of the backend for the distribution -- `type` (String) The configured backend type. Possible values are: `http`. - -Optional: - -- `geofencing` (Map of List of String) A map of URLs to a list of countries where content is allowed. -- `origin_request_headers` (Map of String) The configured origin request headers for the backend - - - -### Nested Schema for `config.optimizer` - -Optional: - -- `enabled` (Boolean) - - - - -### Nested Schema for `domains` - -Read-Only: - -- `errors` (List of String) List of domain errors -- `name` (String) The name of the domain -- `status` (String) The status of the domain -- `type` (String) The type of the domain. Each distribution has one domain of type "managed", and domains of type "custom" may be additionally created by the user diff --git a/docs/resources/dns_record_set.md b/docs/resources/dns_record_set.md deleted file mode 100644 index b52f7e0d..00000000 --- a/docs/resources/dns_record_set.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_dns_record_set Resource - stackit" -subcategory: "" -description: |- - DNS Record Set Resource schema. ---- - -# stackit_dns_record_set (Resource) - -DNS Record Set Resource schema.
- -## Example Usage - -```terraform -resource "stackit_dns_record_set" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - zone_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-record-set" - type = "A" - comment = "Example comment" - records = ["1.2.3.4"] -} - -# Only use the import statement, if you want to import an existing dns record set -import { - to = stackit_dns_record_set.import-example - id = "${var.project_id},${var.zone_id},${var.record_set_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) Name of the record which should be a valid domain according to rfc1035 Section 2.3.4. E.g. `example.com` -- `project_id` (String) STACKIT project ID to which the dns record set is associated. -- `records` (List of String) Records. -- `type` (String) The record set type. E.g. `A` or `CNAME` -- `zone_id` (String) The zone ID to which the dns record set is associated. - -### Optional - -- `active` (Boolean) Specifies if the record set is active. Defaults to `true` -- `comment` (String) Comment. -- `ttl` (Number) Time to live. E.g. 3600 - -### Read-Only - -- `error` (String) Shows the error in case create/update/delete failed. -- `fqdn` (String) Fully qualified domain name (FQDN) of the record set. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`zone_id`,`record_set_id`". -- `record_set_id` (String) The record set ID. -- `state` (String) Record set state. diff --git a/docs/resources/dns_zone.md b/docs/resources/dns_zone.md deleted file mode 100644 index 25bc0ae8..00000000 --- a/docs/resources/dns_zone.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_dns_zone Resource - stackit" -subcategory: "" -description: |- - DNS Zone resource schema. ---- - -# stackit_dns_zone (Resource) - -DNS Zone resource schema.
- -## Example Usage - -```terraform -resource "stackit_dns_zone" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "Example zone" - dns_name = "example-zone.com" - contact_email = "aa@bb.ccc" - type = "primary" - acl = "192.168.0.0/24" - description = "Example description" - default_ttl = 1230 -} - -# Only use the import statement, if you want to import an existing dns zone -import { - to = stackit_dns_zone.import-example - id = "${var.project_id},${var.zone_id}" -} -``` - - -## Schema - -### Required - -- `dns_name` (String) The zone name. E.g. `example.com` -- `name` (String) The user-given name of the zone. -- `project_id` (String) STACKIT project ID to which the dns zone is associated. - -### Optional - -- `acl` (String) The access control list. E.g. `0.0.0.0/0,::/0` -- `active` (Boolean) -- `contact_email` (String) A contact e-mail for the zone. -- `default_ttl` (Number) Default time to live. E.g. 3600. -- `description` (String) Description of the zone. -- `expire_time` (Number) Expire time. E.g. 1209600. -- `is_reverse_zone` (Boolean) Specifies if the zone is a reverse zone. Defaults to `false` -- `negative_cache` (Number) Negative caching. E.g. 60 -- `primaries` (List of String) Primary name server for secondary zone. E.g. ["1.2.3.4"] -- `refresh_time` (Number) Refresh time. E.g. 3600 -- `retry_time` (Number) Retry time. E.g. 600 -- `type` (String) Zone type. Defaults to `primary`. Possible values are: `primary`, `secondary`. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`zone_id`". -- `primary_name_server` (String) Primary name server. FQDN. -- `record_count` (Number) The number of records in the zone. -- `serial_number` (Number) Serial number. E.g. `2022111400`. -- `state` (String) Zone state. E.g. `CREATE_SUCCEEDED`. -- `visibility` (String) Visibility of the zone. E.g. `public`. -- `zone_id` (String) The zone ID.
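The zone and record set resources documented above are typically combined by passing the zone's computed `zone_id` attribute into the record set. A minimal illustrative sketch, not part of the original generated docs (the project ID and record values are placeholders):

```terraform
# Illustrative only: wires the stackit_dns_zone outputs into a
# stackit_dns_record_set, following the schemas documented above.
resource "stackit_dns_zone" "example" {
  project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  name       = "Example zone"
  dns_name   = "example-zone.com"
}

resource "stackit_dns_record_set" "www" {
  project_id = stackit_dns_zone.example.project_id
  zone_id    = stackit_dns_zone.example.zone_id
  name       = "www.example-zone.com"
  type       = "A"
  records    = ["1.2.3.4"]
}
```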
diff --git a/docs/resources/git.md b/docs/resources/git.md deleted file mode 100644 index 0fb6f2bf..00000000 --- a/docs/resources/git.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_git Resource - stackit" -subcategory: "" -description: |- - Git Instance resource schema. - ~> This resource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources. This resource currently does not support updates. Changing the ACLs, flavor, or name will trigger resource recreation. Update functionality will be added soon. In the meantime, please proceed with caution. To update these attributes, please open a support ticket. ---- - -# stackit_git (Resource) - -Git Instance resource schema. - -~> This resource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources. This resource currently does not support updates. Changing the ACLs, flavor, or name will trigger resource recreation. Update functionality will be added soon. In the meantime, please proceed with caution. To update these attributes, please open a support ticket. 
- -## Example Usage - -```terraform -resource "stackit_git" "git" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "git-example-instance" -} - -resource "stackit_git" "git_custom" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "git-custom-instance" - acl = [ - "0.0.0.0/0" - ] - flavor = "git-100" -} - -# Only use the import statement, if you want to import an existing git resource -import { - to = stackit_git.import-example - id = "${var.project_id},${var.git_instance_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) Unique name linked to the git instance. -- `project_id` (String) STACKIT project ID to which the git instance is associated. - -### Optional - -- `acl` (List of String) Restricted ACL for instance access. -- `flavor` (String) Instance flavor. If not provided, defaults to git-100. For a list of available flavors, refer to our API documentation: `https://docs.api.stackit.cloud/documentation/git/version/v1beta` - -### Read-Only - -- `consumed_disk` (String) How many bytes of disk space are consumed. -- `consumed_object_storage` (String) How many bytes of Object Storage are consumed. -- `created` (String) Instance creation timestamp in RFC3339 format. -- `id` (String) Terraform's internal resource ID, structured as "`project_id`,`instance_id`". -- `instance_id` (String) ID linked to the git instance. -- `url` (String) URL linked to the git instance. -- `version` (String) Version linked to the git instance. diff --git a/docs/resources/image.md b/docs/resources/image.md deleted file mode 100644 index 7dfb252f..00000000 --- a/docs/resources/image.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_image Resource - stackit" -subcategory: "" -description: |- - Image resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_image (Resource) - -Image resource schema.
Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_image" "example_image" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-image" - disk_format = "qcow2" - local_file_path = "./path/to/image.qcow2" - min_disk_size = 10 - min_ram = 5 -} - -# Only use the import statement, if you want to import an existing image -# Must set a configuration value for the local_file_path attribute as the provider has marked it as required. -# Since this attribute is generally not returned by the API, adding it would replace your image resource on the next terraform apply. -# In order to prevent this you need to add: -#lifecycle { -# ignore_changes = [ local_file_path ] -# } -import { - to = stackit_image.import-example - id = "${var.project_id},${var.region},${var.image_id}" -} -``` - - -## Schema - -### Required - -- `disk_format` (String) The disk format of the image. -- `local_file_path` (String) The filepath of the raw image file to be uploaded. -- `name` (String) The name of the image. -- `project_id` (String) STACKIT project ID to which the image is associated. - -### Optional - -- `config` (Attributes) Properties to set hardware and scheduling settings for an image. (see [below for nested schema](#nestedatt--config)) -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `min_disk_size` (Number) The minimum disk size of the image in GB. -- `min_ram` (Number) The minimum RAM of the image in MB. -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `checksum` (Attributes) Representation of an image checksum. (see [below for nested schema](#nestedatt--checksum)) -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`image_id`". -- `image_id` (String) The image ID. -- `protected` (Boolean) Whether the image is protected.
-- `scope` (String) The scope of the image. - - -### Nested Schema for `config` - -Optional: - -- `boot_menu` (Boolean) Enables the BIOS bootmenu. -- `cdrom_bus` (String) Sets CDROM bus controller type. -- `disk_bus` (String) Sets Disk bus controller type. -- `nic_model` (String) Sets virtual network interface model. -- `operating_system` (String) Enables operating system specific optimizations. -- `operating_system_distro` (String) Operating system distribution. -- `operating_system_version` (String) Version of the operating system. -- `rescue_bus` (String) Sets the device bus when the image is used as a rescue image. -- `rescue_device` (String) Sets the device when the image is used as a rescue image. -- `secure_boot` (Boolean) Enables Secure Boot. -- `uefi` (Boolean) Enables UEFI boot. -- `video_model` (String) Sets Graphic device model. -- `virtio_scsi` (Boolean) Enables the use of VirtIO SCSI to provide block device access. By default instances use VirtIO Block. - - - -### Nested Schema for `checksum` - -Read-Only: - -- `algorithm` (String) Algorithm for the checksum of the image data. -- `digest` (String) Hexdigest of the checksum of the image data. diff --git a/docs/resources/key_pair.md b/docs/resources/key_pair.md deleted file mode 100644 index ff25a7b2..00000000 --- a/docs/resources/key_pair.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_key_pair Resource - stackit" -subcategory: "" -description: |- - Key pair resource schema. Must have a region specified in the provider configuration. Allows uploading an SSH public key to be used for server authentication. 
- Usage with server - - resource "stackit_key_pair" "keypair" { - name = "example-key-pair" - public_key = chomp(file("path/to/id_rsa.pub")) - } - - resource "stackit_server" "example-server" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - availability_zone = "eu01-1" - machine_type = "g2i.1" - keypair_name = "example-key-pair" - } ---- - -# stackit_key_pair (Resource) - -Key pair resource schema. Must have a `region` specified in the provider configuration. Allows uploading an SSH public key to be used for server authentication. - - - -### Usage with server -```terraform -resource "stackit_key_pair" "keypair" { - name = "example-key-pair" - public_key = chomp(file("path/to/id_rsa.pub")) -} - -resource "stackit_server" "example-server" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - availability_zone = "eu01-1" - machine_type = "g2i.1" - keypair_name = "example-key-pair" -} - -``` - -## Example Usage - -```terraform -# Create a key pair -resource "stackit_key_pair" "keypair" { - name = "example-key-pair" - public_key = chomp(file("path/to/id_rsa.pub")) -} - -# Only use the import statement, if you want to import an existing key pair -import { - to = stackit_key_pair.import-example - id = var.keypair_name -} -``` - - -## Schema - -### Required - -- `name` (String) The name of the SSH key pair. -- `public_key` (String) A string representation of the public SSH key. E.g., `ssh-rsa ` or `ssh-ed25519 `. - -### Optional - -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container. - -### Read-Only - -- `fingerprint` (String) The fingerprint of the public SSH key. -- `id` (String) Terraform's internal resource ID. 
It takes the value of the key pair "`name`". diff --git a/docs/resources/kms_key.md b/docs/resources/kms_key.md deleted file mode 100644 index baeea34c..00000000 --- a/docs/resources/kms_key.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_kms_key Resource - stackit" -subcategory: "" -description: |- - KMS Key resource schema. Uses the default_region specified in the provider configuration as a fallback in case no region is defined on resource level. - ~> Keys will not be instantly destroyed by terraform during a terraform destroy. They will just be scheduled for deletion via the API and thrown out of the Terraform state afterwards. This way we can ensure no key setups are deleted by accident and it gives you the option to recover your keys within the grace period. ---- - -# stackit_kms_key (Resource) - -KMS Key resource schema. Uses the `default_region` specified in the provider configuration as a fallback in case no `region` is defined on resource level. - - ~> Keys will **not** be instantly destroyed by terraform during a `terraform destroy`. They will just be scheduled for deletion via the API and thrown out of the Terraform state afterwards. **This way we can ensure no key setups are deleted by accident and it gives you the option to recover your keys within the grace period.** - -## Example Usage - -```terraform -resource "stackit_kms_key" "key" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - keyring_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - display_name = "key-01" - protection = "software" - algorithm = "aes_256_gcm" - purpose = "symmetric_encrypt_decrypt" -} -``` - - -## Schema - -### Required - -- `algorithm` (String) The encryption algorithm that the key will use to encrypt data. 
Possible values are: `aes_256_gcm`, `rsa_2048_oaep_sha256`, `rsa_3072_oaep_sha256`, `rsa_4096_oaep_sha256`, `rsa_4096_oaep_sha512`, `hmac_sha256`, `hmac_sha384`, `hmac_sha512`, `ecdsa_p256_sha256`, `ecdsa_p384_sha384`, `ecdsa_p521_sha512`. -- `display_name` (String) The display name to distinguish multiple keys -- `keyring_id` (String) The ID of the associated keyring -- `project_id` (String) STACKIT project ID to which the key is associated. -- `protection` (String) The underlying system that is responsible for protecting the key material. Possible values are: `software`. -- `purpose` (String) The purpose for which the key will be used. Possible values are: `symmetric_encrypt_decrypt`, `asymmetric_encrypt_decrypt`, `message_authentication_code`, `asymmetric_sign_verify`. - -### Optional - -- `access_scope` (String) The access scope of the key. Default is `PUBLIC`. Possible values are: `PUBLIC`, `SNA`. -- `description` (String) A user chosen description to distinguish multiple keys -- `import_only` (Boolean) States whether versions can be created or only imported. -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`keyring_id`,`key_id`". -- `key_id` (String) The ID of the key diff --git a/docs/resources/kms_keyring.md b/docs/resources/kms_keyring.md deleted file mode 100644 index 272e8329..00000000 --- a/docs/resources/kms_keyring.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_kms_keyring Resource - stackit" -subcategory: "" -description: |- - KMS Keyring resource schema. Uses the default_region specified in the provider configuration as a fallback in case no region is defined on resource level. - ~> Keyrings will not be destroyed by terraform during a terraform destroy. 
They will just be thrown out of the Terraform state and not deleted on API side. This way we can ensure no keyring setups are deleted by accident and it gives you the option to recover your keys within the grace period. ---- - -# stackit_kms_keyring (Resource) - -KMS Keyring resource schema. Uses the `default_region` specified in the provider configuration as a fallback in case no `region` is defined on resource level. - - ~> Keyrings will **not** be destroyed by terraform during a `terraform destroy`. They will just be thrown out of the Terraform state and not deleted on API side. **This way we can ensure no keyring setups are deleted by accident and it gives you the option to recover your keys within the grace period.** - -## Example Usage - -```terraform -resource "stackit_kms_keyring" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - display_name = "example-name" - description = "example description" -} -``` - - -## Schema - -### Required - -- `display_name` (String) The display name to distinguish multiple keyrings. -- `project_id` (String) STACKIT project ID to which the keyring is associated. - -### Optional - -- `description` (String) A user chosen description to distinguish multiple keyrings. -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`keyring_id`". -- `keyring_id` (String) An auto generated unique id which identifies the keyring. diff --git a/docs/resources/kms_wrapping_key.md b/docs/resources/kms_wrapping_key.md deleted file mode 100644 index 392c35db..00000000 --- a/docs/resources/kms_wrapping_key.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_kms_wrapping_key Resource - stackit" -subcategory: "" -description: |- - KMS wrapping key resource schema. 
---- - -# stackit_kms_wrapping_key (Resource) - -KMS wrapping key resource schema. - -## Example Usage - -```terraform -resource "stackit_kms_wrapping_key" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - keyring_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - display_name = "example-name" - protection = "software" - algorithm = "rsa_2048_oaep_sha256" - purpose = "wrap_symmetric_key" -} -``` - - -## Schema - -### Required - -- `algorithm` (String) The wrapping algorithm used to wrap the key to import. Possible values are: `rsa_2048_oaep_sha256`, `rsa_3072_oaep_sha256`, `rsa_4096_oaep_sha256`, `rsa_4096_oaep_sha512`, `rsa_2048_oaep_sha256_aes_256_key_wrap`, `rsa_3072_oaep_sha256_aes_256_key_wrap`, `rsa_4096_oaep_sha256_aes_256_key_wrap`, `rsa_4096_oaep_sha512_aes_256_key_wrap`. -- `display_name` (String) The display name to distinguish multiple wrapping keys. -- `keyring_id` (String) The ID of the associated keyring -- `project_id` (String) STACKIT project ID to which the keyring is associated. -- `protection` (String) The underlying system that is responsible for protecting the key material. Possible values are: `software`. -- `purpose` (String) The purpose for which the key will be used. Possible values are: `wrap_symmetric_key`, `wrap_asymmetric_key`. - -### Optional - -- `access_scope` (String) The access scope of the key. Default is `PUBLIC`. Possible values are: `PUBLIC`, `SNA`. -- `description` (String) A user chosen description to distinguish multiple wrapping keys. -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `created_at` (String) The date and time the creation of the wrapping key was triggered. -- `expires_at` (String) The date and time the wrapping key will expire. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`keyring_id`,`wrapping_key_id`". -- `public_key` (String) The public key of the wrapping key. 
-- `wrapping_key_id` (String) The ID of the wrapping key diff --git a/docs/resources/loadbalancer.md b/docs/resources/loadbalancer.md deleted file mode 100644 index 2d527baf..00000000 --- a/docs/resources/loadbalancer.md +++ /dev/null @@ -1,377 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_loadbalancer Resource - stackit" -subcategory: "" -description: |- - Setting up supporting infrastructure - The example below creates the supporting infrastructure using the STACKIT Terraform provider, including the network, network interface, a public IP address and server resources. ---- - -# stackit_loadbalancer (Resource) - -## Setting up supporting infrastructure - - -The example below creates the supporting infrastructure using the STACKIT Terraform provider, including the network, network interface, a public IP address and server resources. - -## Example Usage - -```terraform -# Create a network -resource "stackit_network" "example_network" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-network" - ipv4_nameservers = ["8.8.8.8"] - ipv4_prefix = "192.168.0.0/25" - labels = { - "key" = "value" - } - routed = true -} - -# Create a network interface -resource "stackit_network_interface" "nic" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_id = stackit_network.example_network.network_id -} - -# Create a public IP for the load balancer -resource "stackit_public_ip" "public-ip" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - lifecycle { - ignore_changes = [network_interface_id] - } -} - -# Create a key pair for accessing the server instance -resource "stackit_key_pair" "keypair" { - name = "example-key-pair" - public_key = chomp(file("path/to/id_rsa.pub")) -} - -# Create a server instance -resource "stackit_server" "boot-from-image" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - 
source_id = "59838a89-51b1-4892-b57f-b3caf598ee2f" // Ubuntu 24.04 - } - availability_zone = "xxxx-x" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - network_interfaces = [ - stackit_network_interface.nic.network_interface_id - ] -} - -# Create a load balancer -resource "stackit_loadbalancer" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-load-balancer" - plan_id = "p10" - target_pools = [ - { - name = "example-target-pool" - target_port = 80 - targets = [ - { - display_name = stackit_server.boot-from-image.name - ip = stackit_network_interface.nic.ipv4 - } - ] - active_health_check = { - healthy_threshold = 10 - interval = "3s" - interval_jitter = "3s" - timeout = "3s" - unhealthy_threshold = 10 - } - } - ] - listeners = [ - { - display_name = "example-listener" - port = 80 - protocol = "PROTOCOL_TCP" - target_pool = "example-target-pool" - tcp = { - idle_timeout = "90s" - } - } - ] - networks = [ - { - network_id = stackit_network.example_network.network_id - role = "ROLE_LISTENERS_AND_TARGETS" - } - ] - external_address = stackit_public_ip.public-ip.ip - options = { - private_network_only = false - } -} - -# This example demonstrates an advanced setup where the Load Balancer is in one -# network and the target server is in another. This requires manual -# security group configuration using the `disable_security_group_assignment` -# and `security_group_id` attributes. - -# We create two separate networks: one for the load balancer and one for the target. 
-resource "stackit_network" "lb_network" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "lb-network-example" - ipv4_prefix = "192.168.10.0/25" - ipv4_nameservers = ["8.8.8.8"] -} - -resource "stackit_network" "target_network" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "target-network-example" - ipv4_prefix = "192.168.20.0/25" - ipv4_nameservers = ["8.8.8.8"] -} - -resource "stackit_public_ip" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -resource "stackit_loadbalancer" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-advanced-lb" - external_address = stackit_public_ip.example.ip - - # Key setting for manual mode: disables automatic security group handling. - disable_security_group_assignment = true - - networks = [{ - network_id = stackit_network.lb_network.network_id - role = "ROLE_LISTENERS_AND_TARGETS" - }] - - listeners = [{ - port = 80 - protocol = "PROTOCOL_TCP" - target_pool = "cross-network-pool" - }] - - target_pools = [{ - name = "cross-network-pool" - target_port = 80 - targets = [{ - display_name = stackit_server.example.name - ip = stackit_network_interface.nic.ipv4 - }] - }] -} - -# Create a new security group to be assigned to the target server. -resource "stackit_security_group" "target_sg" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "target-sg-for-lb-access" - description = "Allows ingress traffic from the example load balancer." -} - -# Create a rule to allow traffic FROM the load balancer. -# This rule uses the computed `security_group_id` of the load balancer. -resource "stackit_security_group_rule" "allow_lb_ingress" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - security_group_id = stackit_security_group.target_sg.security_group_id - direction = "ingress" - protocol = { - name = "tcp" - } - - # This is the crucial link: it allows traffic from the LB's security group.
- remote_security_group_id = stackit_loadbalancer.example.security_group_id - - port_range = { - min = 80 - max = 80 - } -} - -resource "stackit_server" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-remote-target" - machine_type = "g2i.2" - availability_zone = "eu01-1" - - boot_volume = { - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - size = 10 - } - - network_interfaces = [ - stackit_network_interface.nic.network_interface_id - ] -} - -resource "stackit_network_interface" "nic" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_id = stackit_network.target_network.network_id - security_group_ids = [stackit_security_group.target_sg.security_group_id] -} -# End of advanced example - -# Only use the import statement, if you want to import an existing loadbalancer -import { - to = stackit_loadbalancer.import-example - id = "${var.project_id},${var.region},${var.loadbalancer_name}" -} -``` - - -## Schema - -### Required - -- `listeners` (Attributes List) List of all listeners which will accept traffic. Limited to 20. (see [below for nested schema](#nestedatt--listeners)) -- `name` (String) Load balancer name. -- `networks` (Attributes List) List of networks that listeners and targets reside in. (see [below for nested schema](#nestedatt--networks)) -- `project_id` (String) STACKIT project ID to which the Load Balancer is associated. -- `target_pools` (Attributes List) List of all target pools which will be used in the Load Balancer. Limited to 20. (see [below for nested schema](#nestedatt--target_pools)) - -### Optional - -- `disable_security_group_assignment` (Boolean) If set to true, this will disable the automatic assignment of a security group to the load balancer's targets. This option is primarily used to allow targets that are not within the load balancer's own network or SNA (STACKIT network area). 
When this is enabled, you are fully responsible for ensuring network connectivity to the targets, including managing all routing and security group rules manually. This setting cannot be changed after the load balancer is created. -- `external_address` (String) External Load Balancer IP address where this Load Balancer is exposed. -- `options` (Attributes) Defines any optional functionality you want to have enabled on your load balancer. (see [below for nested schema](#nestedatt--options)) -- `plan_id` (String) The service plan ID. If not defined, the default service plan is `p10`. Possible values are: `p10`, `p50`, `p250`, `p750`. -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`","region","`name`". -- `private_address` (String) Transient private Load Balancer IP address. It can change any time. -- `security_group_id` (String) The ID of the egress security group assigned to the Load Balancer's internal machines. This ID is essential for allowing traffic from the Load Balancer to targets in different networks or STACKIT network areas (SNA). To enable this, create a security group rule for your target VMs and set the `remote_security_group_id` of that rule to this value. This is typically used when `disable_security_group_assignment` is set to `true`. - - -### Nested Schema for `listeners` - -Required: - -- `port` (Number) Port number where we listen for traffic. -- `protocol` (String) Protocol is the highest network protocol we understand to load balance. Possible values are: `PROTOCOL_UNSPECIFIED`, `PROTOCOL_TCP`, `PROTOCOL_UDP`, `PROTOCOL_TCP_PROXY`, `PROTOCOL_TLS_PASSTHROUGH`. -- `target_pool` (String) Reference target pool by target pool name. 
- -Optional: - -- `display_name` (String) -- `server_name_indicators` (Attributes List) A list of domain names to match in order to pass TLS traffic to the target pool in the current listener (see [below for nested schema](#nestedatt--listeners--server_name_indicators)) -- `tcp` (Attributes) Options that are specific to the TCP protocol. (see [below for nested schema](#nestedatt--listeners--tcp)) -- `udp` (Attributes) Options that are specific to the UDP protocol. (see [below for nested schema](#nestedatt--listeners--udp)) - - -### Nested Schema for `listeners.server_name_indicators` - -Optional: - -- `name` (String) A domain name to match in order to pass TLS traffic to the target pool in the current listener - - - -### Nested Schema for `listeners.tcp` - -Optional: - -- `idle_timeout` (String) Time after which an idle connection is closed. The default value is set to 300 seconds, and the maximum value is 3600 seconds. The format is a duration and the unit must be seconds. Example: 30s - - - -### Nested Schema for `listeners.udp` - -Optional: - -- `idle_timeout` (String) Time after which an idle session is closed. The default value is set to 1 minute, and the maximum value is 2 minutes. The format is a duration and the unit must be seconds. Example: 30s - - - - -### Nested Schema for `networks` - -Required: - -- `network_id` (String) Openstack network ID. -- `role` (String) The role defines how the load balancer is using the network. Possible values are: `ROLE_UNSPECIFIED`, `ROLE_LISTENERS_AND_TARGETS`, `ROLE_LISTENERS`, `ROLE_TARGETS`. - - - -### Nested Schema for `target_pools` - -Required: - -- `name` (String) Target pool name. -- `target_port` (Number) Identical port number where each target listens for traffic. -- `targets` (Attributes List) List of all targets which will be used in the pool. Limited to 1000. 
(see [below for nested schema](#nestedatt--target_pools--targets)) - -Optional: - -- `active_health_check` (Attributes) (see [below for nested schema](#nestedatt--target_pools--active_health_check)) -- `session_persistence` (Attributes) Here you can set up various session persistence options; so far only "`use_source_ip_address`" is supported. (see [below for nested schema](#nestedatt--target_pools--session_persistence)) - - -### Nested Schema for `target_pools.targets` - -Required: - -- `display_name` (String) Target display name -- `ip` (String) Target IP - - - -### Nested Schema for `target_pools.active_health_check` - -Optional: - -- `healthy_threshold` (Number) Healthy threshold of the health checking. -- `interval` (String) Interval duration of health checking in seconds. -- `interval_jitter` (String) Interval duration threshold of the health checking in seconds. -- `timeout` (String) Active health checking timeout duration in seconds. -- `unhealthy_threshold` (Number) Unhealthy threshold of the health checking. - - - -### Nested Schema for `target_pools.session_persistence` - -Optional: - -- `use_source_ip_address` (Boolean) If true, all connections from one source IP address are redirected to the same target. This setting changes the load balancing algorithm to Maglev. - - - - -### Nested Schema for `options` - -Optional: - -- `acl` (Set of String) Load Balancer is accessible only from an IP address in this range. -- `observability` (Attributes) We offer Load Balancer metrics observability via ARGUS or external solutions. Not changeable after creation. (see [below for nested schema](#nestedatt--options--observability)) -- `private_network_only` (Boolean) If true, Load Balancer is accessible only via a private network IP address. - - -### Nested Schema for `options.observability` - -Optional: - -- `logs` (Attributes) Observability logs configuration. Not changeable after creation.
(see [below for nested schema](#nestedatt--options--observability--logs)) -- `metrics` (Attributes) Observability metrics configuration. Not changeable after creation. (see [below for nested schema](#nestedatt--options--observability--metrics)) - - -### Nested Schema for `options.observability.logs` - -Optional: - -- `credentials_ref` (String) Credentials reference for logs. Not changeable after creation. -- `push_url` (String) The push URL for logs. Not changeable after creation. - - - -### Nested Schema for `options.observability.metrics` - -Optional: - -- `credentials_ref` (String) Credentials reference for metrics. Not changeable after creation. -- `push_url` (String) The push URL for metrics. Not changeable after creation. diff --git a/docs/resources/loadbalancer_observability_credential.md b/docs/resources/loadbalancer_observability_credential.md deleted file mode 100644 index 3d00c6c3..00000000 --- a/docs/resources/loadbalancer_observability_credential.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_loadbalancer_observability_credential Resource - stackit" -subcategory: "" -description: |- - Load balancer observability credential resource schema. Must have a region specified in the provider configuration. These contain the username and password for the observability service (e.g. Argus) where the load balancer logs/metrics will be pushed into ---- - -# stackit_loadbalancer_observability_credential (Resource) - -Load balancer observability credential resource schema. Must have a `region` specified in the provider configuration. These contain the username and password for the observability service (e.g.
Argus) where the load balancer logs/metrics will be pushed into - -## Example Usage - -```terraform -resource "stackit_loadbalancer_observability_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - display_name = "example-credentials" - username = "example-user" - password = "example-password" -} - -# Only use the import statement, if you want to import an existing loadbalancer observability credential -import { - to = stackit_loadbalancer_observability_credential.import-example - id = "${var.project_id},${var.region},${var.credentials_ref}" -} -``` - - -## Schema - -### Required - -- `display_name` (String) Observability credential name. -- `password` (String) The password for the observability service (e.g. Argus) where the logs/metrics will be pushed into. -- `project_id` (String) STACKIT project ID to which the load balancer observability credential is associated. -- `username` (String) The username for the observability service (e.g. Argus) where the logs/metrics will be pushed into. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `credentials_ref` (String) The credentials reference is used by the Load Balancer to define which credentials it will use. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`credentials_ref`". diff --git a/docs/resources/logme_credential.md b/docs/resources/logme_credential.md deleted file mode 100644 index 74a598c6..00000000 --- a/docs/resources/logme_credential.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_logme_credential Resource - stackit" -subcategory: "" -description: |- - LogMe credential resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_logme_credential (Resource) - -LogMe credential resource schema.
Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_logme_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing logme credential -import { - to = stackit_logme_credential.import-example - id = "${var.project_id},${var.logme_instance_id},${var.logme_credentials_id}" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the LogMe instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. - -### Read-Only - -- `credential_id` (String) The credential's ID. -- `host` (String) -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`instance_id`,`credential_id`". -- `password` (String, Sensitive) -- `port` (Number) -- `uri` (String, Sensitive) -- `username` (String) diff --git a/docs/resources/logme_instance.md b/docs/resources/logme_instance.md deleted file mode 100644 index 74b6f214..00000000 --- a/docs/resources/logme_instance.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_logme_instance Resource - stackit" -subcategory: "" -description: |- - LogMe instance resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_logme_instance (Resource) - -LogMe instance resource schema. Must have a `region` specified in the provider configuration. 
- -## Example Usage - -```terraform -resource "stackit_logme_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - version = "2" - plan_name = "stackit-logme2-1.2.50-replica" - parameters = { - sgw_acl = "193.148.160.0/19,45.129.40.0/21,45.135.244.0/22" - } -} - -# Only use the import statement, if you want to import an existing logme instance -import { - to = stackit_logme_instance.import-example - id = "${var.project_id},${var.logme_instance_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) Instance name. -- `plan_name` (String) The selected plan name. -- `project_id` (String) STACKIT project ID to which the instance is associated. -- `version` (String) The service version. - -### Optional - -- `parameters` (Attributes) Configuration parameters. Please note that removing a previously configured field from your Terraform configuration won't replace its value in the API. To update a previously configured field, explicitly set a new value for it. (see [below for nested schema](#nestedatt--parameters)) - -### Read-Only - -- `cf_guid` (String) -- `cf_organization_guid` (String) -- `cf_space_guid` (String) -- `dashboard_url` (String) -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`". -- `image_url` (String) -- `instance_id` (String) ID of the LogMe instance. -- `plan_id` (String) The selected plan ID. - - -### Nested Schema for `parameters` - -Optional: - -- `enable_monitoring` (Boolean) Enable monitoring. -- `fluentd_tcp` (Number) -- `fluentd_tls` (Number) -- `fluentd_tls_ciphers` (String) -- `fluentd_tls_max_version` (String) -- `fluentd_tls_min_version` (String) -- `fluentd_tls_version` (String) -- `fluentd_udp` (Number) -- `graphite` (String) If set, monitoring with Graphite will be enabled. Expects the host and port where the Graphite metrics should be sent to (host:port). 
-- `ism_deletion_after` (String) Combination of an integer and a timerange when an index will be considered "old" and can be deleted. Possible values for the timerange are `s`, `m`, `h` and `d`. -- `ism_jitter` (Number) Jitter of the execution time. -- `ism_job_interval` (Number) -- `java_heapspace` (Number) The amount of memory (in MB) allocated as heap by the JVM for OpenSearch. -- `java_maxmetaspace` (Number) The amount of memory (in MB) used by the JVM to store metadata for OpenSearch. -- `max_disk_threshold` (Number) The maximum disk threshold in MB. If the disk usage exceeds this threshold, the instance will be stopped. -- `metrics_frequency` (Number) The frequency in seconds at which metrics are emitted. -- `metrics_prefix` (String) The prefix for the metrics. Could be useful when using Graphite monitoring to prefix the metrics with a certain value, like an API key. -- `monitoring_instance_id` (String) The ID of the STACKIT monitoring instance. -- `opensearch_tls_ciphers` (List of String) -- `opensearch_tls_protocols` (List of String) -- `sgw_acl` (String) Comma separated list of IP networks in CIDR notation which are allowed to access this instance. -- `syslog` (List of String) List of syslog servers to send logs to. diff --git a/docs/resources/mariadb_credential.md b/docs/resources/mariadb_credential.md deleted file mode 100644 index d0ea4d1f..00000000 --- a/docs/resources/mariadb_credential.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_mariadb_credential Resource - stackit" -subcategory: "" -description: |- - MariaDB credential resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_mariadb_credential (Resource) - -MariaDB credential resource schema. Must have a `region` specified in the provider configuration.
- -## Example Usage - -```terraform -resource "stackit_mariadb_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing mariadb credential -import { - to = stackit_mariadb_credential.import-example - id = "${var.project_id},${var.mariadb_instance_id},${var.mariadb_credential_id}" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the MariaDB instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. - -### Read-Only - -- `credential_id` (String) The credential's ID. -- `host` (String) -- `hosts` (List of String) -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`instance_id`,`credential_id`". -- `name` (String) -- `password` (String, Sensitive) -- `port` (Number) -- `uri` (String, Sensitive) -- `username` (String) diff --git a/docs/resources/mariadb_instance.md b/docs/resources/mariadb_instance.md deleted file mode 100644 index 4814286b..00000000 --- a/docs/resources/mariadb_instance.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_mariadb_instance Resource - stackit" -subcategory: "" -description: |- - MariaDB instance resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_mariadb_instance (Resource) - -MariaDB instance resource schema. Must have a `region` specified in the provider configuration. 
- -## Example Usage - -```terraform -resource "stackit_mariadb_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - version = "10.11" - plan_name = "stackit-mariadb-1.2.10-replica" - parameters = { - sgw_acl = "193.148.160.0/19,45.129.40.0/21,45.135.244.0/22" - } -} - -# Only use the import statement, if you want to import an existing mariadb instance -import { - to = stackit_mariadb_instance.import-example - id = "${var.project_id},${var.mariadb_instance_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) Instance name. -- `plan_name` (String) The selected plan name. -- `project_id` (String) STACKIT project ID to which the instance is associated. -- `version` (String) The service version. - -### Optional - -- `parameters` (Attributes) Configuration parameters. Please note that removing a previously configured field from your Terraform configuration won't replace its value in the API. To update a previously configured field, explicitly set a new value for it. (see [below for nested schema](#nestedatt--parameters)) - -### Read-Only - -- `cf_guid` (String) -- `cf_organization_guid` (String) -- `cf_space_guid` (String) -- `dashboard_url` (String) -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`". -- `image_url` (String) -- `instance_id` (String) ID of the MariaDB instance. -- `plan_id` (String) The selected plan ID. - - -### Nested Schema for `parameters` - -Optional: - -- `enable_monitoring` (Boolean) Enable monitoring. -- `graphite` (String) Graphite server URL (host and port). If set, monitoring with Graphite will be enabled. -- `max_disk_threshold` (Number) The maximum disk threshold in MB. If the disk usage exceeds this threshold, the instance will be stopped. -- `metrics_frequency` (Number) The frequency in seconds at which metrics are emitted. -- `metrics_prefix` (String) The prefix for the metrics. 
Could be useful when using Graphite monitoring to prefix the metrics with a certain value, like an API key -- `monitoring_instance_id` (String) The ID of the STACKIT monitoring instance. Monitoring instances with the plan "Observability-Monitoring-Starter" are not supported. -- `sgw_acl` (String) Comma separated list of IP networks in CIDR notation which are allowed to access this instance. -- `syslog` (List of String) List of syslog servers to send logs to. diff --git a/docs/resources/modelserving_token.md b/docs/resources/modelserving_token.md deleted file mode 100644 index 7b7ca9eb..00000000 --- a/docs/resources/modelserving_token.md +++ /dev/null @@ -1,71 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_modelserving_token Resource - stackit" -subcategory: "" -description: |- - AI Model Serving Auth Token Resource schema. - Example Usage - Automatically rotate AI model serving token - - resource "time_rotating" "rotate" { - rotation_days = 80 - } - - resource "stackit_modelserving_token" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "Example token" - - rotate_when_changed = { - rotation = time_rotating.rotate.id - } - - } ---- - -# stackit_modelserving_token (Resource) - -AI Model Serving Auth Token Resource schema. - -## Example Usage - -### Automatically rotate AI model serving token -```terraform -resource "time_rotating" "rotate" { - rotation_days = 80 -} - -resource "stackit_modelserving_token" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "Example token" - - rotate_when_changed = { - rotation = time_rotating.rotate.id - } - -} -``` - - - - -## Schema - -### Required - -- `name` (String) Name of the AI model serving auth token. -- `project_id` (String) STACKIT project ID to which the AI model serving auth token is associated. - -### Optional - -- `description` (String) The description of the AI model serving auth token. 
-- `region` (String) Region to which the AI model serving auth token is associated. If not defined, the provider region is used. -- `rotate_when_changed` (Map of String) A map of arbitrary key/value pairs that will force recreation of the token when they change, enabling token rotation based on external conditions such as a rotating timestamp. Changing this forces a new resource to be created. -- `ttl_duration` (String) The TTL duration of the AI model serving auth token. E.g. 5h30m40s, 5h, 5h30m, 30m, 30s - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`token_id`". -- `state` (String) State of the AI model serving auth token. -- `token` (String, Sensitive) Content of the AI model serving auth token. -- `token_id` (String) The AI model serving auth token ID. -- `valid_until` (String) The time until which the AI model serving auth token is valid. diff --git a/docs/resources/mongodbflex_instance.md b/docs/resources/mongodbflex_instance.md deleted file mode 100644 index 5d65acfe..00000000 --- a/docs/resources/mongodbflex_instance.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_mongodbflex_instance Resource - stackit" -subcategory: "" -description: |- - MongoDB Flex instance resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_mongodbflex_instance (Resource) - -MongoDB Flex instance resource schema. Must have a `region` specified in the provider configuration.
- -## Example Usage - -```terraform -resource "stackit_mongodbflex_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - acl = ["XXX.XXX.XXX.X/XX", "XX.XXX.XX.X/XX"] - flavor = { - cpu = 1 - ram = 4 - } - replicas = 1 - storage = { - class = "class" - size = 10 - } - version = "7.0" - options = { - type = "Single" - snapshot_retention_days = 3 - point_in_time_window_hours = 30 - } - backup_schedule = "0 0 * * *" -} - -# Only use the import statement, if you want to import an existing mongodbflex instance -import { - to = stackit_mongodbflex_instance.import-example - id = "${var.project_id},${var.region},${var.instance_id}" -} -``` - - -## Schema - -### Required - -- `acl` (List of String) The Access Control List (ACL) for the MongoDB Flex instance. -- `backup_schedule` (String) The backup schedule. Should follow the cron scheduling system format (e.g. "0 0 * * *"). -- `flavor` (Attributes) (see [below for nested schema](#nestedatt--flavor)) -- `name` (String) Instance name. -- `options` (Attributes) (see [below for nested schema](#nestedatt--options)) -- `project_id` (String) STACKIT project ID to which the instance is associated. -- `replicas` (Number) -- `storage` (Attributes) (see [below for nested schema](#nestedatt--storage)) -- `version` (String) - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`instance_id`". -- `instance_id` (String) ID of the MongoDB Flex instance. - - -### Nested Schema for `flavor` - -Required: - -- `cpu` (Number) -- `ram` (Number) - -Read-Only: - -- `description` (String) -- `id` (String) - - - -### Nested Schema for `options` - -Required: - -- `point_in_time_window_hours` (Number) The number of hours back in time the point-in-time recovery feature will be able to recover. 
-- `type` (String) Type of the MongoDB Flex instance. Possible values are: `Replica`, `Sharded`, `Single`. - -Optional: - -- `daily_snapshot_retention_days` (Number) The number of days that daily backups will be retained. -- `monthly_snapshot_retention_months` (Number) The number of months that monthly backups will be retained. -- `snapshot_retention_days` (Number) The number of days that continuous backups (controlled via the `backup_schedule`) will be retained. -- `weekly_snapshot_retention_weeks` (Number) The number of weeks that weekly backups will be retained. - - - -### Nested Schema for `storage` - -Required: - -- `class` (String) -- `size` (Number) diff --git a/docs/resources/mongodbflex_user.md b/docs/resources/mongodbflex_user.md deleted file mode 100644 index 0e113302..00000000 --- a/docs/resources/mongodbflex_user.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_mongodbflex_user Resource - stackit" -subcategory: "" -description: |- - MongoDB Flex user resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_mongodbflex_user (Resource) - -MongoDB Flex user resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_mongodbflex_user" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - username = "username" - roles = ["role"] - database = "database" -} - -# Only use the import statement, if you want to import an existing mongodbflex user -import { - to = stackit_mongodbflex_user.import-example - id = "${var.project_id},${var.region},${var.instance_id},${var.user_id}" -} -``` - - -## Schema - -### Required - -- `database` (String) -- `instance_id` (String) ID of the MongoDB Flex instance. -- `project_id` (String) STACKIT project ID to which the instance is associated.
-- `roles` (Set of String) Database access levels for the user. Some of the possible values are: [`read`, `readWrite`, `readWriteAnyDatabase`] - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. -- `username` (String) - -### Read-Only - -- `host` (String) -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`instance_id`,`user_id`". -- `password` (String, Sensitive) -- `port` (Number) -- `uri` (String, Sensitive) -- `user_id` (String) User ID. diff --git a/docs/resources/network.md b/docs/resources/network.md deleted file mode 100644 index 6fe44131..00000000 --- a/docs/resources/network.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_network Resource - stackit" -subcategory: "" -description: |- - Network resource schema. Must have a region specified in the provider configuration. - ~> The behavior when ipv4_nameservers is not configured will change from January 2026. When ipv4_nameservers is not set, it will be set to the network area's default_nameservers. - To prevent any nameserver configuration, the ipv4_nameservers attribute should be explicitly set to an empty list []. - In cases where ipv4_nameservers are defined within the resource, the existing behavior will remain unchanged. ---- - -# stackit_network (Resource) - -Network resource schema. Must have a `region` specified in the provider configuration. -~> The behavior when `ipv4_nameservers` is not configured will change from January 2026. When `ipv4_nameservers` is not set, it will be set to the network area's `default_nameservers`. -To prevent any nameserver configuration, the `ipv4_nameservers` attribute should be explicitly set to an empty list `[]`. -In cases where `ipv4_nameservers` are defined within the resource, the existing behavior will remain unchanged.
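The nameserver note above can be illustrated with a minimal, hypothetical configuration sketch (placeholder project ID; resource name chosen for illustration):

```terraform
# Sketch: explicitly opt out of any nameserver configuration.
# When ipv4_nameservers is omitted, it will (from January 2026) default to
# the network area's default_nameservers; an explicit empty list prevents that.
resource "stackit_network" "example_no_nameservers" {
  project_id       = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  name             = "example-no-nameservers"
  ipv4_nameservers = []
}
```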
- -## Example Usage - -```terraform -resource "stackit_network" "example_with_name" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-with-name" -} - -resource "stackit_network" "example_routed_network" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-routed-network" - labels = { - "key" = "value" - } - routed = true -} - -resource "stackit_network" "example_non_routed_network" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-non-routed-network" - ipv4_nameservers = ["1.2.3.4", "5.6.7.8"] - ipv4_gateway = "10.1.2.3" - ipv4_prefix = "10.1.2.0/24" - labels = { - "key" = "value" - } - routed = false -} - -# Only use the import statement, if you want to import an existing network -# Note: There will be a conflict which needs to be resolved manually. -# These attributes cannot be configured together: [ipv4_prefix,ipv4_prefix_length,ipv4_gateway] -import { - to = stackit_network.import-example - id = "${var.project_id},${var.region},${var.network_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) The name of the network. -- `project_id` (String) STACKIT project ID to which the network is associated. - -### Optional - -- `ipv4_gateway` (String) The IPv4 gateway of a network. If not specified, the first IP of the network will be assigned as the gateway. -- `ipv4_nameservers` (List of String) The IPv4 nameservers of the network. -- `ipv4_prefix` (String) The IPv4 prefix of the network (CIDR). -- `ipv4_prefix_length` (Number) The IPv4 prefix length of the network. -- `ipv6_gateway` (String) The IPv6 gateway of a network. If not specified, the first IP of the network will be assigned as the gateway. -- `ipv6_nameservers` (List of String) The IPv6 nameservers of the network. -- `ipv6_prefix` (String) The IPv6 prefix of the network (CIDR). -- `ipv6_prefix_length` (Number) The IPv6 prefix length of the network. 
-- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `nameservers` (List of String, Deprecated) The nameservers of the network. This field is deprecated and will be removed in January 2026, use `ipv4_nameservers` to configure the nameservers for IPv4. -- `no_ipv4_gateway` (Boolean) If set to `true`, the network doesn't have a gateway. -- `no_ipv6_gateway` (Boolean) If set to `true`, the network doesn't have a gateway. -- `region` (String) The resource region. If not defined, the provider region is used. -- `routed` (Boolean) If set to `true`, the network is routed and therefore accessible from other networks. -- `routing_table_id` (String) The ID of the routing table associated with the network. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`network_id`". -- `ipv4_prefixes` (List of String) The IPv4 prefixes of the network. -- `ipv6_prefixes` (List of String) The IPv6 prefixes of the network. -- `network_id` (String) The network ID. -- `prefixes` (List of String, Deprecated) The prefixes of the network. This field is deprecated and will be removed in January 2026, use `ipv4_prefixes` to read the prefixes of the IPv4 networks. -- `public_ip` (String) The public IP of the network. diff --git a/docs/resources/network_area.md b/docs/resources/network_area.md deleted file mode 100644 index 909784c3..00000000 --- a/docs/resources/network_area.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_network_area Resource - stackit" -subcategory: "" -description: |- - Network area resource schema. ---- - -# stackit_network_area (Resource) - -Network area resource schema. 
- -## Example Usage - -```terraform -resource "stackit_network_area" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-network-area" - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing network area -import { - to = stackit_network_area.import-example - id = "${var.organization_id},${var.network_area_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) The name of the network area. -- `organization_id` (String) STACKIT organization ID to which the network area is associated. - -### Optional - -- `default_nameservers` (List of String, Deprecated) List of DNS Servers/Nameservers for configuration of network area for region `eu01`. -- `default_prefix_length` (Number, Deprecated) The default prefix length for networks in the network area for region `eu01`. -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `max_prefix_length` (Number, Deprecated) The maximal prefix length for networks in the network area for region `eu01`. -- `min_prefix_length` (Number, Deprecated) The minimal prefix length for networks in the network area for region `eu01`. -- `network_ranges` (Attributes List, Deprecated) List of Network ranges for configuration of network area for region `eu01`. (see [below for nested schema](#nestedatt--network_ranges)) -- `transfer_network` (String, Deprecated) Classless Inter-Domain Routing (CIDR) for configuration of network area for region `eu01`. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`organization_id`,`network_area_id`". -- `network_area_id` (String) The network area ID. -- `project_count` (Number) The amount of projects currently referencing this area. - - -### Nested Schema for `network_ranges` - -Required: - -- `prefix` (String, Deprecated) Classless Inter-Domain Routing (CIDR). 
- -Read-Only: - -- `network_range_id` (String, Deprecated) diff --git a/docs/resources/network_area_region.md b/docs/resources/network_area_region.md deleted file mode 100644 index 050fdb35..00000000 --- a/docs/resources/network_area_region.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_network_area_region Resource - stackit" -subcategory: "" -description: |- - Network area region resource schema. ---- - -# stackit_network_area_region (Resource) - -Network area region resource schema. - -## Example Usage - -```terraform -resource "stackit_network_area_region" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - ipv4 = { - transfer_network = "10.1.2.0/24" - network_ranges = [ - { - prefix = "10.0.0.0/16" - } - ] - } -} - -# Only use the import statement, if you want to import an existing network area region -import { - to = stackit_network_area_region.import-example - id = "${var.organization_id},${var.network_area_id},${var.region}" -} -``` - - -## Schema - -### Required - -- `ipv4` (Attributes) The regional IPv4 config of a network area. (see [below for nested schema](#nestedatt--ipv4)) -- `network_area_id` (String) The network area ID. -- `organization_id` (String) STACKIT organization ID to which the network area is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`organization_id`,`network_area_id`,`region`". - - -### Nested Schema for `ipv4` - -Required: - -- `network_ranges` (Attributes List) List of Network ranges. (see [below for nested schema](#nestedatt--ipv4--network_ranges)) -- `transfer_network` (String) IPv4 Classless Inter-Domain Routing (CIDR). - -Optional: - -- `default_nameservers` (List of String) List of DNS Servers/Nameservers. 
-- `default_prefix_length` (Number) The default prefix length for networks in the network area. -- `max_prefix_length` (Number) The maximal prefix length for networks in the network area. -- `min_prefix_length` (Number) The minimal prefix length for networks in the network area. - - -### Nested Schema for `ipv4.network_ranges` - -Required: - -- `prefix` (String) Classless Inter-Domain Routing (CIDR). - -Read-Only: - -- `network_range_id` (String) diff --git a/docs/resources/network_area_route.md b/docs/resources/network_area_route.md deleted file mode 100644 index 5b9056d3..00000000 --- a/docs/resources/network_area_route.md +++ /dev/null @@ -1,114 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_network_area_route Resource - stackit" -subcategory: "" -description: |- - Network area route resource schema. Must have a `region` specified in the provider configuration. ---- - -# stackit_network_area_route (Resource) - -Network area route resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_network_area_route" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - destination = { - type = "cidrv4" - value = "192.168.0.0/24" - } - next_hop = { - type = "ipv4" - value = "192.168.0.0" - } - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing network area route -import { - to = stackit_network_area_route.import-example - id = "${var.organization_id},${var.network_area_id},${var.region},${var.network_area_route_id}" -} -``` - -## Migration of IaaS resources from versions <= v0.74.0 - -The release of the STACKIT IaaS API v2 provides a lot of new features, but also includes some breaking changes -(when coming from v1 of the STACKIT IaaS API) which must be reflected on the Terraform side.
The -`stackit_network_area_route` resource underwent some changes. See the example below for how to migrate your resources. - -### Breaking change: Network area route resource (stackit_network_area_route) - -**Configuration for <= v0.74.0** - -```terraform -resource "stackit_network_area_route" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - prefix = "192.168.0.0/24" # prefix field got removed for provider versions > v0.74.0, use the new destination field instead - next_hop = "192.168.0.0" # schema of the next_hop field changed, see below -} -``` - -**Configuration for > v0.74.0** - -```terraform -resource "stackit_network_area_route" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - destination = { # the new 'destination' field replaces the old 'prefix' field - type = "cidrv4" - value = "192.168.0.0/24" # migration: put the value of the old 'prefix' field here - } - next_hop = { - type = "ipv4" - value = "192.168.0.0" # migration: put the value of the old 'next_hop' field here - } -} -``` - - -## Schema - -### Required - -- `destination` (Attributes) Destination of the route. (see [below for nested schema](#nestedatt--destination)) -- `network_area_id` (String) The network area ID to which the network area route is associated. -- `next_hop` (Attributes) Next hop destination. (see [below for nested schema](#nestedatt--next_hop)) -- `organization_id` (String) STACKIT organization ID to which the network area is associated. - -### Optional - -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`organization_id`,`network_area_id`,`region`,`network_area_route_id`".
-- `network_area_route_id` (String) The network area route ID. - - -### Nested Schema for `destination` - -Required: - -- `type` (String) CIDR type. Possible values are: `cidrv4`, `cidrv6`. Only `cidrv4` is supported currently. -- `value` (String) A CIDR string. - - - -### Nested Schema for `next_hop` - -Required: - -- `type` (String) Type of the next hop. Possible values are: `blackhole`, `internet`, `ipv4`, `ipv6`. Only `ipv4` is supported currently. - -Optional: - -- `value` (String) Either IPv4 or IPv6 (not set for blackhole and internet). Only IPv4 is supported currently. - diff --git a/docs/resources/network_interface.md b/docs/resources/network_interface.md deleted file mode 100644 index 6c7156a5..00000000 --- a/docs/resources/network_interface.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_network_interface Resource - stackit" -subcategory: "" -description: |- - Network interface resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_network_interface (Resource) - -Network interface resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_network_interface" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - allowed_addresses = ["192.168.0.0/24"] - security_group_ids = ["xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"] -} - -# Only use the import statement, if you want to import an existing network interface -import { - to = stackit_network_interface.import-example - id = "${var.project_id},${var.region},${var.network_id},${var.network_interface_id}" -} -``` - - -## Schema - -### Required - -- `network_id` (String) The network ID to which the network interface is associated. -- `project_id` (String) STACKIT project ID to which the network is associated.
- -### Optional - -- `allowed_addresses` (List of String) The list of CIDR (Classless Inter-Domain Routing) notations. -- `ipv4` (String) The IPv4 address. -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a network interface. -- `name` (String) The name of the network interface. -- `region` (String) The resource region. If not defined, the provider region is used. -- `security` (Boolean) The Network Interface Security. If set to false, then no security groups will apply to this network interface. -- `security_group_ids` (List of String) The list of security group UUIDs. If security is set to false, setting this field will lead to an error. - -### Read-Only - -- `device` (String) The device UUID of the network interface. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`network_id`,`network_interface_id`". -- `mac` (String) The MAC address of the network interface. -- `network_interface_id` (String) The network interface ID. -- `type` (String) Type of the network interface. Possible values are: `server`, `metadata`, `gateway`. diff --git a/docs/resources/objectstorage_bucket.md b/docs/resources/objectstorage_bucket.md deleted file mode 100644 index 7abc9dbf..00000000 --- a/docs/resources/objectstorage_bucket.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_objectstorage_bucket Resource - stackit" -subcategory: "" -description: |- - ObjectStorage bucket resource schema. Must have a region specified in the provider configuration. If you are creating credentialsgroup and bucket resources simultaneously, please include the depends_on field so that they are created sequentially. This prevents errors from concurrent calls to the service enablement that is done in the background. ---- - -# stackit_objectstorage_bucket (Resource) - -ObjectStorage bucket resource schema.
Must have a `region` specified in the provider configuration. If you are creating `credentialsgroup` and `bucket` resources simultaneously, please include the `depends_on` field so that they are created sequentially. This prevents errors from concurrent calls to the service enablement that is done in the background. - -## Example Usage - -```terraform -resource "stackit_objectstorage_bucket" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-bucket" -} - -# Only use the import statement, if you want to import an existing objectstorage bucket -import { - to = stackit_objectstorage_bucket.import-example - id = "${var.project_id},${var.region},${var.bucket_name}" -} -``` - - -## Schema - -### Required - -- `name` (String) The bucket name. It must be DNS conform. -- `project_id` (String) STACKIT Project ID to which the bucket is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`region`,`name`". -- `url_path_style` (String) -- `url_virtual_hosted_style` (String) diff --git a/docs/resources/objectstorage_credential.md b/docs/resources/objectstorage_credential.md deleted file mode 100644 index 037c4a78..00000000 --- a/docs/resources/objectstorage_credential.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_objectstorage_credential Resource - stackit" -subcategory: "" -description: |- - ObjectStorage credential resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_objectstorage_credential (Resource) - -ObjectStorage credential resource schema. Must have a `region` specified in the provider configuration. 
- -## Example Usage - -```terraform -resource "stackit_objectstorage_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credentials_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - expiration_timestamp = "2027-01-02T03:04:05Z" -} - -# Only use the import statement, if you want to import an existing objectstorage credential -import { - to = stackit_objectstorage_credential.import-example - id = "${var.project_id},${var.region},${var.bucket_credentials_group_id},${var.bucket_credential_id}" -} -``` - - -## Schema - -### Required - -- `credentials_group_id` (String) The credential group ID. -- `project_id` (String) STACKIT Project ID to which the credential group is associated. - -### Optional - -- `expiration_timestamp` (String) Expiration timestamp, in RFC3339 format without fractional seconds. Example: "2025-01-01T00:00:00Z". If not set, the credential never expires. -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `access_key` (String) -- `credential_id` (String) The credential ID. -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`region`,`credentials_group_id`,`credential_id`". -- `name` (String) -- `secret_access_key` (String, Sensitive) diff --git a/docs/resources/objectstorage_credentials_group.md b/docs/resources/objectstorage_credentials_group.md deleted file mode 100644 index 9115a0c7..00000000 --- a/docs/resources/objectstorage_credentials_group.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_objectstorage_credentials_group Resource - stackit" -subcategory: "" -description: |- - ObjectStorage credentials group resource schema. Must have a region specified in the provider configuration. If you are creating credentialsgroup and bucket resources simultaneously, please include the depends_on field so that they are created sequentially.
This prevents errors from concurrent calls to the service enablement that is done in the background. ---- - -# stackit_objectstorage_credentials_group (Resource) - -ObjectStorage credentials group resource schema. Must have a `region` specified in the provider configuration. If you are creating `credentialsgroup` and `bucket` resources simultaneously, please include the `depends_on` field so that they are created sequentially. This prevents errors from concurrent calls to the service enablement that is done in the background. - -## Example Usage - -```terraform -resource "stackit_objectstorage_credentials_group" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-credentials-group" -} - -# Only use the import statement, if you want to import an existing objectstorage credential group -import { - to = stackit_objectstorage_credentials_group.import-example - id = "${var.project_id},${var.region},${var.bucket_credentials_group_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) The credentials group's display name. -- `project_id` (String) Project ID to which the credentials group is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `credentials_group_id` (String) The credentials group ID -- `id` (String) Terraform's internal data source identifier. It is structured as "`project_id`,`region`,`credentials_group_id`". -- `urn` (String) Credentials group uniform resource name (URN) diff --git a/docs/resources/observability_alertgroup.md b/docs/resources/observability_alertgroup.md deleted file mode 100644 index 0502ea64..00000000 --- a/docs/resources/observability_alertgroup.md +++ /dev/null @@ -1,86 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_observability_alertgroup Resource - stackit" -subcategory: "" -description: |- - Observability alert group resource schema. 
Used to create alerts based on metrics (Thanos). Must have a region specified in the provider configuration. ---- - -# stackit_observability_alertgroup (Resource) - -Observability alert group resource schema. Used to create alerts based on metrics (Thanos). Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_observability_alertgroup" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-alert-group" - interval = "60s" - rules = [ - { - alert = "example-alert-name" - expression = "kube_node_status_condition{condition=\"Ready\", status=\"false\"} > 0" - for = "60s" - labels = { - severity = "critical" - }, - annotations = { - summary : "example summary" - description : "example description" - } - }, - { - alert = "example-alert-name-2" - expression = "kube_node_status_condition{condition=\"Ready\", status=\"false\"} > 0" - for = "1m" - labels = { - severity = "critical" - }, - annotations = { - summary : "example summary" - description : "example description" - } - }, - ] -} - -# Only use the import statement, if you want to import an existing observability alertgroup -import { - to = stackit_observability_alertgroup.import-example - id = "${var.project_id},${var.observability_instance_id},${var.observability_alertgroup_name}" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) Observability instance ID to which the alert group is associated. -- `name` (String) The name of the alert group. Is the identifier and must be unique in the group. -- `project_id` (String) STACKIT project ID to which the alert group is associated. -- `rules` (Attributes List) Rules for the alert group (see [below for nested schema](#nestedatt--rules)) - -### Optional - -- `interval` (String) Specifies the frequency at which rules within the group are evaluated. 
The interval must be at least 60 seconds and defaults to 60 seconds if not set. Supported formats include hours, minutes, and seconds, either singly or in combination. Examples of valid formats are: '5h30m40s', '5h', '5h30m', '60m', and '60s'. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`,`name`". - - -### Nested Schema for `rules` - -Required: - -- `alert` (String) The name of the alert rule. Is the identifier and must be unique in the group. -- `expression` (String) The PromQL expression to evaluate. Every evaluation cycle this is evaluated at the current time, and all resultant time series become pending/firing alerts. - -Optional: - -- `annotations` (Map of String) A map of key:value. Annotations to add or overwrite for each alert -- `for` (String) Alerts are considered firing once they have been returned for this long. Alerts which have not yet fired for long enough are considered pending. Default is 0s -- `labels` (Map of String) A map of key:value. Labels to add or overwrite for each alert diff --git a/docs/resources/observability_credential.md b/docs/resources/observability_credential.md deleted file mode 100644 index 773fad95..00000000 --- a/docs/resources/observability_credential.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_observability_credential Resource - stackit" -subcategory: "" -description: |- - Observability credential resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_observability_credential (Resource) - -Observability credential resource schema. Must have a `region` specified in the provider configuration. 
- -## Example Usage - -```terraform -resource "stackit_observability_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - description = "Description of the credential." -} -``` - - -## Schema - -### Required - -- `instance_id` (String) The Observability Instance ID the credential belongs to. -- `project_id` (String) STACKIT project ID to which the credential is associated. - -### Optional - -- `description` (String) A description of the credential. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`,`username`". -- `password` (String, Sensitive) Credential password -- `username` (String) Credential username diff --git a/docs/resources/observability_instance.md b/docs/resources/observability_instance.md deleted file mode 100644 index fe8a5dbb..00000000 --- a/docs/resources/observability_instance.md +++ /dev/null @@ -1,187 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_observability_instance Resource - stackit" -subcategory: "" -description: |- - Observability instance resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_observability_instance (Resource) - -Observability instance resource schema. Must have a `region` specified in the provider configuration. 
- -## Example Usage - -```terraform -resource "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - plan_name = "Observability-Starter-EU01" - acl = ["1.1.1.1/32", "2.2.2.2/32"] - logs_retention_days = 30 - traces_retention_days = 30 - metrics_retention_days = 90 - metrics_retention_days_5m_downsampling = 90 - metrics_retention_days_1h_downsampling = 90 -} - -# Only use the import statement, if you want to import an existing observability instance -import { - to = stackit_observability_instance.import-example - id = "${var.project_id},${var.observability_instance_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) The name of the Observability instance. -- `plan_name` (String) Specifies the Observability plan. E.g. `Observability-Monitoring-Medium-EU01`. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Optional - -- `acl` (Set of String) The access control list for this instance. Each entry is an IP address range that is permitted to access, in CIDR notation. -- `alert_config` (Attributes) Alert configuration for the instance. (see [below for nested schema](#nestedatt--alert_config)) -- `logs_retention_days` (Number) Specifies for how many days the logs are kept. Default is set to `7`. -- `metrics_retention_days` (Number) Specifies for how many days the raw metrics are kept. Default is set to `90`. -- `metrics_retention_days_1h_downsampling` (Number) Specifies for how many days the 1h downsampled metrics are kept. Must be less than the value of the 5m downsampling retention. Default is set to `90`. -- `metrics_retention_days_5m_downsampling` (Number) Specifies for how many days the 5m downsampled metrics are kept. Must be less than the value of the general retention. Default is set to `90`. -- `parameters` (Map of String) Additional parameters. -- `traces_retention_days` (Number) Specifies for how many days the traces are kept.
Default is set to `7`. - -### Read-Only - -- `alerting_url` (String) Specifies Alerting URL. -- `dashboard_url` (String) Specifies Observability instance dashboard URL. -- `grafana_initial_admin_password` (String, Sensitive) Specifies an initial Grafana admin password. -- `grafana_initial_admin_user` (String) Specifies an initial Grafana admin username. -- `grafana_public_read_access` (Boolean) If true, anyone can access Grafana dashboards without logging in. -- `grafana_url` (String) Specifies Grafana URL. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`". -- `instance_id` (String) The Observability instance ID. -- `is_updatable` (Boolean) Specifies if the instance can be updated. -- `jaeger_traces_url` (String) -- `jaeger_ui_url` (String) -- `logs_push_url` (String) Specifies URL for pushing logs. -- `logs_url` (String) Specifies Logs URL. -- `metrics_push_url` (String) Specifies URL for pushing metrics. -- `metrics_url` (String) Specifies metrics URL. -- `otlp_traces_url` (String) -- `plan_id` (String) The Observability plan ID. -- `targets_url` (String) Specifies Targets URL. -- `zipkin_spans_url` (String) - - -### Nested Schema for `alert_config` - -Required: - -- `receivers` (Attributes List) List of alert receivers. (see [below for nested schema](#nestedatt--alert_config--receivers)) -- `route` (Attributes) Route configuration for the alerts. (see [below for nested schema](#nestedatt--alert_config--route)) - -Optional: - -- `global` (Attributes) Global configuration for the alerts. If nothing passed the default argus config will be used. It is only possible to update the entire global part, not individual attributes. (see [below for nested schema](#nestedatt--alert_config--global)) - - -### Nested Schema for `alert_config.receivers` - -Required: - -- `name` (String) Name of the receiver. - -Optional: - -- `email_configs` (Attributes List) List of email configurations. 
(see [below for nested schema](#nestedatt--alert_config--receivers--email_configs)) -- `opsgenie_configs` (Attributes List) List of OpsGenie configurations. (see [below for nested schema](#nestedatt--alert_config--receivers--opsgenie_configs)) -- `webhooks_configs` (Attributes List) List of Webhooks configurations. (see [below for nested schema](#nestedatt--alert_config--receivers--webhooks_configs)) - - -### Nested Schema for `alert_config.receivers.email_configs` - -Optional: - -- `auth_identity` (String) SMTP authentication information. Must be a valid email address -- `auth_password` (String, Sensitive) SMTP authentication password. -- `auth_username` (String) SMTP authentication username. -- `from` (String) The sender email address. Must be a valid email address -- `send_resolved` (Boolean) Whether to notify about resolved alerts. -- `smart_host` (String) The SMTP host through which emails are sent. -- `to` (String) The email address to send notifications to. Must be a valid email address - - - -### Nested Schema for `alert_config.receivers.opsgenie_configs` - -Optional: - -- `api_key` (String) The API key for OpsGenie. -- `api_url` (String) The host to send OpsGenie API requests to. Must be a valid URL -- `priority` (String) Priority of the alert. Possible values are: `P1`, `P2`, `P3`, `P4`, `P5`. -- `send_resolved` (Boolean) Whether to notify about resolved alerts. -- `tags` (String) Comma separated list of tags attached to the notifications. - - - -### Nested Schema for `alert_config.receivers.webhooks_configs` - -Optional: - -- `google_chat` (Boolean) Google Chat webhooks require special handling, set this to true if the webhook is for Google Chat. -- `ms_teams` (Boolean) Microsoft Teams webhooks require special handling, set this to true if the webhook is for Microsoft Teams. -- `send_resolved` (Boolean) Whether to notify about resolved alerts. -- `url` (String, Sensitive) The endpoint to send HTTP POST requests to. 
Must be a valid URL - - - - -### Nested Schema for `alert_config.route` - -Required: - -- `receiver` (String) The name of the receiver to route the alerts to. - -Optional: - -- `group_by` (List of String) The labels by which incoming alerts are grouped together. For example, multiple alerts coming in for cluster=A and alertname=LatencyHigh would be batched into a single group. To aggregate by all possible labels use the special value '...' as the sole label name, for example: group_by: ['...']. This effectively disables aggregation entirely, passing through all alerts as-is. This is unlikely to be what you want, unless you have a very low alert volume or your upstream notification system performs its own grouping. -- `group_interval` (String) How long to wait before sending a notification about new alerts that are added to a group of alerts for which an initial notification has already been sent. (Usually ~5m or more.) -- `group_wait` (String) How long to initially wait to send a notification for a group of alerts. Allows to wait for an inhibiting alert to arrive or collect more initial alerts for the same group. (Usually ~0s to few minutes.) -- `repeat_interval` (String) How long to wait before sending a notification again if it has already been sent successfully for an alert. (Usually ~3h or more). -- `routes` (Attributes List) List of child routes. (see [below for nested schema](#nestedatt--alert_config--route--routes)) - - -### Nested Schema for `alert_config.route.routes` - -Required: - -- `receiver` (String) The name of the receiver to route the alerts to. - -Optional: - -- `continue` (Boolean) Whether an alert should continue matching subsequent sibling nodes. -- `group_by` (List of String) The labels by which incoming alerts are grouped together. For example, multiple alerts coming in for cluster=A and alertname=LatencyHigh would be batched into a single group. To aggregate by all possible labels use the special value '...' 
as the sole label name, for example: group_by: ['...']. This effectively disables aggregation entirely, passing through all alerts as-is. This is unlikely to be what you want, unless you have a very low alert volume or your upstream notification system performs its own grouping. -- `group_interval` (String) How long to wait before sending a notification about new alerts that are added to a group of alerts for which an initial notification has already been sent. (Usually ~5m or more.) -- `group_wait` (String) How long to initially wait to send a notification for a group of alerts. Allows to wait for an inhibiting alert to arrive or collect more initial alerts for the same group. (Usually ~0s to few minutes.) -- `match` (Map of String, Deprecated) A set of equality matchers an alert has to fulfill to match the node. This field is deprecated and will be removed after 10th March 2026, use `matchers` in the `routes` instead -- `match_regex` (Map of String, Deprecated) A set of regex-matchers an alert has to fulfill to match the node. This field is deprecated and will be removed after 10th March 2026, use `matchers` in the `routes` instead -- `matchers` (List of String) A list of matchers that an alert has to fulfill to match the node. A matcher is a string with a syntax inspired by PromQL and OpenMetrics. -- `repeat_interval` (String) How long to wait before sending a notification again if it has already been sent successfully for an alert. (Usually ~3h or more). - - - - -### Nested Schema for `alert_config.global` - -Optional: - -- `opsgenie_api_key` (String, Sensitive) The API key for OpsGenie. -- `opsgenie_api_url` (String) The host to send OpsGenie API requests to. Must be a valid URL -- `resolve_timeout` (String) The default value used by alertmanager if the alert does not include EndsAt. After this time passes, it can declare the alert as resolved if it has not been updated. This has no impact on alerts from Prometheus, as they always include EndsAt. 
-- `smtp_auth_identity` (String) SMTP authentication information. Must be a valid email address -- `smtp_auth_password` (String, Sensitive) SMTP Auth using LOGIN and PLAIN. -- `smtp_auth_username` (String) SMTP Auth using CRAM-MD5, LOGIN and PLAIN. If empty, Alertmanager doesn't authenticate to the SMTP server. -- `smtp_from` (String) The default SMTP From header field. Must be a valid email address -- `smtp_smart_host` (String) The default SMTP smarthost used for sending emails, including port number in format `host:port` (eg. `smtp.example.com:587`). Port number usually is 25, or 587 for SMTP over TLS (sometimes referred to as STARTTLS). diff --git a/docs/resources/observability_logalertgroup.md b/docs/resources/observability_logalertgroup.md deleted file mode 100644 index 5b38cf66..00000000 --- a/docs/resources/observability_logalertgroup.md +++ /dev/null @@ -1,86 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_observability_logalertgroup Resource - stackit" -subcategory: "" -description: |- - Observability log alert group resource schema. Used to create alerts based on logs (Loki). Must have a region specified in the provider configuration. ---- - -# stackit_observability_logalertgroup (Resource) - -Observability log alert group resource schema. Used to create alerts based on logs (Loki). Must have a `region` specified in the provider configuration. 
- -## Example Usage - -```terraform -resource "stackit_observability_logalertgroup" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-log-alert-group" - interval = "60m" - rules = [ - { - alert = "example-log-alert-name" - expression = "sum(rate({namespace=\"example\", pod=\"logger\"} |= \"Simulated error message\" [1m])) > 0" - for = "60s" - labels = { - severity = "critical" - }, - annotations = { - summary : "example summary" - description : "example description" - } - }, - { - alert = "example-log-alert-name-2" - expression = "sum(rate({namespace=\"example\", pod=\"logger\"} |= \"Another error message\" [1m])) > 0" - for = "60s" - labels = { - severity = "critical" - }, - annotations = { - summary : "example summary" - description : "example description" - } - }, - ] -} - -# Only use the import statement, if you want to import an existing observability logalertgroup -import { - to = stackit_observability_logalertgroup.import-example - id = "${var.project_id},${var.observability_instance_id},${var.observability_logalertgroup_name}" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) Observability instance ID to which the log alert group is associated. -- `name` (String) The name of the log alert group. Is the identifier and must be unique in the group. -- `project_id` (String) STACKIT project ID to which the log alert group is associated. -- `rules` (Attributes List) Rules for the log alert group (see [below for nested schema](#nestedatt--rules)) - -### Optional - -- `interval` (String) Specifies the frequency at which rules within the group are evaluated. The interval must be at least 60 seconds and defaults to 60 seconds if not set. Supported formats include hours, minutes, and seconds, either singly or in combination. Examples of valid formats are: '5h30m40s', '5h', '5h30m', '60m', and '60s'. 
- -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`,`name`". - - -### Nested Schema for `rules` - -Required: - -- `alert` (String) The name of the alert rule. Is the identifier and must be unique in the group. -- `expression` (String) The LogQL expression to evaluate. Every evaluation cycle this is evaluated at the current time, and all resultant time series become pending/firing alerts. - -Optional: - -- `annotations` (Map of String) A map of key:value. Annotations to add or overwrite for each alert -- `for` (String) Alerts are considered firing once they have been returned for this long. Alerts which have not yet fired for long enough are considered pending. Default is 0s -- `labels` (Map of String) A map of key:value. Labels to add or overwrite for each alert diff --git a/docs/resources/observability_scrapeconfig.md b/docs/resources/observability_scrapeconfig.md deleted file mode 100644 index 9840a2e4..00000000 --- a/docs/resources/observability_scrapeconfig.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_observability_scrapeconfig Resource - stackit" -subcategory: "" -description: |- - Observability scrape config resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_observability_scrapeconfig (Resource) - -Observability scrape config resource schema. Must have a `region` specified in the provider configuration. 
- -## Example Usage - -```terraform -resource "stackit_observability_scrapeconfig" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-job" - metrics_path = "/my-metrics" - saml2 = { - enable_url_parameters = true - } - targets = [ - { - urls = ["url1", "urls2"] - labels = { - "url1" = "dev" - } - } - ] -} - -# Only use the import statement, if you want to import an existing observability scrapeconfig -import { - to = stackit_observability_scrapeconfig.import-example - id = "${var.project_id},${var.observability_instance_id},${var.observability_scrapeconfig_name}" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) Observability instance ID to which the scraping job is associated. -- `metrics_path` (String) Specifies the job scraping url path. E.g. `/metrics`. -- `name` (String) Specifies the name of the scraping job. -- `project_id` (String) STACKIT project ID to which the scraping job is associated. -- `targets` (Attributes List) The targets list (specified by the static config). (see [below for nested schema](#nestedatt--targets)) - -### Optional - -- `basic_auth` (Attributes) A basic authentication block. (see [below for nested schema](#nestedatt--basic_auth)) -- `saml2` (Attributes) A SAML2 configuration block. (see [below for nested schema](#nestedatt--saml2)) -- `sample_limit` (Number) Specifies the scrape sample limit. Upper limit depends on the service plan. Defaults to `5000`. -- `scheme` (String) Specifies the http scheme. Defaults to `https`. -- `scrape_interval` (String) Specifies the scrape interval as duration string. Defaults to `5m`. -- `scrape_timeout` (String) Specifies the scrape timeout as duration string. Defaults to `2m`. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`,`name`". 
- - -### Nested Schema for `targets` - -Required: - -- `urls` (List of String) Specifies target URLs. - -Optional: - -- `labels` (Map of String) Specifies labels. - - - -### Nested Schema for `basic_auth` - -Required: - -- `password` (String, Sensitive) Specifies basic auth password. -- `username` (String) Specifies basic auth username. - - - -### Nested Schema for `saml2` - -Optional: - -- `enable_url_parameters` (Boolean) Specifies if URL parameters are enabled. Defaults to `true` diff --git a/docs/resources/opensearch_credential.md b/docs/resources/opensearch_credential.md deleted file mode 100644 index 113adf91..00000000 --- a/docs/resources/opensearch_credential.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_opensearch_credential Resource - stackit" -subcategory: "" -description: |- - OpenSearch credential resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_opensearch_credential (Resource) - -OpenSearch credential resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_opensearch_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing opensearch credential -import { - to = stackit_opensearch_credential.import-example - id = "${var.project_id},${var.instance_id},${var.credential_id}" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the OpenSearch instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. - -### Read-Only - -- `credential_id` (String) The credential's ID. -- `host` (String) -- `hosts` (List of String) -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`instance_id`,`credential_id`". 
-- `password` (String, Sensitive) -- `port` (Number) -- `scheme` (String) -- `uri` (String, Sensitive) -- `username` (String) diff --git a/docs/resources/opensearch_instance.md b/docs/resources/opensearch_instance.md deleted file mode 100644 index 5ca0a8b8..00000000 --- a/docs/resources/opensearch_instance.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_opensearch_instance Resource - stackit" -subcategory: "" -description: |- - OpenSearch instance resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_opensearch_instance (Resource) - -OpenSearch instance resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_opensearch_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - version = "2" - plan_name = "stackit-opensearch-1.2.10-replica" - parameters = { - sgw_acl = "193.148.160.0/19,45.129.40.0/21,45.135.244.0/22" - } -} - -# Only use the import statement, if you want to import an existing opensearch instance -import { - to = stackit_opensearch_instance.import-example - id = "${var.project_id},${var.instance_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) Instance name. -- `plan_name` (String) The selected plan name. -- `project_id` (String) STACKIT project ID to which the instance is associated. -- `version` (String) The service version. - -### Optional - -- `parameters` (Attributes) Configuration parameters. Please note that removing a previously configured field from your Terraform configuration won't replace its value in the API. To update a previously configured field, explicitly set a new value for it. 
(see [below for nested schema](#nestedatt--parameters)) - -### Read-Only - -- `cf_guid` (String) -- `cf_organization_guid` (String) -- `cf_space_guid` (String) -- `dashboard_url` (String) -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`". -- `image_url` (String) -- `instance_id` (String) ID of the OpenSearch instance. -- `plan_id` (String) The selected plan ID. - - -### Nested Schema for `parameters` - -Optional: - -- `enable_monitoring` (Boolean) Enable monitoring. -- `graphite` (String) If set, monitoring with Graphite will be enabled. Expects the host and port where the Graphite metrics should be sent to (host:port). -- `java_garbage_collector` (String) The garbage collector to use for OpenSearch. -- `java_heapspace` (Number) The amount of memory (in MB) allocated as heap by the JVM for OpenSearch. -- `java_maxmetaspace` (Number) The amount of memory (in MB) used by the JVM to store metadata for OpenSearch. -- `max_disk_threshold` (Number) The maximum disk threshold in MB. If the disk usage exceeds this threshold, the instance will be stopped. -- `metrics_frequency` (Number) The frequency in seconds at which metrics are emitted (in seconds). -- `metrics_prefix` (String) The prefix for the metrics. Could be useful when using Graphite monitoring to prefix the metrics with a certain value, like an API key. -- `monitoring_instance_id` (String) The ID of the STACKIT monitoring instance. -- `plugins` (List of String) List of plugins to install. Must be a supported plugin name. The plugins `repository-s3` and `repository-azure` are enabled by default and cannot be disabled. -- `sgw_acl` (String) Comma separated list of IP networks in CIDR notation which are allowed to access this instance. -- `syslog` (List of String) List of syslog servers to send logs to. -- `tls_ciphers` (List of String) List of TLS ciphers to use. -- `tls_protocols` (List of String) The TLS protocol to use. 
diff --git a/docs/resources/postgresflex_database.md b/docs/resources/postgresflex_database.md deleted file mode 100644 index b9363141..00000000 --- a/docs/resources/postgresflex_database.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_postgresflex_database Resource - stackit" -subcategory: "" -description: |- - Postgres Flex database resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_postgresflex_database (Resource) - -Postgres Flex database resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_postgresflex_database" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "mydb" - owner = "myusername" -} - -# Only use the import statement, if you want to import an existing postgresflex database -import { - to = stackit_postgresflex_database.import-example - id = "${var.project_id},${var.region},${var.postgres_instance_id},${var.postgres_database_id}" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the Postgres Flex instance. -- `name` (String) Database name. -- `owner` (String) Username of the database owner. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `database_id` (String) Database ID. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`instance_id`,`database_id`". 
diff --git a/docs/resources/postgresflex_user.md b/docs/resources/postgresflex_user.md deleted file mode 100644 index 763e1e19..00000000 --- a/docs/resources/postgresflex_user.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_postgresflex_user Resource - stackit" -subcategory: "" -description: |- - Postgres Flex user resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_postgresflex_user (Resource) - -Postgres Flex user resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_postgresflex_user" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - username = "username" - roles = ["role"] -} - -# Only use the import statement, if you want to import an existing postgresflex user -import { - to = stackit_postgresflex_user.import-example - id = "${var.project_id},${var.region},${var.postgres_instance_id},${var.user_id}" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the PostgresFlex instance. -- `project_id` (String) STACKIT project ID to which the instance is associated. -- `roles` (Set of String) Database access levels for the user. Possible values are: `login`, `createdb`. -- `username` (String) - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `host` (String) -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`instance_id`,`user_id`". -- `password` (String, Sensitive) -- `port` (Number) -- `uri` (String, Sensitive) -- `user_id` (String) User ID. 
diff --git a/docs/resources/postgresflex_instance.md b/docs/resources/postgresflexalpha_instance.md similarity index 69% rename from docs/resources/postgresflex_instance.md rename to docs/resources/postgresflexalpha_instance.md index 46dfdbc3..64878367 100644 --- a/docs/resources/postgresflex_instance.md +++ b/docs/resources/postgresflexalpha_instance.md @@ -1,19 +1,21 @@ --- # generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_postgresflex_instance Resource - stackit" +page_title: "stackitprivatepreview_postgresflexalpha_instance Resource - stackitprivatepreview" subcategory: "" description: |- Postgres Flex instance resource schema. Must have a region specified in the provider configuration. --- -# stackit_postgresflex_instance (Resource) +# stackitprivatepreview_postgresflexalpha_instance (Resource) Postgres Flex instance resource schema. Must have a `region` specified in the provider configuration. ## Example Usage ```terraform -resource "stackit_postgresflex_instance" "example" { +# Copyright (c) STACKIT + +resource "stackitprivatepreview_postgresflexalpha_instance" "example" { project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" name = "example-instance" acl = ["XXX.XXX.XXX.X/XX", "XX.XXX.XX.X/XX"] @@ -32,7 +34,7 @@ resource "stackit_postgresflex_instance" "example" { # Only use the import statement, if you want to import an existing postgresflex instance import { - to = stackit_postgresflex_instance.import-example + to = stackitprivatepreview_postgresflexalpha_instance.import-example id = "${var.project_id},${var.region},${var.postgres_instance_id}" } ``` @@ -44,8 +46,10 @@ import { - `acl` (List of String) The Access Control List (ACL) for the PostgresFlex instance. - `backup_schedule` (String) +- `encryption` (Attributes) The encryption block. (see [below for nested schema](#nestedatt--encryption)) - `flavor` (Attributes) (see [below for nested schema](#nestedatt--flavor)) - `name` (String) Instance name. 
+- `network` (Attributes) (see [below for nested schema](#nestedatt--network)) - `project_id` (String) STACKIT project ID to which the instance is associated. - `replicas` (Number) - `storage` (Attributes) (see [below for nested schema](#nestedatt--storage)) @@ -60,6 +64,17 @@ import { - `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`instance_id`". - `instance_id` (String) ID of the PostgresFlex instance. + +### Nested Schema for `encryption` + +Required: + +- `key_id` (String) Key ID of the encryption key. +- `key_ring_id` (String) +- `key_version` (String) +- `service_account` (String) + + ### Nested Schema for `flavor` @@ -74,6 +89,14 @@ Read-Only: - `id` (String) + +### Nested Schema for `network` + +Required: + +- `access_scope` (String) + + ### Nested Schema for `storage` diff --git a/docs/resources/public_ip.md b/docs/resources/public_ip.md deleted file mode 100644 index f95b9314..00000000 --- a/docs/resources/public_ip.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_public_ip Resource - stackit" -subcategory: "" -description: |- - Public IP resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_public_ip (Resource) - -Public IP resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_public_ip" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_interface_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing public ip -import { - to = stackit_public_ip.import-example - id = "${var.project_id},${var.region},${var.public_ip_id}" -} -``` - - -## Schema - -### Required - -- `project_id` (String) STACKIT project ID to which the public IP is associated. 
- -### Optional - -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `network_interface_id` (String) Associates the public IP with a network interface or a virtual IP (ID). If you are using this resource with a Kubernetes Load Balancer or any other resource which associates a network interface implicitly, use the lifecycle `ignore_changes` property in this field to prevent unintentional removal of the network interface due to drift in the Terraform state -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`public_ip_id`". -- `ip` (String) The IP address. -- `public_ip_id` (String) The public IP ID. diff --git a/docs/resources/public_ip_associate.md b/docs/resources/public_ip_associate.md deleted file mode 100644 index fd76fc36..00000000 --- a/docs/resources/public_ip_associate.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_public_ip_associate Resource - stackit" -subcategory: "" -description: |- - Associates an existing public IP to a network interface. This is useful for situations where you have a pre-allocated public IP or unable to use the stackit_public_ip resource to create a new public IP. Must have a region specified in the provider configuration. - !> The stackit_public_ip_associate resource should not be used together with the stackit_public_ip resource for the same public IP or for the same network interface. - Using both resources together for the same public IP or network interface WILL lead to conflicts, as they both have control of the public IP and network interface association. ---- - -# stackit_public_ip_associate (Resource) - -Associates an existing public IP to a network interface. 
This is useful for situations where you have a pre-allocated public IP or unable to use the `stackit_public_ip` resource to create a new public IP. Must have a `region` specified in the provider configuration. - -!> The `stackit_public_ip_associate` resource should not be used together with the `stackit_public_ip` resource for the same public IP or for the same network interface. -Using both resources together for the same public IP or network interface WILL lead to conflicts, as they both have control of the public IP and network interface association. - -## Example Usage - -```terraform -resource "stackit_public_ip_associate" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - public_ip_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_interface_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing public ip associate -import { - to = stackit_public_ip_associate.import-example - id = "${var.project_id},${var.region},${var.public_ip_id},${var.network_interface_id}" -} -``` - - -## Schema - -### Required - -- `network_interface_id` (String) The ID of the network interface (or virtual IP) to which the public IP should be attached to. -- `project_id` (String) STACKIT project ID to which the public IP is associated. -- `public_ip_id` (String) The public IP ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`public_ip_id`,`network_interface_id`". -- `ip` (String) The IP address. 
diff --git a/docs/resources/rabbitmq_credential.md b/docs/resources/rabbitmq_credential.md deleted file mode 100644 index de60bfb8..00000000 --- a/docs/resources/rabbitmq_credential.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_rabbitmq_credential Resource - stackit" -subcategory: "" -description: |- - RabbitMQ credential resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_rabbitmq_credential (Resource) - -RabbitMQ credential resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_rabbitmq_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing rabbitmq credential -import { - to = stackit_rabbitmq_credential.import-example - id = "${var.project_id},${var.rabbitmq_instance_id},${var.rabbitmq_credential_id}" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the RabbitMQ instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. - -### Read-Only - -- `credential_id` (String) The credential's ID. -- `host` (String) -- `hosts` (List of String) -- `http_api_uri` (String) -- `http_api_uris` (List of String) -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`instance_id`,`credential_id`". 
-- `management` (String) -- `password` (String, Sensitive) -- `port` (Number) -- `uri` (String, Sensitive) -- `uris` (List of String) -- `username` (String) diff --git a/docs/resources/rabbitmq_instance.md b/docs/resources/rabbitmq_instance.md deleted file mode 100644 index 40cf2ab0..00000000 --- a/docs/resources/rabbitmq_instance.md +++ /dev/null @@ -1,78 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_rabbitmq_instance Resource - stackit" -subcategory: "" -description: |- - RabbitMQ instance resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_rabbitmq_instance (Resource) - -RabbitMQ instance resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_rabbitmq_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - version = "3.13" - plan_name = "stackit-rabbitmq-1.2.10-replica" - parameters = { - sgw_acl = "193.148.160.0/19,45.129.40.0/21,45.135.244.0/22" - consumer_timeout = 18000000 - enable_monitoring = false - plugins = ["rabbitmq_consistent_hash_exchange", "rabbitmq_federation", "rabbitmq_tracing"] - } -} - -# Only use the import statement, if you want to import an existing rabbitmq instance -import { - to = stackit_rabbitmq_instance.import-example - id = "${var.project_id},${var.rabbitmq_instance_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) Instance name. -- `plan_name` (String) The selected plan name. -- `project_id` (String) STACKIT project ID to which the instance is associated. -- `version` (String) The service version. - -### Optional - -- `parameters` (Attributes) Configuration parameters. Please note that removing a previously configured field from your Terraform configuration won't replace its value in the API. To update a previously configured field, explicitly set a new value for it. 
(see [below for nested schema](#nestedatt--parameters)) - -### Read-Only - -- `cf_guid` (String) -- `cf_organization_guid` (String) -- `cf_space_guid` (String) -- `dashboard_url` (String) -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`". -- `image_url` (String) -- `instance_id` (String) ID of the RabbitMQ instance. -- `plan_id` (String) The selected plan ID. - - -### Nested Schema for `parameters` - -Optional: - -- `consumer_timeout` (Number) The timeout in milliseconds for the consumer. -- `enable_monitoring` (Boolean) Enable monitoring. -- `graphite` (String) Graphite server URL (host and port). If set, monitoring with Graphite will be enabled. -- `max_disk_threshold` (Number) The maximum disk threshold in MB. If the disk usage exceeds this threshold, the instance will be stopped. -- `metrics_frequency` (Number) The frequency in seconds at which metrics are emitted. -- `metrics_prefix` (String) The prefix for the metrics. Could be useful when using Graphite monitoring to prefix the metrics with a certain value, like an API key -- `monitoring_instance_id` (String) The ID of the STACKIT monitoring instance. -- `plugins` (List of String) List of plugins to install. Must be a supported plugin name. -- `roles` (List of String) List of roles to assign to the instance. -- `sgw_acl` (String) Comma separated list of IP networks in CIDR notation which are allowed to access this instance. -- `syslog` (List of String) List of syslog servers to send logs to. -- `tls_ciphers` (List of String) List of TLS ciphers to use. -- `tls_protocols` (String) TLS protocol to use. 
diff --git a/docs/resources/redis_credential.md b/docs/resources/redis_credential.md deleted file mode 100644 index 2e2674a2..00000000 --- a/docs/resources/redis_credential.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_redis_credential Resource - stackit" -subcategory: "" -description: |- - Redis credential resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_redis_credential (Resource) - -Redis credential resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_redis_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing redis credential -import { - to = stackit_redis_credential.import-example - id = "${var.project_id},${var.redis_instance_id},${var.redis_credential_id}" -} -``` - - -## Schema - -### Required - -- `instance_id` (String) ID of the Redis instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. - -### Read-Only - -- `credential_id` (String) The credential's ID. -- `host` (String) -- `hosts` (List of String) -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`instance_id`,`credential_id`". -- `load_balanced_host` (String) -- `password` (String, Sensitive) -- `port` (Number) -- `uri` (String, Sensitive) Connection URI. -- `username` (String) diff --git a/docs/resources/redis_instance.md b/docs/resources/redis_instance.md deleted file mode 100644 index 40f63b81..00000000 --- a/docs/resources/redis_instance.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_redis_instance Resource - stackit" -subcategory: "" -description: |- - Redis instance resource schema. 
Must have a region specified in the provider configuration. ---- - -# stackit_redis_instance (Resource) - -Redis instance resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_redis_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - version = "7" - plan_name = "stackit-redis-1.2.10-replica" - parameters = { - sgw_acl = "193.148.160.0/19,45.129.40.0/21,45.135.244.0/22" - enable_monitoring = false - down_after_milliseconds = 30000 - syslog = ["logs4.your-syslog-endpoint.com:54321"] - } -} - -# Only use the import statement, if you want to import an existing redis instance -import { - to = stackit_redis_instance.import-example - id = "${var.project_id},${var.redis_instance_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) Instance name. -- `plan_name` (String) The selected plan name. -- `project_id` (String) STACKIT project ID to which the instance is associated. -- `version` (String) The service version. - -### Optional - -- `parameters` (Attributes) Configuration parameters. Please note that removing a previously configured field from your Terraform configuration won't replace its value in the API. To update a previously configured field, explicitly set a new value for it. (see [below for nested schema](#nestedatt--parameters)) - -### Read-Only - -- `cf_guid` (String) -- `cf_organization_guid` (String) -- `cf_space_guid` (String) -- `dashboard_url` (String) -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`". -- `image_url` (String) -- `instance_id` (String) ID of the Redis instance. -- `plan_id` (String) The selected plan ID. - - -### Nested Schema for `parameters` - -Optional: - -- `down_after_milliseconds` (Number) The number of milliseconds after which the instance is considered down. -- `enable_monitoring` (Boolean) Enable monitoring. 
-- `failover_timeout` (Number) The failover timeout in milliseconds. -- `graphite` (String) Graphite server URL (host and port). If set, monitoring with Graphite will be enabled. -- `lazyfree_lazy_eviction` (String) The lazy eviction enablement (yes or no). -- `lazyfree_lazy_expire` (String) The lazy expire enablement (yes or no). -- `lua_time_limit` (Number) The Lua time limit. -- `max_disk_threshold` (Number) The maximum disk threshold in MB. If the disk usage exceeds this threshold, the instance will be stopped. -- `maxclients` (Number) The maximum number of clients. -- `maxmemory_policy` (String) The policy to handle the maximum memory (volatile-lru, noeviction, etc). -- `maxmemory_samples` (Number) The maximum memory samples. -- `metrics_frequency` (Number) The frequency in seconds at which metrics are emitted. -- `metrics_prefix` (String) The prefix for the metrics. Could be useful when using Graphite monitoring to prefix the metrics with a certain value, like an API key -- `min_replicas_max_lag` (Number) The minimum replicas maximum lag. -- `monitoring_instance_id` (String) The ID of the STACKIT monitoring instance. -- `notify_keyspace_events` (String) The notify keyspace events. -- `sgw_acl` (String) Comma separated list of IP networks in CIDR notation which are allowed to access this instance. -- `snapshot` (String) The snapshot configuration. -- `syslog` (List of String) List of syslog servers to send logs to. -- `tls_ciphers` (List of String) List of TLS ciphers to use. -- `tls_ciphersuites` (String) TLS cipher suites to use. -- `tls_protocols` (String) TLS protocol to use. 
diff --git a/docs/resources/resourcemanager_folder.md b/docs/resources/resourcemanager_folder.md deleted file mode 100644 index 2a99f8a0..00000000 --- a/docs/resources/resourcemanager_folder.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_resourcemanager_folder Resource - stackit" -subcategory: "" -description: |- - Resource Manager folder resource schema. ---- - -# stackit_resourcemanager_folder (Resource) - -Resource Manager folder resource schema. - -## Example Usage - -```terraform -resource "stackit_resourcemanager_folder" "example" { - name = "example-folder" - owner_email = "foo.bar@stackit.cloud" - parent_container_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Note: -# You can add projects under folders. -# However, when deleting a project, be aware: -# - Projects may remain "invisible" for up to 7 days after deletion -# - During this time, deleting the parent folder may fail because the project is still technically linked -resource "stackit_resourcemanager_project" "example_project" { - name = "example-project" - owner_email = "foo.bar@stackit.cloud" - parent_container_id = stackit_resourcemanager_folder.example.container_id -} - -# Only use the import statement, if you want to import an existing resourcemanager folder -# Note: There will be a conflict which needs to be resolved manually. -# Must set a configuration value for the owner_email attribute as the provider has marked it as required. -import { - to = stackit_resourcemanager_folder.import-example - id = var.container_id -} -``` - - -## Schema - -### Required - -- `name` (String) The name of the folder. -- `owner_email` (String) Email address of the owner of the folder. This value is only considered during creation. Changing it afterwards will have no effect. -- `parent_container_id` (String) Parent resource identifier. Both container ID (user-friendly) and UUID are supported. 
- -### Optional - -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container. A label key must match the regex [A-ZÄÜÖa-zäüöß0-9_-]{1,64}. A label value must match the regex ^$|[A-ZÄÜÖa-zäüöß0-9_-]{1,64}. - -### Read-Only - -- `container_id` (String) Folder container ID. Globally unique, user-friendly identifier. -- `creation_time` (String) Date-time at which the folder was created. -- `folder_id` (String) Folder UUID identifier. Globally unique folder identifier -- `id` (String) Terraform's internal resource ID. It is structured as "`container_id`". -- `update_time` (String) Date-time at which the folder was last modified. diff --git a/docs/resources/resourcemanager_project.md b/docs/resources/resourcemanager_project.md deleted file mode 100644 index 382cc2f7..00000000 --- a/docs/resources/resourcemanager_project.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_resourcemanager_project Resource - stackit" -subcategory: "" -description: |- - Resource Manager project resource schema. - -> In case you're getting started with an empty STACKIT organization and want to use this resource to create projects in it, check out this guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/stackit_org_service_account for how to create a service account which you can use for authentication in the STACKIT Terraform provider. ---- - -# stackit_resourcemanager_project (Resource) - -Resource Manager project resource schema. - --> In case you're getting started with an empty STACKIT organization and want to use this resource to create projects in it, check out [this guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/stackit_org_service_account) for how to create a service account which you can use for authentication in the STACKIT Terraform provider. 
- -## Example Usage - -```terraform -resource "stackit_resourcemanager_project" "example" { - parent_container_id = "example-parent-container-abc123" - name = "example-container" - labels = { - "Label 1" = "foo" - // "networkArea" = stackit_network_area.foo.network_area_id - } - owner_email = "john.doe@stackit.cloud" -} - -# Only use the import statement, if you want to import an existing resourcemanager project -# Note: There will be a conflict which needs to be resolved manually. -# Must set a configuration value for the owner_email attribute as the provider has marked it as required. -import { - to = stackit_resourcemanager_project.import-example - id = var.container_id -} -``` - - -## Schema - -### Required - -- `name` (String) Project name. -- `owner_email` (String) Email address of the owner of the project. This value is only considered during creation. Changing it afterwards will have no effect. -- `parent_container_id` (String) Parent resource identifier. Both container ID (user-friendly) and UUID are supported - -### Optional - -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container. A label key must match the regex [A-ZÄÜÖa-zäüöß0-9_-]{1,64}. A label value must match the regex ^$|[A-ZÄÜÖa-zäüöß0-9_-]{1,64}. -To create a project within a STACKIT Network Area, setting the label `networkArea=` is required. This can not be changed after project creation. - -### Read-Only - -- `container_id` (String) Project container ID. Globally unique, user-friendly identifier. -- `creation_time` (String) Date-time at which the project was created. -- `id` (String) Terraform's internal resource ID. It is structured as "`container_id`". -- `project_id` (String) Project UUID identifier. This is the ID that can be used in most of the other resources to identify the project. -- `update_time` (String) Date-time at which the project was last modified. 
diff --git a/docs/resources/routing_table.md b/docs/resources/routing_table.md deleted file mode 100644 index ff6e00e3..00000000 --- a/docs/resources/routing_table.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_routing_table Resource - stackit" -subcategory: "" -description: |- - Routing table resource schema. Must have a region specified in the provider configuration. - ~> This resource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. ---- - -# stackit_routing_table (Resource) - -Routing table resource schema. Must have a `region` specified in the provider configuration. - -~> This resource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. - -## Example Usage - -```terraform -resource "stackit_routing_table" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing routing table -import { - to = stackit_routing_table.import-example - id = "${var.organization_id},${var.region},${var.network_area_id},${var.routing_table_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) The name of the routing table. -- `network_area_id` (String) The network area ID to which the routing table is associated. -- `organization_id` (String) STACKIT organization ID to which the routing table is associated. - -### Optional - -- `description` (String) Description of the routing table. -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `region` (String) The resource region. If not defined, the provider region is used. 
-- `system_routes` (Boolean) This controls whether the routes for project-to-project communication are created automatically or not. - -### Read-Only - -- `created_at` (String) Date-time when the routing table was created -- `id` (String) Terraform's internal resource ID. It is structured as "`organization_id`,`region`,`network_area_id`,`routing_table_id`". -- `routing_table_id` (String) The routing table's ID. -- `updated_at` (String) Date-time when the routing table was updated diff --git a/docs/resources/routing_table_route.md b/docs/resources/routing_table_route.md deleted file mode 100644 index 4e9f11fb..00000000 --- a/docs/resources/routing_table_route.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_routing_table_route Resource - stackit" -subcategory: "" -description: |- - Routing table route resource schema. Must have a region specified in the provider configuration. - ~> This resource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. ---- - -# stackit_routing_table_route (Resource) - -Routing table route resource schema. Must have a `region` specified in the provider configuration. - -~> This resource is part of the routing-tables experiment and is likely going to undergo significant changes or be removed in the future. Use it at your own discretion. 
- -## Example Usage - -```terraform -resource "stackit_routing_table_route" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - routing_table_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - destination = { - type = "cidrv4" - value = "192.168.178.0/24" - } - next_hop = { - type = "ipv4" - value = "192.168.178.1" - } - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing routing table route -import { - to = stackit_routing_table_route.import-example - id = "${var.organization_id},${var.region},${var.network_area_id},${var.routing_table_id},${var.routing_table_route_id}" -} -``` - - -## Schema - -### Required - -- `destination` (Attributes) Destination of the route. (see [below for nested schema](#nestedatt--destination)) -- `network_area_id` (String) The network area ID to which the routing table is associated. -- `next_hop` (Attributes) Next hop destination. (see [below for nested schema](#nestedatt--next_hop)) -- `organization_id` (String) STACKIT organization ID to which the routing table is associated. -- `routing_table_id` (String) The routing table's ID. - -### Optional - -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `created_at` (String) Date-time when the route was created. -- `id` (String) Terraform's internal resource ID. It is structured as "`organization_id`,`region`,`network_area_id`,`routing_table_id`,`route_id`". -- `route_id` (String) The ID of the route. -- `updated_at` (String) Date-time when the route was updated. - - -### Nested Schema for `destination` - -Required: - -- `type` (String) CIDR type. Possible values are: `cidrv4`, `cidrv6`. Only `cidrv4` is supported during the experimental stage. -- `value` (String) A CIDR string. 
- - - -### Nested Schema for `next_hop` - -Required: - -- `type` (String) Type of the next hop. Possible values are: `blackhole`, `internet`, `ipv4`, `ipv6`. - -Optional: - -- `value` (String) Either IPv4 or IPv6 (not set for blackhole and internet). Only IPv4 is supported during the experimental stage. diff --git a/docs/resources/scf_organization.md b/docs/resources/scf_organization.md deleted file mode 100644 index 28c2d3a1..00000000 --- a/docs/resources/scf_organization.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_scf_organization Resource - stackit" -subcategory: "" -description: |- - STACKIT Cloud Foundry organization resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_scf_organization (Resource) - -STACKIT Cloud Foundry organization resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_scf_organization" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" -} - -resource "stackit_scf_organization" "example-full" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - platform_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - quota_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - suspended = false -} - -# Only use the import statement, if you want to import an existing scf organization -import { - to = stackit_scf_organization.import-example - id = "${var.project_id},${var.region},${var.org_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) The name of the organization -- `project_id` (String) The ID of the project associated with the organization - -### Optional - -- `platform_id` (String) The ID of the platform associated with the organization -- `quota_id` (String) The ID of the quota associated with the organization -- `region` (String) The resource region. 
If not defined, the provider region is used -- `suspended` (Boolean) A boolean indicating whether the organization is suspended - -### Read-Only - -- `created_at` (String) The time when the organization was created -- `id` (String) Terraform's internal resource ID, structured as "`project_id`,`region`,`org_id`". -- `org_id` (String) The ID of the Cloud Foundry Organization -- `status` (String) The status of the organization (e.g., deleting, delete_failed) -- `updated_at` (String) The time when the organization was last updated diff --git a/docs/resources/scf_organization_manager.md b/docs/resources/scf_organization_manager.md deleted file mode 100644 index 3ed0b008..00000000 --- a/docs/resources/scf_organization_manager.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_scf_organization_manager Resource - stackit" -subcategory: "" -description: |- - STACKIT Cloud Foundry organization manager resource schema. ---- - -# stackit_scf_organization_manager (Resource) - -STACKIT Cloud Foundry organization manager resource schema. - -## Example Usage - -```terraform -resource "stackit_scf_organization_manager" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - org_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing scf org user -# The password field is still null after import and must be entered manually in the state. -import { - to = stackit_scf_organization_manager.import-example - id = "${var.project_id},${var.region},${var.org_id},${var.user_id}" -} -``` - - -## Schema - -### Required - -- `org_id` (String) The ID of the Cloud Foundry Organization -- `project_id` (String) The ID of the project associated with the organization of the organization manager - -### Optional - -- `region` (String) The region where the organization of the organization manager is located. 
If not defined, the provider region is used - -### Read-Only - -- `created_at` (String) The time when the organization manager was created -- `id` (String) Terraform's internal resource ID, structured as "`project_id`,`region`,`org_id`,`user_id`". -- `password` (String, Sensitive) An auto-generated password -- `platform_id` (String) The ID of the platform associated with the organization of the organization manager -- `updated_at` (String) The time when the organization manager was last updated -- `user_id` (String) The ID of the organization manager user -- `username` (String) An auto-generated organization manager user name diff --git a/docs/resources/secretsmanager_instance.md b/docs/resources/secretsmanager_instance.md deleted file mode 100644 index 8848b37d..00000000 --- a/docs/resources/secretsmanager_instance.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_secretsmanager_instance Resource - stackit" -subcategory: "" -description: |- - Secrets Manager instance resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_secretsmanager_instance (Resource) - -Secrets Manager instance resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_secretsmanager_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - acls = ["XXX.XXX.XXX.X/XX", "XX.XXX.XX.X/XX"] -} - -# Only use the import statement, if you want to import an existing secretsmanager instance -import { - to = stackit_secretsmanager_instance.import-example - id = "${var.project_id},${var.secret_instance_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) Instance name. -- `project_id` (String) STACKIT project ID to which the instance is associated. - -### Optional - -- `acls` (Set of String) The access control list for this instance. 
Each entry is an IP or IP range that is permitted to access, in CIDR notation - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`instance_id`". -- `instance_id` (String) ID of the Secrets Manager instance. diff --git a/docs/resources/secretsmanager_user.md b/docs/resources/secretsmanager_user.md deleted file mode 100644 index 6f592222..00000000 --- a/docs/resources/secretsmanager_user.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_secretsmanager_user Resource - stackit" -subcategory: "" -description: |- - Secrets Manager user resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_secretsmanager_user (Resource) - -Secrets Manager user resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_secretsmanager_user" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - description = "Example user" - write_enabled = false -} - -# Only use the import statement, if you want to import an existing secretsmanager user -import { - to = stackit_secretsmanager_user.import-example - id = "${var.project_id},${var.secret_instance_id},${var.secret_user_id}" -} -``` - - -## Schema - -### Required - -- `description` (String) A user chosen description to differentiate between multiple users. Can't be changed after creation. -- `instance_id` (String) ID of the Secrets Manager instance. -- `project_id` (String) STACKIT Project ID to which the instance is associated. -- `write_enabled` (Boolean) If true, the user has write access to the secrets engine. - -### Read-Only - -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`instance_id`,`user_id`". -- `password` (String, Sensitive) An auto-generated password. 
-- `user_id` (String) The user's ID. -- `username` (String) An auto-generated user name. diff --git a/docs/resources/security_group.md b/docs/resources/security_group.md deleted file mode 100644 index eec31aa0..00000000 --- a/docs/resources/security_group.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_security_group Resource - stackit" -subcategory: "" -description: |- - Security group resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_security_group (Resource) - -Security group resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_security_group" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "my_security_group" - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing security group -import { - to = stackit_security_group.import-example - id = "${var.project_id},${var.security_group_id}" -} -``` - - -## Schema - -### Required - -- `name` (String) The name of the security group. -- `project_id` (String) STACKIT project ID to which the security group is associated. - -### Optional - -- `description` (String) The description of the security group. -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `region` (String) The resource region. If not defined, the provider region is used. -- `stateful` (Boolean) Configures if a security group is stateful or stateless. There can only be one type of security groups per network interface/server. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`security_group_id`". -- `security_group_id` (String) The security group ID. 
diff --git a/docs/resources/security_group_rule.md b/docs/resources/security_group_rule.md deleted file mode 100644 index 97e9fc65..00000000 --- a/docs/resources/security_group_rule.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_security_group_rule Resource - stackit" -subcategory: "" -description: |- - Security group rule resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_security_group_rule (Resource) - -Security group rule resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_security_group_rule" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - security_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - direction = "ingress" - icmp_parameters = { - code = 0 - type = 8 - } - protocol = { - name = "icmp" - } -} - -# Only use the import statement, if you want to import an existing security group rule -# Note: There will be a conflict which needs to be resolved manually. -# Attribute "protocol.number" cannot be specified when "protocol.name" is specified. -import { - to = stackit_security_group_rule.import-example - id = "${var.project_id},${var.security_group_id},${var.security_group_rule_id}" -} -``` - - -## Schema - -### Required - -- `direction` (String) The direction of the traffic which the rule should match. Possible values are: `ingress`, `egress`. -- `project_id` (String) STACKIT project ID to which the security group rule is associated. -- `security_group_id` (String) The security group ID. - -### Optional - -- `description` (String) The rule description. -- `ether_type` (String) The ethertype which the rule should match. -- `icmp_parameters` (Attributes) ICMP Parameters. These parameters should only be provided if the protocol is ICMP. 
(see [below for nested schema](#nestedatt--icmp_parameters)) -- `ip_range` (String) The remote IP range which the rule should match. -- `port_range` (Attributes) The range of ports. This should only be provided if the protocol is not ICMP. (see [below for nested schema](#nestedatt--port_range)) -- `protocol` (Attributes) The internet protocol which the rule should match. (see [below for nested schema](#nestedatt--protocol)) -- `region` (String) The resource region. If not defined, the provider region is used. -- `remote_security_group_id` (String) The remote security group which the rule should match. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`security_group_id`,`security_group_rule_id`". -- `security_group_rule_id` (String) The security group rule ID. - - -### Nested Schema for `icmp_parameters` - -Required: - -- `code` (Number) ICMP code. Can be set if the protocol is ICMP. -- `type` (Number) ICMP type. Can be set if the protocol is ICMP. - - - -### Nested Schema for `port_range` - -Required: - -- `max` (Number) The maximum port number. Should be greater than or equal to the minimum. -- `min` (Number) The minimum port number. Should be less than or equal to the maximum. - - - -### Nested Schema for `protocol` - -Optional: - -- `name` (String) The protocol name which the rule should match. Either `name` or `number` must be provided. Possible values are: `ah`, `dccp`, `egp`, `esp`, `gre`, `icmp`, `igmp`, `ipip`, `ipv6-encap`, `ipv6-frag`, `ipv6-icmp`, `ipv6-nonxt`, `ipv6-opts`, `ipv6-route`, `ospf`, `pgm`, `rsvp`, `sctp`, `tcp`, `udp`, `udplite`, `vrrp`. -- `number` (Number) The protocol number which the rule should match. Either `name` or `number` must be provided. 
diff --git a/docs/resources/server.md b/docs/resources/server.md deleted file mode 100644 index e7559dfc..00000000 --- a/docs/resources/server.md +++ /dev/null @@ -1,441 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_server Resource - stackit" -subcategory: "" -description: |- - Server resource schema. Must have a region specified in the provider configuration. - Example Usage - With key pair - - resource "stackit_key_pair" "keypair" { - name = "example-key-pair" - public_key = chomp(file("path/to/id_rsa.pub")) - } - - resource "stackit_server" "user-data-from-file" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - name = "example-server" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - user_data = file("${path.module}/cloud-init.yaml") - } - - - Boot from volume - - resource "stackit_server" "boot-from-volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - availability_zone = "eu01-1" - machine_type = "g2i.1" - keypair_name = "example-keypair" - } - - - Boot from existing volume - - resource "stackit_volume" "example-volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - size = 12 - source = { - type = "image" - id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - name = "example-volume" - availability_zone = "eu01-1" - } - - resource "stackit_server" "boot-from-volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - source_type = "volume" - source_id = stackit_volume.example-volume.volume_id - } - availability_zone = "eu01-1" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - } - - - Network setup - - resource "stackit_network" "network" { 
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-network" - nameservers = ["192.0.2.0", "198.51.100.0", "203.0.113.0"] - ipv4_prefix_length = 24 - } - - resource "stackit_security_group" "sec-group" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-security-group" - stateful = true - } - - resource "stackit_security_group_rule" "rule" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - security_group_id = stackit_security_group.sec-group.security_group_id - direction = "ingress" - ether_type = "IPv4" - } - - resource "stackit_network_interface" "nic" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_id = stackit_network.network.network_id - security_group_ids = [stackit_security_group.sec-group.security_group_id] - } - - resource "stackit_server" "server-with-network" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - network_interfaces = [ - stackit_network_interface.nic.network_interface_id - ] - } - - resource "stackit_public_ip" "public-ip" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_interface_id = stackit_network_interface.nic.network_interface_id - } - - - Server with attached volume - - resource "stackit_volume" "example-volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - size = 12 - performance_class = "storage_premium_perf6" - name = "example-volume" - availability_zone = "eu01-1" - } - - resource "stackit_server" "server-with-volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - availability_zone = "eu01-1" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - } - - resource 
"stackit_server_volume_attach" "attach_volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = stackit_server.server-with-volume.server_id - volume_id = stackit_volume.example-volume.volume_id - } - - - Server with user data (cloud-init) - - resource "stackit_server" "user-data" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - name = "example-server" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - user_data = "#!/bin/bash\n/bin/su" - } - - resource "stackit_server" "user-data-from-file" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - name = "example-server" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - user_data = file("${path.module}/cloud-init.yaml") - } ---- - -# stackit_server (Resource) - -Server resource schema. Must have a region specified in the provider configuration. 
- -## Example Usage - - -### With key pair -```terraform -resource "stackit_key_pair" "keypair" { - name = "example-key-pair" - public_key = chomp(file("path/to/id_rsa.pub")) -} - -resource "stackit_server" "user-data-from-file" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - name = "example-server" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - user_data = file("${path.module}/cloud-init.yaml") -} - -``` - -### Boot from volume -```terraform -resource "stackit_server" "boot-from-volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - availability_zone = "eu01-1" - machine_type = "g2i.1" - keypair_name = "example-keypair" -} - -``` - -### Boot from existing volume -```terraform -resource "stackit_volume" "example-volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - size = 12 - source = { - type = "image" - id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - name = "example-volume" - availability_zone = "eu01-1" -} - -resource "stackit_server" "boot-from-volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - source_type = "volume" - source_id = stackit_volume.example-volume.volume_id - } - availability_zone = "eu01-1" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name -} - -``` - -### Network setup -```terraform -resource "stackit_network" "network" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-network" - nameservers = ["192.0.2.0", "198.51.100.0", "203.0.113.0"] - ipv4_prefix_length = 24 -} - -resource "stackit_security_group" "sec-group" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-security-group" - stateful = true -} - -resource 
"stackit_security_group_rule" "rule" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - security_group_id = stackit_security_group.sec-group.security_group_id - direction = "ingress" - ether_type = "IPv4" -} - -resource "stackit_network_interface" "nic" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_id = stackit_network.network.network_id - security_group_ids = [stackit_security_group.sec-group.security_group_id] -} - -resource "stackit_server" "server-with-network" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - network_interfaces = [ - stackit_network_interface.nic.network_interface_id - ] -} - -resource "stackit_public_ip" "public-ip" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_interface_id = stackit_network_interface.nic.network_interface_id -} - -``` - -### Server with attached volume -```terraform -resource "stackit_volume" "example-volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - size = 12 - performance_class = "storage_premium_perf6" - name = "example-volume" - availability_zone = "eu01-1" -} - -resource "stackit_server" "server-with-volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - availability_zone = "eu01-1" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name -} - -resource "stackit_server_volume_attach" "attach_volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = stackit_server.server-with-volume.server_id - volume_id = stackit_volume.example-volume.volume_id -} - -``` - -### Server with user data (cloud-init) -```terraform -resource "stackit_server" "user-data" { - project_id = 
"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - name = "example-server" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - user_data = "#!/bin/bash\n/bin/su" -} - -resource "stackit_server" "user-data-from-file" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - boot_volume = { - size = 64 - source_type = "image" - source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - } - name = "example-server" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - user_data = file("${path.module}/cloud-init.yaml") -} - -``` - -## Example Usage - -```terraform -resource "stackit_server" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "59838a89-51b1-4892-b57f-b3caf598ee2f" // Ubuntu 24.04 - } - availability_zone = "xxxx-x" - machine_type = "g2i.1" - network_interfaces = [ - stackit_network_interface.example.network_interface_id - ] -} - -# Only use the import statement, if you want to import an existing server -# Note: There will be a conflict which needs to be resolved manually. -# Must set a configuration value for the boot_volume.source_type and boot_volume.source_id attribute as the provider has marked it as required. -# Since those attributes are not fetched in general from the API call, after adding them this would replace your server resource after an terraform apply. -# In order to prevent this you need to add: -# lifecycle { -# ignore_changes = [ boot_volume ] -# } -import { - to = stackit_server.import-example - id = "${var.project_id},${var.region},${var.server_id}" -} -``` - - -## Schema - -### Required - -- `machine_type` (String) Name of the type of the machine for the server. 
Possible values are documented in [Virtual machine flavors](https://docs.stackit.cloud/products/compute-engine/server/basics/machine-types/) -- `name` (String) The name of the server. -- `project_id` (String) STACKIT project ID to which the server is associated. - -### Optional - -- `affinity_group` (String) The affinity group the server is assigned to. -- `availability_zone` (String) The availability zone of the server. -- `boot_volume` (Attributes) The boot volume for the server (see [below for nested schema](#nestedatt--boot_volume)) -- `desired_status` (String) The desired status of the server resource. Possible values are: `active`, `inactive`, `deallocated`. -- `image_id` (String) The image ID to be used for an ephemeral disk on the server. -- `keypair_name` (String) The name of the keypair used during server creation. -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `network_interfaces` (List of String) The IDs of network interfaces which should be attached to the server. Updating it will recreate the server. **Required when (re-)creating servers. Still marked as optional in the schema to not introduce breaking changes. There will be a migration path for this field soon.** -- `region` (String) The resource region. If not defined, the provider region is used. -- `user_data` (String) User data that is passed via cloud-init to the server. - -### Read-Only - -- `created_at` (String) Date-time when the server was created -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`server_id`". -- `launched_at` (String) Date-time when the server was launched -- `server_id` (String) The server ID. -- `updated_at` (String) Date-time when the server was updated - - -### Nested Schema for `boot_volume` - -Required: - -- `source_id` (String) The ID of the source, either image ID or volume ID -- `source_type` (String) The type of the source. 
Possible values are: `volume`, `image`. - -Optional: - -- `delete_on_termination` (Boolean) Delete the volume during the termination of the server. Only allowed when `source_type` is `image`. -- `performance_class` (String) The performance class of the server. -- `size` (Number) The size of the boot volume in GB. Must be provided when `source_type` is `image`. - -Read-Only: - -- `id` (String) The ID of the boot volume diff --git a/docs/resources/server_backup_schedule.md b/docs/resources/server_backup_schedule.md deleted file mode 100644 index 80f6fc56..00000000 --- a/docs/resources/server_backup_schedule.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_server_backup_schedule Resource - stackit" -subcategory: "" -description: |- - Server backup schedule resource schema. Must have a region specified in the provider configuration. - ~> This resource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources. ---- - -# stackit_server_backup_schedule (Resource) - -Server backup schedule resource schema. Must have a `region` specified in the provider configuration. - -~> This resource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources. 
- -## Example Usage - -```terraform -resource "stackit_server_backup_schedule" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example_backup_schedule_name" - rrule = "DTSTART;TZID=Europe/Sofia:20200803T023000 RRULE:FREQ=DAILY;INTERVAL=1" - enabled = true - backup_properties = { - name = "example_backup_name" - retention_period = 14 - volume_ids = null - } -} - -# Only use the import statement, if you want to import an existing server backup schedule -import { - to = stackit_server_backup_schedule.import-example - id = "${var.project_id},${var.region},${var.server_id},${var.server_backup_schedule_id}" -} -``` - - -## Schema - -### Required - -- `backup_properties` (Attributes) Backup schedule details for the backups. (see [below for nested schema](#nestedatt--backup_properties)) -- `enabled` (Boolean) Is the backup schedule enabled or disabled. -- `name` (String) The schedule name. -- `project_id` (String) STACKIT Project ID to which the server is associated. -- `rrule` (String) Backup schedule described in `rrule` (recurrence rule) format. -- `server_id` (String) Server ID for the backup schedule. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `backup_schedule_id` (Number) Backup schedule ID. -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`region`,`server_id`,`backup_schedule_id`". 
- - -### Nested Schema for `backup_properties` - -Required: - -- `name` (String) -- `retention_period` (Number) - -Optional: - -- `volume_ids` (List of String) diff --git a/docs/resources/server_network_interface_attach.md b/docs/resources/server_network_interface_attach.md deleted file mode 100644 index eab7c8c9..00000000 --- a/docs/resources/server_network_interface_attach.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_server_network_interface_attach Resource - stackit" -subcategory: "" -description: |- - Network interface attachment resource schema. Attaches a network interface to a server. The attachment only takes full effect after server reboot. ---- - -# stackit_server_network_interface_attach (Resource) - -Network interface attachment resource schema. Attaches a network interface to a server. The attachment only takes full effect after server reboot. - -## Example Usage - -```terraform -resource "stackit_server_network_interface_attach" "attached_network_interface" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_interface_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing server network interface attachment -import { - to = stackit_server_network_interface_attach.import-example - id = "${var.project_id},${var.region},${var.server_id},${var.network_interface_id}" -} -``` - - -## Schema - -### Required - -- `network_interface_id` (String) The network interface ID. -- `project_id` (String) STACKIT project ID to which the network interface attachment is associated. -- `server_id` (String) The server ID. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. 
It is structured as "`project_id`,`region`,`server_id`,`network_interface_id`". diff --git a/docs/resources/server_service_account_attach.md b/docs/resources/server_service_account_attach.md deleted file mode 100644 index 215c6f5f..00000000 --- a/docs/resources/server_service_account_attach.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_server_service_account_attach Resource - stackit" -subcategory: "" -description: |- - Service account attachment resource schema. Attaches a service account to a server. Must have a region specified in the provider configuration. ---- - -# stackit_server_service_account_attach (Resource) - -Service account attachment resource schema. Attaches a service account to a server. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_server_service_account_attach" "attached_service_account" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - service_account_email = "service-account@stackit.cloud" -} - -# Only use the import statement, if you want to import an existing server service account attachment -import { - to = stackit_server_service_account_attach.import-example - id = "${var.project_id},${var.region},${var.server_id},${var.service_account_email}" -} -``` - - -## Schema - -### Required - -- `project_id` (String) STACKIT project ID to which the service account attachment is associated. -- `server_id` (String) The server ID. -- `service_account_email` (String) The service account email. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`server_id`,`service_account_email`". 
diff --git a/docs/resources/server_update_schedule.md b/docs/resources/server_update_schedule.md deleted file mode 100644 index f0c00c88..00000000 --- a/docs/resources/server_update_schedule.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_server_update_schedule Resource - stackit" -subcategory: "" -description: |- - Server update schedule resource schema. Must have a region specified in the provider configuration. - ~> This resource is in beta and may be subject to breaking changes in the future. Use with caution. See our guide https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources for how to opt-in to use beta resources. ---- - -# stackit_server_update_schedule (Resource) - -Server update schedule resource schema. Must have a `region` specified in the provider configuration. - -~> This resource is in beta and may be subject to breaking changes in the future. Use with caution. See our [guide](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs/guides/opting_into_beta_resources) for how to opt-in to use beta resources. - -## Example Usage - -```terraform -resource "stackit_server_update_schedule" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example_update_schedule_name" - rrule = "DTSTART;TZID=Europe/Sofia:20200803T023000 RRULE:FREQ=DAILY;INTERVAL=1" - enabled = true - maintenance_window = 1 -} - -# Only use the import statement, if you want to import an existing server update schedule -import { - to = stackit_server_update_schedule.import-example - id = "${var.project_id},${var.region},${var.server_id},${var.server_update_schedule_id}" -} -``` - - -## Schema - -### Required - -- `enabled` (Boolean) Is the update schedule enabled or disabled. -- `maintenance_window` (Number) Maintenance window [1..24]. 
-- `name` (String) The schedule name. -- `project_id` (String) STACKIT Project ID to which the server is associated. -- `rrule` (String) Update schedule described in `rrule` (recurrence rule) format. -- `server_id` (String) Server ID for the update schedule. - -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`region`,`server_id`,`update_schedule_id`". -- `update_schedule_id` (Number) Update schedule ID. diff --git a/docs/resources/server_volume_attach.md b/docs/resources/server_volume_attach.md deleted file mode 100644 index 61710ce4..00000000 --- a/docs/resources/server_volume_attach.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_server_volume_attach Resource - stackit" -subcategory: "" -description: |- - Volume attachment resource schema. Attaches a volume to a server. Must have a region specified in the provider configuration. ---- - -# stackit_server_volume_attach (Resource) - -Volume attachment resource schema. Attaches a volume to a server. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_server_volume_attach" "attached_volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - volume_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing server volume attachment -import { - to = stackit_server_volume_attach.import-example - id = "${var.project_id},${var.region},${var.server_id},${var.volume_id}" -} -``` - - -## Schema - -### Required - -- `project_id` (String) STACKIT project ID to which the volume attachment is associated. -- `server_id` (String) The server ID. -- `volume_id` (String) The volume ID. 
- -### Optional - -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`server_id`,`volume_id`". diff --git a/docs/resources/service_account.md b/docs/resources/service_account.md deleted file mode 100644 index 23684418..00000000 --- a/docs/resources/service_account.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_service_account Resource - stackit" -subcategory: "" -description: |- - Service account resource schema. ---- - -# stackit_service_account (Resource) - -Service account resource schema. - -## Example Usage - -```terraform -resource "stackit_service_account" "sa" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "sa01" -} - -# Only use the import statement, if you want to import an existing service account -import { - to = stackit_service_account.import-example - id = "${var.project_id},${var.service_account_email}" -} -``` - - -## Schema - -### Required - -- `name` (String) Name of the service account. -- `project_id` (String) STACKIT project ID to which the service account is associated. - -### Read-Only - -- `email` (String) Email of the service account. -- `id` (String) Terraform's internal resource ID, structured as "`project_id`,`email`". diff --git a/docs/resources/service_account_access_token.md b/docs/resources/service_account_access_token.md deleted file mode 100644 index 49cc02e2..00000000 --- a/docs/resources/service_account_access_token.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_service_account_access_token Resource - stackit" -subcategory: "" -description: |- - Service account access token schema. - !> This resource is scheduled for deprecation and will be removed on December 17, 2025. 
To ensure a smooth transition, please refer to our migration guide at https://docs.stackit.cloud/platform/access-and-identity/service-accounts/migrate-flows/ for detailed instructions and recommendations. - Example Usage - Automatically rotate access tokens - - resource "stackit_service_account" "sa" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "sa01" - } - - resource "time_rotating" "rotate" { - rotation_days = 80 - } - - resource "stackit_service_account_access_token" "sa_token" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - service_account_email = stackit_service_account.sa.email - ttl_days = 180 - - rotate_when_changed = { - rotation = time_rotating.rotate.id - } - } ---- - -# stackit_service_account_access_token (Resource) - -Service account access token schema. - -!> This resource is scheduled for deprecation and will be removed on December 17, 2025. To ensure a smooth transition, please refer to our migration guide at https://docs.stackit.cloud/platform/access-and-identity/service-accounts/migrate-flows/ for detailed instructions and recommendations. - -## Example Usage - - -### Automatically rotate access tokens -```terraform -resource "stackit_service_account" "sa" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "sa01" -} - -resource "time_rotating" "rotate" { - rotation_days = 80 -} - -resource "stackit_service_account_access_token" "sa_token" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - service_account_email = stackit_service_account.sa.email - ttl_days = 180 - - rotate_when_changed = { - rotation = time_rotating.rotate.id - } -} - -``` - - - - -## Schema - -### Required - -- `project_id` (String) STACKIT project ID associated with the service account token. -- `service_account_email` (String) Email address linked to the service account. 
- -### Optional - -- `rotate_when_changed` (Map of String) A map of arbitrary key/value pairs that will force recreation of the token when they change, enabling token rotation based on external conditions such as a rotating timestamp. Changing this forces a new resource to be created. -- `ttl_days` (Number) Specifies the token's validity duration in days. If unspecified, defaults to 90 days. - -### Read-Only - -- `access_token_id` (String) Identifier for the access token linked to the service account. -- `active` (Boolean) Indicates whether the token is currently active or inactive. -- `created_at` (String) Timestamp indicating when the access token was created. -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`service_account_email`,`access_token_id`". -- `token` (String, Sensitive) JWT access token for API authentication. Prefixed by 'Bearer' and should be stored securely as it is irretrievable once lost. -- `valid_until` (String) Estimated expiration timestamp of the access token. For precise validity, check the JWT details. diff --git a/docs/resources/service_account_key.md b/docs/resources/service_account_key.md deleted file mode 100644 index 8628384f..00000000 --- a/docs/resources/service_account_key.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_service_account_key Resource - stackit" -subcategory: "" -description: |- - Service account key schema.
- Example Usage - Automatically rotate service account keys - - resource "stackit_service_account" "sa" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "sa01" - } - - resource "time_rotating" "rotate" { - rotation_days = 80 - } - - resource "stackit_service_account_key" "sa_key" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - service_account_email = stackit_service_account.sa.email - ttl_days = 90 - - rotate_when_changed = { - rotation = time_rotating.rotate.id - } - } ---- - -# stackit_service_account_key (Resource) - -Service account key schema. -## Example Usage - - -### Automatically rotate service account keys -```terraform -resource "stackit_service_account" "sa" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "sa01" -} - -resource "time_rotating" "rotate" { - rotation_days = 80 -} - -resource "stackit_service_account_key" "sa_key" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - service_account_email = stackit_service_account.sa.email - ttl_days = 90 - - rotate_when_changed = { - rotation = time_rotating.rotate.id - } -} - -``` - - - - -## Schema - -### Required - -- `project_id` (String) The STACKIT project ID associated with the service account key. -- `service_account_email` (String) The email address associated with the service account, used for account identification and communication. - -### Optional - -- `public_key` (String) Specifies the public_key (RSA2048 key-pair). If not provided, a certificate from STACKIT will be used to generate a private_key. -- `rotate_when_changed` (Map of String) A map of arbitrary key/value pairs designed to force key recreation when they change, facilitating key rotation based on external factors such as a changing timestamp. Modifying this map triggers the creation of a new resource. -- `ttl_days` (Number) Specifies the key's validity duration in days. 
If left unspecified, the key is considered valid until it is deleted. - -### Read-Only - -- `id` (String) Terraform's internal resource identifier. It is structured as "`project_id`,`service_account_email`,`key_id`". -- `json` (String, Sensitive) The raw JSON representation of the service account key, available for direct use. -- `key_id` (String) The unique identifier for the key associated with the service account. diff --git a/docs/resources/ske_cluster.md b/docs/resources/ske_cluster.md deleted file mode 100644 index b8b68140..00000000 --- a/docs/resources/ske_cluster.md +++ /dev/null @@ -1,204 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_ske_cluster Resource - stackit" -subcategory: "" -description: |- - SKE Cluster Resource schema. Must have a region specified in the provider configuration. - -> When updating node_pools of a stackit_ske_cluster, the Terraform plan might appear incorrect as it matches the node pools by index rather than by name. However, the SKE API correctly identifies node pools by name and applies the intended changes. Please review your changes carefully to ensure the correct configuration will be applied. ---- - -# stackit_ske_cluster (Resource) - -SKE Cluster Resource schema. Must have a `region` specified in the provider configuration. - --> When updating `node_pools` of a `stackit_ske_cluster`, the Terraform plan might appear incorrect as it matches the node pools by index rather than by name. However, the SKE API correctly identifies node pools by name and applies the intended changes. Please review your changes carefully to ensure the correct configuration will be applied.
- -## Example Usage - -```terraform -resource "stackit_ske_cluster" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - kubernetes_version_min = "x.x" - node_pools = [ - { - name = "np-example" - machine_type = "x.x" - os_version = "x.x.x" - minimum = "2" - maximum = "3" - availability_zones = ["eu01-3"] - } - ] - maintenance = { - enable_kubernetes_version_updates = true - enable_machine_image_version_updates = true - start = "01:00:00Z" - end = "02:00:00Z" - } -} - -# Only use the import statement, if you want to import an existing ske cluster -import { - to = stackit_ske_cluster.import-example - id = "${var.project_id},${var.region},${var.ske_name}" -} -``` - - -## Schema - -### Required - -- `name` (String) The cluster name. -- `node_pools` (Attributes List) One or more `node_pool` block as defined below. (see [below for nested schema](#nestedatt--node_pools)) -- `project_id` (String) STACKIT project ID to which the cluster is associated. - -### Optional - -- `extensions` (Attributes) A single extensions block as defined below. (see [below for nested schema](#nestedatt--extensions)) -- `hibernations` (Attributes List) One or more hibernation block as defined below. (see [below for nested schema](#nestedatt--hibernations)) -- `kubernetes_version_min` (String) The minimum Kubernetes version. This field will be used to set the minimum kubernetes version on creation/update of the cluster. If unset, the latest supported Kubernetes version will be used. SKE automatically updates the cluster Kubernetes version if you have set `maintenance.enable_kubernetes_version_updates` to true or if there is a mandatory update, as described in [General information for Kubernetes & OS updates](https://docs.stackit.cloud/products/runtime/kubernetes-engine/basics/version-updates/). To get the current kubernetes version being used for your cluster, use the read-only `kubernetes_version_used` field. 
-- `maintenance` (Attributes) A single maintenance block as defined below. (see [below for nested schema](#nestedatt--maintenance)) -- `network` (Attributes) Network block as defined below. (see [below for nested schema](#nestedatt--network)) -- `region` (String) The resource region. If not defined, the provider region is used. - -### Read-Only - -- `egress_address_ranges` (List of String) The outgoing network ranges (in CIDR notation) of traffic originating from workload on the cluster. -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`name`". -- `kubernetes_version_used` (String) Full Kubernetes version used. For example, if 1.22 was set in `kubernetes_version_min`, this value may result in 1.22.15. SKE automatically updates the cluster Kubernetes version if you have set `maintenance.enable_kubernetes_version_updates` to true or if there is a mandatory update, as described in [General information for Kubernetes & OS updates](https://docs.stackit.cloud/products/runtime/kubernetes-engine/basics/version-updates/). -- `pod_address_ranges` (List of String) The network ranges (in CIDR notation) used by pods of the cluster. - - -### Nested Schema for `node_pools` - -Required: - -- `availability_zones` (List of String) Specify a list of availability zones. E.g. `eu01-m` -- `machine_type` (String) The machine type. -- `maximum` (Number) Maximum number of nodes in the pool. -- `minimum` (Number) Minimum number of nodes in the pool. -- `name` (String) Specifies the name of the node pool. - -Optional: - -- `allow_system_components` (Boolean) Allow system components to run on this node pool. -- `cri` (String) Specifies the container runtime. Defaults to `containerd` -- `labels` (Map of String) Labels to add to each node. -- `max_surge` (Number) Maximum number of additional VMs that are created during an update. If set (larger than 0), then it must be at least the amount of zones configured for the nodepool.
The `max_surge` and `max_unavailable` fields cannot both be unset at the same time. -- `max_unavailable` (Number) Maximum number of VMs that can be unavailable during an update. If set (larger than 0), then it must be at least the amount of zones configured for the nodepool. The `max_surge` and `max_unavailable` fields cannot both be unset at the same time. -- `os_name` (String) The name of the OS image. Defaults to `flatcar`. -- `os_version` (String, Deprecated) This field is deprecated, use `os_version_min` to configure the version and `os_version_used` to get the currently used version instead. -- `os_version_min` (String) The minimum OS image version. This field will be used to set the minimum OS image version on creation/update of the cluster. If unset, the latest supported OS image version will be used. SKE automatically updates the cluster Kubernetes version if you have set `maintenance.enable_kubernetes_version_updates` to true or if there is a mandatory update, as described in [General information for Kubernetes & OS updates](https://docs.stackit.cloud/products/runtime/kubernetes-engine/basics/version-updates/). To get the current OS image version being used for the node pool, use the read-only `os_version_used` field. -- `taints` (Attributes List) Specifies a taint list as defined below. (see [below for nested schema](#nestedatt--node_pools--taints)) -- `volume_size` (Number) The volume size in GB. Defaults to `20` -- `volume_type` (String) Specifies the volume type. Defaults to `storage_premium_perf1`. - -Read-Only: - -- `os_version_used` (String) Full OS image version used. For example, if 3815.2 was set in `os_version_min`, this value may result in 3815.2.2.
SKE automatically updates the cluster Kubernetes version if you have set `maintenance.enable_kubernetes_version_updates` to true or if there is a mandatory update, as described in [General information for Kubernetes & OS updates](https://docs.stackit.cloud/products/runtime/kubernetes-engine/basics/version-updates/). - - -### Nested Schema for `node_pools.taints` - -Required: - -- `effect` (String) The taint effect. E.g. `PreferNoSchedule`. -- `key` (String) Taint key to be applied to a node. - -Optional: - -- `value` (String) Taint value corresponding to the taint key. - - - - -### Nested Schema for `extensions` - -Optional: - -- `acl` (Attributes) Cluster access control configuration. (see [below for nested schema](#nestedatt--extensions--acl)) -- `argus` (Attributes, Deprecated) A single argus block as defined below. This field is deprecated and will be removed 06 January 2026. (see [below for nested schema](#nestedatt--extensions--argus)) -- `dns` (Attributes) DNS extension configuration (see [below for nested schema](#nestedatt--extensions--dns)) -- `observability` (Attributes) A single observability block as defined below. (see [below for nested schema](#nestedatt--extensions--observability)) - - -### Nested Schema for `extensions.acl` - -Required: - -- `allowed_cidrs` (List of String) Specify a list of CIDRs to whitelist. -- `enabled` (Boolean) Is ACL enabled? - - - -### Nested Schema for `extensions.argus` - -Required: - -- `enabled` (Boolean) Flag to enable/disable Argus extensions. - -Optional: - -- `argus_instance_id` (String) Argus instance ID to choose which Argus instance is used. Required when enabled is set to `true`.
- - - -### Nested Schema for `extensions.dns` - -Required: - -- `enabled` (Boolean) Flag to enable/disable DNS extensions - -Optional: - -- `zones` (List of String) Specify a list of domain filters for externalDNS (e.g., `foo.runs.onstackit.cloud`) - - - -### Nested Schema for `extensions.observability` - -Required: - -- `enabled` (Boolean) Flag to enable/disable Observability extensions. - -Optional: - -- `instance_id` (String) Observability instance ID to choose which Observability instance is used. Required when enabled is set to `true`. - - - - -### Nested Schema for `hibernations` - -Required: - -- `end` (String) End time of hibernation in crontab syntax. E.g. `0 8 * * *` for waking up the cluster at 8am. -- `start` (String) Start time of cluster hibernation in crontab syntax. E.g. `0 18 * * *` for starting everyday at 6pm. - -Optional: - -- `timezone` (String) Timezone name corresponding to a file in the IANA Time Zone database. i.e. `Europe/Berlin`. - - - -### Nested Schema for `maintenance` - -Required: - -- `end` (String) Time for maintenance window end. E.g. `01:23:45Z`, `05:00:00+02:00`. -- `start` (String) Time for maintenance window start. E.g. `01:23:45Z`, `05:00:00+02:00`. - -Optional: - -- `enable_kubernetes_version_updates` (Boolean) Flag to enable/disable auto-updates of the Kubernetes version. Defaults to `true`. SKE automatically updates the cluster Kubernetes version if you have set `maintenance.enable_kubernetes_version_updates` to true or if there is a mandatory update, as described in [General information for Kubernetes & OS updates](https://docs.stackit.cloud/products/runtime/kubernetes-engine/basics/version-updates/). -- `enable_machine_image_version_updates` (Boolean) Flag to enable/disable auto-updates of the OS image version. Defaults to `true`. 
SKE automatically updates the cluster Kubernetes version if you have set `maintenance.enable_kubernetes_version_updates` to true or if there is a mandatory update, as described in [General information for Kubernetes & OS updates](https://docs.stackit.cloud/products/runtime/kubernetes-engine/basics/version-updates/). - - - -### Nested Schema for `network` - -Optional: - -- `id` (String) ID of the STACKIT Network Area (SNA) network into which the cluster will be deployed. diff --git a/docs/resources/ske_kubeconfig.md b/docs/resources/ske_kubeconfig.md deleted file mode 100644 index ff890a8f..00000000 --- a/docs/resources/ske_kubeconfig.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_ske_kubeconfig Resource - stackit" -subcategory: "" -description: |- - SKE kubeconfig resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_ske_kubeconfig (Resource) - -SKE kubeconfig resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_ske_kubeconfig" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - cluster_name = "example-cluster" - - refresh = true - expiration = 7200 # 2 hours - refresh_before = 3600 # 1 hour -} -``` - - -## Schema - -### Required - -- `cluster_name` (String) Name of the SKE cluster. -- `project_id` (String) STACKIT project ID to which the cluster is associated. - -### Optional - -- `expiration` (Number) Expiration time of the kubeconfig, in seconds. Defaults to `3600` -- `refresh` (Boolean) If set to true, the provider will check if the kubeconfig has expired and will generate a new valid one in-place -- `refresh_before` (Number) Number of seconds before expiration at which to trigger a refresh of the kubeconfig. Only used if refresh is set to true. -- `region` (String) The resource region. If not defined, the provider region is used.
- -### Read-Only - -- `creation_time` (String) Date-time when the kubeconfig was created -- `expires_at` (String) Timestamp when the kubeconfig expires -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`cluster_name`,`kube_config_id`". -- `kube_config` (String, Sensitive) Raw short-lived admin kubeconfig. -- `kube_config_id` (String) Internally generated UUID to identify a kubeconfig resource in Terraform, since the SKE API doesn't return a kubeconfig identifier diff --git a/docs/resources/sqlserverflex_instance.md b/docs/resources/sqlserverflex_instance.md deleted file mode 100644 index 2d88f429..00000000 --- a/docs/resources/sqlserverflex_instance.md +++ /dev/null @@ -1,95 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_sqlserverflex_instance Resource - stackit" -subcategory: "" -description: |- - SQLServer Flex instance resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_sqlserverflex_instance (Resource) - -SQLServer Flex instance resource schema. Must have a `region` specified in the provider configuration. - -## Example Usage - -```terraform -resource "stackit_sqlserverflex_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - acl = ["XXX.XXX.XXX.X/XX", "XX.XXX.XX.X/XX"] - backup_schedule = "00 00 * * *" - flavor = { - cpu = 4 - ram = 16 - } - storage = { - class = "class" - size = 5 - } - version = 2022 -} - -# Only use the import statement, if you want to import an existing sqlserverflex instance -import { - to = stackit_sqlserverflex_instance.import-example - id = "${var.project_id},${var.region},${var.sql_instance_id}" -} -``` - - -## Schema - -### Required - -- `flavor` (Attributes) (see [below for nested schema](#nestedatt--flavor)) -- `name` (String) Instance name. -- `project_id` (String) STACKIT project ID to which the instance is associated.
- -### Optional - -- `acl` (List of String) The Access Control List (ACL) for the SQLServer Flex instance. -- `backup_schedule` (String) The backup schedule. Should follow the cron scheduling system format (e.g. "0 0 * * *") -- `options` (Attributes) (see [below for nested schema](#nestedatt--options)) -- `region` (String) The resource region. If not defined, the provider region is used. -- `storage` (Attributes) (see [below for nested schema](#nestedatt--storage)) -- `version` (String) - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`instance_id`". -- `instance_id` (String) ID of the SQLServer Flex instance. -- `replicas` (Number) - - -### Nested Schema for `flavor` - -Required: - -- `cpu` (Number) -- `ram` (Number) - -Read-Only: - -- `description` (String) -- `id` (String) - - - -### Nested Schema for `options` - -Optional: - -- `retention_days` (Number) - -Read-Only: - -- `edition` (String) - - - -### Nested Schema for `storage` - -Optional: - -- `class` (String) -- `size` (Number) diff --git a/docs/resources/sqlserverflex_user.md b/docs/resources/sqlserverflexalpha_user.md similarity index 80% rename from docs/resources/sqlserverflex_user.md rename to docs/resources/sqlserverflexalpha_user.md index 20a3b161..1554ad07 100644 --- a/docs/resources/sqlserverflex_user.md +++ b/docs/resources/sqlserverflexalpha_user.md @@ -1,19 +1,21 @@ --- # generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_sqlserverflex_user Resource - stackit" +page_title: "stackitprivatepreview_sqlserverflexalpha_user Resource - stackitprivatepreview" subcategory: "" description: |- SQLServer Flex user resource schema. Must have a region specified in the provider configuration. --- -# stackit_sqlserverflex_user (Resource) +# stackitprivatepreview_sqlserverflexalpha_user (Resource) SQLServer Flex user resource schema. Must have a `region` specified in the provider configuration. 
## Example Usage ```terraform -resource "stackit_sqlserverflex_user" "example" { +# Copyright (c) STACKIT + +resource "stackitprivatepreview_sqlserverflexalpha_user" "example" { project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" username = "username" @@ -22,7 +24,7 @@ resource "stackit_sqlserverflex_user" "example" { # Only use the import statement, if you want to import an existing sqlserverflex user import { - to = stackit_sqlserverflex_user.import-example + to = stackitprivatepreview_sqlserverflexalpha_user.import-example id = "${var.project_id},${var.region},${var.sql_instance_id},${var.sql_user_id}" } ``` @@ -43,8 +45,10 @@ import { ### Read-Only +- `default_database` (String) - `host` (String) - `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`instance_id`,`user_id`". - `password` (String, Sensitive) Password of the user account. - `port` (Number) +- `status` (String) - `user_id` (String) User ID. diff --git a/docs/resources/volume.md b/docs/resources/volume.md deleted file mode 100644 index 0e61bb13..00000000 --- a/docs/resources/volume.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -# generated by https://github.com/hashicorp/terraform-plugin-docs -page_title: "stackit_volume Resource - stackit" -subcategory: "" -description: |- - Volume resource schema. Must have a region specified in the provider configuration. ---- - -# stackit_volume (Resource) - -Volume resource schema. Must have a `region` specified in the provider configuration. 
- -## Example Usage - -```terraform -resource "stackit_volume" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "my_volume" - availability_zone = "eu01-1" - size = 64 - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing volume -import { - to = stackit_volume.import-example - id = "${var.project_id},${var.region},${var.volume_id}" -} -``` - - -## Schema - -### Required - -- `availability_zone` (String) The availability zone of the volume. -- `project_id` (String) STACKIT project ID to which the volume is associated. - -### Optional - -- `description` (String) The description of the volume. -- `labels` (Map of String) Labels are key-value string pairs which can be attached to a resource container -- `name` (String) The name of the volume. -- `performance_class` (String) The performance class of the volume. Possible values are documented in [Service plans BlockStorage](https://docs.stackit.cloud/products/storage/block-storage/basics/service-plans/#currently-available-service-plans-performance-classes) -- `region` (String) The resource region. If not defined, the provider region is used. -- `size` (Number) The size of the volume in GB. It can only be updated to a larger value than the current size. Either `size` or `source` must be provided -- `source` (Attributes) The source of the volume. It can be either a volume, an image, a snapshot or a backup. Either `size` or `source` must be provided (see [below for nested schema](#nestedatt--source)) - -### Read-Only - -- `id` (String) Terraform's internal resource ID. It is structured as "`project_id`,`region`,`volume_id`". -- `server_id` (String) The server ID of the server to which the volume is attached to. -- `volume_id` (String) The volume ID. - - -### Nested Schema for `source` - -Required: - -- `id` (String) The ID of the source, e.g. image ID -- `type` (String) The type of the source. 
Possible values are: `volume`, `image`, `snapshot`, `backup`. diff --git a/examples/data-sources/stackit_affinity_group/data-source.tf b/examples/data-sources/stackit_affinity_group/data-source.tf deleted file mode 100644 index 0d6fe625..00000000 --- a/examples/data-sources/stackit_affinity_group/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_affinity_group" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - affinity_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_cdn_custom_domain/data-source.tf b/examples/data-sources/stackit_cdn_custom_domain/data-source.tf deleted file mode 100644 index 23504bb6..00000000 --- a/examples/data-sources/stackit_cdn_custom_domain/data-source.tf +++ /dev/null @@ -1,6 +0,0 @@ -data "stackit_cdn_custom_domain" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - distribution_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "https://xxx.xxx" -} - diff --git a/examples/data-sources/stackit_cdn_distribution/data-source.tf b/examples/data-sources/stackit_cdn_distribution/data-source.tf deleted file mode 100644 index be24c0bc..00000000 --- a/examples/data-sources/stackit_cdn_distribution/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_cdn_distribution" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - distribution_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - diff --git a/examples/data-sources/stackit_dns_record_set/data-source.tf b/examples/data-sources/stackit_dns_record_set/data-source.tf deleted file mode 100644 index ad81e4d9..00000000 --- a/examples/data-sources/stackit_dns_record_set/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_dns_record_set" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - zone_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - record_set_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_dns_zone/data-source.tf 
b/examples/data-sources/stackit_dns_zone/data-source.tf deleted file mode 100644 index 227e1268..00000000 --- a/examples/data-sources/stackit_dns_zone/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_dns_zone" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - zone_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_git/data-source.tf b/examples/data-sources/stackit_git/data-source.tf deleted file mode 100644 index d6e73d27..00000000 --- a/examples/data-sources/stackit_git/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_git" "git" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} \ No newline at end of file diff --git a/examples/data-sources/stackit_iaas_project/data-source.tf b/examples/data-sources/stackit_iaas_project/data-source.tf deleted file mode 100644 index cb5e87f0..00000000 --- a/examples/data-sources/stackit_iaas_project/data-source.tf +++ /dev/null @@ -1,3 +0,0 @@ -data "stackit_iaas_project" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} \ No newline at end of file diff --git a/examples/data-sources/stackit_image/data-source.tf b/examples/data-sources/stackit_image/data-source.tf deleted file mode 100644 index adc05587..00000000 --- a/examples/data-sources/stackit_image/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_image" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - image_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_image_v2/data-source.tf b/examples/data-sources/stackit_image_v2/data-source.tf deleted file mode 100644 index 401488c4..00000000 --- a/examples/data-sources/stackit_image_v2/data-source.tf +++ /dev/null @@ -1,28 +0,0 @@ -data "stackit_image_v2" "default" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - image_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -data "stackit_image_v2" 
"name_match" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "Ubuntu 22.04" -} - -data "stackit_image_v2" "name_regex_latest" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name_regex = "^Ubuntu .*" -} - -data "stackit_image_v2" "name_regex_oldest" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name_regex = "^Ubuntu .*" - sort_ascending = true -} - -data "stackit_image_v2" "filter_distro_version" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - filter = { - distro = "debian" - version = "11" - } -} \ No newline at end of file diff --git a/examples/data-sources/stackit_key_pair/data-source.tf b/examples/data-sources/stackit_key_pair/data-source.tf deleted file mode 100644 index 6fbd302b..00000000 --- a/examples/data-sources/stackit_key_pair/data-source.tf +++ /dev/null @@ -1,3 +0,0 @@ -data "stackit_key_pair" "example" { - name = "example-key-pair-name" -} diff --git a/examples/data-sources/stackit_kms_key/data-source.tf b/examples/data-sources/stackit_kms_key/data-source.tf deleted file mode 100644 index 17ab7214..00000000 --- a/examples/data-sources/stackit_kms_key/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_kms_key" "key" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - keyring_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - key_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_kms_keyring/data-source.tf b/examples/data-sources/stackit_kms_keyring/data-source.tf deleted file mode 100644 index 863c896a..00000000 --- a/examples/data-sources/stackit_kms_keyring/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_kms_keyring" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - keyring_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_kms_wrapping_key/data-source.tf b/examples/data-sources/stackit_kms_wrapping_key/data-source.tf deleted file mode 100644 index bb12e498..00000000 --- 
a/examples/data-sources/stackit_kms_wrapping_key/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_kms_wrapping_key" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - keyring_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - wrapping_key_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_loadbalancer/data-source.tf b/examples/data-sources/stackit_loadbalancer/data-source.tf deleted file mode 100644 index 52a04e6e..00000000 --- a/examples/data-sources/stackit_loadbalancer/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_loadbalancer" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-load-balancer" -} \ No newline at end of file diff --git a/examples/data-sources/stackit_logme_credential/data-source.tf b/examples/data-sources/stackit_logme_credential/data-source.tf deleted file mode 100644 index d6ea7216..00000000 --- a/examples/data-sources/stackit_logme_credential/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_logme_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_logme_instance/data-source.tf b/examples/data-sources/stackit_logme_instance/data-source.tf deleted file mode 100644 index 5fb2e57f..00000000 --- a/examples/data-sources/stackit_logme_instance/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_logme_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_machine_type/data-source.tf b/examples/data-sources/stackit_machine_type/data-source.tf deleted file mode 100644 index 6120b15f..00000000 --- a/examples/data-sources/stackit_machine_type/data-source.tf +++ /dev/null @@ -1,21 +0,0 @@ -data "stackit_machine_type" 
"two_vcpus_filter" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - filter = "vcpus==2" -} - -data "stackit_machine_type" "filter_sorted_ascending_false" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - filter = "vcpus >= 2 && ram >= 2048" - sort_ascending = false -} - -data "stackit_machine_type" "intel_icelake_generic_filter" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - filter = "extraSpecs.cpu==\"intel-icelake-generic\" && vcpus == 2" -} - -# returns warning -data "stackit_machine_type" "no_match" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - filter = "vcpus == 99" -} \ No newline at end of file diff --git a/examples/data-sources/stackit_mariadb_credential/data-source.tf b/examples/data-sources/stackit_mariadb_credential/data-source.tf deleted file mode 100644 index 7adeb138..00000000 --- a/examples/data-sources/stackit_mariadb_credential/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_mariadb_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_mongodbflex_instance/data-source.tf b/examples/data-sources/stackit_mongodbflex_instance/data-source.tf deleted file mode 100644 index cc6d6148..00000000 --- a/examples/data-sources/stackit_mongodbflex_instance/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_mongodbflex_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_network/data-source.tf b/examples/data-sources/stackit_network/data-source.tf deleted file mode 100644 index 6a932ba5..00000000 --- a/examples/data-sources/stackit_network/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_network" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_id = 
"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_network_area/data-source.tf b/examples/data-sources/stackit_network_area/data-source.tf deleted file mode 100644 index 74872c56..00000000 --- a/examples/data-sources/stackit_network_area/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_network_area" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_network_area_region/data-source.tf b/examples/data-sources/stackit_network_area_region/data-source.tf deleted file mode 100644 index f673f587..00000000 --- a/examples/data-sources/stackit_network_area_region/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_network_area_region" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_network_area_route/data-source.tf b/examples/data-sources/stackit_network_area_route/data-source.tf deleted file mode 100644 index 3f0db94d..00000000 --- a/examples/data-sources/stackit_network_area_route/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_network_area_route" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_route_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_network_interface/data-source.tf b/examples/data-sources/stackit_network_interface/data-source.tf deleted file mode 100644 index 2c223f40..00000000 --- a/examples/data-sources/stackit_network_interface/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_network_interface" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_interface_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} \ No newline at 
end of file diff --git a/examples/data-sources/stackit_objectstorage_bucket/data-source.tf b/examples/data-sources/stackit_objectstorage_bucket/data-source.tf deleted file mode 100644 index e8da2fe4..00000000 --- a/examples/data-sources/stackit_objectstorage_bucket/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_objectstorage_bucket" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-name" -} diff --git a/examples/data-sources/stackit_objectstorage_credential/data-source.tf b/examples/data-sources/stackit_objectstorage_credential/data-source.tf deleted file mode 100644 index d61e4e47..00000000 --- a/examples/data-sources/stackit_objectstorage_credential/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_objectstorage_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credentials_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_objectstorage_credentials_group/data-source.tf b/examples/data-sources/stackit_objectstorage_credentials_group/data-source.tf deleted file mode 100644 index 250795f8..00000000 --- a/examples/data-sources/stackit_objectstorage_credentials_group/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_objectstorage_credentials_group" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credentials_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_observability_instance/data-source.tf b/examples/data-sources/stackit_observability_instance/data-source.tf deleted file mode 100644 index 9606cf85..00000000 --- a/examples/data-sources/stackit_observability_instance/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git 
a/examples/data-sources/stackit_observability_logalertgroup/data-source.tf b/examples/data-sources/stackit_observability_logalertgroup/data-source.tf deleted file mode 100644 index fac8e26b..00000000 --- a/examples/data-sources/stackit_observability_logalertgroup/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_observability_logalertgroup" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-log-alert-group" -} diff --git a/examples/data-sources/stackit_observability_scrapeconfig/data-source.tf b/examples/data-sources/stackit_observability_scrapeconfig/data-source.tf deleted file mode 100644 index 2efccf14..00000000 --- a/examples/data-sources/stackit_observability_scrapeconfig/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_observability_scrapeconfig" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" -} diff --git a/examples/data-sources/stackit_opensearch_credential/data-source.tf b/examples/data-sources/stackit_opensearch_credential/data-source.tf deleted file mode 100644 index 0cc9149b..00000000 --- a/examples/data-sources/stackit_opensearch_credential/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_opensearch_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_opensearch_instance/data-source.tf b/examples/data-sources/stackit_opensearch_instance/data-source.tf deleted file mode 100644 index 980e3e49..00000000 --- a/examples/data-sources/stackit_opensearch_instance/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_opensearch_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff 
--git a/examples/data-sources/stackit_postgresflex_instance/data-source.tf b/examples/data-sources/stackit_postgresflex_instance/data-source.tf deleted file mode 100644 index c5e07e13..00000000 --- a/examples/data-sources/stackit_postgresflex_instance/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_postgresflex_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_public_ip/data-source.tf b/examples/data-sources/stackit_public_ip/data-source.tf deleted file mode 100644 index 731d9ed7..00000000 --- a/examples/data-sources/stackit_public_ip/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_public_ip" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - public_ip_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_public_ip_ranges/data-source.tf b/examples/data-sources/stackit_public_ip_ranges/data-source.tf deleted file mode 100644 index 78cc517a..00000000 --- a/examples/data-sources/stackit_public_ip_ranges/data-source.tf +++ /dev/null @@ -1,17 +0,0 @@ -data "stackit_public_ip_ranges" "example" {} - -# example usage: allow stackit services and customer vpn cidr to access observability apis -locals { - vpn_cidrs = ["X.X.X.X/32", "X.X.X.X/24"] -} - -resource "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - plan_name = "Observability-Monitoring-Medium-EU01" - # Allow all stackit services and customer vpn cidr to access observability apis - acl = concat(data.stackit_public_ip_ranges.example.cidr_list, local.vpn_cidrs) - metrics_retention_days = 90 - metrics_retention_days_5m_downsampling = 90 - metrics_retention_days_1h_downsampling = 90 -} \ No newline at end of file diff --git a/examples/data-sources/stackit_rabbitmq_credential/data-source.tf 
b/examples/data-sources/stackit_rabbitmq_credential/data-source.tf deleted file mode 100644 index d0d37058..00000000 --- a/examples/data-sources/stackit_rabbitmq_credential/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_rabbitmq_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_rabbitmq_instance/data-source.tf b/examples/data-sources/stackit_rabbitmq_instance/data-source.tf deleted file mode 100644 index 13ee22a1..00000000 --- a/examples/data-sources/stackit_rabbitmq_instance/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_rabbitmq_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_redis_credential/data-source.tf b/examples/data-sources/stackit_redis_credential/data-source.tf deleted file mode 100644 index 9f96c089..00000000 --- a/examples/data-sources/stackit_redis_credential/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_redis_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credential_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_redis_instance/data-source.tf b/examples/data-sources/stackit_redis_instance/data-source.tf deleted file mode 100644 index d0de5480..00000000 --- a/examples/data-sources/stackit_redis_instance/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_redis_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_resourcemanager_folder/data-source.tf b/examples/data-sources/stackit_resourcemanager_folder/data-source.tf deleted file mode 100644 
index a91313e9..00000000 --- a/examples/data-sources/stackit_resourcemanager_folder/data-source.tf +++ /dev/null @@ -1,3 +0,0 @@ -data "stackit_resourcemanager_folder" "example" { - container_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} \ No newline at end of file diff --git a/examples/data-sources/stackit_resourcemanager_project/data-source.tf b/examples/data-sources/stackit_resourcemanager_project/data-source.tf deleted file mode 100644 index 2aa4872d..00000000 --- a/examples/data-sources/stackit_resourcemanager_project/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_resourcemanager_project" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - container_id = "example-container-abc123" -} diff --git a/examples/data-sources/stackit_routing_table/data-source.tf b/examples/data-sources/stackit_routing_table/data-source.tf deleted file mode 100644 index 575ab17d..00000000 --- a/examples/data-sources/stackit_routing_table/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_routing_table" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - routing_table_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_routing_table_route/data-source.tf b/examples/data-sources/stackit_routing_table_route/data-source.tf deleted file mode 100644 index 630b9dec..00000000 --- a/examples/data-sources/stackit_routing_table_route/data-source.tf +++ /dev/null @@ -1,6 +0,0 @@ -data "stackit_routing_table_route" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - routing_table_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - route_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_routing_table_routes/data-source.tf b/examples/data-sources/stackit_routing_table_routes/data-source.tf deleted file mode 100644 index 
badf79c3..00000000 --- a/examples/data-sources/stackit_routing_table_routes/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_routing_table_routes" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - routing_table_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_routing_tables/data-source.tf b/examples/data-sources/stackit_routing_tables/data-source.tf deleted file mode 100644 index f71527d7..00000000 --- a/examples/data-sources/stackit_routing_tables/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_routing_tables" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_scf_organization/data-source.tf b/examples/data-sources/stackit_scf_organization/data-source.tf deleted file mode 100644 index 4d466602..00000000 --- a/examples/data-sources/stackit_scf_organization/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_scf_organization" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - org_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} \ No newline at end of file diff --git a/examples/data-sources/stackit_scf_organization_manager/data-source.tf b/examples/data-sources/stackit_scf_organization_manager/data-source.tf deleted file mode 100644 index 53487c9c..00000000 --- a/examples/data-sources/stackit_scf_organization_manager/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_scf_organization_manager" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - org_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} \ No newline at end of file diff --git a/examples/data-sources/stackit_scf_platform/data-source.tf b/examples/data-sources/stackit_scf_platform/data-source.tf deleted file mode 100644 index 0ddf316a..00000000 --- 
a/examples/data-sources/stackit_scf_platform/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_scf_platform" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - platform_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} \ No newline at end of file diff --git a/examples/data-sources/stackit_secretsmanager_instance/data-source.tf b/examples/data-sources/stackit_secretsmanager_instance/data-source.tf deleted file mode 100644 index 95be0533..00000000 --- a/examples/data-sources/stackit_secretsmanager_instance/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_secretsmanager_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_secretsmanager_user/data-source.tf b/examples/data-sources/stackit_secretsmanager_user/data-source.tf deleted file mode 100644 index 636917dd..00000000 --- a/examples/data-sources/stackit_secretsmanager_user/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_secretsmanager_user" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - user_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_security_group/data-source.tf b/examples/data-sources/stackit_security_group/data-source.tf deleted file mode 100644 index ebb69e53..00000000 --- a/examples/data-sources/stackit_security_group/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_security_group" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - security_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_security_group_rule/data-source.tf b/examples/data-sources/stackit_security_group_rule/data-source.tf deleted file mode 100644 index ad27c79d..00000000 --- a/examples/data-sources/stackit_security_group_rule/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data 
"stackit_security_group_rule" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - security_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - security_group_rule_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_server/data-source.tf b/examples/data-sources/stackit_server/data-source.tf deleted file mode 100644 index 16c231f5..00000000 --- a/examples/data-sources/stackit_server/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_server" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} \ No newline at end of file diff --git a/examples/data-sources/stackit_server_backup_schedule/data-source.tf b/examples/data-sources/stackit_server_backup_schedule/data-source.tf deleted file mode 100644 index c6c11269..00000000 --- a/examples/data-sources/stackit_server_backup_schedule/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_server_backup_schedule" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - backup_schedule_id = xxxxx -} diff --git a/examples/data-sources/stackit_server_backup_schedules/data-source.tf b/examples/data-sources/stackit_server_backup_schedules/data-source.tf deleted file mode 100644 index 079f0786..00000000 --- a/examples/data-sources/stackit_server_backup_schedules/data-source.tf +++ /dev/null @@ -1,4 +0,0 @@ -data "stackit_server_backup_schedules" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} diff --git a/examples/data-sources/stackit_server_update_schedule/data-source.tf b/examples/data-sources/stackit_server_update_schedule/data-source.tf deleted file mode 100644 index 694762a7..00000000 --- a/examples/data-sources/stackit_server_update_schedule/data-source.tf +++ /dev/null @@ -1,5 +0,0 @@ -data "stackit_server_update_schedule" "example" { - project_id = 
"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- update_schedule_id = xxxxx
-}
diff --git a/examples/data-sources/stackit_server_update_schedules/data-source.tf b/examples/data-sources/stackit_server_update_schedules/data-source.tf
deleted file mode 100644
index 1d291643..00000000
--- a/examples/data-sources/stackit_server_update_schedules/data-source.tf
+++ /dev/null
@@ -1,4 +0,0 @@
-data "stackit_server_update_schedules" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
diff --git a/examples/data-sources/stackit_service_account/data-source.tf b/examples/data-sources/stackit_service_account/data-source.tf
deleted file mode 100644
index bb658f11..00000000
--- a/examples/data-sources/stackit_service_account/data-source.tf
+++ /dev/null
@@ -1,4 +0,0 @@
-data "stackit_service_account" "sa" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- email = "sa01-8565oq1@sa.stackit.cloud"
-}
diff --git a/examples/data-sources/stackit_ske_cluster/data-source.tf b/examples/data-sources/stackit_ske_cluster/data-source.tf
deleted file mode 100644
index 6da899b2..00000000
--- a/examples/data-sources/stackit_ske_cluster/data-source.tf
+++ /dev/null
@@ -1,4 +0,0 @@
-data "stackit_ske_cluster" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-name"
-}
diff --git a/examples/data-sources/stackit_sqlserverflex_instance/data-source.tf b/examples/data-sources/stackit_sqlserverflex_instance/data-source.tf
deleted file mode 100644
index f31899f2..00000000
--- a/examples/data-sources/stackit_sqlserverflex_instance/data-source.tf
+++ /dev/null
@@ -1,4 +0,0 @@
-data "stackit_sqlserverflex_instance" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
diff --git a/examples/data-sources/stackit_sqlserverflex_user/data-source.tf b/examples/data-sources/stackit_sqlserverflex_user/data-source.tf
deleted file mode 100644
index 39e44e42..00000000
--- a/examples/data-sources/stackit_sqlserverflex_user/data-source.tf
+++ /dev/null
@@ -1,5 +0,0 @@
-data "stackit_sqlserverflex_user" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- user_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
diff --git a/examples/data-sources/stackit_volume/data-source.tf b/examples/data-sources/stackit_volume/data-source.tf
deleted file mode 100644
index ee380b0a..00000000
--- a/examples/data-sources/stackit_volume/data-source.tf
+++ /dev/null
@@ -1,4 +0,0 @@
-data "stackit_volume" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- volume_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
diff --git a/examples/data-sources/stackit_postgresflex_database/data-source.tf b/examples/data-sources/stackitprivatepreview_postgresflexalpha_database/data-source.tf
similarity index 64%
rename from examples/data-sources/stackit_postgresflex_database/data-source.tf
rename to examples/data-sources/stackitprivatepreview_postgresflexalpha_database/data-source.tf
index 224c1211..81f069ff 100644
--- a/examples/data-sources/stackit_postgresflex_database/data-source.tf
+++ b/examples/data-sources/stackitprivatepreview_postgresflexalpha_database/data-source.tf
@@ -1,4 +1,6 @@
-data "stackit_postgresflex_database" "example" {
+# Copyright (c) STACKIT
+
+data "stackitprivatepreview_postgresflexalpha_database" "example" {
 project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
diff --git a/examples/data-sources/stackit_observability_alertgroup/data-source.tf b/examples/data-sources/stackitprivatepreview_postgresflexalpha_instance/data-source.tf
similarity index 54%
rename from examples/data-sources/stackit_observability_alertgroup/data-source.tf
rename to examples/data-sources/stackitprivatepreview_postgresflexalpha_instance/data-source.tf
index 18dc3c0b..6485022a 100644
--- a/examples/data-sources/stackit_observability_alertgroup/data-source.tf
+++ b/examples/data-sources/stackitprivatepreview_postgresflexalpha_instance/data-source.tf
@@ -1,5 +1,6 @@
-data "stackit_observability_alertgroup" "example" {
+# Copyright (c) STACKIT
+
+data "stackitprivatepreview_postgresflexalpha_instance" "example" {
 project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-alert-group"
 }
diff --git a/examples/data-sources/stackit_mongodbflex_user/data-source.tf b/examples/data-sources/stackitprivatepreview_postgresflexalpha_user/data-source.tf
similarity index 65%
rename from examples/data-sources/stackit_mongodbflex_user/data-source.tf
rename to examples/data-sources/stackitprivatepreview_postgresflexalpha_user/data-source.tf
index 2bbdfc92..eed426b2 100644
--- a/examples/data-sources/stackit_mongodbflex_user/data-source.tf
+++ b/examples/data-sources/stackitprivatepreview_postgresflexalpha_user/data-source.tf
@@ -1,4 +1,6 @@
-data "stackit_mongodbflex_user" "example" {
+# Copyright (c) STACKIT
+
+data "stackitprivatepreview_postgresflexalpha_user" "example" {
 project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 user_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
diff --git a/examples/data-sources/stackit_mariadb_instance/data-source.tf b/examples/data-sources/stackitprivatepreview_sqlserverflexalpha_instance/data-source.tf
similarity index 54%
rename from examples/data-sources/stackit_mariadb_instance/data-source.tf
rename to examples/data-sources/stackitprivatepreview_sqlserverflexalpha_instance/data-source.tf
index 940c42db..75779eac 100644
--- a/examples/data-sources/stackit_mariadb_instance/data-source.tf
+++ b/examples/data-sources/stackitprivatepreview_sqlserverflexalpha_instance/data-source.tf
@@ -1,4 +1,6 @@
-data "stackit_mariadb_instance" "example" {
+# Copyright (c) STACKIT
+
+data "stackitprivatepreview_sqlserverflexalpha_instance" "example" {
 project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 }
diff --git a/examples/data-sources/stackit_postgresflex_user/data-source.tf b/examples/data-sources/stackitprivatepreview_sqlserverflexalpha_user/data-source.tf
similarity index 64%
rename from examples/data-sources/stackit_postgresflex_user/data-source.tf
rename to examples/data-sources/stackitprivatepreview_sqlserverflexalpha_user/data-source.tf
index 4bd9a45f..8ba5af78 100644
--- a/examples/data-sources/stackit_postgresflex_user/data-source.tf
+++ b/examples/data-sources/stackitprivatepreview_sqlserverflexalpha_user/data-source.tf
@@ -1,4 +1,6 @@
-data "stackit_postgresflex_user" "example" {
+# Copyright (c) STACKIT
+
+data "stackitprivatepreview_sqlserverflexalpha_user" "example" {
 project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 user_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
diff --git a/examples/ephemeral-resources/stackit_access_token/ephemeral-resource.tf b/examples/ephemeral-resources/stackit_access_token/ephemeral-resource.tf
deleted file mode 100644
index 4d2a7b59..00000000
--- a/examples/ephemeral-resources/stackit_access_token/ephemeral-resource.tf
+++ /dev/null
@@ -1,44 +0,0 @@
-provider "stackit" {
- default_region = "eu01"
- service_account_key_path = "/path/to/sa_key.json"
- enable_beta_resources = true
-}
-
-ephemeral "stackit_access_token" "example" {}
-
-locals {
- stackit_api_base_url = "https://iaas.api.stackit.cloud"
- public_ip_path = "/v2/projects/${var.project_id}/regions/${var.region}/public-ips"
-
- public_ip_payload = {
- labels = {
- key = "value"
- }
- }
-}
-
-# Docs: https://registry.terraform.io/providers/Mastercard/restapi/latest
-provider "restapi" {
- uri = local.stackit_api_base_url
- write_returns_object = true
-
- headers = {
- Authorization = "Bearer ${ephemeral.stackit_access_token.example.access_token}"
- Content-Type = "application/json"
- }
-
- create_method = "POST"
- update_method = "PATCH"
- destroy_method = "DELETE"
-}
-
-resource "restapi_object" "public_ip_restapi" {
- path = local.public_ip_path
- data = jsonencode(local.public_ip_payload)
-
- id_attribute = "id"
- read_method = "GET"
- create_method = "POST"
- update_method = "PATCH"
- destroy_method = "DELETE"
-}
diff --git a/examples/provider/provider.tf b/examples/provider/provider.tf
index 75b5cc76..85a2a39b 100644
--- a/examples/provider/provider.tf
+++ b/examples/provider/provider.tf
@@ -1,24 +1,26 @@
-provider "stackit" {
+# Copyright (c) STACKIT
+
+provider "stackitprivatepreview" {
 default_region = "eu01"
 }
 # Authentication
 # Token flow (scheduled for deprecation and will be removed on December 17, 2025)
-provider "stackit" {
+provider "stackitprivatepreview" {
 default_region = "eu01"
 service_account_token = var.service_account_token
 }
 # Key flow
-provider "stackit" {
+provider "stackitprivatepreview" {
 default_region = "eu01"
 service_account_key = var.service_account_key
 private_key = var.private_key
 }
 # Key flow (using path)
-provider "stackit" {
+provider "stackitprivatepreview" {
 default_region = "eu01"
 service_account_key_path = var.service_account_key_path
 private_key_path = var.private_key_path
diff --git a/examples/resources/stackit_affinity_group/resource.tf b/examples/resources/stackit_affinity_group/resource.tf
deleted file mode 100644
index b0e506ab..00000000
--- a/examples/resources/stackit_affinity_group/resource.tf
+++ /dev/null
@@ -1,11 +0,0 @@
-resource "stackit_affinity_group" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-affinity-group-name"
- policy = "hard-anti-affinity"
-}
-
-# Only use the import statement, if you want to import an existing affinity group
-import {
- to = stackit_affinity_group.import-example
- id = "${var.project_id},${var.region},${var.affinity_group_id}"
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_authorization_organization_role_assignment/resource.tf b/examples/resources/stackit_authorization_organization_role_assignment/resource.tf
deleted file mode 100644
index f717c334..00000000
--- a/examples/resources/stackit_authorization_organization_role_assignment/resource.tf
+++ /dev/null
@@ -1,11 +0,0 @@
-resource "stackit_authorization_organization_role_assignment" "example" {
- resource_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- role = "owner"
- subject = "john.doe@stackit.cloud"
-}
-
-# Only use the import statement, if you want to import an existing organization role assignment
-import {
- to = stackit_authorization_organization_role_assignment.import-example
- id = "${var.organization_id},${var.org_role_assignment_role},${var.org_role_assignment_subject}"
-}
diff --git a/examples/resources/stackit_authorization_project_role_assignment/resource.tf b/examples/resources/stackit_authorization_project_role_assignment/resource.tf
deleted file mode 100644
index a335c5fd..00000000
--- a/examples/resources/stackit_authorization_project_role_assignment/resource.tf
+++ /dev/null
@@ -1,11 +0,0 @@
-resource "stackit_authorization_project_role_assignment" "example" {
- resource_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- role = "owner"
- subject = "john.doe@stackit.cloud"
-}
-
-# Only use the import statement, if you want to import an existing project role assignment
-import {
- to = stackit_authorization_project_role_assignment.import-example
- id = "${var.project_id},${var.project_role_assignment_role},${var.project_role_assignment_subject}"
-}
diff --git a/examples/resources/stackit_cdn_custom_domain/resource.tf b/examples/resources/stackit_cdn_custom_domain/resource.tf
deleted file mode 100644
index 68ddfb96..00000000
--- a/examples/resources/stackit_cdn_custom_domain/resource.tf
+++ /dev/null
@@ -1,15 +0,0 @@
-resource "stackit_cdn_custom_domain" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- distribution_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "https://xxx.xxx"
- certificate = {
- certificate = "-----BEGIN CERTIFICATE-----\nY2VydGlmaWNhdGVfZGF0YQ==\n-----END CERTIFICATE---"
- private_key = "-----BEGIN RSA PRIVATE KEY-----\nY2VydGlmaWNhdGVfZGF0YQ==\n-----END RSA PRIVATE KEY---"
- }
-}
-
-# Only use the import statement, if you want to import an existing cdn custom domain
-import {
- to = stackit_cdn_custom_domain.import-example
- id = "${var.project_id},${var.distribution_id},${var.custom_domain_name}"
-}
diff --git a/examples/resources/stackit_cdn_distribution/resource.tf b/examples/resources/stackit_cdn_distribution/resource.tf
deleted file mode 100644
index e69a7e61..00000000
--- a/examples/resources/stackit_cdn_distribution/resource.tf
+++ /dev/null
@@ -1,24 +0,0 @@
-resource "stackit_cdn_distribution" "example_distribution" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- config = {
- backend = {
- type = "http"
- origin_url = "https://mybackend.onstackit.cloud"
- geofencing = {
- "https://mybackend.onstackit.cloud" = ["DE"]
- }
- }
- regions = ["EU", "US", "ASIA", "AF", "SA"]
- blocked_countries = ["DE", "AT", "CH"]
-
- optimizer = {
- enabled = true
- }
- }
-}
-
-# Only use the import statement, if you want to import an existing cdn distribution
-import {
- to = stackit_cdn_distribution.import-example
- id = "${var.project_id},${var.distribution_id}"
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_dns_record_set/resource.tf b/examples/resources/stackit_dns_record_set/resource.tf
deleted file mode 100644
index 96fe9443..00000000
--- a/examples/resources/stackit_dns_record_set/resource.tf
+++ /dev/null
@@ -1,14 +0,0 @@
-resource "stackit_dns_record_set" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- zone_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-record-set"
- type = "A"
- comment = "Example comment"
- records = ["1.2.3.4"]
-}
-
-# Only use the import statement, if you want to import an existing dns record set
-import {
- to = stackit_dns_record_set.import-example
- id = "${var.project_id},${var.zone_id},${var.record_set_id}"
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_dns_zone/resource.tf b/examples/resources/stackit_dns_zone/resource.tf
deleted file mode 100644
index 431e26b4..00000000
--- a/examples/resources/stackit_dns_zone/resource.tf
+++ /dev/null
@@ -1,16 +0,0 @@
-resource "stackit_dns_zone" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "Example zone"
- dns_name = "example-zone.com"
- contact_email = "aa@bb.ccc"
- type = "primary"
- acl = "192.168.0.0/24"
- description = "Example description"
- default_ttl = 1230
-}
-
-# Only use the import statement, if you want to import an existing dns zone
-import {
- to = stackit_dns_zone.import-example
- id = "${var.project_id},${var.zone_id}"
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_git/resource.tf b/examples/resources/stackit_git/resource.tf
deleted file mode 100644
index a7e02c82..00000000
--- a/examples/resources/stackit_git/resource.tf
+++ /dev/null
@@ -1,19 +0,0 @@
-resource "stackit_git" "git" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "git-example-instance"
-}
-
-resource "stackit_git" "git" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "git-example-instance"
- acl = [
- "0.0.0.0/0"
- ]
- flavor = "git-100"
-}
-
-# Only use the import statement, if you want to import an existing git resource
-import {
- to = stackit_git.import-example
- id = "${var.project_id},${var.git_instance_id}"
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_image/resource.tf b/examples/resources/stackit_image/resource.tf
deleted file mode 100644
index bf3bd692..00000000
--- a/examples/resources/stackit_image/resource.tf
+++ /dev/null
@@ -1,20 +0,0 @@
-resource "stackit_image" "example_image" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-image"
- disk_format = "qcow2"
- local_file_path = "./path/to/image.qcow2"
- min_disk_size = 10
- min_ram = 5
-}
-
-# Only use the import statement, if you want to import an existing image
-# Must set a configuration value for the local_file_path attribute as the provider has marked it as required.
-# Since this attribute is not fetched in general from the API call, after adding it this would replace your image resource after an terraform apply.
-# In order to prevent this you need to add:
-#lifecycle {
-# ignore_changes = [ local_file_path ]
-# }
-import {
- to = stackit_image.import-example
- id = "${var.project_id},${var.region},${var.image_id}"
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_key_pair/resource.tf b/examples/resources/stackit_key_pair/resource.tf
deleted file mode 100644
index 6f575bfe..00000000
--- a/examples/resources/stackit_key_pair/resource.tf
+++ /dev/null
@@ -1,11 +0,0 @@
-# Create a key pair
-resource "stackit_key_pair" "keypair" {
- name = "example-key-pair"
- public_key = chomp(file("path/to/id_rsa.pub"))
-}
-
-# Only use the import statement, if you want to import an existing key pair
-import {
- to = stackit_key_pair.import-example
- id = var.keypair_name
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_kms_key/resource.tf b/examples/resources/stackit_kms_key/resource.tf
deleted file mode 100644
index 878f6f68..00000000
--- a/examples/resources/stackit_kms_key/resource.tf
+++ /dev/null
@@ -1,8 +0,0 @@
-resource "stackit_kms_key" "key" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- keyring_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- display_name = "key-01"
- protection = "software"
- algorithm = "aes_256_gcm"
- purpose = "symmetric_encrypt_decrypt"
-}
diff --git a/examples/resources/stackit_kms_keyring/resource.tf b/examples/resources/stackit_kms_keyring/resource.tf
deleted file mode 100644
index 1efc90fa..00000000
--- a/examples/resources/stackit_kms_keyring/resource.tf
+++ /dev/null
@@ -1,5 +0,0 @@
-resource "stackit_kms_keyring" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- display_name = "example-name"
- description = "example description"
-}
diff --git a/examples/resources/stackit_kms_wrapping_key/resource.tf b/examples/resources/stackit_kms_wrapping_key/resource.tf
deleted file mode 100644
index 3850b8e1..00000000
--- a/examples/resources/stackit_kms_wrapping_key/resource.tf
+++ /dev/null
@@ -1,8 +0,0 @@
-resource "stackit_kms_wrapping_key" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- keyring_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- display_name = "example-name"
- protection = "software"
- algorithm = "rsa_2048_oaep_sha256"
- purpose = "wrap_symmetric_key"
-}
diff --git a/examples/resources/stackit_loadbalancer/resource.tf b/examples/resources/stackit_loadbalancer/resource.tf
deleted file mode 100644
index ac118c79..00000000
--- a/examples/resources/stackit_loadbalancer/resource.tf
+++ /dev/null
@@ -1,204 +0,0 @@
-# Create a network
-resource "stackit_network" "example_network" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-network"
- ipv4_nameservers = ["8.8.8.8"]
- ipv4_prefix = "192.168.0.0/25"
- labels = {
- "key" = "value"
- }
- routed = true
-}
-
-# Create a network interface
-resource "stackit_network_interface" "nic" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- network_id = stackit_network.example_network.network_id
-}
-
-# Create a public IP for the load balancer
-resource "stackit_public_ip" "public-ip" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- lifecycle {
- ignore_changes = [network_interface_id]
- }
-}
-
-# Create a key pair for accessing the server instance
-resource "stackit_key_pair" "keypair" {
- name = "example-key-pair"
- public_key = chomp(file("path/to/id_rsa.pub"))
-}
-
-# Create a server instance
-resource "stackit_server" "boot-from-image" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-server"
- boot_volume = {
- size = 64
- source_type = "image"
- source_id = "59838a89-51b1-4892-b57f-b3caf598ee2f" // Ubuntu 24.04
- }
- availability_zone = "xxxx-x"
- machine_type = "g2i.1"
- keypair_name = stackit_key_pair.keypair.name
- network_interfaces = [
- stackit_network_interface.nic.network_interface_id
- ]
-}
-
-# Create a load balancer
-resource "stackit_loadbalancer" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-load-balancer"
- plan_id = "p10"
- target_pools = [
- {
- name = "example-target-pool"
- target_port = 80
- targets = [
- {
- display_name = stackit_server.boot-from-image.name
- ip = stackit_network_interface.nic.ipv4
- }
- ]
- active_health_check = {
- healthy_threshold = 10
- interval = "3s"
- interval_jitter = "3s"
- timeout = "3s"
- unhealthy_threshold = 10
- }
- }
- ]
- listeners = [
- {
- display_name = "example-listener"
- port = 80
- protocol = "PROTOCOL_TCP"
- target_pool = "example-target-pool"
- tcp = {
- idle_timeout = "90s"
- }
- }
- ]
- networks = [
- {
- network_id = stackit_network.example_network.network_id
- role = "ROLE_LISTENERS_AND_TARGETS"
- }
- ]
- external_address = stackit_public_ip.public-ip.ip
- options = {
- private_network_only = false
- }
-}
-
-# This example demonstrates an advanced setup where the Load Balancer is in one
-# network and the target server is in another. This requires manual
-# security group configuration using the `disable_security_group_assignment`
-# and `security_group_id` attributes.
-
-# We create two separate networks: one for the load balancer and one for the target.
-resource "stackit_network" "lb_network" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "lb-network-example"
- ipv4_prefix = "192.168.10.0/25"
- ipv4_nameservers = ["8.8.8.8"]
-}
-
-resource "stackit_network" "target_network" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "target-network-example"
- ipv4_prefix = "192.168.10.0/25"
- ipv4_nameservers = ["8.8.8.8"]
-}
-
-resource "stackit_public_ip" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
-
-resource "stackit_loadbalancer" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-advanced-lb"
- external_address = stackit_public_ip.example.ip
-
- # Key setting for manual mode: disables automatic security group handling.
- disable_security_group_assignment = true
-
- networks = [{
- network_id = stackit_network.lb_network.network_id
- role = "ROLE_LISTENERS_AND_TARGETS"
- }]
-
- listeners = [{
- port = 80
- protocol = "PROTOCOL_TCP"
- target_pool = "cross-network-pool"
- }]
-
- target_pools = [{
- name = "cross-network-pool"
- target_port = 80
- targets = [{
- display_name = stackit_server.example.name
- ip = stackit_network_interface.nic.ipv4
- }]
- }]
-}
-
-# Create a new security group to be assigned to the target server.
-resource "stackit_security_group" "target_sg" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "target-sg-for-lb-access"
- description = "Allows ingress traffic from the example load balancer."
-}
-
-# Create a rule to allow traffic FROM the load balancer.
-# This rule uses the computed `security_group_id` of the load balancer.
-resource "stackit_security_group_rule" "allow_lb_ingress" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- security_group_id = stackit_security_group.target_sg.security_group_id
- direction = "ingress"
- protocol = {
- name = "tcp"
- }
-
- # This is the crucial link: it allows traffic from the LB's security group.
- remote_security_group_id = stackit_loadbalancer.example.security_group_id
-
- port_range = {
- min = 80
- max = 80
- }
-}
-
-resource "stackit_server" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-remote-target"
- machine_type = "g2i.2"
- availability_zone = "eu01-1"
-
- boot_volume = {
- source_type = "image"
- source_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- size = 10
- }
-
- network_interfaces = [
- stackit_network_interface.nic.network_interface_id
- ]
-}
-
-resource "stackit_network_interface" "nic" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- network_id = stackit_network.target_network.network_id
- security_group_ids = [stackit_security_group.target_sg.security_group_id]
-}
-# End of advanced example
-
-# Only use the import statement, if you want to import an existing loadbalancer
-import {
- to = stackit_loadbalancer.import-example
- id = "${var.project_id},${var.region},${var.loadbalancer_name}"
-}
diff --git a/examples/resources/stackit_loadbalancer_observability_credential/resource.tf b/examples/resources/stackit_loadbalancer_observability_credential/resource.tf
deleted file mode 100644
index 7b29fb3b..00000000
--- a/examples/resources/stackit_loadbalancer_observability_credential/resource.tf
+++ /dev/null
@@ -1,12 +0,0 @@
-resource "stackit_loadbalancer_observability_credential" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- display_name = "example-credentials"
- username = "example-user"
- password = "example-password"
-}
-
-# Only use the import statement, if you want to import an existing loadbalancer observability credential
-import {
- to = stackit_loadbalancer_observability_credential.import-example
- id = "${var.project_id},${var.region},${var.credentials_ref}"
-}
diff --git a/examples/resources/stackit_logme_credential/resource.tf b/examples/resources/stackit_logme_credential/resource.tf
deleted file mode 100644
index c0c0c720..00000000
--- a/examples/resources/stackit_logme_credential/resource.tf
+++ /dev/null
@@ -1,10 +0,0 @@
-resource "stackit_logme_credential" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
-
-# Only use the import statement, if you want to import an existing logme credential
-import {
- to = stackit_logme_credential.import-example
- id = "${var.project_id},${var.logme_instance_id},${var.logme_credentials_id}"
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_logme_instance/resource.tf b/examples/resources/stackit_logme_instance/resource.tf
deleted file mode 100644
index e95eeb01..00000000
--- a/examples/resources/stackit_logme_instance/resource.tf
+++ /dev/null
@@ -1,15 +0,0 @@
-resource "stackit_logme_instance" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-instance"
- version = "2"
- plan_name = "stackit-logme2-1.2.50-replica"
- parameters = {
- sgw_acl = "193.148.160.0/19,45.129.40.0/21,45.135.244.0/22"
- }
-}
-
-# Only use the import statement, if you want to import an existing logme instance
-import {
- to = stackit_logme_instance.import-example
- id = "${var.project_id},${var.logme_instance_id}"
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_mariadb_credential/resource.tf b/examples/resources/stackit_mariadb_credential/resource.tf
deleted file mode 100644
index 335e8e6a..00000000
--- a/examples/resources/stackit_mariadb_credential/resource.tf
+++ /dev/null
@@ -1,10 +0,0 @@
-resource "stackit_mariadb_credential" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-}
-
-# Only use the import statement, if you want to import an existing mariadb credential
-import {
- to = stackit_mariadb_credential.import-example
- id = "${var.project_id},${var.mariadb_instance_id},${var.mariadb_credential_id}"
-}
diff --git a/examples/resources/stackit_mariadb_instance/resource.tf b/examples/resources/stackit_mariadb_instance/resource.tf
deleted file mode 100644
index 7abe23ff..00000000
--- a/examples/resources/stackit_mariadb_instance/resource.tf
+++ /dev/null
@@ -1,15 +0,0 @@
-resource "stackit_mariadb_instance" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-instance"
- version = "10.11"
- plan_name = "stackit-mariadb-1.2.10-replica"
- parameters = {
- sgw_acl = "193.148.160.0/19,45.129.40.0/21,45.135.244.0/22"
- }
-}
-
-# Only use the import statement, if you want to import an existing mariadb instance
-import {
- to = stackit_mariadb_instance.import-example
- id = "${var.project_id},${var.mariadb_instance_id}"
-}
diff --git a/examples/resources/stackit_mongodbflex_instance/resource.tf b/examples/resources/stackit_mongodbflex_instance/resource.tf
deleted file mode 100644
index 6c62321f..00000000
--- a/examples/resources/stackit_mongodbflex_instance/resource.tf
+++ /dev/null
@@ -1,27 +0,0 @@
-resource "stackit_mongodbflex_instance" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-instance"
- acl = ["XXX.XXX.XXX.X/XX", "XX.XXX.XX.X/XX"]
- flavor = {
- cpu = 1
- ram = 4
- }
- replicas = 1
- storage = {
- class = "class"
- size = 10
- }
- version = "7.0"
- options = {
- type = "Single"
- snapshot_retention_days = 3
- point_in_time_window_hours = 30
- }
- backup_schedule = "0 0 * * *"
-}
-
-# Only use the import statement, if you want to import an existing mongodbflex instance
-import {
- to = stackit_mongodbflex_instance.import-example
- id = "${var.project_id},${var.region},${var.instance_id}"
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_mongodbflex_user/resource.tf b/examples/resources/stackit_mongodbflex_user/resource.tf
deleted file mode 100644
index df1bfac6..00000000
--- a/examples/resources/stackit_mongodbflex_user/resource.tf
+++ /dev/null
@@ -1,13 +0,0 @@
-resource "stackit_mongodbflex_user" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- username = "username"
- roles = ["role"]
- database = "database"
-}
-
-# Only use the import statement, if you want to import an existing mongodbflex user
-import {
- to = stackit_mongodbflex_user.import-example
- id = "${var.project_id},${var.region},${var.instance_id},${user_id}"
-}
diff --git a/examples/resources/stackit_network/resource.tf b/examples/resources/stackit_network/resource.tf
deleted file mode 100644
index f5760a04..00000000
--- a/examples/resources/stackit_network/resource.tf
+++ /dev/null
@@ -1,33 +0,0 @@
-resource "stackit_network" "example_with_name" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-with-name"
-}
-
-resource "stackit_network" "example_routed_network" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-routed-network"
- labels = {
- "key" = "value"
- }
- routed = true
-}
-
-resource "stackit_network" "example_non_routed_network" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-non-routed-network"
- ipv4_nameservers = ["1.2.3.4", "5.6.7.8"]
- ipv4_gateway = "10.1.2.3"
- ipv4_prefix = "10.1.2.0/24"
- labels = {
- "key" = "value"
- }
- routed = false
-}
-
-# Only use the import statement, if you want to import an existing network
-# Note: There will be a conflict which needs to be resolved manually.
-# These attributes cannot be configured together: [ipv4_prefix,ipv4_prefix_length,ipv4_gateway]
-import {
- to = stackit_network.import-example
- id = "${var.project_id},${var.region},${var.network_id}"
-}
diff --git a/examples/resources/stackit_network_area/resource.tf b/examples/resources/stackit_network_area/resource.tf
deleted file mode 100644
index a699e7ca..00000000
--- a/examples/resources/stackit_network_area/resource.tf
+++ /dev/null
@@ -1,13 +0,0 @@
-resource "stackit_network_area" "example" {
- organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-network-area"
- labels = {
- "key" = "value"
- }
-}
-
-# Only use the import statement, if you want to import an existing network area
-import {
- to = stackit_network_area.import-example
- id = "${var.organization_id},${var.network_area_id}"
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_network_area_region/resource.tf b/examples/resources/stackit_network_area_region/resource.tf
deleted file mode 100644
index bb876b86..00000000
--- a/examples/resources/stackit_network_area_region/resource.tf
+++ /dev/null
@@ -1,18 +0,0 @@
-resource "stackit_network_area_region" "example" {
- organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- ipv4 = {
- transfer_network = "10.1.2.0/24"
- network_ranges = [
- {
- prefix = "10.0.0.0/16"
- }
- ]
- }
-}
-
-# Only use the import statement, if you want to import an existing network area region
-import {
- to = stackit_network_area_region.import-example
- id = "${var.organization_id},${var.network_area_id},${var.region}"
-}
diff --git a/examples/resources/stackit_network_area_route/resource.tf b/examples/resources/stackit_network_area_route/resource.tf
deleted file mode 100644
index 91ea42d4..00000000
--- a/examples/resources/stackit_network_area_route/resource.tf
+++ /dev/null
@@ -1,21 +0,0 @@
-resource "stackit_network_area_route" "example" {
- organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- destination = {
- type = "cidrv4"
- value = "192.168.0.0/24"
- }
- next_hop = {
- type = "ipv4"
- value = "192.168.0.0"
- }
- labels = {
- "key" = "value"
- }
-}
-
-# Only use the import statement, if you want to import an existing network area route
-import {
- to = stackit_network_area_route.import-example
- id = "${var.organization_id},${var.network_area_id},${var.region},${var.network_area_route_id}"
-}
diff --git a/examples/resources/stackit_network_interface/resource.tf b/examples/resources/stackit_network_interface/resource.tf
deleted file mode 100644
index 2ff598ff..00000000
--- a/examples/resources/stackit_network_interface/resource.tf
+++ /dev/null
@@ -1,12 +0,0 @@
-resource "stackit_network_interface" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- network_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- allowed_addresses = ["192.168.0.0/24"]
- security_group_ids = ["xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"]
-}
-
-# Only use the import statement, if you want to import an existing network interface
-import {
- to = stackit_network_interface.import-example
- id = "${var.project_id},${var.region},${var.network_id},${var.network_interface_id}"
-}
\ No newline at end of file
diff --git a/examples/resources/stackit_objectstorage_bucket/resource.tf b/examples/resources/stackit_objectstorage_bucket/resource.tf
deleted file mode 100644
index e8c1922c..00000000
--- a/examples/resources/stackit_objectstorage_bucket/resource.tf
+++ /dev/null
@@ -1,10 +0,0 @@
-resource "stackit_objectstorage_bucket" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-bucket"
-}
-
-# Only use the import statement, if you want to import an existing objectstorage bucket
-import {
- to = stackit_objectstorage_bucket.import-example
- id = "${var.project_id},${var.region},${var.bucket_name}"
-}
diff --git a/examples/resources/stackit_objectstorage_credential/resource.tf b/examples/resources/stackit_objectstorage_credential/resource.tf
deleted file mode 100644
index 46e11717..00000000
--- a/examples/resources/stackit_objectstorage_credential/resource.tf
+++ /dev/null
@@ -1,11 +0,0 @@
-resource "stackit_objectstorage_credential" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- credentials_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- expiration_timestamp = "2027-01-02T03:04:05Z"
-}
-
-# Only use the import statement, if you want to import an existing objectstorage credential
-import {
- to = stackit_objectstorage_credential.import-example
- id = "${var.project_id},${var.region},${var.bucket_credentials_group_id},${var.bucket_credential_id}"
-}
diff --git a/examples/resources/stackit_objectstorage_credentials_group/resource.tf b/examples/resources/stackit_objectstorage_credentials_group/resource.tf
deleted file mode 100644
index 0d6b1e3e..00000000
--- a/examples/resources/stackit_objectstorage_credentials_group/resource.tf
+++ /dev/null
@@ -1,10 +0,0 @@
-resource "stackit_objectstorage_credentials_group" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-credentials-group"
-}
-
-# Only use the import statement, if you want to import an existing objectstorage credential group
-import {
- to = stackit_objectstorage_credentials_group.import-example
- id = "${var.project_id},${var.region},${var.bucket_credentials_group_id}"
-}
diff --git a/examples/resources/stackit_observability_alertgroup/resource.tf b/examples/resources/stackit_observability_alertgroup/resource.tf
deleted file mode 100644
index b4ab9cf3..00000000
--- a/examples/resources/stackit_observability_alertgroup/resource.tf
+++ /dev/null
@@ -1,38 +0,0 @@
-resource "stackit_observability_alertgroup" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- name = "example-alert-group"
- interval = "60s"
- rules = [
- {
- alert = "example-alert-name"
- expression = "kube_node_status_condition{condition=\"Ready\", status=\"false\"} > 0"
- for = "60s"
- labels = {
- severity = "critical"
- },
- annotations = {
- summary : "example summary"
- description : "example description"
- }
- },
- {
- alert = "example-alert-name-2"
- expression = "kube_node_status_condition{condition=\"Ready\", status=\"false\"} > 0"
- for = "1m"
- labels = {
- severity = "critical"
- },
- annotations = {
- summary : "example summary"
- description : "example description"
- }
- },
- ]
-}
-
-# Only use the import statement, if you want to import an existing observability alertgroup
-import {
- to = stackit_observability_alertgroup.import-example
- id = "${var.project_id},${var.observability_instance_id},${var.observability_alertgroup_name}"
-}
diff --git a/examples/resources/stackit_observability_credential/resource.tf b/examples/resources/stackit_observability_credential/resource.tf
deleted file mode 100644
index 9bc44457..00000000
--- a/examples/resources/stackit_observability_credential/resource.tf
+++ /dev/null
@@ -1,5 +0,0 @@
-resource "stackit_observability_credential" "example" {
- project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- description = "Description of the credential."
-} diff --git a/examples/resources/stackit_observability_instance/resource.tf b/examples/resources/stackit_observability_instance/resource.tf deleted file mode 100644 index 070704dd..00000000 --- a/examples/resources/stackit_observability_instance/resource.tf +++ /dev/null @@ -1,17 +0,0 @@ -resource "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - plan_name = "Observability-Starter-EU01" - acl = ["1.1.1.1/32", "2.2.2.2/32"] - logs_retention_days = 30 - traces_retention_days = 30 - metrics_retention_days = 90 - metrics_retention_days_5m_downsampling = 90 - metrics_retention_days_1h_downsampling = 90 -} - -# Only use the import statement, if you want to import an existing observability instance -import { - to = stackit_observability_instance.import-example - id = "${var.project_id},${var.observability_instance_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_observability_logalertgroup/resource.tf b/examples/resources/stackit_observability_logalertgroup/resource.tf deleted file mode 100644 index d2876a69..00000000 --- a/examples/resources/stackit_observability_logalertgroup/resource.tf +++ /dev/null @@ -1,38 +0,0 @@ -resource "stackit_observability_logalertgroup" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-log-alert-group" - interval = "60m" - rules = [ - { - alert = "example-log-alert-name" - expression = "sum(rate({namespace=\"example\", pod=\"logger\"} |= \"Simulated error message\" [1m])) > 0" - for = "60s" - labels = { - severity = "critical" - }, - annotations = { - summary : "example summary" - description : "example description" - } - }, - { - alert = "example-log-alert-name-2" - expression = "sum(rate({namespace=\"example\", pod=\"logger\"} |= \"Another error message\" [1m])) > 0" - for = "60s" - labels = { - severity = "critical" - }, - annotations = { - 
summary : "example summary" - description : "example description" - } - }, - ] -} - -# Only use the import statement, if you want to import an existing observability logalertgroup -import { - to = stackit_observability_logalertgroup.import-example - id = "${var.project_id},${var.observability_instance_id},${var.observability_logalertgroup_name}" -} diff --git a/examples/resources/stackit_observability_scrapeconfig/resource.tf b/examples/resources/stackit_observability_scrapeconfig/resource.tf deleted file mode 100644 index f0866184..00000000 --- a/examples/resources/stackit_observability_scrapeconfig/resource.tf +++ /dev/null @@ -1,23 +0,0 @@ -resource "stackit_observability_scrapeconfig" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-job" - metrics_path = "/my-metrics" - saml2 = { - enable_url_parameters = true - } - targets = [ - { - urls = ["url1", "urls2"] - labels = { - "url1" = "dev" - } - } - ] -} - -# Only use the import statement, if you want to import an existing observability scrapeconfig -import { - to = stackit_observability_scrapeconfig.import-example - id = "${var.project_id},${var.observability_instance_id},${var.observability_scrapeconfig_name}" -} diff --git a/examples/resources/stackit_opensearch_credential/resource.tf b/examples/resources/stackit_opensearch_credential/resource.tf deleted file mode 100644 index c9e6dcc5..00000000 --- a/examples/resources/stackit_opensearch_credential/resource.tf +++ /dev/null @@ -1,10 +0,0 @@ -resource "stackit_opensearch_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing opensearch credential -import { - to = stackit_opensearch_credential.import-example - id = "${var.project_id},${var.instance_id},${var.credential_id}" -} \ No newline at end of file diff --git 
a/examples/resources/stackit_opensearch_instance/resource.tf b/examples/resources/stackit_opensearch_instance/resource.tf deleted file mode 100644 index 043f7154..00000000 --- a/examples/resources/stackit_opensearch_instance/resource.tf +++ /dev/null @@ -1,15 +0,0 @@ -resource "stackit_opensearch_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - version = "2" - plan_name = "stackit-opensearch-1.2.10-replica" - parameters = { - sgw_acl = "193.148.160.0/19,45.129.40.0/21,45.135.244.0/22" - } -} - -# Only use the import statement, if you want to import an existing opensearch instance -import { - to = stackit_opensearch_instance.import-example - id = "${var.project_id},${var.instance_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_public_ip/resource.tf b/examples/resources/stackit_public_ip/resource.tf deleted file mode 100644 index 691cfc34..00000000 --- a/examples/resources/stackit_public_ip/resource.tf +++ /dev/null @@ -1,13 +0,0 @@ -resource "stackit_public_ip" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_interface_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing public ip -import { - to = stackit_public_ip.import-example - id = "${var.project_id},${var.region},${var.public_ip_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_public_ip_associate/resource.tf b/examples/resources/stackit_public_ip_associate/resource.tf deleted file mode 100644 index a025d0d7..00000000 --- a/examples/resources/stackit_public_ip_associate/resource.tf +++ /dev/null @@ -1,11 +0,0 @@ -resource "stackit_public_ip_associate" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - public_ip_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_interface_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if 
you want to import an existing public ip associate -import { - to = stackit_public_ip_associate.import-example - id = "${var.project_id},${var.region},${var.public_ip_id},${var.network_interface_id}" -} diff --git a/examples/resources/stackit_rabbitmq_credential/resource.tf b/examples/resources/stackit_rabbitmq_credential/resource.tf deleted file mode 100644 index cef13a72..00000000 --- a/examples/resources/stackit_rabbitmq_credential/resource.tf +++ /dev/null @@ -1,10 +0,0 @@ -resource "stackit_rabbitmq_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing rabbitmq credential -import { - to = stackit_rabbitmq_credential.import-example - id = "${var.project_id},${var.rabbitmq_instance_id},${var.rabbitmq_credential_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_rabbitmq_instance/resource.tf b/examples/resources/stackit_rabbitmq_instance/resource.tf deleted file mode 100644 index c559de44..00000000 --- a/examples/resources/stackit_rabbitmq_instance/resource.tf +++ /dev/null @@ -1,18 +0,0 @@ -resource "stackit_rabbitmq_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - version = "3.13" - plan_name = "stackit-rabbitmq-1.2.10-replica" - parameters = { - sgw_acl = "193.148.160.0/19,45.129.40.0/21,45.135.244.0/22" - consumer_timeout = 18000000 - enable_monitoring = false - plugins = ["rabbitmq_consistent_hash_exchange", "rabbitmq_federation", "rabbitmq_tracing"] - } -} - -# Only use the import statement, if you want to import an existing rabbitmq instance -import { - to = stackit_rabbitmq_instance.import-example - id = "${var.project_id},${var.rabbitmq_instance_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_redis_credential/resource.tf b/examples/resources/stackit_redis_credential/resource.tf deleted file mode 
100644 index 8c535cc8..00000000 --- a/examples/resources/stackit_redis_credential/resource.tf +++ /dev/null @@ -1,10 +0,0 @@ -resource "stackit_redis_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing redis credential -import { - to = stackit_redis_credential.import-example - id = "${var.project_id},${var.redis_instance_id},${var.redis_credential_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_redis_instance/resource.tf b/examples/resources/stackit_redis_instance/resource.tf deleted file mode 100644 index 6ccb7728..00000000 --- a/examples/resources/stackit_redis_instance/resource.tf +++ /dev/null @@ -1,18 +0,0 @@ -resource "stackit_redis_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - version = "7" - plan_name = "stackit-redis-1.2.10-replica" - parameters = { - sgw_acl = "193.148.160.0/19,45.129.40.0/21,45.135.244.0/22" - enable_monitoring = false - down_after_milliseconds = 30000 - syslog = ["logs4.your-syslog-endpoint.com:54321"] - } -} - -# Only use the import statement, if you want to import an existing redis instance -import { - to = stackit_redis_instance.import-example - id = "${var.project_id},${var.redis_instance_id}" -} diff --git a/examples/resources/stackit_resourcemanager_folder/resource.tf b/examples/resources/stackit_resourcemanager_folder/resource.tf deleted file mode 100644 index d4d782fa..00000000 --- a/examples/resources/stackit_resourcemanager_folder/resource.tf +++ /dev/null @@ -1,24 +0,0 @@ -resource "stackit_resourcemanager_folder" "example" { - name = "example-folder" - owner_email = "foo.bar@stackit.cloud" - parent_container_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Note: -# You can add projects under folders. 
-# However, when deleting a project, be aware: -# - Projects may remain "invisible" for up to 7 days after deletion -# - During this time, deleting the parent folder may fail because the project is still technically linked -resource "stackit_resourcemanager_project" "example_project" { - name = "example-project" - owner_email = "foo.bar@stackit.cloud" - parent_container_id = stackit_resourcemanager_folder.example.container_id -} - -# Only use the import statement, if you want to import an existing resourcemanager folder -# Note: There will be a conflict which needs to be resolved manually. -# Must set a configuration value for the owner_email attribute as the provider has marked it as required. -import { - to = stackit_resourcemanager_folder.import-example - id = var.container_id -} \ No newline at end of file diff --git a/examples/resources/stackit_resourcemanager_project/resource.tf b/examples/resources/stackit_resourcemanager_project/resource.tf deleted file mode 100644 index 37bcc4c0..00000000 --- a/examples/resources/stackit_resourcemanager_project/resource.tf +++ /dev/null @@ -1,17 +0,0 @@ -resource "stackit_resourcemanager_project" "example" { - parent_container_id = "example-parent-container-abc123" - name = "example-container" - labels = { - "Label 1" = "foo" - // "networkArea" = stackit_network_area.foo.network_area_id - } - owner_email = "john.doe@stackit.cloud" -} - -# Only use the import statement, if you want to import an existing resourcemanager project -# Note: There will be a conflict which needs to be resolved manually. -# Must set a configuration value for the owner_email attribute as the provider has marked it as required. 
-import { - to = stackit_resourcemanager_project.import-example - id = var.container_id -} diff --git a/examples/resources/stackit_routing_table/resource.tf b/examples/resources/stackit_routing_table/resource.tf deleted file mode 100644 index 18059992..00000000 --- a/examples/resources/stackit_routing_table/resource.tf +++ /dev/null @@ -1,14 +0,0 @@ -resource "stackit_routing_table" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing routing table -import { - to = stackit_routing_table.import-example - id = "${var.organization_id},${var.region},${var.network_area_id},${var.routing_table_id}" -} diff --git a/examples/resources/stackit_routing_table_route/resource.tf b/examples/resources/stackit_routing_table_route/resource.tf deleted file mode 100644 index 78ff9832..00000000 --- a/examples/resources/stackit_routing_table_route/resource.tf +++ /dev/null @@ -1,22 +0,0 @@ -resource "stackit_routing_table_route" "example" { - organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - routing_table_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - destination = { - type = "cidrv4" - value = "192.168.178.0/24" - } - next_hop = { - type = "ipv4" - value = "192.168.178.1" - } - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing routing table route -import { - to = stackit_routing_table_route.import-example - id = "${var.organization_id},${var.region},${var.network_area_id},${var.routing_table_id},${var.routing_table_route_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_scf_organization/resource.tf b/examples/resources/stackit_scf_organization/resource.tf deleted file mode 100644 index fc38820e..00000000 --- 
a/examples/resources/stackit_scf_organization/resource.tf +++ /dev/null @@ -1,18 +0,0 @@ -resource "stackit_scf_organization" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" -} - -resource "stackit_scf_organization" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - platform_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - quota_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - suspended = false -} - -# Only use the import statement, if you want to import an existing scf organization -import { - to = stackit_scf_organization.import-example - id = "${var.project_id},${var.region},${var.org_id}" -} diff --git a/examples/resources/stackit_scf_organization_manager/resource.tf b/examples/resources/stackit_scf_organization_manager/resource.tf deleted file mode 100644 index a16638a6..00000000 --- a/examples/resources/stackit_scf_organization_manager/resource.tf +++ /dev/null @@ -1,11 +0,0 @@ -resource "stackit_scf_organization_manager" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - org_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing scf organization manager -# The password field is still null after import and must be entered manually in the state.
-import { - to = stackit_scf_organization_manager.import-example - id = "${var.project_id},${var.region},${var.org_id},${var.user_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_secretsmanager_instance/resource.tf b/examples/resources/stackit_secretsmanager_instance/resource.tf deleted file mode 100644 index 1ece81cc..00000000 --- a/examples/resources/stackit_secretsmanager_instance/resource.tf +++ /dev/null @@ -1,11 +0,0 @@ -resource "stackit_secretsmanager_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - acls = ["XXX.XXX.XXX.X/XX", "XX.XXX.XX.X/XX"] -} - -# Only use the import statement, if you want to import an existing secretsmanager instance -import { - to = stackit_secretsmanager_instance.import-example - id = "${var.project_id},${var.secret_instance_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_secretsmanager_user/resource.tf b/examples/resources/stackit_secretsmanager_user/resource.tf deleted file mode 100644 index ec6b642b..00000000 --- a/examples/resources/stackit_secretsmanager_user/resource.tf +++ /dev/null @@ -1,12 +0,0 @@ -resource "stackit_secretsmanager_user" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - description = "Example user" - write_enabled = false -} - -# Only use the import statement, if you want to import an existing secretsmanager user -import { - to = stackit_secretsmanager_user.import-example - id = "${var.project_id},${var.secret_instance_id},${var.secret_user_id}" -} diff --git a/examples/resources/stackit_security_group/resource.tf b/examples/resources/stackit_security_group/resource.tf deleted file mode 100644 index 131cf639..00000000 --- a/examples/resources/stackit_security_group/resource.tf +++ /dev/null @@ -1,13 +0,0 @@ -resource "stackit_security_group" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = 
"my_security_group" - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing security group -import { - to = stackit_security_group.import-example - id = "${var.project_id},${var.security_group_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_security_group_rule/resource.tf b/examples/resources/stackit_security_group_rule/resource.tf deleted file mode 100644 index 6844d1ea..00000000 --- a/examples/resources/stackit_security_group_rule/resource.tf +++ /dev/null @@ -1,20 +0,0 @@ -resource "stackit_security_group_rule" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - security_group_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - direction = "ingress" - icmp_parameters = { - code = 0 - type = 8 - } - protocol = { - name = "icmp" - } -} - -# Only use the import statement, if you want to import an existing security group rule -# Note: There will be a conflict which needs to be resolved manually. -# Attribute "protocol.number" cannot be specified when "protocol.name" is specified. 
-import { - to = stackit_security_group_rule.import-example - id = "${var.project_id},${var.security_group_id},${var.security_group_rule_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_server/resource.tf b/examples/resources/stackit_server/resource.tf deleted file mode 100644 index 6fcb8ebc..00000000 --- a/examples/resources/stackit_server/resource.tf +++ /dev/null @@ -1,27 +0,0 @@ -resource "stackit_server" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "59838a89-51b1-4892-b57f-b3caf598ee2f" // Ubuntu 24.04 - } - availability_zone = "xxxx-x" - machine_type = "g2i.1" - network_interfaces = [ - stackit_network_interface.example.network_interface_id - ] -} - -# Only use the import statement, if you want to import an existing server -# Note: There will be a conflict which needs to be resolved manually. -# Must set a configuration value for the boot_volume.source_type and boot_volume.source_id attributes, as the provider has marked them as required. -# Since those attributes are generally not fetched from the API, adding them would cause terraform apply to replace your server resource.
-# In order to prevent this you need to add: -# lifecycle { -# ignore_changes = [ boot_volume ] -# } -import { - to = stackit_server.import-example - id = "${var.project_id},${var.region},${var.server_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_server_backup_schedule/resource.tf b/examples/resources/stackit_server_backup_schedule/resource.tf deleted file mode 100644 index 5e2ec0c6..00000000 --- a/examples/resources/stackit_server_backup_schedule/resource.tf +++ /dev/null @@ -1,18 +0,0 @@ -resource "stackit_server_backup_schedule" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example_backup_schedule_name" - rrule = "DTSTART;TZID=Europe/Sofia:20200803T023000 RRULE:FREQ=DAILY;INTERVAL=1" - enabled = true - backup_properties = { - name = "example_backup_name" - retention_period = 14 - volume_ids = null - } -} - -# Only use the import statement, if you want to import an existing server backup schedule -import { - to = stackit_server_backup_schedule.import-example - id = "${var.project_id},${var.region},${var.server_id},${var.server_backup_schedule_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_server_network_interface_attach/resource.tf b/examples/resources/stackit_server_network_interface_attach/resource.tf deleted file mode 100644 index 054421dd..00000000 --- a/examples/resources/stackit_server_network_interface_attach/resource.tf +++ /dev/null @@ -1,11 +0,0 @@ -resource "stackit_server_network_interface_attach" "attached_network_interface" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_interface_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing server network interface attachment -import { - to = stackit_server_network_interface_attach.import-example - id = 
"${var.project_id},${var.region},${var.server_id},${var.network_interface_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_server_service_account_attach/resource.tf b/examples/resources/stackit_server_service_account_attach/resource.tf deleted file mode 100644 index 0658f55d..00000000 --- a/examples/resources/stackit_server_service_account_attach/resource.tf +++ /dev/null @@ -1,11 +0,0 @@ -resource "stackit_server_service_account_attach" "attached_service_account" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - service_account_email = "service-account@stackit.cloud" -} - -# Only use the import statement, if you want to import an existing server service account attachment -import { - to = stackit_server_service_account_attach.import-example - id = "${var.project_id},${var.region},${var.server_id},${var.service_account_email}" -} \ No newline at end of file diff --git a/examples/resources/stackit_server_update_schedule/resource.tf b/examples/resources/stackit_server_update_schedule/resource.tf deleted file mode 100644 index bfc86d7b..00000000 --- a/examples/resources/stackit_server_update_schedule/resource.tf +++ /dev/null @@ -1,14 +0,0 @@ -resource "stackit_server_update_schedule" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example_update_schedule_name" - rrule = "DTSTART;TZID=Europe/Sofia:20200803T023000 RRULE:FREQ=DAILY;INTERVAL=1" - enabled = true - maintenance_window = 1 -} - -# Only use the import statement, if you want to import an existing server update schedule -import { - to = stackit_server_update_schedule.import-example - id = "${var.project_id},${var.region},${var.server_id},${var.server_update_schedule_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_server_volume_attach/resource.tf b/examples/resources/stackit_server_volume_attach/resource.tf deleted 
file mode 100644 index a503eabe..00000000 --- a/examples/resources/stackit_server_volume_attach/resource.tf +++ /dev/null @@ -1,11 +0,0 @@ -resource "stackit_server_volume_attach" "attached_volume" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - server_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - volume_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -# Only use the import statement, if you want to import an existing server volume attachment -import { - to = stackit_server_volume_attach.import-example - id = "${var.project_id},${var.region},${var.server_id},${var.volume_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_service_account/resource.tf b/examples/resources/stackit_service_account/resource.tf deleted file mode 100644 index 988cf345..00000000 --- a/examples/resources/stackit_service_account/resource.tf +++ /dev/null @@ -1,10 +0,0 @@ -resource "stackit_service_account" "sa" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "sa01" -} - -# Only use the import statement, if you want to import an existing service account -import { - to = stackit_service_account.import-example - id = "${var.project_id},${var.service_account_email}" -} \ No newline at end of file diff --git a/examples/resources/stackit_ske_cluster/resource.tf b/examples/resources/stackit_ske_cluster/resource.tf deleted file mode 100644 index cabf801a..00000000 --- a/examples/resources/stackit_ske_cluster/resource.tf +++ /dev/null @@ -1,27 +0,0 @@ -resource "stackit_ske_cluster" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - kubernetes_version_min = "x.x" - node_pools = [ - { - name = "np-example" - machine_type = "x.x" - os_version = "x.x.x" - minimum = "2" - maximum = "3" - availability_zones = ["eu01-3"] - } - ] - maintenance = { - enable_kubernetes_version_updates = true - enable_machine_image_version_updates = true - start = "01:00:00Z" - end = "02:00:00Z" - } -} - -# Only use the import statement, if 
you want to import an existing ske cluster -import { - to = stackit_ske_cluster.import-example - id = "${var.project_id},${var.region},${var.ske_name}" -} \ No newline at end of file diff --git a/examples/resources/stackit_ske_kubeconfig/resource.tf b/examples/resources/stackit_ske_kubeconfig/resource.tf deleted file mode 100644 index a71b59aa..00000000 --- a/examples/resources/stackit_ske_kubeconfig/resource.tf +++ /dev/null @@ -1,8 +0,0 @@ -resource "stackit_ske_kubeconfig" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - cluster_name = "example-cluster" - - refresh = true - expiration = 7200 # 2 hours - refresh_before = 3600 # 1 hour -} diff --git a/examples/resources/stackit_volume/resource.tf b/examples/resources/stackit_volume/resource.tf deleted file mode 100644 index 7a5c28ec..00000000 --- a/examples/resources/stackit_volume/resource.tf +++ /dev/null @@ -1,15 +0,0 @@ -resource "stackit_volume" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "my_volume" - availability_zone = "eu01-1" - size = 64 - labels = { - "key" = "value" - } -} - -# Only use the import statement, if you want to import an existing volume -import { - to = stackit_volume.import-example - id = "${var.project_id},${var.region},${var.volume_id}" -} \ No newline at end of file diff --git a/examples/resources/stackit_postgresflex_database/resource.tf b/examples/resources/stackitprivatepreview_postgresflexalpha_database/resource.tf similarity index 68% rename from examples/resources/stackit_postgresflex_database/resource.tf rename to examples/resources/stackitprivatepreview_postgresflexalpha_database/resource.tf index 5388e1a2..4dac8bfe 100644 --- a/examples/resources/stackit_postgresflex_database/resource.tf +++ b/examples/resources/stackitprivatepreview_postgresflexalpha_database/resource.tf @@ -1,4 +1,6 @@ -resource "stackit_postgresflex_database" "example" { +# Copyright (c) STACKIT + +resource "stackitprivatepreview_postgresflexalpha_database" 
"example" { project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" name = "mydb" @@ -7,6 +9,6 @@ resource "stackit_postgresflex_database" "example" { # Only use the import statement, if you want to import an existing postgresflex database import { - to = stackit_postgresflex_database.import-example + to = stackitprivatepreview_postgresflexalpha_database.import-example id = "${var.project_id},${var.region},${var.postgres_instance_id},${var.postgres_database_id}" } \ No newline at end of file diff --git a/examples/resources/stackit_postgresflex_instance/resource.tf b/examples/resources/stackitprivatepreview_postgresflexalpha_instance/resource.tf similarity index 74% rename from examples/resources/stackit_postgresflex_instance/resource.tf rename to examples/resources/stackitprivatepreview_postgresflexalpha_instance/resource.tf index 46a8d051..e65f8073 100644 --- a/examples/resources/stackit_postgresflex_instance/resource.tf +++ b/examples/resources/stackitprivatepreview_postgresflexalpha_instance/resource.tf @@ -1,4 +1,6 @@ -resource "stackit_postgresflex_instance" "example" { +# Copyright (c) STACKIT + +resource "stackitprivatepreview_postgresflexalpha_instance" "example" { project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" name = "example-instance" acl = ["XXX.XXX.XXX.X/XX", "XX.XXX.XX.X/XX"] @@ -17,6 +19,6 @@ resource "stackit_postgresflex_instance" "example" { # Only use the import statement, if you want to import an existing postgresflex instance import { - to = stackit_postgresflex_instance.import-example + to = stackitprivatepreview_postgresflexalpha_instance.import-example id = "${var.project_id},${var.region},${var.postgres_instance_id}" } \ No newline at end of file diff --git a/examples/resources/stackit_postgresflex_user/resource.tf b/examples/resources/stackitprivatepreview_postgresflexalpha_user/resource.tf similarity index 68% rename from examples/resources/stackit_postgresflex_user/resource.tf rename 
to examples/resources/stackitprivatepreview_postgresflexalpha_user/resource.tf index 1521fa45..5ab8c922 100644 --- a/examples/resources/stackit_postgresflex_user/resource.tf +++ b/examples/resources/stackitprivatepreview_postgresflexalpha_user/resource.tf @@ -1,4 +1,6 @@ -resource "stackit_postgresflex_user" "example" { +# Copyright (c) STACKIT + +resource "stackitprivatepreview_postgresflexalpha_user" "example" { project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" username = "username" @@ -7,6 +9,6 @@ resource "stackit_postgresflex_user" "example" { # Only use the import statement, if you want to import an existing postgresflex user import { - to = stackit_postgresflex_user.import-example + to = stackitprivatepreview_postgresflexalpha_user.import-example id = "${var.project_id},${var.region},${var.postgres_instance_id},${var.user_id}" } \ No newline at end of file diff --git a/examples/resources/stackit_sqlserverflex_instance/resource.tf b/examples/resources/stackitprivatepreview_sqlserverflexalpha_instance/resource.tf similarity index 73% rename from examples/resources/stackit_sqlserverflex_instance/resource.tf rename to examples/resources/stackitprivatepreview_sqlserverflexalpha_instance/resource.tf index f18f9b12..059948d0 100644 --- a/examples/resources/stackit_sqlserverflex_instance/resource.tf +++ b/examples/resources/stackitprivatepreview_sqlserverflexalpha_instance/resource.tf @@ -1,4 +1,6 @@ -resource "stackit_sqlserverflex_instance" "example" { +# Copyright (c) STACKIT + +resource "stackitprivatepreview_sqlserverflexalpha_instance" "example" { project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" name = "example-instance" acl = ["XXX.XXX.XXX.X/XX", "XX.XXX.XX.X/XX"] @@ -16,6 +18,6 @@ resource "stackit_sqlserverflex_instance" "example" { # Only use the import statement, if you want to import an existing sqlserverflex instance import { - to = stackit_sqlserverflex_instance.import-example + to = 
stackitprivatepreview_sqlserverflexalpha_instance.import-example id = "${var.project_id},${var.region},${var.sql_instance_id}" } diff --git a/examples/resources/stackit_sqlserverflex_user/resource.tf b/examples/resources/stackitprivatepreview_sqlserverflexalpha_user/resource.tf similarity index 67% rename from examples/resources/stackit_sqlserverflex_user/resource.tf rename to examples/resources/stackitprivatepreview_sqlserverflexalpha_user/resource.tf index 98ebcfab..b328576c 100644 --- a/examples/resources/stackit_sqlserverflex_user/resource.tf +++ b/examples/resources/stackitprivatepreview_sqlserverflexalpha_user/resource.tf @@ -1,4 +1,6 @@ -resource "stackit_sqlserverflex_user" "example" { +# Copyright (c) STACKIT + +resource "stackitprivatepreview_sqlserverflexalpha_user" "example" { project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" instance_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" username = "username" @@ -7,6 +9,6 @@ resource "stackit_sqlserverflex_user" "example" { # Only use the import statement, if you want to import an existing sqlserverflex user import { - to = stackit_sqlserverflex_user.import-example + to = stackitprivatepreview_sqlserverflexalpha_user.import-example id = "${var.project_id},${var.region},${var.sql_instance_id},${var.sql_user_id}" } \ No newline at end of file diff --git a/go.mod b/go.mod index c938e177..d3409ce0 100644 --- a/go.mod +++ b/go.mod @@ -43,9 +43,63 @@ require ( ) require ( + github.com/AlecAivazis/survey/v2 v2.3.7 // indirect + github.com/BurntSushi/toml v1.2.1 // indirect + github.com/Kunde21/markdownfmt/v3 v3.1.0 // indirect + github.com/Masterminds/goutils v1.1.1 // indirect + github.com/Masterminds/semver/v3 v3.2.0 // indirect + github.com/Masterminds/sprig/v3 v3.2.3 // indirect + github.com/armon/go-radix v1.0.0 // indirect + github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef // indirect + github.com/bgentry/speakeasy v0.1.0 // indirect + github.com/bmatcuk/doublestar/v4 v4.9.1 // indirect + 
github.com/bradleyfalzon/ghinstallation/v2 v2.5.0 // indirect + github.com/cli/go-gh/v2 v2.11.2 // indirect + github.com/cli/safeexec v1.0.0 // indirect + github.com/fsnotify/fsnotify v1.5.4 // indirect + github.com/go-openapi/errors v0.20.2 // indirect + github.com/go-openapi/strfmt v0.21.3 // indirect + github.com/golang-jwt/jwt/v4 v4.5.1 // indirect + github.com/google/go-github/v45 v45.2.0 // indirect + github.com/google/go-github/v53 v53.0.0 // indirect + github.com/google/go-querystring v1.1.0 // indirect + github.com/hashicorp/cli v1.1.7 // indirect + github.com/hashicorp/copywrite v0.22.0 // indirect github.com/hashicorp/go-retryablehttp v0.7.7 // indirect + github.com/hashicorp/hcl v1.0.0 // indirect + github.com/hashicorp/terraform-plugin-docs v0.24.0 // indirect + github.com/huandu/xstrings v1.3.3 // indirect + github.com/imdario/mergo v0.3.15 // indirect + github.com/inconshreveable/mousetrap v1.0.1 // indirect + github.com/jedib0t/go-pretty v4.3.0+incompatible // indirect + github.com/jedib0t/go-pretty/v6 v6.4.6 // indirect + github.com/joho/godotenv v1.3.0 // indirect + github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect + github.com/knadh/koanf v1.5.0 // indirect github.com/kr/text v0.2.0 // indirect + github.com/mattn/go-runewidth v0.0.15 // indirect + github.com/mergestat/timediff v0.0.3 // indirect + github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect + github.com/mitchellh/go-homedir v1.1.0 // indirect + github.com/oklog/ulid v1.3.1 // indirect + github.com/posener/complete v1.2.3 // indirect + github.com/rivo/uniseg v0.4.7 // indirect + github.com/samber/lo v1.37.0 // indirect + github.com/shopspring/decimal v1.3.1 // indirect + github.com/spf13/cast v1.5.0 // indirect + github.com/spf13/cobra v1.6.1 // indirect + github.com/spf13/pflag v1.0.5 // indirect + github.com/thanhpk/randstr v1.0.4 // indirect + github.com/yuin/goldmark v1.7.7 // indirect + github.com/yuin/goldmark-meta v1.1.0 // indirect + 
go.abhg.dev/goldmark/frontmatter v0.2.0 // indirect + go.mongodb.org/mongo-driver v1.10.0 // indirect + golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df // indirect + golang.org/x/oauth2 v0.30.0 // indirect golang.org/x/telemetry v0.0.0-20251203150158-8fff8a5912fc // indirect + golang.org/x/term v0.38.0 // indirect + gopkg.in/yaml.v2 v2.4.0 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect ) require ( diff --git a/go.sum b/go.sum index 897ebf8d..b87cbdf3 100644 --- a/go.sum +++ b/go.sum @@ -1,92 +1,262 @@ +cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +cloud.google.com/go/compute/metadata v0.2.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k= dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk= dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk= +github.com/AlecAivazis/survey/v2 v2.3.7 h1:6I/u8FvytdGsgonrYsVn2t8t4QiRnh6QSTqkkhIiSjQ= +github.com/AlecAivazis/survey/v2 v2.3.7/go.mod h1:xUTIdE4KCOIjsBAE1JYsUPoCqYdZ1reCfTwbto0Fduo= +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/BurntSushi/toml v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak= +github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ= +github.com/Kunde21/markdownfmt/v3 v3.1.0 h1:KiZu9LKs+wFFBQKhrZJrFZwtLnCCWJahL+S+E/3VnM0= +github.com/Kunde21/markdownfmt/v3 v3.1.0/go.mod h1:tPXN1RTyOzJwhfHoon9wUr4HGYmWgVxSQN6VBJDkrVc= +github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI= +github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= +github.com/Masterminds/semver/v3 v3.2.0 h1:3MEsd0SM6jqZojhjLWWeBY+Kcjy9i6MQAeY7YgDP83g= +github.com/Masterminds/semver/v3 v3.2.0/go.mod h1:qvl/7zhW3nngYb5+80sSMF+FG2BjYrf8m9wsX0PNOMQ= +github.com/Masterminds/sprig/v3 v3.2.3 
h1:eL2fZNezLomi0uOLqjQoN6BfsDD+fyLtgbJMAj9n6YA= +github.com/Masterminds/sprig/v3 v3.2.3/go.mod h1:rXcFaZ2zZbLRJv/xSysmlgIM1u11eBaRMhvYXJNkGuM= github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY= github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU= +github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w= +github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8/go.mod h1:I0gYDMZ6Z5GRU7l58bNFSkPTFN6Yl12dsUlAZ8xy98g= github.com/ProtonMail/go-crypto v1.1.6 h1:ZcV+Ropw6Qn0AX9brlQLAUXfqLBc7Bl+f/DmNxpLfdw= github.com/ProtonMail/go-crypto v1.1.6/go.mod h1:rA3QumHc/FZ8pAHreoekgiAbzpNsfQAosU5td4SnOrE= github.com/agext/levenshtein v1.2.2 h1:0S/Yg6LYmFJ5stwQeRp6EeOcCbj7xiqQSdNelsXvaqE= github.com/agext/levenshtein v1.2.2/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558= +github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= +github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= +github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= +github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= +github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho= +github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= github.com/apparentlymart/go-textseg/v12 v12.0.0/go.mod h1:S/4uRK2UtaQttw1GenVJEynmyUenKwP++x/+DdGV/Ec= github.com/apparentlymart/go-textseg/v15 v15.0.0 h1:uYvfpb3DyLSCGWnctWKGj857c6ew1u1fNQOlOtuGxQY= github.com/apparentlymart/go-textseg/v15 v15.0.0/go.mod h1:K8XmNZdhEBkdlyDdvbmmsvpAG721bKi0joRfFdHIWJ4= +github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod 
h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o= +github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY= +github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= +github.com/armon/go-radix v1.0.0 h1:F4z6KzEeeQIMeLFa97iZU6vupzoecKdU5TX24SNppXI= +github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= +github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef h1:46PFijGLmAjMPwCCCo7Jf0W6f9slllCkkv7vyc1yOSg= +github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw= +github.com/aws/aws-sdk-go-v2 v1.9.2/go.mod h1:cK/D0BBs0b/oWPIcX/Z/obahJK1TT7IPVjy53i/mX/4= +github.com/aws/aws-sdk-go-v2/config v1.8.3/go.mod h1:4AEiLtAb8kLs7vgw2ZV3p2VZ1+hBavOc84hqxVNpCyw= +github.com/aws/aws-sdk-go-v2/credentials v1.4.3/go.mod h1:FNNC6nQZQUuyhq5aE5c7ata8o9e4ECGmS4lAXC7o1mQ= +github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.6.0/go.mod h1:gqlclDEZp4aqJOancXK6TN24aKhT0W0Ae9MHk3wzTMM= +github.com/aws/aws-sdk-go-v2/internal/ini v1.2.4/go.mod h1:ZcBrrI3zBKlhGFNYWvju0I3TR93I7YIgAfy82Fh4lcQ= +github.com/aws/aws-sdk-go-v2/service/appconfig v1.4.2/go.mod h1:FZ3HkCe+b10uFZZkFdvf98LHW21k49W8o8J366lqVKY= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.3.2/go.mod h1:72HRZDLMtmVQiLG2tLfQcaWLCssELvGl+Zf2WVxMmR8= +github.com/aws/aws-sdk-go-v2/service/sso v1.4.2/go.mod h1:NBvT9R1MEF+Ud6ApJKM0G+IkPchKS7p7c2YPKwHmBOk= +github.com/aws/aws-sdk-go-v2/service/sts v1.7.2/go.mod h1:8EzeIqfWt2wWT4rJVu3f21TfrhJ8AEMzVybRNSb/b4g= +github.com/aws/smithy-go v1.8.0/go.mod h1:SObp3lf9smib00L/v3U2eAKG8FyQ7iLrJnQiAmR5n+E= +github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= +github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= +github.com/beorn7/perks v1.0.1/go.mod 
h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/bgentry/speakeasy v0.1.0 h1:ByYyxL9InA1OWqxJqqp2A5pYHUrCiAL6K3J+LKSsQkY= +github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= +github.com/bmatcuk/doublestar/v4 v4.6.0 h1:HTuxyug8GyFbRkrffIpzNCSK4luc0TY3wzXvzIZhEXc= +github.com/bmatcuk/doublestar/v4 v4.6.0/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc= +github.com/bmatcuk/doublestar/v4 v4.9.1 h1:X8jg9rRZmJd4yRy7ZeNDRnM+T3ZfHv15JiBJ/avrEXE= +github.com/bmatcuk/doublestar/v4 v4.9.1/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc= +github.com/bradleyfalzon/ghinstallation/v2 v2.5.0 h1:yaYcGQ7yEIGbsJfW/9z7v1sLiZg/5rSNNXwmMct5XaE= +github.com/bradleyfalzon/ghinstallation/v2 v2.5.0/go.mod h1:amcvPQMrRkWNdueWOjPytGL25xQGzox7425qMgzo+Vo= github.com/bufbuild/protocompile v0.14.1 h1:iA73zAf/fyljNjQKwYzUHD6AD4R8KMasmwa/FBatYVw= github.com/bufbuild/protocompile v0.14.1/go.mod h1:ppVdAIhbr2H8asPk6k4pY7t9zB1OU5DoEw9xY/FUi1c= +github.com/bwesterb/go-ristretto v1.2.0/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0= +github.com/bwesterb/go-ristretto v1.2.3/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0= +github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= +github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/cli/go-gh/v2 v2.11.2 h1:oad1+sESTPNTiTvh3I3t8UmxuovNDxhwLzeMHk45Q9w= +github.com/cli/go-gh/v2 v2.11.2/go.mod h1:vVFhi3TfjseIW26ED9itAR8gQK0aVThTm8sYrsZ5QTI= +github.com/cli/safeexec v1.0.0 h1:0VngyaIyqACHdcMNWfo6+KdUYnqEr2Sg+bSP1pdF+dI= +github.com/cli/safeexec v1.0.0/go.mod h1:Z/D4tTN8Vs5gXYHDCbaM1S/anmEDnJb1iW0+EJ5zx3Q= +github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/cloudflare/circl v1.1.0/go.mod h1:prBCrKB9DV4poKZY1l9zBXg2QJY7mvgRvtMxxK7fi4I= +github.com/cloudflare/circl v1.3.3/go.mod h1:5XYMA4rFBvNIrhs50XuiBJ15vF2pZn4nnUKZrLbUZFA= 
github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ0= github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs= +github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= +github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= +github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= +github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc= +github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= +github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4= github.com/cyphar/filepath-securejoin v0.4.1 h1:JyxxyPEaktOD+GAnqIqTf9A8tHyAG22rowi7HkoSU1s= github.com/cyphar/filepath-securejoin v0.4.1/go.mod h1:Sdj7gXlvMcPZsbhwhQ33GguGLDGQL7h7bg04C/+u9jI= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc= github.com/emirpasic/gods v1.18.1/go.mod h1:8tpGGwCnJ5H4r6BWwaV6OrWmMoPhUl5jm/FMNAnJvWQ= +github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= +github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= +github.com/envoyproxy/go-control-plane 
v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= +github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk= +github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= +github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= +github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU= github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk= github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M= +github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= +github.com/fsnotify/fsnotify v1.5.4 h1:jRbGcIw6P2Meqdwuo0H1p6JVLbL5DHKAKlYndzMwVZI= +github.com/fsnotify/fsnotify v1.5.4/go.mod h1:OVB6XrOHzAwXMpEM7uPOzcehqUV2UqJxmVXmkdnm1bU= +github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66DAb0lQFJrpS6731Oaa12ikc+DiI= github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic= github.com/go-git/go-billy/v5 v5.6.2 h1:6Q86EsPXMa7c3YZ3aLAQsMA0VlWmy43r6FHqa/UNbRM= github.com/go-git/go-billy/v5 v5.6.2/go.mod h1:rcFC2rAsp/erv7CMz9GczHcuD0D32fWzH+MJAU+jaUU= github.com/go-git/go-git/v5 v5.14.0 h1:/MD3lCrGjCen5WfEAzKg00MJJffKhC8gzS80ycmCi60= github.com/go-git/go-git/v5 v5.14.0/go.mod h1:Z5Xhoia5PcWA3NF8vRLURn9E5FRhSl7dGj9ItW3Wk5k= +github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= +github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= +github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY= 
+github.com/go-ldap/ldap v3.0.2+incompatible/go.mod h1:qfd9rJvER9Q0/D/Sqn1DfHRoBp40uXYvFoEVrNEPqRc= +github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= +github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= +github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A= github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= +github.com/go-openapi/errors v0.20.2 h1:dxy7PGTqEh94zj2E3h1cUmQQWiM1+aeCROfAr02EmK8= +github.com/go-openapi/errors v0.20.2/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M= +github.com/go-openapi/strfmt v0.21.3 h1:xwhj5X6CjXEZZHMWy1zKJxvW9AfHC9pkyUjLvHtKG7o= +github.com/go-openapi/strfmt v0.21.3/go.mod h1:k+RzNO0Da+k3FrrynSNN8F7n/peCmQQqbbXjtDfvmGg= +github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= +github.com/go-test/deep v1.0.2-0.20181118220953-042da051cf31/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA= github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68= github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA= +github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= +github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= +github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= +github.com/golang-jwt/jwt/v4 v4.5.1 h1:JdqV9zKUdtaa9gdPlywC3aeoEsR681PlKC+4F5gQgeo= +github.com/golang-jwt/jwt/v4 v4.5.1/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= github.com/golang-jwt/jwt/v5 
v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo= github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ= github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw= +github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= +github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= +github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= +github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= +github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= +github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= +github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= +github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= +github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= github.com/golang/protobuf v1.5.2/go.mod 
h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= +github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= +github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= +github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= +github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/go-github/v45 v45.2.0 h1:5oRLszbrkvxDDqBCNj2hjDZMKmvexaZ1xw/FCD+K3FI= +github.com/google/go-github/v45 v45.2.0/go.mod h1:FObaZJEDSTa/WGCzZ2Z3eoCDXWJKMenWWTrd8jrta28= +github.com/google/go-github/v53 v53.0.0 h1:T1RyHbSnpHYnoF0ZYKiIPSgPtuJ8G6vgc0MKodXsQDQ= +github.com/google/go-github/v53 v53.0.0/go.mod 
h1:XhFRObz+m/l+UCm9b7KSIC3lT3NWSXGt7mOsAWEloao= +github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8= +github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY= github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ= +github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= +github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= +github.com/hashicorp/cli v1.1.7 h1:/fZJ+hNdwfTSfsxMBa9WWMlfjUZbX8/LnUxgAd7lCVU= +github.com/hashicorp/cli v1.1.7/go.mod h1:e6Mfpga9OCT1vqzFuoGZiiF/KaG9CbUfO5s3ghU3YgU= +github.com/hashicorp/consul/api v1.13.0/go.mod h1:ZlVrynguJKcYr54zGaDbaL3fOvKC9m72FhPvA8T35KQ= +github.com/hashicorp/consul/sdk v0.8.0/go.mod h1:GBvyrGALthsZObzUGsfgHZQDXjg4lOjagTIwIR1vPms= +github.com/hashicorp/copywrite v0.22.0 h1:mqjMrgP3VptS7aLbu2l39rtznoK+BhphHst6i7HiTAo= +github.com/hashicorp/copywrite v0.22.0/go.mod h1:FqvGJt2+yoYDpVYgFSdg3R2iyhkCVaBmPMhfso0MR2k= github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I= github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= github.com/hashicorp/go-checkpoint v0.5.0 h1:MFYpPZCnQqQTE18jFwSII6eUQrD/oxMFp3mlgcqk5mU= github.com/hashicorp/go-checkpoint v0.5.0/go.mod 
h1:7nfLNL10NsxqO4iWuW6tWW0HjZuDrwkBuEQsVcpCOgg= github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= +github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ= github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48= github.com/hashicorp/go-cty v1.5.0 h1:EkQ/v+dDNUqnuVpmS5fPqyY71NXVgT5gf32+57xY8g0= github.com/hashicorp/go-cty v1.5.0/go.mod h1:lFUCG5kd8exDobgSfyj4ONE/dc822kiYMguVKdHGMLM= +github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd/go.mod h1:9bjs9uLqI8l75knNv3lV1kA55veR+WUPSiKIWcQHudI= +github.com/hashicorp/go-hclog v0.8.0/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ= +github.com/hashicorp/go-hclog v0.12.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ= github.com/hashicorp/go-hclog v1.6.3 h1:Qr2kF+eVWjTiYmU7Y31tYlP1h0q/X3Nl3tPGdaB11/k= github.com/hashicorp/go-hclog v1.6.3/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M= +github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60= +github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM= +github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk= +github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA= github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo= github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM= +github.com/hashicorp/go-plugin v1.0.1/go.mod h1:++UyYGoz3o5w9ZzAdZxtQKrWWP+iqPBn3cQptSMzBuY= github.com/hashicorp/go-plugin v1.7.0 h1:YghfQH/0QmPNc/AZMTFE3ac8fipZyZECHdDPshfk+mA= github.com/hashicorp/go-plugin v1.7.0/go.mod h1:BExt6KEaIYx804z8k4gRzRLEvxKVb+kn0NMcihqOqb8= +github.com/hashicorp/go-retryablehttp v0.5.4/go.mod 
h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs= github.com/hashicorp/go-retryablehttp v0.7.7 h1:C8hUCYzor8PIfXHa4UrZkU4VvK8o9ISHxT2Q8+VepXU= github.com/hashicorp/go-retryablehttp v0.7.7/go.mod h1:pkQpWZeYWskR+D1tR2O5OcBFOxfA7DoAO6xtkuQnHTk= +github.com/hashicorp/go-rootcerts v1.0.1/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8= +github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8= +github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU= +github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A= +github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4= github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-version v1.1.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= +github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/hc-install v0.9.2 h1:v80EtNX4fCVHqzL9Lg/2xkp62bbvQMnvPQ0G+OmtO24= github.com/hashicorp/hc-install v0.9.2/go.mod h1:XUqBQNnuT4RsxoxiM9ZaUk0NX8hi2h+Lb6/c0OZnC/I= +github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4= +github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= github.com/hashicorp/hcl/v2 v2.24.0 h1:2QJdZ454DSsYGoaE6QheQZjtKZSUs9Nh2izTWiwQxvE= 
github.com/hashicorp/hcl/v2 v2.24.0/go.mod h1:oGoO1FIQYfn/AgyOhlg9qLC6/nOJPX3qGbkZpYAcqfM= github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y= github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64= +github.com/hashicorp/mdns v1.0.4/go.mod h1:mtBihi+LeNXGtG8L9dX59gAEa12BDtBQSp4v/YAJqrc= +github.com/hashicorp/memberlist v0.3.0/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE= +github.com/hashicorp/serf v0.9.6/go.mod h1:TXZNMjZQijwlDvp+r0b63xZ45H7JmCmgg4gpTwn9UV4= github.com/hashicorp/terraform-exec v0.24.0 h1:mL0xlk9H5g2bn0pPF6JQZk5YlByqSqrO5VoaNtAf8OE= github.com/hashicorp/terraform-exec v0.24.0/go.mod h1:lluc/rDYfAhYdslLJQg3J0oDqo88oGQAdHR+wDqFvo4= github.com/hashicorp/terraform-json v0.27.2 h1:BwGuzM6iUPqf9JYM/Z4AF1OJ5VVJEEzoKST/tRDBJKU= github.com/hashicorp/terraform-json v0.27.2/go.mod h1:GzPLJ1PLdUG5xL6xn1OXWIjteQRT2CNT9o/6A9mi9hE= +github.com/hashicorp/terraform-plugin-docs v0.24.0 h1:YNZYd+8cpYclQyXbl1EEngbld8w7/LPOm99GD5nikIU= +github.com/hashicorp/terraform-plugin-docs v0.24.0/go.mod h1:YLg+7LEwVmRuJc0EuCw0SPLxuQXw5mW8iJ5ml/kvi+o= github.com/hashicorp/terraform-plugin-framework v1.17.0 h1:JdX50CFrYcYFY31gkmitAEAzLKoBgsK+iaJjDC8OexY= github.com/hashicorp/terraform-plugin-framework v1.17.0/go.mod h1:4OUXKdHNosX+ys6rLgVlgklfxN3WHR5VHSOABeS/BM0= github.com/hashicorp/terraform-plugin-framework-validators v0.19.0 h1:Zz3iGgzxe/1XBkooZCewS0nJAaCFPFPHdNJd8FgE4Ow= @@ -103,52 +273,184 @@ github.com/hashicorp/terraform-registry-address v0.4.0 h1:S1yCGomj30Sao4l5BMPjTG github.com/hashicorp/terraform-registry-address v0.4.0/go.mod h1:LRS1Ay0+mAiRkUyltGT+UHWkIqTFvigGn/LbMshfflE= github.com/hashicorp/terraform-svchost v0.1.1 h1:EZZimZ1GxdqFRinZ1tpJwVxxt49xc/S52uzrw4x0jKQ= github.com/hashicorp/terraform-svchost v0.1.1/go.mod h1:mNsjQfZyf/Jhz35v6/0LWcv26+X7JPS+buii2c9/ctc= +github.com/hashicorp/vault/api v1.0.4/go.mod h1:gDcqh3WGcR1cpF5AJz/B1UFheUEneMoIospckxBxk6Q= +github.com/hashicorp/vault/sdk 
v0.1.13/go.mod h1:B+hVj7TpuQY1Y/GPbCpffmgd+tSEwvhkWnjtSYCaS2M= +github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM= +github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM= github.com/hashicorp/yamux v0.1.2 h1:XtB8kyFOyHXYVFnwT5C3+Bdo8gArse7j2AQ0DA0Uey8= github.com/hashicorp/yamux v0.1.2/go.mod h1:C+zze2n6e/7wshOZep2A70/aQU6QBRWJO/G6FT1wIns= +github.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec/go.mod h1:Q48J4R4DvxnHolD5P8pOtXigYlRuPLGl6moFx3ulM68= +github.com/hjson/hjson-go/v4 v4.0.0/go.mod h1:KaYt3bTw3zhBjYqnXkYywcYctk0A2nxeEFTse3rH13E= +github.com/huandu/xstrings v1.3.3 h1:/Gcsuc1x8JVbJ9/rlye4xZnVAbEkGauT8lbebqcQws4= +github.com/huandu/xstrings v1.3.3/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE= +github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA= +github.com/imdario/mergo v0.3.15 h1:M8XP7IuFNsqUx6VPK2P9OSmsYsI/YFaGil0uD21V3dM= +github.com/imdario/mergo v0.3.15/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY= +github.com/inconshreveable/mousetrap v1.0.1 h1:U3uMjPSQEBMNp1lFxmllqCPM6P5u/Xq7Pgzkat/bFNc= +github.com/inconshreveable/mousetrap v1.0.1/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A= github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo= +github.com/jedib0t/go-pretty v4.3.0+incompatible h1:CGs8AVhEKg/n9YbUenWmNStRW2PHJzaeDodcfvRAbIo= +github.com/jedib0t/go-pretty v4.3.0+incompatible/go.mod h1:XemHduiw8R651AF9Pt4FwCTKeG3oo7hrHJAoznj9nag= +github.com/jedib0t/go-pretty/v6 v6.4.6 h1:v6aG9h6Uby3IusSSEjHaZNXpHFhzqMmjXcPq1Rjl9Jw= +github.com/jedib0t/go-pretty/v6 v6.4.6/go.mod h1:Ndk3ase2CkQbXLLNf5QDHoYb6J9WtVfmHZu9n8rk2xs= github.com/jhump/protoreflect v1.17.0 
h1:qOEr613fac2lOuTgWN4tPAtLL7fUSbuJL5X5XumQh94= github.com/jhump/protoreflect v1.17.0/go.mod h1:h9+vUUL38jiBzck8ck+6G/aeMX8Z4QUY/NiJPwPNi+8= +github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= +github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U= +github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc= +github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= +github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4= +github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= +github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= +github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= +github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= +github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM= +github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs= +github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8= github.com/kevinburke/ssh_config v1.2.0 h1:x584FjTGwHzMwvHx18PXxbBVzfnxogHaAReU4gf13a4= github.com/kevinburke/ssh_config v1.2.0/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM= +github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk= +github.com/knadh/koanf v1.5.0 h1:q2TSd/3Pyc/5yP9ldIrSdIz26MCcyNQzW0pEAugLPNs= +github.com/knadh/koanf v1.5.0/go.mod h1:Hgyjp4y8v44hpZtPzs7JZfRAW5AhN7KfZcwv1RYggDs= +github.com/konsorten/go-windows-terminal-sequences 
v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU= +github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= +github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= +github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc= github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc= github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4= github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4= +github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= +github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84= +github.com/mattn/go-isatty v0.0.11/go.mod 
h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE= github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU= github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94= github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/mattn/go-runewidth v0.0.15 h1:UNAjwbU9l54TA3KzvqLGxwWjHmMgBUVhBiTjelZgg3U= +github.com/mattn/go-runewidth v0.0.15/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= +github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= +github.com/mergestat/timediff v0.0.3 h1:ucCNh4/ZrTPjFZ081PccNbhx9spymCJkFxSzgVuPU+Y= +github.com/mergestat/timediff v0.0.3/go.mod h1:yvMUaRu2oetc+9IbPLYBJviz6sA7xz8OXMDfhBl7YSI= +github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE= +github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d h1:5PJl274Y63IEHC+7izoQE9x6ikvDFZS2mDVS3drnohI= +github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE= +github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso= +github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI= +github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc= +github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI= +github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw= github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw= github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s= +github.com/mitchellh/go-homedir v1.1.0 
h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI= +github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI= github.com/mitchellh/go-testing-interface v1.14.1 h1:jrgshOhYAUVNMAJiKbEu7EqAwgJJ2JqpQmpLJOu07cU= github.com/mitchellh/go-testing-interface v1.14.1/go.mod h1:gfgS7OtZj6MA4U1UrDRp04twqAjfvlZyCfX3sDjEym8= +github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo= github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0= github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0= +github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/mitchellh/mapstructure v1.3.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY= github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= +github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ= github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod 
h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= +github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= +github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc= +github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= +github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= +github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= +github.com/npillmayer/nestext v0.1.3/go.mod h1:h2lrijH8jpicr25dFY+oAJLyzlya6jhnuG+zWp9L0Uk= +github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA= github.com/oklog/run v1.2.0 h1:O8x3yXwah4A73hJdlrwo/2X6J62gE5qTMusH0dvz60E= github.com/oklog/run v1.2.0/go.mod h1:mgDbKRSwPhJfesJ4PntqFUbKQRZ50NgmZTSPlFA0YFk= +github.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4= +github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= +github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc= +github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc= +github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE= +github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY= github.com/pjbgf/sha1cd v0.3.2 h1:a9wb0bp1oC2TGwStyn0Umc/IGKQnEgF0vVaZ8QF8eo4= github.com/pjbgf/sha1cd v0.3.2/go.mod h1:zQWigSxVmsHEZow5qaLtPYxpcKMMQpa09ixqBxuCS6A= +github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pkg/profile v1.6.0/go.mod 
h1:qBsxPvzyUincmltOk6iyRVxHYg4adc0OFOv72ZdLa18= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI= +github.com/posener/complete v1.2.3 h1:NP0eAhjcjImqslEwo/1hq7gpajME0fTLTezBKDqfXqo= +github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s= +github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= +github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= +github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M= +github.com/prometheus/client_golang v1.11.1/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0= +github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= +github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= +github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= +github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= +github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= +github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo= +github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc= +github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= +github.com/prometheus/procfs v0.0.2/go.mod 
h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= +github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= +github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA= +github.com/rhnvrm/simples3 v0.6.1/go.mod h1:Y+3vYm2V7Y4VijFoJHHTrja6OgPrJ2cBti8dPGkC3sA= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= +github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= +github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= +github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc= +github.com/samber/lo v1.37.0 h1:XjVcB8g6tgUp8rsPsJ2CvhClfImrpL04YpQHXeHPhRw= +github.com/samber/lo v1.37.0/go.mod h1:9vaz2O4o8oOnK23pd2TrXufcbdbJIa3b6cstBWKpopA= +github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc= github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 h1:n661drycOFuPLCN3Uc8sB6B/s6Z4t2xvBgU1htSHuq8= github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3/go.mod h1:A0bzQcvG0E7Rwjx0REVgAGH58e96+X0MeOfepqsbeW4= +github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o= +github.com/shopspring/decimal v1.3.1 h1:2Usl1nmF/WZucqkFZhnfFYxxxu8LG21F6nPQBE5gKV8= +github.com/shopspring/decimal v1.3.1/go.mod 
h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o= +github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= +github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88= github.com/skeema/knownhosts v1.3.1 h1:X2osQ+RAjK76shCbvhHHHVl3ZlgDm8apHEHFqRjnBY8= github.com/skeema/knownhosts v1.3.1/go.mod h1:r7KTdC8l4uxWRyK2TpQZ/1o5HaSzh06ePQNxPwTcfiY= +github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cast v1.5.0 h1:rj3WzYc11XZaIZMPKmwP96zkFEnnAmV8s6XbB2aY32w= +github.com/spf13/cast v1.5.0/go.mod h1:SpXXQ5YoyJw6s3/6cMTQuxvgRl3PCJiyaX9p6b155UU= +github.com/spf13/cobra v1.6.1 h1:o94oiPyS4KD1mPy2fmcYYHHfCxLqYjJOhGsCHFZtEzA= +github.com/spf13/cobra v1.6.1/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY= +github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= +github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/stackitcloud/stackit-sdk-go/core v0.20.1 h1:odiuhhRXmxvEvnVTeZSN9u98edvw2Cd3DcnkepncP3M= github.com/stackitcloud/stackit-sdk-go/core v0.20.1/go.mod h1:fqto7M82ynGhEnpZU6VkQKYWYoFG5goC076JWXTUPRQ= github.com/stackitcloud/stackit-sdk-go/services/authorization v0.9.0 h1:7ZKd3b+E/R4TEVShLTXxx5FrsuDuJBOyuVOuKTMa4mo= @@ -206,11 +508,25 @@ github.com/stackitcloud/stackit-sdk-go/services/ske v1.4.0/go.mod h1:xRBgpJ8P5Nf github.com/stackitcloud/stackit-sdk-go/services/sqlserverflex v1.3.3 h1:TFefEGGxvcI7euqyosbLS/zSEOy+3JMGOirW3vNj/84= github.com/stackitcloud/stackit-sdk-go/services/sqlserverflex v1.3.3/go.mod h1:Jsry+gfhuXv2P0ldfa48BaL605NhDjdQMgaoV8czlbo= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod 
h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals= +github.com/stretchr/testify v1.7.4/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA= github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= github.com/teambition/rrule-go v1.8.2 h1:lIjpjvWTj9fFUZCmuoVDrKVOtdiyzbzc93qTmRVe/J8= github.com/teambition/rrule-go v1.8.2/go.mod h1:Ieq5AbrKGciP1V//Wq8ktsTXwSwJHDD5mD/wLBGl3p4= +github.com/thanhpk/randstr v1.0.4 h1:IN78qu/bR+My+gHCvMEXhR/i5oriVHcTB/BJJIRTsNo= +github.com/thanhpk/randstr v1.0.4/go.mod h1:M/H2P1eNLZzlDwAzpkkkUvoyNNMbzRGhESZuEQk3r0U= +github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk= github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk= github.com/vmihailenco/msgpack v4.0.4+incompatible h1:dSLoQfGFAo3F6OoNhwUmLwVgaUXK79GlxNBwueZn0xI= github.com/vmihailenco/msgpack v4.0.4+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk= @@ -220,11 +536,29 @@ github.com/vmihailenco/tagparser/v2 v2.0.0 h1:y09buUbR+b5aycVFQs/g70pqKVZNBmxwAh 
github.com/vmihailenco/tagparser/v2 v2.0.0/go.mod h1:Wri+At7QHww0WTrCBeu4J6bNtoV6mEfg5OIWRZA9qds= github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM= github.com/xanzy/ssh-agent v0.3.3/go.mod h1:6dzNDKs0J9rVPHPhaGCukekBHKqfl+L3KghI1Bc68Uw= +github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI= +github.com/xdg-go/scram v1.1.1/go.mod h1:RaEWvsqvNKKvBPvcKeFjrG2cJqOkHTiyTpzz23ni57g= +github.com/xdg-go/stringprep v1.0.3/go.mod h1:W3f5j4i+9rC0kuIEJL0ky1VpHXQU3ocBgklLGvcBnW8= +github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA= +github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= +github.com/yuin/goldmark v1.7.7 h1:5m9rrB1sW3JUMToKFQfb+FGt1U7r57IHu5GrYrG2nqU= +github.com/yuin/goldmark v1.7.7/go.mod h1:uzxRWxtg69N339t3louHJ7+O03ezfj6PlliRlaOzY1E= +github.com/yuin/goldmark-meta v1.1.0 h1:pWw+JLHGZe8Rk0EGsMVssiNb/AaPMHfSRszZeUeiOUc= +github.com/yuin/goldmark-meta v1.1.0/go.mod h1:U4spWENafuA7Zyg+Lj5RqK/MF+ovMYtBvXi1lBb2VP0= github.com/zclconf/go-cty v1.17.0 h1:seZvECve6XX4tmnvRzWtJNHdscMtYEx5R7bnnVyd/d0= github.com/zclconf/go-cty v1.17.0/go.mod h1:wqFzcImaLTI6A5HfsRwB0nj5n0MRZFwmey8YoFPPs3U= github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940 h1:4r45xpDWB6ZMSMNJFMOjqrGHynW3DIBuR2H9j0ug+Mo= github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940/go.mod h1:CmBdvvj3nqzfzJ6nTCIwDTPZ56aVGvDrmztiO5g3qrM= +go.abhg.dev/goldmark/frontmatter v0.2.0 h1:P8kPG0YkL12+aYk2yU3xHv4tcXzeVnN+gU0tJ5JnxRw= +go.abhg.dev/goldmark/frontmatter v0.2.0/go.mod h1:XqrEkZuM57djk7zrlRUB02x8I5J0px76YjkOzhB4YlU= +go.etcd.io/etcd/api/v3 v3.5.4/go.mod 
h1:5GB2vv4A4AOn3yk7MftYGHkUfGtDHnEraIjym4dYz5A= +go.etcd.io/etcd/client/pkg/v3 v3.5.4/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g= +go.etcd.io/etcd/client/v3 v3.5.4/go.mod h1:ZaRkVgBZC+L+dLCjTcF1hRXpgZXQPOvnA/Ak/gq3kiY= +go.mongodb.org/mongo-driver v1.10.0 h1:UtV6N5k14upNp4LTduX0QCufG124fSu25Wz9tu94GLg= +go.mongodb.org/mongo-driver v1.10.0/go.mod h1:wsihk0Kdgv8Kqu1Anit4sfK+22vSFbUrAVEYRhCXrA8= go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA= go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A= go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ= @@ -237,74 +571,238 @@ go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFh go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps= go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4= go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0= +go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= +go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU= +go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo= +golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY= +golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto 
v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= +golang.org/x/crypto v0.3.0/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4= +golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4= +golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU= golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU= golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0= +golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= +golang.org/x/exp v0.0.0-20220303212507-bbda1eaf7a17 h1:3MTrJm4PyNL9NBqvYDSj3DHl46qQakyfqfWo4jgfaEM= +golang.org/x/exp v0.0.0-20220303212507-bbda1eaf7a17/go.mod h1:lgLbSvA5ygNOMpwM/9anMpWVlVJ7Z+cHWq/eFuinpGE= +golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df h1:UA2aFVmmsIlefxMk29Dp2juaUSth8Pyn3Tq5Y5mJGME= +golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df/go.mod h1:FXUEEKJgO7OQYeo8N01OfiKP8RXMtf6e8aTskBGqWdc= +golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= +golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= +golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= +golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= +golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.8.0/go.mod 
h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/mod v0.31.0 h1:HaW9xtz0+kOcWKwli0ZXy79Ix+UW/vOfmWI5QVd2tgI= golang.org/x/mod v0.31.0/go.mod h1:43JraMp9cGx1Rx3AqioxrbrhNsLl2l/iNAvuBkrezpg= +golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= +golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= +golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= +golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod 
h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8= +golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU= golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY= +golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= +golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= +golang.org/x/oauth2 v0.8.0/go.mod h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE= +golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI= +golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync 
v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= +golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190129075346-302c3dd5f1cc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys 
v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod 
h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= golang.org/x/telemetry v0.0.0-20251203150158-8fff8a5912fc h1:bH6xUXay0AIFMElXG2rQ4uiE+7ncwtiOdPfYK1NK2XA= golang.org/x/telemetry v0.0.0-20251203150158-8fff8a5912fc/go.mod h1:hKdjCMrbv9skySur+Nek8Hd0uJ0GuxJIoIX2payrIdQ= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term 
v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= +golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q= golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU= golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY= +golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools 
v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= +golang.org/x/tools v0.0.0-20190907020128-2ca718005c18/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= +golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= golang.org/x/tools v0.40.0 h1:yLkxfA+Qnul4cs9QA3KnlFu0lVmd8JJfoq+E41uSutA= golang.org/x/tools v0.40.0/go.mod h1:Ik/tzLRlbscWpqqMRjyWYDisX8bG13FrdXp3o4Sr9lc= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk= gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E= 
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= +google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM= google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20190404172233-64821d5d2107/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= +google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= +google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= +google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= +google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0= google.golang.org/genproto/googleapis/rpc v0.0.0-20250707201910-8d1bb00bc6a7 h1:pFyd6EwwL2TqFf8emdthzeX+gZE1ElRq3iM8pui4KBY= google.golang.org/genproto/googleapis/rpc v0.0.0-20250707201910-8d1bb00bc6a7/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A= +google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= +google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= +google.golang.org/grpc v1.22.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= +google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= +google.golang.org/grpc v1.33.1/go.mod 
h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0= +google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= google.golang.org/grpc v1.75.1 h1:/ODCNEuf9VghjgO3rqLcfg8fiOP0nSluljWFlDxELLI= google.golang.org/grpc v1.75.1/go.mod h1:JtPAzKiq4v1xcAB2hydNlWI2RnF85XXcV0mhKXr2ecQ= +google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= +google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= +google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= +google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= +google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= +google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= +google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= +google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= +google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= +google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw= google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU= +gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= +gopkg.in/asn1-ber.v1 v1.0.0-20181015200546-f715ec2f112d/go.mod h1:cuepJuh7vyXfUyUwEgHQXw849cJrilpS5NeIjOWESAw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod 
h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI= gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME= gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI= +gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.0-20200605160147-a5ece683394c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod 
h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= +honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= +sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc= diff --git a/golang-ci.yaml b/golang-ci.yaml index 3487f745..d134590b 100644 --- a/golang-ci.yaml +++ b/golang-ci.yaml @@ -1,3 +1,5 @@ +# Copyright (c) STACKIT + # This file contains all available configuration options # with their default values. diff --git a/main.go b/main.go index 2b958c43..a059652c 100644 --- a/main.go +++ b/main.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package main import ( diff --git a/pkg/postgresflexalpha/api_default_test.go b/pkg/postgresflexalpha/api_default_test.go index 81f31fc1..1f3d713b 100644 --- a/pkg/postgresflexalpha/api_default_test.go +++ b/pkg/postgresflexalpha/api_default_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + /* PostgreSQL Flex API diff --git a/pkg/postgresflexalpha/wait/wait.go b/pkg/postgresflexalpha/wait/wait.go index c66716a0..1f9eb592 100644 --- a/pkg/postgresflexalpha/wait/wait.go +++ b/pkg/postgresflexalpha/wait/wait.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package wait import ( diff --git a/pkg/postgresflexalpha/wait/wait_test.go b/pkg/postgresflexalpha/wait/wait_test.go index fc87463f..17ece086 100644 --- a/pkg/postgresflexalpha/wait/wait_test.go +++ b/pkg/postgresflexalpha/wait/wait_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package wait import ( diff --git a/pkg/sqlserverflexalpha/api_default_test.go b/pkg/sqlserverflexalpha/api_default_test.go index c9e7f0fe..3a0f278d 100644 --- a/pkg/sqlserverflexalpha/api_default_test.go +++ b/pkg/sqlserverflexalpha/api_default_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + /* STACKIT MSSQL Service API diff --git a/pkg/sqlserverflexalpha/model_get_backup_response.go 
b/pkg/sqlserverflexalpha/model_get_backup_response.go index b241ca22..5a382853 100644 --- a/pkg/sqlserverflexalpha/model_get_backup_response.go +++ b/pkg/sqlserverflexalpha/model_get_backup_response.go @@ -21,22 +21,6 @@ var _ MappedNullable = &GetBackupResponse{} types and functions for completionTime */ -// isAny -type GetBackupResponseGetCompletionTimeAttributeType = any -type GetBackupResponseGetCompletionTimeArgType = any -type GetBackupResponseGetCompletionTimeRetType = any - -func getGetBackupResponseGetCompletionTimeAttributeTypeOk(arg GetBackupResponseGetCompletionTimeAttributeType) (ret GetBackupResponseGetCompletionTimeRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setGetBackupResponseGetCompletionTimeAttributeType(arg *GetBackupResponseGetCompletionTimeAttributeType, val GetBackupResponseGetCompletionTimeRetType) { - *arg = &val -} - // isModel type GetBackupResponseGetCompletionTimeAttributeType = *string type GetBackupResponseGetCompletionTimeArgType = string @@ -57,22 +41,6 @@ func setGetBackupResponseGetCompletionTimeAttributeType(arg *GetBackupResponseGe types and functions for id */ -// isAny -type GetBackupResponseGetIdAttributeType = any -type GetBackupResponseGetIdArgType = any -type GetBackupResponseGetIdRetType = any - -func getGetBackupResponseGetIdAttributeTypeOk(arg GetBackupResponseGetIdAttributeType) (ret GetBackupResponseGetIdRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setGetBackupResponseGetIdAttributeType(arg *GetBackupResponseGetIdAttributeType, val GetBackupResponseGetIdRetType) { - *arg = &val -} - // isModel type GetBackupResponseGetIdAttributeType = *int64 type GetBackupResponseGetIdArgType = int64 @@ -92,23 +60,6 @@ func setGetBackupResponseGetIdAttributeType(arg *GetBackupResponseGetIdAttribute /* types and functions for name */ - -// isAny -type GetBackupResponseGetNameAttributeType = any -type GetBackupResponseGetNameArgType = any 
-type GetBackupResponseGetNameRetType = any - -func getGetBackupResponseGetNameAttributeTypeOk(arg GetBackupResponseGetNameAttributeType) (ret GetBackupResponseGetNameRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setGetBackupResponseGetNameAttributeType(arg *GetBackupResponseGetNameAttributeType, val GetBackupResponseGetNameRetType) { - *arg = &val -} - // isModel type GetBackupResponseGetNameAttributeType = *string type GetBackupResponseGetNameArgType = string @@ -129,22 +80,6 @@ func setGetBackupResponseGetNameAttributeType(arg *GetBackupResponseGetNameAttri types and functions for retainedUntil */ -// isAny -type GetBackupResponseGetRetainedUntilAttributeType = any -type GetBackupResponseGetRetainedUntilArgType = any -type GetBackupResponseGetRetainedUntilRetType = any - -func getGetBackupResponseGetRetainedUntilAttributeTypeOk(arg GetBackupResponseGetRetainedUntilAttributeType) (ret GetBackupResponseGetRetainedUntilRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setGetBackupResponseGetRetainedUntilAttributeType(arg *GetBackupResponseGetRetainedUntilAttributeType, val GetBackupResponseGetRetainedUntilRetType) { - *arg = &val -} - // isModel type GetBackupResponseGetRetainedUntilAttributeType = *string type GetBackupResponseGetRetainedUntilArgType = string @@ -165,22 +100,6 @@ func setGetBackupResponseGetRetainedUntilAttributeType(arg *GetBackupResponseGet types and functions for size */ -// isAny -type GetBackupResponseGetSizeAttributeType = any -type GetBackupResponseGetSizeArgType = any -type GetBackupResponseGetSizeRetType = any - -func getGetBackupResponseGetSizeAttributeTypeOk(arg GetBackupResponseGetSizeAttributeType) (ret GetBackupResponseGetSizeRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setGetBackupResponseGetSizeAttributeType(arg *GetBackupResponseGetSizeAttributeType, val GetBackupResponseGetSizeRetType) { - *arg 
= &val -} - // isModel type GetBackupResponseGetSizeAttributeType = *int64 type GetBackupResponseGetSizeArgType = int64 @@ -201,22 +120,6 @@ func setGetBackupResponseGetSizeAttributeType(arg *GetBackupResponseGetSizeAttri types and functions for type */ -// isAny -type GetBackupResponseGetTypeAttributeType = any -type GetBackupResponseGetTypeArgType = any -type GetBackupResponseGetTypeRetType = any - -func getGetBackupResponseGetTypeAttributeTypeOk(arg GetBackupResponseGetTypeAttributeType) (ret GetBackupResponseGetTypeRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setGetBackupResponseGetTypeAttributeType(arg *GetBackupResponseGetTypeAttributeType, val GetBackupResponseGetTypeRetType) { - *arg = &val -} - // isModel type GetBackupResponseGetTypeAttributeType = *string type GetBackupResponseGetTypeArgType = string diff --git a/pkg/sqlserverflexalpha/model_list_backup.go b/pkg/sqlserverflexalpha/model_list_backup.go index db156545..e6092662 100644 --- a/pkg/sqlserverflexalpha/model_list_backup.go +++ b/pkg/sqlserverflexalpha/model_list_backup.go @@ -21,22 +21,6 @@ var _ MappedNullable = &ListBackup{} types and functions for completionTime */ -// isAny -type ListBackupGetCompletionTimeAttributeType = any -type ListBackupGetCompletionTimeArgType = any -type ListBackupGetCompletionTimeRetType = any - -func getListBackupGetCompletionTimeAttributeTypeOk(arg ListBackupGetCompletionTimeAttributeType) (ret ListBackupGetCompletionTimeRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setListBackupGetCompletionTimeAttributeType(arg *ListBackupGetCompletionTimeAttributeType, val ListBackupGetCompletionTimeRetType) { - *arg = &val -} - // isModel type ListBackupGetCompletionTimeAttributeType = *string type ListBackupGetCompletionTimeArgType = string @@ -57,22 +41,6 @@ func setListBackupGetCompletionTimeAttributeType(arg *ListBackupGetCompletionTim types and functions for id */ -// isAny -type 
ListBackupGetIdAttributeType = any -type ListBackupGetIdArgType = any -type ListBackupGetIdRetType = any - -func getListBackupGetIdAttributeTypeOk(arg ListBackupGetIdAttributeType) (ret ListBackupGetIdRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setListBackupGetIdAttributeType(arg *ListBackupGetIdAttributeType, val ListBackupGetIdRetType) { - *arg = &val -} - // isModel type ListBackupGetIdAttributeType = *int64 type ListBackupGetIdArgType = int64 @@ -93,22 +61,6 @@ func setListBackupGetIdAttributeType(arg *ListBackupGetIdAttributeType, val List types and functions for name */ -// isAny -type ListBackupGetNameAttributeType = any -type ListBackupGetNameArgType = any -type ListBackupGetNameRetType = any - -func getListBackupGetNameAttributeTypeOk(arg ListBackupGetNameAttributeType) (ret ListBackupGetNameRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setListBackupGetNameAttributeType(arg *ListBackupGetNameAttributeType, val ListBackupGetNameRetType) { - *arg = &val -} - // isModel type ListBackupGetNameAttributeType = *string type ListBackupGetNameArgType = string @@ -129,22 +81,6 @@ func setListBackupGetNameAttributeType(arg *ListBackupGetNameAttributeType, val types and functions for retainedUntil */ -// isAny -type ListBackupGetRetainedUntilAttributeType = any -type ListBackupGetRetainedUntilArgType = any -type ListBackupGetRetainedUntilRetType = any - -func getListBackupGetRetainedUntilAttributeTypeOk(arg ListBackupGetRetainedUntilAttributeType) (ret ListBackupGetRetainedUntilRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setListBackupGetRetainedUntilAttributeType(arg *ListBackupGetRetainedUntilAttributeType, val ListBackupGetRetainedUntilRetType) { - *arg = &val -} - // isModel type ListBackupGetRetainedUntilAttributeType = *string type ListBackupGetRetainedUntilArgType = string @@ -161,26 +97,6 @@ func 
setListBackupGetRetainedUntilAttributeType(arg *ListBackupGetRetainedUntilA *arg = &val } -/* - types and functions for size -*/ - -// isAny -type ListBackupGetSizeAttributeType = any -type ListBackupGetSizeArgType = any -type ListBackupGetSizeRetType = any - -func getListBackupGetSizeAttributeTypeOk(arg ListBackupGetSizeAttributeType) (ret ListBackupGetSizeRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setListBackupGetSizeAttributeType(arg *ListBackupGetSizeAttributeType, val ListBackupGetSizeRetType) { - *arg = &val -} - // isModel type ListBackupGetSizeAttributeType = *int64 type ListBackupGetSizeArgType = int64 @@ -197,26 +113,6 @@ func setListBackupGetSizeAttributeType(arg *ListBackupGetSizeAttributeType, val *arg = &val } -/* - types and functions for type -*/ - -// isAny -type ListBackupGetTypeAttributeType = any -type ListBackupGetTypeArgType = any -type ListBackupGetTypeRetType = any - -func getListBackupGetTypeAttributeTypeOk(arg ListBackupGetTypeAttributeType) (ret ListBackupGetTypeRetType, ok bool) { - if arg == nil { - return ret, false - } - return *arg, true -} - -func setListBackupGetTypeAttributeType(arg *ListBackupGetTypeAttributeType, val ListBackupGetTypeRetType) { - *arg = &val -} - // isModel type ListBackupGetTypeAttributeType = *string type ListBackupGetTypeArgType = string diff --git a/pkg/sqlserverflexalpha/wait/wait.go b/pkg/sqlserverflexalpha/wait/wait.go index 5f1563be..9a62f80f 100644 --- a/pkg/sqlserverflexalpha/wait/wait.go +++ b/pkg/sqlserverflexalpha/wait/wait.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package wait import ( diff --git a/pkg/sqlserverflexalpha/wait/wait_test.go b/pkg/sqlserverflexalpha/wait/wait_test.go index 737f4e89..f6ffeec4 100644 --- a/pkg/sqlserverflexalpha/wait/wait_test.go +++ b/pkg/sqlserverflexalpha/wait/wait_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package wait import ( diff --git a/sample/main.tf b/sample/main.tf index 8cf8307c..f87ce3f9 100644 
--- a/sample/main.tf +++ b/sample/main.tf @@ -1,3 +1,5 @@ +# Copyright (c) STACKIT + resource "stackitalpha_kms_keyring" "keyring" { project_id = var.project_id display_name = "keyring01" diff --git a/sample/providers.tf b/sample/providers.tf index fdeea0d9..8af98505 100644 --- a/sample/providers.tf +++ b/sample/providers.tf @@ -1,3 +1,5 @@ +# Copyright (c) STACKIT + terraform { required_providers { stackitalpha = { diff --git a/sample/tf.sh b/sample/tf.sh index 5cc86c55..9f597428 100755 --- a/sample/tf.sh +++ b/sample/tf.sh @@ -1,4 +1,6 @@ #!/usr/bin/env bash +# Copyright (c) STACKIT + # copy or rename sample.tfrc.example and adjust it TERRAFORM_CONFIG=$(pwd)/sample.tfrc diff --git a/sample/tofu.sh b/sample/tofu.sh index ac84a493..bea0e72c 100755 --- a/sample/tofu.sh +++ b/sample/tofu.sh @@ -1,4 +1,6 @@ #!/usr/bin/env bash +# Copyright (c) STACKIT + # copy or rename sample.tfrc.example and adjust it TERRAFORM_CONFIG=$(pwd)/sample.tfrc diff --git a/sample/user.tf b/sample/user.tf new file mode 100644 index 00000000..5729307b --- /dev/null +++ b/sample/user.tf @@ -0,0 +1,8 @@ +# Copyright (c) STACKIT + +resource "stackit_sqlserverflexalpha_user" "ptlsdbuser" { + project_id = stackitalpha_postgresflexalpha_instance.ptlsdbsrv.project_id + instance_id = stackitalpha_postgresflexalpha_instance.ptlsdbsrv.id + username = var.db_username + roles = ["createdb", "login", "createrole"] +} \ No newline at end of file diff --git a/sample/variables.tf.example b/sample/variables.tf.example index a4705793..51a70be4 100644 --- a/sample/variables.tf.example +++ b/sample/variables.tf.example @@ -5,3 +5,7 @@ variable "project_id" { variable "sa_email" { default = "" } + +variable "db_username" { + default = "" +} diff --git a/scripts/check-docs.sh b/scripts/check-docs.sh index 4577dce0..602244cf 100755 --- a/scripts/check-docs.sh +++ b/scripts/check-docs.sh @@ -1,4 +1,6 @@ #!/usr/bin/env bash +# Copyright (c) STACKIT + # This script is used to ensure for PRs the docs are up-to-date 
via the CI pipeline # Usage: ./check-docs.sh diff --git a/scripts/lint-golangci-lint.sh b/scripts/lint-golangci-lint.sh index c2ffd78f..b9e07251 100755 --- a/scripts/lint-golangci-lint.sh +++ b/scripts/lint-golangci-lint.sh @@ -1,4 +1,6 @@ #!/usr/bin/env bash +# Copyright (c) STACKIT + # This script lints the SDK modules and the internal examples # Pre-requisites: golangci-lint set -eo pipefail diff --git a/scripts/project.sh b/scripts/project.sh index 91bb1efd..159ba952 100755 --- a/scripts/project.sh +++ b/scripts/project.sh @@ -1,4 +1,6 @@ #!/usr/bin/env bash +# Copyright (c) STACKIT + # This script is used to manage the project, only used for installing the required tools for now # Usage: ./project.sh [action] diff --git a/scripts/replace.sh b/scripts/replace.sh index 9326b1f7..953ac6e0 100755 --- a/scripts/replace.sh +++ b/scripts/replace.sh @@ -1,4 +1,6 @@ #!/usr/bin/env bash +# Copyright (c) STACKIT + # Add replace directives to local files to go.work set -eo pipefail diff --git a/scripts/tfplugindocs.sh b/scripts/tfplugindocs.sh index 3ca0e9f1..8c79e7ef 100755 --- a/scripts/tfplugindocs.sh +++ b/scripts/tfplugindocs.sh @@ -1,10 +1,12 @@ #!/usr/bin/env bash +# Copyright (c) STACKIT + # Pre-requisites: tfplugindocs set -eo pipefail ROOT_DIR=$(git rev-parse --show-toplevel) EXAMPLES_DIR="${ROOT_DIR}/examples" -PROVIDER_NAME="stackit" +PROVIDER_NAME="stackitprivatepreview" # Create a new empty directory for the docs if [ -d ${ROOT_DIR}/docs ]; then @@ -14,4 +16,4 @@ mkdir -p ${ROOT_DIR}/docs echo ">> Generating documentation" tfplugindocs generate \ - --provider-name "stackit" + --provider-name "stackitprivatepreview" diff --git a/stackit/internal/conversion/conversion.go b/stackit/internal/conversion/conversion.go index a0a4c945..bdcacdfd 100644 --- a/stackit/internal/conversion/conversion.go +++ b/stackit/internal/conversion/conversion.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package conversion import ( diff --git 
a/stackit/internal/conversion/conversion_test.go b/stackit/internal/conversion/conversion_test.go index 08083abb..53fd738a 100644 --- a/stackit/internal/conversion/conversion_test.go +++ b/stackit/internal/conversion/conversion_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package conversion import ( diff --git a/stackit/internal/core/core.go b/stackit/internal/core/core.go index e3dd02e0..d3ea252c 100644 --- a/stackit/internal/core/core.go +++ b/stackit/internal/core/core.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package core import ( diff --git a/stackit/internal/core/core_test.go b/stackit/internal/core/core_test.go index 8824e870..0905899e 100644 --- a/stackit/internal/core/core_test.go +++ b/stackit/internal/core/core_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package core import ( diff --git a/stackit/internal/features/beta.go b/stackit/internal/features/beta.go index 4354fbd5..f0615eaa 100644 --- a/stackit/internal/features/beta.go +++ b/stackit/internal/features/beta.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package features import ( diff --git a/stackit/internal/features/beta_test.go b/stackit/internal/features/beta_test.go index 242636c5..4ea67e10 100644 --- a/stackit/internal/features/beta_test.go +++ b/stackit/internal/features/beta_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package features import ( diff --git a/stackit/internal/features/experiments.go b/stackit/internal/features/experiments.go index 35193048..6f56fb64 100644 --- a/stackit/internal/features/experiments.go +++ b/stackit/internal/features/experiments.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package features import ( diff --git a/stackit/internal/features/experiments_test.go b/stackit/internal/features/experiments_test.go index f8692597..256055ca 100644 --- a/stackit/internal/features/experiments_test.go +++ b/stackit/internal/features/experiments_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package features import ( diff --git 
a/stackit/internal/services/authorization/authorization_acc_test.go b/stackit/internal/services/authorization/authorization_acc_test.go deleted file mode 100644 index 7fcede14..00000000 --- a/stackit/internal/services/authorization/authorization_acc_test.go +++ /dev/null @@ -1,114 +0,0 @@ -package authorization_test - -import ( - "context" - "errors" - "fmt" - "regexp" - "slices" - "testing" - - _ "embed" - - "github.com/hashicorp/terraform-plugin-testing/config" - "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-plugin-testing/terraform" - stackitSdkConfig "github.com/stackitcloud/stackit-sdk-go/core/config" - "github.com/stackitcloud/stackit-sdk-go/services/authorization" - "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/testutil" -) - -//go:embed testfiles/prerequisites.tf -var prerequisites string - -//go:embed testfiles/double-definition.tf -var doubleDefinition string - -//go:embed testfiles/project-owner.tf -var projectOwner string - -//go:embed testfiles/invalid-role.tf -var invalidRole string - -//go:embed testfiles/organization-role.tf -var organizationRole string - -var testConfigVars = config.Variables{ - "project_id": config.StringVariable(testutil.ProjectId), - "test_service_account": config.StringVariable(testutil.TestProjectServiceAccountEmail), - "organization_id": config.StringVariable(testutil.OrganizationId), -} - -func TestAccProjectRoleAssignmentResource(t *testing.T) { - t.Log(testutil.AuthorizationProviderConfig()) - resource.Test(t, resource.TestCase{ - ProtoV6ProviderFactories: testutil.TestAccProtoV6ProviderFactories, - Steps: []resource.TestStep{ - { - ConfigVariables: testConfigVars, - Config: testutil.AuthorizationProviderConfig() + prerequisites, - Check: func(_ *terraform.State) error { - client, err := authApiClient() - if err != nil { - return err - } - - members, err := client.ListMembers(context.TODO(), "project", testutil.ProjectId).Execute() - - if err != nil 
{ - return err - } - - if !slices.ContainsFunc(*members.Members, func(m authorization.Member) bool { - return *m.Role == "reader" && *m.Subject == testutil.TestProjectServiceAccountEmail - }) { - t.Log(members.Members) - return errors.New("Membership not found") - } - return nil - }, - }, - { - // Assign a resource to an organization - ConfigVariables: testConfigVars, - Config: testutil.AuthorizationProviderConfig() + prerequisites + organizationRole, - }, - { - // The Service Account inherits owner permissions for the project from the organization. Check if you can still assign owner permissions on the project explicitly - ConfigVariables: testConfigVars, - Config: testutil.AuthorizationProviderConfig() + prerequisites + organizationRole + projectOwner, - }, - { - // Expect failure on creating an already existing role_assignment - // Would be bad, since two resources could be created and deletion of one would lead to state drift for the second TF resource - ConfigVariables: testConfigVars, - Config: testutil.AuthorizationProviderConfig() + prerequisites + doubleDefinition, - ExpectError: regexp.MustCompile(".+"), - }, - { - // Assign a non-existent role. 
Expect failure - ConfigVariables: testConfigVars, - Config: testutil.AuthorizationProviderConfig() + prerequisites + invalidRole, - ExpectError: regexp.MustCompile(".+"), - }, - }, - }) -} - -func authApiClient() (*authorization.APIClient, error) { - var client *authorization.APIClient - var err error - if testutil.AuthorizationCustomEndpoint == "" { - client, err = authorization.NewAPIClient( - stackitSdkConfig.WithRegion("eu01"), - ) - } else { - client, err = authorization.NewAPIClient( - stackitSdkConfig.WithEndpoint(testutil.AuthorizationCustomEndpoint), - ) - } - if err != nil { - return nil, fmt.Errorf("creating client: %w", err) - } - return client, nil -} diff --git a/stackit/internal/services/authorization/roleassignments/resource.go b/stackit/internal/services/authorization/roleassignments/resource.go deleted file mode 100644 index cd29fdb0..00000000 --- a/stackit/internal/services/authorization/roleassignments/resource.go +++ /dev/null @@ -1,370 +0,0 @@ -package roleassignments - -import ( - "context" - "encoding/json" - "errors" - "fmt" - "strings" - - "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/utils" - - "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/conversion" - authorizationUtils "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/services/authorization/utils" - - "github.com/hashicorp/terraform-plugin-framework/path" - "github.com/hashicorp/terraform-plugin-framework/resource" - "github.com/hashicorp/terraform-plugin-framework/resource/schema" - "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" - "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" - "github.com/hashicorp/terraform-plugin-framework/schema/validator" - "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-plugin-log/tflog" - "github.com/stackitcloud/stackit-sdk-go/services/authorization" - 
"github.com/stackitcloud/terraform-provider-stackit/stackit/internal/core" - "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/features" - "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/validate" -) - -// List of permission assignments targets in form [TF resource name]:[api name] -var roleTargets = []string{ - "project", - "organization", -} - -// Ensure the implementation satisfies the expected interfaces. -var ( - _ resource.Resource = &roleAssignmentResource{} - _ resource.ResourceWithConfigure = &roleAssignmentResource{} - _ resource.ResourceWithImportState = &roleAssignmentResource{} - - errRoleAssignmentNotFound = errors.New("response members did not contain expected role assignment") - errRoleAssignmentDuplicateFound = errors.New("found a duplicate role assignment.") -) - -// Provider's internal model -type Model struct { - Id types.String `tfsdk:"id"` // needed by TF - ResourceId types.String `tfsdk:"resource_id"` - Role types.String `tfsdk:"role"` - Subject types.String `tfsdk:"subject"` -} - -// NewProjectRoleAssignmentResource is a helper function to simplify the provider implementation. -func NewRoleAssignmentResources() []func() resource.Resource { - resources := make([]func() resource.Resource, 0) - for _, v := range roleTargets { - resources = append(resources, func() resource.Resource { - return &roleAssignmentResource{ - apiName: v, - } - }) - } - return resources -} - -// roleAssignmentResource is the resource implementation. -type roleAssignmentResource struct { - authorizationClient *authorization.APIClient - apiName string -} - -// Metadata returns the resource type name. -func (r *roleAssignmentResource) Metadata(_ context.Context, req resource.MetadataRequest, resp *resource.MetadataResponse) { - resp.TypeName = fmt.Sprintf("%s_authorization_%s_role_assignment", req.ProviderTypeName, r.apiName) -} - -// Configure adds the provider configured client to the resource. 
-func (r *roleAssignmentResource) Configure(ctx context.Context, req resource.ConfigureRequest, resp *resource.ConfigureResponse) { - providerData, ok := conversion.ParseProviderData(ctx, req.ProviderData, &resp.Diagnostics) - if !ok { - return - } - - features.CheckExperimentEnabled(ctx, &providerData, features.IamExperiment, fmt.Sprintf("stackit_authorization_%s_role_assignment", r.apiName), core.Resource, &resp.Diagnostics) - if resp.Diagnostics.HasError() { - return - } - - apiClient := authorizationUtils.ConfigureClient(ctx, &providerData, &resp.Diagnostics) - if resp.Diagnostics.HasError() { - return - } - r.authorizationClient = apiClient - tflog.Info(ctx, fmt.Sprintf("Resource Manager %s Role Assignment client configured", r.apiName)) -} - -// Schema defines the schema for the resource. -func (r *roleAssignmentResource) Schema(_ context.Context, _ resource.SchemaRequest, resp *resource.SchemaResponse) { - descriptions := map[string]string{ - "main": features.AddExperimentDescription(fmt.Sprintf("%s Role Assignment resource schema.", r.apiName), features.IamExperiment, core.Resource), - "id": "Terraform's internal resource identifier. It is structured as \"[resource_id],[role],[subject]\".", - "resource_id": fmt.Sprintf("%s Resource to assign the role to.", r.apiName), - "role": "Role to be assigned", - "subject": "Identifier of user, service account or client. 
Usually email address or name in case of clients", - } - - resp.Schema = schema.Schema{ - Description: descriptions["main"], - Attributes: map[string]schema.Attribute{ - "id": schema.StringAttribute{ - Description: descriptions["id"], - Computed: true, - PlanModifiers: []planmodifier.String{ - stringplanmodifier.UseStateForUnknown(), - }, - }, - "resource_id": schema.StringAttribute{ - Description: descriptions["resource_id"], - Required: true, - PlanModifiers: []planmodifier.String{ - stringplanmodifier.RequiresReplace(), - }, - Validators: []validator.String{ - validate.UUID(), - validate.NoSeparator(), - }, - }, - "role": schema.StringAttribute{ - Description: descriptions["role"], - Required: true, - PlanModifiers: []planmodifier.String{ - stringplanmodifier.RequiresReplace(), - }, - }, - "subject": schema.StringAttribute{ - Description: descriptions["subject"], - Required: true, - PlanModifiers: []planmodifier.String{ - stringplanmodifier.RequiresReplace(), - }, - }, - }, - } -} - -// Create creates the resource and sets the initial Terraform state. -func (r *roleAssignmentResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { // nolint:gocritic // function signature required by Terraform - var model Model - diags := req.Plan.Get(ctx, &model) - resp.Diagnostics.Append(diags...) 
- if resp.Diagnostics.HasError() { - return - } - - ctx = core.InitProviderContext(ctx) - - ctx = r.annotateLogger(ctx, &model) - - if err := r.checkDuplicate(ctx, model); err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, "Error while checking for duplicate role assignments", err.Error()) - return - } - - // Create new project role assignment - payload, err := r.toCreatePayload(&model) - if err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, "Error creating credential", fmt.Sprintf("Creating API payload: %v", err)) - return - } - createResp, err := r.authorizationClient.AddMembers(ctx, model.ResourceId.ValueString()).AddMembersPayload(*payload).Execute() - if err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, fmt.Sprintf("Error creating %s role assignment", r.apiName), fmt.Sprintf("Calling API: %v", err)) - return - } - - ctx = core.LogResponse(ctx) - - // Map response body to schema - err = mapMembersResponse(createResp, &model) - if err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, fmt.Sprintf("Error creating %s role assignment", r.apiName), fmt.Sprintf("Processing API payload: %v", err)) - return - } - diags = resp.State.Set(ctx, model) - resp.Diagnostics.Append(diags...) - if resp.Diagnostics.HasError() { - return - } - tflog.Info(ctx, fmt.Sprintf("%s role assignment created", r.apiName)) -} - -// Read refreshes the Terraform state with the latest data. -func (r *roleAssignmentResource) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { // nolint:gocritic // function signature required by Terraform - var model Model - diags := req.State.Get(ctx, &model) - resp.Diagnostics.Append(diags...) 
- if resp.Diagnostics.HasError() { - return - } - - ctx = core.InitProviderContext(ctx) - - ctx = r.annotateLogger(ctx, &model) - - listResp, err := r.authorizationClient.ListMembers(ctx, r.apiName, model.ResourceId.ValueString()).Subject(model.Subject.ValueString()).Execute() - if err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, "Error reading authorizations", fmt.Sprintf("Calling API: %v", err)) - return - } - - ctx = core.LogResponse(ctx) - - // Map response body to schema - err = mapListMembersResponse(listResp, &model) - if err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, "Error reading authorization", fmt.Sprintf("Processing API payload: %v", err)) - return - } - - // Set refreshed state - diags = resp.State.Set(ctx, model) - resp.Diagnostics.Append(diags...) - if resp.Diagnostics.HasError() { - return - } - tflog.Info(ctx, fmt.Sprintf("%s role assignment read successful", r.apiName)) -} - -// Update updates the resource and sets the updated Terraform state on success. -func (r *roleAssignmentResource) Update(_ context.Context, _ resource.UpdateRequest, _ *resource.UpdateResponse) { // nolint:gocritic // function signature required by Terraform - // does nothing since resource updates should always trigger resource replacement -} - -// Delete deletes the resource and removes the Terraform state on success. -func (r *roleAssignmentResource) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { // nolint:gocritic // function signature required by Terraform - var model Model - diags := req.State.Get(ctx, &model) - resp.Diagnostics.Append(diags...) 
- if resp.Diagnostics.HasError() { - return - } - - ctx = core.InitProviderContext(ctx) - - ctx = r.annotateLogger(ctx, &model) - - payload := authorization.RemoveMembersPayload{ - ResourceType: &r.apiName, - Members: &[]authorization.Member{ - *authorization.NewMember(model.Role.ValueString(), model.Subject.ValueString()), - }, - } - - // Delete existing project role assignment - _, err := r.authorizationClient.RemoveMembers(ctx, model.ResourceId.ValueString()).RemoveMembersPayload(payload).Execute() - if err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, fmt.Sprintf("Error deleting %s role assignment", r.apiName), fmt.Sprintf("Calling API: %v", err)) - } - - ctx = core.LogResponse(ctx) - - tflog.Info(ctx, fmt.Sprintf("%s role assignment deleted", r.apiName)) -} - -// ImportState imports a resource into the Terraform state on success. -// The expected format of the project role assignment resource import identifier is: resource_id,role,subject -func (r *roleAssignmentResource) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) { - idParts := strings.Split(req.ID, core.Separator) - if len(idParts) != 3 || idParts[0] == "" || idParts[1] == "" || idParts[2] == "" { - core.LogAndAddError(ctx, &resp.Diagnostics, - fmt.Sprintf("Error importing %s role assignment", r.apiName), - fmt.Sprintf("Expected import identifier with format [resource_id],[role],[subject], got %q", req.ID), - ) - return - } - - resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("resource_id"), idParts[0])...) - resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("role"), idParts[1])...) - resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("subject"), idParts[2])...) - tflog.Info(ctx, fmt.Sprintf("%s role assignment state imported", r.apiName)) -} - -// Maps project role assignment fields to the provider's internal model. 
-func mapListMembersResponse(resp *authorization.ListMembersResponse, model *Model) error { - if resp == nil { - return fmt.Errorf("response input is nil") - } - if resp.Members == nil { - return fmt.Errorf("response members are nil") - } - if model == nil { - return fmt.Errorf("model input is nil") - } - - model.Id = utils.BuildInternalTerraformId(model.ResourceId.ValueString(), model.Role.ValueString(), model.Subject.ValueString()) - model.ResourceId = types.StringPointerValue(resp.ResourceId) - - for _, m := range *resp.Members { - if *m.Role == model.Role.ValueString() && *m.Subject == model.Subject.ValueString() { - model.Role = types.StringPointerValue(m.Role) - model.Subject = types.StringPointerValue(m.Subject) - return nil - } - } - return errRoleAssignmentNotFound -} - -func mapMembersResponse(resp *authorization.MembersResponse, model *Model) error { - listMembersResponse, err := typeConverter[authorization.ListMembersResponse](resp) - if err != nil { - return err - } - return mapListMembersResponse(listMembersResponse, model) -} - -// Helper to convert objects with equal JSON tags -func typeConverter[R any](data any) (*R, error) { - var result R - b, err := json.Marshal(&data) - if err != nil { - return nil, err - } - err = json.Unmarshal(b, &result) - if err != nil { - return nil, err - } - return &result, err -} - -// Build Createproject role assignmentPayload from provider's model -func (r *roleAssignmentResource) toCreatePayload(model *Model) (*authorization.AddMembersPayload, error) { - if model == nil { - return nil, fmt.Errorf("nil model") - } - - return &authorization.AddMembersPayload{ - ResourceType: &r.apiName, - Members: &[]authorization.Member{ - *authorization.NewMember(model.Role.ValueString(), model.Subject.ValueString()), - }, - }, nil -} - -func (r *roleAssignmentResource) annotateLogger(ctx context.Context, model *Model) context.Context { - resourceId := model.ResourceId.ValueString() - ctx = tflog.SetField(ctx, "resource_id", 
resourceId) - ctx = tflog.SetField(ctx, "subject", model.Subject.ValueString()) - ctx = tflog.SetField(ctx, "role", model.Role.ValueString()) - ctx = tflog.SetField(ctx, "resource_type", r.apiName) - return ctx -} - -// returns an error if duplicate role assignment exists -func (r *roleAssignmentResource) checkDuplicate(ctx context.Context, model Model) error { //nolint:gocritic // A read only copy is required since an api response is parsed into the model and this check should not affect the model parameter - listResp, err := r.authorizationClient.ListMembers(ctx, r.apiName, model.ResourceId.ValueString()).Subject(model.Subject.ValueString()).Execute() - if err != nil { - return err - } - - // Map response body to schema - err = mapListMembersResponse(listResp, &model) - - if err != nil { - if errors.Is(err, errRoleAssignmentNotFound) { - return nil - } - return err - } - return errRoleAssignmentDuplicateFound -} diff --git a/stackit/internal/services/authorization/testfiles/double-definition.tf b/stackit/internal/services/authorization/testfiles/double-definition.tf deleted file mode 100644 index 78db7598..00000000 --- a/stackit/internal/services/authorization/testfiles/double-definition.tf +++ /dev/null @@ -1,6 +0,0 @@ - -resource "stackit_authorization_project_role_assignment" "serviceaccount_duplicate" { - resource_id = var.project_id - role = "reader" - subject = var.test_service_account -} diff --git a/stackit/internal/services/authorization/testfiles/invalid-role.tf b/stackit/internal/services/authorization/testfiles/invalid-role.tf deleted file mode 100644 index 67ee43f6..00000000 --- a/stackit/internal/services/authorization/testfiles/invalid-role.tf +++ /dev/null @@ -1,6 +0,0 @@ - -resource "stackit_authorization_project_role_assignment" "invalid_role" { - resource_id = var.project_id - role = "thisrolesdoesnotexist" - subject = var.test_service_account -} diff --git a/stackit/internal/services/authorization/testfiles/organization-role.tf 
b/stackit/internal/services/authorization/testfiles/organization-role.tf deleted file mode 100644 index 800d8bc1..00000000 --- a/stackit/internal/services/authorization/testfiles/organization-role.tf +++ /dev/null @@ -1,6 +0,0 @@ - -resource "stackit_authorization_organization_role_assignment" "serviceaccount" { - resource_id = var.organization_id - role = "organization.member" - subject = var.test_service_account -} \ No newline at end of file diff --git a/stackit/internal/services/authorization/testfiles/prerequisites.tf b/stackit/internal/services/authorization/testfiles/prerequisites.tf deleted file mode 100644 index 4188842a..00000000 --- a/stackit/internal/services/authorization/testfiles/prerequisites.tf +++ /dev/null @@ -1,10 +0,0 @@ - -variable "project_id" {} -variable "test_service_account" {} -variable "organization_id" {} - -resource "stackit_authorization_project_role_assignment" "serviceaccount" { - resource_id = var.project_id - role = "reader" - subject = var.test_service_account -} diff --git a/stackit/internal/services/authorization/testfiles/project-owner.tf b/stackit/internal/services/authorization/testfiles/project-owner.tf deleted file mode 100644 index d1f288fd..00000000 --- a/stackit/internal/services/authorization/testfiles/project-owner.tf +++ /dev/null @@ -1,6 +0,0 @@ - -resource "stackit_authorization_project_role_assignment" "serviceaccount_project_owner" { - resource_id = var.project_id - role = "owner" - subject = var.test_service_account -} diff --git a/stackit/internal/services/authorization/utils/util.go b/stackit/internal/services/authorization/utils/util.go deleted file mode 100644 index 99694780..00000000 --- a/stackit/internal/services/authorization/utils/util.go +++ /dev/null @@ -1,29 +0,0 @@ -package utils - -import ( - "context" - "fmt" - - "github.com/hashicorp/terraform-plugin-framework/diag" - "github.com/stackitcloud/stackit-sdk-go/core/config" - "github.com/stackitcloud/stackit-sdk-go/services/authorization" - 
"github.com/stackitcloud/terraform-provider-stackit/stackit/internal/core" - "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/utils" -) - -func ConfigureClient(ctx context.Context, providerData *core.ProviderData, diags *diag.Diagnostics) *authorization.APIClient { - apiClientConfigOptions := []config.ConfigurationOption{ - config.WithCustomAuth(providerData.RoundTripper), - utils.UserAgentConfigOption(providerData.Version), - } - if providerData.AuthorizationCustomEndpoint != "" { - apiClientConfigOptions = append(apiClientConfigOptions, config.WithEndpoint(providerData.AuthorizationCustomEndpoint)) - } - apiClient, err := authorization.NewAPIClient(apiClientConfigOptions...) - if err != nil { - core.LogAndAddError(ctx, diags, "Error configuring API client", fmt.Sprintf("Configuring client: %v. This is an error related to the provider configuration, not to the resource configuration", err)) - return nil - } - - return apiClient -} diff --git a/stackit/internal/services/authorization/utils/util_test.go b/stackit/internal/services/authorization/utils/util_test.go deleted file mode 100644 index 794f255a..00000000 --- a/stackit/internal/services/authorization/utils/util_test.go +++ /dev/null @@ -1,93 +0,0 @@ -package utils - -import ( - "context" - "os" - "reflect" - "testing" - - "github.com/hashicorp/terraform-plugin-framework/diag" - sdkClients "github.com/stackitcloud/stackit-sdk-go/core/clients" - "github.com/stackitcloud/stackit-sdk-go/core/config" - "github.com/stackitcloud/stackit-sdk-go/services/authorization" - "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/core" - "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/utils" -) - -const ( - testVersion = "1.2.3" - testCustomEndpoint = "https://authorization-custom-endpoint.api.stackit.cloud" -) - -func TestConfigureClient(t *testing.T) { - /* mock authentication by setting service account token env variable */ - os.Clearenv() - err := 
os.Setenv(sdkClients.ServiceAccountToken, "mock-val") - if err != nil { - t.Errorf("error setting env variable: %v", err) - } - - type args struct { - providerData *core.ProviderData - } - tests := []struct { - name string - args args - wantErr bool - expected *authorization.APIClient - }{ - { - name: "default endpoint", - args: args{ - providerData: &core.ProviderData{ - Version: testVersion, - }, - }, - expected: func() *authorization.APIClient { - apiClient, err := authorization.NewAPIClient( - utils.UserAgentConfigOption(testVersion), - ) - if err != nil { - t.Errorf("error configuring client: %v", err) - } - return apiClient - }(), - wantErr: false, - }, - { - name: "custom endpoint", - args: args{ - providerData: &core.ProviderData{ - Version: testVersion, - AuthorizationCustomEndpoint: testCustomEndpoint, - }, - }, - expected: func() *authorization.APIClient { - apiClient, err := authorization.NewAPIClient( - utils.UserAgentConfigOption(testVersion), - config.WithEndpoint(testCustomEndpoint), - ) - if err != nil { - t.Errorf("error configuring client: %v", err) - } - return apiClient - }(), - wantErr: false, - }, - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - ctx := context.Background() - diags := diag.Diagnostics{} - - actual := ConfigureClient(ctx, tt.args.providerData, &diags) - if diags.HasError() != tt.wantErr { - t.Errorf("ConfigureClient() error = %v, want %v", diags.HasError(), tt.wantErr) - } - - if !reflect.DeepEqual(actual, tt.expected) { - t.Errorf("ConfigureClient() = %v, want %v", actual, tt.expected) - } - }) - } -} diff --git a/stackit/internal/services/postgresflexalpha/database/resource.go.bak_test.go b/stackit/internal/services/postgresflexalpha/database/resource_test.go.bak similarity index 99% rename from stackit/internal/services/postgresflexalpha/database/resource.go.bak_test.go rename to stackit/internal/services/postgresflexalpha/database/resource_test.go.bak index 0dcf38fb..c2cf6d96 100644 --- 
a/stackit/internal/services/postgresflexalpha/database/resource.go.bak_test.go +++ b/stackit/internal/services/postgresflexalpha/database/resource_test.go.bak @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package postgresflexa import ( diff --git a/stackit/internal/services/postgresflexalpha/instance/resource.go b/stackit/internal/services/postgresflexalpha/instance/resource.go index 50064f73..0c9af360 100644 --- a/stackit/internal/services/postgresflexalpha/instance/resource.go +++ b/stackit/internal/services/postgresflexalpha/instance/resource.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package postgresflexa import ( diff --git a/stackit/internal/services/postgresflexalpha/instance/resource_test.go b/stackit/internal/services/postgresflexalpha/instance/resource_test.go index 26e29cf8..314237e5 100644 --- a/stackit/internal/services/postgresflexalpha/instance/resource_test.go +++ b/stackit/internal/services/postgresflexalpha/instance/resource_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package postgresflexa import ( diff --git a/stackit/internal/services/postgresflexalpha/instance/use_state_for_unknown_if_flavor_unchanged_modifier.go b/stackit/internal/services/postgresflexalpha/instance/use_state_for_unknown_if_flavor_unchanged_modifier.go index a99b2e80..2a3d94b6 100644 --- a/stackit/internal/services/postgresflexalpha/instance/use_state_for_unknown_if_flavor_unchanged_modifier.go +++ b/stackit/internal/services/postgresflexalpha/instance/use_state_for_unknown_if_flavor_unchanged_modifier.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package postgresflexa import ( diff --git a/stackit/internal/services/postgresflexalpha/postgresflex_acc_test.go b/stackit/internal/services/postgresflexalpha/postgresflex_acc_test.go index 122633b3..4601ed11 100644 --- a/stackit/internal/services/postgresflexalpha/postgresflex_acc_test.go +++ b/stackit/internal/services/postgresflexalpha/postgresflex_acc_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package 
postgresflex_test import ( diff --git a/stackit/internal/services/postgresflexalpha/user/datasource.go b/stackit/internal/services/postgresflexalpha/user/datasource.go index cf701a3d..79861e19 100644 --- a/stackit/internal/services/postgresflexalpha/user/datasource.go +++ b/stackit/internal/services/postgresflexalpha/user/datasource.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package postgresflexa import ( diff --git a/stackit/internal/services/postgresflexalpha/user/datasource_test.go b/stackit/internal/services/postgresflexalpha/user/datasource_test.go index 10f39a1e..d49ef243 100644 --- a/stackit/internal/services/postgresflexalpha/user/datasource_test.go +++ b/stackit/internal/services/postgresflexalpha/user/datasource_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package postgresflexa import ( diff --git a/stackit/internal/services/postgresflexalpha/user/resource.go b/stackit/internal/services/postgresflexalpha/user/resource.go index 447251fe..fbc3035c 100644 --- a/stackit/internal/services/postgresflexalpha/user/resource.go +++ b/stackit/internal/services/postgresflexalpha/user/resource.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package postgresflexa import ( diff --git a/stackit/internal/services/postgresflexalpha/user/resource_test.go b/stackit/internal/services/postgresflexalpha/user/resource_test.go index cd2e472c..6dbe2e18 100644 --- a/stackit/internal/services/postgresflexalpha/user/resource_test.go +++ b/stackit/internal/services/postgresflexalpha/user/resource_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package postgresflexa import ( diff --git a/stackit/internal/services/postgresflexalpha/utils/util.go b/stackit/internal/services/postgresflexalpha/utils/util.go index 61ae36c6..e15548fa 100644 --- a/stackit/internal/services/postgresflexalpha/utils/util.go +++ b/stackit/internal/services/postgresflexalpha/utils/util.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git 
a/stackit/internal/services/postgresflexalpha/utils/util_test.go b/stackit/internal/services/postgresflexalpha/utils/util_test.go index 4af08da6..a5f17e37 100644 --- a/stackit/internal/services/postgresflexalpha/utils/util_test.go +++ b/stackit/internal/services/postgresflexalpha/utils/util_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git a/stackit/internal/services/sqlserverflexalpha/instance/datasource.go b/stackit/internal/services/sqlserverflexalpha/instance/datasource.go index 8ad7afc9..669f3aea 100644 --- a/stackit/internal/services/sqlserverflexalpha/instance/datasource.go +++ b/stackit/internal/services/sqlserverflexalpha/instance/datasource.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package sqlserverflex import ( diff --git a/stackit/internal/services/sqlserverflexalpha/instance/resource.go b/stackit/internal/services/sqlserverflexalpha/instance/resource.go index 72a0de30..6f29b50f 100644 --- a/stackit/internal/services/sqlserverflexalpha/instance/resource.go +++ b/stackit/internal/services/sqlserverflexalpha/instance/resource.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package sqlserverflex import ( diff --git a/stackit/internal/services/sqlserverflexalpha/instance/resource_test.go b/stackit/internal/services/sqlserverflexalpha/instance/resource_test.go index 66021845..8c329bec 100644 --- a/stackit/internal/services/sqlserverflexalpha/instance/resource_test.go +++ b/stackit/internal/services/sqlserverflexalpha/instance/resource_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package sqlserverflex import ( diff --git a/stackit/internal/services/sqlserverflexalpha/sqlserverflex_acc_test.go b/stackit/internal/services/sqlserverflexalpha/sqlserverflex_acc_test.go index e88ac599..e3b4fa2b 100644 --- a/stackit/internal/services/sqlserverflexalpha/sqlserverflex_acc_test.go +++ b/stackit/internal/services/sqlserverflexalpha/sqlserverflex_acc_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package 
sqlserverflex_test import ( diff --git a/stackit/internal/services/sqlserverflexalpha/testdata/resource-max.tf b/stackit/internal/services/sqlserverflexalpha/testdata/resource-max.tf index a0cf700a..1c3cdd15 100644 --- a/stackit/internal/services/sqlserverflexalpha/testdata/resource-max.tf +++ b/stackit/internal/services/sqlserverflexalpha/testdata/resource-max.tf @@ -1,3 +1,5 @@ +# Copyright (c) STACKIT + variable "project_id" {} variable "name" {} variable "acl1" {} diff --git a/stackit/internal/services/sqlserverflexalpha/testdata/resource-min.tf b/stackit/internal/services/sqlserverflexalpha/testdata/resource-min.tf index 3953ddf1..f53ef3e6 100644 --- a/stackit/internal/services/sqlserverflexalpha/testdata/resource-min.tf +++ b/stackit/internal/services/sqlserverflexalpha/testdata/resource-min.tf @@ -1,3 +1,5 @@ +# Copyright (c) STACKIT + variable "project_id" {} variable "name" {} variable "flavor_cpu" {} diff --git a/stackit/internal/services/sqlserverflexalpha/user/datasource.go b/stackit/internal/services/sqlserverflexalpha/user/datasource.go index cb0980f8..71946c6c 100644 --- a/stackit/internal/services/sqlserverflexalpha/user/datasource.go +++ b/stackit/internal/services/sqlserverflexalpha/user/datasource.go @@ -1,14 +1,19 @@ -package sqlserverflex +// Copyright (c) STACKIT + +package sqlserverflexalpha import ( "context" "fmt" "net/http" + "strconv" - "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/conversion" - sqlserverflexUtils "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/services/sqlserverflex/utils" - + "github.com/hashicorp/terraform-plugin-framework-validators/int64validator" "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/stackitcloud/terraform-provider-stackit/pkg/sqlserverflexalpha" + "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/conversion" + sqlserverflexUtils 
"github.com/stackitcloud/terraform-provider-stackit/stackit/internal/services/sqlserverflexalpha/utils" + "github.com/hashicorp/terraform-plugin-framework/datasource" "github.com/hashicorp/terraform-plugin-framework/schema/validator" "github.com/hashicorp/terraform-plugin-log/tflog" @@ -18,7 +23,6 @@ import ( "github.com/hashicorp/terraform-plugin-framework/datasource/schema" "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/stackitcloud/stackit-sdk-go/services/sqlserverflex" ) // Ensure the implementation satisfies the expected interfaces. @@ -27,15 +31,17 @@ var ( ) type DataSourceModel struct { - Id types.String `tfsdk:"id"` // needed by TF - UserId types.String `tfsdk:"user_id"` - InstanceId types.String `tfsdk:"instance_id"` - ProjectId types.String `tfsdk:"project_id"` - Username types.String `tfsdk:"username"` - Roles types.Set `tfsdk:"roles"` - Host types.String `tfsdk:"host"` - Port types.Int64 `tfsdk:"port"` - Region types.String `tfsdk:"region"` + Id types.String `tfsdk:"id"` // needed by TF + UserId types.Int64 `tfsdk:"user_id"` + InstanceId types.String `tfsdk:"instance_id"` + ProjectId types.String `tfsdk:"project_id"` + Username types.String `tfsdk:"username"` + Roles types.Set `tfsdk:"roles"` + Host types.String `tfsdk:"host"` + Port types.Int64 `tfsdk:"port"` + Region types.String `tfsdk:"region"` + Status types.String `tfsdk:"status"` + DefaultDatabase types.String `tfsdk:"default_database"` } // NewUserDataSource is a helper function to simplify the provider implementation. @@ -45,17 +51,25 @@ func NewUserDataSource() datasource.DataSource { // userDataSource is the data source implementation. type userDataSource struct { - client *sqlserverflex.APIClient + client *sqlserverflexalpha.APIClient providerData core.ProviderData } // Metadata returns the data source type name. 
-func (r *userDataSource) Metadata(_ context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) { - resp.TypeName = req.ProviderTypeName + "_sqlserverflex_user" +func (r *userDataSource) Metadata( + _ context.Context, + req datasource.MetadataRequest, + resp *datasource.MetadataResponse, +) { + resp.TypeName = req.ProviderTypeName + "_sqlserverflexalpha_user" } // Configure adds the provider configured client to the data source. -func (r *userDataSource) Configure(ctx context.Context, req datasource.ConfigureRequest, resp *datasource.ConfigureResponse) { +func (r *userDataSource) Configure( + ctx context.Context, + req datasource.ConfigureRequest, + resp *datasource.ConfigureResponse, +) { var ok bool r.providerData, ok = conversion.ParseProviderData(ctx, req.ProviderData, &resp.Diagnostics) if !ok { @@ -73,15 +87,17 @@ func (r *userDataSource) Configure(ctx context.Context, req datasource.Configure // Schema defines the schema for the data source. func (r *userDataSource) Schema(_ context.Context, _ datasource.SchemaRequest, resp *datasource.SchemaResponse) { descriptions := map[string]string{ - "main": "SQLServer Flex user data source schema. Must have a `region` specified in the provider configuration.", - "id": "Terraform's internal data source. ID. It is structured as \"`project_id`,`region`,`instance_id`,`user_id`\".", - "user_id": "User ID.", - "instance_id": "ID of the SQLServer Flex instance.", - "project_id": "STACKIT project ID to which the instance is associated.", - "username": "Username of the SQLServer Flex instance.", - "roles": "Database access levels for the user.", - "password": "Password of the user account.", - "region": "The resource region. If not defined, the provider region is used.", + "main": "SQLServer Flex user data source schema. Must have a `region` specified in the provider configuration.", + "id": "Terraform's internal data source ID.
It is structured as \"`project_id`,`region`,`instance_id`,`user_id`\".", + "user_id": "User ID.", + "instance_id": "ID of the SQLServer Flex instance.", + "project_id": "STACKIT project ID to which the instance is associated.", + "username": "Username of the SQLServer Flex instance.", + "roles": "Database access levels for the user.", + "password": "Password of the user account.", + "region": "The resource region. If not defined, the provider region is used.", + "status": "Status of the user.", + "default_database": "Default database of the user.", } resp.Schema = schema.Schema{ @@ -91,11 +107,11 @@ func (r *userDataSource) Schema(_ context.Context, _ datasource.SchemaRequest, r Description: descriptions["id"], Computed: true, }, - "user_id": schema.StringAttribute{ + "user_id": schema.Int64Attribute{ Description: descriptions["user_id"], Required: true, - Validators: []validator.String{ - validate.NoSeparator(), + Validators: []validator.Int64{ + int64validator.AtLeast(1), }, }, "instance_id": schema.StringAttribute{ @@ -134,12 +150,22 @@ func (r *userDataSource) Schema(_ context.Context, _ datasource.SchemaRequest, r Optional: true, Description: descriptions["region"], }, + "status": schema.StringAttribute{ + Description: descriptions["status"], + Computed: true, + }, + "default_database": schema.StringAttribute{ + Description: descriptions["default_database"], + Computed: true, + }, }, } } // Read refreshes the Terraform state with the latest data. -func (r *userDataSource) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { // nolint:gocritic // function signature required by Terraform +func (r *userDataSource) Read( + ctx context.Context, + req datasource.ReadRequest, + resp *datasource.ReadResponse, +) { // nolint:gocritic // function signature required by Terraform var model DataSourceModel diags := req.Config.Get(ctx, &model) resp.Diagnostics.Append(diags...)
@@ -151,21 +177,26 @@ func (r *userDataSource) Read(ctx context.Context, req datasource.ReadRequest, r projectId := model.ProjectId.ValueString() instanceId := model.InstanceId.ValueString() - userId := model.UserId.ValueString() + userId := model.UserId.ValueInt64() region := r.providerData.GetRegionWithOverride(model.Region) ctx = tflog.SetField(ctx, "project_id", projectId) ctx = tflog.SetField(ctx, "instance_id", instanceId) ctx = tflog.SetField(ctx, "user_id", userId) ctx = tflog.SetField(ctx, "region", region) - recordSetResp, err := r.client.GetUser(ctx, projectId, instanceId, userId, region).Execute() + recordSetResp, err := r.client.GetUserRequest(ctx, projectId, region, instanceId, userId).Execute() if err != nil { utils.LogError( ctx, &resp.Diagnostics, err, "Reading user", - fmt.Sprintf("User with ID %q or instance with ID %q does not exist in project %q.", userId, instanceId, projectId), + fmt.Sprintf( + "User with ID %d or instance with ID %q does not exist in project %q.", + userId, + instanceId, + projectId, + ), map[int]string{ http.StatusForbidden: fmt.Sprintf("Project with ID %q not found or forbidden access", projectId), }, @@ -179,7 +210,12 @@ func (r *userDataSource) Read(ctx context.Context, req datasource.ReadRequest, r // Map response body to schema and populate Computed attribute values err = mapDataSourceFields(recordSetResp, &model, region) if err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, "Error reading user", fmt.Sprintf("Processing API payload: %v", err)) + core.LogAndAddError( + ctx, + &resp.Diagnostics, + "Error reading user", + fmt.Sprintf("Processing API payload: %v", err), + ) return } @@ -189,38 +225,38 @@ func (r *userDataSource) Read(ctx context.Context, req datasource.ReadRequest, r if resp.Diagnostics.HasError() { return } - tflog.Info(ctx, "SQLServer Flex user read") + tflog.Info(ctx, "SQLServer Flex user read") } -func mapDataSourceFields(userResp *sqlserverflex.GetUserResponse, model *DataSourceModel,
region string) error { - if userResp == nil || userResp.Item == nil { +func mapDataSourceFields(userResp *sqlserverflexalpha.GetUserResponse, model *DataSourceModel, region string) error { + if userResp == nil { return fmt.Errorf("response is nil") } if model == nil { return fmt.Errorf("model input is nil") } - user := userResp.Item + user := userResp - var userId string - if model.UserId.ValueString() != "" { - userId = model.UserId.ValueString() + var userId int64 + if model.UserId.ValueInt64() != 0 { + userId = model.UserId.ValueInt64() } else if user.Id != nil { userId = *user.Id } else { return fmt.Errorf("user id not present") } model.Id = utils.BuildInternalTerraformId( - model.ProjectId.ValueString(), region, model.InstanceId.ValueString(), userId, + model.ProjectId.ValueString(), region, model.InstanceId.ValueString(), strconv.FormatInt(userId, 10), ) - model.UserId = types.StringValue(userId) + model.UserId = types.Int64Value(userId) model.Username = types.StringPointerValue(user.Username) if user.Roles == nil { model.Roles = types.SetNull(types.StringType) } else { - roles := []attr.Value{} + var roles []attr.Value for _, role := range *user.Roles { - roles = append(roles, types.StringValue(role)) + roles = append(roles, types.StringValue(string(role))) } rolesSet, diags := types.SetValue(types.StringType, roles) if diags.HasError() { @@ -231,5 +267,8 @@ func mapDataSourceFields(userResp *sqlserverflex.GetUserResponse, model *DataSou model.Host = types.StringPointerValue(user.Host) model.Port = types.Int64PointerValue(user.Port) model.Region = types.StringValue(region) + model.Status = types.StringPointerValue(user.Status) + model.DefaultDatabase = types.StringPointerValue(user.DefaultDatabase) + return nil } diff --git a/stackit/internal/services/sqlserverflexalpha/user/datasource_test.go b/stackit/internal/services/sqlserverflexalpha/user/datasource_test.go index b5179c44..5f99a8e5 100644 --- 
a/stackit/internal/services/sqlserverflexalpha/user/datasource_test.go +++ b/stackit/internal/services/sqlserverflexalpha/user/datasource_test.go @@ -1,4 +1,6 @@ -package sqlserverflex +// Copyright (c) STACKIT + +package sqlserverflexalpha import ( "testing" @@ -7,84 +9,87 @@ import ( "github.com/hashicorp/terraform-plugin-framework/attr" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/stackitcloud/stackit-sdk-go/core/utils" - "github.com/stackitcloud/stackit-sdk-go/services/sqlserverflex" + "github.com/stackitcloud/terraform-provider-stackit/pkg/sqlserverflexalpha" ) func TestMapDataSourceFields(t *testing.T) { const testRegion = "region" tests := []struct { description string - input *sqlserverflex.GetUserResponse + input *sqlserverflexalpha.GetUserResponse region string expected DataSourceModel isValid bool }{ { "default_values", - &sqlserverflex.GetUserResponse{ - Item: &sqlserverflex.UserResponseUser{}, - }, + &sqlserverflexalpha.GetUserResponse{}, testRegion, DataSourceModel{ - Id: types.StringValue("pid,region,iid,uid"), - UserId: types.StringValue("uid"), - InstanceId: types.StringValue("iid"), - ProjectId: types.StringValue("pid"), - Username: types.StringNull(), - Roles: types.SetNull(types.StringType), - Host: types.StringNull(), - Port: types.Int64Null(), - Region: types.StringValue(testRegion), + Id: types.StringValue("pid,region,iid,1"), + UserId: types.Int64Value(1), + InstanceId: types.StringValue("iid"), + ProjectId: types.StringValue("pid"), + Username: types.StringNull(), + Roles: types.SetNull(types.StringType), + Host: types.StringNull(), + Port: types.Int64Null(), + Region: types.StringValue(testRegion), + Status: types.StringNull(), + DefaultDatabase: types.StringNull(), }, true, }, { "simple_values", - &sqlserverflex.GetUserResponse{ - Item: &sqlserverflex.UserResponseUser{ - Roles: &[]string{ - "role_1", - "role_2", - "", - }, - Username: utils.Ptr("username"), - Host: utils.Ptr("host"), - Port: utils.Ptr(int64(1234)), 
+ &sqlserverflexalpha.GetUserResponse{ + + Roles: &[]sqlserverflexalpha.UserRole{ + "role_1", + "role_2", + "", }, + Username: utils.Ptr("username"), + Host: utils.Ptr("host"), + Port: utils.Ptr(int64(1234)), + Status: utils.Ptr("active"), + DefaultDatabase: utils.Ptr("default_db"), }, testRegion, DataSourceModel{ - Id: types.StringValue("pid,region,iid,uid"), - UserId: types.StringValue("uid"), + Id: types.StringValue("pid,region,iid,1"), + UserId: types.Int64Value(1), InstanceId: types.StringValue("iid"), ProjectId: types.StringValue("pid"), Username: types.StringValue("username"), - Roles: types.SetValueMust(types.StringType, []attr.Value{ - types.StringValue("role_1"), - types.StringValue("role_2"), - types.StringValue(""), - }), - Host: types.StringValue("host"), - Port: types.Int64Value(1234), - Region: types.StringValue(testRegion), + Roles: types.SetValueMust( + types.StringType, []attr.Value{ + types.StringValue("role_1"), + types.StringValue("role_2"), + types.StringValue(""), + }, + ), + Host: types.StringValue("host"), + Port: types.Int64Value(1234), + Region: types.StringValue(testRegion), + Status: types.StringValue("active"), + DefaultDatabase: types.StringValue("default_db"), }, true, }, { "null_fields_and_int_conversions", - &sqlserverflex.GetUserResponse{ - Item: &sqlserverflex.UserResponseUser{ - Id: utils.Ptr("uid"), - Roles: &[]string{}, - Username: nil, - Host: nil, - Port: utils.Ptr(int64(2123456789)), - }, + &sqlserverflexalpha.GetUserResponse{ + Id: utils.Ptr(int64(1)), + Roles: &[]sqlserverflexalpha.UserRole{}, + Username: nil, + Host: nil, + Port: utils.Ptr(int64(2123456789)), }, testRegion, DataSourceModel{ - Id: types.StringValue("pid,region,iid,uid"), - UserId: types.StringValue("uid"), + Id: types.StringValue("pid,region,iid,1"), + UserId: types.Int64Value(1), InstanceId: types.StringValue("iid"), ProjectId: types.StringValue("pid"), Username: types.StringNull(), @@ -104,41 +109,41 @@ func TestMapDataSourceFields(t *testing.T) { }, { 
"nil_response_2", - &sqlserverflex.GetUserResponse{}, + &sqlserverflexalpha.GetUserResponse{}, testRegion, DataSourceModel{}, false, }, { "no_resource_id", - &sqlserverflex.GetUserResponse{ - Item: &sqlserverflex.UserResponseUser{}, - }, + &sqlserverflexalpha.GetUserResponse{}, testRegion, DataSourceModel{}, false, }, } for _, tt := range tests { - t.Run(tt.description, func(t *testing.T) { - state := &DataSourceModel{ - ProjectId: tt.expected.ProjectId, - InstanceId: tt.expected.InstanceId, - UserId: tt.expected.UserId, - } - err := mapDataSourceFields(tt.input, state, tt.region) - if !tt.isValid && err == nil { - t.Fatalf("Should have failed") - } - if tt.isValid && err != nil { - t.Fatalf("Should not have failed: %v", err) - } - if tt.isValid { - diff := cmp.Diff(state, &tt.expected) - if diff != "" { - t.Fatalf("Data does not match: %s", diff) + t.Run( + tt.description, func(t *testing.T) { + state := &DataSourceModel{ + ProjectId: tt.expected.ProjectId, + InstanceId: tt.expected.InstanceId, + UserId: tt.expected.UserId, } - } - }) + err := mapDataSourceFields(tt.input, state, tt.region) + if !tt.isValid && err == nil { + t.Fatalf("Should have failed") + } + if tt.isValid && err != nil { + t.Fatalf("Should not have failed: %v", err) + } + if tt.isValid { + diff := cmp.Diff(state, &tt.expected) + if diff != "" { + t.Fatalf("Data does not match: %s", diff) + } + } + }, + ) } } diff --git a/stackit/internal/services/sqlserverflexalpha/user/resource.go b/stackit/internal/services/sqlserverflexalpha/user/resource.go index e73fb9b0..f52d990b 100644 --- a/stackit/internal/services/sqlserverflexalpha/user/resource.go +++ b/stackit/internal/services/sqlserverflexalpha/user/resource.go @@ -1,12 +1,17 @@ -package sqlserverflex +// Copyright (c) STACKIT + +package sqlserverflexalpha import ( "context" + "errors" "fmt" "net/http" + "strconv" "strings" - sqlserverflexUtils "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/services/sqlserverflex/utils" + 
"github.com/stackitcloud/terraform-provider-stackit/pkg/sqlserverflexalpha" + sqlserverflexalphaUtils "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/services/sqlserverflexalpha/utils" "github.com/hashicorp/terraform-plugin-framework/schema/validator" "github.com/hashicorp/terraform-plugin-log/tflog" @@ -24,7 +29,6 @@ import ( "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/stackitcloud/stackit-sdk-go/core/oapierror" - "github.com/stackitcloud/stackit-sdk-go/services/sqlserverflex" ) // Ensure the implementation satisfies the expected interfaces. @@ -36,16 +40,18 @@ var ( ) type Model struct { - Id types.String `tfsdk:"id"` // needed by TF - UserId types.String `tfsdk:"user_id"` - InstanceId types.String `tfsdk:"instance_id"` - ProjectId types.String `tfsdk:"project_id"` - Username types.String `tfsdk:"username"` - Roles types.Set `tfsdk:"roles"` - Password types.String `tfsdk:"password"` - Host types.String `tfsdk:"host"` - Port types.Int64 `tfsdk:"port"` - Region types.String `tfsdk:"region"` + Id types.String `tfsdk:"id"` // needed by TF + UserId types.Int64 `tfsdk:"user_id"` + InstanceId types.String `tfsdk:"instance_id"` + ProjectId types.String `tfsdk:"project_id"` + Username types.String `tfsdk:"username"` + Roles types.Set `tfsdk:"roles"` + Password types.String `tfsdk:"password"` + Host types.String `tfsdk:"host"` + Port types.Int64 `tfsdk:"port"` + Region types.String `tfsdk:"region"` + Status types.String `tfsdk:"status"` + DefaultDatabase types.String `tfsdk:"default_database"` } // NewUserResource is a helper function to simplify the provider implementation. @@ -55,13 +61,13 @@ func NewUserResource() resource.Resource { // userResource is the resource implementation. 
type userResource struct { - client *sqlserverflex.APIClient + client *sqlserverflexalpha.APIClient providerData core.ProviderData } // Metadata returns the resource type name. func (r *userResource) Metadata(_ context.Context, req resource.MetadataRequest, resp *resource.MetadataResponse) { - resp.TypeName = req.ProviderTypeName + "_sqlserverflex_user" + resp.TypeName = req.ProviderTypeName + "_sqlserverflexalpha_user" } // Configure adds the provider configured client to the resource. @@ -72,17 +78,21 @@ func (r *userResource) Configure(ctx context.Context, req resource.ConfigureRequ return } - apiClient := sqlserverflexUtils.ConfigureClient(ctx, &r.providerData, &resp.Diagnostics) + apiClient := sqlserverflexalphaUtils.ConfigureClient(ctx, &r.providerData, &resp.Diagnostics) if resp.Diagnostics.HasError() { return } r.client = apiClient - tflog.Info(ctx, "SQLServer Flex user client configured") + tflog.Info(ctx, "SQLServer Alpha Flex user client configured") } // ModifyPlan implements resource.ResourceWithModifyPlan. // Use the modifier to set the effective region in the current plan. -func (r *userResource) ModifyPlan(ctx context.Context, req resource.ModifyPlanRequest, resp *resource.ModifyPlanResponse) { // nolint:gocritic // function signature required by Terraform +func (r *userResource) ModifyPlan( + ctx context.Context, + req resource.ModifyPlanRequest, + resp *resource.ModifyPlanResponse, +) { // nolint:gocritic // function signature required by Terraform var configModel Model // skip initial empty configuration to avoid follow-up errors if req.Config.Raw.IsNull() { @@ -113,14 +123,16 @@ func (r *userResource) ModifyPlan(ctx context.Context, req resource.ModifyPlanRe // Schema defines the schema for the resource. func (r *userResource) Schema(_ context.Context, _ resource.SchemaRequest, resp *resource.SchemaResponse) { descriptions := map[string]string{ - "main": "SQLServer Flex user resource schema. 
Must have a `region` specified in the provider configuration.", - "id": "Terraform's internal resource ID. It is structured as \"`project_id`,`region`,`instance_id`,`user_id`\".", - "user_id": "User ID.", - "instance_id": "ID of the SQLServer Flex instance.", - "project_id": "STACKIT project ID to which the instance is associated.", - "username": "Username of the SQLServer Flex instance.", - "roles": "Database access levels for the user. The values for the default roles are: `##STACKIT_DatabaseManager##`, `##STACKIT_LoginManager##`, `##STACKIT_ProcessManager##`, `##STACKIT_ServerManager##`, `##STACKIT_SQLAgentManager##`, `##STACKIT_SQLAgentUser##`", - "password": "Password of the user account.", + "main": "SQLServer Flex user resource schema. Must have a `region` specified in the provider configuration.", + "id": "Terraform's internal resource ID. It is structured as \"`project_id`,`region`,`instance_id`,`user_id`\".", + "user_id": "User ID.", + "instance_id": "ID of the SQLServer Flex instance.", + "project_id": "STACKIT project ID to which the instance is associated.", + "username": "Username of the SQLServer Flex user.", + "roles": "Database access levels for the user. The values for the default roles are: `##STACKIT_DatabaseManager##`, `##STACKIT_LoginManager##`, `##STACKIT_ProcessManager##`, `##STACKIT_ServerManager##`, `##STACKIT_SQLAgentManager##`, `##STACKIT_SQLAgentUser##`", + "password": "Password of the user account.", + "status": "Status of the user.", + "default_database": "Default database of the user.", } resp.Schema = schema.Schema{ @@ -203,12 +215,22 @@ func (r *userResource) Schema(_ context.Context, _ resource.SchemaRequest, resp stringplanmodifier.RequiresReplace(), }, }, + "status": schema.StringAttribute{ + Description: descriptions["status"], + Computed: true, + }, + "default_database": schema.StringAttribute{ + Description: descriptions["default_database"], + Computed: true, + }, }, } } // Create creates the resource and sets the initial Terraform state.
-func (r *userResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { // nolint:gocritic // function signature required by Terraform +func (r *userResource) Create( + ctx context.Context, + req resource.CreateRequest, + resp *resource.CreateResponse, +) { // nolint:gocritic // function signature required by Terraform var model Model diags := req.Plan.Get(ctx, &model) resp.Diagnostics.Append(diags...) @@ -226,7 +248,7 @@ func (r *userResource) Create(ctx context.Context, req resource.CreateRequest, r ctx = tflog.SetField(ctx, "instance_id", instanceId) ctx = tflog.SetField(ctx, "region", region) - var roles []string + var roles []sqlserverflexalpha.UserRole if !(model.Roles.IsNull() || model.Roles.IsUnknown()) { diags = model.Roles.ElementsAs(ctx, &roles, false) resp.Diagnostics.Append(diags...) @@ -242,7 +264,12 @@ func (r *userResource) Create(ctx context.Context, req resource.CreateRequest, r return } // Create new user - userResp, err := r.client.CreateUser(ctx, projectId, instanceId, region).CreateUserPayload(*payload).Execute() + userResp, err := r.client.CreateUserRequest( + ctx, + projectId, + region, + instanceId, + ).CreateUserRequestPayload(*payload).Execute() if err != nil { core.LogAndAddError(ctx, &resp.Diagnostics, "Error creating user", fmt.Sprintf("Calling API: %v", err)) return @@ -250,17 +277,27 @@ func (r *userResource) Create(ctx context.Context, req resource.CreateRequest, r ctx = core.LogResponse(ctx) - if userResp == nil || userResp.Item == nil || userResp.Item.Id == nil || *userResp.Item.Id == "" { - core.LogAndAddError(ctx, &resp.Diagnostics, "Error creating user", "API didn't return user Id. A user might have been created") + if userResp == nil || userResp.Id == nil || *userResp.Id == 0 { + core.LogAndAddError( + ctx, + &resp.Diagnostics, + "Error creating user", + "API didn't return user Id. 
A user might have been created", + ) return } - userId := *userResp.Item.Id + userId := *userResp.Id ctx = tflog.SetField(ctx, "user_id", userId) // Map response body to schema err = mapFieldsCreate(userResp, &model, region) if err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, "Error creating user", fmt.Sprintf("Processing API payload: %v", err)) + core.LogAndAddError( + ctx, + &resp.Diagnostics, + "Error creating user", + fmt.Sprintf("Processing API payload: %v", err), + ) return } // Set state to fully populated data @@ -273,7 +310,11 @@ func (r *userResource) Create(ctx context.Context, req resource.CreateRequest, r } // Read refreshes the Terraform state with the latest data. -func (r *userResource) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { // nolint:gocritic // function signature required by Terraform +func (r *userResource) Read( + ctx context.Context, + req resource.ReadRequest, + resp *resource.ReadResponse, +) { // nolint:gocritic // function signature required by Terraform var model Model diags := req.State.Get(ctx, &model) resp.Diagnostics.Append(diags...) 
@@ -285,16 +326,21 @@ func (r *userResource) Read(ctx context.Context, req resource.ReadRequest, resp projectId := model.ProjectId.ValueString() instanceId := model.InstanceId.ValueString() - userId := model.UserId.ValueString() + userId := model.UserId.ValueInt64() region := r.providerData.GetRegionWithOverride(model.Region) ctx = tflog.SetField(ctx, "project_id", projectId) ctx = tflog.SetField(ctx, "instance_id", instanceId) ctx = tflog.SetField(ctx, "user_id", userId) ctx = tflog.SetField(ctx, "region", region) - recordSetResp, err := r.client.GetUser(ctx, projectId, instanceId, userId, region).Execute() + recordSetResp, err := r.client.GetUserRequest(ctx, projectId, region, instanceId, userId).Execute() if err != nil { - oapiErr, ok := err.(*oapierror.GenericOpenAPIError) //nolint:errorlint //complaining that error.As should be used to catch wrapped errors, but this error should not be wrapped + var oapiErr *oapierror.GenericOpenAPIError + ok := errors.As(err, &oapiErr) if ok && oapiErr.StatusCode == http.StatusNotFound { resp.State.RemoveResource(ctx) return @@ -308,7 +354,12 @@ func (r *userResource) Read(ctx context.Context, req resource.ReadRequest, resp // Map response body to schema err = mapFields(recordSetResp, &model, region) if err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, "Error reading user", fmt.Sprintf("Processing API payload: %v", err)) + core.LogAndAddError( + ctx, + &resp.Diagnostics, + "Error reading user", + fmt.Sprintf("Processing API payload: %v", err), + ) return } @@ -322,13 +373,21 @@ func (r *userResource) Read(ctx context.Context, req resource.ReadRequest, resp } // Update updates the resource and sets the updated Terraform state on success.
-func (r *userResource) Update(ctx context.Context, _ resource.UpdateRequest, resp *resource.UpdateResponse) { // nolint:gocritic // function signature required by Terraform +func (r *userResource) Update( + ctx context.Context, + _ resource.UpdateRequest, + resp *resource.UpdateResponse, +) { // nolint:gocritic // function signature required by Terraform // Update shouldn't be called core.LogAndAddError(ctx, &resp.Diagnostics, "Error updating user", "User can't be updated") } // Delete deletes the resource and removes the Terraform state on success. -func (r *userResource) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { // nolint:gocritic // function signature required by Terraform +func (r *userResource) Delete( + ctx context.Context, + req resource.DeleteRequest, + resp *resource.DeleteResponse, +) { // nolint:gocritic // function signature required by Terraform // Retrieve values from plan var model Model diags := req.State.Get(ctx, &model) @@ -341,7 +400,7 @@ func (r *userResource) Delete(ctx context.Context, req resource.DeleteRequest, r projectId := model.ProjectId.ValueString() instanceId := model.InstanceId.ValueString() - userId := model.UserId.ValueString() + userId := model.UserId.ValueInt64() region := model.Region.ValueString() ctx = tflog.SetField(ctx, "project_id", projectId) ctx = tflog.SetField(ctx, "instance_id", instanceId) @@ -349,7 +408,7 @@ func (r *userResource) Delete(ctx context.Context, req resource.DeleteRequest, r ctx = tflog.SetField(ctx, "region", region) // Delete existing record set - err := r.client.DeleteUser(ctx, projectId, instanceId, userId, region).Execute() + err := r.client.DeleteUserRequest(ctx, projectId, region, instanceId, userId).Execute() if err != nil { core.LogAndAddError(ctx, &resp.Diagnostics, "Error deleting user", fmt.Sprintf("Calling API: %v", err)) return @@ -362,12 +421,20 @@ func (r *userResource) Delete(ctx context.Context, req resource.DeleteRequest, r // ImportState 
imports a resource into the Terraform state on success. // The expected format of the resource import identifier is: project_id,region,instance_id,user_id -func (r *userResource) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) { +func (r *userResource) ImportState( + ctx context.Context, + req resource.ImportStateRequest, + resp *resource.ImportStateResponse, +) { idParts := strings.Split(req.ID, core.Separator) if len(idParts) != 4 || idParts[0] == "" || idParts[1] == "" || idParts[2] == "" || idParts[3] == "" { - core.LogAndAddError(ctx, &resp.Diagnostics, + core.LogAndAddError( + ctx, &resp.Diagnostics, "Error importing user", - fmt.Sprintf("Expected import identifier with format [project_id],[region],[instance_id],[user_id], got %q", req.ID), + fmt.Sprintf( + "Expected import identifier with format [project_id],[region],[instance_id],[user_id], got %q", + req.ID, + ), ) return } @@ -376,28 +443,35 @@ func (r *userResource) ImportState(ctx context.Context, req resource.ImportState resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("project_id"), idParts[0])...) resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("region"), idParts[1])...) resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("instance_id"), idParts[2])...) - resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("user_id"), idParts[3])...) + importedUserId, err := strconv.ParseInt(idParts[3], 10, 64) + if err != nil { + core.LogAndAddError( + ctx, + &resp.Diagnostics, + "Error importing user", + fmt.Sprintf("Parsing user_id %q as int64: %v", idParts[3], err), + ) + return + } + resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("user_id"), importedUserId)...) - core.LogAndAddWarning(ctx, &resp.Diagnostics, + core.LogAndAddWarning( + ctx, + &resp.Diagnostics, "SQLServer Flex user imported with empty password", "The user password is not imported as it is only available upon creation of a new user.
The password field will be empty.", ) tflog.Info(ctx, "SQLServer Flex user state imported") } -func mapFieldsCreate(userResp *sqlserverflex.CreateUserResponse, model *Model, region string) error { - if userResp == nil || userResp.Item == nil { +func mapFieldsCreate(userResp *sqlserverflexalpha.CreateUserResponse, model *Model, region string) error { + if userResp == nil { return fmt.Errorf("response is nil") } if model == nil { return fmt.Errorf("model input is nil") } - user := userResp.Item + user := userResp if user.Id == nil { return fmt.Errorf("user id not present") } userId := *user.Id - model.Id = utils.BuildInternalTerraformId(model.ProjectId.ValueString(), region, model.InstanceId.ValueString(), userId) - model.UserId = types.StringValue(userId) + model.Id = utils.BuildInternalTerraformId( + model.ProjectId.ValueString(), + region, + model.InstanceId.ValueString(), + strconv.FormatInt(userId, 10), + ) + model.UserId = types.Int64Value(userId) model.Username = types.StringPointerValue(user.Username) if user.Password == nil { @@ -406,9 +480,9 @@ func mapFieldsCreate(userResp *sqlserverflex.CreateUserResponse, model *Model, r model.Password = types.StringValue(*user.Password) if user.Roles != nil { - roles := []attr.Value{} + var roles []attr.Value for _, role := range *user.Roles { - roles = append(roles, types.StringValue(role)) + roles = append(roles, types.StringValue(string(role))) } rolesSet, diags := types.SetValue(types.StringType, roles) if diags.HasError() { @@ -424,21 +498,24 @@ func mapFieldsCreate(userResp *sqlserverflex.CreateUserResponse, model *Model, r model.Host = types.StringPointerValue(user.Host) model.Port = types.Int64PointerValue(user.Port) model.Region = types.StringValue(region) + model.Status = types.StringPointerValue(user.Status) + model.DefaultDatabase = types.StringPointerValue(user.DefaultDatabase) + return nil } -func mapFields(userResp *sqlserverflex.GetUserResponse, model *Model, region string) error { - if userResp == nil 
|| userResp.Item == nil { +func mapFields(userResp *sqlserverflexalpha.GetUserResponse, model *Model, region string) error { + if userResp == nil { return fmt.Errorf("response is nil") } if model == nil { return fmt.Errorf("model input is nil") } - user := userResp.Item + user := userResp - var userId string - if model.UserId.ValueString() != "" { - userId = model.UserId.ValueString() + var userId int64 + if model.UserId.ValueInt64() != 0 { + userId = model.UserId.ValueInt64() } else if user.Id != nil { userId = *user.Id } else { @@ -448,15 +525,15 @@ func mapFields(userResp *sqlserverflex.GetUserResponse, model *Model, region str model.ProjectId.ValueString(), region, model.InstanceId.ValueString(), - userId, + strconv.FormatInt(userId, 10), ) - model.UserId = types.StringValue(userId) + model.UserId = types.Int64Value(userId) model.Username = types.StringPointerValue(user.Username) if user.Roles != nil { - roles := []attr.Value{} + var roles []attr.Value for _, role := range *user.Roles { - roles = append(roles, types.StringValue(role)) + roles = append(roles, types.StringValue(string(role))) } rolesSet, diags := types.SetValue(types.StringType, roles) if diags.HasError() { @@ -475,13 +552,17 @@ func mapFields(userResp *sqlserverflex.GetUserResponse, model *Model, region str return nil } -func toCreatePayload(model *Model, roles []string) (*sqlserverflex.CreateUserPayload, error) { +func toCreatePayload( + model *Model, + roles []sqlserverflexalpha.UserRole, +) (*sqlserverflexalpha.CreateUserRequestPayload, error) { if model == nil { return nil, fmt.Errorf("nil model") } - return &sqlserverflex.CreateUserPayload{ - Username: conversion.StringValueToPointer(model.Username), - Roles: &roles, + return &sqlserverflexalpha.CreateUserRequestPayload{ + Username: conversion.StringValueToPointer(model.Username), + DefaultDatabase: conversion.StringValueToPointer(model.DefaultDatabase), + Roles: &roles, }, nil } diff --git 
a/stackit/internal/services/sqlserverflexalpha/user/resource_test.go b/stackit/internal/services/sqlserverflexalpha/user/resource_test.go index 058b213d..8277203a 100644 --- a/stackit/internal/services/sqlserverflexalpha/user/resource_test.go +++ b/stackit/internal/services/sqlserverflexalpha/user/resource_test.go @@ -1,4 +1,6 @@ -package sqlserverflex +// Copyright (c) STACKIT + +package sqlserverflexalpha import ( "testing" @@ -7,30 +9,28 @@ import ( "github.com/hashicorp/terraform-plugin-framework/attr" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/stackitcloud/stackit-sdk-go/core/utils" - "github.com/stackitcloud/stackit-sdk-go/services/sqlserverflex" + "github.com/stackitcloud/terraform-provider-stackit/pkg/sqlserverflexalpha" ) func TestMapFieldsCreate(t *testing.T) { const testRegion = "region" tests := []struct { description string - input *sqlserverflex.CreateUserResponse + input *sqlserverflexalpha.CreateUserResponse region string expected Model isValid bool }{ { "default_values", - &sqlserverflex.CreateUserResponse{ - Item: &sqlserverflex.SingleUser{ - Id: utils.Ptr("uid"), - Password: utils.Ptr(""), - }, + &sqlserverflexalpha.CreateUserResponse{ + Id: utils.Ptr(int64(1)), + Password: utils.Ptr(""), }, testRegion, Model{ - Id: types.StringValue("pid,region,iid,uid"), - UserId: types.StringValue("uid"), + Id: types.StringValue("pid,region,iid,1"), + UserId: types.Int64Value(1), InstanceId: types.StringValue("iid"), ProjectId: types.StringValue("pid"), Username: types.StringNull(), @@ -44,63 +44,67 @@ func TestMapFieldsCreate(t *testing.T) { }, { "simple_values", - &sqlserverflex.CreateUserResponse{ - Item: &sqlserverflex.SingleUser{ - Id: utils.Ptr("uid"), - Roles: &[]string{ - "role_1", - "role_2", - "", - }, - Username: utils.Ptr("username"), - Password: utils.Ptr("password"), - Host: utils.Ptr("host"), - Port: utils.Ptr(int64(1234)), + &sqlserverflexalpha.CreateUserResponse{ + Id: utils.Ptr(int64(2)), + Roles: 
&[]sqlserverflexalpha.UserRole{ + "role_1", + "role_2", + "", }, + Username: utils.Ptr("username"), + Password: utils.Ptr("password"), + Host: utils.Ptr("host"), + Port: utils.Ptr(int64(1234)), + Status: utils.Ptr("status"), + DefaultDatabase: utils.Ptr("default_db"), }, testRegion, Model{ - Id: types.StringValue("pid,region,iid,uid"), - UserId: types.StringValue("uid"), + Id: types.StringValue("pid,region,iid,2"), + UserId: types.Int64Value(2), InstanceId: types.StringValue("iid"), ProjectId: types.StringValue("pid"), Username: types.StringValue("username"), - Roles: types.SetValueMust(types.StringType, []attr.Value{ - types.StringValue("role_1"), - types.StringValue("role_2"), - types.StringValue(""), - }), - Password: types.StringValue("password"), - Host: types.StringValue("host"), - Port: types.Int64Value(1234), - Region: types.StringValue(testRegion), + Roles: types.SetValueMust( + types.StringType, []attr.Value{ + types.StringValue("role_1"), + types.StringValue("role_2"), + types.StringValue(""), + }, + ), + Password: types.StringValue("password"), + Host: types.StringValue("host"), + Port: types.Int64Value(1234), + Region: types.StringValue(testRegion), + Status: types.StringValue("status"), + DefaultDatabase: types.StringValue("default_db"), }, true, }, { "null_fields_and_int_conversions", - &sqlserverflex.CreateUserResponse{ - Item: &sqlserverflex.SingleUser{ - Id: utils.Ptr("uid"), - Roles: &[]string{}, - Username: nil, - Password: utils.Ptr(""), - Host: nil, - Port: utils.Ptr(int64(2123456789)), - }, + &sqlserverflexalpha.CreateUserResponse{ + Id: utils.Ptr(int64(3)), + Roles: &[]sqlserverflexalpha.UserRole{}, + Username: nil, + Password: utils.Ptr(""), + Host: nil, + Port: utils.Ptr(int64(2123456789)), }, testRegion, Model{ - Id: types.StringValue("pid,region,iid,uid"), - UserId: types.StringValue("uid"), - InstanceId: types.StringValue("iid"), - ProjectId: types.StringValue("pid"), - Username: types.StringNull(), - Roles: 
types.SetValueMust(types.StringType, []attr.Value{}), - Password: types.StringValue(""), - Host: types.StringNull(), - Port: types.Int64Value(2123456789), - Region: types.StringValue(testRegion), + Id: types.StringValue("pid,region,iid,3"), + UserId: types.Int64Value(3), + InstanceId: types.StringValue("iid"), + ProjectId: types.StringValue("pid"), + Username: types.StringNull(), + Roles: types.SetValueMust(types.StringType, []attr.Value{}), + Password: types.StringValue(""), + Host: types.StringNull(), + Port: types.Int64Value(2123456789), + Region: types.StringValue(testRegion), + DefaultDatabase: types.StringNull(), + Status: types.StringNull(), }, true, }, @@ -113,26 +117,22 @@ func TestMapFieldsCreate(t *testing.T) { }, { "nil_response_2", - &sqlserverflex.CreateUserResponse{}, + &sqlserverflexalpha.CreateUserResponse{}, testRegion, Model{}, false, }, { "no_resource_id", - &sqlserverflex.CreateUserResponse{ - Item: &sqlserverflex.SingleUser{}, - }, + &sqlserverflexalpha.CreateUserResponse{}, testRegion, Model{}, false, }, { "no_password", - &sqlserverflex.CreateUserResponse{ - Item: &sqlserverflex.SingleUser{ - Id: utils.Ptr("uid"), - }, + &sqlserverflexalpha.CreateUserResponse{ + Id: utils.Ptr(int64(1)), }, testRegion, Model{}, @@ -140,25 +140,27 @@ func TestMapFieldsCreate(t *testing.T) { }, } for _, tt := range tests { - t.Run(tt.description, func(t *testing.T) { - state := &Model{ - ProjectId: tt.expected.ProjectId, - InstanceId: tt.expected.InstanceId, - } - err := mapFieldsCreate(tt.input, state, tt.region) - if !tt.isValid && err == nil { - t.Fatalf("Should have failed") - } - if tt.isValid && err != nil { - t.Fatalf("Should not have failed: %v", err) - } - if tt.isValid { - diff := cmp.Diff(state, &tt.expected) - if diff != "" { - t.Fatalf("Data does not match: %s", diff) + t.Run( + tt.description, func(t *testing.T) { + state := &Model{ + ProjectId: tt.expected.ProjectId, + InstanceId: tt.expected.InstanceId, } - } - }) + err := 
mapFieldsCreate(tt.input, state, tt.region) + if !tt.isValid && err == nil { + t.Fatalf("Should have failed") + } + if tt.isValid && err != nil { + t.Fatalf("Should not have failed: %v", err) + } + if tt.isValid { + diff := cmp.Diff(state, &tt.expected) + if diff != "" { + t.Fatalf("Data does not match: %s", diff) + } + } + }, + ) } } @@ -166,20 +168,18 @@ func TestMapFields(t *testing.T) { const testRegion = "region" tests := []struct { description string - input *sqlserverflex.GetUserResponse + input *sqlserverflexalpha.GetUserResponse region string expected Model isValid bool }{ { "default_values", - &sqlserverflex.GetUserResponse{ - Item: &sqlserverflex.UserResponseUser{}, - }, + &sqlserverflexalpha.GetUserResponse{}, testRegion, Model{ - Id: types.StringValue("pid,region,iid,uid"), - UserId: types.StringValue("uid"), + Id: types.StringValue("pid,region,iid,1"), + UserId: types.Int64Value(1), InstanceId: types.StringValue("iid"), ProjectId: types.StringValue("pid"), Username: types.StringNull(), @@ -192,30 +192,30 @@ func TestMapFields(t *testing.T) { }, { "simple_values", - &sqlserverflex.GetUserResponse{ - Item: &sqlserverflex.UserResponseUser{ - Roles: &[]string{ - "role_1", - "role_2", - "", - }, - Username: utils.Ptr("username"), - Host: utils.Ptr("host"), - Port: utils.Ptr(int64(1234)), + &sqlserverflexalpha.GetUserResponse{ + Roles: &[]sqlserverflexalpha.UserRole{ + "role_1", + "role_2", + "", }, + Username: utils.Ptr("username"), + Host: utils.Ptr("host"), + Port: utils.Ptr(int64(1234)), }, testRegion, Model{ - Id: types.StringValue("pid,region,iid,uid"), - UserId: types.StringValue("uid"), + Id: types.StringValue("pid,region,iid,2"), + UserId: types.Int64Value(2), InstanceId: types.StringValue("iid"), ProjectId: types.StringValue("pid"), Username: types.StringValue("username"), - Roles: types.SetValueMust(types.StringType, []attr.Value{ - types.StringValue("role_1"), - types.StringValue("role_2"), - types.StringValue(""), - }), + Roles: 
types.SetValueMust( + types.StringType, []attr.Value{ + types.StringValue("role_1"), + types.StringValue("role_2"), + types.StringValue(""), + }, + ), Host: types.StringValue("host"), Port: types.Int64Value(1234), Region: types.StringValue(testRegion), @@ -224,19 +224,17 @@ func TestMapFields(t *testing.T) { }, { "null_fields_and_int_conversions", - &sqlserverflex.GetUserResponse{ - Item: &sqlserverflex.UserResponseUser{ - Id: utils.Ptr("uid"), - Roles: &[]string{}, - Username: nil, - Host: nil, - Port: utils.Ptr(int64(2123456789)), - }, + &sqlserverflexalpha.GetUserResponse{ + Id: utils.Ptr(int64(1)), + Roles: &[]sqlserverflexalpha.UserRole{}, + Username: nil, + Host: nil, + Port: utils.Ptr(int64(2123456789)), }, testRegion, Model{ - Id: types.StringValue("pid,region,iid,uid"), - UserId: types.StringValue("uid"), + Id: types.StringValue("pid,region,iid,1"), + UserId: types.Int64Value(1), InstanceId: types.StringValue("iid"), ProjectId: types.StringValue("pid"), Username: types.StringNull(), @@ -256,42 +254,42 @@ func TestMapFields(t *testing.T) { }, { "nil_response_2", - &sqlserverflex.GetUserResponse{}, + &sqlserverflexalpha.GetUserResponse{}, testRegion, Model{}, false, }, { "no_resource_id", - &sqlserverflex.GetUserResponse{ - Item: &sqlserverflex.UserResponseUser{}, - }, + &sqlserverflexalpha.GetUserResponse{}, testRegion, Model{}, false, }, } for _, tt := range tests { - t.Run(tt.description, func(t *testing.T) { - state := &Model{ - ProjectId: tt.expected.ProjectId, - InstanceId: tt.expected.InstanceId, - UserId: tt.expected.UserId, - } - err := mapFields(tt.input, state, tt.region) - if !tt.isValid && err == nil { - t.Fatalf("Should have failed") - } - if tt.isValid && err != nil { - t.Fatalf("Should not have failed: %v", err) - } - if tt.isValid { - diff := cmp.Diff(state, &tt.expected) - if diff != "" { - t.Fatalf("Data does not match: %s", diff) + t.Run( + tt.description, func(t *testing.T) { + state := &Model{ + ProjectId: tt.expected.ProjectId, + 
InstanceId: tt.expected.InstanceId, + UserId: tt.expected.UserId, } - } - }) + err := mapFields(tt.input, state, tt.region) + if !tt.isValid && err == nil { + t.Fatalf("Should have failed") + } + if tt.isValid && err != nil { + t.Fatalf("Should not have failed: %v", err) + } + if tt.isValid { + diff := cmp.Diff(state, &tt.expected) + if diff != "" { + t.Fatalf("Data does not match: %s", diff) + } + } + }, + ) } } @@ -299,16 +297,16 @@ func TestToCreatePayload(t *testing.T) { tests := []struct { description string input *Model - inputRoles []string - expected *sqlserverflex.CreateUserPayload + inputRoles []sqlserverflexalpha.UserRole + expected *sqlserverflexalpha.CreateUserRequestPayload isValid bool }{ { "default_values", &Model{}, - []string{}, - &sqlserverflex.CreateUserPayload{ - Roles: &[]string{}, + []sqlserverflexalpha.UserRole{}, + &sqlserverflexalpha.CreateUserRequestPayload{ + Roles: &[]sqlserverflexalpha.UserRole{}, Username: nil, }, true, @@ -318,12 +316,12 @@ func TestToCreatePayload(t *testing.T) { &Model{ Username: types.StringValue("username"), }, - []string{ + []sqlserverflexalpha.UserRole{ "role_1", "role_2", }, - &sqlserverflex.CreateUserPayload{ - Roles: &[]string{ + &sqlserverflexalpha.CreateUserRequestPayload{ + Roles: &[]sqlserverflexalpha.UserRole{ "role_1", "role_2", }, @@ -336,11 +334,11 @@ func TestToCreatePayload(t *testing.T) { &Model{ Username: types.StringNull(), }, - []string{ + []sqlserverflexalpha.UserRole{ "", }, - &sqlserverflex.CreateUserPayload{ - Roles: &[]string{ + &sqlserverflexalpha.CreateUserRequestPayload{ + Roles: &[]sqlserverflexalpha.UserRole{ "", }, Username: nil, @@ -350,7 +348,7 @@ func TestToCreatePayload(t *testing.T) { { "nil_model", nil, - []string{}, + []sqlserverflexalpha.UserRole{}, nil, false, }, @@ -359,29 +357,31 @@ func TestToCreatePayload(t *testing.T) { &Model{ Username: types.StringValue("username"), }, - []string{}, - &sqlserverflex.CreateUserPayload{ - Roles: &[]string{}, + 
[]sqlserverflexalpha.UserRole{}, + &sqlserverflexalpha.CreateUserRequestPayload{ + Roles: &[]sqlserverflexalpha.UserRole{}, Username: utils.Ptr("username"), }, true, }, } for _, tt := range tests { - t.Run(tt.description, func(t *testing.T) { - output, err := toCreatePayload(tt.input, tt.inputRoles) - if !tt.isValid && err == nil { - t.Fatalf("Should have failed") - } - if tt.isValid && err != nil { - t.Fatalf("Should not have failed: %v", err) - } - if tt.isValid { - diff := cmp.Diff(output, tt.expected) - if diff != "" { - t.Fatalf("Data does not match: %s", diff) + t.Run( + tt.description, func(t *testing.T) { + output, err := toCreatePayload(tt.input, tt.inputRoles) + if !tt.isValid && err == nil { + t.Fatalf("Should have failed") } - } - }) + if tt.isValid && err != nil { + t.Fatalf("Should not have failed: %v", err) + } + if tt.isValid { + diff := cmp.Diff(output, tt.expected) + if diff != "" { + t.Fatalf("Data does not match: %s", diff) + } + } + }, + ) } } diff --git a/stackit/internal/services/sqlserverflexalpha/utils/util.go b/stackit/internal/services/sqlserverflexalpha/utils/util.go index 5c14c085..3c49e1b9 100644 --- a/stackit/internal/services/sqlserverflexalpha/utils/util.go +++ b/stackit/internal/services/sqlserverflexalpha/utils/util.go @@ -1,10 +1,12 @@ +// Copyright (c) STACKIT + package utils import ( "context" "fmt" - "github.com/stackitcloud/stackit-sdk-go/services/sqlserverflex" + sqlserverflex "github.com/stackitcloud/terraform-provider-stackit/pkg/sqlserverflexalpha" "github.com/hashicorp/terraform-plugin-framework/diag" "github.com/stackitcloud/stackit-sdk-go/core/config" @@ -12,19 +14,34 @@ import ( "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/utils" ) -func ConfigureClient(ctx context.Context, providerData *core.ProviderData, diags *diag.Diagnostics) *sqlserverflex.APIClient { +func ConfigureClient( + ctx context.Context, + providerData *core.ProviderData, + diags *diag.Diagnostics, +) *sqlserverflex.APIClient { 
apiClientConfigOptions := []config.ConfigurationOption{ config.WithCustomAuth(providerData.RoundTripper), utils.UserAgentConfigOption(providerData.Version), } if providerData.SQLServerFlexCustomEndpoint != "" { - apiClientConfigOptions = append(apiClientConfigOptions, config.WithEndpoint(providerData.SQLServerFlexCustomEndpoint)) + apiClientConfigOptions = append( + apiClientConfigOptions, + config.WithEndpoint(providerData.SQLServerFlexCustomEndpoint), + ) } else { apiClientConfigOptions = append(apiClientConfigOptions, config.WithRegion(providerData.GetRegion())) } apiClient, err := sqlserverflex.NewAPIClient(apiClientConfigOptions...) if err != nil { - core.LogAndAddError(ctx, diags, "Error configuring API client", fmt.Sprintf("Configuring client: %v. This is an error related to the provider configuration, not to the resource configuration", err)) + core.LogAndAddError( + ctx, + diags, + "Error configuring API client", + fmt.Sprintf( + "Configuring client: %v. This is an error related to the provider configuration, not to the resource configuration", + err, + ), + ) return nil } diff --git a/stackit/internal/services/sqlserverflexalpha/utils/util_test.go b/stackit/internal/services/sqlserverflexalpha/utils/util_test.go index 5ee93949..cfa3f198 100644 --- a/stackit/internal/services/sqlserverflexalpha/utils/util_test.go +++ b/stackit/internal/services/sqlserverflexalpha/utils/util_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( @@ -9,7 +11,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/diag" sdkClients "github.com/stackitcloud/stackit-sdk-go/core/clients" "github.com/stackitcloud/stackit-sdk-go/core/config" - "github.com/stackitcloud/stackit-sdk-go/services/sqlserverflex" + "github.com/stackitcloud/terraform-provider-stackit/pkg/sqlserverflexalpha" + "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/core" "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/utils" ) @@ -34,7 +37,7 @@ 
func TestConfigureClient(t *testing.T) { name string args args wantErr bool - expected *sqlserverflex.APIClient + expected *sqlserverflexalpha.APIClient }{ { name: "default endpoint", @@ -43,8 +46,8 @@ func TestConfigureClient(t *testing.T) { Version: testVersion, }, }, - expected: func() *sqlserverflex.APIClient { - apiClient, err := sqlserverflex.NewAPIClient( + expected: func() *sqlserverflexalpha.APIClient { + apiClient, err := sqlserverflexalpha.NewAPIClient( config.WithRegion("eu01"), utils.UserAgentConfigOption(testVersion), ) @@ -63,8 +66,8 @@ func TestConfigureClient(t *testing.T) { SQLServerFlexCustomEndpoint: testCustomEndpoint, }, }, - expected: func() *sqlserverflex.APIClient { - apiClient, err := sqlserverflex.NewAPIClient( + expected: func() *sqlserverflexalpha.APIClient { + apiClient, err := sqlserverflexalpha.NewAPIClient( utils.UserAgentConfigOption(testVersion), config.WithEndpoint(testCustomEndpoint), ) @@ -77,18 +80,20 @@ func TestConfigureClient(t *testing.T) { }, } for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - ctx := context.Background() - diags := diag.Diagnostics{} + t.Run( + tt.name, func(t *testing.T) { + ctx := context.Background() + diags := diag.Diagnostics{} - actual := ConfigureClient(ctx, tt.args.providerData, &diags) - if diags.HasError() != tt.wantErr { - t.Errorf("ConfigureClient() error = %v, want %v", diags.HasError(), tt.wantErr) - } + actual := ConfigureClient(ctx, tt.args.providerData, &diags) + if diags.HasError() != tt.wantErr { + t.Errorf("ConfigureClient() error = %v, want %v", diags.HasError(), tt.wantErr) + } - if !reflect.DeepEqual(actual, tt.expected) { - t.Errorf("ConfigureClient() = %v, want %v", actual, tt.expected) - } - }) + if !reflect.DeepEqual(actual, tt.expected) { + t.Errorf("ConfigureClient() = %v, want %v", actual, tt.expected) + } + }, + ) } } diff --git a/stackit/internal/testutil/testutil.go b/stackit/internal/testutil/testutil.go index c7e5654a..fd7d95ff 100644 --- 
a/stackit/internal/testutil/testutil.go +++ b/stackit/internal/testutil/testutil.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package testutil import ( diff --git a/stackit/internal/testutil/testutil_test.go b/stackit/internal/testutil/testutil_test.go index e92a718a..f74ca81c 100644 --- a/stackit/internal/testutil/testutil_test.go +++ b/stackit/internal/testutil/testutil_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package testutil import ( diff --git a/stackit/internal/utils/attributes.go b/stackit/internal/utils/attributes.go index bddc30ba..4572960f 100644 --- a/stackit/internal/utils/attributes.go +++ b/stackit/internal/utils/attributes.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git a/stackit/internal/utils/attributes_test.go b/stackit/internal/utils/attributes_test.go index b7b3c8a1..cddaceb5 100644 --- a/stackit/internal/utils/attributes_test.go +++ b/stackit/internal/utils/attributes_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git a/stackit/internal/utils/headers.go b/stackit/internal/utils/headers.go index abbedbc3..bd51f2f3 100644 --- a/stackit/internal/utils/headers.go +++ b/stackit/internal/utils/headers.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git a/stackit/internal/utils/headers_test.go b/stackit/internal/utils/headers_test.go index f7f0c175..03880034 100644 --- a/stackit/internal/utils/headers_test.go +++ b/stackit/internal/utils/headers_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git a/stackit/internal/utils/regions.go b/stackit/internal/utils/regions.go index 89dbdae9..1b7cec36 100644 --- a/stackit/internal/utils/regions.go +++ b/stackit/internal/utils/regions.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git a/stackit/internal/utils/regions_test.go b/stackit/internal/utils/regions_test.go index 78ca8db6..242a340f 100644 --- a/stackit/internal/utils/regions_test.go 
+++ b/stackit/internal/utils/regions_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git a/stackit/internal/utils/use_state_for_unknown_if.go b/stackit/internal/utils/use_state_for_unknown_if.go index 00e90c61..76db6bca 100644 --- a/stackit/internal/utils/use_state_for_unknown_if.go +++ b/stackit/internal/utils/use_state_for_unknown_if.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git a/stackit/internal/utils/use_state_for_unknown_if_test.go b/stackit/internal/utils/use_state_for_unknown_if_test.go index 387e270a..01817fb0 100644 --- a/stackit/internal/utils/use_state_for_unknown_if_test.go +++ b/stackit/internal/utils/use_state_for_unknown_if_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git a/stackit/internal/utils/utils.go b/stackit/internal/utils/utils.go index a7614113..962799ea 100644 --- a/stackit/internal/utils/utils.go +++ b/stackit/internal/utils/utils.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git a/stackit/internal/utils/utils_test.go b/stackit/internal/utils/utils_test.go index 0dc5bf5b..00e9f77c 100644 --- a/stackit/internal/utils/utils_test.go +++ b/stackit/internal/utils/utils_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package utils import ( diff --git a/stackit/internal/validate/validate.go b/stackit/internal/validate/validate.go index 0af0f3c6..9675bec0 100644 --- a/stackit/internal/validate/validate.go +++ b/stackit/internal/validate/validate.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package validate import ( diff --git a/stackit/internal/validate/validate_test.go b/stackit/internal/validate/validate_test.go index 3436a7a1..210a5ca9 100644 --- a/stackit/internal/validate/validate_test.go +++ b/stackit/internal/validate/validate_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package validate import ( diff --git a/stackit/provider.go b/stackit/provider.go index fa2ec888..06e406e4 100644 --- 
a/stackit/provider.go +++ b/stackit/provider.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package stackit import ( @@ -18,8 +20,8 @@ import ( "github.com/stackitcloud/stackit-sdk-go/core/config" "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/core" "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/features" - roleAssignements "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/services/authorization/roleassignments" postgresFlexAlphaInstance "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/services/postgresflexalpha/instance" + sqlServerFlexAlpaUser "github.com/stackitcloud/terraform-provider-stackit/stackit/internal/services/sqlserverflexalpha/user" ) // Ensure the implementation satisfies the expected interfaces @@ -42,7 +44,7 @@ func New(version string) func() provider.Provider { } func (p *Provider) Metadata(_ context.Context, _ provider.MetadataRequest, resp *provider.MetadataResponse) { - resp.TypeName = "stackitalpha" + resp.TypeName = "stackitprivatepreview" resp.Version = p.version } @@ -131,7 +133,10 @@ func (p *Provider) Schema(_ context.Context, _ provider.SchemaRequest, resp *pro "service_enablement_custom_endpoint": "Custom endpoint for the Service Enablement API", "token_custom_endpoint": "Custom endpoint for the token API, which is used to request access tokens when using the key flow", "enable_beta_resources": "Enable beta resources. Default is false.", - "experiments": fmt.Sprintf("Enables experiments. These are unstable features without official support. More information can be found in the README. Available Experiments: %v", strings.Join(features.AvailableExperiments, ", ")), + "experiments": fmt.Sprintf( + "Enables experiments. These are unstable features without official support. More information can be found in the README. 
Available Experiments: %v", + strings.Join(features.AvailableExperiments, ", "), + ), } resp.Schema = schema.Schema{ @@ -331,7 +336,12 @@ func (p *Provider) Configure(ctx context.Context, req provider.ConfigureRequest, if !v.IsUnknown() && !v.IsNull() { val, err := v.ToBoolValue(ctx) if err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, "Error configuring provider", fmt.Sprintf("Setting up bool value: %v", diags.Errors())) + core.LogAndAddError( + ctx, + &resp.Diagnostics, + "Error configuring provider", + fmt.Sprintf("Setting up bool value: %v", diags.Errors()), + ) } setter(val.ValueBool()) } @@ -347,48 +357,106 @@ func (p *Provider) Configure(ctx context.Context, req provider.ConfigureRequest, setStringField(providerConfig.TokenCustomEndpoint, func(v string) { sdkConfig.TokenCustomUrl = v }) setStringField(providerConfig.DefaultRegion, func(v string) { providerData.DefaultRegion = v }) - setStringField(providerConfig.Region, func(v string) { providerData.Region = v }) // nolint:staticcheck // preliminary handling of deprecated attribute + setStringField( + providerConfig.Region, + func(v string) { providerData.Region = v }, + ) // nolint:staticcheck // preliminary handling of deprecated attribute setBoolField(providerConfig.EnableBetaResources, func(v bool) { providerData.EnableBetaResources = v }) - setStringField(providerConfig.AuthorizationCustomEndpoint, func(v string) { providerData.AuthorizationCustomEndpoint = v }) + setStringField( + providerConfig.AuthorizationCustomEndpoint, + func(v string) { providerData.AuthorizationCustomEndpoint = v }, + ) setStringField(providerConfig.CdnCustomEndpoint, func(v string) { providerData.CdnCustomEndpoint = v }) setStringField(providerConfig.DnsCustomEndpoint, func(v string) { providerData.DnsCustomEndpoint = v }) setStringField(providerConfig.GitCustomEndpoint, func(v string) { providerData.GitCustomEndpoint = v }) setStringField(providerConfig.IaaSCustomEndpoint, func(v string) { 
providerData.IaaSCustomEndpoint = v }) setStringField(providerConfig.KmsCustomEndpoint, func(v string) { providerData.KMSCustomEndpoint = v }) - setStringField(providerConfig.LoadBalancerCustomEndpoint, func(v string) { providerData.LoadBalancerCustomEndpoint = v }) + setStringField( + providerConfig.LoadBalancerCustomEndpoint, + func(v string) { providerData.LoadBalancerCustomEndpoint = v }, + ) setStringField(providerConfig.LogMeCustomEndpoint, func(v string) { providerData.LogMeCustomEndpoint = v }) setStringField(providerConfig.MariaDBCustomEndpoint, func(v string) { providerData.MariaDBCustomEndpoint = v }) - setStringField(providerConfig.ModelServingCustomEndpoint, func(v string) { providerData.ModelServingCustomEndpoint = v }) - setStringField(providerConfig.MongoDBFlexCustomEndpoint, func(v string) { providerData.MongoDBFlexCustomEndpoint = v }) - setStringField(providerConfig.ObjectStorageCustomEndpoint, func(v string) { providerData.ObjectStorageCustomEndpoint = v }) - setStringField(providerConfig.ObservabilityCustomEndpoint, func(v string) { providerData.ObservabilityCustomEndpoint = v }) - setStringField(providerConfig.OpenSearchCustomEndpoint, func(v string) { providerData.OpenSearchCustomEndpoint = v }) - setStringField(providerConfig.PostgresFlexCustomEndpoint, func(v string) { providerData.PostgresFlexCustomEndpoint = v }) + setStringField( + providerConfig.ModelServingCustomEndpoint, + func(v string) { providerData.ModelServingCustomEndpoint = v }, + ) + setStringField( + providerConfig.MongoDBFlexCustomEndpoint, + func(v string) { providerData.MongoDBFlexCustomEndpoint = v }, + ) + setStringField( + providerConfig.ObjectStorageCustomEndpoint, + func(v string) { providerData.ObjectStorageCustomEndpoint = v }, + ) + setStringField( + providerConfig.ObservabilityCustomEndpoint, + func(v string) { providerData.ObservabilityCustomEndpoint = v }, + ) + setStringField( + providerConfig.OpenSearchCustomEndpoint, + func(v string) { 
providerData.OpenSearchCustomEndpoint = v }, + ) + setStringField( + providerConfig.PostgresFlexCustomEndpoint, + func(v string) { providerData.PostgresFlexCustomEndpoint = v }, + ) setStringField(providerConfig.RabbitMQCustomEndpoint, func(v string) { providerData.RabbitMQCustomEndpoint = v }) setStringField(providerConfig.RedisCustomEndpoint, func(v string) { providerData.RedisCustomEndpoint = v }) - setStringField(providerConfig.ResourceManagerCustomEndpoint, func(v string) { providerData.ResourceManagerCustomEndpoint = v }) + setStringField( + providerConfig.ResourceManagerCustomEndpoint, + func(v string) { providerData.ResourceManagerCustomEndpoint = v }, + ) setStringField(providerConfig.ScfCustomEndpoint, func(v string) { providerData.ScfCustomEndpoint = v }) - setStringField(providerConfig.SecretsManagerCustomEndpoint, func(v string) { providerData.SecretsManagerCustomEndpoint = v }) - setStringField(providerConfig.ServerBackupCustomEndpoint, func(v string) { providerData.ServerBackupCustomEndpoint = v }) - setStringField(providerConfig.ServerUpdateCustomEndpoint, func(v string) { providerData.ServerUpdateCustomEndpoint = v }) - setStringField(providerConfig.ServiceAccountCustomEndpoint, func(v string) { providerData.ServiceAccountCustomEndpoint = v }) - setStringField(providerConfig.ServiceEnablementCustomEndpoint, func(v string) { providerData.ServiceEnablementCustomEndpoint = v }) + setStringField( + providerConfig.SecretsManagerCustomEndpoint, + func(v string) { providerData.SecretsManagerCustomEndpoint = v }, + ) + setStringField( + providerConfig.ServerBackupCustomEndpoint, + func(v string) { providerData.ServerBackupCustomEndpoint = v }, + ) + setStringField( + providerConfig.ServerUpdateCustomEndpoint, + func(v string) { providerData.ServerUpdateCustomEndpoint = v }, + ) + setStringField( + providerConfig.ServiceAccountCustomEndpoint, + func(v string) { providerData.ServiceAccountCustomEndpoint = v }, + ) + setStringField( + 
providerConfig.ServiceEnablementCustomEndpoint, + func(v string) { providerData.ServiceEnablementCustomEndpoint = v }, + ) setStringField(providerConfig.SkeCustomEndpoint, func(v string) { providerData.SKECustomEndpoint = v }) - setStringField(providerConfig.SqlServerFlexCustomEndpoint, func(v string) { providerData.SQLServerFlexCustomEndpoint = v }) + setStringField( + providerConfig.SqlServerFlexCustomEndpoint, + func(v string) { providerData.SQLServerFlexCustomEndpoint = v }, + ) if !(providerConfig.Experiments.IsUnknown() || providerConfig.Experiments.IsNull()) { var experimentValues []string diags := providerConfig.Experiments.ElementsAs(ctx, &experimentValues, false) if diags.HasError() { - core.LogAndAddError(ctx, &resp.Diagnostics, "Error configuring provider", fmt.Sprintf("Setting up experiments: %v", diags.Errors())) + core.LogAndAddError( + ctx, + &resp.Diagnostics, + "Error configuring provider", + fmt.Sprintf("Setting up experiments: %v", diags.Errors()), + ) } providerData.Experiments = experimentValues } roundTripper, err := sdkauth.SetupAuth(sdkConfig) if err != nil { - core.LogAndAddError(ctx, &resp.Diagnostics, "Error configuring provider", fmt.Sprintf("Setting up authentication: %v", err)) + core.LogAndAddError( + ctx, + &resp.Diagnostics, + "Error configuring provider", + fmt.Sprintf("Setting up authentication: %v", err), + ) return } @@ -402,7 +470,10 @@ func (p *Provider) Configure(ctx context.Context, req provider.ConfigureRequest, var ephemeralProviderData core.EphemeralProviderData ephemeralProviderData.ProviderData = providerData setStringField(providerConfig.ServiceAccountKey, func(v string) { ephemeralProviderData.ServiceAccountKey = v }) - setStringField(providerConfig.ServiceAccountKeyPath, func(v string) { ephemeralProviderData.ServiceAccountKeyPath = v }) + setStringField( + providerConfig.ServiceAccountKeyPath, + func(v string) { ephemeralProviderData.ServiceAccountKeyPath = v }, + ) setStringField(providerConfig.PrivateKey, func(v 
string) { ephemeralProviderData.PrivateKey = v }) setStringField(providerConfig.PrivateKeyPath, func(v string) { ephemeralProviderData.PrivateKeyPath = v }) setStringField(providerConfig.TokenCustomEndpoint, func(v string) { ephemeralProviderData.TokenCustomEndpoint = v }) @@ -413,15 +484,16 @@ func (p *Provider) Configure(ctx context.Context, req provider.ConfigureRequest, // DataSources defines the data sources implemented in the provider. func (p *Provider) DataSources(_ context.Context) []func() datasource.DataSource { - return []func() datasource.DataSource{} + return []func() datasource.DataSource{ + sqlServerFlexAlpaUser.NewUserDataSource, + } } // Resources defines the resources implemented in the provider. func (p *Provider) Resources(_ context.Context) []func() resource.Resource { resources := []func() resource.Resource{ postgresFlexAlphaInstance.NewInstanceResource, + sqlServerFlexAlpaUser.NewUserResource, } - resources = append(resources, roleAssignements.NewRoleAssignmentResources()...) 
- return resources } diff --git a/stackit/provider_acc_test.go b/stackit/provider_acc_test.go index 24eb81f8..557d3c61 100644 --- a/stackit/provider_acc_test.go +++ b/stackit/provider_acc_test.go @@ -1,3 +1,5 @@ +// Copyright (c) STACKIT + package stackit_test import ( diff --git a/stackit/testdata/provider-all-attributes.tf b/stackit/testdata/provider-all-attributes.tf index 895ea245..59452c76 100644 --- a/stackit/testdata/provider-all-attributes.tf +++ b/stackit/testdata/provider-all-attributes.tf @@ -1,3 +1,5 @@ +# Copyright (c) STACKIT + variable "project_id" {} variable "name" {} diff --git a/stackit/testdata/provider-credentials.tf b/stackit/testdata/provider-credentials.tf index 32c1d863..45778443 100644 --- a/stackit/testdata/provider-credentials.tf +++ b/stackit/testdata/provider-credentials.tf @@ -1,3 +1,5 @@ +# Copyright (c) STACKIT + variable "project_id" {} variable "name" {} diff --git a/stackit/testdata/provider-invalid-attribute.tf b/stackit/testdata/provider-invalid-attribute.tf index d5a11a2c..fff0834a 100644 --- a/stackit/testdata/provider-invalid-attribute.tf +++ b/stackit/testdata/provider-invalid-attribute.tf @@ -1,3 +1,5 @@ +# Copyright (c) STACKIT + variable "project_id" {} variable "name" {} diff --git a/templates/guides/aws_provider_s3_stackit.md.tmpl b/templates/guides/aws_provider_s3_stackit.md.tmpl deleted file mode 100644 index b57cacb5..00000000 --- a/templates/guides/aws_provider_s3_stackit.md.tmpl +++ /dev/null @@ -1,91 +0,0 @@ ---- -page_title: "Using AWS Provider for STACKIT Object Storage (S3 compatible)" ---- -# Using AWS Provider for STACKIT Object Storage (S3 compatible) - -## Overview - -This guide outlines the process of utilizing the [AWS Terraform Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) alongside the STACKIT provider to create and manage STACKIT Object Storage (S3 compatible) resources. - -## Steps - -1. 
**Configure STACKIT Provider** - - First, configure the STACKIT provider to connect to the STACKIT services. - - ```hcl - provider "stackit" { - default_region = "eu01" - } - ``` - -2. **Define STACKIT Object Storage Bucket** - - Create a STACKIT Object Storage Bucket and obtain credentials for the AWS provider. - - ```hcl - resource "stackit_objectstorage_bucket" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - } - - resource "stackit_objectstorage_credentials_group" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-credentials-group" - } - - resource "stackit_objectstorage_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - credentials_group_id = stackit_objectstorage_credentials_group.example.credentials_group_id - expiration_timestamp = "2027-01-02T03:04:05Z" - } - ``` - -3. **Configure AWS Provider** - - Configure the AWS provider to connect to the STACKIT Object Storage bucket. - - ```hcl - provider "aws" { - region = "eu01" - skip_credentials_validation = true - skip_region_validation = true - skip_requesting_account_id = true - - access_key = stackit_objectstorage_credential.example.access_key - secret_key = stackit_objectstorage_credential.example.secret_access_key - - endpoints { - s3 = "https://object.storage.eu01.onstackit.cloud" - } - } - ``` - -4. **Use the Provider to Manage Objects or Policies** - - ```hcl - resource "aws_s3_object" "test_file" { - bucket = stackit_objectstorage_bucket.example.name - key = "hello_world.txt" - source = "files/hello_world.txt" - content_type = "text/plain" - etag = filemd5("files/hello_world.txt") - } - - resource "aws_s3_bucket_policy" "allow_public_read_access" { - bucket = stackit_objectstorage_bucket.example.name - policy = < The environment variable takes precedence over the provider configuration option. 
This means that if the `STACKIT_TF_ENABLE_BETA_RESOURCES` environment variable is set to a valid value (`"true"` or `"false"`), it will override the `enable_beta_resources` option specified in the provider configuration. \ No newline at end of file diff --git a/templates/guides/scf_cloudfoundry.md.tmpl b/templates/guides/scf_cloudfoundry.md.tmpl deleted file mode 100644 index b468cbe1..00000000 --- a/templates/guides/scf_cloudfoundry.md.tmpl +++ /dev/null @@ -1,251 +0,0 @@ ---- -page_title: "How to provision Cloud Foundry using Terraform" ---- -# How to provision Cloud Foundry using Terraform - -## Objective - -This tutorial demonstrates how to provision Cloud Foundry resources by -integrating the STACKIT Terraform provider with the Cloud Foundry Terraform -provider. The STACKIT Terraform provider will create a managed Cloud Foundry -organization and set up a technical "org manager" user with -`organization_manager` permissions. These credentials, along with the Cloud -Foundry API URL (retrieved dynamically from a platform data resource), are -passed to the Cloud Foundry Terraform provider to manage resources within the -new organization. - -### Output - -This configuration creates a Cloud Foundry organization, mirroring the structure -created via the portal. It sets up three distinct spaces: `dev`, `qa`, and -`prod`. The configuration assigns, a specified user the `organization_manager` -and `organization_user` roles at the organization level, and the -`space_developer` role in each space. - -### Scope - -This tutorial covers the interaction between the STACKIT Terraform provider and -the Cloud Foundry Terraform provider. It assumes you are familiar with: - -- Setting up a STACKIT project and configuring the STACKIT Terraform provider - with a service account (see the general STACKIT documentation for details). -- Basic Terraform concepts, such as variables and locals. 
- -This document does not cover foundational topics or every feature of the Cloud -Foundry Terraform provider. - -### Example configuration - -The following Terraform configuration provisions a Cloud Foundry organization -and related resources using the STACKIT Terraform provider and the Cloud Foundry -Terraform provider: - -``` -terraform { - required_providers { - stackit = { - source = "stackitcloud/stackit" - } - cloudfoundry = { - source = "cloudfoundry/cloudfoundry" - } - } -} - -variable "project_id" { - type = string - description = "Id of the Project" -} - -variable "org_name" { - type = string - description = "Name of the Organization" -} - -variable "admin_email" { - type = string - description = "Users who are granted permissions" -} - -provider "stackit" { - default_region = "eu01" -} - -resource "stackit_scf_organization" "scf_org" { - name = var.org_name - project_id = var.project_id -} - -data "stackit_scf_platform" "scf_platform" { - project_id = var.project_id - platform_id = stackit_scf_organization.scf_org.platform_id -} - -resource "stackit_scf_organization_manager" "scf_manager" { - project_id = var.project_id - org_id = stackit_scf_organization.scf_org.org_id -} - -provider "cloudfoundry" { - api_url = data.stackit_scf_platform.scf_platform.api_url - user = stackit_scf_organization_manager.scf_manager.username - password = stackit_scf_organization_manager.scf_manager.password -} - -locals { - spaces = ["dev", "qa", "prod"] -} - -resource "cloudfoundry_org_role" "org_user" { - username = var.admin_email - type = "organization_user" - org = stackit_scf_organization.scf_org.org_id -} - -resource "cloudfoundry_org_role" "org_manager" { - username = var.admin_email - type = "organization_manager" - org = stackit_scf_organization.scf_org.org_id -} - -resource "cloudfoundry_space" "spaces" { - for_each = toset(local.spaces) - name = each.key - org = stackit_scf_organization.scf_org.org_id -} - -resource "cloudfoundry_space_role" "space_developer" { 
-  for_each   = toset(local.spaces)
-  username   = var.admin_email
-  type       = "space_developer"
-  depends_on = [cloudfoundry_org_role.org_user]
-  space      = cloudfoundry_space.spaces[each.key].id
-}
-```
-
-## Explanation of configuration
-
-### STACKIT provider configuration
-
-```
-provider "stackit" {
-  default_region = "eu01"
-}
-```
-
-The STACKIT Cloud Foundry Application Programming Interface (SCF API) is
-regionalized. Each region operates independently. Set `default_region` in the
-provider configuration to specify the region for all resources, unless you
-override it for individual resources. You must also provide access data for the
-relevant STACKIT project for the provider to function.
-
-For more details, see the
-[STACKIT Terraform Provider documentation](https://registry.terraform.io/providers/stackitcloud/stackit/latest/docs).
-
-### stackit_scf_organization.scf_org resource
-
-```
-resource "stackit_scf_organization" "scf_org" {
-  name       = var.org_name
-  project_id = var.project_id
-}
-```
-
-This resource provisions a Cloud Foundry organization, which acts as the
-foundational container in the Cloud Foundry environment. Each Cloud Foundry
-provider configuration is scoped to a specific organization. The organization's
-name, defined by a variable, must be unique across the platform. The
-organization is created within a designated STACKIT project, which requires the
-STACKIT provider to be configured with the necessary permissions for that
-project.
-
-### stackit_scf_organization_manager.scf_manager resource
-
-```
-resource "stackit_scf_organization_manager" "scf_manager" {
-  project_id = var.project_id
-  org_id     = stackit_scf_organization.scf_org.org_id
-}
-```
-
-This resource creates a technical user in the Cloud Foundry organization with
-the `organization_manager` permission. The user is linked to the organization and
-is automatically deleted when the organization is removed.
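-
-If you need the technical user's credentials outside of Terraform (for example,
-to log in with the Cloud Foundry CLI), you can export them as sensitive
-outputs. This is a minimal sketch; the `username` and `password` attributes are
-the same ones referenced in the Cloud Foundry provider configuration:
-
-```
-output "scf_manager_username" {
-  value     = stackit_scf_organization_manager.scf_manager.username
-  sensitive = true
-}
-
-output "scf_manager_password" {
-  value     = stackit_scf_organization_manager.scf_manager.password
-  sensitive = true
-}
-```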
-
-### stackit_scf_platform.scf_platform data source
-
-```
-data "stackit_scf_platform" "scf_platform" {
-  project_id  = var.project_id
-  platform_id = stackit_scf_organization.scf_org.platform_id
-}
-```
-
-This data source retrieves properties of the Cloud Foundry platform where the
-organization is provisioned. It does not create resources, but provides
-information about the existing platform.
-
-### Cloud Foundry provider configuration
-
-```
-provider "cloudfoundry" {
-  api_url  = data.stackit_scf_platform.scf_platform.api_url
-  user     = stackit_scf_organization_manager.scf_manager.username
-  password = stackit_scf_organization_manager.scf_manager.password
-}
-```
-
-The Cloud Foundry provider is configured to manage resources in the new
-organization. The provider uses the API URL from the `stackit_scf_platform` data
-source and authenticates using the credentials of the technical user created by
-the `stackit_scf_organization_manager` resource.
-
-For more information, see the
-[Cloud Foundry Terraform Provider documentation](https://registry.terraform.io/providers/cloudfoundry/cloudfoundry/latest/docs).
-
-## Deploy resources
-
-Follow these steps to initialize your environment and provision Cloud Foundry
-resources using Terraform.
-
-### Initialize Terraform
-
-Run the following command to initialize the working directory and download the
-required provider plugins:
-
-```
-terraform init
-```
-
-### Create the organization manager user
-
-Run this command to provision the organization and technical user needed to
-initialize the Cloud Foundry Terraform provider. This step is required only
-during the initial setup. For later changes, you do not need the
-`-target` flag.
- -``` -terraform apply -target stackit_scf_organization_manager.scf_manager -``` - -### Apply the full configuration - -Run this command to provision all resources defined in your Terraform -configuration within the Cloud Foundry organization: - -``` -terraform apply -``` - -## Verify the deployment - -Verify that your Cloud Foundry resources are provisioned correctly. Use the -following Cloud Foundry CLI commands to check applications, services, and -routes: - -- `cf apps` -- `cf services` -- `cf routes` - -For more information, see the -[Cloud Foundry documentation](https://docs.cloudfoundry.org/) and the -[Cloud Foundry CLI Reference Guide](https://cli.cloudfoundry.org/). \ No newline at end of file diff --git a/templates/guides/ske_kube_state_metric_alerts.md.tmpl b/templates/guides/ske_kube_state_metric_alerts.md.tmpl deleted file mode 100644 index 22c2b4ce..00000000 --- a/templates/guides/ske_kube_state_metric_alerts.md.tmpl +++ /dev/null @@ -1,267 +0,0 @@ ---- -page_title: "Alerting with Kube-State-Metrics in STACKIT Observability" ---- -# Alerting with Kube-State-Metrics in STACKIT Observability - -## Overview - -This guide explains how to configure the STACKIT Observability product to send alerts using metrics gathered from kube-state-metrics. - -1. **Set Up Providers** - - Begin by configuring the STACKIT and Kubernetes providers to connect to the STACKIT services. 
- - ```hcl - provider "stackit" { - default_region = "eu01" - } - - provider "kubernetes" { - host = yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.server - client_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-certificate-data) - client_key = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-key-data) - cluster_ca_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.certificate-authority-data) - } - - provider "helm" { - kubernetes { - host = yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.server - client_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-certificate-data) - client_key = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-key-data) - cluster_ca_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.certificate-authority-data) - } - } - ``` - -2. **Create SKE Cluster and Kubeconfig Resource** - - Set up a STACKIT SKE Cluster and generate the associated kubeconfig resource. 
- - ```hcl - resource "stackit_ske_cluster" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - kubernetes_version_min = "1.31" - node_pools = [ - { - name = "standard" - machine_type = "c1.4" - minimum = "3" - maximum = "9" - max_surge = "3" - availability_zones = ["eu01-1", "eu01-2", "eu01-3"] - os_version_min = "4081.2.1" - os_name = "flatcar" - volume_size = 32 - volume_type = "storage_premium_perf6" - } - ] - maintenance = { - enable_kubernetes_version_updates = true - enable_machine_image_version_updates = true - start = "01:00:00Z" - end = "02:00:00Z" - } - } - - resource "stackit_ske_kubeconfig" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - cluster_name = stackit_ske_cluster.example.name - refresh = true - } - ``` - -3. **Create Observability Instance and Credentials** - - Establish a STACKIT Observability instance and its credentials to handle alerts. - - ```hcl - locals { - alert_config = { - route = { - receiver = "EmailStackit", - repeat_interval = "1m", - continue = true - } - receivers = [ - { - name = "EmailStackit", - email_configs = [ - { - to = "" - } - ] - } - ] - } - } - - resource "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - plan_name = "Observability-Large-EU01" - alert_config = local.alert_config - } - - resource "stackit_observability_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = stackit_observability_instance.example.instance_id - } - ``` - -4. **Install Prometheus Operator** - - Use the Prometheus Helm chart to install kube-state-metrics and transfer metrics to the STACKIT Observability instance. Customize the helm values as needed for your deployment. 
- - ```yaml - # helm values - # save as prom-values.tftpl - prometheus: - enabled: true - agentMode: true - prometheusSpec: - enableRemoteWriteReceiver: true - scrapeInterval: 60s - evaluationInterval: 60s - replicas: 1 - storageSpec: - volumeClaimTemplate: - spec: - storageClassName: premium-perf4-stackit - accessModes: ['ReadWriteOnce'] - resources: - requests: - storage: 80Gi - remoteWrite: - - url: ${metrics_push_url} - queueConfig: - batchSendDeadline: '5s' - # both values need to be configured according to your observability plan - capacity: 30000 - maxSamplesPerSend: 3000 - writeRelabelConfigs: - - sourceLabels: ['__name__'] - regex: 'apiserver_.*|etcd_.*|prober_.*|storage_.*|workqueue_(work|queue)_duration_seconds_bucket|kube_pod_tolerations|kubelet_.*|kubernetes_feature_enabled|instance_scrape_target_status' - action: 'drop' - - sourceLabels: ['namespace'] - regex: 'example' - action: 'keep' - basicAuth: - username: - key: username - name: ${secret_name} - password: - key: password - name: ${secret_name} - - grafana: - enabled: false - - defaultRules: - create: false - - alertmanager: - enabled: false - - nodeExporter: - enabled: true - - kube-state-metrics: - enabled: true - customResourceState: - enabled: true - collectors: - - deployments - - pods - ``` - - ```hcl - resource "kubernetes_namespace" "monitoring" { - metadata { - name = "monitoring" - } - } - - resource "kubernetes_secret" "argus_prometheus_authorization" { - metadata { - name = "argus-prometheus-credentials" - namespace = kubernetes_namespace.monitoring.metadata[0].name - } - - data = { - username = stackit_observability_credential.example.username - password = stackit_observability_credential.example.password - } - } - - resource "helm_release" "prometheus_operator" { - name = "prometheus-operator" - repository = "https://prometheus-community.github.io/helm-charts" - chart = "kube-prometheus-stack" - version = "60.1.0" - namespace = kubernetes_namespace.monitoring.metadata[0].name - - 
values = [ - templatefile("prom-values.tftpl", { - metrics_push_url = stackit_observability_instance.example.metrics_push_url - secret_name = kubernetes_secret.argus_prometheus_authorization.metadata[0].name - }) - ] - } - ``` - -5. **Create Alert Group** - - Define an alert group with a rule to notify when a pod is running in the "example" namespace. - - ```hcl - resource "stackit_observability_alertgroup" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = stackit_observability_instance.example.instance_id - name = "TestAlertGroup" - interval = "2h" - rules = [ - { - alert = "SimplePodCheck" - expression = "sum(kube_pod_status_phase{phase=\"Running\", namespace=\"example\"}) > 0" - for = "60s" - labels = { - severity = "critical" - }, - annotations = { - summary = "Test Alert is working" - description = "Test Alert" - } - }, - ] - } - ``` - -6. **Deploy Test Pod** - - Deploy a test pod; doing so should trigger an email notification, as the deployment satisfies the conditions defined in the alert group rule. In a real-world scenario, you would typically configure alerts to monitor pods for error states instead. 
- - ```hcl - resource "kubernetes_namespace" "example" { - metadata { - name = "example" - } - } - - resource "kubernetes_pod" "example" { - metadata { - name = "nginx" - namespace = kubernetes_namespace.example.metadata[0].name - labels = { - app = "nginx" - } - } - - spec { - container { - image = "nginx:latest" - name = "nginx" - } - } - } - ``` \ No newline at end of file diff --git a/templates/guides/ske_log_alerts.md.tmpl b/templates/guides/ske_log_alerts.md.tmpl deleted file mode 100644 index 60498b05..00000000 --- a/templates/guides/ske_log_alerts.md.tmpl +++ /dev/null @@ -1,199 +0,0 @@ ---- -page_title: "SKE Log Alerts with STACKIT Observability" ---- -# SKE Log Alerts with STACKIT Observability - -## Overview - -This guide walks you through setting up log-based alerting in STACKIT Observability using Promtail to ship Kubernetes logs. - -1. **Set Up Providers** - - Begin by configuring the STACKIT and Kubernetes providers to connect to the STACKIT services. - - ```hcl - provider "stackit" { - region = "eu01" - } - - provider "kubernetes" { - host = yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.server - client_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-certificate-data) - client_key = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-key-data) - cluster_ca_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.certificate-authority-data) - } - - provider "helm" { - kubernetes { - host = yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.server - client_certificate = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-certificate-data) - client_key = base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).users.0.user.client-key-data) - cluster_ca_certificate = 
base64decode(yamldecode(stackit_ske_kubeconfig.example.kube_config).clusters.0.cluster.certificate-authority-data) - } - } - ``` - -2. **Create SKE Cluster and Kubeconfig Resource** - - Set up a STACKIT SKE Cluster and generate the associated kubeconfig resource. - - ```hcl - resource "stackit_ske_cluster" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - kubernetes_version_min = "1.31" - node_pools = [ - { - name = "standard" - machine_type = "c1.4" - minimum = "3" - maximum = "9" - max_surge = "3" - availability_zones = ["eu01-1", "eu01-2", "eu01-3"] - os_version_min = "4081.2.1" - os_name = "flatcar" - volume_size = 32 - volume_type = "storage_premium_perf6" - } - ] - maintenance = { - enable_kubernetes_version_updates = true - enable_machine_image_version_updates = true - start = "01:00:00Z" - end = "02:00:00Z" - } - } - - resource "stackit_ske_kubeconfig" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - cluster_name = stackit_ske_cluster.example.name - refresh = true - } - ``` - -3. **Create Observability Instance and Credentials** - - Establish a STACKIT Observability instance and its credentials to handle alerts. - - ```hcl - locals { - alert_config = { - route = { - receiver = "EmailStackit", - repeat_interval = "1m", - continue = true - } - receivers = [ - { - name = "EmailStackit", - email_configs = [ - { - to = "" - } - ] - } - ] - } - } - - resource "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example" - plan_name = "Observability-Large-EU01" - alert_config = local.alert_config - } - - resource "stackit_observability_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = stackit_observability_instance.example.instance_id - } - ``` - -4. **Install Promtail** - - Deploy Promtail via Helm to collect logs and forward them to the STACKIT Observability Loki endpoint. 
- - ```hcl - resource "helm_release" "promtail" { - name = "promtail" - repository = "https://grafana.github.io/helm-charts" - chart = "promtail" - namespace = kubernetes_namespace.monitoring.metadata.0.name - version = "6.16.4" - - values = [ - <<-EOF - config: - clients: - # To find the Loki push URL, navigate to the observability instance in the portal and select the API tab. - - url: "https://${stackit_observability_credential.example.username}:${stackit_observability_credential.example.password}@/instances/${stackit_observability_instance.example.instance_id}/loki/api/v1/push" - EOF - ] - } - ``` - -5. **Create Alert Group** - - Create a log alert that triggers when a specific pod logs an error message. - - ```hcl - resource "stackit_observability_logalertgroup" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = stackit_observability_instance.example.instance_id - name = "TestLogAlertGroup" - interval = "1m" - rules = [ - { - alert = "SimplePodLogAlertCheck" - expression = "sum(rate({namespace=\"example\", pod=\"logger\"} |= \"Simulated error message\" [1m])) > 0" - for = "60s" - labels = { - severity = "critical" - }, - annotations = { - summary : "Test Log Alert is working" - description : "Test Log Alert" - }, - }, - ] - } - ``` - -6. **Deploy Test Pod** - - Launch a pod that emits simulated error logs. This should trigger the alert if everything is set up correctly. 
-
-   ```hcl
-   resource "kubernetes_namespace" "example" {
-     metadata {
-       name = "example"
-     }
-   }
-
-   resource "kubernetes_pod" "logger" {
-     metadata {
-       name      = "logger"
-       namespace = kubernetes_namespace.example.metadata[0].name
-       labels = {
-         app = "logger"
-       }
-     }
-
-     spec {
-       container {
-         name  = "logger"
-         image = "bash"
-         command = [
-           "bash",
-           "-c",
-           <<-EOF
-           while true; do
-             echo "Simulated error message" >&2
-             sleep 10
-           done
-           EOF
-         ]
-       }
-     }
-   }
-   ```
\ No newline at end of file
diff --git a/templates/guides/stackit_cdn_with_custom_domain.md.tmpl b/templates/guides/stackit_cdn_with_custom_domain.md.tmpl
deleted file mode 100644
index 1fd9cbdb..00000000
--- a/templates/guides/stackit_cdn_with_custom_domain.md.tmpl
+++ /dev/null
@@ -1,255 +0,0 @@
----
-page_title: "Using STACKIT CDN to serve static files from an HTTP origin"
----
-
-# Using STACKIT CDN to serve static files from an HTTP origin
-
-This guide will walk you through the process of setting up a STACKIT CDN distribution to serve static files from a
-generic HTTP origin using Terraform. This is a common use case for developers who want to deliver content with low
-latency and high data transfer speeds.
-
----
-
-## Prerequisites
-
-Before you begin, make sure you have the following:
-
-* A **STACKIT project** and a user account with the necessary permissions for the CDN.
-* A **Service Account Key**: you can read about creating one here: [Create a Service Account Key](https://docs.stackit.cloud/platform/access-and-identity/service-accounts/how-tos/manage-service-account-keys/)
-
----
-
-## Step 1: Configure the Terraform Provider
-
-First, you need to configure the STACKIT provider in your Terraform configuration. Create a file named `main.tf` and add
-the following code. This block tells Terraform to download and use the STACKIT provider.
- -```terraform -terraform { - required_providers { - stackit = { - source = "stackitcloud/stackit" - } - } -} - -variable "service_account_key" { - type = string - description = "Your STACKIT service account key." - sensitive = true - default = "path/to/sa-key.json" -} - -variable "project_id" { - type = string - default = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # Your project ID -} - -provider "stackit" { - # The STACKIT provider is configured using the defined variables. - default_region = "eu01" - service_account_key_path = var.service_account_key -} - -``` - -## Step 2: Create the DNS Zone - -The first resource you'll create is the DNS zone, which will manage the records for your domain. - -```terraform -resource "stackit_dns_zone" "example_zone" { - project_id = var.project_id - name = "My DNS zone" - dns_name = "myapp.runs.onstackit.cloud" - contact_email = "aa@bb.ccc" - type = "primary" -} -``` - -## Step 3: Create the CDN Distribution - -Next, define the CDN distribution. This is the core service that will cache and serve your content from its origin. - -```terraform -resource "stackit_cdn_distribution" "example_distribution" { - project_id = var.project_id - - config = { - # Define the backend configuration - backend = { - type = "http" - - # Replace with the URL of your HTTP origin - origin_url = "https://your-origin-server.com" - } - - # The regions where content will be hosted - regions = ["EU", "US", "ASIA", "AF", "SA"] - blocked_countries = [] - } - -} -``` - -## Step 4: Create the DNS CNAME Record - -Finally, create the **CNAME record** to point your custom domain to the CDN. This step must come after the CDN is -created because it needs the CDN's unique domain name as its target. 
- -```terraform -resource "stackit_dns_record_set" "cname_record" { - project_id = stackit_dns_zone.example_zone.project_id - zone_id = stackit_dns_zone.example_zone.zone_id - - # This is the custom domain name which will be added to your zone - name = "cdn" - type = "CNAME" - ttl = 3600 - - # Points to the CDN distribution's unique domain. - # Notice the added dot at the end of the domain name to point to a FQDN. - records = ["${stackit_cdn_distribution.example_distribution.domains[0].name}."] -} - -``` - -This record directs traffic from your custom domain to the STACKIT CDN infrastructure. - -## Step 5: Add a Custom Domain to the CDN - -To provide a user-friendly URL, associate a custom domain (like `cdn.myapp.runs.onstackit.cloud`) with your -distribution. - -```terraform -resource "stackit_cdn_custom_domain" "example_custom_domain" { - project_id = stackit_cdn_distribution.example_distribution.project_id - distribution_id = stackit_cdn_distribution.example_distribution.distribution_id - - # Creates "cdn.myapp.runs.onstackit.cloud" dynamically - name = "${stackit_dns_record_set.cname_record.name}.${stackit_dns_zone.example_zone.dns_name}" -} - -``` - -This resource links the subdomain you created in the previous step to the CDN distribution. - -## Complete Terraform Configuration - -Here is the complete `main.tf` file, which follows the logical order of operations. - -```terraform -# This configuration file sets up a complete STACKIT CDN distribution -# with a custom domain managed by STACKIT DNS. - -# ----------------------------------------------------------------------------- -# PROVIDER CONFIGURATION -# ----------------------------------------------------------------------------- - -terraform { - required_providers { - stackit = { - source = "stackitcloud/stackit" - } - } -} - -variable "service_account_key" { - type = string - description = "Your STACKIT service account key." 
- sensitive = true - default = "path/to/sa-key.json" -} - -variable "project_id" { - type = string - description = "Your STACKIT project ID." - default = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -} - -provider "stackit" { - # The STACKIT provider is configured using the defined variables. - default_region = "eu01" - service_account_key_path = var.service_account_key -} - -# ----------------------------------------------------------------------------- -# DNS ZONE RESOURCE -# ----------------------------------------------------------------------------- -# The DNS zone manages all records for your domain. -# It's the first resource to be created. -# ----------------------------------------------------------------------------- - -resource "stackit_dns_zone" "example_zone" { - project_id = var.project_id - name = "My DNS zone" - dns_name = "myapp.runs.onstackit.cloud" - contact_email = "aa@bb.ccc" - type = "primary" -} - -# ----------------------------------------------------------------------------- -# CDN DISTRIBUTION RESOURCE -# ----------------------------------------------------------------------------- -# This resource defines the CDN, its origin, and caching regions. -# ----------------------------------------------------------------------------- - -resource "stackit_cdn_distribution" "example_distribution" { - project_id = var.project_id - - config = { - # Define the backend configuration - backend = { - type = "http" - - # Replace with the URL of your HTTP origin - origin_url = "https://your-origin-server.com" - } - - # The regions where content will be hosted - regions = ["EU", "US", "ASIA", "AF", "SA"] - blocked_countries = [] - } -} - -# ----------------------------------------------------------------------------- -# CUSTOM DOMAIN AND DNS RECORD -# ----------------------------------------------------------------------------- -# These resources link your CDN to a user-friendly custom domain and create -# the necessary DNS record to route traffic. 
-# ----------------------------------------------------------------------------- - -resource "stackit_dns_record_set" "cname_record" { - project_id = stackit_dns_zone.example_zone.project_id - zone_id = stackit_dns_zone.example_zone.zone_id - # This is the custom domain name which will be added to your zone - name = "cdn" - type = "CNAME" - ttl = 3600 - # Points to the CDN distribution's unique domain. - # The dot at the end makes it a fully qualified domain name (FQDN). - records = ["${stackit_cdn_distribution.example_distribution.domains[0].name}."] - -} - -resource "stackit_cdn_custom_domain" "example_custom_domain" { - project_id = stackit_cdn_distribution.example_distribution.project_id - distribution_id = stackit_cdn_distribution.example_distribution.distribution_id - - # Creates "cdn.myapp.runs.onstackit.cloud" dynamically - name = "${stackit_dns_record_set.cname_record.name}.${stackit_dns_zone.example_zone.dns_name}" -} - -# ----------------------------------------------------------------------------- -# OUTPUTS -# ----------------------------------------------------------------------------- -# This output will display the final custom URL after `terraform apply` is run. -# ----------------------------------------------------------------------------- - -output "custom_cdn_url" { - description = "The final custom domain URL for the CDN distribution." 
-  value       = "https://${stackit_cdn_custom_domain.example_custom_domain.name}"
-}
-
-```
diff --git a/templates/guides/stackit_org_service_account.md.tmpl b/templates/guides/stackit_org_service_account.md.tmpl
deleted file mode 100644
index e75ad7ef..00000000
--- a/templates/guides/stackit_org_service_account.md.tmpl
+++ /dev/null
@@ -1,15 +0,0 @@
----
-page_title: "Creating projects in an empty organization via Terraform"
----
-# Creating projects in an empty organization via Terraform
-
-Consider the following situation: You're starting with an empty STACKIT organization and want to create projects
-in this organization using the `stackit_resourcemanager_project` resource. Unfortunately, it is not possible to create
-a service account at the organization level that could be used for authentication in the STACKIT Terraform provider.
-The following steps will help you get started:
-
-1. Using the STACKIT portal, create a dummy project in your organization which will hold your service account, e.g. named "dummy-service-account-project".
-2. In this "dummy-service-account-project", create a service account. Create and save a service account key to use later for authenticating the STACKIT Terraform provider, as described in the docs. Then copy the e-mail address of the service account you just created.
-3. Here comes the important part: Navigate to your organization, open it, and select "Access". Click the "Grant access" button and paste the e-mail address of your service account. Make sure to grant the service account enough permissions to create projects in your organization, e.g. by assigning the "owner" role to it.
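-
-Once the service account has been granted access on the organization, the STACKIT Terraform provider can authenticate with its key and create projects. The following is a minimal, hypothetical sketch — the exact attribute names (e.g. `parent_container_id`) are assumptions; check the `stackit_resourcemanager_project` resource documentation before use:
-
-```hcl
-provider "stackit" {
-  default_region           = "eu01"
-  # path to the key of the service account created in the dummy project
-  service_account_key_path = "path/to/sa-key.json"
-}
-
-resource "stackit_resourcemanager_project" "example" {
-  # container ID of your organization (assumed attribute name)
-  parent_container_id = "my-organization-container-id"
-  name                = "example-project"
-}
-```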
-
-*This problem was brought up initially in [this](https://github.com/stackitcloud/terraform-provider-stackit/issues/855) issue on GitHub.*
diff --git a/templates/guides/using_loadbalancer_with_observability.md.tmpl b/templates/guides/using_loadbalancer_with_observability.md.tmpl
deleted file mode 100644
index a6bc9703..00000000
--- a/templates/guides/using_loadbalancer_with_observability.md.tmpl
+++ /dev/null
@@ -1,163 +0,0 @@
----
-page_title: "Using the STACKIT Loadbalancer together with STACKIT Observability"
----
-# Using the STACKIT Loadbalancer together with STACKIT Observability
-
-## Overview
-
-This guide explains how to configure the STACKIT Loadbalancer product to send metrics and logs to a STACKIT Observability instance.
-
-1. **Set Up Providers**
-
-   Begin by configuring the STACKIT provider to connect to the STACKIT services.
-
-   ```hcl
-   provider "stackit" {
-     default_region = "eu01"
-   }
-   ```
-
-2. **Create an Observability instance**
-
-   Establish a STACKIT Observability instance and its credentials.
-
-   ```hcl
-   resource "stackit_observability_instance" "observability01" {
-     project_id                             = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-     name                                   = "example-instance"
-     plan_name                              = "Observability-Monitoring-Medium-EU01"
-     acl                                    = ["0.0.0.0/0"]
-     metrics_retention_days                 = 90
-     metrics_retention_days_5m_downsampling = 90
-     metrics_retention_days_1h_downsampling = 90
-   }
-
-   resource "stackit_observability_credential" "observability01-credential" {
-     project_id  = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-     instance_id = stackit_observability_instance.observability01.instance_id
-   }
-   ```
-
-3. **Create STACKIT Loadbalancer credentials reference**
-
-   Create STACKIT Loadbalancer credentials, which will be referenced by the STACKIT Loadbalancer resource.
- - ```hcl - resource "stackit_loadbalancer_observability_credential" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - display_name = "example-credentials" - username = stackit_observability_credential.observability01-credential.username - password = stackit_observability_credential.observability01-credential.password - } - ``` - -4. **Create the STACKIT Loadbalancer** - - ```hcl - # Create a network - resource "stackit_network" "example_network" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-network" - ipv4_nameservers = ["8.8.8.8"] - ipv4_prefix = "192.168.0.0/25" - labels = { - "key" = "value" - } - routed = true - } - - # Create a network interface - resource "stackit_network_interface" "nic" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - network_id = stackit_network.example_network.network_id - } - - # Create a public IP for the load balancer - resource "stackit_public_ip" "public-ip" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - lifecycle { - ignore_changes = [network_interface_id] - } - } - - # Create a key pair for accessing the server instance - resource "stackit_key_pair" "keypair" { - name = "example-key-pair" - # set the path of your public key file here - public_key = chomp(file("/home/bob/.ssh/id_ed25519.pub")) - } - - # Create a server instance - resource "stackit_server" "boot-from-image" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-server" - boot_volume = { - size = 64 - source_type = "image" - source_id = "59838a89-51b1-4892-b57f-b3caf598ee2f" // Ubuntu 24.04 - } - availability_zone = "eu01-1" - machine_type = "g2i.1" - keypair_name = stackit_key_pair.keypair.name - network_interfaces = [ - stackit_network_interface.nic.network_interface_id - ] - } - - # Create a load balancer - resource "stackit_loadbalancer" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-load-balancer" - target_pools = [ - { - name = 
"example-target-pool"
-         target_port = 80
-         targets = [
-           {
-             display_name = stackit_server.boot-from-image.name
-             ip           = stackit_network_interface.nic.ipv4
-           }
-         ]
-         active_health_check = {
-           healthy_threshold   = 10
-           interval            = "3s"
-           interval_jitter     = "3s"
-           timeout             = "3s"
-           unhealthy_threshold = 10
-         }
-       }
-     ]
-     listeners = [
-       {
-         display_name = "example-listener"
-         port         = 80
-         protocol     = "PROTOCOL_TCP"
-         target_pool  = "example-target-pool"
-       }
-     ]
-     networks = [
-       {
-         network_id = stackit_network.example_network.network_id
-         role       = "ROLE_LISTENERS_AND_TARGETS"
-       }
-     ]
-     external_address = stackit_public_ip.public-ip.ip
-     options = {
-       private_network_only = false
-       observability = {
-         logs = {
-           # uses the load balancer credentials from step 3
-           credentials_ref = stackit_loadbalancer_observability_credential.example.credentials_ref
-           # uses the observability instance from step 2
-           push_url = stackit_observability_instance.observability01.logs_push_url
-         }
-         metrics = {
-           # uses the load balancer credentials from step 3
-           credentials_ref = stackit_loadbalancer_observability_credential.example.credentials_ref
-           # uses the observability instance from step 2
-           push_url = stackit_observability_instance.observability01.metrics_push_url
-         }
-       }
-     }
-   }
-   ```
diff --git a/templates/guides/vault_secrets_manager.md.tmpl b/templates/guides/vault_secrets_manager.md.tmpl
deleted file mode 100644
index d97b0533..00000000
--- a/templates/guides/vault_secrets_manager.md.tmpl
+++ /dev/null
@@ -1,83 +0,0 @@
----
-page_title: "Using Vault Provider with STACKIT Secrets Manager"
----
-# Using Vault Provider with STACKIT Secrets Manager
-
-## Overview
-
-This guide outlines the process of utilizing the [HashiCorp Vault provider](https://registry.terraform.io/providers/hashicorp/vault) alongside the STACKIT provider to write secrets in the STACKIT Secrets Manager. The guide focuses on secrets from STACKIT Cloud resources but can be adapted for any secret.
-
-## Steps
-
-1.
**Configure STACKIT Provider** - - ```hcl - provider "stackit" { - default_region = "eu01" - } - ``` - -2. **Create STACKIT Secrets Manager Instance** - - ```hcl - resource "stackit_secretsmanager_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - } - ``` - -3. **Define STACKIT Secrets Manager User** - - ```hcl - resource "stackit_secretsmanager_user" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - instance_id = stackit_secretsmanager_instance.example.instance_id - description = "Example user" - write_enabled = true - } - ``` - -4. **Configure Vault Provider** - - ```hcl - provider "vault" { - address = "https://prod.sm.eu01.stackit.cloud" - skip_child_token = true - - auth_login_userpass { - username = stackit_secretsmanager_user.example.username - password = stackit_secretsmanager_user.example.password - } - } - ``` - -5. **Define Terraform Resource (Example: Observability Monitoring Instance)** - - ```hcl - resource "stackit_observability_instance" "example" { - project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" - name = "example-instance" - plan_name = "Observability-Monitoring-Medium-EU01" - } - ``` - -6. **Store Secret in Vault** - - ```hcl - resource "vault_kv_secret_v2" "example" { - mount = stackit_secretsmanager_instance.example.instance_id - name = "my-secret" - cas = 1 - delete_all_versions = true - data_json = jsonencode( - { - grafana_password = stackit_observability_instance.example.grafana_initial_admin_password, - other_secret = ..., - } - ) - } - ``` - -## Note - -This example can be adapted for various resources within the provider as well as any other Secret the user wants to set in the Secrets Manager instance. Adapting this examples means replacing the Observability Monitoring Grafana password with the appropriate value. 
\ No newline at end of file
diff --git a/templates/resources/network_area_route.md.tmpl b/templates/resources/network_area_route.md.tmpl
deleted file mode 100644
index 48cd3b48..00000000
--- a/templates/resources/network_area_route.md.tmpl
+++ /dev/null
@@ -1,54 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
-subcategory: ""
-description: |-
-  {{ .Description | trimspace }}
----
-
-# {{.Name}} ({{.Type}})
-
-{{ .Description | trimspace }}
-
-## Example Usage
-
-{{ tffile (printf "examples/resources/%s/resource.tf" .Name) }}
-
-## Migration of IaaS resources from versions <= v0.74.0
-
-The release of the STACKIT IaaS API v2 provides many new features, but also includes some breaking changes
-(compared to v1 of the STACKIT IaaS API) that must be reflected on the Terraform side. The
-`stackit_network_area_route` resource changed as part of this. See the example below for how to migrate your resources.
-
-### Breaking change: Network area route resource (stackit_network_area_route)
-
-**Configuration for <= v0.74.0**
-
-```terraform
-resource "stackit_network_area_route" "example" {
-  organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  prefix          = "192.168.0.0/24" # prefix field got removed for provider versions > v0.74.0, use the new destination field instead
-  next_hop        = "192.168.0.0" # schema of the next_hop field changed, see below
-}
-```
-
-**Configuration for > v0.74.0**
-
-```terraform
-resource "stackit_network_area_route" "example" {
-  organization_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  network_area_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-  destination = { # the new 'destination' field replaces the old 'prefix' field
-    type  = "cidrv4"
-    value = "192.168.0.0/24" # migration: put the value of the old 'prefix' field here
-  }
-  next_hop = {
-    type  = "ipv4"
-    value = "192.168.0.0" # migration: put the value of the old 'next_hop' field here
-  }
-}
-```
-
-{{ .SchemaMarkdown | trimspace }}
-
diff --git a/tools/tools.go b/tools/tools.go
new file mode 100644
index 00000000..b142c65c
--- /dev/null
+++ b/tools/tools.go
@@ -0,0 +1,12 @@
+package tools
+
+// Generate copyright headers
+//go:generate go run github.com/hashicorp/copywrite headers -d .. --config ../.copywrite.hcl
+
+// Format Terraform code for use in documentation.
+// If you do not have Terraform installed, you can remove the formatting command, but it is suggested
+// to ensure the documentation is formatted properly.
+//go:generate terraform fmt -recursive ../examples/
+
+// Generate documentation.
+//go:generate go run github.com/hashicorp/terraform-plugin-docs/cmd/tfplugindocs generate --provider-dir .. -provider-name stackitprivatepreview