[NaverCloud] Deploying NKS with Terraform - 1


Introduction


This post explains how to provision NKS (Ncloud Kubernetes Service) with Terraform.
The author has little Terraform knowledge or experience, having only deployed very simple resources in an AWS environment before. This post walks through deploying an NKS cluster using the Terraform files provided in the NCP GitHub repository as-is, without any individual customization.
📢 Note: https://github.com/NaverCloudPlatform/terraform-provider-ncloud

Prerequisites


Prerequisites for the local machine and the NCP environment before using Terraform:
  • Terraform 0.13.x
  • Go v1.16.x (to build the provider plugin)
  • GOPATH configured


    The author tested on Windows 10 with WSL (Ubuntu) installed. In this setup GOPATH is not exported as an environment variable by default, so the following steps were performed.
  • Run go env
  • go env | grep PATH
    GOPATH="/home/hyungwook/go"
    Export the path reported by go env and set it as an environment variable.
  • Run export
  • export GOPATH="/home/hyungwook/go"
    env | grep GOPATH
    GOPATH=/home/hyungwook/go

    Building The Provider


    Create a working directory under GOPATH and fetch the git source used to build the Terraform provider.
    $ mkdir -p $GOPATH/src/github.com/NaverCloudPlatform; cd $GOPATH/src/github.com/NaverCloudPlatform
    $ git clone git@github.com:NaverCloudPlatform/terraform-provider-ncloud.git
    Move into that path and build the Naver Cloud provider Go module.
    $ cd $GOPATH/src/github.com/NaverCloudPlatform/terraform-provider-ncloud
    $ make build
    📢 Note: the author hit the following error while running make build and resolved it with the steps below.
  • make build error
  • ~/go/src/github.com/NaverCloudPlatform/terraform-provider-ncloud   master  make build
    ==> Checking that code complies with gofmt requirements...
    go install
    go: github.com/NaverCloudPlatform/[email protected]: missing go.sum entry; to add it:
            go mod download github.com/NaverCloudPlatform/ncloud-sdk-go-v2
    make: *** [GNUmakefile:17: build] Error 1
  • Run go mod tidy
    ~/go/src/github.com/NaverCloudPlatform/terraform-provider-ncloud   master  go mod tidy
    go: downloading github.com/hashicorp/terraform-plugin-sdk/v2 v2.10.0
    go: downloading github.com/NaverCloudPlatform/ncloud-sdk-go-v2 v1.3.3
    go: downloading github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320
    go: downloading gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c
    go: downloading github.com/hashicorp/go-hclog v0.16.1
    go: downloading github.com/hashicorp/go-plugin v1.4.1
    go: downloading github.com/hashicorp/terraform-plugin-go v0.5.0
    go: downloading github.com/hashicorp/go-uuid v1.0.2
    go: downloading github.com/mitchellh/go-testing-interface v1.14.1
    go: downloading google.golang.org/grpc v1.32.0
    go: downloading github.com/hashicorp/go-multierror v1.1.1
    ...
    📢 Note (see https://www.sysnet.pe.kr/2/0/12808): after running go mod tidy, make build was confirmed to complete normally.
  • Run make build again
  • make build
    ==> Checking that code complies with gofmt requirements...
    go install
    📢 Note: for anything beyond this, refer to the NCP Provider docs.

    Using the web console


    The author created a Naver Cloud account and issued a new API authentication key to use with Terraform.
    The following steps are performed under [Mypage] - [Account Management] - [Authentication Key Management].

    Issuing the authentication key

  • Click [Create New API Authentication Key] and note down the Access Key and Secret Key values.
  • The issued API authentication key is what Terraform uses to call the NCP API; these values are supplied as variables in the .tf files (see the sketch after the file listing below).

    Setting the authentication key variable values

  • Move to examples/nks
  •  $ pwd
    /home/hyungwook/go/src/github.com/NaverCloudPlatform/terraform-provider-ncloud/examples/nks
  • List the files
  • $ ls -lrt
    total 24
    -rw-r--r-- 1 hyungwook hyungwook  135 Dec 26 16:02 versions.tf
    -rw-r--r-- 1 hyungwook hyungwook 2028 Dec 26 16:02 main.tf
    -rw-r--r-- 1 hyungwook hyungwook  322 Dec 26 16:19 variables.tf
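    The example directory keeps the credentials in variables.tf. A minimal sketch of how such variables are typically fed into the ncloud provider is shown below; the exact variable names and the support_vpc flag are assumptions based on the provider documentation, not a copy of the repository files.
    # variables.tf (sketch) - credential and region inputs assumed for this example
    variable "access_key" {
      description = "NCP API access key issued in the console"
      type        = string
    }

    variable "secret_key" {
      description = "NCP API secret key issued in the console"
      type        = string
    }

    variable "region" {
      description = "NCP region to deploy into"
      type        = string
      default     = "KR"
    }

    # provider configuration (sketch) - NKS runs in the VPC environment
    provider "ncloud" {
      access_key  = var.access_key
      secret_key  = var.secret_key
      region      = var.region
      support_vpc = true
    }
    Rather than hard-coding the key values, it is safer to put them in a terraform.tfvars file that is kept out of version control.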

    Creating the cluster (apply)


    Initializing and running Terraform

  • terraform init
  • $ terraform init
    
    Initializing the backend...
    
    Initializing provider plugins...
    - Finding latest version of navercloudplatform/ncloud...
    - Installing navercloudplatform/ncloud v2.2.1...
    - Installed navercloudplatform/ncloud v2.2.1 (signed by a HashiCorp partner, key ID 9DCE2431234C9)
    
    Partner and community providers are signed by their developers.
    If you'd like to know more about provider signing, you can read about it here:
    https://www.terraform.io/docs/cli/plugins/signing.html
    
    Terraform has created a lock file .terraform.lock.hcl to record the provider
    selections it made above. Include this file in your version control repository
    so that Terraform can guarantee to make the same selections by default when
    you run "terraform init" in the future.
    
    Terraform has been successfully initialized!
    
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
  • terraform plan
  • $ terraform plan
    
    Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
      + create
    
    Terraform will perform the following actions:
    
      # ncloud_login_key.loginkey will be created
      + resource "ncloud_login_key" "loginkey" {
          + fingerprint = (known after apply)
          + id          = (known after apply)
          + key_name    = "my-key"
          + private_key = (sensitive value)
        }
    
      # ncloud_nks_cluster.cluster will be created
      + resource "ncloud_nks_cluster" "cluster" {
          + cluster_type         = "SVR.VNKS.STAND.C002.M008.NET.SSD.B050.G002"
          + endpoint             = (known after apply)
          + id                   = (known after apply)
          + k8s_version          = "1.20.13-nks.1"
          + kube_network_plugin  = "cilium"
          + lb_private_subnet_no = (known after apply)
          + login_key_name       = "my-key"
          + name                 = "sample-cluster"
          + subnet_no_list       = (known after apply)
          + uuid                 = (known after apply)
          + vpc_no               = (known after apply)
          + zone                 = "KR-1"
    
          + log {
              + audit = true
            }
        }
    
      # ncloud_nks_node_pool.node_pool will be created
      + resource "ncloud_nks_node_pool" "node_pool" {
          + cluster_uuid   = (known after apply)
          + id             = (known after apply)
          + instance_no    = (known after apply)
          + k8s_version    = (known after apply)
          + node_count     = 1
          + node_pool_name = "pool1"
          + product_code   = "SVR.VSVR.STAND.C002.M008.NET.SSD.B050.G002"
          + subnet_no      = (known after apply)
    
          + autoscale {
              + enabled = true
              + max     = 2
              + min     = 1
            }
        }
    
      # ncloud_subnet.lb_subnet will be created
      + resource "ncloud_subnet" "lb_subnet" {
          + id             = (known after apply)
          + name           = "lb-subnet"
          + network_acl_no = (known after apply)
          + subnet         = "10.0.100.0/24"
          + subnet_no      = (known after apply)
          + subnet_type    = "PRIVATE"
          + usage_type     = "LOADB"
          + vpc_no         = (known after apply)
          + zone           = "KR-1"
        }
    
      # ncloud_subnet.node_subnet will be created
      + resource "ncloud_subnet" "node_subnet" {
          + id             = (known after apply)
          + name           = "node-subnet"
          + network_acl_no = (known after apply)
          + subnet         = "10.0.1.0/24"
          + subnet_no      = (known after apply)
          + subnet_type    = "PRIVATE"
          + usage_type     = "GEN"
          + vpc_no         = (known after apply)
          + zone           = "KR-1"
        }
    
      # ncloud_vpc.vpc will be created
      + resource "ncloud_vpc" "vpc" {
          + default_access_control_group_no = (known after apply)
          + default_network_acl_no          = (known after apply)
          + default_private_route_table_no  = (known after apply)
          + default_public_route_table_no   = (known after apply)
          + id                              = (known after apply)
          + ipv4_cidr_block                 = "10.0.0.0/16"
          + name                            = "vpc"
          + vpc_no                          = (known after apply)
        }
    
    Plan: 6 to add, 0 to change, 0 to destroy.
    Note: the author's network connection dropped during the initial terraform apply, so the deployment was destroyed and re-run. During that destroy the resources were not removed cleanly, so they were deleted directly from the console before applying again.
  • terraform apply
  • $ terraform apply -auto-approve
    ncloud_vpc.vpc: Refreshing state... [id=16240]
    ncloud_subnet.node_subnet: Refreshing state... [id=32073]
    
    Note: Objects have changed outside of Terraform
    
    Terraform detected the following changes made outside of Terraform since the last "terraform apply":
    
      # ncloud_subnet.node_subnet has been deleted
      - resource "ncloud_subnet" "node_subnet" {
          - id             = "32073" -> null
          - name           = "node-subnet" -> null
          - network_acl_no = "23481" -> null
          - subnet         = "10.0.1.0/24" -> null
          - subnet_no      = "32073" -> null
          - subnet_type    = "PRIVATE" -> null
          - usage_type     = "GEN" -> null
          - vpc_no         = "16240" -> null
          - zone           = "KR-1" -> null
        }
      # ncloud_vpc.vpc has been deleted
      - resource "ncloud_vpc" "vpc" {
          - default_access_control_group_no = "33795" -> null
          - default_network_acl_no          = "23481" -> null
          - default_private_route_table_no  = "31502" -> null
          - default_public_route_table_no   = "31501" -> null
          - id                              = "16240" -> null
          - ipv4_cidr_block                 = "10.0.0.0/16" -> null
          - name                            = "vpc" -> null
          - vpc_no                          = "16240" -> null
        }
    
    Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these
    changes.
    
    ──────────────────────────────────────────────────────────────────────
    
    Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
      + create
    
    Terraform will perform the following actions:
    
      # ncloud_login_key.loginkey will be created
      + resource "ncloud_login_key" "loginkey" {
          + fingerprint = (known after apply)
          + id          = (known after apply)
          + key_name    = "my-key"
          + private_key = (sensitive value)
        }
    
      # ncloud_nks_cluster.cluster will be created
      + resource "ncloud_nks_cluster" "cluster" {
          + cluster_type         = "SVR.VNKS.STAND.C002.M008.NET.SSD.B050.G002"
          + endpoint             = (known after apply)
          + id                   = (known after apply)
          + k8s_version          = "1.20.13-nks.1"
          + kube_network_plugin  = "cilium"
          + lb_private_subnet_no = (known after apply)
          + login_key_name       = "my-key"
          + name                 = "sample-cluster"
          + subnet_no_list       = (known after apply)
          + uuid                 = (known after apply)
          + vpc_no               = (known after apply)
          + zone                 = "KR-1"
    
          + log {
              + audit = true
            }
        }
    
      # ncloud_nks_node_pool.node_pool will be created
      + resource "ncloud_nks_node_pool" "node_pool" {
          + cluster_uuid   = (known after apply)
          + id             = (known after apply)
          + instance_no    = (known after apply)
          + k8s_version    = (known after apply)
          + node_count     = 1
          + node_pool_name = "pool1"
          + product_code   = "SVR.VSVR.STAND.C002.M008.NET.SSD.B050.G002"
          + subnet_no      = (known after apply)
    
          + autoscale {
              + enabled = true
              + max     = 2
              + min     = 1
            }
        }
    
      # ncloud_subnet.lb_subnet will be created
      + resource "ncloud_subnet" "lb_subnet" {
          + id             = (known after apply)
          + name           = "lb-subnet"
          + network_acl_no = (known after apply)
          + subnet         = "10.0.100.0/24"
          + subnet_no      = (known after apply)
          + subnet_type    = "PRIVATE"
          + usage_type     = "LOADB"
          + vpc_no         = (known after apply)
          + zone           = "KR-1"
        }
    
      # ncloud_subnet.node_subnet will be created
      + resource "ncloud_subnet" "node_subnet" {
          + id             = (known after apply)
          + name           = "node-subnet"
          + network_acl_no = (known after apply)
          + subnet         = "10.0.1.0/24"
          + subnet_no      = (known after apply)
          + subnet_type    = "PRIVATE"
          + usage_type     = "GEN"
          + vpc_no         = (known after apply)
          + zone           = "KR-1"
        }
    
      # ncloud_vpc.vpc will be created
      + resource "ncloud_vpc" "vpc" {
          + default_access_control_group_no = (known after apply)
          + default_network_acl_no          = (known after apply)
          + default_private_route_table_no  = (known after apply)
          + default_public_route_table_no   = (known after apply)
          + id                              = (known after apply)
          + ipv4_cidr_block                 = "10.0.0.0/16"
          + name                            = "vpc"
          + vpc_no                          = (known after apply)
        }
    
    Plan: 6 to add, 0 to change, 0 to destroy.
    ncloud_vpc.vpc: Creating...
    ncloud_login_key.loginkey: Creating...
    ncloud_login_key.loginkey: Creation complete after 3s [id=my-key]
    ncloud_vpc.vpc: Still creating... [10s elapsed]
    ncloud_vpc.vpc: Creation complete after 14s [id=16241]
    ncloud_subnet.lb_subnet: Creating...
    ncloud_subnet.node_subnet: Creating...
    ncloud_subnet.node_subnet: Still creating... [10s elapsed]
    ncloud_subnet.lb_subnet: Still creating... [10s elapsed]
    ncloud_subnet.node_subnet: Creation complete after 13s [id=32076]
    ncloud_subnet.lb_subnet: Creation complete after 13s [id=32075]
    ncloud_nks_cluster.cluster: Creating...
    ncloud_nks_cluster.cluster: Still creating... [10s elapsed]
    ...
    ncloud_nks_cluster.cluster: Still creating... [15m50s elapsed]
    ncloud_nks_cluster.cluster: Creation complete after 16m0s [id=b6510323-da50-4f3e-a093-a6111d04a268]
    ncloud_nks_node_pool.node_pool: Creating...
    ncloud_nks_node_pool.node_pool: Still creating... [10s elapsed]
    ...
    ncloud_nks_node_pool.node_pool: Still creating... [9m20s elapsed]
    ncloud_nks_node_pool.node_pool: Creation complete after 9m28s [id=b6510323-da50-4f3e-a093-a6111d04a268:pool1]
    
    Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

    Verifying the resources


    Verify that the resources deployed with Terraform were created successfully.

    Cluster screen


    The list of created clusters can be viewed under [VPC] - [Kubernetes Services] - [Clusters].

    Click the deployed sample-cluster to open its detail screen.

    Nodes and node pools


    When creating an NKS cluster you register node pools, and each node pool can use a different node spec. They are listed under [VPC] - [Kubernetes Services] - [Node Pools]; the author created only pool1 with the default values (a sketch of an additional pool follows below).
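    For reference, an extra pool with a different spec is simply another ncloud_nks_node_pool block. The sketch below reuses the attribute names visible in the terraform plan output above; the pool name "pool2" is a hypothetical placeholder, and product_code would be changed to whatever server spec the pool should use.
    # Sketch of an additional node pool (hypothetical "pool2", not part of the example files)
    resource "ncloud_nks_node_pool" "pool2" {
      cluster_uuid   = ncloud_nks_cluster.cluster.uuid
      node_pool_name = "pool2"
      node_count     = 1
      # change product_code to select a different node spec for this pool
      product_code   = "SVR.VSVR.STAND.C002.M008.NET.SSD.B050.G002"
      subnet_no      = ncloud_subnet.node_subnet.id

      autoscale {
        enabled = true
        min     = 1
        max     = 2
      }
    }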


    Clicking a node in the node list opens its detail screen. The following is the detail screen of node 1 while it is still being created.

    Since each node is provisioned as a virtual machine, the created servers can also be checked under [VPC] - [Server] - [Server].

    Verifying the cluster connection


    Once the cluster has been deployed successfully, it is shown with the STATUS: Ready tag.
    Download the kubeconfig file to manage the cluster and verify access with kubectl commands.

    Testing kubectl commands


    Checking with the command below while the cluster was still provisioning, only one of the two nodes was in the Ready state; the node that had not yet finished creating showed NotReady.
  • kubectl get nodes -o wide: check the node list
  • $ kubectl --kubeconfig="kubeconfig-b6510323-da50-4f3e-a093-a6111d04a268.yaml" get nodes -o wide
    NAME              STATUS     ROLES    AGE   VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
    nks-pool1-w-udd   Ready      <none>   10m   v1.20.13   10.0.1.9      <none>        Ubuntu 18.04.5 LTS   5.4.0-65-generic   containerd://1.3.7
    nks-pool1-w-ude   NotReady   <none>   23s   v1.20.13   10.0.1.10     <none>        Ubuntu 18.04.5 LTS   5.4.0-65-generic   containerd://1.3.7
  • kubectl version: check the cluster version information
  • $ k version --kubeconfig kubeconfig-b6510323-da50-4f3e-a093-a6111d04a268.yaml
    Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.13", GitCommit:"2444b3347a2c45eb965b182fb836e1f51dc61b70", GitTreeState:"clean", BuildDate:"2021-11-17T13:00:29Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

    Adding a NAT Gateway


    Although not covered in the steps above, the default NKS configuration has no NAT Gateway, so the cluster cannot reach the outside network. Pulling images from external registries and similar traffic therefore does not work until a NAT Gateway is added.
    Note: the next article will cover a Terraform deployment that includes the NAT Gateway.

    Under [VPC] - [VPC] - [NAT Gateway], click [+ Create NAT Gateway] and fill in the name, VPC, zone, and memo.
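    Although the author created the NAT Gateway in the console, the ncloud provider also exposes it as a resource. The block below is only a rough sketch based on the provider documentation (the arguments shown and the name "nks-nat-gw" are assumptions, not what was applied in this post); the next article covers the actual Terraform version.
    # Sketch only: NAT Gateway managed by Terraform instead of the console
    resource "ncloud_nat_gateway" "nat_gw" {
      vpc_no = ncloud_vpc.vpc.id
      zone   = "KR-1"
      name   = "nks-nat-gw"   # hypothetical name
    }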

    Adding a Route Table rule


    Add a route so that the VPC can communicate with the outside through the NAT Gateway created above.

    By default, the route table generated when the VPC was created only routes traffic within the 10.0.0.0/16 range.

    Add a 0.0.0.0/0 route targeting the NAT Gateway for external communication (a sketch follows below).
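    Continuing the sketch above, the same route could be expressed with an ncloud_route resource; the default_private_route_table_no attribute appears in the terraform plan output earlier. This is again an assumption-based sketch rather than what was actually applied here.
    # Sketch only: default route for outbound traffic via the NAT Gateway above
    resource "ncloud_route" "default_outbound" {
      route_table_no         = ncloud_vpc.vpc.default_private_route_table_no
      destination_cidr_block = "0.0.0.0/0"
      target_type            = "NATGW"
      target_name            = ncloud_nat_gateway.nat_gw.name
      target_no              = ncloud_nat_gateway.nat_gw.id
    }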

    Application deployment test


    With the route in place, we can confirm from the NKS cluster that external communication works.
    Next, run a simple nginx image with the kubectl run command.
  • kubectl run <pod-name> --image <image-name>
  • kubectl run nginx-test --image nginx --kubeconfig kubeconfig-b6510323-da50-4f3e-a093-a6111d04a268.yaml
    pod/nginx-test created
    📢 Note: for convenience the author specifies the kubeconfig file with the --kubeconfig option, but you can also set the KUBECONFIG environment variable or place the file at $HOME/.kube/config.
    k get pods -o wide --kubeconfig kubeconfig-b6510323-da50-4f3e-a093-a6111d04a268.yaml
    NAME         READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
    nginx-test   1/1     Running   0          49s   198.18.1.116   nks-pool1-w-ude   <none>           <none>
  • port-forward connection test
  • sudo kubectl port-forward  --kubeconfig=kubeconfig-b6510323-da50-4f3e-a093-a6111d04a268.yaml nginx-test 80:80
    [sudo] password for hyungwook:
    Forwarding from 127.0.0.1:80 -> 80
    Forwarding from [::1]:80 -> 80

    Deleting the cluster (Destroy)


    Once the experiment is finished, remove the resources with the terraform destroy command.
    However, the deletion does not complete, as shown below. Because the NAT Gateway was added manually outside of Terraform, the VPC cannot be deleted by Terraform; the NAT Gateway and the route table rule have to be removed directly in the console before the VPC can be deleted.
  • terraform destroy
  • terraform destroy -auto-approve
    ncloud_vpc.vpc: Refreshing state... [id=16241]
    ncloud_login_key.loginkey: Refreshing state... [id=my-key]
    ncloud_subnet.lb_subnet: Refreshing state... [id=32075]
    ncloud_subnet.node_subnet: Refreshing state... [id=32076]
    ncloud_nks_cluster.cluster: Refreshing state... [id=b6510323-da50-4f3e-a093-a6111d04a268]
    ncloud_nks_node_pool.node_pool: Refreshing state... [id=b6510323-da50-4f3e-a093-a6111d04a268:pool1]
    
    Note: Objects have changed outside of Terraform
    
    Terraform detected the following changes made outside of Terraform since the last "terraform apply":
    
      # ncloud_nks_node_pool.node_pool has been changed
      ~ resource "ncloud_nks_node_pool" "node_pool" {
            id             = "b6510323-da50-4f3e-a093-a6111d04a268:pool1"
          ~ node_count     = 1 -> 2
            # (6 unchanged attributes hidden)
    
            # (1 unchanged block hidden)
        }
    
    Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these
    changes.
    
    ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
    
    Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
      - destroy
    
    Terraform will perform the following actions:
    
      # ncloud_login_key.loginkey will be destroyed
      - resource "ncloud_login_key" "loginkey" {
          - fingerprint = "96:53:da:74:63:69:8d:23:ae:a7:dc:d0:4e:ae:2a:46" -> null
          - id          = "my-key" -> null
          - key_name    = "my-key" -> null
          - private_key = (sensitive value)
        }
    
      # ncloud_nks_cluster.cluster will be destroyed
      - resource "ncloud_nks_cluster" "cluster" {
          - cluster_type         = "SVR.VNKS.STAND.C002.M008.NET.SSD.B050.G002" -> null
          - endpoint             = "https://b6510323-da50-4f3e-a093-a6111d04a268.kr.vnks.ntruss.com" -> null
          - id                   = "b6510323-da50-4f3e-a093-a6111d04a268" -> null
          - k8s_version          = "1.20.13-nks.1" -> null
          - kube_network_plugin  = "cilium" -> null
          - lb_private_subnet_no = "32075" -> null
          - login_key_name       = "my-key" -> null
          - name                 = "sample-cluster" -> null
          - subnet_no_list       = [
              - "32076",
            ] -> null
          - uuid                 = "b6510323-da50-4f3e-a093-a6111d04a268" -> null
          - vpc_no               = "16241" -> null
          - zone                 = "KR-1" -> null
    
          - log {
              - audit = true -> null
            }
        }
    
      # ncloud_nks_node_pool.node_pool will be destroyed
      - resource "ncloud_nks_node_pool" "node_pool" {
          - cluster_uuid   = "b6510323-da50-4f3e-a093-a6111d04a268" -> null
          - id             = "b6510323-da50-4f3e-a093-a6111d04a268:pool1" -> null
          - instance_no    = "9500424" -> null
          - k8s_version    = "1.20.13" -> null
          - node_count     = 2 -> null
          - node_pool_name = "pool1" -> null
          - product_code   = "SVR.VSVR.STAND.C002.M008.NET.SSD.B050.G002" -> null
          - subnet_no      = "32076" -> null
    
          - autoscale {
              - enabled = true -> null
              - max     = 2 -> null
              - min     = 1 -> null
            }
        }
    
      # ncloud_subnet.lb_subnet will be destroyed
      - resource "ncloud_subnet" "lb_subnet" {
          - id             = "32075" -> null
          - name           = "lb-subnet" -> null
          - network_acl_no = "23482" -> null
          - subnet         = "10.0.100.0/24" -> null
          - subnet_no      = "32075" -> null
          - subnet_type    = "PRIVATE" -> null
          - usage_type     = "LOADB" -> null
          - vpc_no         = "16241" -> null
          - zone           = "KR-1" -> null
        }
    
      # ncloud_subnet.node_subnet will be destroyed
      - resource "ncloud_subnet" "node_subnet" {
          - id             = "32076" -> null
          - name           = "node-subnet" -> null
          - network_acl_no = "23482" -> null
          - subnet         = "10.0.1.0/24" -> null
          - subnet_no      = "32076" -> null
          - subnet_type    = "PRIVATE" -> null
          - usage_type     = "GEN" -> null
          - vpc_no         = "16241" -> null
          - zone           = "KR-1" -> null
        }
    
      # ncloud_vpc.vpc will be destroyed
      - resource "ncloud_vpc" "vpc" {
          - default_access_control_group_no = "33799" -> null
          - default_network_acl_no          = "23482" -> null
          - default_private_route_table_no  = "31504" -> null
          - default_public_route_table_no   = "31503" -> null
          - id                              = "16241" -> null
          - ipv4_cidr_block                 = "10.0.0.0/16" -> null
          - name                            = "vpc" -> null
          - vpc_no                          = "16241" -> null
        }
    
    Plan: 0 to add, 0 to change, 6 to destroy.
    ncloud_nks_node_pool.node_pool: Destroying... [id=b6510323-da50-4f3e-a093-a6111d04a268:pool1]
    ncloud_nks_node_pool.node_pool: Still destroying... [id=b6510323-da50-4f3e-a093-a6111d04a268:pool1, 10s elapsed]
    ...
    da50-4f3e-a093-a6111d04a268:pool1, 3m50s elapsed]
    ncloud_nks_node_pool.node_pool: Destruction complete after 3m57s
    ncloud_nks_cluster.cluster: Destroying... [id=b6510323-da50-4f3e-a093-a6111d04a268]
    ncloud_nks_cluster.cluster: Still destroying... [id=b6510323-da50-4f3e-a093-a6111d04a268, 10s elapsed]
    ...
    ncloud_nks_cluster.cluster: Still destroying... [id=b6510323-da50-4f3e-a093-a6111d04a268, 2m10s elapsed]
    ncloud_nks_cluster.cluster: Destruction complete after 2m15s
    ncloud_subnet.lb_subnet: Destroying... [id=32075]
    ncloud_login_key.loginkey: Destroying... [id=my-key]
    ncloud_subnet.node_subnet: Destroying... [id=32076]
    ncloud_login_key.loginkey: Destruction complete after 4s
    ncloud_subnet.lb_subnet: Still destroying... [id=32075, 10s elapsed]
    ncloud_subnet.node_subnet: Still destroying... [id=32076, 10s elapsed]
    ncloud_subnet.lb_subnet: Destruction complete after 13s
    ncloud_subnet.node_subnet: Still destroying... [id=32076, 20s elapsed]
    ncloud_subnet.node_subnet: Destruction complete after 23s
    ncloud_vpc.vpc: Destroying... [id=16241]
    ╷
    │ Error: Status: 400 Bad Request, Body: {"responseError": {"returnCode": "1000036",
    │   "returnMessage": "You cannot delete all endpoints because they have not been returned."}}
    It is a pity that this post could not create and delete the NKS cluster entirely through Terraform.
    In the next article, the .tf files will be updated so that the NAT Gateway is created at the same time. 👋