Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux Matrix channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are un-published in regular garbage collection sweeps. Please note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible once the AMI has been un-published.
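
    To check whether a pinned AMI is still published before relying on it (for example in an autoscaling group), you can query it with the AWS CLI. A minimal sketch, using the current Stable amd64 AMI for us-east-1 from the table below:

    # Prints the image description while the AMI is still published;
    # once un-published, the call fails with an error instead.
    aws ec2 describe-images --region us-east-1 --image-ids ami-0749a12240ca137bf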

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
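
    If you do decide to disable automatic updates on a running instance, one approach (a sketch; see the update documentation linked above for the supported configuration options) is to stop and mask the update engine:

    # Stop the update engine for the current boot
    sudo systemctl stop update-engine.service
    # Keep it from starting again on subsequent boots
    sudo systemctl mask update-engine.service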

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4593.2.0.

    View as json feed: amd64 arm64
    EC2 Region      AMI Type     AMI ID
    af-south-1      HVM (amd64)  ami-02ace936051e68d7b
    af-south-1      HVM (arm64)  ami-05c0e2610bdd3bd69
    ap-east-1       HVM (amd64)  ami-05d78574bd68c1620
    ap-east-1       HVM (arm64)  ami-05fce1054a22b8a1c
    ap-northeast-1  HVM (amd64)  ami-0a83332afdee87626
    ap-northeast-1  HVM (arm64)  ami-06e9c46a042acc673
    ap-northeast-2  HVM (amd64)  ami-08cb873dbde1e551a
    ap-northeast-2  HVM (arm64)  ami-03d8d6932f1b3ce0d
    ap-south-1      HVM (amd64)  ami-04157cdd1ed9216bb
    ap-south-1      HVM (arm64)  ami-0d6b4486fc2ec9454
    ap-southeast-1  HVM (amd64)  ami-029589b26d1fc2fa0
    ap-southeast-1  HVM (arm64)  ami-08c5f62c5f6b0e846
    ap-southeast-2  HVM (amd64)  ami-0a706f7140fbf0e29
    ap-southeast-2  HVM (arm64)  ami-0d633f7baa8571880
    ap-southeast-3  HVM (amd64)  ami-011966d4c7642e869
    ap-southeast-3  HVM (arm64)  ami-07624003aa858360f
    ca-central-1    HVM (amd64)  ami-06f8f862099c391e3
    ca-central-1    HVM (arm64)  ami-06de19d6527777dbe
    eu-central-1    HVM (amd64)  ami-0175ff2e90869b010
    eu-central-1    HVM (arm64)  ami-0c77c99a87e0a9385
    eu-north-1      HVM (amd64)  ami-0b37b9297fc421a84
    eu-north-1      HVM (arm64)  ami-0c2255dc8c0bbb98f
    eu-south-1      HVM (amd64)  ami-0aebb29ab68c7797b
    eu-south-1      HVM (arm64)  ami-082f8db69de6953f3
    eu-west-1       HVM (amd64)  ami-00a9918479a5be094
    eu-west-1       HVM (arm64)  ami-0f242c33d531a4d56
    eu-west-2       HVM (amd64)  ami-0bb30a9a0ca93299e
    eu-west-2       HVM (arm64)  ami-076ddd279a7d6a185
    eu-west-3       HVM (amd64)  ami-046052b765cb72de5
    eu-west-3       HVM (arm64)  ami-07b745071941ee74b
    sa-east-1       HVM (amd64)  ami-06f9af7a4a9833e5a
    sa-east-1       HVM (arm64)  ami-07c65b8c356ee7d18
    us-east-1       HVM (amd64)  ami-0749a12240ca137bf
    us-east-1       HVM (arm64)  ami-0093b00782fdb7e44
    us-east-2       HVM (amd64)  ami-06e9d85833e9f32ab
    us-east-2       HVM (arm64)  ami-0829309e12f6df4a9
    us-west-1       HVM (amd64)  ami-08595d699ad706fe9
    us-west-1       HVM (arm64)  ami-0e90f6e5ebbccb36f
    us-west-2       HVM (amd64)  ami-0007a09e5c891d51a
    us-west-2       HVM (arm64)  ami-0ce605082061bbb10

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4628.1.0.

    View as json feed: amd64 arm64
    EC2 Region      AMI Type     AMI ID
    af-south-1      HVM (amd64)  ami-0bd2a15470bef3b08
    af-south-1      HVM (arm64)  ami-020afd1452c558fa6
    ap-east-1       HVM (amd64)  ami-05703b25b3f9e1839
    ap-east-1       HVM (arm64)  ami-02071456390b35b2a
    ap-northeast-1  HVM (amd64)  ami-0e4efcee61361d2b4
    ap-northeast-1  HVM (arm64)  ami-05dc703588b4cd426
    ap-northeast-2  HVM (amd64)  ami-0ed3d08061ed525d5
    ap-northeast-2  HVM (arm64)  ami-0b660f52db2e7ba39
    ap-south-1      HVM (amd64)  ami-0754fa8cfb42f9f81
    ap-south-1      HVM (arm64)  ami-0d9ceca9f9287dc27
    ap-southeast-1  HVM (amd64)  ami-06857b87bbab52670
    ap-southeast-1  HVM (arm64)  ami-0aa119da72c37b5a4
    ap-southeast-2  HVM (amd64)  ami-00df6c10f82fad8e6
    ap-southeast-2  HVM (arm64)  ami-0fb918e758ded2af1
    ap-southeast-3  HVM (amd64)  ami-03400c48970ff28e1
    ap-southeast-3  HVM (arm64)  ami-0cfd5f3402ea9cff2
    ca-central-1    HVM (amd64)  ami-03cdf7837fc7a9b10
    ca-central-1    HVM (arm64)  ami-0ea2d4b9c86b67a8c
    eu-central-1    HVM (amd64)  ami-0d330b64bc0ffc7bc
    eu-central-1    HVM (arm64)  ami-03442c4fe5a90fc34
    eu-north-1      HVM (amd64)  ami-0b3e01823efb03246
    eu-north-1      HVM (arm64)  ami-0c3b1a00f783172c2
    eu-south-1      HVM (amd64)  ami-04704378cff59b29d
    eu-south-1      HVM (arm64)  ami-00e92a521d4a5bad7
    eu-west-1       HVM (amd64)  ami-0862862ecbbebb3aa
    eu-west-1       HVM (arm64)  ami-0bebd1e4a40875cc1
    eu-west-2       HVM (amd64)  ami-0c86376f7e0f9dca7
    eu-west-2       HVM (arm64)  ami-04fab10c2b3a9df1a
    eu-west-3       HVM (amd64)  ami-0690ecd84549273c1
    eu-west-3       HVM (arm64)  ami-0b99726884198abbc
    sa-east-1       HVM (amd64)  ami-0b1c16e3cef6918c8
    sa-east-1       HVM (arm64)  ami-03f7b19b8eb5bf0e6
    us-east-1       HVM (amd64)  ami-090429fb75eb3a748
    us-east-1       HVM (arm64)  ami-0cc1f986d1d7948f7
    us-east-2       HVM (amd64)  ami-022e10d844b93cef2
    us-east-2       HVM (arm64)  ami-0b62e8bd87df8770f
    us-west-1       HVM (amd64)  ami-0770e9837f656b572
    us-west-1       HVM (arm64)  ami-0487381bb7cfcdeb7
    us-west-2       HVM (amd64)  ami-028e6a454300f96d7
    us-west-2       HVM (arm64)  ami-023a29291c270a74a

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4669.0.0.

    View as json feed: amd64 arm64
    EC2 Region      AMI Type     AMI ID
    af-south-1      HVM (amd64)  ami-0605b95eb6d242a3b
    af-south-1      HVM (arm64)  ami-0839e746e7dcf3233
    ap-east-1       HVM (amd64)  ami-0618d3b16533e8059
    ap-east-1       HVM (arm64)  ami-09cb5ab09d1400530
    ap-northeast-1  HVM (amd64)  ami-020a0c5e9408d388b
    ap-northeast-1  HVM (arm64)  ami-06d18afe66809c114
    ap-northeast-2  HVM (amd64)  ami-049fd36e11eba7b4b
    ap-northeast-2  HVM (arm64)  ami-032749144d6f8513f
    ap-south-1      HVM (amd64)  ami-086e62d0950b12ba6
    ap-south-1      HVM (arm64)  ami-06ec15c6cf0894f39
    ap-southeast-1  HVM (amd64)  ami-0b37549bc309558e9
    ap-southeast-1  HVM (arm64)  ami-0bfc4b3ff387bc22d
    ap-southeast-2  HVM (amd64)  ami-0cabd84ad43354b0e
    ap-southeast-2  HVM (arm64)  ami-032dbb104162c82f9
    ap-southeast-3  HVM (amd64)  ami-02c5ac1529c4b7713
    ap-southeast-3  HVM (arm64)  ami-026d7dc6f8c3cdba3
    ca-central-1    HVM (amd64)  ami-02ace989821add682
    ca-central-1    HVM (arm64)  ami-03f5123263d2ba08e
    eu-central-1    HVM (amd64)  ami-07504311456fd4e18
    eu-central-1    HVM (arm64)  ami-0e976ce847da02f77
    eu-north-1      HVM (amd64)  ami-076cf50badd42b021
    eu-north-1      HVM (arm64)  ami-03a0de127642be435
    eu-south-1      HVM (amd64)  ami-077605c51832af788
    eu-south-1      HVM (arm64)  ami-07d5687df16e954c0
    eu-west-1       HVM (amd64)  ami-0ca4ef3b99b153833
    eu-west-1       HVM (arm64)  ami-060f22fc16043a122
    eu-west-2       HVM (amd64)  ami-056788f2bbe2b4977
    eu-west-2       HVM (arm64)  ami-075171796d1932254
    eu-west-3       HVM (amd64)  ami-06bff03f3082e4c44
    eu-west-3       HVM (arm64)  ami-01f4d4f90845bcd08
    sa-east-1       HVM (amd64)  ami-00522bd412100e97e
    sa-east-1       HVM (arm64)  ami-09101b67e771da4cb
    us-east-1       HVM (amd64)  ami-06e5c19d425ba8fe3
    us-east-1       HVM (arm64)  ami-00e5abdaae213cf68
    us-east-2       HVM (amd64)  ami-0ed1edff346db8ca9
    us-east-2       HVM (arm64)  ami-0e3935a7619cd6a35
    us-west-1       HVM (amd64)  ami-042ebd37bc7394d22
    us-west-1       HVM (arm64)  ami-0879f6b06aa1da776
    us-west-2       HVM (amd64)  ami-08ee9cfad01bea7b9
    us-west-2       HVM (arm64)  ami-0bbf23b5440811fb2

    LTS release streams are maintained for an extended lifetime of 18 months. The yearly LTS streams have an overlap of 6 months. The current version is Flatcar Container Linux 4081.3.7.

    View as json feed: amd64 arm64
    EC2 Region      AMI Type     AMI ID
    af-south-1      HVM (amd64)  ami-00bde772472b08beb
    af-south-1      HVM (arm64)  ami-0e49bee00bc9e8ac4
    ap-east-1       HVM (amd64)  ami-0c1c2cdecf0eae1c1
    ap-east-1       HVM (arm64)  ami-09c0e91085a9e5cb1
    ap-northeast-1  HVM (amd64)  ami-02f8ef5b1e101a694
    ap-northeast-1  HVM (arm64)  ami-00b4755a8f082af4d
    ap-northeast-2  HVM (amd64)  ami-093e93a462e7bad25
    ap-northeast-2  HVM (arm64)  ami-03425aadf32e1d567
    ap-south-1      HVM (amd64)  ami-067c6fab2f9ac9265
    ap-south-1      HVM (arm64)  ami-0c437f511234770d7
    ap-southeast-1  HVM (amd64)  ami-05fdc190a1c53e0e1
    ap-southeast-1  HVM (arm64)  ami-0369732bbf7d00283
    ap-southeast-2  HVM (amd64)  ami-0bffe7305ae7e03d4
    ap-southeast-2  HVM (arm64)  ami-093fe1d0ba89cba03
    ap-southeast-3  HVM (amd64)  ami-011a5840197208be5
    ap-southeast-3  HVM (arm64)  ami-0c4e4d5d333059632
    ca-central-1    HVM (amd64)  ami-0332d68c9aad9072d
    ca-central-1    HVM (arm64)  ami-0255bf9dba3103632
    eu-central-1    HVM (amd64)  ami-004a962d9b75db8fa
    eu-central-1    HVM (arm64)  ami-094164bf9361aa0aa
    eu-north-1      HVM (amd64)  ami-08644419ce33265c6
    eu-north-1      HVM (arm64)  ami-05d8e27c6899aca22
    eu-south-1      HVM (amd64)  ami-08a7acc9ba672688f
    eu-south-1      HVM (arm64)  ami-0451f86c5064ec97f
    eu-west-1       HVM (amd64)  ami-03eb21068807067d6
    eu-west-1       HVM (arm64)  ami-0ee48705f5a6bf37b
    eu-west-2       HVM (amd64)  ami-0ad04c73680f4e622
    eu-west-2       HVM (arm64)  ami-0612dcb539422e4e7
    eu-west-3       HVM (amd64)  ami-0caf9249931005f8a
    eu-west-3       HVM (arm64)  ami-0e2bf1dcbe15b6269
    sa-east-1       HVM (amd64)  ami-0c0f1286ebdbe11be
    sa-east-1       HVM (arm64)  ami-07d4f3a6ef02ba008
    us-east-1       HVM (amd64)  ami-01dbacc5d7f623ef1
    us-east-1       HVM (arm64)  ami-0c2e9d254b450a823
    us-east-2       HVM (amd64)  ami-0036fb59b1a3d4e57
    us-east-2       HVM (arm64)  ami-0d83fbab14a0bf371
    us-west-1       HVM (amd64)  ami-0ae9d555b50216318
    us-west-1       HVM (arm64)  ami-085cc4f46e4cbd4e8
    us-west-2       HVM (amd64)  ami-06a3c3af393774fbd
    us-west-2       HVM (arm64)  ami-0d0ab6da4b92f0fd8

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
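
    For example, with the AWS CLI the Ignition JSON can be passed as user data at launch time. A sketch, with the AMI ID left as a placeholder to fill in from the tables above:

    # Launch an instance with the Ignition config as its user data;
    # the CLI base64-encodes the file contents for you.
    aws ec2 run-instances \
      --image-id <ami-id> \
      --instance-type t3.medium \
      --user-data file://ignition.json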

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
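
    Optionally, the result can be sanity-checked with the ignition-validate container image before use (a sketch; the trailing - tells it to read the config from stdin):

    docker run --rm -i quay.io/coreos/ignition-validate:release - < ignition.json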
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-06e5c19d425ba8fe3 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and pass the same “User Data” to each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step console instructions are below; a scriptable AWS CLI equivalent follows the list.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
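
    If you prefer the command line, roughly the same security group can be created with the AWS CLI. A sketch, assuming the default VPC (the returned group ID will differ in your account):

    # Create the group and capture its ID
    GROUP_ID=$(aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances" \
      --query GroupId --output text)
    # Allow SSH from anywhere
    aws ec2 authorize-security-group-ingress --group-id "$GROUP_ID" \
      --protocol tcp --port 22 --cidr 0.0.0.0/0
    # Allow the etcd ports only from members of the same group
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --group-id "$GROUP_ID" \
        --protocol tcp --port "$port" --source-group "$GROUP_ID"
    done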

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group. An AWS CLI equivalent is shown after the list.

    • Open the quick launch wizard to boot: Alpha ami-06e5c19d425ba8fe3 (amd64), Beta ami-090429fb75eb3a748 (amd64), or Stable ami-0749a12240ca137bf (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: choose a key of your choice; it will be added in addition to any keys in your Ignition config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
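
    The same launch can be scripted; a hedged AWS CLI equivalent of the wizard steps above (pick the AMI for your channel and region from the tables, and substitute your own key pair name):

    aws ec2 run-instances \
      --image-id ami-06e5c19d425ba8fe3 \
      --count 3 \
      --instance-type t3.medium \
      --key-name my-key \
      --security-groups flatcar-testing \
      --user-data file://ignition.json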

    Installation from a VMDK image

    One possible way to install is to import the generated VMDK Flatcar image as a snapshot. The image file is at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.
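
    A sketch of the upload and import steps with the AWS CLI (the bucket name is a placeholder):

    # Decompress the image and upload it to S3
    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-bucket/
    # Import the disk as an EBS snapshot
    aws ec2 import-snapshot --description "Flatcar VMDK" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=flatcar_production_ami_vmdk_image.vmdk}"
    # Poll the import task until a snapshot ID appears
    aws ec2 describe-import-snapshot-tasks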

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and generate an AMI from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as “Virtualization type”.
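
    The AMI creation can also be done from the command line with register-image; a sketch mirroring the console settings above (the snapshot ID is a placeholder):

    aws ec2 register-image \
      --name flatcar-from-vmdk \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}" \
      --ena-support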

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys[0]
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... me@mail.net"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).
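
    The printed addresses can be recalled at any time from the module’s outputs:

    terraform output ip-addresses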

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.