Before deploying the infrastructure using Terraform, ensure that Terraform and AWS CLI are installed on your local machine. Also, configure AWS CLI with your AWS credentials.
Follow the installation instructions for Terraform based on your operating system:
Follow the installation instructions for AWS CLI based on your operating system:
After installing the AWS CLI, configure it with your AWS credentials by running the aws configure command and providing your Access Key ID, Secret Access Key, region, and output format.
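For example, aws configure prompts for each value in turn (the values below are placeholders, not working credentials):
aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: us-east-2
# Default output format [None]: json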
This Terraform template sets up a simple cluster. Modify the mgmt_nodes and storage_nodes variables in variables.tf as needed.
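A minimal sketch of what these variable blocks can look like in variables.tf (the types, defaults, and descriptions below are illustrative assumptions, not the template's actual definitions):
variable "mgmt_nodes" {
  description = "Number of management nodes"   # assumed description
  type        = number
  default     = 1                               # assumed default
}

variable "storage_nodes" {
  description = "Number of storage nodes"       # assumed description
  type        = number
  default     = 3                               # assumed default
}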
# Remote state backend settings; adjust these to your own S3 bucket and DynamoDB lock table
TFSTATE_BUCKET=simplyblock-terraform-state-bucket
TFSTATE_KEY=csi
TFSTATE_REGION=us-east-2
TFSTATE_DYNAMODB_TABLE=terraform-up-and-running-locks
terraform init -reconfigure \
-backend-config="bucket=${TFSTATE_BUCKET}" \
-backend-config="key=${TFSTATE_KEY}" \
-backend-config="region=${TFSTATE_REGION}" \
-backend-config="dynamodb_table=${TFSTATE_DYNAMODB_TABLE}" \
-backend-config="encrypt=true"
terraform workspace select -or-create <workspace_name>
terraform plan
Warning: Do not specify -var region during terraform apply; instead, update the region default value in variables.tf to avoid redundant resources.
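For example, the region default can be changed directly in variables.tf instead of being passed on the command line (a sketch; the actual variable block in the template may differ):
variable "region" {
  type    = string
  default = "us-east-2"   # set your target region here rather than via -var region
}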
# Basic deployment: one management node and three storage nodes
terraform apply -var mgmt_nodes=1 -var storage_nodes=3 --auto-approve
# Deploy with EKS enabled
terraform apply -var mgmt_nodes=1 -var storage_nodes=3 -var enable_eks=1 --auto-approve
# Deploy into a specific availability zone
terraform apply -var mgmt_nodes=1 -var storage_nodes=3 -var az=us-east-2b --auto-approve
# Set the CPU architecture and instance type for the extra nodes
terraform apply -var mgmt_nodes=1 -var storage_nodes=3 -var extra_nodes_arch=arm64 \
-var extra_nodes_instance_type="m6gd.xlarge" --auto-approve
# Set the instance types for the management and storage nodes
terraform apply -var mgmt_nodes=1 -var storage_nodes=3 \
-var mgmt_nodes_instance_type="m5.large" -var storage_nodes_instance_type="m5.large" --auto-approve
# Set the number of volumes per storage node
terraform apply -var mgmt_nodes=1 -var storage_nodes=3 -var volumes_per_storage_nodes=2 --auto-approve
# -var storage_nodes_ebs_size1=2 for Journal Manager
# -var storage_nodes_ebs_size2=50 for Storage node
terraform apply -var mgmt_nodes=1 -var storage_nodes=3 -var storage_nodes_ebs_size1=2 \
-var storage_nodes_ebs_size2=50 --auto-approve
# Set the number of hugepages
terraform apply -var mgmt_nodes=1 -var storage_nodes=3 -var nr_hugepages=2048 --auto-approve
# Apply using a variable definitions file
terraform apply -var-file="dev.tfvars" --auto-approve
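A dev.tfvars file is simply a set of variable assignments, one per line; for example (a sketch reusing variables shown above, with placeholder values):
mgmt_nodes                  = 1
storage_nodes               = 3
mgmt_nodes_instance_type    = "m5.large"
storage_nodes_instance_type = "m5.large"
nr_hugepages                = 2048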
# Add a secondary storage node and set its instance type
terraform apply -var mgmt_nodes=1 -var storage_nodes=3 \
-var sec_storage_nodes=1 -var sec_storage_nodes_instance_type="m5.large" --auto-approve
terraform output -json > outputs.json
# The bootstrap-cluster.sh script creates the SSH key in the `.ssh` directory under the home directory
chmod +x ./bootstrap-cluster.sh
./bootstrap-cluster.sh
./bootstrap-cluster.sh --help
./bootstrap-cluster.sh --max-lvol 10 --max-snap 10 --max-prov 150g
./bootstrap-cluster.sh --log-del-interval 30m --metrics-retention-period 2h --contact-point <slack webhook>
# Deploy the storage nodes on Kubernetes
terraform apply -var mgmt_nodes=1 -var storage_nodes=3 -var snode_deploy_on_k8s="true" --auto-approve
./bootstrap-cluster.sh --k8s-snode
./bootstrap-k3s.sh --k8s-snode
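# Example contents of local.env, read by shutdown-restart.sh below (assumed layout)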
API_INVOKE_URL=https://x8dg1t0y1k.execute-api.us-east-2.amazonaws.com
CLUSTER_ID=10b8b609-7b28-4797-a3a1-0a64fed1fad2
CLUSTER_SECRET=I7U9C0daZ64RsxmNG4NK
export $(xargs <local.env) && ./shutdown-restart.sh shutdown
export $(xargs <local.env) && ./shutdown-restart.sh restart
# Scale the management and storage node counts down to zero
terraform apply -var mgmt_nodes=0 -var storage_nodes=0 --auto-approve
Alternatively, you can destroy all of the created resources:
terraform destroy --auto-approve
Key pair file name: simplyblock-us-east-1.pem
Use this command to SSH into the management node or storage nodes in private subnets:
ssh -i ~/.ssh/simplyblock-us-east-1.pem -o ProxyCommand="ssh -i ~/.ssh/simplyblock-us-east-1.pem -W %h:%p ec2-user@<Bastion-Public-IP>" ec2-user@<Management-Node-Private-IP or Storage-Node-Private-IP>
Please make sure that the AWS Session Manager plugin is installed, then start a session by running:
aws ssm start-session --target i-040f2ed69d42bcabc