
Migrate my note flask app from ecs to pi

So to cut down some cost on hosting my tnote flask app - tnote.tdinvoke.net, I’m hoping to move it down to my pi4. But I don’t want to deal with port forwarding and SSL certificate setup, so enter Cloudflare Tunnel. It’s not perfectly safe since Cloudflare can see all traffic going to the exposed sites, but as these are just my lab projects, I think I should be fine.

I need to use my tdinvoke.net domain for the sites, so I had to migrate my r53 dns setup over to cloudflare.

  • Move all my DNS records to Cloudflare manually. I don’t have many, so it’s pretty painless. Note: all my alias records pointing to AWS CloudFront need to be created as CNAME records set to ‘DNS only’ on Cloudflare.
  • Point my registered domain name-servers to cloudflare name-servers.

Migration from ECS was not too bad since I just needed to spin up the containers on my pi.

Here’s an overview flow of the setup:

More information on Cloudflare Tunnel and how to set one up - here.
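For reference, a cloudflared tunnel is driven by a small config file on the pi; a minimal sketch might look like the below (the tunnel ID, credentials path and local port are placeholders/assumptions, not my actual values):

tunnel: <tunnel-id>
credentials-file: /home/pi/.cloudflared/<tunnel-id>.json

ingress:
  # route the public hostname to the flask container on the pi (port is an assumption)
  - hostname: tnote.tdinvoke.net
    service: http://localhost:8080
  # everything else gets a 404
  - service: http_status:404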

Flask Note app with aws, terraform and github action

This project is part of a mentoring program from my current work - Vanguard Au. Thanks Indika for the guidance and support through this project.

Please test out the app here: https://tnote.tdinvoke.net

Flask note app

Source: https://github.com/tduong10101/tnote/tree/env/tnote-dev-apse2/website

Simple flask note application that lets users sign up/in and create/delete notes. Thanks to Tech With Tim for the tutorial.

Changes from the tutorial

Moved the db out to a mysql instance

Set up .env variables:

import os
from dotenv import load_dotenv
....
# load variables from the .env file into the environment
load_dotenv()

SQL_USERNAME=os.getenv('SQL_USERNAME')
SQL_PASSWORD=os.getenv('SQL_PASSWORD')
SQL_HOST=os.getenv('SQL_HOST')
SQL_PORT=os.getenv('SQL_PORT')
DB_NAME=os.getenv('DB_NAME')

connection string:

url=f"mysql+pymysql://{SQL_USERNAME}:{SQL_PASSWORD}@{SQL_HOST}:{SQL_PORT}/{DB_NAME}"

Update the create_db function as below:

def create_db(url, app):
    # database_exists/create_database come from sqlalchemy_utils
    try:
        if not database_exists(url):
            create_database(url)
            with app.app_context():
                db.create_all()
                print('Created Database!')
    except Exception as e:
        if e.code != 'f405':
            raise e

Updated the password hashing method to use ‘scrypt’

new_user = User(email=email, first_name=first_name, password=generate_password_hash(password1, method='scrypt'))

Added Gunicorn Server

/home/python/.local/bin/gunicorn -w 2 -b 0.0.0.0:80 "website:create_app()"

Github Workflow configuration

Source: https://github.com/tduong10101/tnote/tree/env/tnote-dev-apse2/.github/workflows

Github - AWS OIDC configuration

Follow this doco to configure OIDC so GitHub Actions can access AWS resources.
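For context, the key piece is the IAM role’s trust policy, which trusts GitHub’s OIDC provider and limits it to this repo; a rough sketch (the account number and repo filter are illustrative):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::<aws_acc_num>:oidc-provider/token.actions.githubusercontent.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
                },
                "StringLike": {
                    "token.actions.githubusercontent.com:sub": "repo:tduong10101/tnote:*"
                }
            }
        }
    ]
}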

app

Utilise aws-actions/amazon-ecr-login coupled with AWS OIDC to configure the Docker registry.

- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::<aws_acc_num>:role/github-ecr-img-builder
    role-session-name: GitHub_to_AWS_via_FederatedOIDC
    aws-region: ap-southeast-2

- name: Login to Amazon ECR
  id: login-ecr
  uses: aws-actions/amazon-ecr-login@v1

This action can only be triggered manually.
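In GitHub Actions terms, “manual only” just means the workflow declares workflow_dispatch as its sole trigger; a minimal sketch:

on:
  workflow_dispatch: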

network

Source: https://github.com/tduong10101/tnote/blob/env/tnote-dev-apse2/.github/workflows/network.yml

This action covers AWS network resource management for the app. It can be triggered manually or via the push and PR flows.

Here are the trigger details:

Action                    Trigger
Atmos Terraform Plan      Manual, PR create
Atmos Terraform Apply     Manual, PR merge (Push)
Atmos Terraform Destroy   Manual

Auto triggers only apply on branches matching “env/*“.
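A rough sketch of how these triggers and the env/* branch filter could be declared (the actual workflow in the repo may differ):

on:
  workflow_dispatch:          # manual runs
  push:
    branches: [ "env/*" ]     # apply on PR merge / push
  pull_request:
    branches: [ "env/*" ]     # plan on PR create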

infra

Source: https://github.com/tduong10101/tnote/blob/env/tnote-dev-apse2/.github/workflows/infra.yml

This action creates the AWS ECS resources, DNS record and RDS MySQL DB.

Action                    Trigger
Atmos Terraform Plan      Manual, PR create
Atmos Terraform Apply     Manual
Atmos Terraform Destroy   Manual

Terraform - Atmos

Atmos solves the missing parameter-management piece across multiple stacks for Terraform.

name_pattern is set to {tenant}-{stage}-{environment}, for example: tnote-dev-apse2
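The pattern lives in atmos.yaml; a minimal excerpt (paths are assumptions based on the structure below):

stacks:
  base_path: "stacks"
  name_pattern: "{tenant}-{stage}-{environment}"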

Source: https://github.com/tduong10101/tnote/tree/env/tnote-dev-apse2/atmos-tf

Structure:

.
├── atmos.yaml
├── components
│   └── terraform
│       ├── infra
│       │   ├── _data.tf
│       │   ├── _provider.tf
│       │   ├── _vars.tf
│       │   └── main.tf
│       └── network
│           ├── _provider.tf
│           ├── _var.tf
│           ├── backend.tf.json
│           └── main.tf
└── stacks
    ├── tnote
    │   ├── _defaults.yaml
    │   ├── dev
    │   │   ├── _defaults.yaml
    │   │   └── ap-southeast-2
    │   │       ├── _defaults.yaml
    │   │       └── main.yaml
    │   ├── prod
    │   │   ├── _defaults.yaml
    │   │   └── us-east-1
    │   │       ├── _defaults.yaml
    │   │       └── main.yaml
    │   └── test
    └── workflows
        └── workflows-tnote.yaml

Issues encountered

Avoid service start deadlock when starting the ECS service from UserData

Symptom: the ECS service is in ‘inactive’ status and the start-service command gets stuck when run manually on the EC2 instance. The fix is to enable/start the service with --no-block so UserData doesn’t wait on it:

sudo systemctl enable --now --no-block ecs.service
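For context, a rough sketch of how this can sit at the end of the EC2 UserData (the cluster name is illustrative, not the repo’s actual value):

#!/bin/bash
# register the instance against the ECS cluster (cluster name is an assumption)
echo "ECS_CLUSTER=tnote-cluster" >> /etc/ecs/ecs.config
# enable/start the agent without blocking, avoiding the UserData deadlock
systemctl enable --now --no-block ecs.service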

Ensure RDS and ECS are in the same VPC

Remember to turn on ECS logging by adding the CloudWatch log group resource.

Error:

pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'terraform-20231118092121881600000001.czfqdh70wguv.ap-southeast-2.rds.amazonaws.com' (timed out)")

Don’t declare db_name in the RDS resource block

This is because the note app has its own db/table create function; if db_name is declared in Terraform, RDS creates an empty database without the required tables, which then results in the app failing to run.
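A minimal sketch of the RDS resource with db_name left out (the names and sizes here are illustrative, not the repo’s actual values):

resource "aws_db_instance" "tnote" {
  identifier          = "tnote-dev"
  engine              = "mysql"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = var.sql_username
  password            = var.sql_password
  skip_final_snapshot = true
  # db_name is intentionally omitted - the app's create_db function
  # creates the database and tables itself on first run
}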

Load secrets into Atmos Terraform using GitHub secrets and TF_VAR

Ensure sensitive is set to true for the variable. Use a GitHub secret and the TF_VAR_ prefix to load the secret into Atmos Terraform, e.g. TF_VAR_secret_name=${{ secrets.secret_name }}.
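A rough sketch of the wiring, assuming a hypothetical secret/variable named sql_password (the real names live in the repo):

# GitHub workflow excerpt - expose the secret as a TF_VAR for Atmos/Terraform
env:
  TF_VAR_sql_password: ${{ secrets.SQL_PASSWORD }}

# matching Terraform variable - declared with sensitive = true:
#   variable "sql_password" {
#     type      = string
#     sensitive = true
#   }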

Terraform and Github actions for Vrising hosting on AWS

It’s been a while since the last time I played V Rising, but I think this would be a good project for me to get my hands on setting up a CI/CD pipeline with Terraform and GitHub Actions (an upgraded version of my AWS Vrising hosting solution).

There are a few changes to the original solution. The first one is the use of the V Rising docker image (thanks to TrueOsiris), instead of manually installing the V Rising server on the EC2 instance. The docker container is started as part of the EC2 user data. Here’s the user data script.

The second change is the Terraform configuration turning all the manual setup processes into IaC. Note, on the EC2 instance resource, we have a ‘home_cdir_block’ variable referencing an input from a GitHub Actions secret, so only the IPs in ‘home_cdir_block’ can connect to our server. Another layer of protection is the server’s password in the user data script, which also gets its input from a GitHub secret variable.
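A rough sketch of how that variable can feed a security group rule (the resource names are illustrative; the ports are the game’s defaults):

variable "home_cdir_block" {
  type      = string
  sensitive = true
}

resource "aws_security_group_rule" "vrising_game_ports" {
  type              = "ingress"
  from_port         = 9876
  to_port           = 9877
  protocol          = "udp"
  cidr_blocks       = [var.home_cdir_block]
  security_group_id = aws_security_group.vrising.id   # illustrative reference
}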

Terraform resources then get deployed by GitHub Actions with OIDC configured to assume a role in AWS. The configuration process can be found here. The IAM role I set up for this project is attached with ‘AmazonEC2FullAccess’ and the below inline policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::<your-s3-bucket-name>",
                "arn:aws:s3:::<your-s3-bucket-name>/*"
            ]
        }
    ]
}

Oh, I forgot to mention, we also need an S3 bucket created to store the tfstate file, as stated in _provider.tf.
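That wiring is roughly the standard S3 backend block; a sketch (the key and region are assumptions):

terraform {
  backend "s3" {
    bucket = "<your-s3-bucket-name>"
    key    = "vrising/terraform.tfstate"
    region = "ap-southeast-2"
  }
}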

Below is an overview of the upgraded solution.

Github repo: https://github.com/tduong10101/Vrising-aws

Forza log streaming to Opensearch

In this project I attempted to get Forza logs displayed in ‘real time’ on AWS OpenSearch (poor man’s Splunk). Below is a quick overview of the log flow and access configurations.

Forza -> Pi -> Firehose data stream:

Setting up log streaming from Forza to the raspberry pi is quite straightforward. I forked jasperan’s forza-horizon-5-telemetry-listener repo and updated it with a delay functionality and also a function to send the log to an AWS Firehose data stream (forked repo). Then I just keep the python script running while I’m playing Forza.
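The sending side boils down to a small boto3 call plus the added delay; a simplified sketch, assuming the first hop is a Kinesis data stream named forza-telemetry (the name, region and payload shape are illustrative, not the forked repo’s exact code):

import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="ap-southeast-2")  # region is an assumption

def send_packet(packet: dict, delay_seconds: float = 0.5):
    """Push one decoded Forza telemetry packet to the data stream, then wait."""
    kinesis.put_record(
        StreamName="forza-telemetry",                      # hypothetical stream name
        Data=(json.dumps(packet) + "\n").encode("utf-8"),
        PartitionKey="forza",
    )
    time.sleep(delay_seconds)                              # the added delay functionality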

Opensearch/Cognito configuration:

Ok, this is the hard part; I spent most of the time on this. Firstly, to have Firehose stream data into OpenSearch, I need to somehow map the AWS access roles to OpenSearch roles. The ‘fine grain access’ option will not work; we can either use SAML to hook up to an IdP or use the in-house AWS Cognito. Since I don’t have an existing IdP, I had to set up a Cognito identity pool and user pool. From there I can give the admin OpenSearch role to the Cognito authenticated role, which is assigned to a user pool group.

Below are some screenshots of the OpenSearch cluster and Cognito. Also thanks to Soumil Shah; his video on setting up Cognito with OpenSearch helped a lot. Here’s the link.

opensearch configure

opensearch security configure

opensearch role

Note: all this can be bypassed if I choose to send the log straight to OpenSearch via the HTTPS REST API. But then it would be too easy ;)

Note 2: I’ve gone with t3.small.search, which is why I put real-time in quotation marks.

Firehose data stream -> firehose delivery stream -> Opensearch:

Setting up the delivery stream is not too hard. There’s a template for sending logs to OpenSearch. Just remember to give it the right access role.

Here is what it looks like on opensearch:

I’m too tired to build any dashboard for it. Also, the timestamp from the log didn’t get transformed into a ‘date’ type, so I’ll need to look into it another time.

Improvements:

  • docker setup to run the python listener/log stream script
  • maybe stream logs straight to opensearch for real-time logging? I feel insecure sending username/password with the payload though.
  • do this but with splunk? I’m sure the indexing performance would be much better. There’s already an addon for forza on splunk, but it’s not available on splunk cloud. The addon is where I got the idea for this project.

Hosting VRising on AWS

Quick Intro - “V Rising is a open-world survival game developed and published by Stunlock Studios that was released on May 17, 2022. Awaken as a vampire. Hunt for blood in nearby settlements to regain your strength and evade the scorching sun to survive. Raise your castle and thrive in an ever-changing open world full of mystery.” (vrising.fandom.com)

Hosting a dedicated server for this game is similar to how we set one up for Valheim (Hosting Valheim on AWS). We used the same server tier as Valheim; below are the details.

The VRising server is not officially available for Linux; luckily, we found this guide for setting it up on CentOS. Hence we used a CentOS AMI from “community AMIs”.

- AMI: ap-southeast-2 image for x86_64 CentOS_7
- instance-type: t3.medium   
- vpc: we're using default vpc created by aws on our account.
- storage: 8gb.
- security group with the following rules:
    - Custom UDP Rule: UDP 9876 open to our pc ips
    - Custom UDP Rule: UDP 9877 open to our pc ips
    - ssh: TCP 22 open to our pc ips

We are using the same cloudwatch alarm as Valheim to turn off the server when there’s no activity.

This time we also added a new feature: using Discord bot chat to turn the AWS server on/off.

This Discord bot is hosted on one of our raspberry pis. Here is the bot repo, and below is the flow chart of the whole setup.
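The bot is essentially a thin wrapper around boto3’s EC2 start/stop calls; a simplified sketch of the idea, assuming hypothetical command names, region and environment variables (the real code is in the bot repo):

import os
import boto3
import discord
from discord.ext import commands

ec2 = boto3.client("ec2", region_name="ap-southeast-2")    # region is an assumption
INSTANCE_ID = os.environ["VRISING_INSTANCE_ID"]            # hypothetical env var

intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
async def serveron(ctx):
    """!serveron - start the V Rising EC2 instance."""
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    await ctx.send("Starting the V Rising server...")

@bot.command()
async def serveroff(ctx):
    """!serveroff - stop the V Rising EC2 instance."""
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    await ctx.send("Stopping the V Rising server...")

bot.run(os.environ["DISCORD_BOT_TOKEN"])                   # hypothetical env var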

Improvements:

  • Maybe the CloudWatch alarm can be paired with a lambda function, which would send a notification to our Discord channel letting us know that the server got turned off by CloudWatch.

  • We considered using an Elastic IP so the server can retain its IP after it gets turned off, but we decided not to, as we wanted to save some cost.

It has been a while since we got together and worked on something this fun. Hopefully, we’ll find more ideas and interesting projects to work on together. Thanks for reading, so long until next time.

Hosting Valheim on AWS

Quick introduction for Valheim: it’s an indie game developed by IronGate, a viking survival game where players can build/craft like Minecraft, fight like Dark Souls and explore a beautiful world like Zelda. The game is a huge success with 5 million players; more information can be found at the Valheim official site.

Below are the steps I took to setup a dedicated server on aws to host valheim:

  1. Spin up an EC2 instance: the game ran pretty smoothly with a tiny bit of latency. Below are the instance details:

    • AMI: Ubuntu Server 20.04 LTS (HVM)
    • instance-type: t3a.medium. This is the cheapest we can get. Unfortunately, Valheim does not support 64-bit Arm, so we can’t use the t4g instance type.
    • vpc: we’re using default vpc created by aws on our account.
    • storage: 8gb, the game only requires less than 2gb.
    • security group with the following rules:
      • Custom TCP Rule: TCP 2456 - 2458 open to our pc ips
      • Custom UDP Rule: UDP 2456 - 2458 open to our pc ips
      • ssh: TCP 22 open to our pc ips
  2. Install the Valheim server following this git repo created by Nimdy: Dedicated_Valheim_Server_Script

  3. Setup CloudWatch to monitor “Network packets out (count)” and stop the instance when it’s not in use after 25 minutes. The Valheim server saves the world every 20 minutes, so this ensures we have the game saved whenever we log off (a scripted sketch of this alarm follows the steps below):

    • Threshold type: Static
    • Whenever NetworkPacketsOut is: Lower/Equal <= threshold
    • than: 250
    • period: 5 minutes
    • Datapoint to alarm: 5 out of 5
    • treat missing data as missing
  4. Optional: migrate an existing world from the local computer to the Valheim server. Copy the following files from the below source to the Valheim server world location: .fwl, .fwl.old, .db. I’m using FileZilla to transfer the files to the EC2 instance.

    • source: C:\Users\YOURNAME\AppData\LocalLow\IronGate\Valheim\worlds
    • valheim server world location: /home/steam/.config/unity3d/IronGate/Valheim/worlds
  5. Run the below script to start the instance and game (the AWS PowerShell module is required on the local computer):

    Import-Module AWS.Tools.EC2

    $steam_exe = <steam_exe_location>
    $instance_id = <ec2_valheim_instance_id>
    Set-DefaultAWSRegion -Region <ec2_valheim_instance_region>

    $instance_status = Get-EC2InstanceStatus -InstanceId $instance_id
    if ($instance_status -eq $null){
        Start-EC2Instance -InstanceId $instance_id
        do {
            $instance = (Get-EC2Instance -InstanceId $instance_id).Instances
            Start-Sleep -Seconds 10
        } while ($instance.PublicIpAddress -eq $null)
    } else {
        $instance = (Get-EC2Instance -InstanceId $instance_id).Instances
    }

    $server_ip = $instance.PublicIpAddress

    while ($instance_status.Status.status -ne "ok"){
        Start-Sleep -Seconds 10
        $instance_status = Get-EC2InstanceStatus -InstanceId $instance_id
        $instance_status.Status.status
    }
    if ($instance_status.Status.status -eq "ok"){
        & $steam_exe -applaunch 892970 +connect ${server_ip}:2456
    }
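For reference, the alarm described in step 3 can also be created from PowerShell; a rough sketch (the alarm name, statistic and region-specific stop action are assumptions):

# sketch: stop the Valheim instance when network traffic stays low
Import-Module AWS.Tools.CloudWatch

$dimension = New-Object Amazon.CloudWatch.Model.Dimension
$dimension.Name = "InstanceId"
$dimension.Value = $instance_id

Write-CWMetricAlarm -AlarmName "valheim-idle-stop" `
    -MetricName "NetworkPacketsOut" -Namespace "AWS/EC2" `
    -Statistic Average -Period 300 -Threshold 250 `
    -ComparisonOperator LessThanOrEqualToThreshold -EvaluationPeriods 5 `
    -Dimension $dimension `
    -AlarmAction "arn:aws:automate:ap-southeast-2:ec2:stop"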

We’ve had this setup running fine for the last 2-3 weeks, and it’s costing us around $1.8 USD. Pretty happy with it. As a next improvement, I guess I could put together a Terraform config for this or, if possible, have CloudWatch monitor Valheim’s log instead of the network packets out count.

Terraform-AWS serverless blog

I’m learning Terraform at the moment and thought this could be a good hands-on side project for me. The provided Terraform code will spin up a GitHub repo, a CodeBuild project and an S3 bucket to host a static blog (blue box in the flow chart above). I figure people might not want to use CloudFront or Route 53 as they are not free-tier services, so I left them out.

To spin this up, we will need the below prerequisites:

Once all the prerequisites are setup, follow the steps below.

  1. Open cmd/powershell and run the following command to clone the terraform and buildspec files:
git clone https://github.com/tduong10101/serverless-blog-terra.git
  2. Update serverless-blog-terra/variable.tfvars with your github token and the site name that you would like to set up
  3. Run the following commands
cd serverless-blog-terra
terraform init
terraform apply -var-file variable.tfvars
  4. Review the resources and put in “yes” to approve terraform to spin them up.
  5. Grab the outputs and save them somewhere; we’ll use them in later steps.
  6. Navigate to the parent folder of serverless-blog-terra
cd ..
  7. Create a new folder and give it the same name as the git repo (it doesn’t matter if the name is not the same, it’s just easier to manage), cd to the new folder and run the hexo init command

    mkdir <new folder>
    cd .\<new folder>
    hexo init
  8. Copy the buildspec.yml file from the serverless-blog-terra folder to this new folder

  9. Update the buildspec.yml with the s3:// link from step 5

  10. Init Git and set up the git remote with the below commands. Insert your git repo url from step 5.

git init
git add *
git commit -m "init"
git remote add origin "<your-git-url-from-step-5>"
git push -u origin master
  11. Wait for CodeBuild to finish updating the S3 bucket. Log on to the AWS console to confirm.
  12. Open the website_endpoint url from step 5 and enjoy your serverless blog.

Visit Hexo for instructions on how to create posts, change themes, add plugins, etc.

Remove the blog:

  1. If you don’t like the new blog and want to clean up the aws/git resources, run the below command:
terraform destroy -var-file variable.tfvars
  2. Once terraform finishes cleaning up the resources, the rest of the folders can be removed from the local computer.

Hosting a simple Code Editor on S3

I got this old code editor project sitting in github without much description - repo link. So I thought why not try to host it on S3 so I could showcase it in the repo.

Also, it’s good practice to brush up my knowledge on some of the AWS services (S3, CloudFront, Route 53). After almost an hour, I got the site up, so it’s not too bad. Below are the steps that I took.

  1. Create an S3 bucket and upload my code to this new bucket - ceditor.tdinvoke.net (CLI equivalents for these first steps are sketched after this list).

  2. Enable “Static website hosting” on the bucket

  3. Create a web CloudFront distribution with the following settings (the rest are left as default)

    1. Origin Domain Name: the endpoint url from the S3 ceditor.tdinvoke.net ‘Static website hosting’ settings
    2. Alternate Domain Names (CNAMEs): codeplayer.tdinvoke.net
    3. Viewer Protocol Policy: Redirect HTTP to HTTPS
    4. SSL Certificate: Custom SSL Certificate - reference my existing SSL certificate
  4. Create a new A record in Route 53 and point it to the new CloudFront distribution
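For what it’s worth, steps 1-2 have straightforward CLI equivalents; a quick sketch (the local folder name is hypothetical):

aws s3 mb s3://ceditor.tdinvoke.net
aws s3 sync ./code-editor s3://ceditor.tdinvoke.net
aws s3 website s3://ceditor.tdinvoke.net --index-document index.html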

Aaand here is the site: https://codeplayer.tdinvoke.net/

Next I need to go back to the repo and write up a readme.md for it.

Get AWS IAM credentials report script

Quick powershell script to generate an AWS IAM credentials report and save it in CSV format to a local location.

Import-Module AWSPowerShell
$reportLocation = "C:\report"
if (!(Test-Path($reportLocation))){
    New-Item -ItemType Directory -Path $reportLocation
}
$date = Get-Date -Format dd-MM-yy-hh-mm-ss
$reportName = "aws-credentials-report-$date.csv"
$reportPath = Join-Path -Path $reportLocation -ChildPath $reportName
# request iam credential report to be generated
do {
    $result = Request-IAMCredentialReport
    Start-Sleep -Seconds 10
} while ($result.State.Value -notmatch "COMPLETE")
# get iam report
$report = Get-IAMCredentialReport -AsTextArray
# convert to powershell object
$report = $report|ConvertFrom-Csv
# export to set location
$report | Export-Csv -Path $reportPath -NoTypeInformation

How to verify google search with route53

Just recently got this site on Google Search; I totally forgot about it when I created the site.
The process is quite easy; following the instructions at this link should cover the task.
It might take from 10 minutes to 5 hours for the TXT record to propagate, so be patient!
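For reference, the verification is just a TXT record on the apex domain, roughly of this shape (token elided):

Record name: tdinvoke.net
Record type: TXT
Value:       "google-site-verification=<token-from-search-console>"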