Things I tend to forget

Quick ElasticSearch Index Cleanup

There are many solutions using Curator and other libraries. I'll try to add links here later (if I remember).

However, most of the time all that's needed is a quick curl call:

curl -X DELETE "https://${ES_ENDPOINT}/${INDEX_NAME}"

Get the ES endpoint from the domain overview.

List the index names using:

curl "https://${ES_ENDPOINT}/_cat/indices"

So a quick cleanup session for everything that matches a grep looks like this:

curl "https://${ES_ENDPOINT}/_cat/indices" | grep 'something' | awk '{print $3}' \
  | while read INDEX_NAME; do
      echo "Deleting ${INDEX_NAME}"
      curl -X DELETE "https://${ES_ENDPOINT}/${INDEX_NAME}"
    done

My Ansible AWS EC2 Dynamic Inventory

Start with the Ansible configuration. This can be set in /etc/ansible/ansible.cfg, ~/.ansible.cfg (in your home directory), or ansible.cfg (in the current directory).

My suggestion is to use one of the first two (i.e. /etc/ansible/ansible.cfg or ~/.ansible.cfg) if you're going to be managing instances from your machine. Update the configuration as needed:

[defaults]
inventory = ./ansible_plugins
host_key_checking = False
pipelining = True
log_path = /var/log/ansible

[inventory]
enable_plugins = aws_ec2

You may need other plugins; this one is for aws_ec2. In the /etc/ansible/ansible_plugins directory, create the *_aws_ec2.yml configuration file for your inventory (the file name must end in aws_ec2.yml for the plugin to pick it up):

# /etc/ansible/ansible_plugins/testing_aws_ec2.yml
plugin: aws_ec2
aws_profile: testing
regions:
  - us-east-1
  - us-east-2
filters:
  tag:Team: testing
  instance-state-name: running
hostnames:
  - instance-id
  - dns-name
keyed_groups:
  - prefix: team
    key: tags['Team']

I'm filtering using tag:Team == testing and showing only running instances.

I'm also using the instance-id and dns-name attributes as hostnames.

And I'm using tags['Team'] as a grouping key, prefixed with team.

So now, I can do the following from any directory (since my configuration is global in /etc/ansible)

$ ansible-inventory --list --yaml
all:
  children:
    aws_ec2:
      hosts:
        i-xyxyxyxyxyyxyxyy:
          ami_launch_index: 0
          architecture: x86_64
          block_device_mappings:
          - device_name: /dev/sda1
            ebs:
              attach_time: 2020-08-10 15:20:58+00:00
              delete_on_termination: true
              status: attached
              volume_id: vol-xxxxxxxxxxxxxx
    team_testing:
      hosts:
        i-xyxyxyxyxyyxyxyy: {}
        i-xyxyxy2321yxyxyy: {}
        i-xyxyxyxyxy89yxyy: {}
        i-xyxy1210xyyxyxyy: {}
        i-xyxy999999yxyxyy: {}
        i-xyxyxy44xyyxyxyy: {}
        i-xyx2323yxyyxyxyy: {}
        i-xyxyxyxyxy9977yy: {}
    ungrouped: {}

I can also use the team_testing group or an individual instance id in my Ansible hosts: entries.

Get daily cost alerts on AWS

So I wanted a better alarm for when AWS hits us with unexpected costs. It's better to find out something is wrong quickly, rather than suffer hundreds of dollars in costs for something you don't really need or want.

The AWS-provided alarm checks for hikes on a monthly basis. Here's the doc they published. So that's an alarm that sounds when your estimated bill is going to be higher than the budgeted amount, or what you had in mind in the first place. Honestly, not very useful in our case; by then it's just too late.

The only alternative I found was creating a daily check that compares yesterday's costs against a max_amount threshold you set. Let's say you want your daily bill not to exceed US$5.

For ease of use and maintainability, I’m using a lambda function triggered by a cron (EventBridge rule) for the daily checks. And I’m sending the Alarm using an SNS topic, this way I can subscribe to it by email, or send it to our Mattermost channel, etc.

Here’s the code:

import os
import json
import boto3
from datetime import datetime, timedelta

def lambda_handler(event, context):
    yesterday   = datetime.strftime( - timedelta(1), '%Y-%m-%d')
    twodaysago  = datetime.strftime( - timedelta(2), '%Y-%m-%d')
    cost_metric = os.environ.get('cost_metric')       # e.g. UnblendedCost
    max_amount  = float(os.environ.get('max_amount'))
    ce  = boto3.client('ce')
    sns = boto3.client('sns')
    result = ce.get_cost_and_usage(
        TimePeriod={'Start': twodaysago, 'End': yesterday},
        Granularity='DAILY',
        Metrics=[cost_metric]
    )
    total_amount = float(result['ResultsByTime'][0]['Total'][cost_metric]['Amount'])
    if total_amount > max_amount:
        # create_topic is idempotent: it returns the existing topic if it's there
        sns_topic = sns.create_topic(Name='BillingAlert')
        sns.publish(
            TopicArn=sns_topic['TopicArn'],
            Message='Total cost "{} USD" exceeded max_amount rate: {}'.format(total_amount, max_amount)
        )
    return {
        'statusCode': 200,
        'body': json.dumps('cost check: {}'.format(total_amount))
    }
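
For reference, the one-day window the function queries can be checked locally. Cost Explorer treats the End date as exclusive, so the Start/End pair above covers exactly one day (two days ago):

```python
from datetime import datetime, timedelta

def cost_window(today):
    """The Start/End pair the Lambda queries; End is exclusive,
    so the window covers the single day two days before `today`."""
    start = datetime.strftime(today - timedelta(2), '%Y-%m-%d')
    end = datetime.strftime(today - timedelta(1), '%Y-%m-%d')
    return start, end

print(cost_window(datetime(2020, 8, 10)))  # ('2020-08-08', '2020-08-09')
```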

Note that you will need to add a couple of environment variables to Lambda: cost_metric and max_amount
And the following permissions to the role used by the lambda function: ce:GetCostAndUsage, sns:Publish and sns:CreateTopic

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ce:GetCostAndUsage",
            "Resource": "*"
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
            "Resource": [
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:*:*"

After that's set up, go to your SNS topic (created by the Lambda function if it doesn't exist) and subscribe to it. There you go: daily checks and an alarm if the bill is higher than expected.

How can I copy S3 objects from another AWS account?


Well, you can do it manually following the notes in the AWS KB article linked above. I'm always confused by those and have to read them a few times before I can apply the steps correctly. Recently, however, I've been going the other way around: whenever there are steps to follow, I try to translate them into an Ansible playbook.

Here’s the playbook for syncing files between S3 buckets in different accounts:

# copy-bucket.yml
- hosts: localhost
  vars:
    source_bucket: my-source-bucket
    dest_bucket: my-dest-bucket
    dest_user_arn: arn:aws:iam::ACCOUNTID:user/USERNAME
    dest_user_name: USERNAME
    source_profile: src_profile
    dest_profile: dest_profile
  tasks:
    - name: Attach bucket policy to source bucket
      s3_bucket:
        name: "{{ source_bucket }}"
        policy: "{{ lookup('template','bucket-policy.json.j2') }}"
        profile: "{{ source_profile }}"

    - name: Attach an IAM policy to a user in dest account
      iam_policy:
        iam_type: user
        iam_name: "{{ dest_user_name }}"
        policy_name: "s3_move_access"
        state: present
        policy_json: "{{ lookup( 'template', 'user-policy.json.j2') }}"
        profile: "{{ dest_profile }}"
      register: user_policy

    - name: Sync the files
      shell: aws s3 sync s3://{{ source_bucket }}/ s3://{{ dest_bucket }}/ --profile {{ dest_profile }}

You will also need the following JSON templates.

user-policy.json.j2 gives the destination user read access on the source bucket and write access on the destination:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::{{ source_bucket }}",
                "arn:aws:s3:::{{ source_bucket }}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::{{ dest_bucket }}",
                "arn:aws:s3:::{{ dest_bucket }}/*"
            ]
        }
    ]
}

bucket-policy.json.j2 delegates access on the source bucket to the destination user:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateS3Access",
            "Effect": "Allow",
            "Principal": {"AWS": "{{ dest_user_arn }}"},
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::{{ source_bucket }}/*",
                "arn:aws:s3:::{{ source_bucket }}"
            ]
        }
    ]
}

Run: ansible-playbook -v copy-bucket.yml

Make sure the profile names are set up correctly on your machine, and that the IAM user exists.
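
If you want to eyeball the rendered source-bucket policy before running the play, here's the same document built in plain Python (the account id and user name below are made up):

```python
import json

def bucket_policy(source_bucket, dest_user_arn):
    """Python equivalent of bucket-policy.json.j2, for inspection only."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DelegateS3Access",
            "Effect": "Allow",
            "Principal": {"AWS": dest_user_arn},
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::{}/*".format(source_bucket),
                "arn:aws:s3:::{}".format(source_bucket),
            ],
        }],
    }

print(json.dumps(bucket_policy("my-source-bucket",
                               "arn:aws:iam::111111111111:user/mover"), indent=4))
```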

ChatOps with Mattermost and AWS Lambda

I've been working towards making things simpler when managing distributed resources at work. And since we spend most of our day in the chat room (was Slack, now Mattermost), I thought it best to get started with ChatOps.

It's just a fancy word for doing stuff right from the chat window. And there's so much one can do, especially with simple slash commands.

Here’s a lambda function I setup yesterday for invalidating CloudFront distributions.

from time import time
import boto3

import json
import os
import re

EXPECTED_TOKEN = os.environ['mmToken']
ALLOWED_USERS = re.split('[, ]', os.environ['allowedUsers'])
DISTRIBUTIONS = {
    'site-name': 'DISTRIBUTIONID',
}

def parse_command_text(command_text):
    pattern = r"({})\s+(.*)".format('|'.join(DISTRIBUTIONS.keys()))
    m = re.match(pattern, command_text)
    if m:
        return {'site':, 'path':}
    return False

def lambda_handler(event, context):
    # Parse the request
    request_data = event.get("queryStringParameters")
    if not request_data:
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Use GET for setting up mattermost slash command" }'
        }

    # Check the token matches.
    if request_data.get("token", "") != EXPECTED_TOKEN:
        print('Wrong Token!')
        return {
            "statusCode": 401,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Mattermost token does not match" }'
        }

    # Check the user is allowed to run the command
    if request_data.get("user_name", "") not in ALLOWED_USERS:
        print('Wrong User! {} not in {}'.format(request_data['user_name'], ALLOWED_USERS))
        return {
            "statusCode": 401,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "User not allowed to perform action" }'
        }

    # parse the command
    command_text = request_data.get("text", "")
    if not command_text:
        print('Nothing to do, bailing out')
        return {
            "statusCode": 404,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "No command text sent" }'
        }

    parts = parse_command_text(command_text)
    if not parts:
        print('Bad formatting - command: {}'.format(command_text))
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Wrong pattern" }'
        }

    # Do the actual work
    cf_client = boto3.client('cloudfront')

    # Invalidate
    boto_response = cf_client.create_invalidation(
        DistributionId=DISTRIBUTIONS[parts['site']],
        InvalidationBatch={
            'Paths': {
                'Quantity': len(parts['path']),
                'Items': parts['path']
            },
            'CallerReference': str(time()).replace(".", "")
        }
    )

    # Build the response message text.
    text = """##### Executing invalidation
| Key | Info |
| --- | ---- |
| Site | {} |
| Path | {} |
| ID | {} |
| Status | {} |""".format(
        parts['site'],
        ' '.join(parts['path']),
        boto_response['Invalidation']['Id'],
        boto_response['Invalidation']['Status']
    )

    # Build the response object.
    response = {
        "response_type": "in_channel",
        "text": text,
    }

    # Return the response as JSON
    return {
        "body": json.dumps(response),
        "headers": {"Content-Type": "application/json"},
        "statusCode": 200,
    }

Note that you need to hook that up with an API Gateway in AWS. Once that’s done, you will have a URL endpoint ready for deployment.

Next, I created the slash command in Mattermost with the following:

[slash command configuration screenshot]

That’s pretty much it. Rinse and repeat for a different command, different usage.

Next on my list is to have more interaction with the user in Mattermost.
Weekend project, yay!

AWS Lambda Function Code

A quick snippet to download the function code:

wget -O "$(aws lambda get-function --function-name MyFunctionName --query 'Code.Location' --output text)"

And another to update Lambda with the latest (assuming your deployment archive is named and your dependencies live in package/):

cd package
zip -r9 ../ .
cd ..
zip -g
aws lambda update-function-code --function-name MyFunctionName --zip-file fileb://

MFA from command line

MFA, 2FA, 2 step validation, etc. are everywhere these days. And it’s a good thing.

The problem with using the phone to get the authentication code is that you need to have it handy at all times (at least whenever you want to log in), and that you have to read the code and then type it in (too many steps).

One possible alternative is the command line tool oathtool.

Here’s my snippet, I added the following line in my .bashrc

function mfa () { oathtool --base32 --totp "$(cat ~/.mfa/$1.mfa)" | xclip -sel clip ;}

Some preparation work:

sudo apt install oathtool xclip
mkdir ~/.mfa

If you want to use the same account with both a phone-based MFA generator and the shell, set them up at the same time: use the generated secret string to set up the account in Google Authenticator (as an example), and then add it to ~/.mfa/account_name.mfa.

The use of xclip automatically copies the 6 digit authentication code to the clipboard. You can go ahead and paste it.

The above setup works in Ubuntu. Didn’t try it on other systems.

VaultPress SSH/SFTP access for WordPress site behind AWS load balancer

I recently worked on migrating a large WordPress site from a dedicated server to AWS following the reference architecture as outlined in

One of the upsides of this architecture is that the site sits entirely behind the load balancer, and the instances running the web server are never accessible from the internet (not even via SSH or HTTP). So any updates to the site need to be deployed using Systems Manager. I will post about that later. However, due to this restriction, VaultPress can only access the site (for backups) over HTTPS, which causes extra load on the web instances.

To fix that, we need to give VaultPress SSH access directly to the web instances (at least to one of them). And the only way in via SSH is through a bastion instance.

On this bastion instance, I installed Nginx with the following configuration in /etc/nginx/nginx.conf:

stream {
        server {
                listen 1234;
                proxy_pass;
        }
}

where is the internal IP of one of the web instances behind the load balancer.

At this stage (after reloading Nginx), I added the vaultpress user to the web instances and put the VaultPress public key in that user's .ssh/authorized_keys.

Testing the connection: all green on the VaultPress site, and both the SSH and SFTP connections work properly. To test manually, I added my own public key and connected through the bastion:

ssh -l vaultpress -p 1234 BASTION_IP

Since the IPs may change when there's an auto scaling event, I added a cron job that runs every 5 minutes on the bastion and updates the Nginx configuration to point to one of the web instances' internal IPs. A simplistic version looks like this:

# the command inside $( ) should print a web instance's internal IP in its first column
IP="$( | awk '{ print $1 }')"
sed -i -E "s/proxy_pass (.+):22/proxy_pass ${IP}:22/" /etc/nginx/nginx.conf
nginx -t && service nginx reload

You can also use a CloudWatch Events rule that fires when auto scaling happens, pointing to a Lambda function that calls Systems Manager to change the IP in Nginx. Here's a reference for that.

Oh, and don’t forget to lock down the bastion instance. In my case, I set the Security group in the EC2 console to allow only the following:

Type: Custom TCP Rule
Protocol: TCP
Port Range: 1234
Description: VaultPress SSH Access

List IPs from CloudTrail events

A quick command to list the IPs from AWS CloudTrail events.

aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=${ACCESS_KEY_ID} --max-items ${MAX_ITEMS} \
  | jq -r '.Events[].CloudTrailEvent' \
  | jq '.sourceIPAddress' \
  | sort | uniq

This of course can be extended to include more information, for example:

aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=${ACCESS_KEY_ID} --max-items ${MAX_ITEMS} \
  | jq -r '.Events[].CloudTrailEvent' \
  | jq '{ User: .userIdentity.userName, IP: .sourceIPAddress, Event: .eventName }'
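
The same extraction can be done in Python; here it runs against a trimmed, made-up event in the shape lookup-events returns (each CloudTrailEvent is itself a JSON string, which is why the pipeline above calls jq twice):

```python
import json

# A trimmed, made-up CloudTrail event in the shape lookup-events returns
sample = {"Events": [{"CloudTrailEvent": json.dumps({
    "userIdentity": {"userName": "alice"},
    "sourceIPAddress": "",
    "eventName": "ConsoleLogin"})}]}

def summarize(events):
    """Extract the same fields as the second jq pipeline, per event."""
    out = []
    for e in events["Events"]:
        detail = json.loads(e["CloudTrailEvent"])
        out.append({"User": detail.get("userIdentity", {}).get("userName"),
                    "IP": detail.get("sourceIPAddress"),
                    "Event": detail.get("eventName")})
    return out

print(summarize(sample))
# [{'User': 'alice', 'IP': '', 'Event': 'ConsoleLogin'}]
```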

Change profiles automatically in Tilix when connecting to SSH hosts

I use Tilix as my main terminal app on Ubuntu. I like it mainly because it's easy to use and shows multiple sessions in tiles and tabs, so it's easy to switch between multiple servers without too much fuss.

Tilix is also pretty customizable. I just needed the profile to switch automatically according to the server/machine I connect to. This can be done according to the documentation, but I wanted something configured locally, not sent to the remote system.

So I just added a function that wraps my ssh command. Here's how it looks in my .bashrc:

ssh() {
	SSHAPP=`which ssh`;
	ARGS="$*";
	echo "switching to $ARGS";
	printf "\033]7;file://%s/\007" "$ARGS";
	"$SSHAPP" "$@";
}

This sets the hostname of the machine you're logging into as the title. That's your trigger. What remains is to create a profile and have it switch automatically when the trigger (hostname:directory; user can also be added) matches.

Go to Profiles > Edit Profile > Advanced (tab) and under Match add the hostname.

That's about it. I'm now going to add a new profile with a red background for my production machines! Too much?

Update (Jan 8, 2020)

I noticed that I have also added a Match for my localhost hostname in the Default profile, so that the profile would revert to that once I logged off a remote host.

Another thing I needed was a special trigger for wildcard matching of hostnames (i.e. to switch profiles on all my AWS instances accessed via Session Manager), matching a special hostname (aws-instance below, for example). Here's my ssh bash function now:

ssh () {
     SSHAPP=`which ssh`
     ARGS="$*"
     if [[ $ARGS == i-* ]]; then
         echo "switching to AWS instance"
         printf "\033]7;file://%s/\007" "aws-instance"
     else
         echo "switching to $ARGS"
         printf "\033]7;file://%s/\007" "$ARGS"
     fi
     "$SSHAPP" "$@"
}