Get daily cost alerts on AWS

So I wanted a better alarm system for when AWS hits us with unexpected costs. It’s better to find out quickly that something is wrong than to rack up hundreds of dollars in charges for something you don’t really need or want.

The AWS-provided alarm checks for hikes on a monthly basis. Here’s the doc they published. That alarm sounds when your estimated monthly bill is on track to exceed the budgeted amount, or whatever figure you had in mind in the first place. Honestly, not very useful in our case: by the time it fires, it’s too late.

The only alternative I found was creating a daily check that compares yesterday’s costs against a max_amount threshold. Let’s say you want your daily bill to stay under US$5.
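One wrinkle worth knowing up front: the Cost Explorer API returns amounts as strings, so the comparison needs a numeric cast. The check itself boils down to this (the helper name over_budget is mine):

```python
def over_budget(amount, max_amount):
    """Cost Explorer returns Amount as a string (e.g. "6.25"); compare numerically."""
    return float(amount) > float(max_amount)
```

Comparing the raw strings instead would do a lexicographic comparison and give wrong answers (e.g. "10" < "5").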

For ease of use and maintainability, I’m using a lambda function triggered by a cron (EventBridge rule) for the daily checks. And I’m sending the Alarm using an SNS topic, this way I can subscribe to it by email, or send it to our Mattermost channel, etc.

Here’s the code:

import os
import json
import boto3
from datetime import datetime, timedelta


def lambda_handler(event, context):
    yesterday   = datetime.strftime(datetime.now() - timedelta(1), '%Y-%m-%d')
    twodaysago  = datetime.strftime(datetime.now() - timedelta(2), '%Y-%m-%d')
    cost_metric = os.environ.get('cost_metric')
    max_amount  = os.environ.get('max_amount')
    
    ce  = boto3.client('ce')
    sns = boto3.client('sns')
    
    # Note: Cost Explorer treats the End date as exclusive, so this period
    # covers a single day (data for the most recent day can lag a bit)
    result = ce.get_cost_and_usage(
        TimePeriod={'Start': twodaysago, 'End': yesterday},
        Granularity='DAILY',
        Metrics=[cost_metric]
    )
    total_amount = result['ResultsByTime'][0]['Total'][cost_metric]['Amount']
    
    # amounts come back as strings, so compare numerically
    if float(total_amount) > float(max_amount):
        # create_topic is idempotent: it returns the existing topic if one exists
        sns_topic = sns.create_topic(Name='BillingAlert')
        sns.publish(
            TopicArn=sns_topic.get('TopicArn'),
            Message='Total cost "{} USD" exceeded max_amount rate: {}'.format(total_amount, max_amount)
        )
        
    return {
        'statusCode': 200,
        'body': json.dumps('cost check: {}'.format(total_amount))
    }

Note that you will need to add a couple of environment variables to the Lambda function: cost_metric (e.g. UnblendedCost) and max_amount.
You also need to grant the following permissions to the role used by the lambda function: ce:GetCostAndUsage, sns:Publish and sns:CreateTopic. Here’s the policy I use:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ce:GetCostAndUsage",
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "sns:Publish",
                "sns:CreateTopic",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:*:log-group:/aws/lambda/checkDailyCosts:*",
                "arn:aws:sns:*:*:*"
            ]
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:*:*"
        }
    ]
}

After that’s set up, go to your SNS topic (created by Lambda if it didn’t exist) and subscribe to it. There you go: daily checks and an alarm whenever the bill is higher than expected.



How can I copy S3 objects from another AWS account?

Ref: https://aws.amazon.com/premiumsupport/knowledge-center/copy-s3-objects-account/

Well, you can do it manually by following the AWS KB article linked above. I’m always confused by those, and I have to read them a few times before I can apply the steps correctly. Recently, however, I’ve been going the other way around: whenever there are steps to follow, I try to translate them into an Ansible playbook.

Here’s the playbook for syncing files between S3 buckets in different accounts:

---
# copy-bucket.yml
- hosts: localhost
  vars:
    source_bucket: my-source-bucket  
    dest_bucket: my-dest-bucket
    dest_user_arn: arn:aws:iam::ACCOUNTID:user/USERNAME
    dest_user_name: USERNAME
    source_profile: src_profile
    dest_profile: dest_profile
  tasks:
    - name: Attach bucket policy to source bucket
      s3_bucket:
        name: "{{ source_bucket }}"
        policy: "{{ lookup('template','bucket-policy.json.j2') }}"
        profile: "{{ source_profile }}"

    - name: Attach an IAM policy to a user in dest account
      iam_policy:
        iam_type: user
        iam_name: "{{ dest_user_name }}"
        policy_name: "s3_move_access"
        state: present
        policy_json: "{{ lookup( 'template', 'user-policy.json.j2') }}"
        profile: "{{ dest_profile }}"
      register: user_policy

    - name: Sync the files 
      shell: aws s3 sync s3://{{ source_bucket }}/ s3://{{ dest_bucket }}/ --profile {{ dest_profile }}

You will also need the following JSON templates. First, user-policy.json.j2 (the IAM policy attached to the destination user):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::{{ source_bucket }}",
                "arn:aws:s3:::{{ source_bucket }}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::{{ dest_bucket }}",
                "arn:aws:s3:::{{ dest_bucket }}/*"
            ]
        }
    ]
}

And here’s bucket-policy.json.j2 (the bucket policy attached to the source bucket):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateS3Access",
            "Effect": "Allow",
            "Principal": {"AWS": "{{ dest_user_arn }}"},
            "Action": ["s3:ListBucket","s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::{{ source_bucket }}/*",
                "arn:aws:s3:::{{ source_bucket }}"
            ]
        }
    ]
}

Run it with: ansible-playbook -v copy-bucket.yml

Make sure the profile names are set up correctly on your machine, and that the IAM user exists.
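If you’d rather skip the Jinja template, the bucket policy can also be built directly as a Python dict and serialized with json.dumps. A sketch mirroring bucket-policy.json.j2 above (the function name is mine):

```python
import json


def make_bucket_policy(source_bucket, dest_user_arn):
    """Cross-account read policy for the source bucket
    (same document that bucket-policy.json.j2 renders)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DelegateS3Access",
            "Effect": "Allow",
            "Principal": {"AWS": dest_user_arn},
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::{}/*".format(source_bucket),
                "arn:aws:s3:::{}".format(source_bucket),
            ],
        }],
    }


# json.dumps(make_bucket_policy(...)) can be fed to the s3_bucket module's
# policy parameter, or to boto3's put_bucket_policy
```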



ChatOps with Mattermost and AWS Lambda

I’ve been working towards making things simpler when managing distributed resources at work. And since we spend most of our day in the chat room (was Slack, now Mattermost), I thought it best to get started with ChatOps.

It’s just a fancy word for doing stuff right from the chat window. And there’s so much one can do, especially with simple Slash Commands.

Here’s a lambda function I set up yesterday for invalidating CloudFront distributions.

from time import time
import boto3

import json
import os
import re


EXPECTED_TOKEN = os.environ['mmToken']
ALLOWED_USERS = re.split('[, ]', os.environ['allowedUsers'])
DISTRIBUTIONS = {
    'site-name': 'DISTRIBUTIONID',
    'another.site': 'DISTRIBUTIONID05'
}

def parse_command_text(command_text):
    pattern = r"({})\s+(.*)".format('|'.join(map(re.escape, DISTRIBUTIONS.keys())))
    m = re.match(pattern, command_text)
    if m:
        # split the path argument(s) into a list for the invalidation batch
        return { 'site': m.group(1), 'path': m.group(2).split() }
    else:
        return False

def lambda_handler(event, context):
    # Parse the request
    try:
        request_data = event["queryStringParameters"]
    except:
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Use GET for setting up mattermost slash command" }'
        }

    # Check the token matches.
    if request_data.get("token", "") != EXPECTED_TOKEN:
        print('Wrong Token!')
        return {
            "statusCode": 401,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Mattermost token does not match" }'
        }
        
    # Check the user is allowed to run the command
    if request_data.get("user_name", "") not in ALLOWED_USERS:
        print('Wrong User! {} not in {}'.format(request_data['user_name'], ALLOWED_USERS))
        return {
            "statusCode": 401,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "User not allowed to perform action" }'
        }

    # parse the command
    command_text = request_data.get("text", "")
    if not command_text:
        print('Nothing to do, bailing out')
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "No command text sent" }'
        }
    parts = parse_command_text(command_text)
    if not parts: 
        print('Bad formatting - command: {}'.format(command_text))
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Wrong pattern" }'
        }


    # Do the actual work
    cf_client = boto3.client('cloudfront')

    # Invalidate
    boto_response = cf_client.create_invalidation(
        DistributionId=DISTRIBUTIONS[parts['site']],
        InvalidationBatch={
            'Paths': {
                'Quantity': len(parts['path']),
                'Items': parts['path'] 
            },
            'CallerReference': str(time()).replace(".", "")
        }
    )['Invalidation']

    # Build the response message text.
    text = """##### Executing invalidation
| Key | Info |
| --- | ---- |
| Site | {} |
| Path | {} |
| ID | {} |
| Status | {} |""".format(
        parts['site'], 
        parts['path'],
        boto_response['Id'], 
        boto_response['Status']
    )

    # Build the response object.
    response = {
        "response_type": "in_channel",
        "text": text,
    }

    # Return the response as JSON
    return {
        "body": json.dumps(response),
        "headers": {"Content-Type": "application/json"},
        "statusCode": 200,
    }

Note that you need to hook that up with an API Gateway in AWS. Once that’s done, you will have a URL endpoint ready for deployment.
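The slash-command parsing is worth exercising locally before wiring up the gateway. Here’s a self-contained sketch of the same matching logic (the distribution IDs are placeholders):

```python
import re

# hypothetical site -> CloudFront distribution ID map
DISTRIBUTIONS = {'site-name': 'DISTRIBUTIONID', 'another.site': 'DISTRIBUTIONID05'}


def parse_command_text(command_text):
    """Match '<known-site> <path ...>' and split the paths into a list."""
    pattern = r"({})\s+(.*)".format('|'.join(map(re.escape, DISTRIBUTIONS)))
    m = re.match(pattern, command_text)
    if not m:
        return None
    return {'site': m.group(1), 'path': m.group(2).split()}
```

Note the re.escape: site names contain dots, which would otherwise match any character in the alternation.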

Next, I created the slash command in Mattermost with the following:

(Screenshot: slash command configuration)

That’s pretty much it. Rinse and repeat for a different command, different usage.

On my list next is to have more interaction with the user in mattermost per https://docs.mattermost.com/developer/interactive-messages.html
Weekend Project, Yay!



AWS Lambda Function Code

Quick snippet to download the deployed function code:

wget -O function.zip $(aws lambda get-function --function-name MyFunctionName --query 'Code.Location' --output text)

And another to package and upload the latest code:

cd package 
zip -r9 ../function.zip .
cd ..
zip -g function.zip function.py
aws lambda update-function-code --function-name MyFunctionName --zip-file fileb://function.zip



MFA from command line

MFA, 2FA, 2 step validation, etc. are everywhere these days. And it’s a good thing.

The problem with using the phone to get the authentication code is that you need to have it handy at all times (at least whenever you want to log in), and that you have to read the code and then type it in (too many steps).

One possible alternative is to use oathtool from the command line.

Here’s my snippet; I added the following line to my .bashrc:

function mfa () { oathtool --base32 --totp "$(cat ~/.mfa/$1.mfa)" | xclip -sel clip ;}

Some preparation work:

sudo apt install oathtool xclip
mkdir ~/.mfa

If you want to use the same account with both a phone-based MFA generator and the shell, set them up at the same time: use the generated secret string to set up the account in Google Authenticator (as an example), then add it to ~/.mfa/account_name.mfa

The use of xclip automatically copies the 6 digit authentication code to the clipboard. You can go ahead and paste it.
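Under the hood, oathtool is computing a standard RFC 6238 TOTP code. For reference, the same 6-digit code can be produced with only the Python standard library (a sketch; reading the secret from ~/.mfa is left out):

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP (HMAC-SHA1), matching `oathtool --base32 --totp`."""
    key = base64.b32decode(secret_b32.upper().replace(" ", ""))
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Called without for_time it uses the current clock, just like the shell function.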

The above setup works on Ubuntu; I haven’t tried it on other systems.



VaultPress SSH/SFTP access for WordPress site behind AWS load balancer

I recently worked on migrating a large WordPress site from a dedicated server to AWS following the reference architecture as outlined in https://github.com/aws-samples/aws-refarch-wordpress

One of the upsides of this architecture is that the site is entirely behind the load balancer and the instances running the web server are never accessible from the internet (not even via ssh or http). So any updates to the site need to be deployed using Systems Manager. I will post about that later … However due to this restriction, VaultPress is only able to access the site (for backups) over https, which causes extra load on the web instances.

To fix that, we need to allow VaultPress SSH access directly to the web instances (at least to one of them). And the only way in via SSH is through a bastion instance.

On this bastion instance, I installed Nginx with the following configuration in /etc/nginx/nginx.conf

stream {
        server {
                listen 1234;
                proxy_pass 10.0.1.12:22;
        }
}

where 10.0.1.12 is the internal IP of one of the web instances behind the load balancer.

At this stage (after reloading Nginx), I added the vaultpress user to the web instances and put the VaultPress SSH public key in that user’s .ssh/authorized_keys.

Testing the connection works: all green on the VaultPress site, and both the SSH and SFTP connections should work properly. To test manually, I added my own public key and connected with:

ssh -l vaultpress -p 1234 bastion.mydomain.com 

Since the IPs may change in case there’s an auto scaling event, I added a cron job that runs every 5 minutes on the bastion and updates the nginx configuration to point to one of the web instances internal IPs. A simplistic version looks like this:

#!/bin/bash
IP="$(list-ips.sh | awk '{ print $1 }')"
sed -i -E "s/proxy_pass (.+):22/proxy_pass ${IP}:22/" /etc/nginx/nginx.conf
nginx -t && service nginx reload
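If you prefer Python over sed for the config rewrite, the same substitution looks like this (a sketch; the regex mirrors the sed expression above, and the function name is mine):

```python
import re


def update_proxy_pass(config_text, new_ip):
    """Point the stream block's proxy_pass at a new backend IP, keeping port 22."""
    return re.sub(r"proxy_pass\s+\S+:22;",
                  "proxy_pass {}:22;".format(new_ip),
                  config_text)
```

As in the cron script, run nginx -t and reload after rewriting the file.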

You can also add a CloudWatch Events rule that fires when auto scaling happens, pointing to a Lambda function that calls Systems Manager to change the IP in the Nginx configuration. Here’s a reference for that: https://docs.aws.amazon.com/autoscaling/ec2/userguide/cloud-watch-events.html#create-lambda-function

Oh, and don’t forget to lock down the bastion instance. In my case, I set the Security group in the EC2 console to allow only the following:

Type: Custom TCP Rule
Protocol: TCP
Port Range: 1234
Source: 192.0.64.0/18
Description: VaultPress SSH Access



List IPs from CloudTrail events

A quick command to list the IPs from AWS CloudTrail events.

#!/bin/bash
ACCESS_KEY_ID=AKIASMOETHINGHERE
MAX_ITEMS=100
aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=${ACCESS_KEY_ID} --max-items ${MAX_ITEMS} \
  | jq -r '.Events[].CloudTrailEvent' \
  | jq -r '.sourceIPAddress' \
  | sort | uniq

This of course can be extended to include more information, for example:

#!/bin/bash
ACCESS_KEY_ID=AKIASMOETHINGHERE
MAX_ITEMS=100
aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=${ACCESS_KEY_ID} --max-items ${MAX_ITEMS} \
  | jq -r '.Events[].CloudTrailEvent' \
  | jq '{ User: .userIdentity.userName, IP: .sourceIPAddress, Event: .eventName }'
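The same extraction works in Python if jq isn’t handy. The catch in both cases is that each Events[].CloudTrailEvent is itself a JSON string and needs a second parse (the helper name is mine):

```python
import json


def unique_source_ips(lookup_events_json):
    """Distinct source IPs from an `aws cloudtrail lookup-events` response."""
    events = json.loads(lookup_events_json)['Events']
    # CloudTrailEvent is a JSON string embedded in the JSON response
    records = [json.loads(e['CloudTrailEvent']) for e in events]
    return sorted({r.get('sourceIPAddress') for r in records})
```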



packet_write_wait: Connection to xxx port 22: Broken pipe

SSH connections started dropping left and right a few days ago. At first I thought it was a problem with my connection; our DSL connection in Beirut, Lebanon has been getting a lot better this past year, but it still has its quirks now and then, and I blamed the wires and bad weather.

But it happened again the second day, and the third, and it became really annoying, with jobs killed mid-execution whenever I forgot to start a screen session.

Long story short, it seems some settings were removed or reset after an Ubuntu upgrade on my home machine. This is what I added to /etc/ssh/ssh_config (not sshd_config!):

Host *
     ServerAliveInterval 30
     ServerAliveCountMax 5

And that’s it!

If you don’t want to edit the system-wide configuration, you can add the same settings to ~/.ssh/config for the same effect.
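For a single problematic host, the same directives can also be scoped to just that host in ~/.ssh/config (the hostname below is a placeholder):

```
Host flaky-server.example.com
     ServerAliveInterval 30
     ServerAliveCountMax 5
```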



Check PHP-FPM Status from CLI

It’s good to have the status page, especially if you need to troubleshoot issues that are not showing up in the regular logs, such as high load or memory consumption.

However, looking at that page and refreshing it manually is not always useful. Sometimes you need to log that data, or have a way to pinpoint a single PID causing the load.

First make sure you have the status page accessible. Here’s a tutorial I like: https://easyengine.io/tutorials/php/fpm-status-page/

Then create this script on the server. Make sure to change the connect address to your PHP-FPM pool’s correct port or socket.

#!/bin/bash
# Requirements: cgi-fcgi
#   on ubuntu: apt-get install libfcgi0ldbl

RESULT=$(SCRIPT_NAME=/status \
SCRIPT_FILENAME='/status' \
QUERY_STRING=full \
REQUEST_METHOD=GET \
/usr/bin/cgi-fcgi -bind -connect 127.0.0.1:9000)

if [ -n "$1" ]; then
    echo -e "$RESULT" | grep -A12 "$1"
else
    echo -e "$RESULT"
fi

One way I use it: run `top` to find the PID of the suspect process, then run `fpm_status.sh <PID>`.
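If grep -A12 is too blunt, you can also pull out a single worker’s block programmatically. A Python sketch (the function name and the separator handling are mine, based on the `full` status output, where php-fpm separates per-process blocks with lines of asterisks):

```python
def find_worker(status_text, pid):
    """Return the `full` status block for one worker, matched by its PID."""
    block = []
    for line in status_text.splitlines():
        if line.startswith("***"):  # block separator line
            if any(l.startswith("pid:") and l.split()[-1] == str(pid) for l in block):
                return "\n".join(block).strip()
            block = []
        else:
            block.append(line)
    # the last block has no trailing separator
    if any(l.startswith("pid:") and l.split()[-1] == str(pid) for l in block):
        return "\n".join(block).strip()
    return None
```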



Handling AWS Simple Email Service notifications

Apparently it’s not enough to gather information on your email sending, bounces, and complaints. You also need to act when there’s a problem; otherwise, Amazon will put your account under probation and might even shut it down completely if you don’t fix the issue.

And it’s pretty easy to implement per their sample code and the explanation on the AWS blog.

Here are the steps I use to set it up:

  • Create an SNS topic. Let’s call it SES-BOUNCES. Or create two SNS topics (one for bounces, another for complaints); I will go with just one.
  • Create a service on your site to handle the bounces. I used some simple PHP code (pasted below) for that.
  • Create a subscription in the SNS topic from the first step, pointing at that service

The service should handle 2 things:

  • the subscription (that will happen once!): call the SubscribeURL and you’re done.
  • the bounces/complaints: remove the email addresses from your mailing list, or blacklist them, or … The main idea is to stop sending emails to those who complain, and to stop trying addresses that bounce.
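The decision logic itself is small. Here’s a Python sketch of the same rules as the PHP service (permanent bounces and complaints get blacklisted; transient bounces are left alone) — the function name is mine:

```python
import json


def recipients_to_blacklist(notification_json):
    """Given an SES notification (the SNS Message body, itself a JSON string),
    return the addresses that should stop receiving mail."""
    message = json.loads(notification_json)
    ntype = message.get("notificationType")
    if ntype == "Bounce" and message["bounce"]["bounceType"] == "Permanent":
        return [r["emailAddress"] for r in message["bounce"]["bouncedRecipients"]]
    if ntype == "Complaint":
        return [r["emailAddress"] for r in message["complaint"]["complainedRecipients"]]
    # transient (soft) bounces and anything else: no action
    return []
```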

Here’s the code for the PHP service; it’s pretty straightforward:

if ( 'POST' !== $_SERVER['REQUEST_METHOD'] ) {
	http_response_code( 405 );
	die;
}

require 'vendor/autoload.php';

use Aws\Sns\Message;
use Aws\Sns\MessageValidator;

function parse_bounce( $message ) {
	$recipients = array();
	foreach ( $message['bounce']['bouncedRecipients'] as $recipient ) {
		$recipients[] = $recipient['emailAddress'];
	}
	switch ( $message['bounce']['bounceType'] ) {
		case 'Transient':
			error_log( "BouncesController - Soft bounces occurred for " . join( ', ', $recipients ) );

			return array();
			break;
		case 'Permanent':
			error_log( "BouncesController - Hard bounces occurred for " . join( ', ', $recipients ) );

			return $recipients;
			break;
	}
}

function parse_complaint( $message ) {
	$recipients = array();
	foreach ( $message['complaint']['complainedRecipients'] as $recipient ) {
		$recipients[] = $recipient['emailAddress'];
	}

	return $recipients;
}

function blacklist_recipients( $recipients ) {
	foreach ( $recipients as $email_address ) {
		// blacklist those emails
	}
}

try {
	$sns_message = Message::fromRawPostData();

	$validator = new MessageValidator();
	$validator->validate( $sns_message );

	if ( $validator->isValid( $sns_message ) ) {
		if ( in_array( $sns_message['Type'], [ 'SubscriptionConfirmation', 'UnsubscribeConfirmation' ] ) ) {
			file_get_contents( $sns_message['SubscribeURL'] );
			error_log( 'Subscribed to ' . $sns_message['SubscribeURL'] );
		}

		if ( $sns_message['Type'] == 'Notification' ) {
			// The SNS Message field is itself a JSON string; decode it first.
			$message           = json_decode( $sns_message['Message'], true );
			$notification_type = $message['notificationType'];
			if ( $notification_type == 'Bounce' ) {
				blacklist_recipients( parse_bounce( $message ) );
			}
			if ( $notification_type == 'Complaint' ) {
				blacklist_recipients( parse_complaint( $message ) );
			}
		}
	}


} catch ( Exception $e ) {
	error_log( 'Error: ' . $e->getMessage() );
	http_response_code( 400 );
	die;
}

Don’t forget to run composer to get those dependencies:

{
  "require": {
    "aws/aws-sdk-php": "3.*",
    "aws/aws-php-sns-message-validator": "1.4.0"
  }
}

And make sure you don’t send unsolicited emails! It’s just not nice.
