My Ansible AWS EC2 Dynamic Inventory

Start with the Ansible configuration. This can be set in /etc/ansible/ansible.cfg, ~/.ansible.cfg (in the home directory), or ansible.cfg (in the current directory).

My suggestion is to use one of the first two (i.e. /etc/ansible/ansible.cfg or ~/.ansible.cfg) if you’re going to be managing instances from your machine. Update the configuration as needed.

[defaults]
inventory = ./ansible_plugins
enable_plugins = aws_ec2
host_key_checking = False
pipelining = True
log_path = /var/log/ansible

You may need other plugins; this one is for aws_ec2. In the /etc/ansible/ansible_plugins directory, create the *_aws_ec2.yml configuration file for your inventory:

# /etc/ansible/ansible_plugins/testing_aws_ec2.yml
---
plugin: aws_ec2
aws_profile: testing
regions:
  - us-east-1
  - us-east-2
filters:
  tag:Team: testing
  instance-state-name : running
hostnames:
  - instance-id
  - dns-name
keyed_groups:
  - prefix: team
    key: tags['Team']

I’m filtering on tag:Team == testing and showing only running instances.

I’m also using the instance-id and dns-name attributes as hostnames.

And I’m using tags['Team'] as the grouping key.

So now, I can do the following from any directory (since my configuration is global in /etc/ansible):

$ ansible-inventory --list --yaml

all:
  children:
    aws_ec2:
      hosts:
        i-xxxxxxxxxxxxxxx:
          ami_launch_index: 0
          architecture: x86_64
          block_device_mappings:
          - device_name: /dev/sda1
            ebs:
              attach_time: 2020-08-10 15:20:58+00:00
              delete_on_termination: true
              status: attached
              volume_id: vol-xxxxxxxxxxxxxx
...
    team_testing:
      hosts:
        i-xyxyxyxyxyyxyxyy: {}
        i-xyxyxy2321yxyxyy: {}
        i-xyxyxyxyxy89yxyy: {}
        i-xyxy1210xyyxyxyy: {}
        i-xyxy999999yxyxyy: {}
        i-xyxyxy44xyyxyxyy: {}
        i-xyx2323yxyyxyxyy: {}
        i-xyxyxyxyxy9977yy: {}
    ungrouped: {}

I can also use the team_testing group or the individual instance IDs in my Ansible hosts calls.
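
For example, a quick connectivity check against the whole group, or a one-off command on a single instance (a minimal sketch; it assumes the instances are reachable over SSH):

$ ansible team_testing -m ping
$ ansible i-xyxyxyxyxyyxyxyy -m shell -a 'uptime'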

Get daily cost alerts on AWS

So I wanted to have a better alarm system for when AWS hits us with unexpected costs. It’s better to know something is wrong quickly rather than suffer hundreds of dollars in costs for something you don’t really need or want.

The AWS-provided alarm checks for hikes on a monthly basis. Here’s the doc they published. That alarm sounds when your estimated monthly bill is going to be higher than the budgeted amount, or whatever you had in mind in the first place. Honestly, not very useful in our case: it will just be too late.

The only alternative I found was creating a daily check that compares yesterday’s costs against a preset max_amount. Let’s say you want your daily bill not to exceed US$5.

For ease of use and maintainability, I’m using a lambda function triggered by a cron (EventBridge rule) for the daily checks. And I’m sending the alarm using an SNS topic; this way I can subscribe to it by email, send it to our Mattermost channel, etc.
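
The EventBridge side can be wired up from the CLI along these lines (a sketch; the rule name is made up, and checkDailyCosts is the function name matching the role policy below):

aws events put-rule --name dailyCostCheck --schedule-expression "rate(1 day)"
aws lambda add-permission --function-name checkDailyCosts \
  --statement-id dailyCostCheck --action lambda:InvokeFunction \
  --principal events.amazonaws.com
aws events put-targets --rule dailyCostCheck \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:ACCOUNT_ID:function:checkDailyCosts"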

Here’s the code:

import os
import json
import boto3
from datetime import datetime, timedelta


def lambda_handler(event, context):
    # Cost Explorer's End date is exclusive, so [twodaysago, yesterday)
    # covers one full day of data
    yesterday   = datetime.strftime(datetime.now() - timedelta(1), '%Y-%m-%d')
    twodaysago  = datetime.strftime(datetime.now() - timedelta(2), '%Y-%m-%d')
    cost_metric = os.environ.get('cost_metric')
    # cast to float: environment variables come in as strings
    max_amount  = float(os.environ.get('max_amount'))

    ce  = boto3.client('ce')
    sns = boto3.client('sns')

    result = ce.get_cost_and_usage(
        TimePeriod={'Start': twodaysago, 'End': yesterday},
        Granularity='DAILY',
        Metrics=[cost_metric]
    )
    # the API returns the amount as a string, so cast it too
    total_amount = float(result['ResultsByTime'][0]['Total'][cost_metric]['Amount'])

    if total_amount > max_amount:
        # create_topic is idempotent: it returns the existing topic if it's already there
        sns_topic = sns.create_topic(Name='BillingAlert')
        sns.publish(
            TopicArn=sns_topic.get('TopicArn'),
            Message='Total cost "{} USD" exceeded max_amount rate: {}'.format(total_amount, max_amount)
        )

    return {
        'statusCode': 200,
        'body': json.dumps('cost check: {}'.format(total_amount))
    }

Note that you will need to add a couple of environment variables to the Lambda function: cost_metric (e.g. UnblendedCost) and max_amount.
And the following permissions for the role used by the lambda function: ce:GetCostAndUsage, sns:Publish and sns:CreateTopic:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ce:GetCostAndUsage",
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "sns:Publish",
                "sns:CreateTopic",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:*:log-group:/aws/lambda/checkDailyCosts:*",
                "arn:aws:sns:*:*:*"
            ]
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:*:*"
        }
    ]
}

After that’s set up, go to your SNS topic (created by the Lambda function if it didn’t exist) and subscribe to it. There you go: daily checks and an alarm if the bill is higher than expected.
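
Subscribing by email also works from the CLI (a sketch; the topic ARN and address are placeholders):

aws sns subscribe --topic-arn arn:aws:sns:us-east-1:ACCOUNT_ID:BillingAlert \
  --protocol email --notification-endpoint you@example.com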

ChatOps with Mattermost and AWS Lambda

I’ve been working towards making things simpler when managing distributed resources at work. And since we spend most of our day in the chat room (was Slack, now Mattermost), I thought it best to get started with ChatOps.

It’s just a fancy word for doing stuff right from the chat window. And there’s so much one can do, especially with simple Slash Commands.

Here’s a lambda function I set up yesterday for invalidating CloudFront distributions.

from time import time
import boto3

import json
import os
import re


EXPECTED_TOKEN = os.environ['mmToken']
ALLOWED_USERS = re.split('[, ]', os.environ['allowedUsers'])
DISTRIBUTIONS = {
    'site-name': 'DISTRIBUTIONID',
    'another.site': 'DISTRIBUTIONID05'
}

def parse_command_text(command_text):
    # Expected format: "<site> <path> [<path> ...]"
    pattern = r"({})\s+(.*)".format('|'.join(DISTRIBUTIONS.keys()))
    m = re.match(pattern, command_text)
    if m:
        # create_invalidation expects a list of paths
        return {'site': m.group(1), 'path': m.group(2).split()}
    else:
        return False

def lambda_handler(event, context):
    # Parse the request
    try:
        request_data = event["queryStringParameters"]
    except (KeyError, TypeError):
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Use GET for setting up mattermost slash command" }'
        }

    # Check the token matches.
    if request_data.get("token", "") != EXPECTED_TOKEN:
        print('Wrong Token!')
        return {
            "statusCode": 401,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Mattermost token does not match" }'
        }
        
    # Check the user is allowed to run the command
    if request_data.get("user_name", "") not in ALLOWED_USERS:
        print('Wrong User! {} not in {}'.format(request_data['user_name'], ALLOWED_USERS))
        return {
            "statusCode": 401,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "User not allowed to perform action" }'
        }

    # parse the command
    command_text = request_data.get("text", "")
    if not command_text:
        print('Nothing to do, bailing out')
        return {
            "statusCode": 404,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "No command text sent" }'
        }
    parts = parse_command_text(command_text)
    if not parts: 
        print('Bad formatting - command: {}'.format(command_text))
        return {
            "statusCode": 402,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Wrong pattern" }'
        }


    # Do the actual work
    cf_client = boto3.client('cloudfront')

    # Invalidate
    boto_response = cf_client.create_invalidation(
        DistributionId=DISTRIBUTIONS[parts['site']],
        InvalidationBatch={
            'Paths': {
                'Quantity': len(parts['path']),
                'Items': parts['path'] 
            },
            'CallerReference': str(time()).replace(".", "")
        }
    )['Invalidation']

    # Build the response message text.
    text = """##### Executing invalidation
| Key | Info |
| --- | ---- |
| Site | {} |
| Path | {} |
| ID | {} |
| Status | {} |""".format(
        parts['site'], 
        parts['path'],
        boto_response['Id'], 
        boto_response['Status']
    )

    # Build the response object.
    response = {
        "response_type": "in_channel",
        "text": text,
    }

    # Return the response as JSON
    return {
        "body": json.dumps(response),
        "headers": {"Content-Type": "application/json"},
        "statusCode": 200,
    }

Note that you need to hook that up with an API Gateway in AWS. Once that’s done, you will have a URL endpoint ready to plug into Mattermost.
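
You can sanity-check the endpoint with curl before wiring up Mattermost (a sketch; the URL, token and values are placeholders matching the checks in the function):

curl -G 'https://abc123.execute-api.us-east-1.amazonaws.com/default/invalidate' \
  --data-urlencode 'token=MY_MM_TOKEN' \
  --data-urlencode 'user_name=abdallah' \
  --data-urlencode 'text=site-name /index.html'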

Next, I created the slash command in mattermost with the following:

[screenshot: slash command configuration]
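
Roughly, the settings that matter (treat this as a guide rather than gospel): the Request URL is the API Gateway endpoint, the Request Method is GET (that’s what the function expects), and the Command Trigger Word is whatever you want to type, e.g. /invalidate.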

That’s pretty much it. Rinse and repeat for a different command, different usage.

Next on my list is more interaction with the user in Mattermost, per https://docs.mattermost.com/developer/interactive-messages.html
Weekend project, yay!

AWS Lambda Function Code

A quick snippet to get the function code:

wget -O function.zip $(aws lambda get-function --function-name MyFunctionName --query 'Code.Location' --output text)

And another to update the Lambda function with the latest code:

cd package
zip -r9 ../function.zip .        # zip the dependencies first
cd ..
zip -g function.zip function.py  # then add the function code itself
aws lambda update-function-code --function-name MyFunctionName --zip-file fileb://function.zip

MFA from command line

MFA, 2FA, 2 step validation, etc. are everywhere these days. And it’s a good thing.

The problem with using the phone to get the authentication code is that you need to have it handy at all times (at least whenever you want to log in), and that you have to read the code and then type it in (too many steps).

One possible alternative is to use the command-line tool oathtool.

Here’s my snippet; I added the following line to my .bashrc:

function mfa () { oathtool --base32 --totp "$(cat ~/.mfa/$1.mfa)" | xclip -sel clip ;}

Some preparation work:

sudo apt install oathtool xclip
mkdir ~/.mfa

If you want to use the same account with both a phone-based MFA generator and the shell, set them up at the same time: use the generated string to set up the account in Google Authenticator (as an example), then add it to ~/.mfa/account_name.mfa.
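
For example (a sketch; the secret and account name are made up):

echo 'JBSWY3DPEHPK3PXP' > ~/.mfa/aws.mfa
chmod 600 ~/.mfa/aws.mfa
mfa aws    # the 6-digit code is now in the clipboard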

The use of xclip automatically copies the 6 digit authentication code to the clipboard. You can go ahead and paste it.

The above setup works on Ubuntu; I didn’t try it on other systems.

List IPs from CloudTrail events

A quick command to list the IPs from AWS CloudTrail events.

#!/bin/bash
ACCESS_KEY_ID=AKIASOMETHINGHERE
MAX_ITEMS=100
aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=${ACCESS_KEY_ID} --max-items ${MAX_ITEMS} \
  | jq -r '.Events[].CloudTrailEvent' \
  | jq -r '.sourceIPAddress' \
  | sort | uniq

This of course can be extended to include more information, for example:

#!/bin/bash
ACCESS_KEY_ID=AKIASOMETHINGHERE
MAX_ITEMS=100
aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=${ACCESS_KEY_ID} --max-items ${MAX_ITEMS} \
  | jq -r '.Events[].CloudTrailEvent' \
  | jq '{ User: .userIdentity.userName, IP: .sourceIPAddress, Event: .eventName }'
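
And if you need to narrow the search to a time window, lookup-events also takes --start-time and --end-time (a sketch, reusing the same variables; the dates are made up):

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=${ACCESS_KEY_ID} \
  --start-time 2020-08-01T00:00:00Z --end-time 2020-08-02T00:00:00Z --max-items ${MAX_ITEMS} \
  | jq -r '.Events[].CloudTrailEvent' | jq -r '.sourceIPAddress' | sort | uniq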

Check PHP-FPM Status from CLI

It’s good to have the status page, especially if you need to troubleshoot issues that are not showing up in the regular logs, such as high load or memory consumption.

However, looking at that page and refreshing it manually is not always useful. Sometimes you need to log that data, or have a way to pinpoint a single PID causing the load.

First make sure you have the status page accessible. Here’s a tutorial I like: https://easyengine.io/tutorials/php/fpm-status-page/

Then create this script on the server. Make sure to change the -connect part to your PHP-FPM pool’s correct port or socket:

#!/bin/bash
# Requirements: cgi-fcgi
#   on ubuntu: apt-get install libfcgi0ldbl

RESULT=$(SCRIPT_NAME=/status \
  SCRIPT_FILENAME='/status' \
  QUERY_STRING=full \
  REQUEST_METHOD=GET \
  /usr/bin/cgi-fcgi -bind -connect 127.0.0.1:9000)

# With an argument (e.g. a PID), show only the matching block of the full status
if [ -n "$1" ]; then
  echo -e "$RESULT" | grep -A12 "$1"
else
  echo -e "$RESULT"
fi

One way I use it: run `top` and note the suspect process PID, then run `fpm_status.sh <PID>`
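
When I want the data over time rather than a one-off look, something like this does the job (a sketch; the PID, path and interval are made up):

watch -n5 ./fpm_status.sh                          # refresh the full status every 5 seconds
./fpm_status.sh 12345 >> /var/log/fpm_status.log   # append one PID's block to a log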

Slash command for Mattermost

Following up on the code that sets the nickname and status via a bash function, I wanted to do the same using a slash command.

Here’s the code in PHP:

<?php require 'vendor/autoload.php';
use GuzzleHttp\Client;
$client = new Client(['base_uri' => 'https://chat.example.com/api/v4/']);
$SLASHCMD_TOKEN = 'GET YOUR TOKEN';
$ADMIN_TOKEN = 'GET ADMIN TOKEN';
if (isset($_POST['token']) && $_POST['token'] == $SLASHCMD_TOKEN && $_POST['command'] == '/status' && !empty($_POST['text']))
{
    $user_id = $_POST['user_id'];
    $text = $_POST['text'];
    $params = preg_split('/("[^"]*")|\h+/', $text, 2, PREG_SPLIT_NO_EMPTY | PREG_SPLIT_DELIM_CAPTURE);
    $nickname = '';
    $status = '';
    if ($params && count($params) > 0)
    {
        $nickname = trim($params[0], '"');
        $status = (count($params) == 2) ? trim($params[1]) : '';
    }
    $response = $client->request('PUT', "users/$user_id/patch", ['headers' => ['Authorization' => "Bearer " . $ADMIN_TOKEN], 'json' => ['nickname' => $nickname]]);
    if (in_array($status, array(
        'dnd',
        'online',
        'offline',
        'away'
    )))
    {
        $response = $client->request('PUT', "users/$user_id/status", ['headers' => ['Authorization' => "Bearer " . $PERSONAL_TOKEN], 'json' => ['status' => $status]]);
    }
}

Seems to work for me.

You’ll need to follow the instructions in the documentation to create the command on the server. Make sure to save the tokens in a safe place, as usual.
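
Usage then looks something like /status "abdallah|lunch" dnd: the quoted part becomes the nickname and the optional second word the status (assuming /status is the trigger word you configured).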

Change Nickname in Mattermost

A colleague asked me today for a quick way to set the nickname in Mattermost. He needed to do that to provide more information about his status than what the actual “Status” field shows, which is limited to “Online”, “Away”, “Do Not Disturb” and “Offline”.

So if you want to tell people you’re away for a couple of hours, or sick, walking the dog, etc. then you need to go IRC style and put the additional information in your nickname. Not too bad actually, just inconvenient.

I checked the Mattermost API docs and wrote a small bash script to get things going:

#!/bin/bash
# Requirements:
#  - get the token from Mattermost > Account Settings > Security > Personal Access Tokens > Create New Token
#    make sure to save the Token itself, not the ID!
#  - install jq
TOKEN=GETYOUROWNTOKEN
NICKNAME=${1:-NickNack}
STATUS=${2:-online}
CHANNEL_ID="my_hello_channel_ID"

user_id=$(curl -sH "Authorization: Bearer $TOKEN" \
  https://chat.example.com/api/v4/users/me | jq -r .id)
curl -XPUT -d '{"nickname":"'$NICKNAME'"}' \
  -sH "Authorization: Bearer $TOKEN" \
  "https://chat.example.com/api/v4/users/$user_id/patch"
curl -XPUT -d '{"status":"'$STATUS'"}' \
  -sH "Authorization: Bearer $TOKEN" \
  "https://chat.example.com/api/v4/users/$user_id/status"
if [ -n "$3" ]; then
  curl -XPOST -d '{"channel_id":"'"$CHANNEL_ID"'", "message":"'"$3"'"}' \
  -sH "Authorization: Bearer $TOKEN" "https://chat.example.com/api/v4/posts"
fi

A couple of things to watch out there:

  • You need to save the TOKEN, not the TOKEN ID. Once created and saved, the actual TOKEN is no longer shown in the UI, so store it somewhere safe and use it in the script
  • The user needs to be able to create their own token. Follow the procedure per the docs here to allow them to do that. Yes, you need to do all that 🙂
  • The Channel ID can be copied from the channel drop-down menu > View Info. At the bottom left, in grey, you will see: `ID: xxxxxxxxxx`. That’s the one you need!

For convenience, I added a few aliases to my .bashrc:

alias lunch="mmstatus.sh 'abdallah|lunch' 'dnd' 'going to lunch break'"
alias back="mmstatus.sh 'abdallah|work' 'online' 'back!'"
alias goodmorning="mmstatus.sh 'abdallah|work' online 'Good morning :)'"

I know it’s better to add a slash-command for that. Something like ‘/nick …’ or ‘/status …’. I’ll check out those docs later.

Amazon SES Dashboard

At work, we wanted to switch from Mandrill/Mailchimp to Amazon SES for a long time. But that was not happening mainly because the tools SES offered to monitor sent mail were, how should I say, DIY.
So, after some deliberation and when I found some time to tackle it, I did it 🙂

The setup is not too complex? Well, it is. But once you understand it, it’s pretty basic.

Let’s start at the source: Amazon

You will see this notice under Notifications for each Email Address you create/verify in SES:

Amazon SES can send you detailed notifications about your bounces, complaints, and deliveries.
Bounce and complaint notifications are available by email or through Amazon Simple Notification Service (Amazon SNS).

The next step is to create the SNS topic; it’s just a label, really.

You will also need an Amazon SQS queue. A standard queue should be good. Once it’s there, copy the ARN as you will need that for the SNS subscription.

Let’s go back to the SNS topic we created and click the Create subscription button. Choose Amazon SQS as the Protocol and paste the ARN of the SQS queue you created earlier. You may need to confirm that too; just click the button if it’s there.
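
The same subscription can be created from the CLI (a sketch; both ARNs are placeholders, and the queue’s access policy must allow the topic to send to it):

aws sns subscribe --topic-arn arn:aws:sns:us-east-1:ACCOUNT_ID:ses-notifications \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:ACCOUNT_ID:ses-notifications-queue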

That’s all on the Amazon side! See how easy that was?!

Next you need a Graylog setup.

Where do I start? Well, first choose where you want to put that Graylog “machine”. For Amazon EC2 I would just go with their ready-made AMIs. Here’s the link/docs to follow: http://docs.graylog.org/en/latest/pages/installation/aws.html (but, and I quote: “The Graylog appliance is not created to provide a production ready solution”)

Another way to get started quickly is an Ansible role you can pick/install from Ansible Galaxy. Check out the QuickStart in the README per https://galaxy.ansible.com/Graylog2/graylog-ansible-role/#readme

But since I like doing things the “easy” way, I went with the Ubuntu 16.04 package per http://docs.graylog.org/en/latest/pages/installation/operating_system_packages.html
Seriously, it’s much easier to use and maintain since I know where everything is. Maybe it’s just me …
Anyway, here’s my bash session:

apt update && apt upgrade
sudo apt-get install apt-transport-https openjdk-8-jre-headless uuid-runtime pwgen
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list
apt update && apt install -y mongodb-org
systemctl daemon-reload
systemctl enable mongod.service
systemctl restart mongod.service
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
apt update && apt-get install elasticsearch
vi /etc/elasticsearch/elasticsearch.yml
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl restart elasticsearch.service
wget https://packages.graylog2.org/repo/packages/graylog-2.4-repository_latest.deb
dpkg -i graylog-2.4-repository_latest.deb
apt-get update && sudo apt-get install graylog-server
vi /etc/graylog/server/server.conf
systemctl daemon-reload
systemctl enable graylog-server.service
systemctl start graylog-server.service

I followed the instructions there, and installed Apache on top of that with the following configuration for the VirtualHost:


<VirtualHost *:443>
    ServerName example.com

    # Letsencrypt it
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf

    # The needed parts start here
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    <Location />
        RequestHeader set X-Graylog-Server-URL "https://example.com/api/"
        ProxyPass http://127.0.0.1:9000/
        ProxyPassReverse http://127.0.0.1:9000/
    </Location>
</VirtualHost>

This will leave you with a Graylog server ready to receive the logs. Now, how do we get the logs over to Graylog? Easy! Pull them from SQS.

Start by adding a GELF HTTP input in Graylog (System > Inputs > Select Input: GELF HTTP > Launch new input).
Make sure to get the port right there; you will need it to configure the script below.
Then download the script and make sure it’s executable. Do run it manually first; that way it will tell you what’s missing (boto3, requests).
Make sure to configure the AWS credentials. The quickest way is:
* to install awscli: apt-get install awscli
* and run its configuration: aws configure

Edit the script with the right configuration vars, then add it to cron to run as often as you feel necessary (I use it @hourly):

import boto3
import json
import requests
from datetime import datetime
import sys

HOST = 'MY.HOST.ADDRESS'
PORT = 12201  # change if you create the graylog input with a different port
queue_url = 'https://sqs.ZONE.amazonaws.com/ACCOUNT/QUEUENAME'

sqs = boto3.client('sqs')
response = sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=['ApproximateNumberOfMessages']
)
number_of_messages = int(response['Attributes']['ApproximateNumberOfMessages'])

for i in range(1, number_of_messages + 1):
    data = sqs.receive_message(QueueUrl=queue_url)
    if 'Messages' in data:
        body = json.loads(data['Messages'][0]['Body'])
        receipt_handle = data['Messages'][0]['ReceiptHandle']
        msg = json.loads(body['Message'])
        version = "1.1"
        host = "localhost"
        short_message = "Type: {}; Source: {}; Destination: {}".format(
            msg['notificationType'], msg['mail']['source'], msg['mail']['destination'][0])
        full_message = msg
        timestamp = datetime.strptime(msg['mail']['timestamp'].strip('Z'), '%Y-%m-%dT%H:%M:%S.%f').timestamp()
        to_gelf = {
            "version": version,
            "host": host,
            "short_message": short_message,
            "full_message": full_message,
            "timestamp": timestamp,
            "level": 1
        }
        r = requests.post('http://{}:{}/gelf'.format(HOST, PORT), json=to_gelf)
        if r.ok:
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=receipt_handle)

sys.exit(0)
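
And the matching cron entry (a sketch; the script path is made up, and the script needs Python 3 for datetime.timestamp()):

@hourly /usr/bin/python3 /opt/scripts/sqs_to_gelf.py >> /var/log/sqs_to_gelf.log 2>&1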

Enjoy the dashboard! Oh, there’s plenty to learn about Graylog if it’s your first time, but it’s pretty good once you get the hang of it.
