Things I tend to forget


Find the owner of an AWS Access Key

This is something you will have to deal with at one time or another after managing AWS IAM users for a while. Basically, it’s straightforward with the following steps:

  • get a list of users
  • for each user, get their access key IDs
  • (optional) pipe to grep to check for a specific ID

And here’s the `aws-cli` code

for user in $(aws iam list-users --output text --query 'Users[*].UserName'); do
  aws iam list-access-keys --user-name "$user" --output text
done

# or, to check for a specific key ID
Q_ACCESSKEY=AKIA*************
for user in $(aws iam list-users --output text --query 'Users[*].UserName'); do
  aws iam list-access-keys --user-name "$user" --output text | grep "$Q_ACCESSKEY"
done
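To make the lookup print just the owner, an awk on the end does the trick. Here’s a quick simulation on canned output (the column layout is what I recall from the CLI’s text output: AccessKeyId, CreateDate, Status, UserName; the key IDs and user names are made up):

```shell
# Simulated `aws iam list-access-keys --output text` rows for two users.
output='ACCESSKEYMETADATA AKIAEXAMPLE111111111 2017-01-01T00:00:00Z Active alice
ACCESSKEYMETADATA AKIAEXAMPLE222222222 2017-01-01T00:00:00Z Active bob'
Q_ACCESSKEY=AKIAEXAMPLE222222222
# grep keeps the matching row; awk prints the last column (the user name)
owner=$(echo "$output" | grep "$Q_ACCESSKEY" | awk '{print $NF}')
echo "$owner"
```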

 


Handling AWS Simple Email Service notifications

Apparently it’s not enough to gather information on your email sending, bounces and complaints. You also need to act when there’s a problem, otherwise Amazon will put your account under probation and might even shut it down completely if you don’t fix the problem.

And it’s pretty easy to implement per their sample code and the explanation in the AWS blog.

Here are the steps I use to set it up:

  • Create an SNS topic. Let’s call it SES-BOUNCES. Or create 2 SNS topics (1 for bounces, another for complaints). I will go with just one.
  • Create a service on your site to handle the bounces. I used some simple PHP code (pasted below) for that.
  • Create a subscription in your SNS topic (created first)

The service should handle 2 things:

  • the subscription (that will happen once!): call the SubscribeURL and you’re done.
  • the bounces/complaints: remove the email addresses from your mailing list, or blacklist them, or … The main idea is to stop sending emails to those who complain, and stop trying to send emails to addresses that bounce.
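For reference, the bounce notification that lands inside the SNS message looks roughly like this, trimmed down to the fields the code below actually reads (the values are made up):

```json
{
  "notificationType": "Bounce",
  "bounce": {
    "bounceType": "Permanent",
    "bouncedRecipients": [
      { "emailAddress": "dead@example.com" }
    ]
  }
}
```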

Here’s the code for the PHP service; it’s pretty straightforward:

if ( 'POST' !== $_SERVER['REQUEST_METHOD'] ) {
	http_response_code( 405 );
	die;
}

require 'vendor/autoload.php';

use Aws\Sns\Message;
use Aws\Sns\MessageValidator;

function parse_bounce( $message ) {
	$recipients = array();
	foreach ( $message['bounce']['bouncedRecipients'] as $recipient ) {
		$recipients[] = $recipient['emailAddress'];
	}
	switch ( $message['bounce']['bounceType'] ) {
		case 'Transient':
			error_log( "BouncesController - Soft bounces occurred for " . join( ', ', $recipients ) );

			return array();
		case 'Permanent':
			error_log( "BouncesController - Hard bounces occurred for " . join( ', ', $recipients ) );

			return $recipients;
	}

	// e.g. an 'Undetermined' bounce type: do nothing
	return array();
}

function parse_complaint( $message ) {
	$recipients = array();
	foreach ( $message['complaint']['complainedRecipients'] as $recipient ) {
		$recipients[] = $recipient['emailAddress'];
	}

	return $recipients;
}

function blacklist_recipients( $recipients ) {
	foreach ( $recipients as $email_address ) {
		// blacklist those emails
	}
}

try {
	$sns_message = Message::fromRawPostData();

	$validator = new MessageValidator();

	// isValid() returns false on failure; validate() would throw instead
	if ( $validator->isValid( $sns_message ) ) {
		if ( in_array( $sns_message['Type'], [ 'SubscriptionConfirmation', 'UnsubscribeConfirmation' ] ) ) {
			file_get_contents( $sns_message['SubscribeURL'] );
			error_log( 'Subscribed to ' . $sns_message['SubscribeURL'] );
		}

		if ( $sns_message['Type'] == 'Notification' ) {
			// the SES notification arrives as a JSON string inside the SNS message
			$message           = json_decode( $sns_message['Message'], true );
			$notification_type = $message['notificationType'];
			if ( $notification_type == 'Bounce' ) {
				blacklist_recipients( parse_bounce( $message ) );
			}
			if ( $notification_type == 'Complaint' ) {
				blacklist_recipients( parse_complaint( $message ) );
			}
		}
	}


} catch ( Exception $e ) {
	error_log( 'Error: ' . $e->getMessage() );
	http_response_code( 404 );
	die;
}

Don’t forget to run composer to get those dependencies:

{
  "require": {
    "aws/aws-sdk-php": "3.*",
    "aws/aws-php-sns-message-validator": "1.4.0"
  }
}

And make sure you don’t send unsolicited emails! It’s just not nice.


Slash command for Mattermost

Following up on the code that set the nickname and status via a bash function, I wanted to do the same using a slash command.

Here’s the code in PHP

<?php
require 'vendor/autoload.php';
use GuzzleHttp\Client;
$client = new Client(['base_uri' => 'https://chat.example.com/api/v4/']);
$SLASHCMD_TOKEN = 'GET YOUR TOKEN';
$ADMIN_TOKEN = 'GET ADMIN TOKEN';
if (isset($_POST['token']) && $_POST['token'] == $SLASHCMD_TOKEN && $_POST['command'] == '/status' && !empty($_POST['text']))
{
    $user_id = $_POST['user_id'];
    $text = $_POST['text'];
    // split on whitespace, but keep "quoted strings" together
    $params = preg_split('/("[^"]*")|\h+/', $text, 2, PREG_SPLIT_NO_EMPTY | PREG_SPLIT_DELIM_CAPTURE);
    if ($params && count($params) > 0)
    {
        // strip the surrounding quotes the capture group keeps
        $nickname = trim($params[0], '"');
        $status = (count($params) == 2) ? trim($params[1]) : '';
        $response = $client->request('PUT', "users/$user_id/patch", ['headers' => ['Authorization' => "Bearer " . $ADMIN_TOKEN], 'json' => ['nickname' => $nickname]]);
        if (in_array($status, array('dnd', 'online', 'offline', 'away')))
        {
            $response = $client->request('PUT', "users/$user_id/status", ['headers' => ['Authorization' => "Bearer " . $ADMIN_TOKEN], 'json' => ['status' => $status]]);
        }
    }
}

Seems to work for me.
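One nicety worth adding: a slash command can answer with a small JSON payload so the user gets feedback right in the channel. Echoing something like this from the PHP service would show a private confirmation (the response_type/text fields are per the Mattermost slash-command docs):

```json
{
  "response_type": "ephemeral",
  "text": "Nickname and status updated!"
}
```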

You’ll need to follow the instructions in the documentation to create the command on the server. Make sure to save the tokens in a safe place as usual.


Change Nickname in Mattermost

A colleague asked me today for a quick way to set the nickname in Mattermost. He needed to do that to provide more information about his status than what the actual “Status” shows, which is limited to “Online”, “Away”, “Do Not Disturb” and “Offline”.

So if you want to tell people you’re away for a couple of hours, or sick, walking the dog, etc. then you need to go IRC style and put the additional information in your nickname. Not too bad actually, just inconvenient.

I checked the Mattermost API docs and wrote a small bash script to get things going

#!/bin/bash
# Requirements:
#  - get the token from Mattermost > Account Settings > Security > Personal Access Tokens > Create New Token
#    make sure to save the Token itself, not the ID!
#  - install jq
TOKEN=GETYOUROWNTOKEN
NICKNAME=${1:-NickNack}
STATUS=${2:-online}
CHANNEL_ID="my_hello_channel_ID"

user_id=$(curl -sH "Authorization: Bearer $TOKEN" \
  https://chat.example.com/api/v4/users/me | jq -r .id)
curl -XPUT -d '{"nickname":"'"$NICKNAME"'"}' \
  -sH "Authorization: Bearer $TOKEN" \
  "https://chat.example.com/api/v4/users/$user_id/patch"
curl -XPUT -d '{"status":"'"$STATUS"'"}' \
  -sH "Authorization: Bearer $TOKEN" \
  "https://chat.example.com/api/v4/users/$user_id/status"
if [ -n "$3" ]; then
  curl -XPOST -d '{"channel_id":"'"$CHANNEL_ID"'", "message":"'"$3"'"}' \
  -sH "Authorization: Bearer $TOKEN" "https://chat.example.com/api/v4/posts"
fi
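A note on the quoting in those curl lines: the '"$VAR"' dance keeps the shell from word-splitting the value inside the JSON body. A safer alternative I sometimes use is printf:

```shell
NICKNAME='abdallah|lunch break'
# printf substitutes the value without the shell re-splitting it
printf '{"nickname":"%s"}\n' "$NICKNAME"
```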

A couple of things to watch out for there:

  • You need to save the TOKEN, not the TOKEN ID. Once created, the actual TOKEN is no longer shown in the UI, so save it somewhere safe and use it in the script
  • The user needs to be able to create their own token. Follow the procedure per the docs here to allow them to do that. Yes, you need to do all that 🙂
  • The Channel ID can be copied from the channel drop-down menu > View info. In the bottom left, in grey, you will see `ID: xxxxxxxxxx`; that’s the one you need!

 

For convenience, I added a few aliases in my bashrc:

alias lunch="mmstatus.sh 'abdallah|lunch' 'dnd' 'going to lunch break'"
alias back="mmstatus.sh 'abdallah|work' 'online' 'back!'"
alias goodmorning="mmstatus.sh 'abdallah|work' online 'Good morning :)'"

I know it’s better to add a slash-command for that. Something like ‘/nick …’ or ‘/status …’. I’ll check out those docs later.


Stop un-tagged instances

After setting up a bastion and getting GitLab Runner to autoscale on EC2 spot instances, I noticed that some instances were being started but left un-tagged and probably unused. Those seem to slip through the cracks somehow; I’m still investigating why that’s happening.
Meanwhile, to avoid paying for those instances, I set up a cron job to check for un-tagged instances on EC2 and terminate them.

Here’s my bash code using aws-cli

#!/bin/bash
INSTANCES=$(aws ec2 describe-instances \
  --filters "Name=key-name,Values=runner-*" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[?!not_null(Tags[?Key == `Name`].Value)] | [].[InstanceId]' \
  --output text)
if [ -n "$INSTANCES" ]; then 
  aws ec2 terminate-instances --instance-ids $INSTANCES 
fi
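And the crontab entry that drives it (the interval and script path here are just my own choices):

```
# check for un-tagged runner instances every 30 minutes
*/30 * * * * /usr/local/bin/terminate-untagged.sh
```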

This looks for running instances whose key name matches runner-*, and where the Name tag is not not_null (in other words, null!).

It’s working so far. Will keep on looking for a more permanent solution.


Scaling Gitlab-CI Runners using AWS/EC2

I started working on this task a week ago. The setup is based on the write-up in https://docs.gitlab.com/runner/configuration/autoscale.html
Almost there …

What I previously had set up was a few persistent EC2 Spot requests for largish machines at half the on-demand price (or less). However, I felt (and heard complaints) that those were not enough at rush hour (when everyone and their cat wanted to push their code changes and test them). At the same time, even with Spot instances we had to pay for mostly unused servers more than half the time. That felt like a huge waste to me, and I know many feel the same.

Problems to solve:
* Get the right size instances for the right price
* Set up inside a VPC
* Use S3 for caching
* Use a Docker registry proxy (or set up the GitLab Docker registry) for custom Docker images used in CI jobs
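For reference, the bits of the runner’s config.toml that drive the autoscaling look roughly like this. This is a sketch assembled from the autoscale docs; every value below is a placeholder, not our production config:

```toml
concurrent = 10

[[runners]]
  name = "autoscale-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker+machine"
  [runners.docker]
    image = "alpine:latest"
  [runners.machine]
    IdleCount = 0            # no idle machines outside rush hour
    IdleTime = 1800          # kill idle machines after 30 minutes
    MachineDriver = "amazonec2"
    MachineName = "gitlab-runner-%s"
    MachineOptions = [
      "amazonec2-instance-type=m4.large",
      "amazonec2-request-spot-instance=true",
      "amazonec2-spot-price=0.05",
      "amazonec2-vpc-id=vpc-xxxxxxxx",
    ]
  [runners.cache]
    Type = "s3"
    BucketName = "runner-cache"
```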


Amazon SES Dashboard

At work, we wanted to switch from Mandrill/Mailchimp to Amazon SES for a long time. But that was not happening mainly because the tools SES offered to monitor sent mail were, how should I say, DIY.
So, after some deliberation and when I found some time to tackle it, I did it 🙂

The setup is not too complex? Well, it is. But once you understand it, it’s pretty basic.

Let’s start at the source: Amazon

You will see this notice under Notifications for each Email Address you create/verify in SES:

Amazon SES can send you detailed notifications about your bounces, complaints, and deliveries.
Bounce and complaint notifications are available by email or through Amazon Simple Notification Service (Amazon SNS).

The next step is to create the SNS Topic; it’s just a label, really.

You will also need an Amazon SQS queue. A standard queue should be good. Once it’s there, copy the ARN as you will need that for the SNS subscription.

Let’s go back to the SNS Topic we created and click on the Create subscription button. Choose Amazon SQS for the Protocol and paste the ARN of the SQS queue you created earlier. You may need to confirm that too? Just click the button if it’s there.

That’s all on the Amazon side! See how easy that was?!

Next you need a Graylog setup.

Where do I start? Well, first choose where you want to put that Graylog “machine”. For Amazon EC2 I would just go with their ready-made AMIs. Here’s the link/docs to follow: http://docs.graylog.org/en/latest/pages/installation/aws.html (but, and I quote: “The Graylog appliance is not created to provide a production ready solution”)

Another way to get started quickly is an Ansible role you can pick/install from Ansible Galaxy. Check out the QuickStart in the README per https://galaxy.ansible.com/Graylog2/graylog-ansible-role/#readme

But since I like doing things the “easy” way, I went with the Ubuntu 16.04 package per http://docs.graylog.org/en/latest/pages/installation/operating_system_packages.html
Seriously, it’s much easier to use and maintain since I know where everything is. Maybe it’s just me …
Anyway, here’s my bash session:

apt update && apt upgrade
apt install apt-transport-https openjdk-8-jre-headless uuid-runtime pwgen
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.6.list
apt update && apt install -y mongodb-org
systemctl daemon-reload
systemctl enable mongod.service
systemctl restart mongod.service
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-5.x.list
apt update && apt install elasticsearch
vi /etc/elasticsearch/elasticsearch.yml
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl restart elasticsearch.service
wget https://packages.graylog2.org/repo/packages/graylog-2.4-repository_latest.deb
dpkg -i graylog-2.4-repository_latest.deb
apt update && apt install graylog-server
vi /etc/graylog/server/server.conf
systemctl daemon-reload
systemctl enable graylog-server.service
systemctl start graylog-server.service

I followed the instructions there, and installed Apache on top of that with the following configuration for the VirtualHost


<VirtualHost *:443>
ServerName example.com

# Letsencrypt it
SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf

# The needed parts start here
ProxyRequests Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>

RequestHeader set X-Graylog-Server-URL "https://example.com/api/"
ProxyPass / http://127.0.0.1:9000/
ProxyPassReverse / http://127.0.0.1:9000/
</VirtualHost>

This will leave you with a Graylog server ready to receive the logs. Now, how do we get the logs over to Graylog? Easy! Pull them from SQS.

Start by adding a GELF HTTP Input in Graylog (System > Inputs > Select Input: GELF HTTP > Launch new input)
Make sure to get the port right there; you will need it to configure the script below.
Then download the script and make sure it’s executable. Run it manually first; that way it will tell you what’s missing (boto3).
Make sure to configure AWS credentials. The quickest way is:
* install awscli: apt-get install awscli
* run its configuration: aws configure

Edit the script with the right configuration vars, then add it to cron to run as often as you feel necessary (I use @hourly)
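Each SQS message ends up as a GELF payload POSTed to that input. Per the GELF spec it needs at least version, host and short_message, plus any custom fields prefixed with an underscore; something like this (the values are made up):

```json
{
  "version": "1.1",
  "host": "ses-notifications",
  "short_message": "Bounce: dead@example.com",
  "_notificationType": "Bounce"
}
```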

Enjoy the dashboard! Oh, there’s plenty to learn about Graylog if it’s your first time, but it’s pretty good once you get the hang of it.


Return a static page for specific users/IPs

Why do I want to do that?
Different reasons. One time, because I wanted a certain user not to access the dynamic WordPress site. Another time, I wanted to provide a bot that had been crawling the site with a “legitimate” page, without actually allowing it to go through the site. The main thing I was looking for is a way to do an internal redirect, so no 3xx code is returned. I know there are probably better ways to achieve these goals (are there?). But hey, I learned some stuff about Nginx while doing this.

So here it goes, the first attempt:

  location ~* /some/path/with_numbers/\d+ {
    if ($remote_addr = 11.11.111.1) {
       return 200 "sample reply - should be empty";
    }
    # the next line is reached only when the above is not executed
    try_files $uri $uri/ /index.php$is_args$args;
  }

One problem with the above is that replacing the IP or adding more IPs is a bit problematic. So, we replace it with the following that relies on the Geo module:

geo $bad_ip {
  default 0;
  1.2.3.4/32 1;
  4.3.2.1/32 1;
}

server {
[...]
  location ~* /some/path/with_numbers/\d+ {
    if ($bad_ip) {
       return 200 "sample reply - should be empty";
    }
    # the next line is reached only when the above is not executed
    try_files $uri $uri/ /index.php$is_args$args;
  }
}

The other problem is that the text returned with the 200 code is a bit simplistic and I really wanted to send an HTML static page, not a stupid line. The fix uses error_page


[...]
 location ~* /some/path/with_numbers/\d+ {
    if ($bad_ip) { return 410; }
    error_page 410 =200 /my_static_page.html;
    # the next line is reached only when the above is not executed
    try_files $uri $uri/ /index.php$is_args$args;
  }

The result is a 200 (OK) code sent to the browser with a static HTML page that should load much faster than a PHP/RoR/etc alternative.

Of course, more can be done to identify the blocked entity, for example using UserAgent string, etc.
Leaving that for another day.
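If you’re curious, the same trick keyed on the User-Agent string could start like this (an untested sketch; the bot name is a placeholder):

```nginx
map $http_user_agent $bad_agent {
  default 0;
  ~*annoyingbot 1;   # case-insensitive regex match on the UA string
}
# then inside the location: if ($bad_agent) { return 410; }
```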


How NOT to ban Googlebot

Google does not provide a list of IPs to identify their bots, so you can’t simply add that to fail2ban’s ‘ignoreip =’ line.

Instead, according to their answer per https://support.google.com/webmasters/answer/80553?hl=en you can only verify the bot’s provenance by checking the DNS for the bot’s IP. In fact, they ask you to run 2 queries (1 reverse and 1 forward lookup) to make sure that the IP is not spoofed.

My simple 1-reverse-lookup script is:

#!/bin/bash
IP="$1"
HOSTRESULT="$(host -W 5 "${IP}")" # -W needs a timeout in seconds; updated thanks to comment from Martin
REGEX='.*(googlebot\.com\.|google\.com\.)$'
if [[ "$HOSTRESULT" =~ $REGEX ]]; then exit 0; else exit 1; fi

And add that to /etc/fail2ban/jail.local:

ignorecommand = /usr/local/bin/ignore_ip_check.sh <ip>

This needs more testing, and I should add the second forward lookup, but for now it seems to do the trick.
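To sanity-check the regex without waiting for a real ban, feed it a canned reverse-lookup result (this line mimics what host prints for a genuine Googlebot IP):

```shell
# what `host` typically prints for a real Googlebot IP (trailing dot included)
HOSTRESULT="1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com."
REGEX='.*(googlebot\.com\.|google\.com\.)$'
if [[ "$HOSTRESULT" =~ $REGEX ]]; then echo "would ignore"; else echo "would ban"; fi
```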


MySQL Slow Query Log

Why, oh why, is this so complicated?!

Well, it’s not… just a bit confusing when you don’t know where to look.

TL;DR: (Ubuntu 16.04, otherwise YMMV, read below)

Add the following to your active MySQL configuration file:

[mysqld]
slow_query_log
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
log-queries-not-using-indexes

What could go wrong?

Wrong configuration file:

on Ubuntu 16.04 the file can be in /etc/mysql/my.cnf or /etc/mysql/conf.d/mysql.cnf or even /etc/mysql/mysql.conf.d/mysqld.cnf

To find which one is “active”, run: mysqld --verbose --help | grep -A 1 "Default options"

the result is something like the following:
Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf

if you open /etc/mysql/my.cnf, you will find at the bottom:
!includedir /etc/mysql/conf.d/

That’s where the actual files are stored.

Wrong variable names:

If you follow different tutorials on the Internet you will find configurations that mention:
log_slow_queries=/var/log/mysql/slow-query.log
DO NOT use that, it’s deprecated.
The new name of the variable is `slow_query_log_file` for the actual log file, and YOU SHOULD have `slow_query_log` as a boolean variable (ON, 1, or just mentioned in the file as in the code above).

Troubleshooting

Tail the /var/log/mysql/error.log in a separate terminal, see what’s failing. Example:

[Warning] option 'slow_query_log': boolean value '/var/log/mysql/mysql-slow.log' wasn't recognized. Set to OFF.
Obviously the variable is set wrongly to the filename. They are now 2 separate variables.
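Worth knowing: all three variables are dynamic, so you can also flip them at runtime without a restart (this lasts until the server restarts; you still need the config file to make it stick):

```sql
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
SET GLOBAL long_query_time = 2;
SHOW GLOBAL VARIABLES LIKE 'slow_query%';
```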
