Start with the Ansible configuration. This can be set in /etc/ansible/ansible.cfg, ~/.ansible.cfg (in the home directory), or ansible.cfg (in the current directory).
My suggestion is to use one of the first two (i.e. /etc/ansible/ansible.cfg or ~/.ansible.cfg) if you’re going to be managing instances from your machine. Update the configuration as needed.
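For reference, a minimal ~/.ansible.cfg could look like this (the values here are illustrative, not a recommendation):

```ini
[defaults]
; point the default inventory at the directory holding the plugin configs
inventory = /etc/ansible/ansible_plugins
host_key_checking = False

[inventory]
; only load the inventory plugins you actually use
enable_plugins = aws_ec2
```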
You may need other plugins; this one is for aws_ec2. In the /etc/ansible/ansible_plugins directory, create the *_aws_ec2.yml configuration file for your inventory.
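A minimal aws_ec2 inventory file could look like this (the region and tag key are placeholders for your own):

```yaml
# my_aws_ec2.yml -- the file name must end in aws_ec2.yml for the plugin to pick it up
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  # group hosts by the value of their "Role" tag, e.g. role_web
  - key: tags.Role
    prefix: role
```

Verify it works with `ansible-inventory --graph`.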
So I wanted to have a better alarm system for when AWS hits us with unexpected costs. It’s better to know something is wrong quickly than to suffer hundreds of dollars in costs for something you don’t really need or want.
The AWS-provided alarm checks for hikes on a monthly basis (here’s the doc they published). It’s an alarm that sounds when your estimated bill is going to be higher than the budgeted amount, or what you had in mind in the first place. Honestly, not very useful in our case: it would just be too late.
The only alternative I found was creating a daily check that compares yesterday’s costs against a preset max_amount. Let’s say you want your daily bill to stay under US$5.
For ease of use and maintainability, I’m using a Lambda function triggered by a cron (EventBridge rule) for the daily checks. And I’m sending the alarm through an SNS topic; this way I can subscribe to it by email, send it to our Mattermost channel, etc.
Note that you will need to add a couple of environment variables to the Lambda: cost_metric and max_amount. The role used by the Lambda function also needs the following permissions: ce:GetCostAndUsage, sns:Publish and sns:CreateTopic.
After that’s set up, go to your SNS topic (created by the Lambda if it doesn’t exist) and subscribe to it. There you go: daily checks and an alarm if the bill is higher than expected.
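The Lambda code itself isn’t reproduced here, but the same daily check can be sketched from the shell with the AWS CLI (a sketch only: SNS_TOPIC_ARN and MAX_AMOUNT are placeholder variables, and awscli must be configured):

```shell
#!/bin/bash
# Daily cost check sketch -- compares yesterday's total against a budget.
MAX_AMOUNT="${MAX_AMOUNT:-5}"

# Succeeds (exit 0) when the cost exceeds the budget.
over_budget() {
    awk -v c="$1" -v m="$2" 'BEGIN { exit !(c > m) }'
}

check_yesterday() {
    local start end cost
    start=$(date -d yesterday +%F)
    end=$(date +%F)
    cost=$(aws ce get-cost-and-usage \
        --time-period "Start=${start},End=${end}" \
        --granularity DAILY --metrics UnblendedCost \
        --query 'ResultsByTime[0].Total.UnblendedCost.Amount' \
        --output text)
    if over_budget "$cost" "$MAX_AMOUNT"; then
        aws sns publish --topic-arn "$SNS_TOPIC_ARN" \
            --message "Daily AWS cost ${cost} exceeds ${MAX_AMOUNT}"
    fi
}

# Only run the check when invoked with "check", so the functions
# can also be sourced on their own.
if [ "${1:-}" = "check" ]; then
    check_yesterday
fi
```

Drop it in cron instead of Lambda if you prefer; the Cost Explorer and SNS calls are the same either way.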
I’ve been working towards making things simpler when managing distributed resources at work. And since we spend most of our day in the chat room (was Slack, now Mattermost), I thought it best to get started with ChatOps.
It’s just a fancy word for doing stuff right from the chat window. And there’s so much one can do, especially with simple Slash Commands.
Here’s a lambda function I setup yesterday for invalidating CloudFront distributions.
from time import time
import boto3
import json
import os
import re

EXPECTED_TOKEN = os.environ['mmToken']
# Allow commas and/or spaces between user names
ALLOWED_USERS = re.split(r'[,\s]+', os.environ['allowedUsers'])
DISTRIBUTIONS = {
    'site-name': 'DISTRIBUTIONID',
    'another.site': 'DISTRIBUTIONID05'
}


def parse_command_text(command_text):
    # Escape the site names so dots in them match literally
    pattern = r"({})\s+(.*)".format('|'.join(map(re.escape, DISTRIBUTIONS.keys())))
    m = re.match(pattern, command_text)
    if m:
        # create_invalidation expects a list of paths
        return {'site': m.group(1), 'path': m.group(2).split()}
    else:
        return False


def lambda_handler(event, context):
    # Parse the request
    try:
        request_data = event["queryStringParameters"]
    except (KeyError, TypeError):
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Use GET for setting up mattermost slash command" }'
        }
    # Check the token matches.
    if request_data.get("token", "") != EXPECTED_TOKEN:
        print('Wrong Token!')
        return {
            "statusCode": 401,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Mattermost token does not match" }'
        }
    # Check the user is allowed to run the command
    if request_data.get("user_name", "") not in ALLOWED_USERS:
        print('Wrong User! {} not in {}'.format(request_data['user_name'], ALLOWED_USERS))
        return {
            "statusCode": 401,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "User not allowed to perform action" }'
        }
    # Parse the command
    command_text = request_data.get("text", "")
    if not command_text:
        print('Nothing to do, bailing out')
        return {
            "statusCode": 404,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "No command text sent" }'
        }
    parts = parse_command_text(command_text)
    if not parts:
        print('Bad formatting - command: {}'.format(command_text))
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Wrong pattern" }'
        }
    # Do the actual work
    cf_client = boto3.client('cloudfront')
    # Invalidate
    boto_response = cf_client.create_invalidation(
        DistributionId=DISTRIBUTIONS[parts['site']],
        InvalidationBatch={
            'Paths': {
                'Quantity': len(parts['path']),
                'Items': parts['path']
            },
            'CallerReference': str(time()).replace(".", "")
        }
    )['Invalidation']
    # Build the response message text.
    text = """##### Executing invalidation
| Key | Info |
| --- | ---- |
| Site | {} |
| Path | {} |
| ID | {} |
| Status | {} |""".format(
        parts['site'],
        parts['path'],
        boto_response['Id'],
        boto_response['Status']
    )
    # Build the response object.
    response = {
        "response_type": "in_channel",
        "text": text,
    }
    # Return the response as JSON
    return {
        "body": json.dumps(response),
        "headers": {"Content-Type": "application/json"},
        "statusCode": 200,
    }
Note that you need to hook that up with an API Gateway in AWS. Once that’s done, you will have a URL endpoint ready for deployment.
Next, I created the slash command in Mattermost with the following:
(screenshot: slash command configuration)
That’s pretty much it. Rinse and repeat for a different command, different usage.
On my list next is to have more interaction with the user in mattermost per https://docs.mattermost.com/developer/interactive-messages.html Weekend Project, Yay!
To deploy, package the dependencies and the function code, then push the zip to Lambda:
cd package
zip -r9 ../function.zip .
cd ..
zip -g function.zip function.py
aws lambda update-function-code --function-name MyFunctionName --zip-file fileb://function.zip
MFA, 2FA, 2 step validation, etc. are everywhere these days. And it’s a good thing.
The problem with using the phone to get the authentication code is that you need to have it handy at all times (when you want to log in, at least), and that you have to read the code then type it in (too many steps).
One possible alternative is to use the command-line oathtool.
Here’s my snippet, I added the following line in my .bashrc
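Something along these lines does the trick (a sketch, assuming oathtool and xclip are installed, and each account’s base32 secret sits in its own file under ~/.mfa/):

```shell
# mfa: print a TOTP code and copy it to the clipboard.
# Assumes one base32 secret per file: ~/.mfa/<account_name>.mfa
mfa() {
    local code
    # -b tells oathtool the secret is base32-encoded
    code=$(oathtool --totp -b "$(cat ~/.mfa/"$1".mfa)") || return 1
    echo -n "$code" | xclip -selection clipboard
    echo "$code"
}
```

Then `mfa account_name` prints the current 6-digit code and puts it on the clipboard in one go.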
If you want to use the same account with both a phone-based MFA generator and the shell, set them up at the same time. Simply use the generated string for setting up the account in Google Authenticator (as an example), and then add it to ~/.mfa/account_name.mfa
The use of xclip automatically copies the 6 digit authentication code to the clipboard. You can go ahead and paste it.
The above setup works in Ubuntu. Didn’t try it on other systems.
It’s good to have the status page, especially if you need to troubleshoot issues that are not showing up in the regular logs, such as high load or memory consumption.
However, looking at that page and refreshing it manually is not always useful. Sometimes you need to log that data, or have a way to pinpoint a single PID causing the load.
First make sure you have the status page accessible. Here’s a tutorial I like: https://easyengine.io/tutorials/php/fpm-status-page/
Then create this script on the server. Make sure to change the connect part to your PHP-FPM pool’s correct port or socket:
#!/bin/bash
# Requirements: cgi-fcgi
# on ubuntu: apt-get install libfcgi0ldbl
RESULT=$(SCRIPT_NAME=/status \
    SCRIPT_FILENAME='/status' \
    QUERY_STRING=full \
    REQUEST_METHOD=GET \
    /usr/bin/cgi-fcgi -bind -connect 127.0.0.1:9000)

if [ -n "$1" ]; then
    echo -e "$RESULT" | grep -A12 "$1"
else
    echo -e "$RESULT"
fi
One way I use it: run `top` and check for the suspect process PID, then run `fpm_status.sh <PID>`
You’ll need to follow the instructions in the documentation to create the command on the server. Make sure to save the tokens in a safe place as usual.
A colleague asked me today for a quick way to set the nickname in Mattermost. He needed to do that to provide more information about his status than what the actual “Status” field shows, which is limited to “Online”, “Away”, “Do Not Disturb” and “Offline”.
So if you want to tell people you’re away for a couple of hours, or sick, walking the dog, etc. then you need to go IRC style and put the additional information in your nickname. Not too bad actually, just inconvenient.
I checked the Mattermost API docs and wrote a small bash script to get things going
#!/bin/bash
# Requirements:
# - get the token from Mattermost > Account Settings > Security > Personal Access Tokens > Create New Token
#   make sure to save the Token itself, not the ID!
# - install jq
TOKEN=GETYOUROWNTOKEN
NICKNAME=${1:-NickNack}
STATUS=${2:-online}
CHANNEL_ID="my_hello_channel_ID"

user_id=$(curl -sH "Authorization: Bearer $TOKEN" \
    https://chat.example.com/api/v4/users/me | jq -r .id)

curl -XPUT -d '{"nickname":"'"$NICKNAME"'"}' \
    -sH "Authorization: Bearer $TOKEN" \
    "https://chat.example.com/api/v4/users/$user_id/patch"

curl -XPUT -d '{"status":"'"$STATUS"'"}' \
    -sH "Authorization: Bearer $TOKEN" \
    "https://chat.example.com/api/v4/users/$user_id/status"

if [ -n "$3" ]; then
    curl -XPOST -d '{"channel_id":"'"$CHANNEL_ID"'", "message":"'"$3"'"}' \
        -sH "Authorization: Bearer $TOKEN" "https://chat.example.com/api/v4/posts"
fi
A couple of things to watch out for there:
You need to save the TOKEN, not the TOKEN ID. Once created, the actual token is no longer shown in the UI. So save it somewhere safe and use it in the script.
The user needs to be able to create their own token. Follow the procedure per the docs here to allow them to do that. Yes, you need to do all that 🙂
The Channel ID can be copied from the channel drop-down menu > View info. In the bottom left, in grey you will see: `ID: xxxxxxxxxx` that’s the one you need!
For convenience, I added a few aliases in my bashrc:
alias lunch="mmstatus.sh 'abdallah|lunch' 'dnd' 'going to lunch break'"
alias back="mmstatus.sh 'abdallah|work' 'online' 'back!'"
alias goodmorning="mmstatus.sh 'abdallah|work' online 'Good morning :)'"
I know it’s better to add a slash-command for that. Something like ‘/nick …’ or ‘/status …’. I’ll check out those docs later.
At work, we had wanted to switch from Mandrill/Mailchimp to Amazon SES for a long time. But that was not happening, mainly because the tools SES offered to monitor sent mail were, how should I say, DIY.
So, after some deliberation and when I found some time to tackle it, I did it 🙂
Is the setup complex? Well, a little. But once you understand it, it’s pretty basic.
Let’s start at the source: Amazon
You will see this notice under Notifications for each Email Address you create/verify in SES:
Amazon SES can send you detailed notifications about your bounces, complaints, and deliveries.
Bounce and complaint notifications are available by email or through Amazon Simple Notification Service (Amazon SNS).
Next step is to create the SNS Topic, it’s just a label really.
You will also need an Amazon SQS queue. A standard queue should be good. Once it’s there, copy the ARN as you will need that for the SNS subscription.
Let’s go back to the SNS Topic we created and click on the Create subscription button. Choose Amazon SQS for the Protocol and paste the ARN of the SQS queue you created earlier. You may need to confirm that too? Just click the button if it’s there.
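If you prefer the CLI over the console, the same steps can be sketched like this (names are placeholders; note that the queue also needs an access policy allowing the topic to send to it, which the console’s subscribe flow sets up for you):

```shell
# Create the topic and queue, then subscribe the queue to the topic
TOPIC_ARN=$(aws sns create-topic --name ses-notifications \
    --query TopicArn --output text)
QUEUE_URL=$(aws sqs create-queue --queue-name ses-notifications \
    --query QueueUrl --output text)
QUEUE_ARN=$(aws sqs get-queue-attributes --queue-url "$QUEUE_URL" \
    --attribute-names QueueArn --query Attributes.QueueArn --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" \
    --protocol sqs --notification-endpoint "$QUEUE_ARN"
```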
That’s all on the Amazon side! See how easy that was?!
Next you need a Graylog setup.
Where do I start? Well, first choose where you want to put that Graylog “machine”. For Amazon EC2 I would just go with their ready-made AMIs. Here’s the link/docs to follow: http://docs.graylog.org/en/latest/pages/installation/aws.html (but, and I quote: “The Graylog appliance is not created to provide a production ready solution”)
But since I like doing things the “easy” way, I went with the Ubuntu 16.04 package per http://docs.graylog.org/en/latest/pages/installation/operating_system_packages.html
Seriously, it’s much easier to use and maintain since I know where everything is. Maybe it’s just me …
Anyway, here’s my bash session:
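Roughly, it follows the package docs (a sketch; the version number is illustrative, and the docs also cover installing MongoDB and Elasticsearch, which Graylog requires):

```shell
# Prerequisites, then the Graylog repo package and the server itself
sudo apt-get install apt-transport-https openjdk-8-jre-headless uuid-runtime pwgen
wget https://packages.graylog2.org/repo/packages/graylog-2.4-repository_latest.deb
sudo dpkg -i graylog-2.4-repository_latest.deb
sudo apt-get update
sudo apt-get install graylog-server
sudo systemctl enable graylog-server && sudo systemctl start graylog-server
```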
I followed the instructions there, and installed Apache on top of that with the following configuration for the VirtualHost
<VirtualHost *:443>
    ServerName example.com
    # Letsencrypt it
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
    # The needed parts start here
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    RequestHeader set X-Graylog-Server-URL "https://example.com/api/"
    <Location />
        ProxyPass http://127.0.0.1:9000/
        ProxyPassReverse http://127.0.0.1:9000/
    </Location>
</VirtualHost>
This will leave you with a Graylog server ready to receive the logs. Now, how do we get the logs over to Graylog? Easy! Pull them from SQS.
Start by adding a GELF HTTP Input in Graylog (System > Inputs > Select Input: GELF HTTP > Launch new input)
Make sure to get the port right there; you will need it when configuring the script below.
Then download the script and make sure it’s executable. Run it manually first; that way it will tell you what’s missing (e.g. boto3).
Make sure to configure AWS credentials. The quickest way is:
* to install awscli: apt-get install awscli
* and run its configuration: aws configure
Edit the script with the right configuration vars, add it to cron to run as much as you feel necessary (I use it @hourly)
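My script isn’t reproduced above, but the core of such a poller can be sketched like this (the queue URL, GELF input URL, and field choices are illustrative; the SES notification arrives in the SQS body wrapped in an SNS envelope):

```python
import json


def ses_to_gelf(sqs_body, host="ses"):
    """Turn one SQS message body (an SNS envelope wrapping an SES
    notification) into a GELF message dict."""
    notification = json.loads(json.loads(sqs_body)["Message"])
    ntype = notification["notificationType"]
    mail = notification["mail"]
    return {
        "version": "1.1",
        "host": host,
        "short_message": "SES {}: {}".format(ntype, mail["source"]),
        "_notification_type": ntype,
        "_message_id": mail["messageId"],
    }


def drain_queue(queue_url, gelf_url):
    """Pull everything off the queue and POST it to the GELF HTTP input."""
    import boto3
    import urllib.request
    sqs = boto3.client("sqs")
    while True:
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            break
        for message in messages:
            payload = json.dumps(ses_to_gelf(message["Body"])).encode()
            urllib.request.urlopen(urllib.request.Request(
                gelf_url, data=payload,
                headers={"Content-Type": "application/json"}))
            # Only delete once Graylog has accepted the message
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=message["ReceiptHandle"])
```

From cron, call drain_queue with your queue URL and the GELF HTTP input address, e.g. http://127.0.0.1:12201/gelf (whatever port you picked for the input).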