Things I tend to forget


Quick WHM/cPanel All email passwords reset

You generally don’t want to use this script. This is a last resort, when all mailbox passwords for the entire account need to be changed. For this script to work, you need root access to your WHM/cPanel server. This could ruin everything, be warned!

Here be dragons!

#!/bin/bash
newpass='SomeStr0ngPasswordGoesHere!'
accountname='example'
domainname='example.com'

uapi --user=$accountname Email list_pops \
  | grep email \
  | awk '{print$2}' | awk -F\@ '{print$1}' | \
  while read m; do
    echo changing password for $m
    uapi --user=$accountname Email passwd_pop domain=$domainname email=$m password=$newpass
  done
echo done

Here are the links to the official API docs:

  • https://documentation.cpanel.net/display/DD/UAPI+Functions+-+Email%3A%3Apasswd_pop
  • https://documentation.cpanel.net/display/DD/UAPI+Functions+-+Email%3A%3Alist_pops

Is there a better way to do this? Probably.

Can I randomize the passwords? Sure, but then you need a way to get the new passwords to your users.
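
If you do want random passwords, here's a minimal sketch of the same loop that generates one per mailbox and records them in a CSV so you can hand them out afterwards (the output file name is just an example; keep the file safe and delete it once you're done):

#!/bin/bash
accountname='example'
domainname='example.com'
outfile='./new-passwords.csv'

uapi --user=$accountname Email list_pops \
  | grep email \
  | awk '{print$2}' | awk -F\@ '{print$1}' | \
  while read m; do
    newpass="$(openssl rand -base64 15)"   # 20 random characters per mailbox
    echo changing password for $m
    uapi --user=$accountname Email passwd_pop domain=$domainname email=$m password="$newpass"
    echo "${m}@${domainname},${newpass}" >> "$outfile"
  done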

Hope it helps!





Quick ElasticSearch Index Cleanup

There are many solutions using curator and other libraries. I'll try to add links here later (if I remember).

However, most of the time all that's needed is a quick curl call:

curl -X DELETE "https://${ES_ENDPOINT}/${INDEX_NAME}"

Get the ES endpoint from the domain overview and put it in the ES_ENDPOINT variable.

List the index names using:

curl "https://${ES-Endpoint}/_cat/indices"

So, for a quick cleanup of everything that matches a grep:

curl "https://${ES-Endpoint}/_cat/indices" | grep 'something' | awk '{print$3}' \
while read INDEX_NAME; do
  echo "Deleting ${INDEX_NAME}"
  curl -X DELETE "https://${ES-Endpoint}/${INDEX_NAME}"
done





My Ansible AWS EC2 Dynamic Inventory

Start with the Ansible configuration. This can be set in /etc/ansible/ansible.cfg, ~/.ansible.cfg (in the home directory), or ansible.cfg (in the current directory).

My suggestion is to use one of the first two (i.e. /etc/ansible/ansible.cfg or ~/.ansible.cfg) if you're going to be managing instances from your machine. Update the configuration as needed.

[defaults]
inventory = /etc/ansible/ansible_plugins
host_key_checking = False
pipelining = True
log_path = /var/log/ansible

[inventory]
enable_plugins = aws_ec2

You may need other plugins; this one is for aws_ec2. In the /etc/ansible/ansible_plugins directory, create a *_aws_ec2.yml configuration file for your inventory (the aws_ec2 plugin only picks up files whose names end in aws_ec2.yml or aws_ec2.yaml).

# /etc/ansible/ansible_plugins/testing_aws_ec2.yml
---
plugin: aws_ec2
aws_profile: testing
regions:
  - us-east-1
  - us-east-2
filters:
  tag:Team: testing
  instance-state-name: running
hostnames:
  - instance-id
  - dns-name
keyed_groups:
  - prefix: team
    key: tags['Team']

I'm filtering on the tag Team == testing and showing only running instances.

I'm also using the instance-id and dns-name attributes as hostnames.

And I'm using tags['Team'] as the grouping key, which produces the team_testing group below.

So now, I can do the following from any directory (since my configuration is global in /etc/ansible)

$ ansible-inventory --list --yaml

all:
  children:
    aws_ec2:
      hosts:
        i-xxxxxxxxxxxxxxx:
          ami_launch_index: 0
          architecture: x86_64
          block_device_mappings:
          - device_name: /dev/sda1
            ebs:
              attach_time: 2020-08-10 15:20:58+00:00
              delete_on_termination: true
              status: attached
              volume_id: vol-xxxxxxxxxxxxxx
...
    team_testing:
      hosts:
        i-xyxyxyxyxyyxyxyy: {}
        i-xyxyxy2321yxyxyy: {}
        i-xyxyxyxyxy89yxyy: {}
        i-xyxy1210xyyxyxyy: {}
        i-xyxy999999yxyxyy: {}
        i-xyxyxy44xyyxyxyy: {}
        i-xyx2323yxyyxyxyy: {}
        i-xyxyxyxyxy9977yy: {}
    ungrouped: {}

I can also use team_testing or an individual instance id as the hosts target in my Ansible playbooks and ad-hoc calls.
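
With that in place, a quick connectivity check against the whole group looks something like this (a minimal sketch, assuming SSH access and keys for those instances are already configured):

ansible team_testing -m ping
# or target a single instance by its id
ansible i-xxxxxxxxxxxxxxx -m ping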





Get daily cost alerts on AWS

So I wanted a better alarm system for when AWS hits us with unexpected costs. It's better to know something is wrong quickly than to suffer hundreds of dollars in costs for something you don't really need or want.

The AWS-provided alarm checks for hikes on a monthly basis. Here's the doc they published. So that's an alarm that sounds when your estimated bill is going to be higher than the budgeted amount, or what you had in mind in the first place. Honestly, not very useful in our case; it will just be too late.

The only alternative I found was creating a daily check that compares yesterday's costs against a preset max_amount. Let's say you want your daily bill not to exceed US$5.

For ease of use and maintainability, I'm using a Lambda function triggered by a cron (EventBridge rule) for the daily checks. And I'm sending the alarm using an SNS topic; this way I can subscribe to it by email, send it to our Mattermost channel, etc.
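
For reference, here's roughly how the daily trigger can be wired up from the CLI; the rule name, schedule, and account id below are only placeholders, and the console works just as well:

# run the check once a day at 06:00 UTC
aws events put-rule --name daily-cost-check --schedule-expression "cron(0 6 * * ? *)"
# point the rule at the Lambda function
aws events put-targets --rule daily-cost-check \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:checkDailyCosts"
# allow EventBridge to invoke the function
aws lambda add-permission --function-name checkDailyCosts \
  --statement-id daily-cost-check --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/daily-cost-check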

Here’s the code:

import os
import json
import boto3
from datetime import datetime, timedelta


def lambda_handler(event, context):
    yesterday   = datetime.strftime(datetime.now() - timedelta(1), '%Y-%m-%d')
    twodaysago  = datetime.strftime(datetime.now() - timedelta(2), '%Y-%m-%d')
    cost_metric = os.environ.get('cost_metric')
    max_amount  = float(os.environ.get('max_amount'))
    
    ce  = boto3.client('ce')
    sns = boto3.client('sns')
    
    result = ce.get_cost_and_usage(
        TimePeriod={'Start': twodaysago, 'End': yesterday}, 
        Granularity='DAILY', 
        Metrics=[cost_metric]
    )
    total_amount = float(result['ResultsByTime'][0].get('Total').get(cost_metric).get('Amount'))
    
    if total_amount > max_amount:
        sns_topic = sns.create_topic(Name='BillingAlert')
        sns.publish(
            TopicArn=sns_topic.get('TopicArn'),   
            Message='Total cost "{} USD" exceeded max_amount rate: {}'.format(total_amount, max_amount)   
        )
        
    return {
        'statusCode': 200,
        'body': json.dumps('cost check: {}'.format(total_amount))
    }

Note that you will need to add a couple of environment variables to Lambda: cost_metric and max_amount
And the following permissions to the role used by the lambda function: ce:GetCostAndUsage, sns:Publish and sns:CreateTopic

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ce:GetCostAndUsage",
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "sns:Publish",
                "sns:CreateTopic",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:*:log-group:/aws/lambda/checkDailyCosts:*",
                "arn:aws:sns:*:*:*"
            ]
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:*:*"
        }
    ]
}
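
The two environment variables can be set in the console or from the CLI; a quick sketch (UnblendedCost and the 5 USD threshold are just example values):

aws lambda update-function-configuration --function-name checkDailyCosts \
  --environment "Variables={cost_metric=UnblendedCost,max_amount=5}"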

After that's set up, go to your SNS topic (created by the Lambda function if it doesn't exist) and subscribe to it. There you go: daily checks and an alarm if the bill is higher than expected.





How can I copy S3 objects from another AWS account?

Ref: https://aws.amazon.com/premiumsupport/knowledge-center/copy-s3-objects-account/

Well, you can do it manually following the notes in the AWS KB article linked above. I am always confused by those, and I have to read them a few times before I'm able to apply the steps correctly. Recently, however, I've been going the other way around: whenever there are steps to follow, I try to translate them into an Ansible playbook.

Here’s the playbook for syncing files between S3 buckets in different accounts:

---
# copy-bucket.yml
- hosts: localhost
  vars:
    source_bucket: my-source-bucket  
    dest_bucket: my-dest-bucket
    dest_user_arn: arn:aws:iam::ACCOUNTID:user/USERNAME
    dest_user_name: USERNAME
    source_profile: src_profile
    dest_profile: dest_profile
  tasks:
    - name: Attach bucket policy to source bucket
      s3_bucket:
        name: "{{ source_bucket }}"
        policy: "{{ lookup('template','bucket-policy.json.j2') }}"
        profile: "{{ source_profile }}"

    - name: Attach an IAM policy to a user in dest account
      iam_policy:
        iam_type: user
        iam_name: "{{ dest_user_name }}"
        policy_name: "s3_move_access"
        state: present
        policy_json: "{{ lookup( 'template', 'user-policy.json.j2') }}"
        profile: "{{ dest_profile }}"
      register: user_policy

    - name: Sync the files 
      shell: aws s3 sync s3://{{ source_bucket }}/ s3://{{ dest_bucket }}/ --profile {{ dest_profile }}

You will also need the following JSON templates: the first is user-policy.json.j2 (the IAM policy attached to the destination user), and the second is bucket-policy.json.j2 (the bucket policy attached to the source bucket).

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::{{ source_bucket }}",
                "arn:aws:s3:::{{ source_bucket }}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::{{ dest_bucket }}",
                "arn:aws:s3:::{{ dest_bucket }}/*"
            ]
        }
    ]
}
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateS3Access",
            "Effect": "Allow",
            "Principal": {"AWS": "{{ dest_user_arn }}"},
            "Action": ["s3:ListBucket","s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::{{ source_bucket }}/*",
                "arn:aws:s3:::{{ source_bucket }}"
            ]
        }
    ]
}

Run it with: ansible-playbook -v copy-bucket.yml

Make sure the profile names are set up correctly on your machine, and that the IAM user exists.
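
If the profiles aren't configured yet, they can be created like this (the names must match the source_profile and dest_profile vars in the playbook):

aws configure --profile src_profile    # credentials for the source (bucket owner) account
aws configure --profile dest_profile   # credentials for the destination account / IAM user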





ChatOps with Mattermost and AWS Lambda

I've been working towards making things simpler when managing distributed resources at work. And since we spend most of our day in the chat room (was Slack, now Mattermost), I thought it best to get started with ChatOps.

It’s just a fancy word for doing stuff right from the chat window. And there’s so much one can do, especially with simple Slash Commands.

Here's a Lambda function I set up yesterday for invalidating CloudFront distributions.

from time import time
import boto3

import json
import os
import re


EXPECTED_TOKEN = os.environ['mmToken']
ALLOWED_USERS = re.split('[, ]', os.environ['allowedUsers'])
DISTRIBUTIONS = {
    'site-name': 'DISTRIBUTIONID',
    'another.site': 'DISTRIBUTIONID05'
}

def parse_command_text(command_text):
    pattern = r"({})\s+(.*)".format('|'.join(DISTRIBUTIONS.keys()))
    m = re.match(pattern, command_text)
    if m:
        # the second group may hold one or more space-separated paths
        return { 'site': m.group(1), 'path': m.group(2).split() }
    else: 
        return False

def lambda_handler(event, context):
    # Parse the request
    try:
        request_data = event["queryStringParameters"]
    except:
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Use GET for setting up mattermost slash command" }'
        }

    # Check the token matches.
    if request_data.get("token", "") != EXPECTED_TOKEN:
        print('Wrong Token!')
        return {
            "statusCode": 401,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Mattermost token does not match" }'
        }
        
    # Check the user is allowed to run the command
    if request_data.get("user_name", "") not in ALLOWED_USERS:
        print('Wrong User! {} not in {}'.format(request_data['user_name'], ALLOWED_USERS))
        return {
            "statusCode": 401,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "User not allowed to perform action" }'
        }

    # parse the command
    command_text = request_data.get("text", "")
    if not command_text:
        print('Nothing to do, bailing out')
        return {
            "statusCode": 404,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "No command text sent" }'
        }
    parts = parse_command_text(command_text)
    if not parts: 
        print('Bad formatting - command: {}'.format(command_text))
        return {
            "statusCode": 402,
            "headers": {"Content-Type": "application/json"},
            "body": '{ "message": "Wrong pattern" }'
        }


    # Do the actual work
    cf_client = boto3.client('cloudfront')

    # Invalidate
    boto_response = cf_client.create_invalidation(
        DistributionId=DISTRIBUTIONS[parts['site']],
        InvalidationBatch={
            'Paths': {
                'Quantity': len(parts['path']),
                'Items': parts['path'] 
            },
            'CallerReference': str(time()).replace(".", "")
        }
    )['Invalidation']

    # Build the response message text.
    text = """##### Executing invalidation
| Key | Info |
| --- | ---- |
| Site | {} |
| Path | {} |
| ID | {} |
| Status | {} |""".format(
        parts['site'], 
        parts['path'],
        boto_response['Id'], 
        boto_response['Status']
    )

    # Build the response object.
    response = {
        "response_type": "in_channel",
        "text": text,
    }

    # Return the response as JSON
    return {
        "body": json.dumps(response),
        "headers": {"Content-Type": "application/json"},
        "statusCode": 200,
    }

Note that you need to hook that up with an API Gateway in AWS. Once that’s done, you will have a URL endpoint ready for deployment.
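
A quick way to smoke-test the endpoint before wiring up Mattermost is a plain GET with the same query string parameters the function expects; the URL, token, and user below are placeholders:

curl "https://abc123.execute-api.us-east-1.amazonaws.com/prod/invalidate?token=MY_MM_TOKEN&user_name=myuser&text=site-name%20/images/*"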

Next, I created the slash command in Mattermost with the following:

[Screenshot: slash command configuration]

That’s pretty much it. Rinse and repeat for a different command, different usage.

Next on my list is more interaction with the user in Mattermost, per https://docs.mattermost.com/developer/interactive-messages.html
Weekend project, yay!





AWS Lambda Function Code

Quick snippet to get the function code

wget -O function.zip $(aws lambda get-function --function-name MyFunctionName --query 'Code.Location' --output text)

And another to package the dependencies and update the Lambda function with the latest code:

cd package
zip -r9 ../function.zip .          # zip the dependencies installed in ./package
cd ..
zip -g function.zip function.py    # add the handler code itself
aws lambda update-function-code --function-name MyFunctionName --zip-file fileb://function.zip
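
To confirm the new code was actually picked up, checking the function metadata is enough:

aws lambda get-function-configuration --function-name MyFunctionName \
  --query '[LastModified,CodeSha256]' --output text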





MFA from command line

MFA, 2FA, 2 step validation, etc. are everywhere these days. And it’s a good thing.

Problem with using the phone to get the authentication code is that you need to have it handy at all times (when you want to login at least) and that you have to read the code then type it in (too many steps)

One possible alternative is the command-line tool oathtool.

Here's my snippet; I added the following line to my .bashrc:

function mfa () { oathtool --base32 --totp "$(cat ~/.mfa/$1.mfa)" | xclip -sel clip ;}

Some preparation work:

sudo apt install oathtool xclip
mkdir ~/.mfa

If you want to use the same account with both a phone-based MFA generator and the shell, set them up at the same time: use the generated string to set up the account in Google Authenticator (as an example) and then add it to ~/.mfa/account_name.mfa
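
For example (the account name and secret below are made up; the secret is the base32 string shown during MFA setup):

echo 'JBSWY3DPEHPK3PXP' > ~/.mfa/aws.mfa   # save the base32 secret once
chmod 600 ~/.mfa/aws.mfa                   # keep it readable only by you
mfa aws                                    # the 6-digit code is now in the clipboard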

The use of xclip automatically copies the 6-digit authentication code to the clipboard. You can go ahead and paste it.

The above setup works in Ubuntu. Didn’t try it on other systems.





VaultPress SSH/SFTP access for WordPress site behind AWS load balancer

I recently worked on migrating a large WordPress site from a dedicated server to AWS following the reference architecture as outlined in https://github.com/aws-samples/aws-refarch-wordpress

One of the upsides of this architecture is that the site sits entirely behind the load balancer and the instances running the web server are never accessible from the internet (not even via SSH or HTTP), so any updates to the site need to be deployed using Systems Manager (I will post about that later). However, due to this restriction, VaultPress can only access the site (for backups) over HTTPS, which puts extra load on the web instances.

To fix that, we need to allow VaultPress SSH access directly to the web instances (at least to one of them). And the only way in via SSH is through a bastion instance.

On this bastion instance, I installed Nginx with the following configuration in /etc/nginx/nginx.conf

stream {
        server {
                listen 1234;
                proxy_pass 10.0.1.12:22;
        }
}

where 10.0.1.12 is the internal IP of one of the web instances behind the load balancer.

At this stage (after reloading Nginx), I added the vaultpress user to the web instances and put the VaultPress SSH public key in that user's .ssh/authorized_keys.

Testing the connection from the VaultPress dashboard shows all green; both the SSH and SFTP connections work. To test manually, I added my own public key and connected with:

ssh -l vaultpress -p 1234 bastion.mydomain.com 

Since the IPs may change when there's an auto scaling event, I added a cron job that runs every 5 minutes on the bastion and updates the Nginx configuration to point to one of the web instances' internal IPs. A simplistic version looks like this:

#!/bin/bash
IP="$(list-ips.sh | awk '{ print $1 }')"
sed -i -E "s/proxy_pass (.+):22/proxy_pass ${IP}:22/" /etc/nginx/nginx.conf
nginx -t && service nginx reload
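
And the matching cron entry, e.g. in /etc/cron.d/vaultpress-backend (the script path here is just an example):

*/5 * * * * root /usr/local/bin/update-vaultpress-backend.sh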

You can also use a CloudWatch Events rule that fires when auto scaling happens, pointing to a Lambda function that calls Systems Manager to change the IP in Nginx. Here's a reference for that: https://docs.aws.amazon.com/autoscaling/ec2/userguide/cloud-watch-events.html#create-lambda-function

Oh, and don’t forget to lock down the bastion instance. In my case, I set the Security group in the EC2 console to allow only the following:

Type: Custom TCP Rule
Protocol: TCP
Port Range: 1234
Source: 192.0.64.0/18
Description: VaultPress SSH Access





List IPs from CloudTrail events

A quick command to list the IPs from AWS CloudTrail events.

#!/bin/bash
ACCESS_KEY_ID=AKIASOMETHINGHERE
MAX_ITEMS=100
aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=${ACCESS_KEY_ID} --max-items ${MAX_ITEMS} \
  | jq -r '.Events[].CloudTrailEvent' \
  | jq '.sourceIPAddress' \
  | sort | uniq
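
To see which source IPs show up most often, only the tail of the pipeline changes:

aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=${ACCESS_KEY_ID} --max-items ${MAX_ITEMS} \
  | jq -r '.Events[].CloudTrailEvent' \
  | jq '.sourceIPAddress' \
  | sort | uniq -c | sort -rn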

This of course can be extended to include more information, for example:

#!/bin/bash
ACCESS_KEY_ID=AKIASOMETHINGHERE
MAX_ITEMS=100
aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=${ACCESS_KEY_ID} --max-items ${MAX_ITEMS} \
  | jq -r '.Events[].CloudTrailEvent' \
  | jq '{ User: .userIdentity.userName, IP: .sourceIPAddress, Event: .eventName }'
