Start with the Ansible configuration. This can be set in /etc/ansible/ansible.cfg, ~/.ansible.cfg (in your home directory), or ansible.cfg (in the current directory).
My suggestion is to use one of the first two (i.e. /etc/ansible/ansible.cfg or ~/.ansible.cfg) if you’re going to be managing instances from your machine. Update the configuration as needed, along the lines of the sketch below.
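For reference, a minimal ~/.ansible.cfg could look like this (a sketch; the inventory path simply matches the plugin directory used below):

[defaults]
# where Ansible looks for inventory sources
inventory = /etc/ansible/ansible_plugins

[inventory]
# enable the dynamic inventory plugin(s) you need
enable_plugins = aws_ec2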
You may need other plugins; this one is for aws_ec2. In the /etc/ansible/ansible_plugins directory, create the *_aws_ec2.yml configuration file for your inventory.
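For illustration, a minimal /etc/ansible/ansible_plugins/demo_aws_ec2.yml could look like the following (the region, filter, and grouping values are placeholders for your own):

plugin: aws_ec2
regions:
  - us-east-1
filters:
  # only pick up running instances
  instance-state-name: running
keyed_groups:
  # group hosts by the value of their Name tag
  - key: tags.Name
    prefix: tag_Name

You can check the result with ansible-inventory --graph -i /etc/ansible/ansible_plugins/demo_aws_ec2.yml.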
At work, we wanted to switch from Mandrill/Mailchimp to Amazon SES for a long time. But that was not happening mainly because the tools SES offered to monitor sent mail were, how should I say, DIY.
So, after some deliberation and when I found some time to tackle it, I did it 🙂
Is the setup too complex? Well, it is a bit. But once you understand it, it’s pretty basic.
Let’s start at the source: Amazon
You will see this notice under Notifications for each Email Address you create/verify in SES:
Amazon SES can send you detailed notifications about your bounces, complaints, and deliveries.
Bounce and complaint notifications are available by email or through Amazon Simple Notification Service (Amazon SNS).
The next step is to create the SNS Topic; it’s just a label, really.
You will also need an Amazon SQS queue. A standard queue should be good. Once it’s there, copy the ARN as you will need that for the SNS subscription.
Let’s go back to the SNS Topic we created and click the Create subscription button. Choose Amazon SQS for the Protocol and paste the ARN of the SQS queue you created earlier. You may need to confirm that too; just click the button if it’s there.
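If you prefer the command line, the same setup sketches out like this with the AWS CLI (the region and account ID are placeholders; note that the queue’s access policy must also allow the topic to send to it, which the console subscription flow handles for you):

# create the topic and the queue
aws sns create-topic --name ses-notifications
aws sqs create-queue --queue-name ses-notifications

# subscribe the queue to the topic, using the ARNs returned above
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:ses-notifications \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:123456789012:ses-notifications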
That’s all on the Amazon side! See how easy that was?!
Next you need a Graylog setup.
Where do I start? Well, first choose where you want to put that Graylog “machine”. For Amazon EC2 I would just go with their ready-made AMIs. Here’s the link/docs to follow: http://docs.graylog.org/en/latest/pages/installation/aws.html (but, and I quote: “The Graylog appliance is not created to provide a production ready solution”)
But since I like doing things the “easy” way, I went with the Ubuntu 16.04 package per http://docs.graylog.org/en/latest/pages/installation/operating_system_packages.html
Seriously, it’s much easier to use and maintain since I know where everything is. Maybe it’s just me …
Anyway, here’s my bash session:
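It boiled down to something like this (a sketch from the docs above for Graylog 2.x on Ubuntu 16.04; the repository package version and the Elasticsearch repo setup are the parts to double-check):

# prerequisites (Elasticsearch itself comes from Elastic’s own repo, see the docs)
sudo apt-get install apt-transport-https openjdk-8-jre-headless uuid-runtime pwgen
sudo apt-get install mongodb-server

# Graylog repository package, then the server itself
wget https://packages.graylog2.org/repo/packages/graylog-2.4-repository_latest.deb
sudo dpkg -i graylog-2.4-repository_latest.deb
sudo apt-get update
sudo apt-get install graylog-server

# set password_secret and root_password_sha2 in /etc/graylog/server/server.conf,
# then enable and start the service
sudo systemctl enable graylog-server
sudo systemctl start graylog-server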
I followed the instructions there, and installed Apache on top of that with the following configuration for the VirtualHost:

<VirtualHost *:443>
    ServerName example.com

    # Letsencrypt it
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf

    # The needed parts start here
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    RequestHeader set X-Graylog-Server-URL "https://example.com/api/"
    ProxyPass / http://127.0.0.1:9000/
    ProxyPassReverse / http://127.0.0.1:9000/
</VirtualHost>
This will leave you with a Graylog server ready to receive the logs. Now, how do we get the logs over to Graylog? Easy! Pull them from SQS.
Start by adding a GELF HTTP Input in Graylog (System > Inputs > Select Input: GELF HTTP > Launch new input)
Make sure to get the port right there; you will need it when configuring the script below.
Then download the script and make sure it’s executable. Run it manually first, that way it will tell you what’s missing (boto3).
Make sure to configure AWS credentials. The quickest way is:
* to install awscli: apt-get install awscli
* and run its configuration: aws configure
Edit the script with the right configuration vars, then add it to cron to run as often as you feel necessary (I run it @hourly).
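The core of such a script is a short boto3 loop; here’s a minimal sketch, assuming a standard queue and the GELF HTTP input above (the queue URL, GELF URL/port, and host label are placeholders to adjust):

#!/usr/bin/env python3
# Minimal sketch: drain the SQS queue and push each SES notification
# to the Graylog GELF HTTP input. Assumes boto3 is installed and AWS
# credentials are configured (aws configure).
import json
import urllib.request

import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ses-notifications"  # placeholder
GELF_URL = "http://127.0.0.1:12201/gelf"  # port of the GELF HTTP input

sqs = boto3.client("sqs")

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=5)
    messages = resp.get("Messages", [])
    if not messages:
        break  # queue drained, we're done until the next cron run
    for msg in messages:
        # the SQS body is the SNS envelope; the SES notification
        # itself is a JSON string in its "Message" field
        envelope = json.loads(msg["Body"])
        notification = json.loads(envelope["Message"])
        gelf = {
            "version": "1.1",
            "host": "amazon-ses",
            "short_message": notification.get("notificationType", "Unknown"),
            "full_message": envelope["Message"],
        }
        req = urllib.request.Request(GELF_URL,
                                     data=json.dumps(gelf).encode(),
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
        # only delete once Graylog has accepted the message
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])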
Why do I want to do that?
Different reasons. One time it was because I wanted to keep a certain user from accessing the dynamic WordPress site. Another time I wanted to serve a bot that had been crawling the site a “legitimate” page, without actually letting it go through the site. The main thing I was looking for was a way to do an internal redirect, so no 3xx code is returned. I know there are probably better ways to achieve these goals (are there?). But hey, I learned some stuff about Nginx while doing this.
So here it goes, the first attempt:
location ~* /some/path/with_numbers/\d+ {
    if ($remote_addr = 11.11.111.1) {
        return 200 "sample reply - should be empty";
    }
    # the next line is reached only when the above is not executed
    try_files $uri $uri/ /index.php$is_args$args;
}
One problem with the above is that replacing the IP or adding more IPs is a bit problematic. So, we replace it with the following that relies on the Geo module:
geo $bad_ip {
    default 0;
    1.2.3.4/32 1;
    4.3.2.1/32 1;
}

server {
    [...]

    location ~* /some/path/with_numbers/\d+ {
        if ($bad_ip) {
            return 200 "sample reply - should be empty";
        }
        # the next line is reached only when the above is not executed
        try_files $uri $uri/ /index.php$is_args$args;
    }
}
The other problem is that the text returned with the 200 code is a bit simplistic, and I really wanted to send a static HTML page, not a stupid line. The fix uses error_page:
[...]

location ~* /some/path/with_numbers/\d+ {
    if ($bad_ip) { return 410; }
    error_page 410 =200 /my_static_page.html;
    # the next line is reached only when the above is not executed
    try_files $uri $uri/ /index.php$is_args$args;
}
The result is a 200 (OK) code sent to the browser with a static HTML page that should load much faster than a PHP/RoR/etc alternative.
Of course, more can be done to identify the blocked entity, for example using the User-Agent string.
Leaving that for another day.
The Virtualmin team said the next version of Virtualmin/Webmin will automate most of the Let’s Encrypt setup. Meanwhile, there’s an ongoing conversation about it in the forums.
My setup: ./letsencrypt-auto certonly --webroot --webroot-path /usr/share/nginx/html -d my.vmin.server
Then in Webmin > Webmin Configuration > SSL Encryption, point the private key and certificate settings at the generated files under /etc/letsencrypt/live/my.vmin.server/.
I’m assuming you’re running NetworkManager here, that you’ve already set up your wireless connection using DHCP, and that we’re talking about IPv4.
While you can’t configure the static alias addresses in the NetworkManager GUI, there’s a hack possible.
Find the UUID of the configured connection:
$ nmcli con
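Depending on your nmcli version, the output looks something like this (the name and UUID here are made up):

NAME         UUID                                  TYPE             DEVICE
MyHomeWifi   31c48409-e77a-46e0-8cdc-f4c04b978901  802-11-wireless  wlan0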
Add a script in /etc/NetworkManager/dispatcher.d/, containing this starting point:
#!/bin/bash
# NetworkManager calls dispatcher scripts with the interface as $1 and the
# event (up/down/...) as $2, and exports CONNECTION_UUID in the environment.
ACTION=$2

WLAN_DEV=wlan0
MYCON_UUID=31c48409-e77a-46e0-8cdc-f4c04b978901

if [ "$ACTION" == "up" ] && [ "$CONNECTION_UUID" == "$MYCON_UUID" ]; then
    # add alias for Network 1: 192.168.0.123/24
    ifconfig $WLAN_DEV:0 192.168.0.123 netmask 255.255.255.0 up
    # add alias for Network 2: 192.168.1.123/24
    ifconfig $WLAN_DEV:1 192.168.1.123 netmask 255.255.255.0 up
fi
Make sure it has the right permissions (chmod +x /path/to/script.sh) and restart NetworkManager:
$ sudo service network-manager restart
Now when you connect to your wireless connection, it should add the two aliases (check with ifconfig).
GitLab is your own GitHub and more (or less). They have a pretty good introduction on the home page, so I won’t repeat that here.
The recommended installation method for GitLab is the Omnibus package. Head to the downloads page and follow the instructions. You should have a GitLab setup in no time. Who needs GitHub! Oh well, many, many people…
Now to the tweaks.
Why?
If you’re like me, trying to hide the ports on your server from the bots and prying eyes, then you would have SSH on a different port and your other services all bound to localhost, facing the Internet bravely from behind a proxy server. I use Apache on my personal server; it’s pretty robust and gets the job done.
So let’s say SSH is on port 2022, and Apache has a firm hold on ports 80 and 443. GitLab’s NGINX should then take port 8088.
And the domain you’re using for GitLab is not the machine’s hostname: say the hostname is ‘host4339.moodeef.com’ and GitLab’s URL is ‘gitlab.deeb.me’.
How?
Edit the “/etc/gitlab/gitlab.rb” file with the following changes/additions:
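Something along these lines, given the assumptions above (a sketch; double-check the embedded-NGINX keys against the Omnibus docs):

external_url 'https://gitlab.deeb.me'

# the bundled NGINX hides behind Apache on a local port
nginx['listen_addresses'] = ['127.0.0.1']
nginx['listen_port'] = 8088
nginx['listen_https'] = false

# SSH is on a non-standard port, so clone URLs need to reflect that
gitlab_rails['gitlab_shell_ssh_port'] = 2022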
Then run gitlab-ctl reconfigure and see how it goes from there.
If things seem to be too complicated, you can always get a subscription option with full support from the GitLab folks. Or hire me to fix it for you!
I faced a bit of a puzzle today with a Tomcat/Apache setup.
Tomcat is running in the background with Apache as the frontend via mod_proxy_ajp. The site loads OK except for static files, which return a 404 (File Not Found) on first load, then show up normally on refresh!
The apache configuration looks like the following:
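Something along these lines, with /static served by Apache straight from disk and everything else proxied to Tomcat (the paths and port are illustrative, not the exact original):

# static files served by Apache directly
Alias /static /var/www/static

# everything else goes to Tomcat over AJP
ProxyPass /static !
ProxyPass / ajp://127.0.0.1:8009/
ProxyPassReverse / ajp://127.0.0.1:8009/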
An example failing URL: http://example.com/static/images/email.png;jsessionid=3892BC4B4C26073338268AF98ECA73D6
And in the error log I see the following:
[Fri Sep 06 10:18:14 2013] [error] [client 00.00.0.000] File does not exist: /var/www/static/images/email.png;jsessionid=3892BC4B4C26073338268AF98ECA73D6, referer[…]
Then it dawned on me: Apache wouldn’t know what to do with the jsessionid, since the request was not being sent over to Tomcat for processing. And since Apache was handling the static files itself, the session ID needed to go.
Solution: I added the following rewrite rule
RewriteEngine On
RewriteRule static/(.*);jsessionid=.* /static/$1 [R,L]
Related searches I came across while googling:
page not found when including jsessionid in URL
;jsessionid and 404 File Not Found
Apache getting confused by encoded jsessionid’s (404 Not Found)