
Archives for : Systeming

fail2ban quick setup against brute-force ssh

Setting up a fairly smooth defense against brute-force SSH attempts is relatively easy using fail2ban. On Debian, SSH is already protected right after running “apt install fail2ban”, but a little more can be done to improve the efficiency of this filter.

First, override the “dbpurgeage” setting so that the ban data is kept for up to 7.5 days. Create the file /etc/fail2ban/fail2ban.d/local.conf with:

[Definition]
dbpurgeage = 648000

Then add another config file to enable the “recidive” jail, for instance in /etc/fail2ban/jail.d/local.conf add:

[recidive]
enabled = true
maxretry = 2
bantime = 604800 ; 1 week
findtime = 86400 ; 1 day

Restart the fail2ban service et voilà, fail2ban now has the ability to keep brute-force IPs away a bit longer. You can of course change these values to extend the ban or limit the findtime.
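
To apply the change and check that the jail is indeed running, something along these lines should do (a small sketch; on a systemd-based Debian, “systemctl restart fail2ban” replaces the first command):

service fail2ban restart
# list the active jails, then look at the recidive one in detail
fail2ban-client status
fail2ban-client status recidive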

From Roundcube to RainLoop

Roundcube Webmail is a great piece of software when you need a self-hosted webmail. It has some nice features and is quite straightforward to run.

But when it comes to using it on a mobile device, the fun disappears right away. The default skin, which fits well on a desktop display, fails to render anything useful on a small device. Even when using the Melanie2 Larry Mobile skin and its plugins (the Mobile plugin and roundcube jquery mobile, built on top of jQuery Mobile), the result is sadly far from neat.

So, looking for an alternative to Roundcube, I did a short search on the net and found RainLoop, which actually behaves very well both on the desktop and on a mobile device.

The config part is really easy, as the web interface allows you to set things up quickly. On the plus side, it doesn’t need an RDBMS to run (thumbs up for this) and it has native, ready-to-use domain configurations for Gmail and Outlook (that may not seem like a killer feature, but it was quite a hassle to achieve with Roundcube, for which I had to set up DNS aliases to do the same).

Using php5-geoip on Debian

Here is a straightforward way to get a fully functional GeoIP module for PHP5 on Debian:

apt-get install php5-geoip geoip-database-contrib

The second package is actually a download facility that fetches the various free databases from MaxMind. Once it’s done, edit the php.ini configuration file to append:

[geoip]
geoip.custom_directory = "/usr/share/GeoIP/"

Reload the web server and it should now work.
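
A quick way to check it from the command line (a small sketch; it assumes the geoip extension and the geoip.custom_directory setting are also active for the PHP CLI, and example.com is only a placeholder):

php -r 'var_dump(geoip_country_code_by_name("example.com"));'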

A tiny HTTPD error log parser

Probably the tiniest one, yet useful, at least to me; here is a tiny Perl script used to quickly check what script kiddies are doing on my Nginx web server:


#!/usr/bin/perl -n
# the -n switch wraps the lines below in a "while (<>) { ... }" loop over the input
/"((GET|HEAD|POST) [^"]+)"/ and $R=$1;      # remember the last request line seen
/client: ([^,]+)/ and $C=$1,$cnt{$C}++;     # remember the client IP and count its hits
($C && $R) and print "$C\t$R ($cnt{$C})\n"; # print "IP <tab> request (hit count)"

Executed with the error log file as argument, it gives something like:


23.20.103.233 GET /clientaccesspolicy.xml HTTP/1.1 (1)
114.40.35.173 GET /phpTest/zologize/axa.php HTTP/1.1 (1)
114.40.35.173 GET /phpMyAdmin/scripts/setup.php HTTP/1.1 (2)
114.40.35.173 GET /pma/scripts/setup.php HTTP/1.1 (3)
114.40.35.173 GET /myadmin/scripts/setup.php HTTP/1.1 (4)
184.72.184.113 GET /clientaccesspolicy.xml HTTP/1.1 (1)
46.165.220.215 GET /vtigercrm/vtigerservice.php HTTP/1.1 (1)
54.80.66.122 GET /clientaccesspolicy.xml HTTP/1.1 (3)
202.53.8.82 GET /ossim/session/login.php HTTP/1.1 (1)
23.22.216.162 GET /clientaccesspolicy.xml HTTP/1.1 (1)
178.63.114.68 HEAD /.psi/profiles/default/config.xml HTTP/1.1 (1)
178.63.114.68 HEAD /.purple/accounts.xml HTTP/1.1 (2)
178.63.114.68 HEAD /dsa HTTP/1.1 (3)
178.63.114.68 HEAD /.htpasswd HTTP/1.1 (4)
178.63.114.68 HEAD /.htpasswd~ HTTP/1.1 (5)
54.204.131.75 GET /clientaccesspolicy.xml HTTP/1.1 (1)

And yes, this is a real excerpt from my current logfile.

Distributed storage on Debian made easy with GlusterFS

GlusterFS is a mature, elegant and powerful distributed filesystem targeted at very high capacity and availability. Sponsored by Red Hat Inc. and included in their storage server solution, this open-source software is also kindly made available as packages for some other Linux distributions, or as source.

Unlike many other distributed solutions, there is no need to have many computers to get a taste of Gluster’s ease of use. A few spare minutes are enough to try it on your own computer. Note also that only the amd64 architecture is present in the repository, so the following applies to 64-bit machines only.

First, add the GnuPG key for the repository and the corresponding entry for APT:

wget -O - http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.3/Debian/pubkey.gpg | apt-key add -
echo "deb [ arch=amd64 ] http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.3/Debian/apt wheezy main" >/etc/apt/sources.list.d/glusterfs.list

The arch option is useful, as documented in the Multiarch specs, in case you’re using multiarch with some foreign-architecture packages already installed.

Next, update the packages database and install both the server and client packages:

apt-get update
apt-get install glusterfs-server glusterfs-client

Now, either you have a whole disk or partition available or, like me, you don’t. Let’s just use a file as our disk then. In any case, the goal is to format our disk, preferably with XFS, and mount it.

Doing it with a disk or a partition is left to the reader’s discretion and knowledge ;] with a file, it’s as easy as (thanks to this libgfapi doc):

truncate -s 5GB /srv/xfsdisk
mkfs.xfs -i size=512 /srv/xfsdisk
mkdir -p /export/brick
echo "/srv/xfsdisk /export/brick xfs loop,inode64,noatime,nodiratime 0 0" >> /etc/fstab
mount /export/brick

One last tip before starting our cluster: as Gluster doesn’t want us to use localhost as a valid node hostname, we add a definition for another name on our loopback network:

echo "127.0.1.1 localnode" >>/etc/hosts

Now the real work with Gluster may begin; first, create a directory in the dedicated mount-point and add it as a brick on our upcoming volume:

mkdir /export/brick/b1
gluster volume create test localnode:/export/brick/b1

Last, start the volume and enjoy: it’s working.

gluster volume start test

And now…? Now you may play a little with the powerful gluster CLI; gluster help will output the available commands. You may also be a client of your own cluster storage (yes, you can) by simply mounting the volume somewhere, like:

mkdir /mnt/gluster
mount -t glusterfs localnode:/test /mnt/gluster
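
From there, the gluster CLI gives a good overview of what you just built; for instance (a small sketch, “test” being the volume created above):

gluster volume info test
gluster volume status test
df -h /mnt/gluster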

Happy birthday to GNU!

Yes, happy birthday to GNU!

[Image: GNU 30th anniversary logo with banner]

And thank you for the fish 😉

Streaming to Twitch with avconv on Debian

I successfully managed to stream to Twitch using avconv from a Debian Wheezy system.

Here is the command used to display (part of) the desktop:

avconv -f x11grab -s 1680x1050 -r 15 -i :0.0 -c:v libx264 -pre fast -pix_fmt yuv420p -s 640x480 -threads 0 -f flv "rtmp://live.twitch.tv/app/live..."

Which means: grab the top-left 1680×1050 pixels of the desktop at 15 fps, encode it using libx264 with the “fast” preset and the yuv420p pixel format, resize it to 640×480 pixels, let the encoder pick the number of threads automatically (-threads 0), and format all of this as an FLV stream to be sent using RTMP to the Twitch server (using the secret live streaming key).

Adapted from the FFmpeg streaming guide.

Remote Linux access using X2go

The X2Go project is a fairly efficient way to provide graphical remote access to a Linux box. It is based on the excellent free NX libraries from NoMachine, the technology behind the NX server and the FreeNX project, which no longer seems to be developed.

A client will then only need SSH access to the server to get a full-featured graphical remote desktop.

X2Go is available for many Linux flavours, MS Windows and Mac OS X; the installation on a Debian client or server is really easy and straightforward thanks to the packaging effort done by the X2Go team, and all the relevant information can be found on the X2Go project’s wiki.
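
For the record, the installation mostly boils down to a couple of packages (a small sketch, assuming the X2Go Debian repository is already configured; package names may vary between releases):

# on the server side
apt-get install x2goserver x2goserver-xsession
# on the client side
apt-get install x2goclient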

Continue Reading >>

Multiple Monit instances accessed through a single URL scheme with Apache mod_rewrite


Just in case you don’t know it, Monit is a really neat tool for UNIX and UNIX-like systems, helping to manage and check a whole lot of stuff, including daemons, files and processes.

The best and preferable way to manage many Monit instances is to buy the M/Monit software, made by the same team, which also helps support their great work. But in some situations you can’t get M/Monit, and the following quick’n’dirty Apache mod_rewrite tip might come in handy.

Continue Reading >>

sogo-last, a quick utility for SOGo

This small script parses an nginx (or any other web server) access log and outputs the list of the last connected SOGo users, sorted by connection date.

#!/usr/bin/perl

use strict;
use warnings;

my $logfile = shift;
$logfile ||= "/var/log/nginx/access.log";

my %seen = ();
open LOG, "<$logfile" or die "Can't open '$logfile': $!\n";
while (<LOG>)
{
        if (m#\[([^\]]+)\].*POST /SOGo/so/([^/]+)/Mail//#)
        {
                $seen{$2} = $1;
        }
}
close LOG;

sub bydate { $seen{$a} cmp $seen{$b}; }

foreach (sort bydate keys %seen)
{
        printf "%-12s %s\n", $_, $seen{$_};
}

exit(0);
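
Assuming the script is saved as sogo-last.pl and made executable, it takes the access log path as an optional argument, for instance:

chmod +x sogo-last.pl
./sogo-last.pl /var/log/nginx/access.log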