Posts Tagged ‘monitor’

Audit delete commands in Linux

Sunday, October 22nd, 2017

This article is the essence of a post from the Red Hat mailing list archive, and it goes as follows:

Problem: You need to detect what is deleting files on your Linux machine.

Solution: Using auditd with the right flags, you can get a lot of information.

In Practice:

  • If the mount point/directory is /oracle, then:
  • (as root:) auditctl -w /oracle -k whodeletedit -p w
    (Explanation: watch the directory /oracle, log every match under the key “whodeletedit”, and monitor write operations)
  • To see, later, who deleted files, run (as root): ausearch -i -k whodeletedit -x /bin/rm
  • You would want to stop the logging as soon as you have found the culprit, by running (as root): auditctl -W /oracle -k whodeletedit -p w

I hope it helps you just as it helped me.

Centreon and batch-adding hosts

Monday, April 27th, 2009

Centreon is a nice GUI wrapper for Nagios. It uses MySQL as its configuration backend, and it functions quite well. One thing Cacti can do but Centreon can’t is mass automatic addition of servers. I had a new site with Centreon installed, and I wanted to add about 40 servers to be monitored. This is tedious work, and I was searching for a semi-automatic method of doing it.
This is not perfect, but it worked for me.
In this case I do not replicate service-group relationship, but only add a mass of servers.

First – create a text file containing a list of servers and IPs. It should look like this:
serv1:1.2.3.4
serv2:10.2.1.3
new_srv:2.3.4.1

I have placed it in /tmp/machines
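The colon-separated format above is easy to split in shell; a minimal sketch (with hypothetical host names) of how each line breaks into a name and an IP:

```shell
# Build a sample machines file (hypothetical entries)
cat > /tmp/machines <<'EOF'
serv1:1.2.3.4
serv2:10.2.1.3
EOF

# Split each line on the colon into a host name and an IP
while IFS=: read -r NAME IP
do
    echo "host=$NAME ip=$IP"
done < /tmp/machines
```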

Second – find the last host entry. In my case the DB name is centreon, so I run the following command:

mysql -u root -p centreon -e 'select host_id from host'

This should return a column of numbers. Find the largest one and increment it by one. In my example the last one was 19, so my initial host_id will be 20.
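The "largest plus one" step can also be scripted; a small sketch, assuming the host_id column was saved to a file (the numbers below are made up):

```shell
# A saved host_id column, one number per line (sample data)
printf '3\n7\n19\n' > /tmp/host_ids

# Print the largest host_id plus one - the first free ID
awk 'max < $1 { max = $1 } END { print max + 1 }' /tmp/host_ids
```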

You should now find the host_template_model_htm_id you are to use. There are a few methods for that, but the easiest is to find another host whose configuration roughly matches your desired setup. In my case it was called “DB1”, so the query looks like this:

mysql -u root -p centreon -e "select host_template_model_htm_id from host where host_name='DB1'"

Please note that my blog formatting might change the quote characters, so you might not want to copy/paste the commands, but type them yourself.

The result of the above query should give us a template ID. In my case it was “2”, which is fine by me.

If you want a better reference for the values entered, you can do a whole select for a single host to verify your values match mine:

mysql -u root -p centreon -e "select * from host where host_name='DB1'\G"

This should give you a long listing of the host’s attributes, for reference.

My script goes like this, based on the assumptions made above:

#!/bin/bash
HOST=20
for i in `cat /tmp/machines`
do
   NAME=`echo $i | cut -f1 -d:`
   IP=`echo $i | cut -f2 -d:`
   echo "insert into host values ('$HOST',2,NULL,NULL,1,NULL,NULL,NULL,NULL,'$NAME','$NAME','$IP',NULL,NULL,'2','2','2','2','2',NULL,'2',NULL,NULL,'2','2','2','2',NULL,NULL,'2',NULL,NULL,'0',NULL,'1','1');" >> /tmp/insert_sql.sql
   echo "insert into extended_host_information values('',$HOST,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);" >> /tmp/insert_sql.sql
   let HOST++
done

This should create a file called /tmp/insert_sql.sql, which should first be reviewed and then inserted into your database.

Needless to say – back up your database first, just in case:

mysqldump -u root -p --opt -B centreon > /tmp/centreon_backup.sql

and then insert the newly created data:

mysql -u root -p centreon < /tmp/insert_sql.sql

Notice – at this point, no service relationships are created. Creating the nodes alone is quite a chore as it is; adding the service relationships complicates things a bit, and I did not want to go there at this stage. However, for a few tens of monitored hosts, this is quite a lifesaver.

Notice that this changes only the Centreon configuration; you will still be required to apply it (through the GUI) to Nagios.

Xen VMs performance collection

Saturday, October 18th, 2008

Unlike VMware Server, Xen’s hypervisor does not allow easy collection of performance information. The management machine, called “Domain-0”, is actually a privileged virtual machine, and thus gets its own small share of CPUs and RAM. Collecting performance information on it will lead to, well, performance information for a single VM, and not the whole bunch.

Local tools such as “xentop” allow collection of this information; however, combining them with Cacti, or any other SNMP-based collection tool, is a bit tricky.

A great solution is provided by Ian P. Christian in his blog post about Xen monitoring. He has created a Perl script to collect information, and I have taken the liberty of fixing several minor things, with his permission. The modified scripts are presented below. Name the script (according to your version of Xen) “/usr/local/bin/xen_stats.pl” and set it to be executable:

For Xen 3.1

#!/usr/bin/perl -w

use strict;

# declare...
sub trim($);

# we need to run 2 iterations because CPU stats show 0% on the first, and I'm putting .1 second between them to speed it up
my @result = split(/\n/, `xentop -b -i 2 -d.1`);

# remove the first line
shift(@result);

shift(@result) while @result && $result[0] !~ /^xentop - /;

# remove the 'xentop - ' line and the next 3 heading lines
shift(@result);
shift(@result);
shift(@result);
shift(@result);

foreach my $line (@result)
{
  my @xenInfo = split(/[t ]+/, trim($line));
  printf("name: %s, cpu_sec: %d, cpu_percent: %.2f, vbd_rd: %d, vbd_wr: %d\n",
    $xenInfo[0],
    $xenInfo[2],
    $xenInfo[3],
    $xenInfo[14],
    $xenInfo[15]
    );
}

# trims leading and trailing whitespace
sub trim($)
{
  my $string = shift;
  $string =~ s/^\s+//;
  $string =~ s/\s+$//;
  return $string;
}

For Xen 3.2 and Xen 3.3

#!/usr/bin/perl -w

use strict;

# declare...
sub trim($);

# we need to run 2 iterations because CPU stats show 0% on the first, and I'm putting .1 second between them to speed it up
my @result = split(/\n/, `/usr/sbin/xentop -b -i 2 -d.1`);

# remove the first line
shift(@result);
shift(@result) while @result && $result[0] !~ /^[t ]+NAME/;
shift(@result);

foreach my $line (@result)
{
        my @xenInfo = split(/[t ]+/, trim($line));
        printf("name: %s, cpu_sec: %d, cpu_percent: %.2f, vbd_rd: %d, vbd_wr: %d\n",
        $xenInfo[0],
        $xenInfo[2],
        $xenInfo[3],
        $xenInfo[14],
        $xenInfo[15]
        );
}
# trims leading and trailing whitespace
sub trim($)
{
        my $string = shift;
        $string =~ s/^\s+//;
        $string =~ s/\s+$//;
        return $string;
}

Cron settings for Domain-0

Create a file “/etc/cron.d/xenstat” with the following contents:

# This will run xen_stats.pl every minute
*/1 * * * * root /usr/local/bin/xen_stats.pl > /tmp/xen-stats.new && cat /tmp/xen-stats.new > /var/run/xen-stats
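The double step above (write to /tmp/xen-stats.new, then cat into /var/run/xen-stats) keeps readers of /var/run/xen-stats from ever seeing a half-written file while xen_stats.pl is still running; the same publish-after-write pattern in isolation, with throwaway file names:

```shell
# Write the fresh data to a scratch file first...
echo "sample stats line" > /tmp/demo-stats.new

# ...and only overwrite the published file once the slow write has finished
cat /tmp/demo-stats.new > /tmp/demo-stats

cat /tmp/demo-stats
```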

SNMP settings for Domain-0

Add the line below to “/etc/snmp/snmpd.conf”, and then restart the snmpd service:

extend xen-stats   /bin/cat /var/run/xen-stats

Cacti

I reduced Ian’s Cacti script to a per-server setup, meaning this script gets the host (dom-0) name from Cacti but cannot support live migration. I will try to deal with combining live migration with Cacti in the future.

Download my modified xen_cloud.tar.gz file, extract it, place the script and config file in their relevant locations, and import the template into Cacti. It should work like a charm.

A note – the PHP script will work only on PHP 5 and above. It works flawlessly on CentOS 5.2 for me.

It has been a while… Today – Monitor I/O in AIX

Saturday, November 25th, 2006

This is a question I was asked a while back, and I didn’t find the time to search for an answer.

I have searched for an answer just now, and found one given by (most likely) one of the old-time gurus on a news server (where the gurus usually lurk). The answer is to use the command "filemon" with a syntax such as this:

filemon -O lf -o outputfile.txt

To stop the monitoring session, run "trcstop". Review the output file generated by this command, and you will be able to view the I/O interactions that happened on the system during that time.