Friday, February 7, 2014

Nagios plugin examples

I have seen many Nagios beginners struggle to configure plugins to match their requirements. Most of the time, they just want to see some existing configuration that they can use as a model for their own. But online resources rarely give live examples, so I have picked a few basic plugins to demonstrate the configuration.
check_users : Plugin to monitor the number of users logged in to the system
check_load  : Plugin to monitor the run-time load of the system
check_disk  : Plugin to monitor the disk usage of the system
check_procs: Plugin to monitor the running processes in a system
check_log   : Plugin to monitor a particular log file in the system. If a particular error is logged, Nagios will alert the user.

Example Configurations :

1- Configuration example at Nagios server side
File : remotehost.cfg

# Define a service to check the load on the remote machine tester1
define service{
        use                             generic-service
        host_name                       tester1
        service_description             check Load
        is_volatile                     1
        check_command                   check_nrpe!check_load
        max_check_attempts              1
        active_checks_enabled           0
        passive_checks_enabled          1
        contact_groups                  admins
        }

# Define a service to check the disk on the remote machine tester1

define service{
        use                             generic-service
        host_name                       pkvm-tester1
        service_description             check Disk
        is_volatile                     1
        check_command                   check_nrpe!check_hda1
        max_check_attempts              1
        active_checks_enabled           0
        passive_checks_enabled          1
        contact_groups                  admins
        }

# Define a service to check the zombie processes on the remote machine tester1

define service{
        use                             generic-service
        host_name                       pkvm-tester1
        service_description             check Procs Zombie
        is_volatile                     1
        check_command                   check_nrpe!check_zombie_procs
        max_check_attempts              1
        active_checks_enabled           0
        passive_checks_enabled          1
        contact_groups                  admins
        }

# Define a service to check the system log on the remote machine tester1
define service{
        use                             generic-service
        host_name                       pkvm-tester1
        service_description             check system log for SSL
        is_volatile                     1
        check_command                   check_nrpe!check_sys_log
        max_check_attempts              1
        active_checks_enabled           0
        passive_checks_enabled          1
        contact_groups                  admins
        }

# Define a service to check the system log on the remote machine tester1
define service{
        use                             generic-service
        host_name                       pkvm-tester1
        service_description             check system log for SElinux
        is_volatile                     1
        check_command                   check_nrpe!check_sys_log_2
        max_check_attempts              1
        active_checks_enabled           0
        passive_checks_enabled          1
        contact_groups                  admins
        }

# Define a service to check the total processes on the remote machine tester1
define service{
        use                             generic-service
        host_name                       pkvm-tester1
        service_description             check total Procs
        is_volatile                     1
        check_command                   check_nrpe!check_total_procs
        max_check_attempts              1
        active_checks_enabled           0
        passive_checks_enabled          1
        contact_groups                  admins
        }

File : commands.cfg
# Command definition for the remote execution. Here $ARG1$ will be the carrier of the remote execution commands defined above in remotehost.cfg.
define command{
command_name    check_nrpe
command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

2- Configuration example at Nagios client side.
File : nrpe.cfg
command[check_users]=/usr/lib64/nagios/plugins/check_users -w 5 -c 10

command[check_load]=/usr/lib64/nagios/plugins/check_load -w 15,10,5 -c 30,25,20

command[check_hda1]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1

command[check_zombie_procs]=/usr/lib64/nagios/plugins/check_procs -w 5 -c 10 -s Z

command[check_total_procs]=/usr/lib64/nagios/plugins/check_procs -w 150 -c 200

command[check_sys_log]=/usr/lib64/nagios/plugins/check_log -F '/var/log/messages' -O /tmp/oldlog -q 'Error - Could not complete SSL handshake'

command[check_sys_log_2]=/usr/lib64/nagios/plugins/check_log -F '/var/log/messages' -O /tmp/oldlog -q 'NRPE Error:'

How it works:

When the Nagios process finds a command defined in remotehost.cfg, it looks up the definition of that command in commands.cfg. Based on the command definition found there (here check_nrpe), the command and its argument are sent to the NRPE daemon running on the remote host.
The argument to check_nrpe (e.g. check_nrpe!check_total_procs) refers to a command defined on the remote host in the nrpe.cfg file. The NRPE daemon executes it and sends back the result, which is then displayed on the Nagios server. That's it!

Thursday, December 12, 2013

Nagios solution on Fedora 18/19


  • Install Nagios core V3.5.1 with web support on any Fedora box [ Monitoring Server]
  • Install Nagios core V3.5.1 on any other supported server [ Nagios Client ]
  • Install all standard plugins on Nagios client
  • Install NRPE plugin on Monitoring Server and Nagios client

Nagios installation and deployment

Nagios is an industry-standard host monitoring software with a very flexible architecture. With its wide variety of plugins, almost anything on the local host or a Nagios client can be monitored. A user can also set thresholds to alert users or administrators when an event occurs.

Installation steps

1 - Install apache web server with php at Monitoring Server
yum install httpd php

2 - Create a new nagios user account and give it a password [Monitoring Server]
/usr/sbin/useradd -m nagios
passwd nagios
3 - Create a new nagcmd group for allowing external commands to be submitted through the web interface. Add both the nagios user and the apache user to the group.
/usr/sbin/groupadd nagcmd
/usr/sbin/usermod -a -G nagcmd nagios
/usr/sbin/usermod -a -G nagcmd apache

4- Then install Nagios core and its plugins at both Monitoring Server and client
yum install nagios
yum install nagios-plugins-all.x86_64
yum install nagios-plugins-nrpe.x86_64
5 - Verify the sample Nagios configuration files.
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
6 - Fedora ships with SELinux (Security-Enhanced Linux) installed and in Enforcing mode by default. This may cause permission errors when using IPMI plugins and system log viewer plugins, so execute the steps below on the Monitoring Server.
See whether SELinux is in Enforcing mode.
getenforce
Put SELinux into Permissive mode.
setenforce 0

Start the nagios services

After nagios is installed and configured, execute below commands to start httpd server and nagios service at Monitoring Server.
systemctl start nagios.service
systemctl start httpd.service
Then open a browser and go to the URL localhost/nagios. This opens the Nagios main page, which links to the different monitored entities, e.g. services, hosts etc. Clicking these links displays the current system status and parameters in the browser window.

Plugins and verification.

The power of Nagios is its plugins. A few standard plugins, which monitor load, hard disk, ping etc., were installed on the Nagios client box using the command yum install nagios-plugins-all.x86_64. All these plugins were tested successfully.
The installed plugins are
check_disk - monitors the mounted file systems
check_procs - monitors the running processes
check_swap - checks the swap of the local system
check_users - counts the number of users currently logged in
check_nrpe - runs on the monitoring Nagios machine; it contacts the NRPE daemon on the remote server, which in turn executes the commands defined in nrpe.cfg (more details are explained in a later part of this document)

In the browser window, the information is visible with a mouse click. For command-line execution, first find the location of the plugins; on this Nagios client box they are in /usr/lib64/nagios/plugins/. Plugins can be executed by passing the proper parameters. Sample commands are listed below, which can be verified on either the Monitoring Server or the client.

Note : Here -w stands for the warning threshold and -c stands for the critical threshold.
/usr/lib64/nagios/plugins/check_users -w 5 -c 10
/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
/usr/lib64/nagios/plugins/check_disk -w 20% -c 10%
/usr/lib64/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
check_procs -w 2:2 -c 2:1024 -C portsentry ( Warning if not two processes with command name portsentry. Critical if < 2 or > 1024 processes )
check_procs -w 10 -a '/usr/local/bin/perl' -u root ( Warning alert if > 10 processes with command arguments containing '/usr/local/bin/perl' and owned by root)
check_procs -w 50000 -c 100000 --metric=VSZ (Alert if the VSZ of any process is over 50000 (warning) or 100000 (critical))
check_procs -w 10 -c 20 --metric=CPU (Alert if the CPU utilisation of any process is over 10% (warning) or 20% (critical))

The PowerKVM box was verified with all the installed plugins, both through the browser UI and via the command line.

Monitoring a Remote server using Nagios.

There are different methods to monitor remote servers using Nagios; NRPE is the most widely accepted and popular. NRPE is a plugin that can be installed on the monitoring host and the Nagios client after Nagios is installed. The NRPE plugin has two main components: check_nrpe, which resides on the Monitoring Server, and the nrpe daemon, which resides on the Nagios client.

check_log plugin

The check_log plugin can be configured to monitor any log file residing on either the Nagios client or the server. For monitoring the system logs, sufficient permissions need to be granted to the nagios user. Modify both commands.cfg and localhost.cfg to define the service and commands. The command definition can look like the one below.
define command{
command_name check_sys_log
command_line $USER1$/check_log -F /var/log/messages -O /tmp/oldlog -q 'Error - Could not complete SSL handshake'
}
The check_log plugin keeps a copy of the monitored log file in a temporary place as part of its initialization. It then takes the difference between the current log and the previously saved copy; if the difference contains the mentioned error pattern, it reports it.
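The copy-and-diff approach described above can be sketched in a few lines of shell. This is only an illustration of the idea, not check_log's actual internals; the file names are stand-ins.

```shell
# Sketch of check_log's copy-and-diff approach; file names are illustrative.
log=$(mktemp)   # stands in for /var/log/messages
old=$(mktemp)   # stands in for the saved copy (-O /tmp/oldlog)

printf 'line one\n' > "$old"    # contents seen on the previous run
printf 'line one\nError - Could not complete SSL handshake\n' > "$log"

# New lines = difference between the current log and the saved copy
newlines=$(diff "$old" "$log" | grep '^>' | sed 's/^> //')

if printf '%s\n' "$newlines" | grep -q 'Could not complete SSL handshake'; then
    echo "CRITICAL: pattern found in new log lines"
fi

cp "$log" "$old"    # save the current log for the next run
```

Because only the new lines are searched, an old error that is already in the saved copy does not keep alerting on every check.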

NRPE Configuration

NRPE needs to be installed on both the server and the client side; yum install nagios-plugins-nrpe.x86_64 is the command used to install this plugin. Below is an outline of the Monitoring Server and client configuration.
  1. Since nrpe is running with xinetd, install xinetd on Nagios client.
  2. Open /etc/xinetd.d/nrpe at Nagios client and add Monitoring server IP. This allows check_nrpe from Monitoring server to communicate with Nagios client.
  3. After nagios is configured at monitoring host, add remote execution commands at /usr/local/nagios/etc/nrpe.cfg. Whenever a user tries to get the information from a Nagios client, check_nrpe will use these commands to collect the data from Nagios client.
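As a sketch of step 2, allowing the Monitoring Server usually means adding its address to the only_from line of the nrpe stanza. The fragment below is illustrative only: the IP is a placeholder, and field values can differ between distributions.

```
service nrpe
{
        disable         = no
        socket_type     = stream
        user            = nagios
        server          = /usr/sbin/nrpe
        server_args     = -c /etc/nagios/nrpe.cfg --inetd
        only_from       = 127.0.0.1 <monitoring-server-IP>
}
```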

Some sample commands at Nagios client nrpe.cfg
command[check_users]=/usr/lib64/nagios/plugins/check_users -w 5 -c 10
command[check_total_procs]=/usr/lib64/nagios/plugins/check_procs -w 150 -c 200
command[check_hda1]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10%

Then execute the commands below from the Nagios server, where <nagios-client> is a placeholder for the client's hostname or IP.
  • check_nrpe -H <nagios-client> -c check_total_procs
  • check_nrpe -H <nagios-client> -c check_hda1
Note :
/etc/hosts.allow, /etc/services and /etc/xinetd.d/nrpe need to be configured appropriately for NRPE to work.

SNMP plugin configuration

Add the command below in commands.cfg of the Monitoring Server.
check_snmp -H $HOSTADDRESS$ -C $ARG1$ -o . -w 100 -c 200
Here the object ID needs to be enabled on the Nagios client machine, with the proper community string and permissions.

Other configurations at Monitoring server for SNMP

../objects/.cfg: Add 'check_command check_snmp_proc! '
../objects/commands.cfg: Add 'check_snmp_proc' command definition
/etc/nagios/objects.cfg: Add service_description SNMPD_PROC

Note : Another utility, snmptt, needs to be installed to translate the useful traps for Nagios.

If anybody needs any more info, ping here as a comment.

Wednesday, March 14, 2012

C source code to insert string in another string.

Here I have a small code snippet, which I developed for my personal use. This C code inserts a string into another string. I think many people have this kind of requirement in their regular work.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Insert str_ptr2 into str_ptr1 at the given character position.
   Assumes position is within the bounds of str_ptr1. */
char *insert_new_string(int position, char *str_ptr1, char *str_ptr2)
{
    char *new_str, *tmpstr;

    new_str = (char *)malloc(strlen(str_ptr1) + strlen(str_ptr2) + 1);
    int string_count = 0; /* Counts the characters copied so far */
    tmpstr = new_str;

    while (*str_ptr1 != '\0') {
        if (string_count == position) {
            /* Copy the whole second string at the insertion point */
            while (*str_ptr2 != '\0')
                *new_str++ = *str_ptr2++;
        }
        *new_str++ = *str_ptr1++;
        string_count++;
    }
    *new_str = '\0';

    new_str = tmpstr;
    return new_str;
}

int main(int argc, char *argv[])
{
    int pos = 3;
    char *str_var = " hi everybody";
    char *new_str;

    new_str = insert_new_string(pos, str_var, "new string"); /* insert new string */
    printf("%s\n", new_str);
    free(new_str);
    return 0;
}

Please let me know if you have any problems executing this code.

Thursday, March 31, 2011

C++ Interview questions - part2

When does C++ create a default constructor ?
- C++ creates a default constructor if no explicit constructor is defined by the programmer. A default constructor is a constructor that either has no parameters or whose parameters all have default values.
- In other words, a default constructor is a constructor that can be called with no arguments (this includes a constructor whose parameters all have default arguments).
- The compiler will implicitly define a default constructor if no constructors are explicitly defined for a class.
- This implicitly-declared default constructor is equivalent to a default constructor defined with a blank body.
- If some constructors are defined but they are all non-default, the compiler will not implicitly define a default constructor. This means that a default constructor may not exist for a class.
- When an object value is declared with no argument list, e.g. MyClass x; or allocated dynamically with no argument list, e.g. new MyClass; the default constructor is used to initialize the object.
- When an array of objects is declared, e.g. MyClass x[10]; or allocated dynamically, e.g. new MyClass[10]; the default constructor is used to initialize all the elements.
- When a derived class constructor does not explicitly call the base class constructor in its initializer list, the default constructor of the base class is called.
- When a class constructor does not explicitly call the constructor of one of its object-valued fields in its initializer list, the default constructor of that field's class is called.
- In the standard library, certain containers "fill in" values using the default constructor when a value is not given explicitly, e.g. vector<MyClass>(10); initializes the vector with 10 elements, which are filled with the default-constructed value of the type.


What's the difference between a copy constructor and an overloaded assignment operator ?
- If a new object has to be created before the copying can occur, the copy constructor is used.
- If a new object does not have to be created before the copying can occur, the assignment operator is used.
Eg. base a;
base b;
b = a; // assignment operator
There are three general cases where the copy constructor is called instead of the assignment operator:
1. When instantiating one object and initializing it with values from another object.
Eg. base b = a;
2. When passing an object by value.
3. When an object is returned from a function by value.

Why does an empty class have a size of one byte ?
- The reason is, the language standard requires every object to have a unique address, so all classes must have a memory size of at least 1 byte; otherwise two distinct objects of an empty class could occupy the same memory address. It becomes clearer if you think of an array of objects of an empty class (a class with no members).
- Since each instance of the class/struct must have its own unique address, the compiler just adds a byte of padding. Whereas if you have a member variable, say "int a", inside the class/struct, its size will be 4 bytes, not 5 bytes!

What is an initialization list ?
- A constructor initializer list initializes a class's members directly, before the constructor body runs. It is the only way to initialize const and reference members, and it is also how arguments are passed to base class and member constructors.

How does C++ implement the map container ? What data structure does it use ?
- Typically a self-balancing binary search tree (usually a red-black tree) is used to implement the map container.

Thursday, March 24, 2011

What is the difference between mutex and semaphore?

The difference between mutex and semaphore

Semaphores can be thought of as simple counters that indicate the status of a resource. This counter is protected from direct user access and shielded by the kernel. If the counter is greater than 0, the resource is available; if the counter is 0 or less, the resource is busy.

Semaphores can be either binary or counting, depending on the number of shared resources. A semaphore must be accessible to all processes, so that they can read and check its value and also initialize and reinitialize it appropriately. For this reason the semaphore is stored in the kernel, where it can be accessed by all processes.
The command
$ ipcs -s
will give the list of existing semaphores.
yoshi# ipcs -s
IPC status from /dev/kmem as of Mon Feb 7 11:21:36 2011
s 0 0x4f1c02d4 --ra------- root root
s 1 0x411c1149 --ra-ra-ra- root root
s 2 0x4e0c0002 --ra-ra-ra- root root
s 3 0x41200ec8 --ra-ra-ra- root root
s 4 0x00446f6e --ra-r--r-- root root
s 25 0x410c035b --ra-ra-ra- root root
s 26 0x712068c5 --ra-ra-ra- root root
s 2075 0x00000000 --ra------- www other
s 28 0x00000000 --ra------- www other

A mutex can be released only by the thread that acquired it, whereas with a semaphore any thread can signal to release the critical section.
There are 3 major differences between a mutex and a binary semaphore:
1. A mutex semaphore can be given only by the task that took it, whereas a binary semaphore can be given by any task.
2. Calling semFlush() on a mutex is illegal.
3. A mutex semaphore cannot be given from an ISR.

Apart from the counting behaviour, the biggest difference is in the scope of a mutex and a semaphore. A mutex has process scope, that is, it is valid within a process's address space and can be used for thread synchronization (hence lightweight); semaphores can be used across process spaces and hence can be used for inter-process synchronization.

Semaphores are of two types, binary and counting. A counting semaphore can range over an unrestricted domain, whereas a binary semaphore, which behaves like a mutex, can range only between 0 and 1.


Windows and Unix way of dynamic link library management

Dynamic/Shared libraries: Differences Between Unix and Windows
[ On Unix, linking with a library creates a separate copy of it, but Windows does not create a copy; it keeps a reference to the functions inside the library. ]

Unix and Windows use completely different paradigms for run-time loading of code. Before you try to build a module that can be dynamically loaded, be aware of how your system works.

In Unix, a shared object (.so) file contains code to be used by the program, and also the names of functions and data that it expects to find in the program. When the file is joined to the program, all references to those functions and data in the file's code are changed to point to the actual locations in the program where the functions and data are placed in memory. This is basically a link operation.

In Windows, a dynamic-link library (.dll) file has no dangling references. Instead, an access to functions or data goes through a lookup table. So the DLL code does not have to be fixed up at runtime to refer to the program's memory; instead, the code already uses the DLL's lookup table, and the lookup table is modified at runtime to point to the functions and data.

In Unix, there is only one type of library file (.a) which contains code from several object files (.o). During the link step to create a shared object file (.so), the linker may find that it doesn't know where an identifier is defined. The linker will look for it in the object files in the libraries; if it finds it, it will include all the code from that object file.

In Windows, there are two types of library, a static library and an import library (both called .lib). A static library is like a Unix .a file; it contains code to be included as necessary. An import library is basically used only to reassure the linker that a certain identifier is legal, and will be present in the program when the DLL is loaded. So the linker uses the information from the import library to build the lookup table for using identifiers that are not included in the DLL. When an application or a DLL is linked, an import library may be generated, which will need to be used for all future DLLs that depend on the symbols in the application or DLL.

Suppose you are building two dynamic-load modules, B and C, which should share another block of code A. On Unix, you would not pass A.a to the linker for B and C; that would cause it to be included twice, so that B and C would each have their own copy. In Windows, building A.dll will also build A.lib. You do pass A.lib to the linker for B and C. A.lib does not contain code; it just contains information which will be used at runtime to access A's code.

In Windows, using an import library is sort of like using "import spam"; it gives you access to spam's names, but does not create a separate copy. On Unix, linking with a library is more like "from spam import *"; it does create a separate copy.

Wednesday, March 23, 2011

STL- Containers

- Vectors are a kind of sequence container. As such, their elements are ordered following a strict linear sequence.
- Vector containers are implemented as dynamic arrays; Just as regular arrays, vector containers have their elements stored in contiguous storage locations, which means that their elements can be accessed not only using iterators but also using offsets on regular pointers to elements.
- But unlike regular arrays, storage in vectors is handled automatically, allowing it to be expanded and contracted as needed.
- Vectors are good at:
* Accessing individual elements by their position index (constant time).
* Iterating over the elements in any order (linear time).
* Add and remove elements from its end (constant amortized time).
- Compared to arrays, they provide almost the same performance for these tasks, plus the ability to be easily resized. However, they usually consume more memory than arrays, because their capacity is handled automatically.
Element access:
1 - operator[] - Access element at a particular location (offset)
2 - at - Access element at a particular location (offset) (throws an exception on failure)
3 - front - Access first element
4 - back - Access last element

- Internally, vectors (like all containers) have a size, which represents the number of elements contained in the vector. But vectors also have a capacity, which determines the amount of storage space they have allocated, and which can be either equal to or greater than the actual size. When the number of elements exhausts the capacity of the vector, a reallocation is required.
- Re-allocations may be a costly operation in terms of performance, since they generally involve the entire storage space used by the vector to be copied to a new location. Therefore, whenever large increases in size are planned for a vector, it is recommended to explicitly indicate a capacity for the vector using member function vector::reserve.
- How is vector size managed ? Is it one contiguous memory allocation, or are memory blocks linked ?
- Vectors are dynamic arrays with a pre-allocated capacity in contiguous memory. When the elements exhaust the capacity, the whole vector is copied to a new place as part of reallocating the capacity.

- Lists are a kind of sequence container. As such, their elements are ordered following a linear sequence.
- List containers are implemented as doubly-linked lists; Doubly linked lists can store each of the elements they contain in different and unrelated storage locations. The ordering is kept by the association to each element of a link to the element preceding it and a link to the element following it.
- This provides the following advantages to list containers:
* Efficient insertion and removal of elements anywhere in the container (constant time).
* Efficient moving of elements and blocks of elements within the container, or even between different containers (constant time).
* Iterating over the elements in forward or reverse order (linear time).
- Compared to other base standard sequence containers (vectors and deques), lists perform generally better in inserting, extracting and moving elements in any position within the container, and therefore also in algorithms that make intensive use of these, like sorting algorithms.
- The main drawback of lists compared to these other sequence containers is that they lack direct access to the elements by their position.

- Sets are a kind of associative container that stores unique elements, and in which the elements themselves are the keys.
- Internally, the elements in a set are always sorted from lower to higher following a specific strict weak ordering criterion set on container construction.
- Sets are typically implemented as binary search trees.
- Therefore, the main characteristics of set as an associative container are:
* Unique element values: no two elements in the set can compare equal to each other. For a similar associative container allowing for multiple equivalent elements, see multiset.
* The element value is the key itself. For a similar associative container where elements are accessed using a key, but map to a value different than this key, see map.
* Elements follow a strict weak ordering at all times. Unordered associative arrays, like unordered_set, are available in implementations following TR1.

- Map is a Sorted Associative Container that associates objects of type Key with objects of type Data.
- Map is a Pair Associative Container, meaning that its value type is pair.
- It is also a Unique Associative Container, meaning that no two elements have the same key.
- Internally, the elements in the map are sorted from lower to higher key value following a specific strict weak ordering criterion set on construction
- As associative containers, they are especially designed to be efficient at accessing elements by their key.
- Iterators are used to access the mapped value based on the key.
- Iterators to elements of a map give access to both the key and the mapped value.