Keyboard copy and paste shortcuts in the PuTTY terminal

Everyone knows how to copy and paste text in the PuTTY terminal using the mouse (in case you don’t know: selecting text with the mouse automatically copies it, and a right click pastes it), but fewer people know that it is also possible with the keyboard. You can use the shortcut below to paste text into the PuTTY terminal:

SHIFT + INS 

Enjoy 🙂


Connect to serial console from Linux command line

The easiest way to connect to a serial console from the Linux command line is with the screen command. In my example below, I show how to connect to a server’s serial console using a serial-to-USB cable.

[root@test ~]# screen /dev/ttyUSB0
******************************************
 Connected to *******
******************************************
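If the console shows garbage or nothing at all, the serial line settings probably don’t match the device. screen accepts the baud rate and line options after the device name; the 115200 below is only an assumed example, so use whatever your device’s console is actually set to:

```shell
# Connect at 115200 baud, 8 data bits (cs8); adjust to your device's settings
screen /dev/ttyUSB0 115200,cs8
# Detach from the session with Ctrl-a d; kill it with Ctrl-a k
```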

Enjoy 🙂

Hack the permissions in Linux

Many of you reached here because of the word ‘hack’. Sorry to disappoint you (or maybe not!): this post is only about how Linux permissions actually work, not about breaking into systems the way script kiddies imagine. I am pointing out how permissions really behave, and how that can lead to unwanted results if you are not sure what you are doing. For a Linux admin, the commands below should be enough to show what I am talking about:

root@ubuntu:~# mkdir /root/test
root@ubuntu:~# ls -ld /root/test/
drwxr-xr-x 2 root root 4096 Mar 1 15:27 /root/test/
root@ubuntu:~# useradd tom
root@ubuntu:~# cat /etc/passwd|grep tom
tom:x:1001:1001::/home/tom:
root@ubuntu:~# chown -R tom:tom /root/test/
root@ubuntu:~# ls -ld /root/test/
drwxr-xr-x 2 tom tom 4096 Mar 1 15:27 /root/test/
root@ubuntu:~# userdel tom
root@ubuntu:~# ls -ld /root/test/
drwxr-xr-x 2 1001 1001 4096 Mar 1 15:27 /root/test/
root@ubuntu:~# useradd jerry
root@ubuntu:~# cat /etc/passwd|grep jerry
jerry:x:1001:1001::/home/jerry:
root@ubuntu:~# ls -ld /root/test/
drwxr-xr-x 2 jerry jerry 4096 Mar 1 15:27 /root/test/

See how the user jerry got access to tom’s files. This happened because both users ended up with the same UID (1001): when tom was deleted, his files kept UID 1001, and the next user created was assigned that same free UID. So if you are dealing with a large number of users, never simply delete a user. Disable the account instead, or change the ownership of all the user’s files to something more suitable before removing the user.

Enjoy 🙂

Run a script at boot time in CentOS 7

By default, /etc/rc.local and /etc/rc.d/rc.local are no longer executable in CentOS 7 because of the new systemd changes. Follow the steps below to make the script /root/g.sh run at boot time:

1. chmod +x /etc/rc.d/rc.local
2. chmod +x /root/g.sh
3. Mention your script at the bottom of the file /etc/rc.local (/etc/rc.local is a symlink to /etc/rc.d/rc.local) as below:

sh /root/g.sh

Restart and check 🙂
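Alternatively, since the whole reason rc.local stopped working is systemd, you can give the script its own unit and skip rc.local entirely. This is a minimal sketch; the unit name g-script.service is just an example I made up:

```ini
# /etc/systemd/system/g-script.service
[Unit]
Description=Run /root/g.sh at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/bin/sh /root/g.sh

[Install]
WantedBy=multi-user.target
```

Then enable it with systemctl enable g-script.service.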

Load balancing Apache web servers using HAProxy

Here I am discussing how to set up a load balancer using HAProxy to balance traffic across two Apache web servers.

ubuntu1 : 192.168.56.101
ubuntu2 : 192.168.56.102
ubuntu3 : 192.168.56.103

I have set up the first two servers as web servers and installed HAProxy on the third one. You can use tasksel to set up the LAMP servers easily.

To install haproxy on ubuntu3:

apt-get install haproxy

To start HAProxy at boot time, set ENABLED=1 in /etc/default/haproxy.

Configuration :

1. Backup the existing configuration file

cp -pr /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bk

2. Edit the haproxy.cfg as below :

root@ubuntu3:~# more /etc/haproxy/haproxy.cfg

global
    log 127.0.0.1 local0 notice
    maxconn 2000
    user haproxy
    group haproxy

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option redispatch
    timeout connect  5000
    timeout client  10000
    timeout server  10000


listen appname 0.0.0.0:80
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth Myusername1:mypassword1
    stats auth myusername2:mypassword2
    balance roundrobin
    option httpclose
    option forwardfor
    server ubuntu1 192.168.56.101:80 check
    server ubuntu2 192.168.56.102:80 check

where,
log defines the syslog server the logs should be sent to
maxconn defines the maximum number of connections the load balancer will accept
retries specifies the maximum number of connection attempts to a back-end server
option redispatch enables session redistribution when a server fails
timeout connect specifies the maximum time to wait for a connection attempt to a back-end server to succeed
timeout client and timeout server specify how long a connection may stay inactive while waiting for the client or the server to send or acknowledge data

You can see the HAProxy status page through the link http://loadbalancer_ip/haproxy?stats, which we set with the stats uri directive. The usernames/passwords that can be used are set in the two stats auth lines; both of them will work.
In my case the link is

http://192.168.56.103/haproxy?stats

[Screenshot: HAProxy stats page]

You can use different algorithms for the load balancing; here we are using roundrobin. Other options available are static-rr, leastconn, source, uri, url_param, etc.

ubuntu1 and ubuntu2 are the back-end web servers we are forwarding the traffic to.

Once the configuration is done, restart the service:

service haproxy restart
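It is also worth validating the file before restarting; haproxy can check the configuration syntax without touching the running service:

```shell
# Parse /etc/haproxy/haproxy.cfg and report errors without starting the proxy
haproxy -c -f /etc/haproxy/haproxy.cfg
```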

Testing:

Create a file test.php on both LAMP servers, with the web server’s name as its content.
Access the load balancer IP from a browser and you will see the content alternate each time you reload the page.
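You can watch the rotation from the command line as well. Assuming each test.php simply prints its own server’s name, repeated curl requests against the load balancer should alternate between the two back ends:

```shell
# Hit the load balancer a few times; roundrobin should alternate the output
for i in 1 2 3 4; do
  curl -s http://192.168.56.103/test.php
  echo
done
```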

To troubleshoot, check the log file : /var/log/haproxy.log

Whenever a host is not available, you can see logs similar to below :

Feb 24 16:23:31 localhost haproxy[2400]: Server appname/ubuntu2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.

and when it comes back online:

Feb 24 16:25:02 localhost haproxy[2400]: Server appname/ubuntu2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.

Thanks 🙂

Ubuntu resolv.conf file keeps getting reset

You might have noticed that manual entries in the file resolv.conf keep getting reset, after which you can no longer resolve public domains. On the latest Ubuntu releases, we should use resolvconf to manage the DNS entries; any entries added directly inside resolv.conf will be overwritten whenever resolvconf regenerates the file.

To edit the entries,
vi /etc/resolvconf/resolv.conf.d/base
Add the entries (here I am using google dns) as below :

nameserver 8.8.8.8
nameserver 8.8.4.4

Restart the service as below :

service resolvconf restart

Now you can check your resolv.conf file and you will see the manual entries replicated there.

root@test:~# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 8.8.8.8
nameserver 8.8.4.4
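With the entries in place, you can confirm that resolution actually goes through the new nameservers:

```shell
# Resolve via the servers now listed in /etc/resolv.conf
nslookup google.com
# Query 8.8.8.8 directly to rule out problems with the local resolver setup
nslookup google.com 8.8.8.8
```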

Thanks 🙂

Monitor GlusterFS using nagios plugin

Here I am discussing how to monitor GlusterFS using a Nagios plugin. You can download the plugin from the link below:

http://exchange.nagios.org/directory/Plugins/System-Metrics/File-System/GlusterFS-checks/details

I have copied the code at the bottom of this page, in case you are not able to download it from that link in the future.

I assume you have already installed the Nagios packages (we are using NRPE for monitoring GlusterFS). I have another post discussing how to configure GlusterFS for file replication here:

https://gopukrish.wordpress.com/glusterfs/

Briefly, the concept is as follows: download the script to the Gluster node and make sure it gives the expected output when executed locally. Once confirmed, register it as an NRPE command and call it from the Nagios server. If you get the expected results there too, add it to the Nagios configuration file for the Gluster node you would like to monitor.

So here is the process. First, from the Gluster node, confirm that the script executes fine and gives the expected results:

/usr/lib/nagios/plugins/check_glusterfs.sh -v datavol -n 2

If you get any errors, do the following two steps on the Gluster node (the Nagios client):

1. Install the package bc (e.g. apt-get install bc).

2. Grant the necessary sudo permissions to the NRPE user. To find the NRPE user, check the nrpe.cfg configuration file on the Gluster node. In my case it was ‘nagios‘ (replace ‘nagios’ with ‘nrpe’ below if your NRPE user is ‘nrpe’). Then edit the sudoers fragment: vi /etc/sudoers.d/nrpe

Defaults:nrpe !requiretty
nagios ALL=(root) NOPASSWD:/usr/sbin/gluster volume status [[\:graph\:]]* detail,/usr/sbin/gluster volume heal [[\:graph\:]]* info

If you haven’t added these permissions, you may get the error below:

no bricks found

The same you can test from the gluster server as below :

root@www:/usr/local/nagios/etc/objects# /usr/local/nagios/libexec/check_nrpe -H my_server -c check_glusterfs

CRITICAL: no bricks found

Once the permission is added correctly:

root@www:/usr/local/nagios/etc/objects# /usr/local/nagios/libexec/check_nrpe -H my_server -c check_glusterfs

OK: 2 bricks; free space 26GB

NRPE gives us the expected output when run from the Nagios server, so we can safely add it to the configuration file on the Nagios server:

my_server.cfg :

define service {
    check_command check_nrpe!check_glusterfs
    service_description Gluster Server Health Check
    host_name my_server
    use generic-service
}

Change my_server to your server’s hostname or IP address.

If you hadn’t added the required permissions for NRPE, you may see an error like this:

[Screenshot: Gluster check in CRITICAL state]

Note that in some versions of NRPE, you might need to use check_nrpe_1arg!check_glusterfs instead of check_nrpe!check_glusterfs as the check_command.

Edit commands.cfg and define the command below for NRPE, unless you have a separate config file for NRPE. In my case it was in /etc/nagios-plugins/config/check_nrpe.cfg. If you don’t have one, you can add the NRPE definition in commands.cfg as below:

commands.cfg :

define command {
    command_name check_nrpe
    command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -t 30 -c $ARG1$ $ARG2$
}

On the client: vi /etc/nagios/nrpe.cfg

command[check_glusterfs]=/usr/lib/nagios/plugins/check_glusterfs.sh -v datavol -n 2
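One step that is easy to forget: the NRPE daemon has to be restarted after editing nrpe.cfg before the new command becomes visible. The service name varies by distribution; the two below are the common ones:

```shell
# Debian/Ubuntu
service nagios-nrpe-server restart
# RHEL/CentOS
# service nrpe restart
```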

This means that whenever the Nagios server checks the service using ‘check_nrpe!check_glusterfs’, check_glusterfs.sh runs on the client and sends its output back to the Nagios server. Once you have set everything up correctly, you can see the status as ‘OK’.

[Screenshot: Gluster check in OK state]

I know this post is a little confusing and not well organised; I posted it as a quick reference for my colleagues and shall organise it properly later. In the meantime, if you have any doubts, please let me know. Thanks 🙂

gluster code :

#!/bin/bash

# This Nagios script was written against version 3.3 & 3.4 of Gluster. Older
# versions will most likely not work at all with this monitoring script.
#
# Gluster currently requires elevated permissions to do anything. In order to
# accommodate this, you need to allow your Nagios user some additional
# permissions via sudo. The line you want to add will look something like the
# following in /etc/sudoers (or something equivalent):
#
# Defaults:nagios !requiretty
# nagios ALL=(root) NOPASSWD:/usr/sbin/gluster volume status [[\:graph\:]]* detail,/usr/sbin/gluster volume heal [[\:graph\:]]* info
#
# That should give us all the access we need to check the status of any
# currently defined peers and volumes.

# Inspired by a script of Mark Nipper
#
# 2013, Mark Ruys, mark.ruys@peercode.nl

PATH=/sbin:/bin:/usr/sbin:/usr/bin

PROGNAME=$(basename -- $0)
PROGPATH=`echo $0 | sed -e 's,[\\/][^\\/][^\\/]*$,,'`
REVISION="1.0.0"

. $PROGPATH/utils.sh

# parse command line
usage () {
  echo ""
  echo "USAGE: "
  echo " $PROGNAME -v VOLUME -n BRICKS [-w GB -c GB]"
  echo " -n BRICKS: number of bricks"
  echo " -w and -c values in GB"
  exit $STATE_UNKNOWN
}

while getopts "v:n:w:c:" opt; do
  case $opt in
    v) VOLUME=${OPTARG} ;;
    n) BRICKS=${OPTARG} ;;
    w) WARN=${OPTARG} ;;
    c) CRIT=${OPTARG} ;;
    *) usage ;;
  esac
done

if [ -z "${VOLUME}" -o -z "${BRICKS}" ]; then
  usage
fi

Exit () {
  $ECHO "$1: ${2:0}"
  status=STATE_$1
  exit ${!status}
}

# check for commands
for cmd in basename bc awk sudo pidof gluster; do
  if ! type -p "$cmd" >/dev/null; then
    Exit UNKNOWN "$cmd not found"
  fi
done

# check for glusterd (management daemon)
if ! pidof glusterd &>/dev/null; then
  Exit CRITICAL "glusterd management daemon not running"
fi

# check for glusterfsd (brick daemon)
if ! pidof glusterfsd &>/dev/null; then
  Exit CRITICAL "glusterfsd brick daemon not running"
fi

# get volume heal status
heal=0
for entries in $(sudo gluster volume heal ${VOLUME} info | awk '/^Number of entries: /{print $4}'); do
  if [ "$entries" -gt 0 ]; then
    let $((heal+=entries))
  fi
done
if [ "$heal" -gt 0 ]; then
  errors=("${errors[@]}" "$heal unsynched entries")
fi

# get volume status
bricksfound=0
freegb=9999999
shopt -s nullglob
while read -r line; do
  field=($(echo $line))
  case ${field[0]} in
  Brick)
    brick=${field[@]:2}
    ;;
  Disk)
    key=${field[@]:0:3}
    if [ "${key}" = "Disk Space Free" ]; then
      freeunit=${field[@]:4}
      free=${freeunit:0:-2}
      unit=${freeunit#$free}
      if [ "$unit" != "GB" ]; then
        Exit UNKNOWN "unknown disk space size $freeunit"
      fi
      free=$(echo "${free} / 1" | bc -q)
      if [ $free -lt $freegb ]; then
        freegb=$free
      fi
    fi
    ;;
  Online)
    online=${field[@]:2}
    if [ "${online}" = "Y" ]; then
      let $((bricksfound++))
    else
      errors=("${errors[@]}" "$brick offline")
    fi
    ;;
  esac
done < <(sudo gluster volume status ${VOLUME} detail)

if [ $bricksfound -eq 0 ]; then
  Exit CRITICAL "no bricks found"
elif [ $bricksfound -lt $BRICKS ]; then
  errors=("${errors[@]}" "found $bricksfound bricks, expected $BRICKS ")
fi

if [ -n "$CRIT" -a -n "$WARN" ]; then
  if [ $CRIT -ge $WARN ]; then
    Exit UNKNOWN "critical threshold below warning"
  elif [ $freegb -lt $CRIT ]; then
    Exit CRITICAL "free space ${freegb}GB"
  elif [ $freegb -lt $WARN ]; then
    errors=("${errors[@]}" "free space ${freegb}GB")
  fi
fi

# exit with warning if errors
if [ -n "$errors" ]; then
  sep='; '
  msg=$(printf "${sep}%s" "${errors[@]}")
  msg=${msg:${#sep}}

  Exit WARNING "${msg}"
fi

# exit with no errors
Exit OK "${bricksfound} bricks; free space ${freegb}GB"