
“This page includes other resources which are not secure.”

“The connection to this website is not fully secure because it contains unencrypted elements (such as images) or the encryption is not strong enough.”

You might have noticed these warnings in your browser even though you installed your SSL certificates correctly, and wondered what to do next. The answer is simple: replace the http links with https in your website. Yes, you should find all the insecure (http) calls to images, videos, CSS, and JavaScript, and replace them with https. Finding these links manually and replacing them correctly is a hair-pulling job, but I can definitely help you find the http links.

Try the methods below.

1. https://www.whynopadlock.com

Just enter your website's URL there and run the check. It will list all the insecure URLs on your website.

In some cases, I have noticed that the SSL warning appears only after you log in. In that case you can't use the first suggestion, but you can use the Chrome console as mentioned below.

2. Using Chrome console

Load the site in Google Chrome -> Press F12 -> Select Console.
You will see warnings in red stating that the mixed content should be replaced. Once you replace all those http links with https, your site should load fine.
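If you have shell access to the web root, grep and sed can do most of the work for you. A minimal sketch (the /tmp/site_demo path is just a demo stand-in for your real document root, e.g. /var/www/html):

```shell
# Demo document root; replace with your real web root, e.g. /var/www/html
mkdir -p /tmp/site_demo
cat > /tmp/site_demo/index.html <<'EOF'
<img src="http://example.com/logo.png">
<script src="http://example.com/app.js"></script>
EOF

# 1. List every file (and line) still referencing insecure http:// resources
grep -rn 'http://' /tmp/site_demo

# 2. Replace them in place, keeping a .bak backup of each changed file
find /tmp/site_demo -name '*.html' -exec sed -i.bak 's|http://|https://|g' {} +

# 3. Confirm no insecure links are left
grep -rn 'http://' /tmp/site_demo --include='*.html' || echo "no insecure links left"
```

A blanket replace like this assumes every resource is actually reachable over https, so spot-check the result in a browser afterwards.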

Hope this helps 🙂

Integrate Linux machine with AD

I am using a piece of software named ‘PowerBroker Identity Services’ (PBIS) to integrate my Ubuntu machine with the AD.
First of all, download the corresponding package from the site:

In my case it was Debian-based, and the download link is as below:

mkdir /root/theG; cd /root/theG
wget http://download.beyondtrust.com/PBISO/8.2.2/linux.deb.x64/pbis-open-
chmod +x pbis-open-*
./pbis-open-*

Restart the machine.

To join the domain, run the below command:

domainjoin-cli join DOMAIN.COM adminusername

Once it reports success, restart the machine. You can then check the status using the below commands:

getent passwd
getent group

You can now log in to the machine using your AD credentials.

If you want to allow all the members of a particular AD group to have full sudo permission, add an entry like this:

%group^name ALL=(ALL) ALL
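For example, edited with visudo (the group name "linux^admins" is just a placeholder here; note that PBIS represents spaces in AD group names with the '^' character, which is why it appears in the entry):

```
# Members of the AD group "linux admins" get full sudo access
%linux^admins ALL=(ALL) ALL
```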

Thanks 🙂

Run a script at boot time in CentOS 7

With the switch to systemd, /etc/rc.local and /etc/rc.d/rc.local are no longer executable by default in CentOS 7. Follow the below steps to make the script /root/g.sh run at boot time:

1. chmod +x /etc/rc.d/rc.local
2. chmod +x /root/g.sh
3. Mention your script at the bottom of the file /etc/rc.local (/etc/rc.local is a symlink to /etc/rc.d/rc.local) as below:

sh /root/g.sh
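Put together, /etc/rc.d/rc.local would look something like this sketch (the touch line is part of the stock CentOS 7 file; only the last line is ours):

```
#!/bin/bash
# stock CentOS 7 rc.local header trimmed for brevity

touch /var/lock/subsys/local

# our boot-time script
sh /root/g.sh
```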

Restart and check 🙂

Ubuntu keeps resetting the Laptop brightness

At least some of you might have noticed that Ubuntu keeps resetting the brightness level of your laptop whenever you reboot. I have found a simple solution that will at least let you start with a particular brightness level each time you restart. The solution is as follows:

1. First set your brightness to the desired level, then read the actual value:

gopu@goputec:~$ cat /sys/class/backlight/intel_backlight/brightness

2. Add an echo of this value to the bottom of the rc.local file (above ‘exit 0’) so that each time Ubuntu starts, it sets the brightness to the value we specified:

gopu@goputec:~$ cat /etc/rc.local 
#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.
# By default this script does nothing.
echo 247 > /sys/class/backlight/intel_backlight/brightness
exit 0

3. Test it yourself by restarting the laptop.
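One caveat: if you later change hardware, a hard-coded value may exceed the new panel's maximum. A slightly safer variant clamps the value against max_brightness first. The snippet below simulates the sysfs files under /tmp so the logic can be tried anywhere; on a real machine you would point it at /sys/class/backlight/intel_backlight:

```shell
# Simulated sysfs files; on a real system use /sys/class/backlight/intel_backlight
bl=/tmp/backlight_demo
mkdir -p "$bl"
echo 937 > "$bl/max_brightness"

desired=1200                       # the value you would normally hard-code
max=$(cat "$bl/max_brightness")
if [ "$desired" -gt "$max" ]; then # never write more than the panel supports
    desired=$max
fi
echo "$desired" > "$bl/brightness"
cat "$bl/brightness"
```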

Enjoy 🙂

Load balancing Apache web servers using haproxy

Here I am discussing setting up a load balancer using haproxy to balance traffic across two Apache web servers:

ubuntu1 :
ubuntu2 :
ubuntu3 :

I have set up the first two servers as web servers and installed haproxy on the third one. Any standard LAMP guide can be used to set up the web servers easily.

To install haproxy on ubuntu3:

apt-get install haproxy

To start the haproxy at boot time, set ENABLED=1 in /etc/default/haproxy

Configuration :

1. Backup the existing configuration file

cp -pr /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bk

2. Edit the haproxy.cfg as below :

root@ubuntu3:~# more /etc/haproxy/haproxy.cfg

global
    log local0 notice
    maxconn 2000
    user haproxy
    group haproxy

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option redispatch
    timeout connect  5000
    timeout client  10000
    timeout server  10000

listen appname
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth Myusername1:mypassword1
    stats auth myusername2:mypassword2
    balance roundrobin
    option httpclose
    option forwardfor
    server ubuntu1 check
    server ubuntu2 check

log defines the syslog facility the logs are sent to.
maxconn defines the maximum number of connections the load balancer will accept.
retries specifies the number of connection attempts to a back-end server before it is considered failed.
option redispatch enables session redistribution to another server when a connection fails.
timeout connect specifies the maximum time to wait for a connection attempt to a back-end server to succeed.
timeout client and timeout server specify the maximum inactivity time allowed on the client and server sides, respectively.

You can see the haproxy status through the link http://loadbalancer_ip/haproxy?stats,
which we set in the stats uri line. The usernames/passwords that can be used are set in the
next two stats auth lines; both of them will work.
In my case the link is


You can use different algorithms for the load balancing. Here we are using roundrobin; other options available are static-rr, leastconn, source, uri, url_param, etc.

ubuntu1 and ubuntu2 are the backend web servers we are forwarding the traffic to.
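To see what roundrobin means in practice, here is a tiny shell simulation of the scheduling; this only illustrates the algorithm, it is not haproxy itself:

```shell
# Round-robin: requests are handed to the backends in rotating order
servers=(ubuntu1 ubuntu2)
for req in 1 2 3 4; do
    # request 1 -> ubuntu1, 2 -> ubuntu2, 3 -> ubuntu1, ...
    backend=${servers[$(( (req - 1) % ${#servers[@]} ))]}
    echo "request $req -> $backend"
done
```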

Once the configuration is done, restart the service:

service haproxy restart


Create a file test.php on both LAMP servers, with the web server's name as its content.
Access the load balancer IP from a browser and you will see the content alternate each time you reload the page.

To troubleshoot, check the log file : /var/log/haproxy.log

Whenever a host is not available, you can see logs similar to below :

Feb 24 16:23:31 localhost haproxy[2400]: Server appname/ubuntu2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.

and when it comes back online, a corresponding "Server appname/ubuntu2 is UP" message is logged.

Thanks 🙂

Got packet bigger than ‘max_allowed_packet’ bytes when dumping table

You may get the error mentioned in the post heading while taking a backup with automatic backup tools such as MySQL ZRM or with mysqldump. Sometimes the error remains even after adding the ‘max_allowed_packet’ entry to the MySQL server's my.cnf file. The reason is that you have to add the ‘max_allowed_packet’ entry on the client side as well. By default ‘max_allowed_packet’ is 1MB on the server and 16MB on the client (exact defaults vary by MySQL version), and the largest value it can take on either side is 1GB. Set a value appropriate for your situation and your backup should work well.
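For example, in my.cnf (the 256M value is just an illustration; pick what your largest rows need):

```
[mysqld]
max_allowed_packet=256M

# client-side section; mysqldump reads [mysqldump]
[mysqldump]
max_allowed_packet=256M
```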

Enjoy 🙂

Linux Cluster using drbd and heartbeat

Here I am discussing how to set up a cluster of two Linux servers, serverg1 and serverg2. Flush iptables before the setup. I am using a floating IP address for this cluster. Before starting the installations, make sure that a partition of equal size is available on each server for this setup. Here I am creating a partition /dev/sdb1 of 100 MB.

fdisk /dev/sdb

Restart the servers so the new partition table is re-read, then wipe the partition (any existing filesystem signature on /dev/sdb1 can confuse DRBD; the real filesystem is created later on /dev/drbd0):

dd if=/dev/zero bs=1M count=100 of=/dev/sdb1; sync

Now the partition is ready. We can start installing the packages:

yum install drbd84-utils.x86_64
yum install kmod-drbd84.x86_64
yum install heartbeat

Update the /etc/hosts file on both servers with the hostnames for easy name resolution:

serverg1
serverg2

DRBD Configuration

Now edit /etc/drbd.conf with the below values :

global { usage-count no; }
resource r0 {
    protocol C;
    startup { wfc-timeout 10; degr-wfc-timeout 30; } #change timers to your need
    disk { on-io-error detach; } # or panic, ...
    net {
        after-sb-0pri discard-least-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri call-pri-lost-after-sb;
        cram-hmac-alg "sha1";
        shared-secret "my_secret_password_G";
    }
    syncer { rate 5M; }
    on serverg1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
    }
    on serverg2 {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
    }
}

Copy the configuration file /etc/drbd.conf to the second server:

scp /etc/drbd.conf root@serverg2:/etc/drbd.conf

Now create the metadata on both the servers:

drbdadm create-md r0

Start drbd on both the servers:

service drbd start

Verify that both servers are in the Secondary role:

cat /proc/drbd

You will see both nodes as Secondary; that is expected at this point. Now we need to start the initial replication from the master server (here serverg1):

drbdadm -- --overwrite-data-of-peer primary r0

Watch the DRBD status until it shows UpToDate:

watch -n 1 cat /proc/drbd
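A resource line in /proc/drbd looks roughly like the sample below; the ds: field carries the disk states, and the initial sync is complete when both sides read UpToDate. This snippet writes a simulated copy of the file so the parsing can be tried anywhere (a real node would read /proc/drbd directly):

```shell
# Simulated /proc/drbd resource line
cat > /tmp/proc_drbd_demo <<'EOF'
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
EOF

# Extract the disk-state pair (local/peer)
awk -F'ds:' '/ds:/ { split($2, a, " "); print a[1] }' /tmp/proc_drbd_demo
```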

Now format the DRBD device and mount it on the master server:

mkfs.ext4 /dev/drbd0
mkdir /replication
mount /dev/drbd0 /replication

At any time, you can check the server role by using the below command :

drbdadm role r0

The primary server should return:

[root@serverg1 ~]# drbdadm role r0
Primary/Secondary

If you want to switch to the second server, run the below on the current primary:

1. umount /replication
2. drbdadm secondary r0

Then run the below on the current secondary:

1. mkdir /replication
2. drbdadm primary r0
3. mount /dev/drbd0 /replication

Once you have switched the servers, confirm the status with:

df -h
drbdadm role r0

So we have gone through how to switch manually; we can automate the same using heartbeat.

Heartbeat configuration:
Create a configuration file /etc/ha.d/ha.cf on the primary server serverg1

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
bcast eth0
auto_failback on
node serverg1
node serverg2

Set auto_failback to ‘off’ if you want to avoid switching back to the primary automatically once it recovers.
Create the /etc/ha.d/authkeys file and add the below:

auth 1
1 sha1 MySecret

Change its permissions to 600:

chmod 600 /etc/ha.d/authkeys

Edit /etc/ha.d/haresources as below:

serverg1 drbddisk::r0 Filesystem::/dev/drbd0::/replication::ext4 IPaddr:: mysqld
serverg1 MailTo::firstaddress@gmail.com,secondaddress@gmail.com::DRBD/HA-ALERT

It is important to keep these entries on separate lines. Now start the heartbeat service on serverg1:

service heartbeat start

Now copy all the heartbeat files to the second server:

scp /etc/ha.d/ha.cf /etc/ha.d/authkeys /etc/ha.d/haresources root@serverg2:/etc/ha.d/

Now start the heartbeat service on the second server (serverg2) and enable it at boot:

service heartbeat start
chkconfig --add heartbeat
chkconfig heartbeat on

Now verify the primary and secondary roles using the commands:

drbdadm role r0
df -h

If you stop the heartbeat service on one server or shut that server down, the partition will be mounted automatically on the other server and the services mentioned in the haresources file will be started there.