
    Sunday, October 19, 2014

    Interview Questions on Red Hat Cluster with Answers

    How can you define a cluster and what are its basic types?
    A cluster is two or more computers (called nodes or members) that work together to perform a task. There are four major types of clusters:
    • Storage
    • High availability
    • Load balancing
    • High performance
    What is Storage Cluster?
    • Storage clusters provide a consistent file system image across servers in a cluster, allowing the servers to simultaneously read and write to a single shared file system. 
    • A storage cluster simplifies storage administration by limiting the installation and patching of applications to one file system. 
    • The High Availability Add-On provides storage clustering in conjunction with Red Hat GFS2.
    What is High Availability Cluster?
    • High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. 
    • Typically, services in a high availability cluster read and write data (via read-write mounted file systems). 
    • A high availability cluster must maintain data integrity as one cluster node takes over control of a service from another cluster node. 
    • Node failures in a high availability cluster are not visible from clients outside the cluster. 
    • High availability clusters are sometimes referred to as failover clusters.
    What is Load Balancing Cluster?
    • Load-balancing clusters dispatch network service requests to multiple cluster nodes to balance the request load among the cluster nodes. 
    • Load balancing provides cost-effective scalability because you can match the number of nodes according to load requirements. If a node in a load-balancing cluster becomes inoperative, the load-balancing software detects the failure and redirects requests to other cluster nodes. 
    • Node failures in a load-balancing cluster are not visible from clients outside the cluster. 
    • Load balancing is available with the Load Balancer Add-On.
    What is a High Performance Cluster?
    • High-performance clusters use cluster nodes to perform concurrent calculations. 
    • A high-performance cluster allows applications to work in parallel, therefore enhancing the performance of the applications. 
    • High performance clusters are also referred to as computational clusters or grid computing.
    How many nodes are supported in a Red Hat 6 Cluster?
    A cluster configured with qdiskd supports a maximum of 16 nodes. The reason for the limit is scalability: increasing the node count increases the amount of synchronous I/O contention on the shared quorum disk device.
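    On a running cluster you can check the current node count, votes and quorum state with the standard cluster tools; a quick sketch (run as root on any cluster node, output omitted):

    # cman_tool status        (expected votes, total votes and quorum state)
    # cman_tool nodes         (member nodes and their current state)
    # clustat                 (cluster and service status as seen by rgmanager)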

    What is the minimum size of the Quorum Disk?
    The minimum size of the block device is 10 Megabytes.

    What is the order in which you will start the Red Hat Cluster services?
    In Red Hat 4
    service ccsd start
    service cman start
    service fenced start
    service clvmd start (If CLVM has been used to create clustered volumes)
    service gfs start
    service rgmanager start

    In Red Hat 5
    service cman start
    service clvmd start
    service gfs start
    service rgmanager start

    In Red Hat 6
    service cman start
    service clvmd start
    service gfs2 start
    service rgmanager start

    What is the order to stop the Red Hat Cluster services?
    In Red Hat 4
    service rgmanager stop
    service gfs stop
    service clvmd stop
    service fenced stop
    service cman stop
    service ccsd stop

    In Red Hat 5
    service rgmanager stop
    service gfs stop
    service clvmd stop
    service cman stop

    In Red Hat 6
    service rgmanager stop
    service gfs2 stop
    service clvmd stop
    service cman stop

    What are the performance enhancements in GFS2 as compared to GFS?
    • Better performance for heavy usage in a single directory
    • Faster synchronous I/O operations
    • Faster cached reads (no locking overhead)
    • Faster direct I/O with preallocated files (provided I/O size is reasonably large, such as 4M blocks)
    • Faster I/O operations in general
    • Faster execution of the df command, because of faster statfs calls
    • Improved atime mode to reduce the number of write I/O operations generated by atime when compared with GFS
    • GFS2 supports the following features.
    • extended file attributes (xattr)
    • the lsattr() and chattr() attribute settings via standard ioctl() calls
    • nanosecond timestamps
    • GFS2 uses less kernel memory.
    • GFS2 requires no metadata generation numbers.
    • Allocating GFS2 metadata does not require reads. Copies of metadata blocks in multiple journals are managed by revoking blocks from the journal before lock release.
    • GFS2 includes a much simpler log manager that knows nothing about unlinked inodes or quota changes.
    • The gfs2_grow and gfs2_jadd commands use locking to prevent multiple instances running at the same time.
    • The ACL code has been simplified for calls like creat() and mkdir().
    • Unlinked inodes, quota changes, and statfs changes are recovered without remounting the journal.
    What is the maximum file system support size for GFS2?
    • GFS2 is based on 64 bit architecture, which can theoretically accommodate an 8 EB file system. 
    • However, the current supported maximum size of a GFS2 file system for 64-bit hardware is 100 TB. 
    • The current supported maximum size of a GFS2 file system for 32-bit hardware for Red Hat Enterprise Linux Release 5.3 and later is 16 TB. 
    • NOTE: It is better to have 10 1TB file systems than one 10TB file system.
    What is the journaling filesystem?
    • A journaling filesystem is a filesystem that maintains a special file called a journal that is used to repair any inconsistencies that occur as the result of an improper shutdown of a computer.
    • In journaling file systems, every time GFS2 writes metadata, the metadata is committed to the journal before it is put into place. 
    • This ensures that if the system crashes or loses power, you will recover all of the metadata when the journal is automatically replayed at mount time.
    • GFS2 requires one journal for each node in the cluster that needs to mount the file system. For example, if you have a 16-node cluster but need to mount only the file system from two nodes, you need only two journals. If you need to mount from a third node, you can always add a journal with the gfs2_jadd command.
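    For example, to add one more journal to an already mounted GFS2 file system (the mount point /mnt/gfs2 below is only an example):

    # gfs2_jadd -j 1 /mnt/gfs2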
    What is the default size of journals in GFS?
    When you run mkfs.gfs2 without specifying a journal size, a 128MB journal is created by default, which is enough for most applications.

    In case you plan on reducing the size of the journal, it can severely affect performance. For example, if you reduce the journal size to 32MB, it does not take much file system activity to fill a 32MB journal, and when the journal is full performance slows because GFS2 has to wait for writes to the storage.
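    A minimal sketch of creating a GFS2 file system with an explicit journal count and journal size (the cluster name, file system name, device path and values below are only examples):

    # mkfs.gfs2 -p lock_dlm -t mycluster:gfs2data -j 2 -J 128 /dev/clustervg/gfs2lv

    Here -j sets the number of journals (one per node that will mount the file system) and -J sets the journal size in megabytes.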

    What is a Quorum Disk?
    • Quorum Disk is a disk-based quorum daemon, qdiskd, that provides supplemental heuristics to determine node fitness.
    • With heuristics you can determine factors that are important to the operation of the node in the event of a network partition.
    For a 3-node cluster, quorum exists as long as 2 of the 3 nodes are active, i.e. more than half. But what if, for some reason, the 2nd node also stops communicating with the 3rd node? In that case, under a normal architecture, the cluster would dissolve and stop working. For mission-critical environments and such scenarios we use a quorum disk: an additional disk is configured, mounted on all the nodes with the qdiskd service running, and a vote value is assigned to it.

    So suppose in the above case I have assigned 1 vote to the qdisk. Even after 2 nodes stop communicating with the 3rd node, the cluster would still have 2 votes (1 from the qdisk + 1 from the 3rd node), which is more than half of the vote count for a 3-node cluster. Both inactive nodes would then be fenced and the 3rd node would still be up and running as part of the cluster.
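    A sketch of setting up such a quorum disk (the device name and label below are only examples); the label is what the cluster later refers to in cluster.conf:

    # mkqdisk -c /dev/sdc -l myqdisk

    and the corresponding entry in /etc/cluster/cluster.conf, with illustrative values:

    <quorumd interval="1" tko="10" votes="1" label="myqdisk"/>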

    What is rgmanager in Red Hat Cluster and its use?
    • This is a service termed Resource Group Manager
    • RGManager manages and provides failover capabilities for collections of cluster resources called services, resource groups, or resource trees
    • It allows administrators to define, configure, and monitor cluster services. In the event of a node failure, rgmanager relocates the clustered service to another node with minimal service disruption
    What is luci and ricci in Red Hat Cluster?
    • luci is the server component of the Conga administration utility
    • Conga is an integrated set of software components that provides centralized configuration and management of Red Hat clusters and storage
    • luci is a server that runs on one computer and communicates with multiple clusters and computers via ricci

    • ricci is the client component of the Conga administration utility
    • ricci is an agent that runs on each computer (either a cluster member or a standalone computer) managed by Conga
    • This service needs to be running on all the client nodes of the cluster.
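    On Red Hat/CentOS 6 the usual sequence is to set a password for the ricci user and start ricci on every cluster node, then start luci on the management station; a sketch (service names as shipped with the Conga packages):

    On each cluster node:
    # passwd ricci
    # service ricci start
    # chkconfig ricci on

    On the management node:
    # service luci start
    # chkconfig luci on

    luci is then reachable in a browser over HTTPS on TCP port 8084 of the management node.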
    What is cman in Red Hat Cluster?
    • This is an abbreviation used for Cluster Manager. 
    • CMAN is a distributed cluster manager and runs in each cluster node. 
    • It is responsible for monitoring, heartbeat, quorum, voting and communication between cluster nodes.
    • CMAN keeps track of cluster quorum by monitoring the count of cluster nodes.
    What are the different port numbers used in Red Hat Cluster?

    IP Port No.       Protocol    Component
    5404, 5405        UDP         corosync/cman
    11111             TCP         ricci
    21064             TCP         dlm (Distributed Lock Manager)
    16851             TCP         modclusterd
    8084              TCP         luci
    41966, 41967      TCP         rgmanager

    How does the NetworkManager service affect Red Hat Cluster?
    • The use of NetworkManager is not supported on cluster nodes. If you have installed NetworkManager on your cluster nodes, you should either remove it or disable it.
    • service NetworkManager stop
    • chkconfig NetworkManager off
    • The cman service will not start if NetworkManager is either running or has been configured to run with the chkconfig command
    What is the command used to relocate a service to another node?
    clusvcadm -r service_name -m node_name
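    For example, with a hypothetical service name webservice and node name node2.example, and a couple of other commonly used switches:

    # clusvcadm -r webservice -m node2.example      (relocate the service to node2.example)
    # clusvcadm -d webservice                       (disable/stop the service)
    # clusvcadm -e webservice                       (enable/start the service)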

    What is split-brain condition in Red Hat Cluster?
    • We say a cluster has quorum if a majority of nodes are alive, communicating, and agree on the active cluster members. For example, in a thirteen-node cluster, quorum is only reached if seven or more nodes are communicating. If the seventh node dies, the cluster loses quorum and can no longer function.
    • A cluster must maintain quorum to prevent split-brain issues.
    • If quorum were not enforced, a communication error on that same thirteen-node cluster could cause a situation where six nodes are operating on the shared storage while another six nodes are also operating on it, independently. Because of the communication error, the two partial clusters would overwrite areas of the disk and corrupt the file system.
    • With quorum rules enforced, only one of the partial clusters can use the shared storage, thus protecting data integrity.
    • Quorum doesn't prevent split-brain situations, but it does decide who is dominant and allowed to function in the cluster.
    • Quorum can be determined by a combination of communicating messages via Ethernet and through a quorum disk.
    What are Tie-breakers in Red Hat Cluster?
    • Tie-breakers are additional heuristics that allow a cluster partition to decide whether or not it is quorate in the event of an even-split - prior to fencing. 
    • With such a tie-breaker, nodes not only monitor each other, but also an upstream router that is on the same path as cluster communications. If the two nodes lose contact with each other, the one that wins is the one that can still ping the upstream router. That is why, even when using tie-breakers, it is important to ensure that fencing is configured correctly.
    • CMAN has no internal tie-breakers for various reasons. However, tie-breakers can be implemented using the API.
    What is fencing in Red Hat Cluster?
    • Fencing is the disconnection of a node from the cluster's shared storage. 
    • Fencing cuts off I/O from shared storage, thus ensuring data integrity. 
    • The cluster infrastructure performs fencing through the fence daemon, fenced.
    • When CMAN determines that a node has failed, it communicates to other cluster-infrastructure components that the node has failed. 
    • fenced, when notified of the failure, fences the failed node. 
    What are the various types of fencing supported by High Availability Add On?
    • Power fencing — A fencing method that uses a power controller to power off an inoperable node.
    • Storage fencing — A fencing method that disables the Fibre Channel port that connects storage to an inoperable node.
    • Other fencing — Several other fencing methods that disable I/O or power of an inoperable node, including IBM Bladecenters, PAP, DRAC/MC, HP ILO, IPMI, IBM RSA II, and others.
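    As an illustration, a power-fencing device using IPMI might be defined in /etc/cluster/cluster.conf roughly as follows (all names, addresses and credentials below are placeholders, not values from this article):

    <fencedevices>
      <fencedevice agent="fence_ipmilan" name="ipmi-node1" ipaddr="192.168.1.101" login="admin" passwd="password"/>
    </fencedevices>

    and referenced from the node definition:

    <clusternode name="node1.example" nodeid="1">
      <fence>
        <method name="1">
          <device name="ipmi-node1"/>
        </method>
      </fence>
    </clusternode>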
    What are the lock states in Red Hat Cluster?
    A lock state indicates the current status of a lock request. A lock is always in one of three states:
    Granted — The lock request succeeded and attained the requested mode.
    Converting — A client attempted to change the lock mode and the new mode is incompatible with an existing lock.
    Blocked — The request for a new lock could not be granted because conflicting locks exist.
    A lock's state is determined by its requested mode and the modes of the other locks on the same resource.

    What is DLM lock model?
    • DLM is an abbreviation for Distributed Lock Manager.
    • A lock manager is a traffic cop who controls access to resources in the cluster, such as access to a GFS file system.
    • GFS2 uses locks from the lock manager to synchronize access to file system metadata (on shared storage)
    • CLVM uses locks from the lock manager to synchronize updates to LVM volumes and volume groups (also on shared storage)
    • In addition, rgmanager uses DLM to synchronize service states.
    • Without a lock manager, there would be no control over access to your shared storage, and the nodes in the cluster would corrupt each other's data.
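    To see which lockspaces DLM currently holds on a cluster node (for example the GFS2 and rgmanager lockspaces), the dlm_tool utility from the cluster packages can be used; a sketch, output omitted:

    # dlm_tool ls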


    Tuesday, October 14, 2014

    How to find the path of any command in Linux

    The so-called "command" is basically a binary file. If you try to read one of these files using an editor you won't understand anything, as it is not human readable. These commands are installed along with their respective packages during installation of the OS.

    These binary files are mostly located in the below 4 locations
    • /bin
    • /usr/bin 
    • /sbin 
    • /usr/sbin
    You can manually use the find or locate command to search for the command in the locations mentioned above, or you can use the more suitable commands below for this purpose
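    For example, to search the four locations above manually (using useradd as the example):

    # find /bin /usr/bin /sbin /usr/sbin -name useradd 2>/dev/null
    /usr/sbin/useradd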

    which command

    # which "command_name"
    For eg:
    # which cp
    alias cp='cp -i'
    /bin/cp


    # which useradd
    /usr/sbin/useradd

    In case you get the below error:
    # which samba
    /usr/bin/which: no samba in (/usr/lib/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)

    It means your PATH variable does not contain the path for the samba command. In that case, which will not help you find the location of the command.

    whereis command

    So, you can use the below command for the same purpose
    # whereis command_name
    For example
    # whereis useradd
    useradd: /usr/sbin/useradd /usr/share/man/man8/useradd.8.gz


    I hope I made myself clear.

    Related Articles
    How to set environment (PATH) variable permanently in Linux
    How to mount windows share on linux

    Follow the below links for more tutorials:

    Tutorial for Monitoring Tools SAR and KSAR with examples in Linux
    How to configure Samba 4 Secondary Domain Controller
    How to secure Apache web server in Linux using password (.htaccess)
    How to register Red Hat Linux with RHN (Red Hat Network )
    Red hat Enterprise Linux 5.5 Installation Guide (Screenshots)
    15 tips to enhance security of your Linux machine
    Why is Linux more secure than windows and any other OS
    What is the difference between "su" and "su -" in Linux?
    What is swappiness and how do we change its value?
    How to log iptables messages in different log file
    What are the s and k scripts in the etc rcx.d directories
    How to check all the currently running services in Linux
    How to auto start service after reboot in Linux

    Wednesday, October 08, 2014

    How to configure a Clustered Samba share using ctdb in Red Hat Cluster

    As of the Red Hat Enterprise Linux 6.2 release, the Red Hat High Availability Add-On provides support for running Clustered Samba in an active/active configuration.

    This requires that you install and configure CTDB on all nodes in a cluster, which you use in conjunction with GFS2 clustered file systems.

    NOTE: Red Hat Enterprise Linux 6 supports a maximum of four nodes running clustered Samba.

    Course of action to be performed
    • Create a multi-node cluster (for this article I will be using 2 nodes)
    • Create 2 logical volumes, one each for samba and ctdb, formatted with GFS2
    • Create one partition for qdisk
    • Start the cluster services
    • Install Pre-requisite Packages
    • Configure ctdb and samba
    • Start ctdb services
    ....and you are all set

    Overall pre-requisites (server)
    • 1 server for openfiler (192.168.1.8)
    • 2 servers for CentOS 6.5 (192.168.1.5 and 192.168.1.6)
    • 1 server for Conga server (192.168.1.7)
    Overall pre-requisites (rpms)
    • High Availability Management (rpm group)
    • High Availability (rpm group)
    • iSCSI Storage Client (rpm group)
    • gfs2-utils
    • ctdb
    • samba
    • samba-client
    • samba-winbind
    • samba-winbind-clients
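    Assuming the machines are registered with a yum repository, the package groups and rpms listed above can be pulled in roughly like this (install the groups/packages appropriate to each role, e.g. luci only on the management node):

    # yum groupinstall "High Availability" "High Availability Management" "iSCSI Storage Client"
    # yum install gfs2-utils ctdb samba samba-client samba-winbind samba-winbind-clients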
    This was a brief overview of what we are going to perform, but it is going to take a while to reach the end of this exercise, so let's start with creating our cluster

    Creating 2 node cluster

    For this article I am using CentOS 6.5 which is fully compatible with Red Hat so the packages and commands used here would be same as in Red Hat Linux.

    Node 1: 192.168.1.5 (node1.example)
    Node 2: 192.168.1.6 (node2.example)
    Mgmt Node: 192.168.1.7 (node3.mgmt)
    Openfiler: 192.168.1.8 (of.storage)
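    Before building the cluster, every machine should be able to resolve the others by name. A minimal sketch of the /etc/hosts entries matching the addresses above (add them on all four machines unless DNS already resolves these names):

    192.168.1.5    node1.example
    192.168.1.6    node2.example
    192.168.1.7    node3.mgmt
    192.168.1.8    of.storage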

    Tuesday, October 07, 2014

    How to delete an iscsi-target from openfiler and Linux

    In my last article I showed you the steps to create an iscsi-target using openfiler. In this article I will show you a step-by-step method to delete an iscsi-target from openfiler.

    This is the GUI console of openfiler, where I want to delete the target marked in red.

    Now you won't be able to delete the target from the GUI, so log in to the CLI of the server using putty or directly via the console of the server.

    Login using the root account and follow the below steps
    # cd /opt/openfiler/etc/iscsi

    # ls -l
    total 28
    -rw-r--r-- 1 openfiler openfiler 2000 Jul 28 19:52 ietd.conf.tmp
    drwxr-xr-x 2 root      root      4096 Jul 28 19:41 iqn.2006-01.com.openfiler:ctdb
    drwxr-xr-x 2 root      root      4096 Jul 28 19:41 iqn.2006-01.com.openfiler:qdisk
    drwxr-xr-x 2 root      root      4096 Jul 28 19:40 iqn.2006-01.com.openfiler:san
    drwxr-xr-x 2 root      root      4096 Jul 28 19:52 iqn.2006-01.com.openfiler:tsn.dc0b0f3d8f75
    drwxr-xr-x 2 root      root      4096 Jul 28 19:43 targets
    drwxr-xr-x 2 root      root      4096 Apr 12  2011 transforms

    As you can see, all the iscsi-targets created are visible. Go ahead and remove the target which you do not require anymore
    # rm -rf "iqn.2006-01.com.openfiler:tsn.dc0b0f3d8f75"
    Now change directory into targets
    # cd targets/

    # pwd
    /opt/openfiler/etc/iscsi/targets

    Open the iscsi_settings.xml file and remove the iscsi-target entry shown below in red (the iqn.2006-01.com.openfiler:tsn.dc0b0f3d8f75 block)
    # vi iscsi_settings.xml
    <?xml version="1.0"?>
    <iet>
      <globalsettings>
        <isns/>
      </globalsettings>
      <targets>
        <target Name="iqn.2006-01.com.openfiler:san">
          <HeaderDigest value="None"/>
          <DataDigest value="None"/>
          <MaxConnections value="1"/>
          <InitialR2T value="Yes"/>
          <ImmediateData value="No"/>
          <MaxRecvDataSegmentLength value="131072"/>
          <MaxXmitDataSegmentLength value="131072"/>
          <MaxBurstLength value="262144"/>
          <FirstBurstLength value="262144"/>
          <DefaultTime2Wait value="2"/>
          <DefaultTime2Retain value="20"/>
          <MaxOutstandingR2T value="8"/>
          <DataPDUInOrder value="Yes"/>
          <DataSequenceInOrder value="Yes"/>
          <ErrorRecoveryLevel value="0"/>
          <Wthreads value="16"/>
          <QueuedCommands value="32"/>
          <lun Id="0" Path="/dev/san/work" Type="blockio" ScsiId="hgrEcw-zeTg-cXpE" ScsiSN="hgrEcw-zeTg-cXpE" IOMode="t"/>
        </target>
        <target Name="iqn.2006-01.com.openfiler:ctdb">
          <HeaderDigest value="None"/>
          <DataDigest value="None"/>
          <MaxConnections value="1"/>
          <InitialR2T value="Yes"/>
          <ImmediateData value="No"/>
          <MaxRecvDataSegmentLength value="131072"/>
          <MaxXmitDataSegmentLength value="131072"/>
          <MaxBurstLength value="262144"/>
          <FirstBurstLength value="262144"/>
          <DefaultTime2Wait value="2"/>
          <DefaultTime2Retain value="20"/>
          <MaxOutstandingR2T value="8"/>
          <DataPDUInOrder value="Yes"/>
          <DataSequenceInOrder value="Yes"/>
          <ErrorRecoveryLevel value="0"/>
          <Wthreads value="16"/>
          <QueuedCommands value="32"/>
          <lun Id="0" Path="/dev/san/ctdb" Type="blockio" ScsiId="zegB7p-3BHl-ySne" ScsiSN="zegB7p-3BHl-ySne" IOMode="t"/>
        </target>
        <target Name="iqn.2006-01.com.openfiler:qdisk">
          <HeaderDigest value="None"/>
          <DataDigest value="None"/>
          <MaxConnections value="1"/>
          <InitialR2T value="Yes"/>
          <ImmediateData value="No"/>
          <MaxRecvDataSegmentLength value="131072"/>
          <MaxXmitDataSegmentLength value="131072"/>
          <MaxBurstLength value="262144"/>
          <FirstBurstLength value="262144"/>
          <DefaultTime2Wait value="2"/>
          <DefaultTime2Retain value="20"/>
          <MaxOutstandingR2T value="8"/>
          <DataPDUInOrder value="Yes"/>
          <DataSequenceInOrder value="Yes"/>
          <ErrorRecoveryLevel value="0"/>
          <Wthreads value="16"/>
          <QueuedCommands value="32"/>
          <lun Id="0" Path="/dev/san/qdisk" Type="blockio" ScsiId="VWq9z7-aa2l-6RMj" ScsiSN="VWq9z7-aa2l-6RMj" IOMode="wt"/>
        </target>
        <target Name="iqn.2006-01.com.openfiler:tsn.dc0b0f3d8f75">
          <HeaderDigest value="None"/>
          <DataDigest value="None"/>
          <MaxConnections value="1"/>
          <InitialR2T value="Yes"/>
          <ImmediateData value="No"/>
          <MaxRecvDataSegmentLength value="131072"/>
          <MaxXmitDataSegmentLength value="131072"/>
          <MaxBurstLength value="262144"/>
          <FirstBurstLength value="262144"/>
          <DefaultTime2Wait value="2"/>
          <DefaultTime2Retain value="20"/>
          <MaxOutstandingR2T value="8"/>
          <DataPDUInOrder value="Yes"/>
          <DataSequenceInOrder value="Yes"/>
          <ErrorRecoveryLevel value="0"/>
          <Wthreads value="16"/>
          <QueuedCommands value="32"/>
          <lun Id="0" Path="/dev/san/ctdb" Type="blockio" ScsiId="zegB7p-3BHl-ySne" ScsiSN="zegB7p-3BHl-ySne" IOMode="wt"/>
        </target>
      </targets>
    </iet>
    Once done, save and exit the file, then restart the iscsi-target service
    # /etc/init.d/iscsi-target restart
    Stopping iSCSI target service: ......                     [  OK  ]
    Starting iSCSI target service:                             [  OK  ]

    Verify the changes; as you can see, the target has been successfully deleted.


    Once the iscsi-target is removed from openfiler, you might get the below error on your client Linux machine while attempting to restart the iscsi services
    # service iscsi restart
    Stopping iscsi:                                            [  OK  ]
    Starting iscsi: iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.dc0b0f3d8f75, portal: 192.168.1.8,3260].
    iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
    iscsiadm: Could not log into all portals

    To remove the iscsi target from the Linux machine
    # service iscsi stop
    Stopping iscsi:                                            
    [  OK  ]

    Delete the selected target using the below command
    # iscsiadm -m node -o delete -T "iqn.2006-01.com.openfiler:tsn.dc0b0f3d8f75" --portal 192.168.1.8,3260
    Re-discover the iscsi targets from the openfiler server; as you see below, only the required targets remain
    # iscsiadm -m discovery -t sendtargets -p 192.168.1.8
    192.168.1.8:3260,1 iqn.2006-01.com.openfiler:qdisk
    192.168.1.8:3260,1 iqn.2006-01.com.openfiler:ctdb
    192.168.1.8:3260,1 iqn.2006-01.com.openfiler:san

    Start the required iscsi services on the Linux box
    # service iscsi start
    Starting iscsi:                                            
    [  OK  ]


    I hope I made myself clear.

    Related Articles
    Configuring iSCSI storage using openfiler
    How to install openfiler
    How to configure iscsi target using Red Hat Linux


    Follow the below links for more tutorials

    How to perform a local ssh port forwarding in Linux
    What are the different types of Virtual Web Hosting in Apache
    Comparison and Difference between VMFS 3 and VMFS 5
    How to configure PXE boot server in Linux using Red Hat 6
    How to secure Apache web server in Linux using password (.htaccess)
    How to register Red Hat Linux with RHN (Red Hat Network )
    How does a DNS query works when you type a URL on your browser?
    How to create password less ssh connection for multiple non-root users
    How to create user without useradd command in Linux
    How to give normal user root privileges using sudo in Linux/Unix
    How to do Ethernet/NIC bonding/teaming in Red Hat Linux
    How to install/uninstall/upgrade rpm package with/without dependencies
    Why is Linux more secure than windows and any other OS
    What is the difference between "su" and "su -" in Linux?
    What is the difference/comparison between Unix and Linux ?
    RAID levels 0, 1, 2, 3, 4, 5, 6, 0+1, 1+0 features explained in detail

    Friday, October 03, 2014

    How to perform a local ssh port forwarding in Linux

    SSH supports a variety of ways of moving data across a pseudo-VPN tunnel. One of these methods is port forwarding, which creates a secure connection between a local computer and a remote machine through which data can be transferred over an encrypted tunnel.

    Types of Port Forwarding

    There are 3 types of port forwarding which can be performed on Linux, namely
    • Local
    • Remote
    • Dynamic

    Features

    • Tunneling support - pseudo-VPN for moving data
    • Tunnels local port(s) to remote systems for use by local system users
    • SSHD binds to a local port and provides access to a remote port
    • Default port forwarding binds to loopback addresses for IPv4(127.0.0.1) and IPv6(::1)
    • Ability to forward local ports to destination server via a third server.

    Let's suppose we want to protect access to the telnet daemon on machine1. Since telnet is controlled by xinetd, we will force telnet to bind only to the local loopback address instead of binding to all IP addresses on all network interfaces.

    Make sure you have telnet-server installed
    # rpm -qa | grep telnet-server
    If not, you can install it using yum
    # yum -y install telnet-server
    Next, let's bind telnet to the loopback address (on machine1)
    # vi /etc/xinetd.d/telnet
    service telnet
    {
            flags           = REUSE
            socket_type     = stream
            wait            = no
            user            = root
            server          = /usr/sbin/in.telnetd
            log_on_failure  += USERID
            disable         = no
            bind            = 127.0.0.1
    }

    [root@machine1 ~]# /etc/init.d/xinetd restart
    Stopping xinetd:                                           [  OK  ]
    Starting xinetd:                                           [  OK  ]

    Let us verify that telnet is now listening only on the loopback address, which we can check using netstat
    [root@machine1 ~]# netstat -ntlp | grep 23
    tcp        0      0 127.0.0.1:23                0.0.0.0:*        LISTEN      3405/xinetd

    So we should be able to telnet our localhost on port 23
    # telnet localhost 23
    Trying ::1...
    telnet: connect to address ::1: Connection refused
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    CentOS release 6.5 (Final)
    Kernel 2.6.32-431.el6.x86_64 on an x86_64
    login: deepak
    Password: [Password for deepak]

    As you see, I was able to log in using my system's credentials locally. But what if I try to do the same from a remote machine?

    Let's try to telnet to machine1 from machine2 on port 23
    [root@machine2 ~]# telnet machine1 23
    Trying 192.168.1.11...
    telnet: connect to address 192.168.1.11: No route to host
    telnet: Unable to connect to remote host: No route to host

    It didn't work out too well, since we have bound telnet to the loopback address

    Now let's use ssh to create an encrypted tunnel between the client and our server

    How it works

    ssh_client -> ssh_server - to create encrypted tunnels

    We will bind to a local port such as 2323 and forward it through the ssh tunnel to port 23 on the remote system, with both ends bound to the loopback adapter. Locally we connect to port 2323, which is a local socket, so if a trojan binary functioning as a network sniffer is present on the network it may be able to pick up this type of connection, but it won't be able to make out the data that goes inside the VPN tunnel.

    local(127.0.0.1:2323) -> SSH_TUNNEL -> Remote(127.0.0.1:23)

    Syntax:
    ssh -L bind_address:local_port:destination_system:destination_port user@gateway
    We will leave the bind_address field blank so that ssh uses its default, which binds the forwarded port to the loopback address (as noted in the features above), with machine2 (192.168.1.12) acting as the gateway

    Let's take an example. We are going to bind port 2323 locally and forward it to port 23 using a gateway, i.e. machine2
    # ssh -L 2323:127.0.0.1:23 machine2
    NOTE: We do not need to be root to perform this action unless the port we are going to use requires root-level privileges, i.e. one of the well-known ports (< 1024).

    [root@machine1 ~]# ssh -L 2323:127.0.0.1:23 machine2
    Last login: Fri Oct  3 05:30:26 2014 from machine1
    [root@machine2 ~]#

    So we are on machine2. Now let's attempt to connect our telnet client using the new port 2323 from machine1
    [root@machine1 ~]# telnet localhost 2323
    Trying ::1...
    Connected to localhost.
    Escape character is '^]'.
    CentOS release 5.2 (Final)
    Kernel 2.6.18-92.el5 on an i686
    login: deepak
    Password:
    Last login: Fri Oct  3 05:31:09 from machine1
    [deepak@machine2 ~]$

    As you see, I provided my user credentials and even though I telnetted to localhost I was connected to machine2. As long as the ssh session we started earlier is in place, the VPN session will stay alive.
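    If you do not need an interactive shell on the gateway, the same forward can be kept running in the background; -f (go to background) and -N (do not execute a remote command) are standard OpenSSH options:

    # ssh -f -N -L 2323:127.0.0.1:23 machine2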

    Related Articles:
    How to create password less ssh connection for multiple non-root users
    Iptables rules to allow/block ssh incoming/outgoing connection in Linux
    Putty session disconnects when idle


    Follow the below links for more tutorials

    How to configure iscsi target using Red Hat Linux
    What are the different types of Virtual Web Hosting in Apache
    Comparison and Difference between VMFS 3 and VMFS 5
    How to configure PXE boot server in Linux using Red Hat 6
    How to secure Apache web server in Linux using password (.htaccess)
    How to register Red Hat Linux with RHN (Red Hat Network )
    How does a DNS query works when you type a URL on your browser?
    How to create password less ssh connection for multiple non-root users
    How to create user without useradd command in Linux
    How to give normal user root privileges using sudo in Linux/Unix
    How to do Ethernet/NIC bonding/teaming in Red Hat Linux
    How to install/uninstall/upgrade rpm package with/without dependencies
    Why is Linux more secure than windows and any other OS
    What is the difference between "su" and "su -" in Linux?
    What is the difference/comparison between Unix and Linux ?
    RAID levels 0, 1, 2, 3, 4, 5, 6, 0+1, 1+0 features explained in detail