
Install Tomcat 6 or 6.0.35 on Ubuntu 11.10 or 11.04 or 10.10 or 10.04 LTS


While it is possible that older versions of Tomcat may not be compatible with newer JVMs, all the currently supported Apache Tomcat versions (5.5.x, 6.0.x and 7.0.x) are known to run correctly on Java 6 JVMs. (Ref: http://tomcat.apache.org/migration.html)


We’ll download and extract Tomcat 6 (6.0.35) from the Apache site.

Find the appropriate version at http://apache.hoxt.com/tomcat/tomcat-6/ and download it manually or with the wget command from the console.
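For example, a minimal wget sketch, assuming the 6.0.35 binary tarball path on that mirror (adjust the version and mirror as needed):

wget http://apache.hoxt.com/tomcat/tomcat-6/v6.0.35/bin/apache-tomcat-6.0.35.tar.gz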

Installing Tomcat 6 on Ubuntu: if you are running Ubuntu and want to use the Tomcat servlet container, you should not use the version from the repositories, as it just doesn’t work correctly. Instead, you’ll need to use the manual installation process that I’m outlining here. Before you install Tomcat, make sure that you’ve installed Java. Use Synaptic Package Manager to install Java.

Use Synaptic Package Manager to install Java
Verify Java Version
Now extract the Tomcat files and start the server:

1. Extract the archive: tar xvzf apache-tomcat-6.0.35.tar.gz
2. Open a terminal and go to the Tomcat directory’s bin folder
3. Run ./startup.sh
4. Open the default page: http://localhost:8080

Done!!!
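The same steps as one shell session, assuming the downloaded tarball sits in the current directory:

tar xvzf apache-tomcat-6.0.35.tar.gz
cd apache-tomcat-6.0.35/bin
./startup.sh
# then browse to http://localhost:8080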

Install Tomcat 5 or 5.5.35 on Ubuntu 11.10 or 11.04 or 10.10 or 10.04 LTS


While it is possible that older versions of Tomcat may not be compatible with newer JVMs, all the currently supported Apache Tomcat versions (5.5.x, 6.0.x and 7.0.x) are known to run correctly on Java 6 JVMs. (Ref: http://tomcat.apache.org/migration.html)


We’ll download and extract Tomcat 5 (5.5.35) from the Apache site.

Find the appropriate version at http://apache.hoxt.com/tomcat/tomcat-5/ and download it manually or with the wget command from the console.

Installing Tomcat 5 on Ubuntu: if you are running Ubuntu and want to use the Tomcat servlet container, you should not use the version from the repositories, as it just doesn’t work correctly. Instead, you’ll need to use the manual installation process that I’m outlining here. Before you install Tomcat, make sure that you’ve installed Java.

Use Synaptic Package Manager to install Java.

Use Synaptic Package Manager to install Java
Verify Java Version
Now extract the Tomcat files and start the server:

1. Extract the archive: tar xvzf apache-tomcat-5.5.35.tar.gz
2. Open a terminal and go to the Tomcat directory’s bin folder
3. Run ./startup.sh
4. Open the default page: http://localhost:8080

Done!!!


Tutorial on Hadoop with VMware Player



Map Reduce (Source: google)


Functional Programming
According to Wikipedia, in computer science functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. Since there is no hidden dependency (via shared state), functions in the DAG can run anywhere in parallel as long as one is not an ancestor of the other. In other words, analyzing the parallelism is much easier when there is no hidden dependency from shared state. Map/Reduce is a special form of such a directed acyclic graph which is applicable to a wide range of use cases. It is organized as a “map” function which transforms a piece of data into some number of key/value pairs. Each of these elements is then sorted by its key and reaches the same node, where a “reduce” function is used to merge the values (of the same key) into a single result.
Map Reduce

A way to take a big task and divide it into discrete tasks that can be done in parallel. Map/Reduce is just a pair of functions operating over a list of data.

MapReduce is a patented software framework introduced by Google to support distributed computing on large data sets on clusters of computers.

The framework is inspired by the map and reduce functions commonly used in functional programming, although their purpose in the MapReduce framework is not the same as their original forms.
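As a rough illustration only (plain Unix tools, not Hadoop itself), the classic word-count pipeline mirrors the map, shuffle, and reduce phases; input.txt stands in for any local text file:

# "map": emit one word per line; "shuffle": sort groups equal keys together;
# "reduce": uniq -c merges each key's values into a single count
tr -s ' ' '\n' < input.txt | sort | uniq -c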
Hadoop
A large-scale batch data processing system.

It uses MapReduce for computation and HDFS for storage.

Apache Hadoop is a software framework that supports data-intensive distributed applications under a free license. It enables applications to work with thousands of nodes and petabytes of data. Hadoop was inspired by Google’s MapReduce and Google File System (GFS) papers.

It is a framework written in Java for running applications on large clusters of commodity hardware and incorporates features similar to those of the Google File System and of MapReduce. HDFS is a highly fault-tolerant distributed file system and, like Hadoop, is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications that have large data sets.

Hadoop is an open source Java implementation of Google’s MapReduce algorithm along with an infrastructure to support distributing it over multiple machines. This includes its own filesystem (HDFS, the Hadoop Distributed File System, based on the Google File System), which is specifically tailored for dealing with large files. When thinking about Hadoop it’s important to keep in mind that the infrastructure is a huge part of it. Implementing MapReduce is simple. Implementing a system that can intelligently manage the distribution of processing and your files, and break those files down into more manageable chunks for processing in an efficient way, is not.

HDFS breaks files down into blocks which can be replicated across its network (how many times a block is replicated is determined by your application and can be specified on a per-file basis). This is one of the most important performance features and, according to the docs, “…is a feature that needs a lot of tuning and experience.” You really don’t want to have 50 machines all trying to pull from a 1 TB file on a single data node at the same time, but you also don’t want to have it replicate a 1 TB file out to 50 machines. So, it’s a balancing act.

Hadoop installations are broken into three types:

  • The NameNode acts as the HDFS master, managing all decisions regarding data replication.

  • The JobTracker manages the MapReduce work. It “…is the central location for submitting and tracking MR jobs in a network environment.”

  • The TaskTracker and DataNode do the grunt work.

Hadoop – NameNode, DataNode, JobTracker, TaskTracker

The JobTracker will first determine the number of splits (each split is configurable, ~16–64 MB) from the input path, and select some TaskTrackers based on their network proximity to the data sources; then the JobTracker sends the task requests to those selected TaskTrackers.

Each TaskTracker will start the map-phase processing by extracting the input data from the splits. For each record parsed by the “InputFormat”, it invokes the user-provided “map” function, which emits a number of key/value pairs into a memory buffer. A periodic wakeup process will sort the memory buffer into the different reducer nodes by invoking the “combine” function. The key/value pairs are sorted into one of the R local files (suppose there are R reducer nodes).

When the map task completes (all splits are done), the TaskTracker will notify the JobTracker. When all the TaskTrackers are done, the JobTracker will notify the selected TaskTrackers to start the reduce phase.

Each TaskTracker will read the region files remotely. It sorts the key/value pairs and, for each key, invokes the “reduce” function, which collects the key/aggregatedValue into the output file (one per reducer node).

The Map/Reduce framework is resilient to crashes of any component. The JobTracker keeps track of the progress of each phase and periodically pings the TaskTrackers for their health status. When a map-phase TaskTracker crashes, the JobTracker will reassign the map task to a different TaskTracker node, which will rerun all the assigned splits. If a reduce-phase TaskTracker crashes, the JobTracker will rerun the reduce on a different TaskTracker.
Let’s try Hands on Hadoop
The objective of this tutorial is to set up a multi-node Hadoop cluster using the Hadoop Distributed File System (HDFS) on Ubuntu Linux, with the use of VMware Player.

Hadoop and VMware Player

Installations / Configurations Needed:

Physical Machine

Laptop with 60 GB HDD, 2 GB RAM, 32-bit support, OS: Ubuntu 10.04 LTS (the Lucid Lynx)

IP address: 192.168.1.3 [used in configuration files]

Virtual Machine

See the VMware Player subsection below.

Download Ubuntu ISO file

The Ubuntu 10.04 LTS (Lucid Lynx) ISO file is needed to install Ubuntu on the virtual machine created by VMware Player to set up the multi-node Hadoop cluster.

Download Ubuntu Desktop Edition

http://www.ubuntu.com/desktop/get-ubuntu/download

Note: Log in with the user “root” to avoid any kind of permission issues (on your machine and the virtual machine).

Update the Ubuntu packages: sudo apt-get update

VMware Player [Freeware]

Download it from http://downloads.vmware.com/d/info/desktop_downloads/vmware_player/3_0

Download VMware Player
Select VMware Player to Download
VMware Player Free Product Download

Install VMware Player on your physical machine with the use of the downloaded bundle.

VMware Player – Ready to install
VMware Player – installing

Now, create a virtual machine with VMware Player, install Ubuntu 10.04 LTS on it using the ISO file, and apply the appropriate configurations to the virtual machine.

Browse Ubuntu ISO

Proceed with the instructions and let the setup finish.

Virtual Machine in VMware Player

Once you are done with it successfully, select “Play virtual machine”.

Start Virtual Machine in VMware Player

Open Terminal (Command prompt in Ubuntu) and check the IP address of the Virtual Machine.

NOTE: The IP address may change, so if the virtual machine cannot be reached over SSH from the physical machine, check the IP address first.

Ubuntu Virtual Machine – ifconfig

Apply the following configuration on both the physical and the virtual machine (for the Java 6 and Hadoop installation only).

Installing Java 6

sudo apt-get install sun-java6-jdk

sudo update-java-alternatives -s java-6-sun [Verify Java Version]
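To confirm which JVM is active (the exact update level shown will vary with your system):

java -version
# expected output is along the lines of: java version "1.6.0_20"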

Setting up Hadoop 0.20.2

Download Hadoop from http://www.apache.org/dyn/closer.cgi/hadoop/core and place it under /usr/local/hadoop.
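A minimal sketch of that step, assuming the 0.20.2 tarball from the Apache archive (your mirror path may differ):

wget http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz
tar -xzf hadoop-0.20.2.tar.gz
sudo mv hadoop-0.20.2 /usr/local/hadoop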

HADOOP Configurations

Hadoop requires SSH access to manage its nodes, i.e. remote machines [in our case, the virtual machine] plus your local machine if you want to use Hadoop on it.

On Physical Machine

Generate an SSH key

Generate an SSH key

Enable SSH access to your local machine with this newly created key.

Enable SSH access to your local machine

Or you can copy it from $HOME/.ssh/id_rsa.pub to $HOME/.ssh/authorized_keys manually.
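A minimal sketch of these two steps (the empty passphrase lets Hadoop log in without prompting):

ssh-keygen -t rsa -P ""
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys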

Test the SSH setup by connecting to your local machine as the root user.

Test the SSH setup

Use ssh 192.168.1.3 from the physical machine as well; it will give the same result.

On Virtual Machine

The root user account on the slave (virtual machine) should be able to access the physical machine via a password-less SSH login.

Add the physical machine’s public SSH key (which should be in $HOME/.ssh/id_rsa.pub) to the authorized_keys file of the virtual machine (in this user’s $HOME/.ssh/authorized_keys). You can do this manually:

(Physical Machine)$HOME/.ssh/id_rsa.pub -> (VM)$HOME/.ssh/authorized_keys
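One way to do this from the physical machine, assuming the VM’s IP address used in this tutorial (192.168.28.136):

ssh-copy-id root@192.168.28.136
# or manually:
# cat $HOME/.ssh/id_rsa.pub | ssh root@192.168.28.136 'cat >> $HOME/.ssh/authorized_keys'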

The SSH key may look like this (yours can’t be the same, though):

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwjhqJ7MyXGnn5Ly+0iOwnHETAR6Y3Lh3UUKbaCIP2/0FsVOWhBvcSLMEgT1ewrRPKk9IGoegMCMdHDGDfabzO4tUsfCdfvvb9KFRcBU3pKdq+yVvCVxXtoD7lNnMtckUwSz5F1d04Z+MDPbDixn6IAu/GeX9aE2mrJRBq1Pzn3iB4GpjnSPoLwQvEO835EMchq4AI92+glrySptpx2MGporxs5LvDaX87yMsPyF5tutuQ+WwRiLfAW34OfrYsZ/Iqdak5agE51vlV/SESYJ7OqdD3+aTQghlmPYE4ILivCsqc7wxT+XtPwR1B9jpOSkpvjOknPgZ0wNi8LD5zyEQ3w== root@mitesh-laptop

Use ssh 192.168.1.3 from the virtual machine to verify SSH access and get a feel for how SSH works.

For more understanding, ping 192.168.1.3 and 192.168.28.136 from each machine.

For detailed information on network settings, visit http://www.vmware.com/support/ws55/doc/ws_net_configurations_common.html (it documents VMware Workstation, but VMware Player has similar concepts).

Using 0.0.0.0 for the various networking-related Hadoop configuration options will result in Hadoop binding to the IPv6 addresses of the Ubuntu box.

To disable IPv6 on Ubuntu 10.04 LTS, open /etc/sysctl.conf in the editor of your choice and add the following lines to the end of the file:

#disable ipv6

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

Ubuntu – Disable IPv6
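To apply the change without rebooting, and to verify it (a value of 1 means IPv6 is disabled):

sudo sysctl -p
cat /proc/sys/net/ipv6/conf/all/disable_ipv6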

 <HADOOP_INSTALL>/conf/hadoop-env.sh -> set the JAVA_HOME environment variable to the Sun JDK/JRE 6 directory.

 

# The java implementation to use.  Required.

export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.20

 

<HADOOP_INSTALL>/conf/core-site.xml ->

 

Configure the directory where Hadoop will store its data files, the network ports it listens to, etc. Our setup will use Hadoop’s Distributed File System, HDFS, even though our little “cluster” only contains our single local machine.

Hadoop – core-site.xml

<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/tmp/dir/hadoop-${user.name}</value>
</property>
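The referenced multi-node tutorial also sets the default filesystem URI in core-site.xml; a sketch assuming the master’s IP from this post and the commonly used port 54310:

<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.1.3:54310</value>
</property>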

 <HADOOP_INSTALL>/conf/mapred-site.xml ->

<property>

  <name>mapred.job.tracker</name>

  <value>192.168.1.3:54311</value>

</property>

Hadoop – mapred-site.xml

 <HADOOP_INSTALL>/conf/hdfs-site.xml

 

<property>

  <name>dfs.replication</name>

  <value>2</value>

</property>

Physical Machine vs. Virtual Machine (Master/Slave): the following settings are made on the Physical Machine only.

<HADOOP_INSTALL>/conf/masters

The conf/masters file defines the namenodes of our multi-node cluster. In our case, this is just the master machine.

192.168.1.3

<HADOOP_INSTALL>/conf/slaves

The conf/slaves file lists the hosts, one per line, where the Hadoop slave daemons (datanodes and tasktrackers) will be run. We want both the master box and the slave box to act as Hadoop slaves because we want both of them to store and process data.

192.168.1.3

192.168.28.136

NOTE: Here 192.168.1.3 and 192.168.28.136 are the IP addresses of the physical machine and the virtual machine respectively, which may vary in your case. Just enter the IP addresses in the files and you are done!!!

Let’s enjoy the ride with Hadoop:

All Set for having “HANDS ON HADOOP”.

Formatting the name node

On the Physical Machine and the Virtual Machine

The first step to starting up your Hadoop installation is formatting the Hadoop filesystem, which is implemented on top of the local filesystem of your “cluster” (which includes only your local machine if you followed this tutorial). You need to do this the first time you set up a Hadoop cluster. Do not format a running Hadoop filesystem; this will cause all your data to be erased.

hadoop namenode -format

Starting the multi-node cluster

1.    Start HDFS daemons

Run the command bin/start-dfs.sh on the machine you want the (primary) namenode to run on. This will bring up HDFS with the namenode running on the machine you ran the command on, and datanodes on the machines listed in the conf/slaves file.

Physical Machine

Hadoop – start-dfs.sh

VM

Hadoop – DataNode on Slave Machine

2.    Start MapReduce daemons

Run the command bin/start-mapred.sh on the machine you want the jobtracker to run on. This will bring up the MapReduce cluster with the jobtracker running on the machine you ran the command on, and tasktrackers on the machines listed in the conf/slaves file.
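The two commands, run from the <HADOOP_INSTALL> directory on the master (192.168.1.3 in this setup):

bin/start-dfs.sh     # namenode here, datanodes on the hosts listed in conf/slaves
bin/start-mapred.sh  # jobtracker here, tasktrackers on the hosts listed in conf/slaves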

Physical Machine

Hadoop – Start MapReduce daemons

VM

TaskTracker in Hadoop

Running a MapReduce job

Here’s the example input data I have used for the multi-node cluster setup described in this tutorial.

All ebooks should be in plain-text US-ASCII encoding.

http://www.gutenberg.org/etext/20417

http://www.gutenberg.org/etext/5000

http://www.gutenberg.org/etext/4300

http://www.gutenberg.org/etext/132

http://www.gutenberg.org/etext/1661

http://www.gutenberg.org/etext/972

http://www.gutenberg.org/etext/19699

Download the above ebooks and store them in the local file system.

Copy local example data to HDFS

Hadoop – Copy local example data to HDFS
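A minimal sketch of the copy, assuming the ebooks were saved under /tmp/gutenberg locally; examples is the HDFS input directory expected by the job below:

bin/hadoop dfs -copyFromLocal /tmp/gutenberg examples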

Run the MapReduce job

hadoop-0.20.2/bin/hadoop jar hadoop-0.20.2-examples.jar wordcount examples example-output

Failed Hadoop Job

Retrieve the job result from HDFS

You can read the file directly from HDFS without copying it to the local file system; in this tutorial, though, we will copy the results to the local file system.

mkdir /tmp/example-output-final

bin/hadoop dfs -getmerge example-output /tmp/example-output-final

Hadoop – Word count example

Hadoop – MapReduce Administration
Hadoop – Running and Completed Job

Task Tracker Web Interface

Hadoop – Task Tracker Web Interface

Hadoop – NameNode Cluster Summary

References

http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)

http://www.michael-noll.com/wiki/Writing_An_Hadoop_MapReduce_Program_In_Python

http://java.dzone.com/articles/how-hadoop-mapreduce-works

http://ayende.com/Blog/archive/2010/03/14/map-reduce-ndash-a-visual-explanation.aspx

http://www.youtube.com/watch?v=Aq0x2z69syM

http://www.gridgainsystems.com/wiki/display/GG15UG/MapReduce+Overview

http://map-reduce.wikispaces.asu.edu/

http://blogs.sun.com/fifors/entry/map_reduce

http://www.vmware.com/support/ws55/doc/ws_net_configurations_common.html

http://www.ibm.com/developerworks/aix/library/au-cloud_apache/

 


Install WildFly on Ubuntu 12.04 LTS and Ubuntu 12.10


WildFly = JBoss

Install JBoss 7.0.2 Application Server on Ubuntu 12.04 LTS, 12.10, 11.10, 11.04, 10.10, or 10.04 LTS.


Application Server vs. Web Server
Application Servers in Java, .NET and PHP

An application server is a framework that provides an execution environment for applications written in a specific programming language.

JBoss is an open source, Java EE based application server developed by Red Hat, and it runs cross-platform. It supports the Servlet 3.0 and JSP 2.2 specifications.

JBoss Application Server (Source: JBoss)

Before you install JBoss you’ll want to make sure that you’ve installed Java.

Use Synaptic Package Manager to install Java.

Use Synaptic Package Manager to install Java
Synaptic Package Manager installing Java
java -version

Licensing

JBoss is distributed under the LGPL, a free software license published by the Free Software Foundation (FSF). It was designed as a compromise between the strong-copyleft GNU General Public License (GPL) and permissive licenses such as the BSD licenses and the MIT License.

Red Hat charges to provide a support service for:

  • JMS integration
  • Java Naming and Directory Interface (JNDI)
  • Java Transaction API (JTA)
  • Java Authorization Contract for Containers (JACC) integration
  • JavaMail
  • JavaServer Faces 1.2 (Mojarra)
  • Support subscription for JBoss Enterprise Middleware

Features

  • Java Server Pages
  • Java Servlet
  • JBoss Web Services
  • JDBC
  • Load balancing
  • Aspect-oriented programming (AOP) support
  • Clustering
  • Deployment API
  • Distributed caching (using JBoss Cache, a standalone product)
  • Distributed deployment (farming)
  • Enterprise JavaBeans versions 3 and 2.1
  • Failover (including sessions)
  • Hibernate integration
  • Java Authentication and Authorization Service (JAAS)
  • Java EE Connector Architecture (JCA) integration
  • Java Management Extensions
  • Management API
  • OSGi framework
  • RMI-IIOP
  • SOAP with Attachments API for Java
  • Teiid data virtualization system (allows applications to use data from multiple, heterogeneous data stores)

Requirements

Java SE 6 or later

Download

http://www.jboss.org/jbossas/downloads/

Download JBoss

Extract the file into /usr/share/.
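A minimal sketch, assuming the downloaded archive is named jboss-as-7.0.2.Final.tar.gz and sits in the current directory:

sudo tar -xzf jboss-as-7.0.2.Final.tar.gz -C /usr/share/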

Extract files from JBoss Archive
Extracting files from JBoss Archive
Extracted files for JBoss Standalone installation

Now, let’s test it… If Java hasn’t been installed, we will get an ERROR.

Java not Found Error in JBoss installation
JBoss Running

Now let’s open JBoss in a web browser.

JBoss Application Server 7

Let’s verify the Admin Console.

JBoss Application Server 7 Admin Console

Start JBoss 7 as a service on Ubuntu

Previous versions of JBoss included scripts (like jboss_init_redhat.sh) that could be copied to /etc/init.d in order to add JBoss as a service, so it would start on boot. I can’t seem to find any similar scripts in JBoss 7.

If you have copied the script from another editor or any web page, you may find some unwanted characters in the file, which will give you an error when you try to run the script.

Error: Bad Interpreter…

Verify the interpreter with the “which sh” command in the console.

The result will be /bin/sh.

Then verify the script by opening it in the vi editor.

Error: JBoss 7 as a service on Ubuntu

Remove all unwanted characters.
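A common culprit is Windows line endings (\r). One way to strip them, assuming the init script was saved as /etc/init.d/jboss (dos2unix also works, if installed):

sed -i 's/\r$//' /etc/init.d/jboss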

JBoss 7 as a service on Ubuntu

Save it

JBoss 7 as a service on Ubuntu – Save Changes

Restart the machine and try the following:

Restart JBoss Service

Done!!!

References

http://en.wikipedia.org/wiki/JBoss_application_server

http://en.wikipedia.org/wiki/Comparison_of_application_servers

http://en.wikipedia.org/wiki/Application_Server

http://stackoverflow.com/questions/6880902/start-jboss-7-as-a-service-on-linux


Install WildFly on Ubuntu in Amazon EC2 Micro Instance


WildFly == JBoss

Licensing

JBoss is distributed under the LGPL, a free software license published by the Free Software Foundation (FSF). It was designed as a compromise between the strong-copyleft GNU General Public License (GPL) and permissive licenses such as the BSD licenses and the MIT License.


  1. Create an Amazon Machine Instance (AMI) with Ubuntu Server 12.04.1 LTS in the AWS Free Usage Tier: https://clean-clouds.com/2013/01/12/how-to-create-amazon-machine-instance-ami-in-aws-free-usage-tier/
  2. Verify whether Java is installed on the AWS instance we created, with the command java -version.
  3. Download JBoss from http://www.jboss.org/jbossas/downloads/
  4. Extract the file into /usr/share/
  5. Edit the standalone.xml file at /usr/share/jboss-as/standalone/configuration and change the interface definitions (management and public) to use address 0.0.0.0 instead of the default:

        <interface name="management">
            <inet-address value="0.0.0.0"/>
        </interface>
        <interface name="public">
            <inet-address value="0.0.0.0"/>
        </interface>

    JBoss-standalone

  6. Run standalone.sh from the bin directory:
[root@domU-12-31-39-04-9C-B2 jboss-as-7.0.2.Final]# cd bin/
[root@domU-12-31-39-04-9C-B2 bin]# ./standalone.sh
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /tmp/jboss-as-7.0.2.Final
JAVA: /usr/lib/jvm/jre/bin/java
JAVA_OPTS: -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
=========================================================================
15:15:21,489 INFO  [org.jboss.modules] JBoss Modules version 1.0.2.GA
15:15:21,910 INFO  [org.jboss.msc] JBoss MSC version 1.0.1.GA

15:15:21,975 INFO  [org.jboss.as] JBoss AS 7.0.2.Final “Arc” starting

15:15:23,444 WARN  [org.jboss.as] No security realm defined for native management service, all access will be unrestricted.

15:15:23,464 INFO  [org.jboss.as] creating http management service using network interface (management) port (9990)

15:15:23,467 WARN  [org.jboss.as] No security realm defined for http management service, all access will be unrestricted.

15:15:23,481 INFO  [org.jboss.as.logging] Removing bootstrap log handlers

15:15:23,497 INFO  [org.jboss.as.connector.subsystems.datasources] (Controller Boot Thread) Deploying JDBC-compliant driver class org.h2.Driver (version 1.2)

15:15:23,511 INFO  [org.jboss.as.clustering.infinispan.subsystem] (Controller Boot Thread) Activating Infinispan subsystem.

15:15:23,660 INFO  [org.jboss.as.naming] (Controller Boot Thread) JBAS011800: Activating Naming Subsystem

15:15:23,672 INFO  [org.jboss.as.naming] (MSC service thread 1-1) JBAS011802: Starting Naming Service

15:15:23,681 INFO  [org.jboss.as.osgi] (Controller Boot Thread) JBAS011910: Activating OSGi Subsystem

15:15:23,722 INFO  [org.jboss.as.security] (Controller Boot Thread) Activating Security Subsystem

15:15:23,726 INFO  [org.jboss.remoting] (MSC service thread 1-1) JBoss Remoting version 3.2.0.Beta2

15:15:23,785 INFO  [org.xnio] (MSC service thread 1-1) XNIO Version 3.0.0.Beta3

15:15:23,823 INFO  [org.xnio.nio] (MSC service thread 1-1) XNIO NIO Implementation Version 3.0.0.Beta3

15:15:24,149 INFO  [org.apache.catalina.core.AprLifecycleListener] (MSC service thread 1-1) The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64/server:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib

15:15:24,204 INFO  [org.jboss.as.remoting] (MSC service thread 1-2) Listening on /0.0.0.0:9999

15:15:24,212 INFO  [org.jboss.as.ee] (Controller Boot Thread) Activating EE subsystem

15:15:24,501 INFO  [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-1) Starting Coyote HTTP/1.1 on http--0.0.0.0-8080

15:15:24,502 INFO  [org.jboss.as.jmx.JMXConnectorService] (MSC service thread 1-1) Starting remote JMX connector

15:15:24,735 INFO  [org.jboss.as.connector] (MSC service thread 1-2) Starting JCA Subsystem (JBoss IronJacamar 1.0.3.Final)

15:15:24,966 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-1) Bound data source [java:jboss/datasources/ExampleDS]

15:15:25,893 INFO  [org.jboss.as.deployment] (MSC service thread 1-2) Started FileSystemDeploymentService for directory /tmp/jboss-as-7.0.2.Final/standalone/deployments

15:15:25,921 INFO  [org.jboss.as] (Controller Boot Thread) JBoss AS 7.0.2.Final "Arc" started in 4812ms - Started 93 of 148 services (55 services are passive or on-demand)

JBoss

Done!