Archive

Archive for the ‘16ms’ Category

parallelization + array = ParallelArray

October 17th, 2008 AVC Comments off

The idea is that a ParallelArray represents a collection of structurally similar data items, and you use the methods on ParallelArray to create a description of how you want to slice and dice the data. You then use that description to actually execute the array operations in parallel, using the fork-join framework under the hood… This idea is coming in JDK 7 :)

In fact, here are some operations supported by ParallelArray:

  • Filtering: selecting a subset of the elements
  • Mapping: converting selected elements to another form
  • Replacement: creating a new ParallelArray derived from the original
  • Aggregation: combining all values into a single value
  • Application: performing an action for each selected element
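
As a rough sketch of what such code might look like (this is not from the announcement itself; it is based on the jsr166y preview circulating at the time, and the package names, factory methods and the Student class are assumptions – the exact API varied between preview builds):

import jsr166y.forkjoin.ForkJoinPool;
import jsr166y.forkjoin.Ops;
import jsr166y.forkjoin.ParallelArray;

public class ParallelArrayDemo {

	// hypothetical data item: ParallelArray works on collections of
	// structurally similar items like this
	static class Student {
		final int graduationYear;
		final double gpa;

		Student(int graduationYear, double gpa) {
			this.graduationYear = graduationYear;
			this.gpa = gpa;
		}
	}

	public static void main(String[] args) {
		Student[] data = { new Student(2008, 1.7), new Student(2008, 2.3),
				new Student(2009, 1.0) };

		ForkJoinPool pool = new ForkJoinPool();
		ParallelArray<Student> students = ParallelArray.createUsingHandoff(data, pool);

		double highestGpa = students
				.withFilter(new Ops.Predicate<Student>() {        // filtering
					public boolean op(Student s) { return s.graduationYear == 2008; }
				})
				.withMapping(new Ops.ObjectToDouble<Student>() {  // mapping
					public double op(Student s) { return s.gpa; }
				})
				.max();                                           // aggregation

		System.out.println("Highest GPA of 2008: " + highestGpa);
	}
}

The description of the operations is built up lazily; only the terminal call (max() here) actually runs the work on the fork-join pool.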

- avc

Categories: 16ms, Coding, Common, Java, Technology Tags:

Intelligent Route Balancing & Consistent Hashing Algorithm…

October 10th, 2008 AVC Comments off

I have run into consistent hashing a couple of times. So what is consistent hashing and why should you care?

The need for consistent hashing arose from limitations experienced while running collections of caching machines. The common approach, if you have a collection of n cache machines, is to put the object o in cache machine number hash(o) mod n. This works fine until you remove or add a cache machine, because then n changes and every object is hashed to a new location.

The conclusion is that it would be useful if, when a cache machine was added, it took its fair share of objects from all the other cache machines, and when a cache machine was removed, its objects were shared among the remaining machines.

That is what consistent hashing does – it consistently maps objects to the same cache machine, as far as possible, at least.
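
The original post stops at the idea, but here is a minimal sketch of such a ring in Java (not from the post; String.hashCode() is a stand-in hash and the number of virtual nodes is arbitrary – a real implementation would use a stronger, better-distributed hash):

import java.util.SortedMap;
import java.util.TreeMap;

/**
 * Minimal consistent-hash ring: each node is placed on the ring several
 * times ("virtual nodes"); a key is served by the first node found when
 * walking clockwise from the key's own position on the ring.
 */
public class ConsistentHash<T> {

	private final SortedMap<Integer, T> ring = new TreeMap<Integer, T>();
	private final int replicas;

	public ConsistentHash(int replicas, Iterable<T> nodes) {
		this.replicas = replicas;
		for (T node : nodes) {
			add(node);
		}
	}

	public void add(T node) {
		for (int i = 0; i < replicas; i++) {
			ring.put(hash(node.toString() + i), node);
		}
	}

	public void remove(T node) {
		for (int i = 0; i < replicas; i++) {
			ring.remove(hash(node.toString() + i));
		}
	}

	public T get(Object key) {
		if (ring.isEmpty()) {
			return null;
		}
		// first ring position at or after the key's hash, wrapping around if needed
		SortedMap<Integer, T> tail = ring.tailMap(hash(key));
		Integer position = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
		return ring.get(position);
	}

	// stand-in hash; in practice use something with a better distribution
	private int hash(Object key) {
		return key.hashCode();
	}
}

With such a ring, adding or removing a cache machine only remaps the keys between its ring positions and their clockwise neighbours – exactly the “fair share” behaviour described above.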

Architecture with consistent hashing and route balancing in a grid (yes, it’s a data grid):

[Figure: affinity_white.png – consistent hashing and route balancing in a data grid]

This diagram illustrates the difference between using route balancing in a data grid and not using it. The left side shows the execution flow without route balancing: the data is delivered to the caller (master) node, which calls back to the other nodes, each of which can be a cache node. This can be faster than DB access, but it results in unnecessary network traffic.

On the right side you can see routing and balancing. The whole computation logic, together with the data access logic, is brought to the data server for local execution. Assuming that serializing the computation logic is much lighter than serializing the data, the network traffic in this case is minimal. Also, your computation may access data from both Node 2 and Node 3. This avoids the unnecessary network traffic and is considerably faster than DB access.

That’s the idea behind the Intelligent Route Balancing & Consistent Hashing Algorithm :)

-avc (Arkadiusz Victor Czarnik)

Categories: 16ms Tags:

Use MemCacheStore in Tomcat…

October 10th, 2008 AVC Comments off

About:

memcache-client is a Java interface to memcached, a distributed caching system. Originally, memcached was developed for LiveJournal.com, which was one of the earliest popular blogging communities. It is reported that the newly developed memcached was able to decrease LiveJournal’s database load to nearly nothing using only existing hardware. For a site that at the time handled over 20 million page views a day and had over a million different users, that’s very significant. A number of other popular sites use memcached, including Slashdot and Wikipedia. This section describes how to set up memcached as the session store for Tomcat 5.x (it works in JBoss as well): a simple class that implements the Store interface and uses a memcached client to store sessions.

Here is a simple MemCacheStore class that implements all the functionality:

package com.avc.hq;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.StringTokenizer;

import org.apache.catalina.Container;
import org.apache.catalina.Loader;
import org.apache.catalina.Session;
import org.apache.catalina.Store;
import org.apache.catalina.session.StandardSession;
import org.apache.catalina.session.StoreBase;
import org.apache.catalina.util.CustomObjectInputStream;

import com.danga.MemCached.MemCachedClient;
import com.danga.MemCached.SockIOPool;

/**
 * Implementation of a Tomcat Session {@link Store} that's backed by
 * memcached.
 *
 * @author avc
 */
public class MemCacheStore extends StoreBase implements Store {

	/**
	 * The descriptive information about this implementation.
	 */
	protected static String info = "MemCacheStore/1.0";

	/**
	 * The thread-local memcached client instance.
	 */
	private static final ThreadLocal<MemCachedClient> memclient = new ThreadLocal<MemCachedClient>();

	/**
	 * The server list for memcache connections.
	 */
	private List<String> servers = new ArrayList<String>();

	/**
	 * All session keys known to this store instance.
	 */
	private List<String> keys = Collections
			.synchronizedList(new ArrayList<String>());

	/**
	 * Return the info for this Store.
	 */
	public String getInfo() {
		return (info);
	}

	/**
	 * Clear all sessions from the cache.
	 */
	public void clear() throws IOException {
		getMemcacheClient().flushAll();
		keys.clear();
	}

	/**
	 * Return local keyList size.
	 */
	public int getSize() throws IOException {
		return getKeyList().size();
	}

	/**
	 * Return all keys
	 */
	public String[] keys() throws IOException {
		return getKeyList().toArray(new String[] {});
	}

	/**
	 * Load the Session from the cache with given sessionId.
	 *
	 */
	public Session load(String sessionId) throws ClassNotFoundException,
			IOException {

		try {

			byte[] bytes = (byte[]) getMemcacheClient().get(sessionId);
			if (bytes == null)
				return null;
			ObjectInputStream ois = bytesToObjectStream(bytes);

			StandardSession session = (StandardSession) manager
					.createEmptySession();
			session.setManager(manager);
			session.readObjectData(ois);
			if (session.isValid() && !keys.contains(sessionId)) {
				keys.add(sessionId);
			}
			return session;

		} catch (Exception e) {
			return (null);
		}
	}

	/**
	 * Turn a serialized session into an ObjectInputStream, using the web
	 * application's class loader when one is available.
	 *
	 * @param bytes the serialized session data
	 * @return ObjectInputStream positioned at the session object
	 * @throws IOException
	 */
	private ObjectInputStream bytesToObjectStream(byte[] bytes)
			throws IOException {
		ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
		ObjectInputStream ois = null;
		Loader loader = null;
		ClassLoader classLoader = null;
		Container container = manager.getContainer();
		if (container != null)
			loader = container.getLoader();
		if (loader != null)
			classLoader = loader.getClassLoader();
		if (classLoader != null)
			ois = new CustomObjectInputStream(bais, classLoader);
		else
			ois = new ObjectInputStream(bais);
		return ois;
	}

	/**
	 * remove the session with given sessionId
	 */
	public void remove(String sessionId) throws IOException {
		getMemcacheClient().delete(sessionId);
		List<String> keyList = getKeyList();
		keyList.remove(sessionId);
	}

	/**
	 * Store the serialized session data in the cache.
	 */
	public void save(Session session) throws IOException {
		ByteArrayOutputStream baos = new ByteArrayOutputStream();
		ObjectOutputStream oos = new ObjectOutputStream(baos);
		StandardSession standard = (StandardSession) session;
		standard.writeObjectData(oos);
		oos.close();
		// set() overwrites an existing entry, so updated sessions are saved as well
		getMemcacheClient().set(session.getId(), baos.toByteArray());
		List<String> keyList = getKeyList();
		if (!keyList.contains(session.getId())) {
			keyList.add(session.getId());
		}
	}

	/**
	 * @return the locally tracked session keys
	 */
	private List<String> getKeyList() {
		return keys;
	}

	/**
	 * Lazily create the thread-local memcached client, initializing the
	 * shared SockIOPool on first use.
	 *
	 * @return the memcached client for the current thread
	 */
	private MemCachedClient getMemcacheClient() {
		if (memclient.get() == null) {

			// grab an instance of our connection pool
			SockIOPool pool = SockIOPool.getInstance();
			if (!pool.isInitialized()) {
				String[] serverlist = servers.toArray(new String[] {});
				// set the servers and the weights (one weight entry per server)
				Integer[] weights = new Integer[serverlist.length];
				Arrays.fill(weights, Integer.valueOf(1));
				pool.setServers(serverlist);
				pool.setWeights(weights);

				// set some basic pool settings
				// 5 initial, 5 min, and 250 max conns
				// and set the max idle time for a conn
				// to 6 hours
				pool.setInitConn(5);
				pool.setMinConn(5);
				pool.setMaxConn(250);
				pool.setMaxIdle(1000 * 60 * 60 * 6);

				// set the sleep for the maint thread
				// it will wake up every x seconds and
				// maintain the pool size
				pool.setMaintSleep(30);

				// set some TCP settings
				// disable nagle
				// set the read timeout to 3 secs
				// and don't set a connect timeout
				pool.setNagle(false);
				pool.setSocketTO(3000);
				pool.setSocketConnectTO(0);

				// initialize the connection pool
				pool.initialize();
			}

			// create the client for this thread and turn on compression
			// for anything larger than 64k
			memclient.set(new MemCachedClient());
			memclient.get().setCompressEnable(true);
			memclient.get().setCompressThreshold(64 * 1024);
		}
		return memclient.get();
	}

	public List<String> getServers() {
		return servers;
	}

	public void setServers(String serverList) {
		StringTokenizer st = new StringTokenizer(serverList, ", ");
		servers.clear();
		while (st.hasMoreTokens()) {
			servers.add(st.nextToken());
		}
	}

}

The configuration for the context:

<Context path="/web" docBase="./deploy/web-0.0.1.war">
	<Manager className="org.apache.catalina.session.PersistentManager"
		distributable="true">
		<Store className="com.avc.hq.MemCacheStore"
			servers="192.168.17.90:11211" />
	</Manager>

</Context>

That’s all. Quite a simple way :)

-avc (arkadiusz victor czarnik)

Categories: 16ms Tags:

in 5 min to Tomcat Cluster…

October 9th, 2008 AVC Comments off

Overview

This section describes how to set up a Tomcat 5.5 cluster with in-memory session replication and configure the cluster to use JK as the load-balancing module.
Steps to Set Up a Tomcat Cluster and Configure It for Load Balancing

1. Configure Tomcat application servers for load balancing
Edit TOMCAT_HOME/conf/server.xml. Locate the <Engine> element and add a jvmRoute attribute to it:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="NODE_NAME">
... ...
</Engine>

The NODE_NAME must match the name that you will define in the JK configuration (for example “node1” or “node2”).
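
The JK side itself is not shown here; as a rough sketch (hosts, ports and worker names are placeholders you would adapt), a minimal workers.properties for two such nodes could look like this:

# mod_jk workers.properties (sketch; adjust hosts and AJP ports)
worker.list=loadbalancer

worker.node1.type=ajp13
worker.node1.host=192.168.0.1
worker.node1.port=8009
worker.node1.lbfactor=1

worker.node2.type=ajp13
worker.node2.host=192.168.0.2
worker.node2.port=8009
worker.node2.lbfactor=1

# the lb worker distributes requests and keeps sessions sticky via the jvmRoute suffix
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1
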
2. Edit EP5_DEPLOY_DIR/WEB-INF/web.xml
Set the application as distributable in the web.xml descriptor. e.g.:

<?xml version="1.0"?>
<web-app>
<distributable/>
<!-- ... -->
</web-app>

3. Configure session replication
Add the following lines to TOMCAT_HOME/conf/server.xml (replacing SERVER_IP_ADDRESS with the IP address of your server).


<Host ......>
...
	<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
			managerClassName="org.apache.catalina.cluster.session.DeltaManager"
			expireSessionsOnShutdown="false"
			useDirtyFlag="true"
			notifyListenersOnReplication="true">

		<Membership className="org.apache.catalina.cluster.mcast.McastService"
				mcastAddr="228.0.0.4"
				mcastPort="45564"
				mcastFrequency="500"
				mcastDropTime="3000" />

		<Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
				tcpListenAddress="SERVER_IP_ADDRESS"
				tcpListenPort="4001"
				tcpSelectorTimeout="100"
				tcpThreadCount="6" />

		<Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
				replicationMode="pooled"
				ackTimeout="15000"
				waitForAck="true" />

		<Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
				filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;" />

		<Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
				tempDir="/tmp/war-temp/"
				deployDir="/tmp/war-deploy/"
				watchDir="/tmp/war-listen/"
				watchEnabled="false" />

		<ClusterListener className="org.apache.catalina.cluster.session.ClusterSessionListener" />
	</Cluster>
...
</Host>

Steps to Set Up a Tomcat Cluster and Configure It for Load Balancing with Database Persistence of Sessions

1. Configure Tomcat application servers for load balancing
Edit TOMCAT_HOME/conf/server.xml. Locate the <Engine> element and add a jvmRoute attribute:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="{NODE_NAME}">
... ...
</Engine>

The NODE_NAME must match the name that you will define in the JK configuration (for example “epnode1” or “epnode2”).
2. Create a table to store sessions
Choose a database server and create a table to store sessions.
For example, with MySQL the table can be created with the following SQL.


create database tomcat;
create table tomcat$sessions
(
id varchar(100) not null primary key,
app varchar(255),
valid char(1) not null,
maxinactive int not null,
lastaccess bigint,
data mediumblob
);

3. Configure Tomcat to store sessions in the session table
Configure the Tomcat context to store sessions into the session table.
For example, the connection to a MySQL table can be configured as follows.

<Context>
	....
	<Manager className="org.apache.catalina.session.PersistentManager"
			debug="0"
			saveOnRestart="true"
			maxActiveSessions="-1"
			minIdleSwap="-1"
			maxIdleSwap="-1"
			maxIdleBackup="-1">
		<Store className="org.apache.catalina.session.JDBCStore"
				driverName="com.mysql.jdbc.Driver"
				connectionURL="jdbc:mysql://MYSQL_SERVER_IP_ADDRESS:MYSQL_SERVER_PORT/tomcat?user=USER_NAME&amp;password=PASSWORD"
				sessionTable="tomcat$sessions"
				sessionIdCol="id"
				sessionAppCol="app"
				sessionDataCol="data"
				sessionValidCol="valid"
				sessionMaxInactiveCol="maxinactive"
				sessionLastAccessedCol="lastaccess"
				checkInterval="60"
				debug="99" />
	</Manager>
</Context>

greetz, avc (arkadiusz v. czarnik)

Categories: 16ms Tags:

The Challenge

March 10th, 2008 makii Comments off

OK, if anybody at all has read this blog up to now, you might have asked yourself why we do the funny things we do and what it’s all about with these 16 milliseconds and stuff.

We are a small team of (right now) three software developers who were assigned to replace a PHP web application, which has been up and running for about 10 years now, with a new one written in Java. The website itself, despite the underlying technology, is quite successful: it is currently present only in the German market, gets more than 100 million page impressions per month from more than 13 million visits, and attracts more than 600,000 unique users in the same time period. OK, it’s not Google, but it’s not that small either.

The former PHP website isn’t that bad. It does what it is meant to do. It’s pretty fast. So why the rewrite?

As so often, it all comes down to TCO. The basic principles of the PHP webapp aren’t bad and aren’t even that outdated for a ten-year-old vessel, but over time, with a lot of different maintainers and no good documentation or big picture, software degrades and becomes a pain in the ass to maintain. Thus the decision for the rewrite was born, with the main goals of easier (read: cheaper) maintenance and being at least as fast as, or not that much slower than, the current website.

As you might see from the previous posts, the general feeling right now is quite good, but we will see in the next few days/weeks when QA starts testing the thing. Knock on wood.

Categories: 16ms, Common, Technology Tags:

Struts2 interceptor stack for cache index future page…

March 7th, 2008 AVC Comments off

This is the first interceptor-stack performance log. What you see below is the log from my local JBoss while requesting the index page. It takes only 125 ms to get through the interceptor stack.
All other elements are cached, so the interceptor stack is not invoked for them. Maybe we can optimize this further… :-) (A sketch of how such a stack is declared in struts.xml follows the log.)


16:08:21,449
[125ms] - Handling request from Dispatcher
[0ms] - create DefaultActionProxy:
[0ms] - create DefaultActionInvocation:
[0ms] - actionCreate: index
[125ms] - interceptor: exception
[125ms] - interceptor: alias
[125ms] - interceptor: servletConfig
[125ms] - interceptor: prepare
[125ms] - interceptor: i18n
[110ms] - interceptor: chain
[110ms] - interceptor: debugging
[110ms] - interceptor: profiling
[110ms] - interceptor: scopedModelDriven
[110ms] - interceptor: modelDriven
[110ms] - interceptor: fileUpload
[110ms] - interceptor: checkbox
[110ms] - interceptor: staticParams
[110ms] - interceptor: params
[110ms] - interceptor: conversionError
[110ms] - interceptor: validation
[110ms] - interceptor: workflow
[110ms] - interceptor: navigationInterceptor
[63ms] - interceptor: defaultPageInterceptor
[63ms] - interceptor: queryInterceptor
[63ms] - interceptor: cookie
[16ms] - invokeAction: index
[47ms] - executeResult: success
[47ms] - create DefaultActionProxy:
[47ms] - create DefaultActionInvocation:
[47ms] - actionCreate: tbsContentAction
[47ms] - interceptor: exception
[47ms] - interceptor: alias
[47ms] - interceptor: servletConfig
[47ms] - interceptor: prepare
[47ms] - interceptor: i18n
[40ms] - interceptor: chain
[40ms] - interceptor: debugging
[40ms] - interceptor: profiling
[40ms] - interceptor: scopedModelDriven
[40ms] - interceptor: modelDriven
[40ms] - interceptor: fileUpload
[40ms] - interceptor: checkbox
[40ms] - interceptor: staticParams
[40ms] - interceptor: params
[40ms] - interceptor: conversionError
[40ms] - interceptor: validation
[40ms] - interceptor: workflow
[40ms] - interceptor: navigationInterceptor
[20ms] - interceptor: defaultPageInterceptor
[20ms] - interceptor: queryInterceptor
[20ms] - interceptor: FinanceDataInterceptor
[20ms] - invokeAction: tbsContentAction
[20ms] - executeResult: success
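
For reference (the original post only shows the timing log), a custom stack containing the project-specific interceptors that appear above (navigationInterceptor, defaultPageInterceptor, queryInterceptor) would typically be declared in struts.xml along these lines; the package and class names here are placeholders, not the actual project code:

<package name="default" extends="struts-default">
	<interceptors>
		<interceptor name="navigationInterceptor"
			class="com.example.interceptor.NavigationInterceptor" />
		<interceptor name="defaultPageInterceptor"
			class="com.example.interceptor.DefaultPageInterceptor" />
		<interceptor name="queryInterceptor"
			class="com.example.interceptor.QueryInterceptor" />

		<interceptor-stack name="cachedPageStack">
			<interceptor-ref name="defaultStack" />
			<interceptor-ref name="navigationInterceptor" />
			<interceptor-ref name="defaultPageInterceptor" />
			<interceptor-ref name="queryInterceptor" />
		</interceptor-stack>
	</interceptors>

	<default-interceptor-ref name="cachedPageStack" />
</package>

Trimming interceptors that a cached page never needs out of such a stack is the most direct way to shave time off the 125 ms above.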

have nice day,
(AVC)

Categories: 16ms Tags:

News from the 16ms frontier

March 7th, 2008 derlanders Comments off

Yay! We were finally able to fake some passable statistics which show that the cache system we use is uber. Okay, uber compared to partial caching in the file system, but still superior by a factor of 24. (Sorry, should be 23, we are working on it.) Below is the result of a timing test:

42 ms > 1337 ms

“It is all lies! Everything I took for real all these years – a lie.”

As it seems, Firebug only measures the time needed for an HTTP response to load, which means our previous results were somewhat wrong – and so were the results for the PHP system. The processing time from click to the start of the response is not included for either system. Now we have used highly sophisticated software tools to overcome this flaw and produced… interesting results.

Figures spat out by some testing automaton have to be regarded with suspicion though, especially if it is a quickly clicked-together test. However, if these numbers are right, the average click time of our JBoss (in transwarp overdrive mode, with solid fuel boosters and native library extensions, along with our supportive pedaling) is about 35 ms, with an ugly peak of 70 ms. The peak of the PHP system is 10.

Seconds.

So if you take a look at the graph below, there are two pointy curves for the live system (blue) and the QA instance (green). The red curve (the JBoss system) does not look pointy here because the 10-second peak from the live system ruins the scale. (The red and blue curves drop to zero because they finished their 100 clicks before the QA system did.)

ooover niiinethoousand!!!!

Categories: 16ms, Technology Tags:

How to escape from PermGen space hell (or escape from “Die Strafe Gottes”, the punishment of God)?

March 6th, 2008 AVC 1 comment

How many times a day do you get a PermGen space exception? Then you have to restart the application server, and that makes you really angry. So what can we do?

The easy workaround is to add JVM options that handle this problem better. On the JBoss application server, try changing JAVA_OPTS in run.bat/run.sh to include the following options:

With Sun JVMs, also reduce the RMI GCs to once per hour:

set JAVA_OPTS=%JAVA_OPTS% -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC ^
    -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled -XX:PermSize=128m ^
    -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000 ^
    -Dsun.rmi.dgc.server.gcInterval=3600000

(In run.sh, use \ line continuations and JAVA_OPTS="$JAVA_OPTS ..." instead.)

By default, the PermGen space is set to 64 MB, which is too small for large applications. So what we must add are these few options:

  • -XX:+UseConcMarkSweepGC – tells the JVM to use the concurrent low-pause collector for the old generation heap; as most of the collection with this collector is done in parallel with the application threads, it only has a few brief stop-the-world pauses instead of one large pause.
  • -XX:+CMSPermGenSweepingEnabled – lets the CMS collector also sweep the permanent generation.
  • -XX:+CMSClassUnloadingEnabled – lets the CMS collector unload classes that are no longer needed.
  • -XX:PermSize=128m -XX:MaxPermSize=256m – MaxPermSize sets a new maximum size for the permanent generation, which can help with some of the java.lang.OutOfMemoryError: PermGen space issues people see.

have nice day,

AVC (Arkadiusz Victor Czarnik)

PS: It also optimizes the performance of the application server. :-)

Categories: 16ms Tags:

We hit it!

March 6th, 2008 makii Comments off

Under 16 ms! We did it. The current website using PHP hits 16 ms per request; we can do it as fast as they can, or even faster, as Firebug tells me here. It gets better and better…

Categories: 16ms Tags: