Thursday, June 28, 2012

Why a mobile application can be a deciding factor

I work at AppDynamics (www.appdynamics.com), which is in the application performance management (APM) domain. For people who do not know about AppDynamics, it is a product that can track a business transaction in a complex distributed application and give you a real-time picture of how the system is working. It can detect anomalies and diagnose a problem in the application, which is great for any modern-day complex web application.

Now, in a system, problems do not happen every day. They happen once in a while, and to detect one and find its root cause you need to monitor the application at all times. That means having a crystal-clear and logical user interface which is easily accessible from anywhere. And it is this monitoring capability which is going to be a deciding factor at many sites as more and more vendors develop the technology to monitor the next generation of applications. Having your dashboard accessible from a web browser is great, but not enough. A heavy UI (like the one AppDynamics has) will not be that useful on smartphones and tablets. The ops guys who are responsible for keeping the app running without any downtime should be able to not only monitor but also diagnose a problem in the app from their smartphones or tablets, and I think this is becoming more and more important.

So my advice to all next-generation APM vendors is to develop the mobile app for their UI as soon as they can. It could be the deciding factor in winning or losing a deal as bigger and bigger enterprises move to the disruptive change you have brought to this space.

Monday, June 25, 2012

Payment page, where absolutely nothing should ever fail.

These days clouds are all over us and they are pouring money down. From simple business applications to all kinds of storage, everyone wants to do it on the cloud. Since I had been storing all my personal data on my hard disk, which is a disaster waiting to happen, I decided to move all my personal content, like the pictures taken over the last ten years, to the cloud. I looked around and the cheapest option I found was Microsoft SkyDrive (skydrive.com).
They give 7 GB of storage for free and you can get an additional 20 GB for $10 per year, which is fairly cheap. I had close to 23 GB of data to store, so I decided to buy their subscription and started filling in my credit card information. Now this is the place where nothing, absolutely nothing, should fail, since this is the bread and butter, and Microsoft's product failed miserably, not because of any technical glitch but because of the utter stupidity of the developer and tester of the product.

Below is the screenshot of the page where they ask for credit card info: the credit card number and the CVV on the back of the card. My card's CVV starts with 0 (e.g. 051), which I honestly filled in the box before clicking through to the next step.


Now the developer converted this number (051) to an integer automatically (which makes it 51), and bingo, I get an error that the CVV number is too short.
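Here is a minimal sketch of what that kind of bug looks like (hypothetical code, not Microsoft's actual implementation): parsing the CVV as an integer silently drops the leading zero, so a later length check rejects a perfectly valid card.

public class CvvValidation {

    // Buggy: converting the CVV to an integer drops the leading zero
    static boolean isValidBuggy(String cvv) {
        int parsed = Integer.parseInt(cvv);          // "051" becomes 51
        return String.valueOf(parsed).length() == 3; // "51" is now "too short"
    }

    // Correct: a CVV is not a number, it is a string of digits
    static boolean isValidFixed(String cvv) {
        return cvv.matches("\\d{3}");
    }

    public static void main(String[] args) {
        System.out.println(isValidBuggy("051")); // false: the bogus error I hit
        System.out.println(isValidFixed("051")); // true
    }
}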

This is a failure which will cost Microsoft money every minute it exists on their page. What is the point of making a product and maintaining it when the very purpose of making money out of it is hampered by a stupid bug (I want to call it incompetence and negligence)? I am sure many like me have faced this issue, and I am sure that so far it must have cost Microsoft hundreds of thousands of dollars, if not millions.

Sunday, May 22, 2011

What to remember while using singleton classes in agile software development

During my time at Terracotta I have been following a pretty strict agile software development methodology. A feature is not complete if there are no unit tests covering each class and subsystem, or if there is no system test to check whether the code behaves as expected. The build system we had used Ruby, and it had become my habit to write tests assuming that each test would run in a separate JVM. But in a changing world where Maven seems awesome, this assumption fails, and that causes a whole lot of problems for test cases involving singleton objects.

Let me explain it with an example.
Let's say we have a factory which creates arrows for the archers in a city. Every time an archer comes, it hands out n arrows, where n is the count of archers served so far (the first archer gets one arrow, the second two, and so on), while making sure no two archers get arrows at the same time.
It comes naturally that this factory should be a singleton instance, since archers can come from all over the place. So we write a singleton class like this:

public class WeaponFactory {

    private int arrowCount = 0;

    // private so that no one else can create an instance
    private WeaponFactory() {
    }

    // let the class loader do the magic of creating the singleton instance for you
    public static WeaponFactory getWamboo() {
        return WeaponFactoryHolder.instance;
    }

    private static class WeaponFactoryHolder {
        static final WeaponFactory instance = new WeaponFactory();
    }

    // no two archers can take arrows at the same time
    public synchronized Weapon[] getArrows() {
        arrowCount++;
        Weapon[] weapons = new Arrow[arrowCount];
        for (int i = 0; i < arrowCount; i++) {
            weapons[i] = new Arrow(Arrow.FIRE_POWER);
        }
        return weapons;
    }

    // create an arrow with the specified firepower
    public Weapon createArrow(int firePower) {
        return new Arrow(firePower);
    }
}


It's a pretty standard class which uses the singleton design pattern and fulfills the requirement. Now to test this class we have a bunch of tests. For example, let's say we have these two classes:

import junit.framework.TestCase;

public class WeaponFactoryTest1 extends TestCase {

    public void testFactory() {
        WeaponFactory weaponFactory = WeaponFactory.getWamboo();
        Weapon[] weapons = weaponFactory.getArrows();
        assertEquals(1, weapons.length);
    }
}


import junit.framework.TestCase;

public class WeaponFactoryTest2 extends TestCase {

    public void testFactory() {
        WeaponFactory weaponFactory = WeaponFactory.getWamboo();
        Weapon[] weapons = weaponFactory.getArrows();
        assertEquals(1, weapons.length);
    }
}


Now as long as the two tests run in different JVMs we are fine: each test gets a freshly created singleton instance of WeaponFactory and the test logic tests the class correctly. But the moment both tests run in the same JVM (e.g. when "mvn clean install" is fired, all tests run in the same JVM) we have an issue. In the example above, whichever of the two tests runs first will pass, while the second will fail, since the first test created the WeaponFactory instance and changed its state in a way the second test was not expecting.

In a traditional build system where each test runs in a separate JVM we were doing fine. Now imagine that you have to change your build system and use Maven instead. After the pain you take to migrate, you will realize that all the tests which use singleton classes with logic like the example above start failing, while running the same tests individually with "mvn test" passes.
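If you would rather keep the one-JVM-per-test assumption instead of fixing every test, Surefire can be told to fork. A sketch, assuming a 2.x-era maven-surefire-plugin where forkMode is supported:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- fork a fresh JVM per test class, restoring the old isolation -->
    <forkMode>always</forkMode>
  </configuration>
</plugin>

Forking every test class restores the isolation but makes the build noticeably slower, which is why resetting singleton state, as described below, is usually the better fix.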

It's not always possible to have a quick fix for this, but something simple like resetting the singleton instance to its initial state before each test run might do it. For example, in the above case, expose this method in the WeaponFactory class:

public void reset() {
    this.arrowCount = 0;
}

and add this to the setUp of each of your JUnit tests:


@Override
protected void setUp() throws Exception {
    // reset every singleton the tests in this suite touch
    Marina.getInstance().reset();
    WeaponFactory.getWamboo().reset();
    EnemyFactory.getInstance().reset();
}


The point here is that it is always tricky to test singleton classes in your code, and it can be pretty confusing to figure out why a test that passes when run individually fails when "mvn clean install" is fired. You should keep this in mind while designing your singleton classes.

Tuesday, May 10, 2011

A Compressed Set of Longs

In Terracotta's world every shared object is associated with an object ID, which is a long. Depending on the use case, objects are created and old ones may get dereferenced and collected by Terracotta's Distributed Garbage Collector. So during cluster operation we end up with a large number of object IDs which are not contiguous in nature. In a lot of operations we need all, or a fraction of, the object IDs present in the system. Creating a plain collection for all those object IDs would occupy a large amount of heap, and sending that collection over the wire would not be efficient either. So we needed to compress those object IDs efficiently. These are the two approaches we took to implement our compressed set for object IDs, called ObjectIdSet.

1. Range-based compression: As the name suggests, object IDs are compressed based on the range they fall in. So under the ObjectIdSet we have a set of Range objects, each with a defined start and end. A Range object present in the ObjectIdSet means that Range.start to Range.end (both inclusive) are present in the ObjectIdSet.
While adding object IDs to the set, two Range objects can be merged and replaced by one. For example, if Range(5,8) and Range(10,15) are present and ID 9 is added, the two Range objects get merged into Range(5,15). Similarly, while removing an ID from the ObjectIdSet, a Range object may get split into two: Range(5,15) gets split into Range(5,8) and Range(10,15) if object ID 9 is removed.
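Here is a tiny illustrative sketch of that merge-on-add logic (my own sketch under those semantics, not Terracotta's actual Range code), keeping ranges in a TreeMap of start to end:

import java.util.TreeMap;

// Sketch of range-based add with merging: keys are range starts,
// values are range ends, both inclusive.
public class RangeSet {

    private final TreeMap<Long, Long> ranges = new TreeMap<Long, Long>();

    public void add(long id) {
        Long lowerStart = ranges.floorKey(id);
        if (lowerStart != null && ranges.get(lowerStart) >= id) {
            return; // id already covered by an existing range
        }
        long start = id;
        long end = id;
        // merge with a range ending right before id, e.g. Range(5,8) + 9
        if (lowerStart != null && ranges.get(lowerStart) == id - 1) {
            start = lowerStart;
        }
        // merge with a range starting right after id, e.g. 9 + Range(10,15)
        Long upperStart = ranges.ceilingKey(id + 1);
        if (upperStart != null && upperStart == id + 1) {
            end = ranges.remove(upperStart);
        }
        ranges.put(start, end); // Range(5,8) + 9 + Range(10,15) -> Range(5,15)
    }
}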

2. BitSet-based compression: In this approach we have a set of objects called BitSet, where each BitSet contains two long fields.

public class BitSet {

    private long start;
    private long nextLongs = 0;

    ......
}

Here BitSet.start defines the starting ID of the block, and BitSet.nextLongs is a 64-bit mask in which each bit records whether the corresponding one of the next 64 IDs is present in the set. For example, if only the two IDs 6 and 84 are present in the set, we will have two BitSet objects with these start and nextLongs values:

1. BitSet(0, 1L << 6)    // bit 6 set, representing ID 6
2. BitSet(64, 1L << 20)  // bit 20 set, representing ID 84 (84 - 64 = 20)

With this approach the compression is based on fixed-size blocks. To add an ID to the ObjectIdSet we just set the corresponding bit in BitSet.nextLongs, and similarly to remove an ID we just unset that bit. This approach is less complex and generally compresses better, since the Range-based approach tends to end up with a lot of fragmented Range objects.
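A minimal sketch of the idea (again my own illustration, not Terracotta's actual ObjectIdSet): each 64-ID block is a single long mask keyed by its block-aligned start.

import java.util.TreeMap;

// Sketch of the BitSet-based set: one long mask per 64-ID block.
public class CompressedLongSet {

    private final TreeMap<Long, Long> blocks = new TreeMap<Long, Long>();

    public void add(long id) {
        long start = id & ~63L; // block-aligned start index
        Long mask = blocks.get(start);
        long bit = 1L << (id - start);
        blocks.put(start, mask == null ? bit : mask | bit);
    }

    public boolean contains(long id) {
        long start = id & ~63L;
        Long mask = blocks.get(start);
        return mask != null && (mask & (1L << (id - start))) != 0;
    }

    public void remove(long id) {
        long start = id & ~63L;
        Long mask = blocks.get(start);
        if (mask == null) return;
        long cleared = mask & ~(1L << (id - start));
        if (cleared == 0) {
            blocks.remove(start); // drop blocks that become empty
        } else {
            blocks.put(start, cleared);
        }
    }
}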

Of course there are scenarios where the Range-based object ID set performs better, but in our testing of general cases we found that the BitSet-based approach worked better most of the time. So by default Terracotta's compression is based on the BitSet approach.

The implementation can be checked out in the Terracotta open-source code.

Monday, April 25, 2011

Ehcache bulk operation APIs

People who have used Ehcache know that there is only one bulk operation provided as of now, which is removeAll(). This operation removes all the entries from the cache. In the next release of Ehcache there is a plan to provide these operations as well:

Collection<Element> getAll(Collection<Object> keys)
void putAll(Collection<Element> elements)
void removeAll(Collection<Object> keys)
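To make the intent concrete, here is a sketch of how the planned APIs might be called, assuming the signatures above ship as listed (they are not in Ehcache yet, and "someCache" is a made-up cache name):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class BulkOpsSketch {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();
        Cache cache = manager.getCache("someCache"); // assumes "someCache" is in ehcache.xml

        // one bulk call instead of a thousand individual puts
        List<Element> batch = new ArrayList<Element>();
        for (int i = 0; i < 1000; i++) {
            batch.add(new Element("key-" + i, "value-" + i));
        }
        cache.putAll(batch);

        Collection<Element> two = cache.getAll(Arrays.<Object>asList("key-1", "key-2"));
        cache.removeAll(Arrays.<Object>asList("key-3", "key-4"));

        manager.shutdown();
    }
}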

The goal of these new APIs is to provide bulk operations which should be faster than the normal operations. Currently we have these two consistency modes for cache operations:

1. Strong: All writes are done under a lock, so once anything is changed (add/remove/update) in the cache, all other nodes and threads will see the change.
2. Eventual: As the name suggests, no clustered lock is taken for each operation and the cache becomes coherent eventually. This is used when speed, predictability and uptime are the primary concerns in an application.

The challenge is to make the new bulk APIs faster than doing single operations in a loop.

Here is what is planned to achieve this.

1. putAll (eventual consistency)
We create transaction boundaries by taking a lock and releasing it, with a bunch of optimizations, like transaction folding, done on top. Since we already know how many entries this call needs to put, and since all the entries cannot go into one transaction, the work can be broken up and sent to the server in batches sized by "ehcache.incoherent.putsBatchSize". Something like:

lock.takeLock()
for (int i = 0; i < batchSize; i++) {
    doPut(key, value)
}
lock.releaseLock()
// notify listeners for the puts that were done in the loop

2. putAll (strong consistency)
In strong consistency, for each operation a lock is taken, the operation is performed and then the lock is released. For a bulk putAll this can be optimized. To avoid deadlock and stay efficient, lock requests are sent to the server asynchronously. Every few milliseconds we check how many lock grants have come back from the server; puts are done for whatever locks have been granted so far, and those locks are released. The putAll call keeps trying to acquire the remaining locks until eventually everything has been put into the cache. Listeners are notified immediately for every put that is done once its lock has been granted.

3. removeAll (eventual consistency)
Same as putAll (eventual consistency)

4. removeAll (strong consistency)
Same as putAll (strong consistency)

5. getAll
The implementation can be understood by these steps (see the sketch after this list):
  1. Collection getAll(Collection) will return a custom collection whose iterator is overridden.
  2. The request to the server will be based on a CDSMDso, which lets us split the request across stripes in case of multiple Terracotta server stripes.
  3. In return we get a map of keys to the object IDs of the values, which we use in our custom iterator.
  4. Once the object IDs of the values corresponding to the keys are returned, a lookup request is initiated with a configurable number of object IDs batched together, and the returned values are added to the local cache.
  5. When the collection is iterated, the value corresponding to the object ID associated with a particular key is returned if present in the local cache. If not, we need to fetch the value from the server in a batch.
  6. Implementing step 5 is a little tricky, since some of the values added to the local cache during lookup might get evicted. The strategy for how the next batch of values gets looked up needs to be thought through.
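Here is a rough sketch of steps 3 to 5 (hypothetical types throughout: the Server interface stands in for the real Terracotta lookup path, and the real implementation hides this behind the custom iterator):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LazyValues {

    // stand-in for the server lookup: object IDs -> values
    interface Server {
        Map<Long, Object> lookup(List<Long> oids);
    }

    private final Map<Object, Long> keyToOid;                             // step 3: key -> value object ID
    private final Map<Long, Object> local = new HashMap<Long, Object>(); // local cache of values
    private final Server server;
    private final int batchSize;

    public LazyValues(Map<Object, Long> keyToOid, Server server, int batchSize) {
        this.keyToOid = keyToOid;
        this.server = server;
        this.batchSize = batchSize;
    }

    // step 5: serve from the local cache if possible, else fetch the next batch
    public Object valueFor(Object key) {
        Long oid = keyToOid.get(key);
        if (!local.containsKey(oid)) {
            List<Long> batch = new ArrayList<Long>();
            batch.add(oid); // always include the ID we actually need
            for (Long candidate : keyToOid.values()) {
                if (batch.size() >= batchSize) break;
                if (!candidate.equals(oid) && !local.containsKey(candidate)) {
                    batch.add(candidate);
                }
            }
            local.putAll(server.lookup(batch)); // step 4: batched lookup
        }
        return local.get(oid);
    }
}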

For the strong consistency case, a clustered read lock would be acquired for the whole key set, as in the removeAll (strong consistency) case explained above, with the difference that read locks are taken instead.

Performance Comparison
Right now these new APIs are in the development phase. I will update this post with the results of the exercise once development and testing are done.

Thursday, April 21, 2011

ehcache search example

Search was released in Ehcache recently and has been getting quite a lot of traction from various users. This small blog explains how you can use search in your application, with an example.

1. What is Search
When search is enabled, an index of elements is built while the cache is built, according to the searchable attributes the user has provided. This index can later be used to execute complex queries against the cache. The cache can be a standalone Ehcache or a Terracotta clustered cache. An example of such a query is shown in section 5 below.

2. What can you Search
Search gets results out of the elements of the cache, based on keys or values. The criteria upon which a search can be done are provided by the user at cache initialization time, and indexing is done against them.

3. Enabling Search
Enabling search is fairly easy. All you need to do is add the searchable tag in the cache definition section of the ehcache.xml file. Here is an example.

<cache name="cache2" maxElementsInMemory="10000" eternal="true" overflowToDisk="false">
<searchable/>
</cache>

This is the simplest way to enable search. It will simply look at all the keys and values, check whether they are of a searchable type, and if they are, add them as search attributes. By default this also starts automatic indexing. To disable that you can do this:

<cache name="cache3" ...>
<searchable keys="false" values="false">
...
</searchable>
</cache>

When keys or values are not directly searchable, searchable attributes need to be extracted from them. In that case you can provide an attribute extractor class, or an expression naming a method that returns a searchable type, to be used for indexing. A typical example is:
<cache name="cache3" maxElementsInMemory="10000" eternal="true" overflowToDisk="false">
<searchable>
<searchAttribute name="age" class="net.sf.ehcache.search.TestAttributeExtractor"/>
<searchAttribute name="gender" expression="value.getGender()"/>
</searchable>
</cache>

You can also do this programmatically, like this:

SearchAttribute sa = new SearchAttribute();
sa.setExpression("value.getAge()");
sa.setName("age");
cacheConfig.addSearchAttribute(sa);
4. Search Attribute
The user has to define search attributes, either in the config file or programmatically, to enable indexing and querying in the cache. Search attributes are a way to tell the cache what needs to be indexed so that it can be queried later on; the searchAttribute examples in section 3 above show how to define them.

5. Querying the Cache
If you have followed the rest of the steps correctly then you are pretty much done and ready to run complex queries against your cache. All you need to do is create a query, add specific criteria, add aggregators if you wish to, and execute it. Here is an example.

Query query = cache.createQuery().addCriteria(age.eq(35)).includeKeys().end();
Results results = query.execute();

Now you have the result of your specific query. You can use these operations on the result set to serve your purpose:

discard(): Discards this query result. This call is not mandatory but is recommended once the caller is done with the results; it allows the cache, which may be distributed, to immediately free any resources associated with the result.

List all(): Retrieves all of the cache results in one shot.

List range(int start, int count): Retrieves a subset of the cache results.

int size(): Returns the size of the result set.

boolean hasKeys(): Whether the results have cache keys included.

boolean hasValues(): Whether the results have cache values included.

boolean hasAttributes(): Whether the results have cache attributes included.

boolean hasAggregators(): Whether the results contain aggregators.
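Putting it all together, here is a minimal end-to-end sketch, assuming an ehcache.xml that declares cache "cache3" with the searchable "age" attribute from section 3:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.search.Attribute;
import net.sf.ehcache.search.Query;
import net.sf.ehcache.search.Result;
import net.sf.ehcache.search.Results;

public class SearchSketch {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();
        Cache cache = manager.getCache("cache3");

        // look up the attribute declared in ehcache.xml
        Attribute<Integer> age = cache.getSearchAttribute("age");

        Query query = cache.createQuery()
                           .addCriteria(age.eq(35))
                           .includeKeys()
                           .end();
        Results results = query.execute();
        try {
            for (Result result : results.all()) {
                System.out.println("matching key: " + result.getKey());
            }
        } finally {
            results.discard(); // let a distributed cache free resources early
        }
        manager.shutdown();
    }
}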



More documentation can be found here

Tuesday, April 12, 2011

Mocking the Java System class to override System.currentTimeMillis() using JMockit

Recently I came across a very interesting problem while writing a system test for a component. The problem statement was to throw an operator event when the server's and a client's system times are out of sync by more than a certain number of seconds. The system test needed to run on one box and still see different values for System.currentTimeMillis().

To do this I used JMockit. It's fairly easy to get JMockit into your environment from a Maven repo or by adding Ivy settings to your project. Here is the repo you can use to fetch JMockit with Maven:

<repositories>
<repository>
<id>download.java.net</id>
<url>http://download.java.net/maven/2</url>
</repository>
</repositories>

This is the library that you need to add as a dependency for testing:

<dependency>
<groupId>mockit</groupId>
<artifactId>jmockit</artifactId>
<version>0.993</version>
<scope>test</scope>
</dependency>

The other way of getting the jar is by adding this to your ivy.xml file:

<dependency name="dspace-jmockit" rev="0.999.4" org="org.dspace.dependencies.jmockit"/>
Now comes the part about how you can override System.currentTimeMillis.
Below is the class used to override the static methods of System:

import mockit.Mock;
import mockit.MockClass;

@MockClass(realClass = System.class)
public class MockSystem {

    private int i = 0;

    @Mock
    public long currentTimeMillis() {
        i++;
        return i * 10000;
    }
}


To use this in your test class you need to call this before you start the actual test.

Mockit.setUpMocks(new MockSystem());


Also, you have to be careful to tear the mock down before your test ends so that you do not screw up any other test:


Mockit.tearDownMocks(System.class);
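Putting the two calls where they belong, a minimal JUnit 3 test could look like this (the test class and assertion are mine, just to show the wiring):

import junit.framework.TestCase;
import mockit.Mockit;

public class ClockMockTest extends TestCase {

    @Override
    protected void setUp() throws Exception {
        Mockit.setUpMocks(new MockSystem()); // System.currentTimeMillis() is mocked from here on
    }

    @Override
    protected void tearDown() throws Exception {
        Mockit.tearDownMocks(System.class);  // restore the real System class
    }

    public void testMockedClockAdvancesInFixedSteps() {
        long t1 = System.currentTimeMillis(); // 10000
        long t2 = System.currentTimeMillis(); // 20000
        assertEquals(10000, t2 - t1);
    }
}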


A few things to note here:
1. Overriding static methods of the System class this way works only on Java 1.6. It will fail for lower versions.
2. If you get the following exception in your test, it is because JMockit was not initialized before the static method was overridden. To get rid of it, make sure the JMockit jar sits above the JUnit jar in the export order of the libraries in your project.


Caused by: java.lang.IllegalStateException: JMockit has not been initialized. Check that your Java 6 VM has been started with the -javaagent:/Users/rsingh/work/branches/enterprise-1/community/code/base/dependencies/lib/dspace-jmockit-0.999.4.jar command line option.
    at mockit.internal.startup.AgentInitialization.initializeAccordingToJDKVersion(AgentInitialization.java:44)
    at mockit.internal.startup.Startup.verifyInitialization(Startup.java:247)
    at mockit.Mockit.<clinit>(Mockit.java:82)
INFO at mockit.Mockit.(Mockit.java:82)