Tuesday, 10 March 2015

Scaling web services - It's easy!! - Part 2

After following part one, “Scaling web services - It's easy!!”, you have successfully scaled your application server horizontally and the application is serving thousands of requests concurrently. You are celebrating the initial success of your lucky application. But somewhere down the line the application gets slower and slower, and finally... Oops!! The application breaks down :(.


The culprit is the helpless beast of a database: MySQL.

Now adding more and more cloned application servers does not help at all; it only makes the situation worse. After some brainstorming we arrive at two major possibilities to tackle it.

#1. Keep the beast MySQL running, set up master-slave replication of the DB, and upgrade the master by adding RAM, and more RAM. In the longer run, use techniques like database sharding and denormalization, and do some fine tuning of the database in use. But as time goes on, this optimization becomes more and more time consuming and costly.
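To make those read replicas useful, the application has to send writes to the master and reads to the replicas. Here is a minimal sketch of that routing in Python, assuming the PyMySQL driver and hypothetical host names:

    import random

    import pymysql  # assumption: the PyMySQL driver is installed

    # Hypothetical hosts: one master for writes, two replicas for reads.
    MASTER = {"host": "db-master.internal", "user": "app", "password": "secret", "db": "shop"}
    REPLICAS = [
        {"host": "db-replica-1.internal", "user": "app", "password": "secret", "db": "shop"},
        {"host": "db-replica-2.internal", "user": "app", "password": "secret", "db": "shop"},
    ]

    def get_connection(write=False):
        """Route writes to the master and spread reads over the replicas."""
        cfg = MASTER if write else random.choice(REPLICAS)
        return pymysql.connect(**cfg)

    # Writes go through the master; replication carries them to the replicas.
    conn = get_connection(write=True)
    try:
        with conn.cursor() as cur:
            cur.execute("INSERT INTO orders (item) VALUES (%s)", ("milk bread",))
        conn.commit()
    finally:
        conn.close()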

#2. Keep your DB a bit less normalized from day one and include fewer and fewer joins in your DB queries, as you would with a NoSQL DB, or switch outright to a highly scalable NoSQL DB like MongoDB. Joins now need to be done in the application code, which is running on a couple of servers anyway. The sooner you make the move, the less code change and rework needs to be done. But this approach has its own gotchas. Let's say your application does several such joins on a dataset: sooner or later these database requests start feeling slow. That is the time to introduce a cache. A cache is lightning fast: it holds the data in memory and serves data requests as fast as technically possible. E.g. Redis can serve thousands of data requests per second when hosted on a standard server.
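For illustration, here is a rough sketch of such an application-side join in Python, assuming a MongoDB instance with hypothetical 'orders' and 'users' collections. Two simple queries plus an in-memory lookup table replace one SQL JOIN:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # assumption: a local MongoDB
    db = client["shop"]  # hypothetical DB with 'orders' and 'users' collections

    def orders_with_user_names():
        """Join orders to users in application code instead of in the database."""
        orders = list(db.orders.find({}, {"user_id": 1, "item": 1}))
        user_ids = list({o["user_id"] for o in orders})
        # One extra query instead of a JOIN; build the lookup table in memory.
        users = {u["_id"]: u["name"] for u in db.users.find({"_id": {"$in": user_ids}})}
        return [{"item": o["item"], "user": users.get(o["user_id"])} for o in orders]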


Caching can be done in two ways:

Database query caching-

Whenever you do a database query in your application, store the result set in the cache. You can use a hashed version of the query string as the key and the result set as the value. But this leads to a cache expiration problem with complex queries: when one small piece of data changes, you need to delete every cached result set that contains it.
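A minimal sketch of this pattern in Python, assuming a local Redis instance and a PyMySQL-style DB connection; the key is a hash of the query plus its parameters, and a TTL keeps stale entries from living forever (it also assumes the rows are JSON-friendly):

    import hashlib
    import json

    import redis

    cache = redis.Redis(host="localhost", port=6379)  # assumption: a local Redis

    def cached_query(conn, sql, params=(), ttl=300):
        """Return a cached result set if present, else hit the DB and cache it."""
        key = "sql:" + hashlib.sha256(repr((sql, params)).encode()).hexdigest()
        hit = cache.get(key)
        if hit is not None:
            return json.loads(hit)
        with conn.cursor() as cur:
            cur.execute(sql, params)
            rows = cur.fetchall()
        cache.setex(key, ttl, json.dumps(rows))  # TTL bounds how stale a hit can be
        return rows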


Object caching -

This is preferable nowadays. In this case we build the model object and cache that model object in memory. This allows us to easily get rid of an object when its underlying data changes, and it makes the overall operation faster and more logical.
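A rough sketch in Python, again assuming a local Redis instance; load_user_from_db and save_user_to_db are hypothetical helpers standing in for your real data layer:

    import json

    import redis

    cache = redis.Redis(host="localhost", port=6379)  # assumption: a local Redis

    def user_key(user_id):
        return f"user:{user_id}"

    def get_user(user_id):
        """Read the whole user object from the cache, falling back to the DB."""
        hit = cache.get(user_key(user_id))
        if hit is not None:
            return json.loads(hit)
        user = load_user_from_db(user_id)  # hypothetical DB loader
        cache.set(user_key(user_id), json.dumps(user))
        return user

    def update_user(user_id, fields):
        """On change, write to the DB and evict just this one object."""
        save_user_to_db(user_id, fields)  # hypothetical DB writer
        cache.delete(user_key(user_id))   # the next read rebuilds it fresh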

This approach also makes asynchronous processing possible!! Just picture the situation where a group of worker servers is setting the attributes of your fat object by fetching the data from a slow, pitiful database server. The application then simply consumes the cached object and does not touch the database again for a while.
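Continuing the sketch above (and reusing its cache, user_key and json names), a background worker could pre-build the expensive object off the hot path like this:

    # Runs in a background worker, not in the request path.
    def warm_user_cache(user_id):
        """Assemble the fat object from slow sources and publish it to the cache."""
        user = load_user_from_db(user_id)                 # hypothetical slow fetch
        user["order_count"] = count_user_orders(user_id)  # hypothetical aggregation
        cache.set(user_key(user_id), json.dumps(user))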

I am a big fan of caching, and with memcache it scales like a charm. Keep caching and enjoy the lightning-fast performance!!




Monday, 9 March 2015

Uber - up to 5 free rides in various cities - Bangalore, Delhi, Hyderabad, Chennai - coupon code NAPED

Steps:

1. Download and install the application from

   https://play.google.com/store/apps/details?id=com.ubercab&hl=en

2. Use promocode:
    NAPED

3. Using the above code you will get a minimum of 1 free ride and a maximum of 5.

Enjoy the ride!!


Monday, 2 March 2015

Scaling web services - It's easy!!

Just recently I was asked to develop a massively scalable web service for a client called FADU Bakery shop ;). I am a software developer just like you, one who does less than he lets on, but only I know that!! I explored the concepts involved in making a web service scalable, and I am sharing them with you here. Have fun, and I hope you like it. Your comments are always welcome!!


When we talk of scalability it involves:

1. Application server scaling.
2. Database server scaling.
3. Memory scaling - cache and message queues (MQ).

I will explore each of these concepts in turn.

Application server scaling by cloning:

Usually our application servers are hidden behind a black box called a load balancer. The load balancer distributes the load evenly onto the group of hidden app servers, using an algorithm based on timestamps or current load. That means when user Mrs Ssoni interacts with your service for the first time, she may be served by server 5; her second request may be served by server 2; and her third request may come back to server 5 again.

Ssoni must always get the same result for the same request, independent of which server she lands on. That gives us the first golden rule of scaling: 'Every server should have the same code base and should not store user data like profile pictures or session data locally.' Otherwise, imagine Ssoni has put some milk bread in her basket on server 1, but her second request goes to a different server and the milk bread is no longer in her basket :(.

So what next... where do we store this user data?

One solution among many is to store this user data in a centralized data store accessible to all the application servers. That could be a shared database or a persistent cache like Redis or Memcached. Please don't rely on file-based caching, as it makes auto scaling and cloning a pain.
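As an illustration, here is a minimal sketch of a Redis-backed session store in Python; the host name is hypothetical, and in a real application the session id would travel back and forth in a cookie:

    import json
    import uuid

    import redis

    # Hypothetical shared Redis that every app server can reach.
    sessions = redis.Redis(host="session-store.internal", port=6379)
    SESSION_TTL = 3600  # seconds

    def create_session(user_data):
        """Store session data centrally so any app server can serve the next request."""
        sid = uuid.uuid4().hex
        sessions.setex(f"session:{sid}", SESSION_TTL, json.dumps(user_data))
        return sid  # handed back to the browser, e.g. in a cookie

    def load_session(sid):
        raw = sessions.get(f"session:{sid}")
        return json.loads(raw) if raw else None

    # Server 5 creates the basket; server 2 can read it on the next request.
    sid = create_session({"name": "Ssoni", "basket": ["milk bread"]})
    print(load_session(sid))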

So far so good... but what about deployment to multiple servers?

How do you ensure that a code change effectively reaches all app servers? Don't worry, this tricky problem is already solved by tools like Capistrano, or you can write your own script (a rough sketch follows below) to show you are not as lazy as me.
Now that we are serving the same codebase from all app servers, we can create an image file from one of these servers (AWS calls this an AMI - Amazon Machine Image). Use this AMI as a “super-clone” that all your new instances are based upon. Whenever you start a new instance, just do an initial deployment of your latest code and you are ready! Isn't that simple enough...?
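If you do roll your own script, it can be as simple as looping over the servers. A rough sketch in Python, assuming SSH key access and a hypothetical git checkout at /srv/app on each host:

    import subprocess

    # Hypothetical app servers sitting behind the load balancer.
    SERVERS = ["app1.internal", "app2.internal", "app3.internal"]

    def deploy(ref="origin/master"):
        """Put the same code change on every app server, one host at a time."""
        for host in SERVERS:
            subprocess.run(
                ["ssh", host,
                 f"cd /srv/app && git fetch && git reset --hard {ref}"
                 " && sudo systemctl restart app"],
                check=True,  # stop the rollout if any host fails
            )

    if __name__ == "__main__":
        deploy()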


#Database and memory scaling will be in the next blog... soon!!

#Names are fictional.