Hello there.

I have a research project on distributed systems. I asked my professor if I could work on MapReduce, and he gave me a hard time, saying that MapReduce is very broad. He asked me to pick a specific problem, either about distributed-systems frameworks like MapReduce or about something else that involves networking and distributed computing.

Can you recommend some research topics in MapReduce or in distributed systems?

Thank you.

+2  A: 

One thing that is currently becoming important in the cloud computing world is the ability to automatically scale capacity based on expected and unexpected variances in demand.

For example, my company provides sales tax calculation on a SaaS (software as a service) basis. We know that certain large customers have a sale every morning at 9am our time. We also know that the holidays produce more calculations than any other time of the year. There is a peak at the start of each month when some customers post a large batch of transactions.

Then again, there are the unexpected peaks. Often customers will run large promotions that we don't know about in advance or one will have a product that's a runaway success.

We can add or remove application servers from our cloud to meet demand. The process is semi-automated for us, but some companies are moving toward full automation.

Full automation also has its drawbacks. By the time new instances come online, a temporary peak may already be gone. A DoS (denial of service) attack could cause you to launch a huge number of instances (and pay for them). Sometimes it's not immediately clear exactly what type of resource needs to be launched: if you have application servers that fill different roles, which role needs more capacity? There is also the question of launching more instances vs. larger instances in terms of CPU and memory.
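As a toy illustration of one way to avoid reacting to temporary peaks, here is a sketch of a threshold-based autoscaling policy that only scales when load stays high over a whole observation window. This is not any real cloud provider's API; the class, thresholds, and window size are all made-up assumptions:

```python
from collections import deque

class Autoscaler:
    """Toy threshold autoscaler: scale out/in only when average
    utilization over a sliding window crosses a threshold, so a
    brief spike (or a short DoS burst) doesn't trigger a launch."""

    def __init__(self, min_instances=2, max_instances=20,
                 high=0.75, low=0.25, window=5):
        self.instances = min_instances
        self.min, self.max = min_instances, max_instances
        self.high, self.low = high, low
        self.history = deque(maxlen=window)  # recent utilization samples

    def observe(self, utilization):
        """Record one utilization sample (0.0-1.0) and maybe scale."""
        self.history.append(utilization)
        if len(self.history) < self.history.maxlen:
            return self.instances  # not enough evidence yet
        avg = sum(self.history) / len(self.history)
        if avg > self.high and self.instances < self.max:
            self.instances += 1       # sustained high load: scale out
            self.history.clear()      # restart the window after acting
        elif avg < self.low and self.instances > self.min:
            self.instances -= 1       # sustained low load: scale in
            self.history.clear()
        return self.instances
```

Note the cap on `max_instances`: that is one crude guard against a DoS attack launching an unbounded (and expensive) number of servers.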

Hopefully that gives you some ideas.

Eric J.
Thanks Eric. I asked him about cloud computing too and he said it's very broad. When I was working on AWS, I remember multicast wasn't working, so we weren't able to use Terracotta back then. This might be a research topic, but there are no papers about it :(
+1  A: 

One suggestion I have is in relation to Hadoop. I think this is in a similar vein to the answer by Eric J.

When you submit a job (i.e. a MapReduce job) to Hadoop, its standard response is to queue that job and run it as resources on the cluster become available. This is quite an active area of research and development, as simple scheduling systems like this will not always meet users' requirements.

For example, your cluster might have a number of critical jobs that need to run at certain times, but you also use the cluster for ad-hoc analysis of the data. How does the scheduler deal with situations like this? A FIFO-type queue might mean that your critical jobs don't get run in time, because less important jobs ahead of them in the queue are using the cluster's resources.
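The FIFO-vs-fair tradeoff can be sketched with a tiny simulation. This is not Hadoop's actual scheduler code; the job names, task counts, and the round-robin policy are simplified assumptions for illustration:

```python
from collections import deque

def fifo_order(jobs):
    """Run tasks strictly in submission order: a long ad-hoc job
    submitted first blocks every job behind it in the queue."""
    order = []
    for name, tasks in jobs:
        order.extend([name] * tasks)
    return order

def fair_order(jobs):
    """Round-robin one task at a time across jobs, so a small
    critical job finishes early even behind a big ad-hoc job."""
    queues = deque((name, deque(range(tasks))) for name, tasks in jobs)
    order = []
    while queues:
        name, q = queues.popleft()
        q.popleft()               # run one task from this job
        order.append(name)
        if q:                     # job still has tasks: back of the line
            queues.append((name, q))
    return order

# A big ad-hoc job submitted just before a small critical one:
jobs = [("adhoc", 4), ("critical", 2)]
```

Under FIFO the critical job's last task runs in slot 6 of 6; under the round-robin "fair" policy it finishes in slot 4, well before the ad-hoc job completes.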

Two organizations that have found this to be a problem, and have thus contributed their own schedulers, are:

Also on this subject are:

This isn't a simple problem to solve, and I doubt any one solution will ever meet every requirement, but I'd say there is a lot of scope for different approaches. The Facebook approach, for example, is actually based on the Completely Fair Scheduler in Linux.

However, you would have a number of different schedulers to work from, as it's easy to get the code and see how they work.

Just a thought, hope it helps.

Binary Nerd
Thanks man, I like your username :)
