This is a question I have pondered while working on a demanding network application that explicitly shared a task across the network, using a server to assign the job to each computer individually and "share the load".
I wondered: could this be done in a more implicit manner?
Question
Is it possible to distribute processor-intensive tasks across a voluntary, public network of computers so the job runs more efficiently, without requiring the job's program or process to be installed on each computer?
Scenario
Let's say we have a ridiculously intensive mathematics scenario in which I am trying to get my computer to calculate the prime factorization of every number from 1 to 10,000,000 and store the results in a database (assuming I have the space, and that the algorithm is already implemented in its own class, program, dynamic link library, or other runnable process).
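For concreteness, here is a minimal sketch (in Python, purely for illustration) of the kind of factorization routine the scenario assumes; the trial-division approach and the function name are my own assumptions, not part of any existing program.

    def prime_factorization(n):
        """Return the prime factorization of n as a list of (prime, exponent) pairs."""
        factors = []
        d = 2
        while d * d <= n:
            if n % d == 0:
                count = 0
                while n % d == 0:
                    n //= d
                    count += 1
                factors.append((d, count))
            d += 1
        if n > 1:
            factors.append((n, 1))
        return factors

    # e.g. prime_factorization(360) -> [(2, 3), (3, 2), (5, 1)]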
Now, it would be more efficient to spread this heavy workload across a network or run it on a multi-core supercomputer, but both are expensive. To my knowledge, you would need a purpose-built program for the specific algorithm, have that program installed across the cloud/distributed computing network, and run a server to keep track of what each computer is doing (i.e., which number it is currently factoring).
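To make that concrete, here is a rough sketch of what such a tracking server might look like, assuming a hypothetical XML-RPC coordinator that hands out number ranges; the method names (get_work, submit_result), the range size, and the port are illustrative assumptions, not an existing framework.

    # Coordinator sketch: hands out ranges of numbers to volunteer machines
    # and records which range each worker currently holds.
    from xmlrpc.server import SimpleXMLRPCServer

    RANGE_SIZE = 10_000          # how many numbers a worker factors per request
    LIMIT = 10_000_000           # upper bound from the scenario

    next_start = 1
    assignments = {}             # worker_id -> (start, end) currently being computed
    results = {}                 # n -> list of (prime, exponent) pairs

    def get_work(worker_id):
        """Assign the next unclaimed range to worker_id, or [] when done."""
        global next_start
        if next_start > LIMIT:
            return []
        start, end = next_start, min(next_start + RANGE_SIZE - 1, LIMIT)
        next_start = end + 1
        assignments[worker_id] = (start, end)
        return [start, end]

    def submit_result(worker_id, factorizations):
        """Store the factorizations {n: [[p, e], ...]} returned by a worker."""
        results.update({int(n): facs for n, facs in factorizations.items()})
        assignments.pop(worker_id, None)
        return True

    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(get_work)
    server.register_function(submit_result)
    server.serve_forever()

In a real deployment the results would go to the database from the scenario, and ranges that a worker never reports back would have to be reissued; those details are left out here.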
Conclusion
Overall:
Would it be possible to create a cloud program / OS / suite where you could share processor time for an unspecified type of process?
If so, how would you implement it, and where would you start?
Would you make an OS dedicated to running unspecified, non-explicit tasks, or could it be done with a cloud-enabled program installed on the computers of volunteers willing to share a percentage of their processor time to help the general community? (See the sketch of such a volunteer client after these questions.)
If this were implementable, would you voluntarily be part of the greater cloud?
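For illustration, here is a minimal sketch of what a volunteer's client might look like under the same assumptions as the coordinator sketch above; the coordinator URL, worker ID, and politeness pause are placeholders, and real CPU-percentage throttling, sandboxing, and fault tolerance are left out.

    # Volunteer client sketch: polls the hypothetical coordinator for a range,
    # factors every number in it, and sends the results back.
    import time
    from xmlrpc.client import ServerProxy

    COORDINATOR_URL = "http://example.org:8000"   # placeholder address
    WORKER_ID = "volunteer-42"                    # placeholder identity

    def prime_factorization(n):
        """Trial division, as in the earlier sketch."""
        factors, d = [], 2
        while d * d <= n:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            if e:
                factors.append((d, e))
            d += 1
        if n > 1:
            factors.append((n, 1))
        return factors

    def main():
        server = ServerProxy(COORDINATOR_URL, allow_none=True)
        while True:
            work = server.get_work(WORKER_ID)
            if not work:                  # coordinator reports no work left
                break
            start, end = work
            # XML-RPC structs require string keys, hence str(n)
            batch = {str(n): prime_factorization(n) for n in range(start, end + 1)}
            server.submit_result(WORKER_ID, batch)
            time.sleep(0.1)               # crude politeness pause between requests

    if __name__ == "__main__":
        main()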
I would love to hear everyone's thoughts and possible solutions, as this would be a wonderful project to start.