views: 331
answers: 4

I've participated in a few online programming contests and found the online judges they used quite remarkable in functionality.

Coming to the point: my college is also about to hold an online programming contest and I'm in charge of the event. I've been evaluating my options for implementing an online judge. Sure, I could make use of an already available judge like the one at SPOJ, but it struck me that a few of my friends and I could just as well try building one ourselves. If we fail, we can always fall back on the existing ones.

So can anyone please give me an outline or resources on how to go about it? It'd also be helpful to get some idea of how these judges achieve 'sandboxing'. We have about a couple of months in hand.

UPDATE: This is the outcome of my effort in the two weeks since, after asking a couple more questions on SO itself: http://github.com/anomit/loki

A: 

One search term to try is "autograder".

For example:

http://www.users.muohio.edu/helmicmt/autograder/index.php
http://prisms.cs.umass.edu/mcorner/autograder

http://74.125.95.132/search?q=cache:VSuCE566d1oJ:www.cs.odu.edu/~gpd/msprojects/cpasupul.0/AutoGrader.ppt+autograder&cd=5&hl=en&ct=clnk&gl=us&client=firefox-a

Tim
Thanks for the quick reply. But I see two problems there: 1. The SVN for the source code is down. 2. It doesn't seem to be under active development anyway.
Just for the record, I didn't downvote it. Considering my current knowhow, any suggestion is a good suggestion :)
Why the downvotes on this one?
chakrit
yeah, I'm not sure about that either... (re the downvote)
Tim
+2  A: 

Not sure what an online judge is, but I assume it's a piece of software that evaluates programs for correctness.

I would use some build, test, and analysis libraries for this. Examples would be Ant, JUnit, and Checkstyle.

You would take the code provided by the participant and drop it into a file. Use the build tool to compile it.

  • Build fails: 0 points

  • Build succeeds with warnings: 1 point

  • Build succeeds without warnings: 2 points

Then run some tests that verify the correctness of the solution.

  • For each test passed: 1 point

Finally, run some code analysis utility to judge the quality of the code.

  • Minus 1 point for each complaint from the utility

Of course, you might want to adjust the point values to your needs.
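To make this concrete, here is a rough sketch of that compile-and-score loop in C rather than the Ant/JUnit stack mentioned above; gcc stands in as the compiler, and the file names, test names, and point values are all illustrative:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int points = 0;

        /* Pass 1: does the submission build at all? */
        if (system("gcc -o submission submission.c 2> build.log") != 0) {
            printf("build fails: 0 points\n");
            return 0;
        }

        /* Pass 2: -Werror promotes warnings to errors, separating
         * a clean build (2 points) from a build with warnings (1 point). */
        points = (system("gcc -Werror -o submission submission.c 2>> build.log") == 0) ? 2 : 1;

        /* One correctness test: feed an input file, diff against expected output. */
        if (system("./submission < test1.in > out.txt "
                   "&& diff -q out.txt test1.out > /dev/null") == 0)
            points += 1;

        printf("total: %d points\n", points);
        return 0;
    }

In practice you'd loop over a whole directory of test cases and run each one under a timeout, but the scoring skeleton stays the same.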

Jens Schauder
+1  A: 

I'm really not sure what your question is about. It's not hard to write a design spec for a judge from scratch.

You run the submission with the given input data and feed its output to a test program written by the question's author (because there isn't always a unique answer). People achieve sandboxing by running it remotely on a clean machine.
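For problems without a unique answer, that author-written test program (a "checker") validates the contestant's output against the input rather than comparing it to a fixed answer. A sketch in C, for a made-up problem ("print any two integers that sum to n"):

    /* Sketch of an author-written checker for a made-up problem
     * ("print any two integers that sum to n"); needed because more
     * than one output can be correct.
     * Usage: ./checker input.txt contestant_output.txt */
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc != 3) return 2;                 /* judge-side error */
        FILE *in  = fopen(argv[1], "r");
        FILE *out = fopen(argv[2], "r");
        if (!in || !out) return 2;

        long n, a, b;
        if (fscanf(in, "%ld", &n) != 1) return 2;
        if (fscanf(out, "%ld %ld", &a, &b) != 2) {
            puts("WRONG: malformed output");
            return 1;
        }
        if (a + b == n) {
            puts("OK");
            return 0;
        }
        puts("WRONG: pair does not sum to n");
        return 1;
    }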

Addendum: and please, no code analysis. You have two choices: either you make the code analysis available to contestants during the contest, or you don't.

  • If you do: they spend the last 5 minutes of their time making sure they lose no points to it. The code grows much worse in the process.

  • If you don't: you break the "rule of law": contestants should know the mechanism by which points are awarded (that's also why you always give them the first test in the text of the problem).

Update: Sorry, I didn't notice at first that you asked some specific questions. Sandboxing may be less important than you think: in a good competition the code becomes publicly available, so any "hackers" would be really embarrassed. However, I think I've seen a practice where you can't do I/O, filesystem access, or any other interaction with the system directly (they write main() for you and it's always the same; you only write the algorithm part with the given input/output streams). Your judge should run only what it has itself compiled from the source.
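That "they write main() for you" setup can be as simple as a fixed harness the judge compiles together with the contestant's file, so all I/O stays in the judge's hands. A sketch, where the solve() name and signature are hypothetical:

    /* harness.c -- sketch of a judge-owned main(), compiled together
     * with the contestant's file. solve() is a made-up signature. */
    #include <stdio.h>

    /* The contestant implements only this, in their submitted file. */
    extern long solve(long n, const long *data);

    int main(void) {
        static long data[100000];
        long n;
        if (scanf("%ld", &n) != 1 || n < 0 || n > 100000)
            return 1;
        for (long i = 0; i < n; i++)
            scanf("%ld", &data[i]);

        /* All I/O happens here; the contestant never touches the system. */
        printf("%ld\n", solve(n, data));
        return 0;
    }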

ilya n.
A: 

Sorry for the shameless plug, but please check the project link in the updated question section. It'd be really helpful if someone could help me out with using ptrace() effectively.

There are more details on the GitHub page if someone really wants to help me out by testing it.
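For anyone who wants to dig in: the core of a ptrace()-based supervisor is a loop that stops the child at every syscall and inspects the syscall number. A minimal x86-64 Linux sketch (not code from loki; the "forbidden" list here is purely illustrative):

    /* Minimal ptrace() supervisor sketch for x86-64 Linux. The child
     * asks to be traced and execs the submission; the parent stops it
     * at every syscall and inspects the syscall number. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <signal.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <sys/syscall.h>

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s ./submission\n", argv[0]);
            return 2;
        }
        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);  /* let the parent trace us */
            execv(argv[1], &argv[1]);               /* child stops at exec */
            _exit(1);
        }

        int status;
        waitpid(child, &status, 0);                 /* wait for the exec stop */

        while (1) {
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to next syscall stop */
            waitpid(child, &status, 0);
            if (WIFEXITED(status))
                break;

            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);

            /* On x86-64 the syscall number lives in orig_rax; this fires
             * on both syscall entry and exit, harmless for a blacklist. */
            if (regs.orig_rax == SYS_open || regs.orig_rax == SYS_fork) {
                fprintf(stderr, "forbidden syscall %lld, killing child\n",
                        (long long)regs.orig_rax);
                kill(child, SIGKILL);
                break;
            }
        }
        return 0;
    }

A real judge would layer setrlimit() CPU/memory caps on top and track syscall entry vs. exit stops, but this loop is the skeleton.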