How to fix the scientific evaluation process at conferences

Nowadays, conferences are run using a century-old approach that puts
a great deal of trust in the reviewers and the program committee.

However, successful social news sites such as Slashdot, Digg and
Reddit show that it is possible to build a system that trusts no one
in particular yet works well enough to surface the most insightful
texts.

The evaluation process should evolve in at least three stages.

First, an existing system should introduce the ability to review the
reviewers, that is, to vote on the quality of individual reviews and
thus on the quality of the work done by each reviewer. This would
make it possible to rebuild trust in the reviewers in a transparent
way.
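To make this concrete, here is a minimal sketch of how reviewer
reputation might be aggregated from votes on reviews. The data model
(per-review up and down votes, a plain average per reviewer) is an
assumption made for illustration, not a description of any existing
conference system.

    from collections import defaultdict

    def reviewer_scores(review_votes):
        # review_votes: list of (reviewer_id, upvotes, downvotes), one
        # entry per review. Returns the average net vote per reviewer.
        # Hypothetical aggregation scheme, purely for illustration.
        totals = defaultdict(lambda: [0, 0])  # reviewer -> [net votes, reviews]
        for reviewer, up, down in review_votes:
            totals[reviewer][0] += up - down
            totals[reviewer][1] += 1
        return {r: net / count for r, (net, count) in totals.items()}

    # "A" wrote two well-received reviews, "B" one poorly received review.
    print(reviewer_scores([("A", 10, 1), ("A", 4, 0), ("B", 0, 5)]))
    # {'A': 6.5, 'B': -5.0}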

Once the reviewers are less of a problem, the next target should be
the selection itself. The only reason that a mere handful of
submissions is accepted while the rest are rejected is the limited
space in the printed proceedings. It is time to let everyone publish
their papers on the conference website, ranked by the reviews, so that
the selection becomes gradual instead of binary. Until printed
publications naturally die out, a cutoff ratio or count can be kept
for them.
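As a hypothetical illustration of gradual selection, the sketch below
ranks every submission by its mean review score, keeps all of them in
the online ranking, and uses the cutoff only to mark which ones also
go to print; the scoring scheme and the print quota are assumptions,
not a proposal for specific values.

    def rank_submissions(papers, print_quota=None):
        # papers: dict mapping paper_id -> list of numeric review scores.
        # Every paper stays published online; print_quota only marks the
        # subset that would also appear in the printed proceedings.
        ranked = sorted(papers.items(),
                        key=lambda item: sum(item[1]) / len(item[1]),
                        reverse=True)
        return [(pid, scores, print_quota is None or i < print_quota)
                for i, (pid, scores) in enumerate(ranked)]

    # All three papers appear in the ranking; only the first two are printed.
    for pid, scores, printed in rank_submissions(
            {"p1": [4, 5], "p2": [2, 3], "p3": [5, 5]}, print_quota=2):
        print(pid, "printed" if printed else "online only")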

The last and most difficult stage is to abandon reviews by dedicated
reviewers altogether and move to review by peers, where everyone who
publishes at a conference can openly review fellow submissions. A
fair amount of meta-reviewing will be needed to keep such a system
resistant to abuse.
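One hedged sketch of how meta-reviewing could blunt abuse: weight
each peer review by the reputation its author has earned through
meta-reviews of their own reviews, so that a burst of hostile or
careless reviews carries little weight. The weighting scheme below is
an assumption used only to illustrate the idea.

    def weighted_paper_score(peer_reviews, reputation, default_rep=1.0):
        # peer_reviews: list of (reviewer_id, score) for one paper.
        # reputation: reviewer_id -> weight earned through meta-reviews.
        # A reviewer whose own reviews were rated poorly has less influence.
        weighted, total = 0.0, 0.0
        for reviewer, score in peer_reviews:
            w = max(reputation.get(reviewer, default_rep), 0.0)
            weighted += w * score
            total += w
        return weighted / total if total else None

    # A low-reputation reviewer barely moves the paper's score.
    print(weighted_paper_score([("A", 5), ("B", 4), ("troll", 1)],
                               {"A": 2.0, "B": 1.5, "troll": 0.1}))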

We will thus be able to arrive at a system that combines the
selectivity of traditional conferences with the openness of arXiv.