Good day to all who read!
The Christmas card contest is long over, but the controversy continues. Many are waiting for the investigation to proceed; many no longer trust the jury's work. I propose we take part in organizing the jury's work so that it is actually effective.
There are many questions in the comments, such as: "Why such strange ratings?", "Why a 1 where other jury members gave a 10?", "Why are the comments so strange?", etc. I also took part in this contest and asked the same questions. I put them to the people involved and found out the following:
There are many problems with the organization of the voting process itself. Simply put, very little was organized for the convenience of either the contestants or the jury.
I would like to note that @mzonder did a lot of work to make the process at least somewhat comfortable, even though he was not a member of the jury.
Prior to the postcard contest, there was a contest in the analytics section aimed at improving the effectiveness of the jury's work. There were good suggestions there, but it takes time for all of this to start working. As a result, none of those metrics were used in the postcard contest.
So how were the works actually judged? Just like/dislike? For some, perhaps so. Separately, I would like to highlight one jury member and his table:
Quality / Creativity / Idea: 0 - very bad; 1 - bad; 2 - normal; 3 - good.
Overall impression: 0 - not impressed; 1 - impressed.
That's better than nothing. It's simple: even if a jury member does not like a work, a well-executed entry will not end up with just 1 point. It is a pity that not every jury member had such a table.
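To make the rubric concrete, here is a minimal sketch of how such a table adds up (the function name and signature are hypothetical, not from the contest). Note that three 0-3 criteria plus the 0/1 impression bonus give a maximum of exactly 10 points:

```python
def rubric_score(quality: int, creativity: int, idea: int, impressed: bool) -> int:
    """Sum three criteria scored 0-3 (0 = very bad, 1 = bad, 2 = normal,
    3 = good) and add 1 if the juror was impressed. Maximum: 10."""
    for value in (quality, creativity, idea):
        if not 0 <= value <= 3:
            raise ValueError("each criterion must be in the range 0-3")
    return quality + creativity + idea + (1 if impressed else 0)

# A well-executed work the juror personally dislikes still scores above 1:
print(rubric_score(quality=3, creativity=1, idea=1, impressed=False))  # 5
```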
What else could help the jury in the rating system? There could be more criteria than in this table, for example, compliance with the requirements for a contest entry (no link to Twitter, poor quality, the contest theme missing from the work, etc.): everything per the rules - 1, something violated - 0. The criteria themselves could be completely different:
- Relevance of the entry's content to the contest theme
- Expressiveness of the work and a creative approach to the topic
- Technique of execution
- Depth of treatment of the topic, etc.
The main thing is that the criteria are common to all members of the jury. Such a table could be handed out to jury members before the contest begins.
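The idea above, a shared scorecard plus a pass/fail compliance gate, can be sketched roughly like this. The criterion names, the 0-3 range, and the averaging across jurors are my illustrative assumptions, not rules from the contest:

```python
# Hypothetical shared scorecard: every juror rates the same criteria 0-3.
CRITERIA = ("theme", "expressiveness", "technique", "depth")

def entry_score(scores: dict, compliant: bool) -> int:
    """Return 0 for a non-compliant entry (rule violation, e.g. a Twitter
    link or an off-topic work), otherwise the sum over shared criteria."""
    if not compliant:
        return 0
    return sum(scores[name] for name in CRITERIA)

def jury_average(per_juror: list, compliant: bool) -> float:
    """Average the same scorecard across all jury members."""
    totals = [entry_score(scores, compliant) for scores in per_juror]
    return sum(totals) / len(totals)

jurors = [
    {"theme": 3, "expressiveness": 2, "technique": 3, "depth": 2},
    {"theme": 2, "expressiveness": 3, "technique": 2, "depth": 3},
]
print(jury_average(jurors, compliant=True))   # 10.0
print(jury_average(jurors, compliant=False))  # 0.0
```

Because every juror fills in the same criteria, two jurors who disagree in taste still produce comparable totals, which is exactly what the like/dislike approach cannot guarantee.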
If this contest was a "trial run" to identify shortcomings in the system, then let it at least serve a good purpose. No one wants this to happen again.
It would be great if users, as well as former and future contestants, wrote here in the comments what our jury still lacks to make the ratings less subjective.
I send my regards to the jury member who "thought the scale was five-point," given that an article appeared today with a screenshot showing that the maximum is 10 points.
Thank you to those who read this! I wish you justice.