Free TON

Contest proposal: Modeling mainnet development scenarios

Preamble:

This contest is critically important for all the validators!

How will the new 365 validators behave? Are they going to accumulate tokens or dump their rewards on the market? Which behavior model would be the most beneficial? Would there be any other models? Would they be better or worse?

Currently it is unknown which development scenario the main network will follow: it is, in essence, a game-theory problem. However, I suggest taking a sneak peek into the future! :slight_smile:

The purpose of the contest is to create an interactive tool capable of modeling all the potential event scenarios. If the contest is held, we will get a tool that lets us analyze the data and select the most beneficial collective and individual validator behavior models.

Short description:

Creation of an analytical tool able to comprehensively test all the potential event scenarios and derive the optimal validator behavior model for the Free TON blockchain main network.

Type:

Contest

Contest entry period:

October 5, 12:00 UTC - November 1, 24:00 UTC.

Motivation:

To give the Community an instrument for forecasting possible future developments under the various present and near-term event scenarios, and thereby make it possible to develop an optimal long-term strategy for all its members.

General requirements:

  • The submitted work should contain event models based on a minimum of 3 potential scenarios:
  1. Validators accumulate their stakes
  2. Validators actively sell their tokens
  3. Validators get divided into groups of large and small stakeholders
  • The submitted work should answer the following questions:
  1. What is the bid size in the validation rounds, how does it change over time, and what will happen to it under each of the given event scenarios?
  2. At what stake size does participation in the validation cycle become insufficient, and when would that point be reached? (A rough, illustrative simulation sketch follows this list.)
  • The dashboard can be developed on any of the existing platforms: Excel, Power BI, Tableau, Qlik, etc. (even a proprietary platform will do).

  • All submitted works should include a brief presentation of your analytics and a short manual on using the dashboard.
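To illustrate the kind of model the requirements ask for, here is a minimal, hedged sketch in Python. It is not a submission: the validator count, elected-slot count, per-round reward, initial stakes, and sell fractions are all made-up assumptions, and the election is simplified to "the largest stakes win". A real entry would replace these with actual network parameters and a proper elector model.

```python
# Toy scenario model: three validator behavior patterns, tracking the
# minimum stake that still wins a validation slot. All parameters below
# are illustrative assumptions, not Free TON network values.
import random

N_VALIDATORS = 365          # assumed total validator count
ELECTED_SLOTS = 100         # assumed number of slots won per validation round
REWARD_PER_ROUND = 1_700    # assumed tokens paid to each elected validator
ROUNDS = 50                 # simulation horizon

def simulate(sell_fraction_fn, initial_stakes):
    """Run ROUNDS election cycles; return the per-round minimum winning stake."""
    stakes = list(initial_stakes)
    min_winning_stake = []
    for _ in range(ROUNDS):
        # Simplified auction: the ELECTED_SLOTS largest stakes win the round.
        order = sorted(range(N_VALIDATORS), key=lambda i: stakes[i], reverse=True)
        winners = order[:ELECTED_SLOTS]
        min_winning_stake.append(stakes[winners[-1]])
        for i in winners:
            sold = REWARD_PER_ROUND * sell_fraction_fn(i, stakes[i])
            stakes[i] += REWARD_PER_ROUND - sold   # unsold rewards are restaked
    return min_winning_stake

random.seed(42)
initial = [random.uniform(10_000, 60_000) for _ in range(N_VALIDATORS)]

scenarios = {
    # 1. Validators accumulate: nothing is sold, everything is restaked.
    "accumulate": lambda i, s: 0.0,
    # 2. Validators actively sell: most of each reward leaves the stake.
    "sell": lambda i, s: 0.9,
    # 3. Big vs. small holders: large stakes restake, small ones mostly sell.
    "big_vs_small": lambda i, s: 0.1 if s > 40_000 else 0.8,
}

for name, sell_fn in scenarios.items():
    series = simulate(sell_fn, initial)
    print(f"{name:>12}: min winning stake  round 1 = {series[0]:>9.0f}, "
          f"round {ROUNDS} = {series[-1]:>9.0f}")
```

Even this toy version surfaces the quantity that question 2 asks about: the smallest stake that still wins a slot, tracked round by round under each scenario.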

Contest specification

Evaluation criteria and winning conditions:

  • High-quality work.
  • The tool meets all the points listed in general requirements.
  • User-friendly design.
  • Clear visualizations and possibility of scenario comparison.
  • Compliance with the current color design.
  • Proposals will be judged strictly on the merit of their accuracy in addressing all requirements.

Voting

  • Each juror will vote by rating each submission on a scale of 0 to 10, may reject it if it does not meet the requirements, or may abstain from voting if they feel unqualified to judge.
  • Jurors will provide feedback on your submissions.
  • Submissions that accumulate fewer than 3 points will not be rewarded, even if they are included in the reward list.

Reward:

1st prize…………………………50,000 Tons

2nd prize……………………….30,000 Tons

3rd prize………………………. 20,000 Tons

Next 10 runners up…………. 5,000 Tons each

Jury rewards:

An amount equal to 5% of the total tokens actually awarded will be distributed equally among all jurors who vote and provide feedback. Both voting and feedback are mandatory in order to collect the reward.

Procedural remarks:

  • All contest participants must ensure that the developed dashboard tool remains operational and is kept updated for a minimum of 12 months.

Disclaimer:

Anyone can participate, but Free TON cannot distribute Tons to US citizens or US entities.

6 Likes

Good. But I also like the idea of a modeling contest in devnet.

Looks like a one-sided game, with an overrated prize.
50k is OK,
but 365k? Are you joking?

2 Likes

This task is highly complex: to execute it, we would have to involve a massive resource base and a team of world-class specialists. Preliminary calculations put the approximate cost at about $100,000. Despite ongoing discussions with Yandex, I am not sure anyone would even consider starting this project. Considering the potential practical use for validators, 1,000 tokens per validator is not that much.

  1. Only initial jurors? Are you sure?

  2. What is the real value of this contest for the community? (More adoption, development experience, or technology improvement?)

1 Like

OMG! It looks like a contest written for oneself, with buzzwords like “machine learning” and “mathematically proven” and a super-prize! :see_no_evil:

The 3 scenarios could be roughly estimated in a spreadsheet in half an hour, and no trendy dashboard would add any accuracy to predicting the future. Not to mention that machine learning or artificial intelligence is irrelevant in this case. Saying we should do something, and generously paying for it, should IMHO be backed by stating the value for the community.
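For what it is worth, the back-of-envelope projection the comment has in mind might look like the few lines of Python below rather than a spreadsheet; the stake, reward, and sell-fraction figures are placeholders, not network data.

```python
# Hedged back-of-envelope estimate: how one stake grows over 50 rounds
# under three sell behaviors. All numbers are illustrative placeholders.
stake = 50_000          # assumed starting stake, tokens
reward = 1_700          # assumed reward per validation round, tokens
for sell_fraction in (0.0, 0.5, 0.9):   # accumulate / mixed / actively selling
    s = stake
    for _ in range(50):                 # 50 validation rounds
        s += reward * (1 - sell_fraction)
    print(f"sell {sell_fraction:.0%}: stake after 50 rounds = {s:,.0f}")
```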

6 Likes
  1. This is a mistake! The “Vote” section was copied from the previous contests. I have fixed that, thanks for the heads up. I can see who is the true juror now :slight_smile:
  2. Regarding the real value: it lies in demonstrating the optimal validator behavior model and in avoiding a situation where the majority of validators would be unable to participate in new validation cycles, which would undermine the entire project’s decentralization.

Regarding the super-prize and the involvement of neural networks: the community’s feedback has been heard, and that entire section was removed.

@AlexNew @szaitseff join the conversation: https://us02web.zoom.us/j/87617622496

2 Likes