Free TON

DePR Stage #2 Problems overview & Conclusions

My vote goes to option #3. There are far too many opportunities for misunderstanding and loopholes in the current model.

Another concern I have is that participants have pushed out a ton of articles, yet none of them were noticed. None were circulated, picked up by other sources, or seen on social media.

What is the point of your efforts if they end up going nowhere? Don’t you want to see your work appreciated and living on, rather than published and forgotten? The current PR contest model should be reworked from the ground up into one where people don’t hide their work, and instead work towards value, appreciation, and above all the benefit of the entire Free TON network.

“If a tree falls in a forest and no one is around to hear it, does it make a sound?”


Good issues you raise here.

We should deal only with sites that provide a report for placements.
Boosted views can be detected technically or by comparison with the site’s overall traffic.

As for regional crypto engagement, I hope such stats can be googled and weighted into submission points.

I agree with those who said that PR can be black-hat too. So the publishing of articles should be discussed by the community, or submitted by participants in line with an overall strategy. This issue is open for discussion.


Suggestions for DePR #3.

This is rough, but very practical.

Partially based on the opinions of other participants.

In order to achieve optimal results and focus on the value of the publications, I suggest using a ‘scoring system’ analogous to a credit rating:

  1. Points are awarded across all the criteria for each publication (article/post).
  2. All points are summed across all of a participant’s publications.


Below are the criteria for scoring, divided into two groups: automated checking and manual checking.

Criteria examples for automated checking (Software/DeBot v2.0):

  1. Number of new wallets created by users referred from the resources hosting the posted articles (TON Surf, Crystal Wallet, etc.). [+wallet_count*10pt]
  2. Visits to resources of the Free TON ecosystem where anonymized user tracking is available. [+click_count
  3. Publications in top media (main sections) by SimilarWeb/Alexa rank (top 5,000?) provide better trust. [scores = scores*1.5]
  4. Number of new articles that appear linking back to the original article (reposts will not be counted). References are provided by participants and checked automatically by the software. [+citation_count*5pt]

Criteria examples for manual checking (Jurors):

  5. If the main focus is on the “business” or “technology” category, a matching publication will score extra points. [scores = scores
  6. Editorial content (sites with an editorial policy). If published on a site with no editorial policy, the value is much lower, but not zero; the score can be reduced. [scores = scores*0.3]
  7. Impressions/views for the publication (if applicable). Figures can be inflated, and actual views are impossible to verify. Not recommended! [+(impression_count/1000)*5pt]
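The criteria above can be sketched as a small scoring function. This is a hypothetical illustration, not a finalized spec: the function and parameter names, the order in which multipliers apply, and the 1.2 category multiplier (which is truncated in the list above) are my assumptions; only the 10pt/5pt weights and the 1.5/0.3 multipliers come from the criteria as written.

```python
# Hypothetical sketch of the scoring rules above.
# Names and the 1.2 category multiplier are assumptions, not the proposal's spec.

def score_publication(
    wallet_count: int,             # criterion 1: new wallets referred from the article
    citation_count: int,           # criterion 4: new articles linking back (reposts excluded)
    impression_count: int = 0,     # criterion 7: flagged as unreliable, "not recommended"
    top_media: bool = False,       # criterion 3: SimilarWeb/Alexa top-media site
    category_match: bool = False,  # criterion 5: "business"/"technology" focus
    editorial: bool = True,        # criterion 6: site has an editorial policy
) -> float:
    # Additive criteria (automated checks)
    score = wallet_count * 10 + citation_count * 5
    score += (impression_count / 1000) * 5

    # Multiplicative criteria
    if top_media:
        score *= 1.5   # criterion 3
    if category_match:
        score *= 1.2   # criterion 5: multiplier truncated in the source, 1.2 assumed
    if not editorial:
        score *= 0.3   # criterion 6
    return score


def total_score(publication_scores: list[float]) -> float:
    # Step 2 of the proposal: sum across all of a participant's publications
    return sum(publication_scores)
```

For example, an article on a top-media site that brings 3 new wallets and 2 backlinks would score (3·10 + 2·5)·1.5 = 60 points under these assumptions.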


Creation of a Content Audit Group (like ECAG) for the purpose of having each article signed off by 2 or more members. This is done to ensure that publications are top quality and approved by experienced members.

Content verification methods aimed at making sure value is achieved.

  1. Verification has to be predominantly software-based and driven by a scoring algorithm via (e.g.) a DeBot. A separate contest can be launched for developing such a DeBot or upgrading an existing one.
  2. Certain aspects that are not possible to evaluate via a DeBot will be handled manually by a team of judges. It should be prohibited to influence, lead, or coordinate judges in any way after publications are submitted.
  3. No post-publication reporting is necessary, especially reports that are difficult to verify.
  4. It is prohibited to require participants to reveal any information that is: a) personal data/communications; b) information whose unauthorized distribution is prohibited by the laws of the countries where participants reside; c) related to commercial and business intelligence.
  5. Optionally, rules governing publication uniqueness and exclusivity to a particular resource; reposts would also be counted (however, a source for the publication must be present).
  6. Judges will be liable for setting scores that contradict the objective information presented in the DeBot. A system of fines should be developed to discourage blatant misjudgment.
  7. Usage of mass PR services must be clarified. Indicators: a) PR labels at the top/bottom of the article (except ones that are obviously not related to PR services); b) a paragraph that clearly indicates the absence of an editorial policy; c) a URL/section location that is indicative of PR.
     The presence of at least one of these criteria is sufficient, without checking the rest.

The Hackernoon contest, with an x-times-smaller reward, was much more efficient. DePR brought very little value for a significant amount of tokens, but fortunately brought some new lessons to the community.



A scoring system is good practice for contests. I’m sure the jury should vote according to criteria weighted by the contest’s aim.

But we should understand that PR advertising is not as measurable as banner or contextual ads!
So something like wallet installs is not a task for a PR campaign. Consider the conversion rate from article views: it will be a small fraction of a percent (0.01%-0.1%). So 1 wallet from an article with 1,000 views is a good result.
Alexa and SimilarWeb ratings are far from real site traffic; they are broad strokes for understanding the whole picture. Just look at the correlation between article views and Alexa rank that I published in this thread.
The other criteria you described may be useful.
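The conversion estimate above can be sanity-checked with one line of arithmetic; the 0.1% rate is an assumed upper-range figure in the spirit of the post, not a measured value:

```python
# Rough view-to-install conversion check: at ~0.1% conversion,
# an article needs about 1,000 views to produce a single new wallet.
views = 1000
conversion_rate = 0.001  # 0.1%, assumed upper end of the quoted range
expected_wallets = views * conversion_rate
print(expected_wallets)  # 1.0
```

At the lower end of the range (0.01%), the same article would be expected to yield only 0.1 wallets, which is why wallet installs are a poor success metric for PR placements.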

My 2 cents for DePR #3…
I see it as a 2-stage contest:
Stage 1: Media plan + article review by the community.
Contestants submit a media plan with projected article views (reading time, unique users, etc.) and the articles themselves, available via the dePress DeBot.
Stage 2: Placement of PR articles + post-campaign reports.
Contestants make site placements according to the media plan and submit reports 2 weeks after placement.
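A stage 1 media-plan submission could be represented as a simple record; this is only a sketch, and every field name here is a hypothetical assumption based on the metrics the post mentions (views, reading time, unique users):

```python
from dataclasses import dataclass


@dataclass
class MediaPlanEntry:
    """One planned placement in a contestant's media plan (stage 1).
    All field names are assumed, not part of the proposal."""
    site: str                       # target site for the placement
    expected_views: int             # projected article views
    expected_unique_users: int      # projected unique readers
    expected_reading_time_sec: int  # projected average reading time
    placement_url: str = ""         # filled in at stage 2, once published
    report_submitted: bool = False  # post-campaign report, due 2 weeks after placement


# Example stage 1 entry; the site name is invented for illustration
entry = MediaPlanEntry(
    site="example-crypto-news.com",
    expected_views=5000,
    expected_unique_users=4000,
    expected_reading_time_sec=120,
)
```

At stage 2 the contestant would fill in `placement_url` and flip `report_submitted` once the post-campaign report is delivered.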

Final Report

A link to the full report is in the PDF file and here.


This document is not just one opinion; it is collective in nature, drawing on the judges’ comments, review of the works, and their assessment.

A controversial article does not mean that it was not accepted for consideration. Comments were collected from the judges and the participants of the contest.

The document is not of an executive nature; it only sums up the results.

The document does not serve as a basis for disputes or discussions; it is only food for thought, which can be used when studying future similar competitions.

The competition and voting are over; it is better to channel all ideas, discussions, and efforts into the next similar competition.

This document is for informational purposes only.

Thank you all for your participation, and once again a reminder: the past is the past; what matters is what will be done next. I see that many have different ideas for assessment, etc.; all of them will be discussed in the future contest, and opinions will be taken into account.
