Designing incentives for Sybil detection

Following the discussion in another thread:


One example which does not involve money staking:

  • Each user has “reputation” expressed as a number, starting with 0.
  • When you connect with someone, you put your reputation at stake and the other side can steal it.
  • We design a game similar to the prisoner’s dilemma. If both sides cooperate, both reputations increase. Otherwise one side gains and the other loses reputation, or both lose.
  • The effect is delayed: reputation increases or decreases gradually over some period. So you can’t immediately tell what the other side did, and if you made several connections it’s hard to know who exactly stole your stake.

This should incentivize you to connect only with people you know well.
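As a minimal sketch of the game described above, here is one possible payoff matrix and a delayed settlement schedule. All names and numbers (the payoff values, the number of vesting epochs) are illustrative assumptions, not a concrete BrightID design:

```python
COOPERATE, DEFECT = "cooperate", "defect"

# Assumed prisoner's-dilemma-style payoffs per connection round:
# mutual cooperation raises both reputations, unilateral defection
# lets the defector "steal" stake, mutual defection hurts both.
PAYOFF = {
    (COOPERATE, COOPERATE): (+1, +1),
    (COOPERATE, DEFECT):    (-2, +2),
    (DEFECT,    COOPERATE): (+2, -2),
    (DEFECT,    DEFECT):    (-1, -1),
}

def settle(rep_a, rep_b, action_a, action_b):
    """Return both reputations after one connection round."""
    delta_a, delta_b = PAYOFF[(action_a, action_b)]
    return rep_a + delta_a, rep_b + delta_b

def schedule(delta, epochs=5):
    """Spread a reputation delta evenly over several epochs, so a
    counterparty's defection only becomes visible gradually and is
    hard to attribute when you have several open connections."""
    return [delta / epochs] * epochs
```

For example, `settle(5, 5, COOPERATE, DEFECT)` would leave the cooperator at 3 and the defector at 7, but with `schedule` each of those changes lands in small increments over five epochs.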


To keep the core of BrightID simple and to benefit from BrightID’s important feature of letting different verifications be defined, we can implement such a feature as an application alongside the other apps that use BrightID.
Suppose we have a staking app in the IDChain context that BrightID users can use to deposit some IDChain Dai, select some of their friends, and specify how much should be staked on each person.
Such a staking app can be implemented as a web3.js client that works with the staking smart contract and loads the required connection data the same way the explorer does.
If we had such a staking app, different verification algorithms could take the staking smart contract’s data as an input.
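To illustrate the last point, here is a hedged sketch of a verification algorithm that consumes staking data as an input. The record shape (staker, stakee, amount) and the threshold are assumptions for illustration; a real algorithm would read these records from the staking smart contract:

```python
def staked_verification(stakes, min_total_stake=100):
    """Toy verification: a user passes if the total amount friends
    have staked on them meets an assumed threshold.

    stakes: iterable of (staker, stakee, amount) records, imagined as
    read from the staking smart contract's events or storage.
    """
    totals = {}
    for staker, stakee, amount in stakes:
        totals[stakee] = totals.get(stakee, 0) + amount
    return {user for user, total in totals.items() if total >= min_total_stake}

# Usage: with 60 + 50 staked on "bob" and only 30 on "dave",
# only "bob" clears the assumed threshold of 100.
stakes = [("alice", "bob", 60), ("carol", "bob", 50), ("alice", "dave", 30)]
verified = staked_verification(stakes)
```

The point is only that the contract’s stake graph becomes one more signal a verification algorithm can weigh, alongside whatever else it already uses.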


I second this sentiment. Sometimes the simplest solution is the most robust and powerful. People often try to attack the problem by isolating someone’s jurisdiction, but as long as their contributions are of high quality, I think keeping the distribution of awards equitable and judging someone on the quality of their content is a good egalitarian way to go.
