If we want to allow connections to have a confidence level, setting the confidence should be a separate operation from making the connection itself. The reason is: if I’m a spammer or scammer and someone tries to get me to co-sign a connection operation that says I’m a spammer, I wouldn’t sign it, and I would keep phishing until I found people who gave me a good confidence level.
Being able to set the confidence all the way to zero probably means we don’t need the concept of “removing” a connection. Setting certain flags can set the confidence to zero; other flags could set it to some other low or medium level (such as the “I forgot who this person is” flag Luke suggested). This is simpler than supporting a remove operation. A low-confidence connection could be hidden in the UI and not returned by the API, so it appears as if it has been removed. If you want to raise the confidence of a connection, connect to the person again. Having flags change the confidence level is more fine-grained than having them remove people from groups; we can reserve removing people from groups for group admins.
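To make this concrete, here is a minimal sketch of the flag-to-confidence idea. All flag names, levels, and the hide threshold are illustrative assumptions, not part of any existing spec:

```python
# Hypothetical mapping from flags to confidence levels (0.0 .. 1.0).
# Setting a flag is unilateral: it only changes the flagger's side.
FLAG_CONFIDENCE = {
    "reported_as_spammer": 0.0,   # behaves like removal
    "forgot_who_this_is": 0.3,    # the low/medium level Luke suggested
}

# Below this, the connection is hidden in the UI and omitted from API
# responses, so it appears as if it has been removed.
HIDE_THRESHOLD = 0.1

def apply_flag(connection: dict, flag: str) -> dict:
    """Set the connection's confidence according to the flag."""
    connection["confidence"] = FLAG_CONFIDENCE[flag]
    return connection

def is_visible(connection: dict) -> bool:
    """Low-confidence connections are treated as removed."""
    return connection["confidence"] >= HIDE_THRESHOLD

conn = {"peer": "alice", "confidence": 1.0}
apply_flag(conn, "reported_as_spammer")
print(is_visible(conn))  # False: hidden, as if removed
```

Raising the confidence again would just be a fresh connection to the same person, overwriting the stored level.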
I have some thoughts on how confidence data could be used.
The first use is related to spammers used as part of a larger sybil attack. If a user has more low-confidence than high-confidence connections to verified users, the anti-sybil algorithm doesn’t consider them: it’s as if they don’t exist on the graph unless their situation improves (which is likely for real people and unlikely for sybils). They can’t be verified, and no trust flows through them. This stops accounts used in partially-successful spam attacks from boosting the scores of collaborating sybil accounts, which should prevent wide-net spam attacks from being useful.
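The exclusion rule could look something like the sketch below. The thresholds for what counts as “low” and “high” confidence are assumptions I’m making for illustration:

```python
# Hypothetical thresholds; the real cutoffs would need tuning.
LOW = 0.2   # confidence at or below this counts as low
HIGH = 0.8  # confidence at or above this counts as high

def is_considered(node: str, edges: dict, verified: set) -> bool:
    """Return False if the anti-sybil algorithm should treat the node
    as absent from the graph (no verification, no trust flow)."""
    low = high = 0
    # edges maps node -> {peer: confidence that peer assigned to node}
    for peer, confidence in edges.get(node, {}).items():
        if peer not in verified:
            continue  # only connections to verified users count
        if confidence <= LOW:
            low += 1
        elif confidence >= HIGH:
            high += 1
    # More low- than high-confidence connections to verified users:
    # exclude the node until its situation improves.
    return low <= high

edges = {"mallory": {"v1": 0.1, "v2": 0.1, "v3": 0.9}}
print(is_considered("mallory", edges, {"v1", "v2", "v3"}))  # False
```

A real person who picked up a couple of low-confidence flags would recover by making new high-confidence connections, flipping the balance back.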
The second use is probably obvious. The confidence could be used to weight the edge between two connections; since setting confidence is unilateral, the lower of the two confidence levels could be used as the edge weight.
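As a sketch, the edge-weight rule is just a `min` over the two unilateral values:

```python
def edge_weight(conf_a_to_b: float, conf_b_to_a: float) -> float:
    """Weight of the edge between A and B in the trust graph.
    Each side sets its confidence unilaterally, so the more
    skeptical side caps how much trust flows over the edge."""
    return min(conf_a_to_b, conf_b_to_a)

print(edge_weight(0.9, 0.4))  # 0.4: B's lower confidence caps the edge
```

Taking the minimum means a spammer can’t inflate an edge by assigning high confidence to their victims; the victim’s own rating wins.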
Neither of these addresses the issue of targeted sybil attacks, where the attacker abuses existing trust or attempts to deceive a small number of people. I’m still of the opinion that what’s described in the white paper about “health” (staking something that external apps check before granting or denying extra rewards) is the best approach, but I’m open to changes or alternatives.