Importing trust


Anti-sybil analysis based on dissemination of trust through a social graph requires pre-trusted seeds at strategic locations. Being a seed doesn’t assure that a node in the graph will be labeled “honest,” but the distribution of seeds allows the analysis to decide whether a region of the graph is honest. Some of our previous thoughts on seed selection are in the BrightID whitepaper and this document.

There are external facts seen by society as contributing to someone’s unique identity. While BrightID’s method of verifying uniqueness depends on a graph of interpersonal relations, these trusted facts may be seeded into the graph in a privacy-preserving way to strengthen analysis methods.


Consider a trusted registry R of facts referring to unique individuals for an external purpose (for example, a birth registry). The trust from R can be imported into BrightID by individuals proving their inclusion in R through zero-knowledge without revealing any facts in R to BrightID nodes.

A member m_i of R must prove to BrightID servers (ideally several nodes) possession of an individual secret s_i that m_i uses to control the benefits they receive from being in R. It’s important that s_i controls these benefits so that m_i risks losing them if they sell s_i to an attacker.

The proofs must be such that a verifier (i.e. a BrightID node) can tell if they’ve seen a proof generated by the same member before. When BrightID nodes receive a proof they haven’t already seen, they label the member’s vertex (authorized by a signature from their BrightID signing key) in the graph as having received trust from R. Analysis methods are free to seed labeled vertices with additional trust.
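The node-side bookkeeping described above can be sketched as follows. This is a hypothetical illustration: the function name is ours, and hashing the member secret directly stands in for the real mechanism — in the actual scheme the duplicate-detection value would be produced inside a zero-knowledge proof, so the secret itself never reaches the node.

```python
import hashlib

seen = set()    # proof identifiers this node has already accepted
labels = {}     # vertex id -> set of registries that seeded trust into it

def import_trust(vertex_id: str, member_secret: str, registry_id: str) -> bool:
    """Label vertex_id as trusted by registry_id, unless a proof from the
    same member of this registry has been seen before."""
    identifier = hashlib.sha256(f"{registry_id}:{member_secret}".encode()).hexdigest()
    if identifier in seen:
        return False                        # same member, second attempt
    seen.add(identifier)
    labels.setdefault(vertex_id, set()).add(registry_id)
    return True
```

A second import attempt backed by the same secret is rejected, while a different member of the same registry succeeds.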

Details on how this proof scheme is achieved are the subject of a future post.

R should have versions that expire and can be replaced. R_2 can be imported (but not used yet) prior to the expiration of R_1 to allow a gapless update to seeded trust.
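The gapless handover between registry versions can be sketched as adjacent validity windows (the type and field names below are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class RegistryVersion:
    version: int
    activates_at: int   # unix timestamp; R_2 may be imported before this
    expires_at: int

def active_version(versions, now):
    """Return the registry version whose validity window contains `now`."""
    for v in versions:
        if v.activates_at <= now < v.expires_at:
            return v
    return None
```

Because R_2’s window starts exactly when R_1’s ends, there is no moment at which no version is active.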

Qualities of a beneficial registry

Mathematical considerations

Consider the BrightID graph containing vertices (V), a subset of which has been previously labeled as verified (L). Within this, some are honest (correctly labeled) (H) and some are actually sybils (incorrectly labeled) (S).

A trusted registry is imported as labels creating the subset R. L (and therefore H and S) were previously determined without considering R.

A registry is beneficial to BrightID if R is proportionally greater in H than in S. I.e.

\frac{|H \cap R|}{|H|} > \frac{|S \cap R|}{|S|}

To give an example with numbers: suppose L has 100 vertices, 90 honest and 10 sybil, and importing R adds 9 labels to the (correctly-labeled) honest region (H) and 2 labels to the (incorrectly-labeled) sybil region (S). This registry would not be beneficial, because 20% of the sybil region but only 10% of the honest region would carry the R label.

In this example, we would expect importing R to cause the incorrectly-labeled sybil region to grow, or at best remain constant, in proportion to the correctly-verified honest region.
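The condition can be checked directly; the sketch below plugs in the example’s counts (the function name is ours):

```python
def is_beneficial(honest_total, sybil_total, honest_in_r, sybil_in_r):
    """|H ∩ R| / |H| > |S ∩ R| / |S|: the registry seeds the honest
    region proportionally more than the sybil region."""
    return honest_in_r / honest_total > sybil_in_r / sybil_total

# The example above: 90 honest, 10 sybil; R labels 9 honest and 2 sybils.
assert not is_beneficial(90, 10, 9, 2)   # 0.10 > 0.20 fails: not beneficial
# Had R labeled 27 honest vertices instead, it would be beneficial.
assert is_beneficial(90, 10, 27, 2)      # 0.30 > 0.20 holds
```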

This demonstrates that if we think that the (incorrectly-labeled) sybil region is much smaller than the correctly-labeled honest region, we should only import trust from very accurate registries; whereas if we think an algorithm is admitting large numbers of sybils proportional to honest users, we can afford to use a less accurate registry to seed trust and improve results.

Size of the registry / combining registries

Seeded trust can be aggregated from multiple sources, even small ones.

Giving users multiple options to import trust doesn’t by itself create new attack vectors (other than those particular to individual registries, as discussed above and below). There is no trust threshold above which a user is able to split themselves into multiple accounts, because adding trust to a vertex doesn’t automatically result in its verification; instead, it increases the overall trust in that area of the graph.
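One way to read this is that imported labels contribute weight rather than verification. A minimal sketch, with the registry names and weights assumed purely for illustration:

```python
# Hypothetical per-registry weights; real values would be set by the
# graph-analysis policy (see "Mathematical considerations").
REGISTRY_WEIGHTS = {"birth_registry": 1.0, "poap_secure_event": 0.3}

def seed_weight(vertex_labels):
    """Aggregate seed trust from all registries that labeled a vertex.
    No label (or sum of labels) verifies the vertex by itself; the total
    only raises the trust seeded into that region of the graph."""
    return sum(REGISTRY_WEIGHTS.get(label, 0.0) for label in vertex_labels)
```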


Privacy

Does the maintainer of the registry have a method for proving membership without exposing unnecessary personal details? For example, can they sign a verifiable credential that asserts membership only?


Security

How secure is the registry against internal and external attacks that might leak a false or unauthorized proof-of-membership? What are the attack vectors? A few considerations are enumerated below.


How motivated is the organization that created the registry (or its operators) to defraud the system if given the chance? How are internal attacks detected and mitigated?

Something at stake / bribery

As mentioned previously, members of a registry should stand to lose something if they sell or rent access to their membership to an attacker.


Forgery

How easily can an attacker create fake memberships?


Duplication

How does the registry handle cases where a person wants to re-register? How prevalent are duplicates? How easily can an attacker create or obtain a duplicate membership?


Theft

Can a membership be stolen? Can it be stolen without the member knowing? Are signing keys and signatures rotated to mitigate attacks?

Recovery / revocation

Can a membership be quickly recovered or invalidated and replaced? (This also helps with bribery attacks.)


Auditing

Can unauthorized usage be detected?

Examples of registries

The following are some examples of registries with preliminary assessments of their suitability for importing trust into the graph.


E-Passports

Passport authorities publish lists of signing certificates. When a passport is issued, an authority signs a block of personal data and encodes it in a chip embedded in the passport.


Privacy

There are techniques to prove knowledge of a digital signature of a message in zero-knowledge. This technique only hides the signature (in order to prevent verifiers from re-using the proof and impersonating the holder); it doesn’t hide the message itself, which contains personal information and must be revealed in plain-text form. Attempts to validate the signature using only the hash of the message and not the original message (as in UBIC’s implementation) allow existential forgeries where attackers can create fake accounts at will. This flaw can be fixed, but only with the cooperation of governments. E-Passports in their current form don’t support anonymous proof-of-uniqueness.
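The existential-forgery problem can be demonstrated with textbook RSA and toy parameters: a verifier that checks a signature against a bare hash value, without ever seeing the message, accepts pairs the attacker can mint at will.

```python
# Toy textbook-RSA parameters (real keys are 2048+ bits).
p, q, e = 61, 53, 17
n = p * q

# The attacker picks an arbitrary "signature" first...
forged_sig = 1234
# ...then derives the matching "hash" by running verification in reverse.
forged_hash = pow(forged_sig, e, n)

# A verifier that only checks sig^e mod n == hash accepts the forgery,
# even though no real message (or passport) was ever signed.
assert pow(forged_sig, e, n) == forged_hash
```

Requiring the full message (and checking its structure) blocks this, which is why the fix needs issuer cooperation.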


Security

It’s unclear what protects the signing keys / certificates, other than the fact that the private keys are destroyed and new ones issued every three months.

Something-at-stake / bribery

It’s unclear what might be at stake if someone sells the signed message retrieved from their E-Passport. An attacker could probably obtain many such signatures through bribery.

Duplication / Revocation

It’s unclear if or how previous signed messages are invalidated if a new E-Passport is issued to a person. This could result in previously-issued passports being used as duplicates.

There are cases where a person can legally hold multiple E-Passports.


Theft

A thief in temporary possession of an E-Passport could copy the signed message and sell it to attackers.

Curated lists of profiles


Privacy

Curated lists of profiles depend on personal information being publicly reviewable, but membership can be imported as trust without revealing any profile information to BrightID nodes.


Security

The process of curating a list of profiles has the potential to be decentralized and transparent.

Forgery / Something-at-stake

It’s hard to imagine that a curated list of profiles would prevent attackers from creating a large number of forgeries (sybils) unless there is something at stake. Requiring a large stake to register a profile could reduce sybils and deter the selling and renting of profiles, but might not stop a well-funded group of colluding attackers.

Opolis (Proof of work eligibility)


Privacy

Opolis is committed to using self-sovereign identity, limiting the data shared to only what’s necessary. This makes a non-transferable anonymous proof-of-membership through ZKPs possible.


Security

Opolis is currently centralized, but committed to becoming more transparent and decentralized.

Something-at-stake / bribery

Maintaining work eligibility is highly valuable.


Forgery

The number of credentials required to prove work eligibility is extensive, making it difficult for a person to forge all of them.


Duplication

Decentralized identity standards (such as DIDs) make it harder for a person to re-register with the same credentials.

Auditing / Recovery / Revocation

Opolis can adapt quickly as a startup and would probably accept our help setting up best practices for account auditing, recovery and revocation.

Proof-of-attendance protocol

With the Proof-of-attendance protocol (POAP), a trusted event organizer assigns a badge to each verified attendee.

Graph analysis methods are free to assign different amounts of trust to badges. A carefully secured event (imagine locking entrance doors after a certain time and distributing badges on exit) that happens less frequently could be assigned higher trust than one that is less secure or happens more often. A less secure event may not be used at all. (See the “Mathematical considerations” section, above.)
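A hypothetical scoring rule along these lines (the attributes and numbers are ours, not part of any POAP specification):

```python
def badge_trust(secured: bool, events_per_year: int) -> float:
    """Weight a badge by event security and rarity: insecure events seed
    no trust, and frequently-recurring events seed proportionally less."""
    if not secured or events_per_year < 1:
        return 0.0
    return min(1.0, 1.0 / events_per_year)
```

Under this rule an annual, well-secured event gets the full weight, a weekly one only 1/52 of it, and an insecure event none at all.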


Privacy

POAP badges can be imported as trust into BrightID. Graph vertices will be labeled with the badge description (e.g. which event was attended) so that graph analysis can assign the proper trust. There is a danger that this will reveal more about a person in the graph than they would like, e.g. dates and locations. If honest users are reluctant to import badges due to privacy concerns, the badges’ efficacy is lowered, because sybil attackers would presumably not share that hesitancy.


Auditing

A secured event with badge distribution suitable for importing trust should be audited by multiple external auditors. The auditing methods that might be employed are beyond the scope of this study.

Something-at-stake / bribery

POAPs as originally envisioned do not attempt to lock any value to the original holder; i.e. they are freely transferable or sellable along with any value they may have. To be usable for importing trust, we would want there to be some locked value (possibly staked by the issuer, the holder, or a combination of both) that can only be unlocked by the original holder and would be destroyed upon transfer. A new kind of POAP with this quality would be needed. We want a POAP that is a true badge, not a transferable token.

Forgery / Duplication

We expect POAP badges to have a low instance of forgery or duplication if organizers are careful and trustworthy.

Theft / Recovery / Revocation

As a token, we expect a low instance of theft, roughly equal to that of a blockchain wallet being stolen. Even if a badge is stolen, the chance of the thief having the motivation and knowledge to use a POAP in a sybil attack against BrightID is negligible; it likely has some other value and is unlikely to be sold to a sybil attacker.

Another benefit of having many anonymous seeds is to prevent targeted attacks.