
Web3 User Authentication and Behavior Moderation

A Decentralized Approach to User Authentication and Behavior Moderation for Twitter
Problem Statement

Twitter has a bot problem. Although bots are estimated to make up only 5% of accounts on the platform, a disproportionate 20-29% of content is published by bots, influencing the platform's users in significant ways.

The root cause of Twitter's bot problem is the lack of a scalable and privacy-preserving way to identify users and moderate their behavior on the platform. Currently, Twitter cannot effectively link accounts to a unique physical human owner, so malicious users can magnify their impact through the use of bots. Moderation on Twitter is also done in a centralized manner, which discourages free speech and user engagement and limits the scalability of solutions. The recently launched "Blue Check" verification process was met with widespread criticism for its lack of meaning.

In this project, we focus on the core question: How do we mitigate the impact of malicious users on Twitter?
Solution Overview

We propose a decentralized two-step solution to this problem.

The first part of the solution focuses on linking Twitter accounts to unique humans. By efficiently linking each unique human to one or more Twitter accounts, Twitter will be able to assess the behavior of all associated accounts as one entity.

This is followed by the second part of the solution -- a novel decentralized system to moderate user behavior persistently on the platform. We propose a reputation system that identifies malicious users through consensus and reduces the impact of malicious behavior on the platform, with supporting options to correct false positives and false negatives made by the user community.

Combined, the two-step solution will be able to identify each physical user's conduct on the platform in a decentralized and privacy-preserving manner, and mitigate malicious behavior across different virtual accounts owned by the same user.
Solution 1/2: Unique Human Identification to Establish Digital-Physical Identity Link

In order to achieve a Sybil-resistant consensus for unique human identification, our system needs to ensure that every identity within the domain is (i) unique, so that no two people share the same identifier, and (ii) singular, so that one person cannot obtain more than one identifier.
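To make these two properties concrete, one can think of the identifier registry as maintaining a one-to-one mapping between people and identifiers. The sketch below is purely illustrative (the class and method names are hypothetical, not part of any existing system) and rejects any registration that would violate either property.

    class IdentityRegistry:
        """Toy registry enforcing the two Sybil-resistance properties:
        (i) unique   - no two people share the same identifier
        (ii) singular - no person holds more than one identifier
        """

        def __init__(self):
            self._id_by_person = {}   # person fingerprint -> identifier
            self._person_by_id = {}   # identifier -> person fingerprint

        def register(self, person_fingerprint: str, identifier: str) -> bool:
            if person_fingerprint in self._id_by_person:
                return False  # would violate singularity: person already holds an identifier
            if identifier in self._person_by_id:
                return False  # would violate uniqueness: identifier already assigned
            self._id_by_person[person_fingerprint] = identifier
            self._person_by_id[identifier] = person_fingerprint
            return True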

Method 1: Virtual Pseudonym Party with Reverse Turing Tests

A virtual pseudonym party requires all users to be present at a virtual location, and grants each user a token representing their unique online identity. As long as a user cannot be present twice at such an event, they cannot receive duplicate identities. Reverse Turing tests are required for token hand-out, making the pseudonym party robust to bots attempting to receive tokens. This process can include liveness verification with cameras and a series of AI-hard problems generated by peers attending the same event.
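
As a rough sketch of the hand-out step, the logic below assumes two callbacks, liveness_check and peer_challenge, standing in for the camera-based liveness test and the peer-generated AI-hard problems; the names, the quorum size, and the overall structure are our own illustration rather than any deployed protocol.

    import secrets

    def issue_pseudonym_tokens(attendees, liveness_check, peer_challenge, quorum=3):
        """Hand out at most one identity token per attendee of a virtual pseudonym party."""
        tokens = {}
        for attendee in attendees:
            if attendee in tokens:
                continue  # a user present "twice" still receives only one token
            if not liveness_check(attendee):
                continue  # camera-based liveness verification failed
            # count peers at the same event whose AI-hard challenge this attendee solved
            vouches = sum(1 for peer in attendees
                          if peer != attendee and peer_challenge(peer, attendee))
            if vouches >= quorum:
                tokens[attendee] = secrets.token_hex(16)  # unique online-identity token
        return tokens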

Pros:
The method provides significant accountability. The Proof of Personhood white paper (Borge et al., 2017) shows that physical pseudonym parties offer Sybil resistance, and several applications, including the Idena network, have succeeded in taking pseudonym parties online. The method is also privacy-preserving: a liveness test with a camera can be performed locally, and no further identifying information is required.

Cons:
Participants must attend live authentication ceremonies held simultaneously for the entire network to receive personal tokens, which can be unrealistic for a user base as large as Twitter's if every user were to go through the process. Twitter could instead apply this process to users who request a verified account but decline to go through the traditional KYC process (offering their government-issued IDs, etc.).

Method 2: Biometrics

This system entails using a physical biometric reading to determine whether 1) the user is human and 2) the user is distinct from other users.

Examples of this methodology include iris scanning, face recognition, finger veins, and hand geometry. Users will need to submit this information in order to be verified. System checks need to be in place to confirm that the submitter is actually human and that their credentials are not already in use by another verified user.
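
The duplicate-credential check might look roughly like the sketch below. It is heavily simplified: real biometric systems match templates with fuzzy comparison and error tolerances rather than exact hashes, and every name here is illustrative.

    import hashlib

    class BiometricVerifier:
        """Toy check that a submitted biometric template is not already claimed
        by another verified user (exact hashing used only for illustration)."""

        def __init__(self):
            self._registered = {}  # template digest -> user id

        def verify(self, user_id: str, template: bytes, is_live_human: bool) -> bool:
            if not is_live_human:
                return False  # submitter failed the humanness check
            digest = hashlib.sha256(template).hexdigest()
            owner = self._registered.get(digest)
            if owner is not None and owner != user_id:
                return False  # credentials already in use by another verified user
            self._registered[digest] = user_id
            return True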

Pros:
Some biometrics, such as iris scanning, are very strong: they can nearly guarantee the ability to distinguish different human beings and are difficult for a bot to fake. This system can also be decentralized if set up properly.

Cons:
This method is often seen as intrusive, especially with the more "personal" measurements; users will want guarantees that their credentials are stored securely. The equipment necessary for these readings is often not available to general consumers. It is also unclear whether these systems can be broken by attackers in the future, and what tolerance for errors such a system should allow.
Solution 2/2: Behavior Moderation to Guarantee Persistent Compliant Behavior

Following verification, it is necessary to ensure that a user is still acting in accordance with platform policies. Such guidelines can include: human, non-automated behavior; unique human activity; and no hate speech or otherwise illegal speech. As with verification, an ideal solution should be decentralized, so that it does not rely on a centralized authority to make decisions, but it should allow fallbacks in case the user community makes mistakes.

Proposed Solution: Reputation System

A decentralized reputation system allows user activity and crowdsourced consensus to verify compliance. It consists of three features (a minimal scoring sketch follows the list):
- Positive reinforcement for good behavior: Users' reputation is boosted when they receive interactions (e.g. likes or retweets) from other high-reputation users. Reputation also increases in proportion to how long the account has been active.
- Negative punishment for non-compliant behavior: Users can report others for negative behavior, based on community guidelines. Negative reports against an account lower its reputation, and a sufficiently low score could lead to de-platforming or de-verification.
- Self-policing/decentralized with fallbacks: The system should be able to ingest public Twitter data for its calculations and, once set up, be self-sustaining. In the event that a user is falsely reported for impersonation, there should be an option for the user to prove their real identity via traditional KYC, if necessary.
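
One possible scoring rule combining the three features is sketched below; the weights, thresholds, and function names are placeholder assumptions, not values taken from any deployed system.

    from dataclasses import dataclass

    @dataclass
    class Reputation:
        score: float = 0.0

    def update_reputation(rep, peer_scores, reports_against, account_age_days,
                          interaction_weight=1.0, report_penalty=5.0, age_weight=0.01,
                          review_threshold=-50.0):
        """Illustrative reputation update.

        peer_scores: reputation of users who liked/retweeted this account
        reports_against: number of community reports filed against this account
        account_age_days: how long the account has been active
        """
        rep.score += interaction_weight * sum(peer_scores)  # boost from high-reputation peers
        rep.score += age_weight * account_age_days           # longevity bonus
        rep.score -= report_penalty * reports_against        # community reports lower reputation
        flagged = rep.score < review_threshold                # candidate for de-platforming / KYC fallback
        return rep.score, flagged

    # Example: an established account with two endorsements and one report
    rep = Reputation()
    score, flagged = update_reputation(rep, peer_scores=[2.0, 3.5],
                                       reports_against=1, account_age_days=400)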

Other Considered Solutions

Automated Bot Detection: This methodology has been thoroughly explored via bot-detection algorithms. Issues include its centralized nature and the ability of adversaries to circumvent it.
Trust, but Verify: This methodology consists of using crypto collateral to guarantee a user's behavior. Issues include lack of precision for compliance, legal liability for Twitter, and wealth barriers for users.
Recurrent Re-Authentication: This can be done with biometrics or reCAPTCHA. Issues include the ability to fake credentials or humanness, depending on the method, and that it only tests humanness.
Implementation Details

Because the goals of the project centered on research and surveying existing methodology, implementation was not emphasized as a deliverable. However, as part of the demo, we will display an interactive mock-up of how the two-step solution might work in a simulated environment.
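
For the demo, a toy end-to-end simulation could tie the two steps together roughly as follows; this is a hypothetical sketch of the flow, not the actual mock-up code.

    def simulate_user(is_verified_human, events, reputation=0.0, deplatform_threshold=-10.0):
        """Step 1: require a unique-human link; step 2: moderate behavior over time."""
        if not is_verified_human:
            return "unverified"  # never linked to a unique physical human
        for kind, weight in events:  # e.g. ("endorsement", 3.0) or ("report", 2.0)
            reputation += weight if kind == "endorsement" else -weight
        return "deplatformed" if reputation < deplatform_threshold else "in good standing"

    print(simulate_user(True, [("endorsement", 3.0), ("report", 2.0)]))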