Decentralized Moderation: How TrustSearch Leverages Collective Intelligence to Identify & Remove Bad Actors

TrustSearch has a clearly defined purpose: to provide people with the ability to trust each other.

To achieve this, TrustSearch provides users with a wide variety of tools, such as TrustSearch profiles, Transitive Trust between user profiles, and Trust Trend, which identifies and reveals the recent trust-related actions of users.

TrustSearch is able to quantify and track the trust level of any given user by collecting data from a broad spectrum of sources such as social media, online marketplaces, and sharing economy platforms. Interpersonal trust is a key element of TrustSearch — users are able to create “empowered trust” connections with closely trusted friends and family members, forging strong trust bonds between them.

What happens when a user wants to do the opposite, however?

The TrustSearch ecosystem provides a highly accurate assessment of how trustworthy an individual or business is by collecting data from a wide range of different external sources, such as sharing economy accounts and online marketplaces.

If a TrustSearch user takes actions that damage their trust on these external platforms or lower their review or rating scores, the change is immediately reflected in both the user’s TrustScore and Trust Trend values. This reactive system significantly reduces the need for moderation on the TrustSearch platform — user Trust values are based on verifiable actions in real time.
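As a rough illustration of this reactive model, the sketch below shows how a rating change on an external platform might feed into a user’s TrustScore and Trust Trend. The data structures, weights, and update rule are illustrative assumptions, not TrustSearch’s actual scoring algorithm.

```python
# Illustrative sketch only: hypothetical names and weights, not TrustSearch's
# actual scoring algorithm.
from dataclasses import dataclass, field

@dataclass
class UserTrust:
    trust_score: float                                  # aggregate TrustScore
    recent_deltas: list = field(default_factory=list)   # feeds Trust Trend

def apply_external_signal(user: UserTrust, old_rating: float,
                          new_rating: float, source_weight: float) -> None:
    """Reflect a rating change from an external platform (e.g. a marketplace)."""
    delta = (new_rating - old_rating) * source_weight
    user.trust_score += delta           # TrustScore reacts immediately
    user.recent_deltas.append(delta)    # Trust Trend tracks recent movement

def trust_trend(user: UserTrust) -> float:
    """A simple Trust Trend: the net direction of recent trust-related actions."""
    return sum(user.recent_deltas[-30:])
```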

There are, however, scenarios in which a user may want to flag a TrustSearch profile for review. Potentially fraudulent businesses or users, for example, may be flagged by TrustSearch users to notify other users. To facilitate this, TrustSearch incorporates a robust moderation system that allows the TrustSearch community to collectively self-moderate.

The Importance of Moderation & Fraud Prevention in Online Ecosystems

In any complex system or ecosystem that involves interpersonal interaction, there exist individuals who attempt to abuse it. These individuals are often referred to as “bad actors,” a term used to describe people who abuse nominally value-neutral technologies or online services in order to exploit others.

Trust-based ecosystems and platforms therefore need a feature that allows users to identify potential bad actors, warn other users of the trust issues bad actors present, or even restrict bad actor access to the platform altogether.

[Visualization of personal attacks on Wikipedia]

Bad actors exist in all online communities. The above visualization displays 30 days of Wikipedia community activity — both red and gray represent “toxic” comments or interpersonal attacks.

Different platforms take different approaches to moderation. Popular online freelancing platform Upwork, for example, takes a vertical approach to moderation, with a centralized moderation team reviewing exploitative or abusive behavior. Facebook, in contrast, uses a hybridized semi-decentralized model: users are invited to report potentially abusive content, posts, or profiles to moderator teams, who review these reports and take action.

The TrustSearch platform, however, is built on a foundation of decentralized peer-to-peer trust. For this reason, it’s not possible to moderate TrustSearch profile reports with a centralized moderation team.

Rather than centralize the moderation process, TrustSearch places moderation entirely in the hands of the TrustSearch community, decentralizing the moderation system.

Decentralized Trust and Moderation

Trust is based on interaction between individuals. TrustSearch provides a measurement of the trustworthiness of any given individual. The moderation process that identifies, flags, and removes bad actors within the TrustSearch platform is thus based on interaction between individuals.

The purpose of TrustSearch’s moderation system is to provide users with the means to flag potential bad actors and subject their profile to a review process. This review process is conducted by highly trusted TrustSearch users, and is executed in a democratic manner.

TrustSearch users, depending on their TrustScore and several other factors, are able to join an ethics committee that is integral to the moderation and governance process.

Why build a decentralized moderation process, though?

Users of any online platform are more likely to participate in discussion, moderation, and use of a platform that is moderated in a democratic manner. Major online platforms are now moving towards decentralized moderation systems due to the higher participation rates and efficiency they deliver — Twitter, for example, is currently developing a decentralized moderation system based on an open decentralized standard.


Political Trust, Satisfaction and Voter Turnout data published in Comparative European Politics reveals that voters across 17 different countries are more likely to vote when they have a high level of satisfaction in the efficacy and trustworthiness of their governments.

In-depth community engagement in the moderation process plays a critical role in building trust between platform users and in the platform itself. The expertise of individuals that are visibly engaged in the moderation process is a major contributing factor to the perceived reliability and fairness of a moderation system.

TrustSearch’s decentralized moderation system builds trust between individuals and in the TrustSearch platform itself by allowing both expert moderators and laypersons to contribute to the moderation process.

The TrustSearch community works together to collaboratively flag, review, and take action against bad actors in a transparent and fair manner.

Problems Presented by Decentralized Moderation

We’ve established that accurate and fair decentralized moderation is built on trust and community involvement. There are a number of issues presented by decentralized moderation, however, that must be addressed in order to establish a functional decentralized moderation model.

Decentralized moderation must overcome three key issues in order to work:

  1. Decentralized autonomy:
    Community members must be able to make moderation decisions without the centralized leadership of a single leader, team, or organization. TrustSearch operates across multiple different regions, cultures, and perspectives, and must thus establish a trust assessment process based on collective autonomy.
  2. Collusion resistance:
    Autonomous decentralized moderation systems must be resistant to collusion. Community-based review systems, such as the systems present in platforms like TripAdvisor, Yelp, media review site Metacritic, and more, are subject to “review bombing” in which groups of bad actors collaboratively flood a single profile with negative reviews. TrustSearch must be resistant to this practice in order to deliver accurate and fair moderation.
  3. Authoritative consensus:
    Community consensus in contemporary collaborative moderation systems does not take into account the expertise level of community members. TrustSearch’s community moderation system must take into account factors such as regionalization, community member authority, and individual expertise in the subject matter governing a dispute in order to deliver an accurate moderation decision.

Building a Functional Decentralized Trust Moderation Model

TrustSearch addresses these three issues by leveraging the TrustScore rating of all TrustSearch users involved in the platform’s threat identification, assessment, and moderation process. The TrustSearch decentralized moderation system uses TrustScore to solve these problems through the following methods:

  1. TrustSearch provides users with decentralized autonomy in the moderation process by allowing any TrustSearch member with a sufficiently high TrustScore to flag a profile for review. If a profile is flagged for investigation by a sufficient number of high-TrustScore community members, the profile is automatically forwarded to a collaborative ethics committee. This process occurs regardless of location or authority, allowing the TrustSearch community to make collaborative moderation decisions without the input of a single centralized moderator.
  2. TrustSearch eliminates the possibility of collusion within the TrustSearch decentralized moderation system by removing count-based moderation automation. Simulation data of count-based moderation systems that flag a profile for review once a sufficient number of flags have been submitted reveals that such systems are highly prone to collusion. Instead of relying on flag count alone or allowing users to submit review “comments,” TrustSearch assesses a range of factors that include the community structure of user accusations, the TrustScore of individuals that flag a profile, their individual Trust networks, and the TrustScore and network of the flagged profile.
  3. TrustSearch facilitates authoritative community moderation decisions by establishing a weighted voting system that gives a stronger vote weight to individuals that possess expertise or higher trust levels in the specific industry, category, or subject that the profile under review is classified under. In this regard, the TrustSearch moderation process is a “closed” crowdsourced online dispute resolution process. Both “laypersons” and subject matter experts are able to participate in the moderation process, but vote preference is given to individuals that demonstrate extensive knowledge or trust in their specific field.

This moderation process ensures that a dispute or moderation action is executed in a collusion-proof, collaborative manner while still accounting for and leveraging the perspective of community members that are able to contribute highly relevant perspectives.
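To make the collusion-resistance idea concrete, here is a minimal sketch of how flags could be discounted when the accusers form a tightly interlinked cluster. The function names, the density measure, and the penalty factor are assumptions for illustration, not TrustSearch’s actual implementation.

```python
# Hypothetical sketch of collusion-aware flag scoring: independent flags from
# high-TrustScore users count for more than flags from a densely interlinked
# group of accusers. All names and thresholds are illustrative assumptions.
from itertools import combinations

def flag_pressure(flaggers, trust_scores, trust_networks, collusion_penalty=0.5):
    """
    flaggers:        list of user ids that flagged the profile
    trust_scores:    dict of user id -> TrustScore
    trust_networks:  dict of user id -> set of user ids they Trust
    """
    base = sum(trust_scores[u] for u in flaggers)

    # Community structure of the accusations: the fraction of flagger pairs
    # that already Trust each other, a rough proxy for a coordinated cluster.
    pairs = list(combinations(flaggers, 2))
    linked = sum(
        1 for a, b in pairs
        if b in trust_networks.get(a, set()) or a in trust_networks.get(b, set())
    )
    density = linked / len(pairs) if pairs else 0.0

    # Densely interlinked accusations are discounted to resist review bombing.
    return base * (1 - collusion_penalty * density)
```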

The Filter Bubble Problem

Bad actors, scammers, and fraudsters cost consumers hundreds of millions of dollars every year, both online and offline. Investment scam frequency in the UK alone increased by 152 percent in 2019, with fraud victims losing £12,200 on average.

Investment scams often target individuals that are unfamiliar with complex investment schemes, persuading investor victims to part with money in order to invest in industries such as carbon credits, land banks, digital currency, or precious metals.

One UK-based fraudulent investment scheme in 2019, for example, persuaded investors to put over £107.9 million into a fake eco-investment scheme aimed at wealthy investors. Investment scams can have a devastating effect on sophisticated and unsophisticated investors alike — the 2018 BitConnect digital currency Ponzi scheme resulted in over $2.6 billion in losses to families and investors around the world.

Investment scams and other fraud mechanisms commonly rely on the “filter bubble” phenomenon to attract potential investors and keep them from critically assessing a fraudulent scheme. Also referred to as the cognitive echo chamber effect, the filter bubble phenomenon refers to the habit of internet users to engage with content that confirms already-held beliefs and ignore warnings or red flags.

Scam artists and fraudsters are aware of the confirmation bias that drives investor behavior. High-profile Wall Street fraudster Marc Dreier was able to scam hundreds of millions of dollars out of investors in 2008 by exploiting this effect. In a complex investment scheme, Dreier arranged for seemingly neutral third-party accountants or CEOs to endorse a promissory note scam to potential investors, providing them with assurance that the offer was legitimate.

[Figure: Twitter filter bubbles]

Filter bubbles visualized: MIT data reveals Twitter accounts divided by political stance, far left to far right, with no interaction between opposite perspectives.

The same practice is used in modern scams and investment fraud schemes — scammers will create large social networks of potential investors, herding them into closed groups on platforms such as Twitter or Telegram. By directing the flow of information within these filter bubbles, scammers are able to drown out the voices of informed individuals that attempt to alert others to the falsity of a scam and reaffirm the legitimacy of their scheme.

Filter bubbles and cognitive bias don’t just restrict our ability to see through scams, however. The rise of algorithmic features in major search engines, social media, and news platforms has created an online environment in which news presentation is highly individualized based on each user’s personal browsing habits and preferences.

Divisions between political perspectives and groups offline, for example, are reflected in online interactions — the information flow and discourse between two disparate political or geographic parties is restricted, minimizing open exchange and ultimately reducing inter-party trust.

Leveraging Authority to Prevent Scams, Fraud, & Cognitive Bias

The TrustSearch decentralized moderation system minimizes the impact of the filter bubble effect and cognitive bias by placing a greater emphasis on the perspectives of highly authoritative individuals.

The forum of public opinion, as evidenced in the filter bubble problem, is prone to collusion and manipulation by bad actors that seek to obscure fraudulent action or misinformation. The US SEC issues a constant stream of scam and fraud alerts, but US citizens still suffer over $1.48 billion in losses to financial fraud every year.

The TrustSearch community moderation model allows highly trusted individuals to highlight scams, frauds, and bad actors when they are identified, and to speak from a position of authority. Rather than allow bad actors or less-informed community participants to contribute an equally weighted opinion to the assessment of a flagged profile, potentially reinforcing cognitive bias, TrustSearch’s weighted voting system gives greater weight to trusted, authoritative expertise.

By leveraging crowdsourced scam identification and giving the perspective of trusted experts higher value, TrustSearch provides users with a collaboratively-created, objective, and accurate assessment of the trustworthiness of any given profile — one that can’t be manipulated by filter bubbles or cognitive bias.

How TrustSearch Decentralized Moderation Works

TrustSearch doesn’t follow a traditional reporting and moderation model. Rather than allow any individual to flag a profile for review, TrustSearch allows users that meet a specific TrustScore threshold to submit a profile for consensus review.

The primary factor that determines whether or not a profile proceeds to consensus review is the cumulative TrustScore of the individuals that flag it, weighed against the TrustScore of the profile itself. A single flag may not be enough to open a consensus review case, but a smaller number of high-TrustScore users or an abnormal percentage of total viewers flagging a profile can also trigger a consensus review.
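A minimal sketch of this trigger condition, assuming hypothetical threshold values (the actual ratios used by TrustSearch are not specified here):

```python
# Illustrative trigger for consensus review; threshold values are assumptions.
def should_open_consensus_review(flagger_scores, profile_score,
                                 flag_count, view_count,
                                 score_ratio=1.5, abnormal_flag_share=0.05):
    # Cumulative TrustScore of the flaggers, weighed against the profile's own score.
    if sum(flagger_scores) >= score_ratio * profile_score:
        return True
    # An abnormal share of total viewers flagging the profile also triggers review.
    if view_count and flag_count / view_count >= abnormal_flag_share:
        return True
    return False
```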

Once a profile has been flagged by an abnormal number of users, the profile is then subject to review by an ethics committee. This ethics committee is open to any TrustSearch user with a high TrustScore and a high number of “Trusters”.

The ethics committee then votes on the best course of action to resolve the issue presented by the profile. The voting process includes an important feature — ethics committee membership is open to any user with a sufficiently high TrustScore, but committee members don’t all share an equal vote.

Each member’s vote weight is calculated based on their TrustScore, the number of users that Trust them, their previous participation in the assessment of other moderation cases, and their relative expertise or knowledge in the subject or category that the dispute or profile falls under.

TrustSearch ethics committees are classified into a broad spectrum of different fields. TrustSearch users that possess tenure, credibility, or extensive knowledge in their specific field are identified by the TrustSearch moderation system and provided with a higher vote weight than other committee members.
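As a sketch of how these factors might combine into a single vote weight (the coefficients, caps, and scales below are illustrative assumptions, not TrustSearch’s published formula):

```python
# Hypothetical vote-weight calculation; coefficients and caps are assumptions.
def vote_weight(trust_score, truster_count, past_cases, domain_expertise,
                w_score=0.4, w_trusters=0.2, w_history=0.1, w_expertise=0.3):
    """
    trust_score:      the member's TrustScore (assumed 0-100 scale)
    truster_count:    number of users that Trust the member
    past_cases:       moderation cases the member has previously participated in
    domain_expertise: 0.0-1.0 relevance of the member's expertise to the
                      category the flagged profile is classified under
    """
    return (w_score * trust_score
            + w_trusters * min(truster_count, 100)   # cap runaway influence
            + w_history * min(past_cases, 50)
            + w_expertise * domain_expertise * 100)
```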

For example: a TrustSearch profile may be flagged by other users for finance or securities trading-related actions. Should this profile reach the consensus review stage, an ethics committee will vote on the best response.

This ethics committee will consist of individuals that are highly experienced in the securities and commodities trading industry as well as “everyday” individuals that are not widely regarded as experts in the field. Expert individuals, in this case, carry a stronger vote than everyday individuals.

Should the ethics committee in the above example include an individual that is widely regarded as an expert in the subject matter and Trusted by similar subject matter experts, their vote will carry a stronger weight than that of a layperson committee member. Each ethics committee member thus has a different impact on the final decision.

A moderation case is resolved when consensus is reached among the greatest number of ethics committee members. The moderation process concludes when a minimum vote count and weight is reached within a maximum time period.
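A sketch of that closing condition, assuming a simple weighted tally and hypothetical quorum and time parameters:

```python
# Illustrative closing rule: quorum and time limits are assumed values.
def resolve_case(votes, elapsed_hours,
                 min_votes=10, min_total_weight=500.0, max_hours=72):
    """votes: list of (option, weight) tuples cast by committee members."""
    total_weight = sum(weight for _, weight in votes)
    quorum = len(votes) >= min_votes and total_weight >= min_total_weight
    if not quorum and elapsed_hours < max_hours:
        return None  # the case stays open
    # The option with the greatest weighted support wins.
    tally = {}
    for option, weight in votes:
        tally[option] = tally.get(option, 0.0) + weight
    return max(tally, key=tally.get) if tally else None
```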

The TrustSearch decentralized moderation system is an innovative way to moderate a platform based on the trust of users and their individual specialized knowledge. For the first time, moderation is open to anyone and takes into account the knowledge of trusted specialists in their field.

TrustSearch Decentralized Moderation and Review Process

The TrustSearch moderation and review process is extremely straightforward in practice. A user that wishes to flag a profile for review is able to press a “Consensus” button on the profile, which will contribute toward the opening of a new case and allow them to view the history of previous cases.

Once a profile is flagged in an abnormal manner enough times, the review process will be triggered and an ethics committee will be formed. This ethics committee will vote in two steps:

  1. Assessment
    The ethics committee will vote on whether action is needed or not. If no action is needed, the review process will be closed and logged in the consensus history of the profile. If action is voted as necessary, the committee will proceed to step two.
  2. Sanctioning
    The ethics committee will determine what sanction is necessary at step two of the review process. Members will vote on which sanction the profile receives from the ethics board, which will affect the profile’s presence on the TrustSearch platform (see the sketch after this list).
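A compact sketch of this two-step flow, using a hypothetical weighted-majority helper and made-up option names:

```python
# Illustrative two-step review flow; helper and option names are assumptions.
def weighted_majority(votes):
    """votes: list of (option, weight); returns the option with the most weight."""
    tally = {}
    for option, weight in votes:
        tally[option] = tally.get(option, 0.0) + weight
    return max(tally, key=tally.get)

def run_review(assessment_votes, sanction_votes):
    # Step 1 - Assessment: does the flagged profile require any action?
    if weighted_majority(assessment_votes) == "no_action":
        return {"outcome": "closed", "logged_to": "profile consensus history"}
    # Step 2 - Sanctioning: the committee votes on which sanction applies.
    return {"outcome": "sanctioned", "sanction": weighted_majority(sanction_votes)}
```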

Depending on the outcome of ethics committee voting, the following sanctions can be placed on a TrustSearch profile:

  • Warning by private message
  • A TrustScore sanction that reduces the profile’s TrustScore in increments depending on the ethics committee’s decision. This reduction can be -10, -20, -30, or more.
  • Temporary account suspension for a specific period, such as 1 week, 30 days, 90 days, or longer. When suspended, a profile displays a “Warning” badge that indicates the length of the suspension, a warning description, and a red color code.
  • Permanent suspension. This resolution is typically reserved for confirmed bad actors, such as scam or fraud profiles. The TrustSearch profile of a permanently suspended account is disabled but still accessible — the content of the profile is blurred and displayed alongside a warning sign, sanction details, vote details, and moderation history. (A rough data model for these sanction tiers is sketched after this list.)
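The sanction tiers above could be modeled roughly as follows; the field names and enum values are illustrative assumptions, not TrustSearch’s schema:

```python
# Hypothetical data model for the sanction tiers; names are assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class SanctionType(Enum):
    PRIVATE_WARNING = "warning_by_private_message"
    TRUSTSCORE_REDUCTION = "trustscore_reduction"    # e.g. -10, -20, -30
    TEMPORARY_SUSPENSION = "temporary_suspension"    # e.g. 7, 30, or 90 days
    PERMANENT_SUSPENSION = "permanent_suspension"    # profile blurred, history shown

@dataclass
class Sanction:
    kind: SanctionType
    trustscore_delta: int = 0              # used for TRUSTSCORE_REDUCTION
    suspension_days: Optional[int] = None  # used for TEMPORARY_SUSPENSION
    warning_text: str = ""                 # shown on the "Warning" badge
```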

The consensus details of all ethics committees, all voting statistics, and the details of voting members and their TrustSearch profiles are made available to all TrustSearch members.

TrustSearch Moderation is Built on Collaborative Community Trust

Decentralized dispute resolution and moderation models are still in a developmental stage — most sharing economy, review, and marketplace platforms online today rely on traditional moderation models that fail to take into account the intricacies of interpersonal trust and individual expertise.

The TrustSearch decentralized moderation system provides every member of the TrustSearch community with a voice, delivering collaborative input into moderation decisions in a trust-based manner. By bringing the TrustSearch community together and providing a transparent means of moderation, TrustSearch has established the first democratic dispute resolution process that leverages the collective intelligence and trust of every platform user.