Cross-Platform Abuse

Definition

A situation in which a bad actor or group organises a campaign of abuse (such as harassment, trolling, or disinformation) across multiple online services.

Related Terms

Harassment, Multi-Platform Campaign, Stalking, Brigading.

Background

Cross-Platform Abuse refers to situations where a malicious actor or group deliberately organises and perpetrates a campaign of harmful behaviour – such as harassment, trolling, doxxing, or disinformation – targeting an individual or group across multiple online services or platforms. This abuse is not confined to a single community; it might span several Fediverse instances, or extend to other social media platforms, forums, messaging apps, or other digital spaces.

Perpetrators of cross-platform abuse exploit the interconnected, and sometimes interoperable, nature of digital services to amplify their impact, maintain persistence, and make it significantly more challenging for victims to escape the harassment, or for any single platform’s moderation team to respond effectively in isolation. The campaign is often orchestrated on one platform (sometimes a private or less moderated one) and then executed across many others where the target is present or where their reputation can be damaged.

Why We Care

Dealing with Cross-Platform Abuse is critically important because it represents a severe and often overwhelming form of victimisation. When abuse is coordinated across multiple platforms, it can feel inescapable to the target, leading to intense psychological distress, fear, and a significant chilling effect on their online (and sometimes offline) life. It also presents a complex challenge for moderation, as actions taken on one platform may not stop the abuse occurring elsewhere, and evidence of the coordination might be fragmented across different services.

Recognising and understanding the dynamics of cross-platform abuse helps Moderators and Service Administrators to support victims more effectively, collaborate where possible, and implement policies that acknowledge the limitations of single-platform interventions.

Spotting Cross-Platform Abuse: What to Look For

Identification of cross-platform abuse often relies heavily on reports from the targeted individual(s) experiencing the abuse across different services, or on inter-community alerts.

Account Traits: The abusive accounts might be a mix of established personas, throwaway accounts, or accounts specifically created for the campaign on various platforms. They often exhibit coordinated messaging or timing, even if appearing on different services.

Content Characteristics: Look for consistent themes, narratives, derogatory terms, specific false claims, or harassment tactics being deployed against the same target(s) across multiple platforms. This might include activity within your community as well as reports of similar activity targeting the same individual on other social media sites, forums, or messaging apps.

Posting Patterns: Evidence of coordination is key. This could involve:

  • Reports from a user on your instance that they are being harassed with similar messages on your service and on unrelated platforms.
  • Sudden spikes of negative attention on your platform towards an individual, which coincide with similar spikes elsewhere (a minimal detection sketch follows this list).
  • Abusers on your platform referencing or linking to harassment occurring on other services, or vice-versa.
  • Use of dedicated hate sites or forums to coordinate off-platform and then direct harassment towards targets on your instance and others. (IFTAS maintains a list of “Do Not Interact” instances that includes sources of cross-platform abuse.)
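
Where tooling allows, some of these timing signals can be surfaced automatically. The following is a minimal, illustrative Python sketch of flagging sudden spikes of reports against a single account; the report format, window sizes, and threshold are all assumptions rather than features of any real moderation tool.

```python
# Illustrative sketch only: flag accounts receiving a sudden spike of reports.
# The (timestamp, target) report format, the window sizes, and the 3x-baseline
# threshold are assumptions, not part of any real moderation tool.
from collections import Counter
from datetime import timedelta

def spike_targets(reports, now, window=timedelta(hours=6),
                  baseline=timedelta(days=7), factor=3.0):
    """Return (target, recent_count) pairs whose recent report volume
    far exceeds their longer-term hourly baseline."""
    recent, prior = Counter(), Counter()
    for ts, target in reports:
        if now - window <= ts <= now:
            recent[target] += 1
        elif now - baseline <= ts < now - window:
            prior[target] += 1

    recent_hours = window.total_seconds() / 3600
    prior_hours = (baseline - window).total_seconds() / 3600
    flagged = []
    for target, count in recent.items():
        base_rate = prior[target] / prior_hours   # historical reports per hour
        rate = count / recent_hours
        # Require meaningful volume and a sharp rise over the baseline rate.
        if count >= 5 and rate > factor * max(base_rate, 0.05):
            flagged.append((target, count))
    return sorted(flagged, key=lambda pair: -pair[1])
```

A flagged spike is only a prompt for human review; correlating it with victim reports and off-platform evidence remains a judgment call.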

Behaviour: The behaviour is characterised by its organised and multi-platform nature. The abusers’ intent is clearly to distress, silence, or damage the reputation of the target, and they leverage different platforms to maximise this impact. Victims may report feeling “followed” or systematically targeted wherever they go online.

Key Questions for Assessment:

  • “Is the victim reporting harassment or abuse with similar themes/actors occurring on other platforms in addition to ours?”
  • “Is there evidence (e.g., screenshots, links provided by the victim or other moderators) of coordinated abuse originating from or extending to other online services?”
  • “Does the abusive activity on our platform appear to be part of a larger, organised campaign rather than an isolated incident?”
  • “Are other instance administrators or contacts on other platforms reporting similar targeted abuse against the same individual, potentially from the same or linked perpetrators?”
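
One way to keep triage consistent across a moderation team is to record the answers to these questions in a structured form. The sketch below is purely illustrative: the field names and the two-signal heuristic are assumptions, not an established rubric.

```python
# Illustrative sketch only: the four assessment questions recorded as a
# structured checklist. Field names and the two-signal threshold are assumed.
from dataclasses import dataclass

@dataclass
class CrossPlatformAssessment:
    victim_reports_other_platforms: bool  # similar themes/actors elsewhere?
    evidence_of_coordination: bool        # screenshots/links show coordination?
    part_of_larger_campaign: bool         # organised pattern, not isolated?
    corroborated_by_other_admins: bool    # other instances report the same abuse?

    def likely_coordinated(self) -> bool:
        # A loose heuristic: two or more affirmative answers suggest the
        # report should be triaged as a cross-platform campaign.
        signals = (
            self.victim_reports_other_platforms,
            self.evidence_of_coordination,
            self.part_of_larger_campaign,
            self.corroborated_by_other_admins,
        )
        return sum(signals) >= 2
```

However it is recorded, it is the combination of signals, not any single answer, that points to a coordinated campaign.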

Before You Act: Common Pitfalls & Nuances

Moderators may not have access to, or awareness of, abusive content on other platforms, making it difficult to assess the full scope of the abuse. Different platforms and servers have varying policies and thresholds for what constitutes abusive behaviour, complicating coordinated responses. An effective response to cross-platform abuse requires cooperation between different platforms, which can be hindered by privacy concerns and technical barriers. Abusers often use multiple accounts across platforms to evade detection, making it hard to track and mitigate their actions.

Online platforms operate globally, subject to a patchwork of laws and regulations across different countries. What’s considered abusive or punishable in one jurisdiction might not be in another, complicating platforms’ enforcement policies and their responsibility toward cross-platform abuse.

Each platform or server has its own set of community guidelines and terms of service that define acceptable behaviour. However, these guidelines can vary significantly, leading to challenges in creating a unified approach to combating cross-platform abuse. A behaviour penalised on one platform might be tolerated on another, creating inconsistencies in user experiences and expectations.

Sharing information about users or their activities between platforms to address cross-platform abuse raises significant privacy issues. Platforms must navigate the legal and ethical implications of exchanging user data, ensuring they do not infringe on privacy rights or data protection laws.

Determining a platform’s responsibility involves not just mitigating the abusive activity but also supporting its victims. Moderators must assess how to provide resources and assistance to those affected by cross-platform abuse while navigating the complexities of identifying and taking action against abusers who operate across multiple services. The extent of your responsibility for user behaviour that spans multiple services is itself a difficult question.

Addressing cross-platform abuse effectively requires careful consideration due to its distributed nature.

Limited Scope of Action: Moderators on one Fediverse instance can typically only take direct action against accounts and content on their own instance. They cannot directly moderate other platforms.

Evidence Collection Challenges: Gathering comprehensive evidence of a cross-platform campaign can be difficult for a single moderation team, often relying on the victim to collate this.

Attribution Difficulties: Proving that abuse across different platforms is orchestrated by the same actor(s) can be hard without cooperation between services (which is rare).

Victim Fatigue and Safety: Victims of cross-platform abuse are often exhausted and highly distressed. Prioritise their immediate safety and well-being on your platform.

Common Gotchas:

  • Dismissing a user’s report of off-platform abuse as “not our problem,” without considering how it might link to on-platform behaviour or inform the severity of on-platform abuse.
  • Giving advice that is impractical for a victim to follow across all platforms (e.g., “just block them” – which can be an overwhelming task in a large campaign).
  • Publicly engaging with or calling out the abusers in a way that could escalate the cross-platform campaign.

Key Point: Cross-Platform Abuse is about an organised campaign that transcends single services. While your direct moderation is limited to your community, acknowledging the broader context is vital for assessing severity and supporting the victim effectively.

Managing Suspected Cross-Platform Abuse: Key Steps

When cross-platform abuse is reported or suspected:

  • Prioritise Victim Support and Safety on Your Platform: Take all possible steps to stop any abuse occurring on your instance. Remove abusive content, ban offending local accounts, and, if necessary and within your Service Administrator’s policy, defederate from or block instances that are knowingly facilitating the abuse.
  • Listen to and Document Victim Reports: Allow the victim to describe the full scope of the abuse they are experiencing, even the parts happening off your platform. This context is important for understanding intent and severity. Document what they share.
  • Preserve Evidence (On Your Platform): Securely save evidence of any abuse occurring on your instance that is part of the broader campaign (a documentation sketch follows these steps).
  • Advise the Victim:
    • Encourage them to report the abuse on each platform where it occurs, according to that platform’s specific procedures.
    • Suggest they document and screenshot everything.
    • Advise on privacy and security settings across all their accounts.
    • Direct them to specialist support organisations that deal with online harassment, stalking, and cybercrime. These organisations may have experience with multi-platform abuse.
  • Inter-Community Communication (Service Admin Level): If the abuse involves other Fediverse communities, your service administrator may, with the victim’s consent if appropriate, communicate with the administrators of those instances to share information about the coordinated activity and potentially request action.
  • Focus on Your Sphere of Control: Recognise the limits of your ability to stop off-platform abuse, but do everything within your power to make your instance a safe space for the victim.
  • Avoid Direct Cross-Platform Confrontation: Do not use your official instance accounts to confront abusers on other platforms, as this can escalate the situation or draw your instance into wider conflicts.
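
As a concrete illustration of the documentation and evidence-preservation steps above, here is a minimal sketch of an append-only evidence log. Everything in it (the JSON-lines format, the field names, the hypothetical log_evidence helper) is an assumption; check your instance’s privacy policy and applicable data-protection law before recording reporter details.

```python
# Illustrative sketch only: append one record per piece of on-platform
# evidence to a JSON-lines file. All names here are hypothetical; adapt to
# your own tooling and data-protection obligations.
import hashlib
import json
from datetime import datetime, timezone

def log_evidence(path, post_url, reporter, summary, screenshot_bytes=None):
    """Append one evidence record, hashing any screenshot so it can later
    be verified as unaltered."""
    record = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "post_url": post_url,   # on-platform content only
        "reporter": reporter,
        "summary": summary,
    }
    if screenshot_bytes is not None:
        record["screenshot_sha256"] = hashlib.sha256(screenshot_bytes).hexdigest()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```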

Example Community Guidance

Strike System: “Participation in harassment campaigns, including those coordinated from or extending to other platforms, will be treated with high severity, often bypassing initial warnings.”

General Prohibition: “Engaging in or coordinating abusive behaviour that targets individuals in our community, regardless of whether that campaign also utilises other online services (cross-platform abuse), is strictly prohibited. We will take action against any on-platform activity that contributes to such campaigns.”

Strict Enforcement: “Accounts found to be participating in organised cross-platform abuse targeting members of our community or using our platform to facilitate such abuse will be permanently banned. We will support affected users by taking all possible measures on our service and providing guidance for addressing off-platform elements. We may also defederate from instances found to be consistently enabling or hosting coordinated cross-platform abuse targeting our users.”
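
Operationally, defederation on Mastodon-family software is usually done through the admin web interface, but it can also be scripted. The sketch below assumes a Mastodon v4+ instance and an access token with the admin:write:domain_blocks scope; the instance URL, token, and target domain are placeholders, and other Fediverse software has its own equivalents.

```python
# Illustrative sketch only: block a domain via the Mastodon Admin API.
# Assumes Mastodon v4+ and a token scoped to admin:write:domain_blocks;
# the URL, token, and domain below are placeholders.
import json
import urllib.request

INSTANCE = "https://your.instance"   # placeholder
TOKEN = "YOUR_ADMIN_ACCESS_TOKEN"    # placeholder

payload = json.dumps({
    "domain": "abusive.example",     # instance to defederate from
    "severity": "suspend",           # full defederation; "silence" is milder
    "private_comment": "Coordinated cross-platform abuse targeting our users",
}).encode("utf-8")

req = urllib.request.Request(
    f"{INSTANCE}/api/v1/admin/domain_blocks",
    data=payload,
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```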

