About

Facts matter. So does knowing where they come from.

During major conflicts and geopolitical crises, social media accounts become primary news sources for millions of people. Journalists cite them in live blogs. Researchers quote them in reports. Policymakers read them. The public shares them.

Most of that happens without any systematic understanding of who runs these accounts, how accurate they actually are, or what positions they hold. Some of the most widely followed accounts in this space have significant documented accuracy problems. Others hold clear political positions that colour everything they post — something their audiences may not be aware of.

OSIRIS exists to change that. We rate social media accounts covering conflicts and geopolitical events through a structured, transparent methodology. The scores are published openly. The methodology is published openly. There is no right of reply.

Three Scores, Three Questions

Every account receives three distinct outputs, designed to answer different questions about reliability:

  • Factuality Score (0.0 to 10.0) — Does this account get its facts right? Are its claims grounded in credible sourcing?
  • Content Integrity Score (A+ to F) — How does this account operate? Does it use genuine media, credit sources, and frame events honestly?
  • Position & Stance Tags (descriptive) — What editorial or geopolitical position does this account demonstrate? Tags are disclosures, not verdicts.

An account can hold a clear political stance and still score highly for factual accuracy. We treat factual reliability and editorial positioning as distinct things because conflating them produces misleading results and because audiences deserve to know both.
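The separation described above can be pictured as three independent fields on a scorecard. The field names and sample values below are purely illustrative; OSIRIS does not publish a code-level schema:

```python
from dataclasses import dataclass, field

@dataclass
class Scorecard:
    """Illustrative shape of the three outputs; not OSIRIS's actual schema."""
    factuality: float                 # 0.0-10.0: does the account get its facts right?
    content_integrity: str            # letter grade A+ to F: how does it operate?
    stance_tags: list[str] = field(default_factory=list)  # disclosures, not verdicts

# A high factuality score and a clear stance can coexist on one card:
sample = Scorecard(factuality=8.7, content_integrity="B+",
                   stance_tags=["pro-Ukraine editorial line"])
```

Because the fields are independent, a strong stance tag never drags down the factuality number, and vice versa.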

How It Works

Each account is assessed against a reviewed sample of posts. Every post is classified by type (Factual Claim, Analysis, Quote, Paraphrase, Retweet, Stolen Content, or Social), then rated on the dimensions appropriate to that category.

Factual claims are assessed for accuracy and source quality. Did the event happen as described? Is the sourcing credible?

Analysis and opinion are assessed for framing integrity. Is the evidence fairly represented? Is context provided?

Media is checked separately. Is this imagery genuine and current, or is it archival footage, AI-generated, or manipulated?

More recent posts are weighted more heavily than older ones. An account's behaviour today matters more than what it posted two years ago. Sample size is shown on every scorecard — a score based on 10 posts is less reliable than one based on 60.

Posts marked as pending — where the claim cannot yet be verified or ground truth has not emerged — are excluded from score calculations. They do not count for or against the account until resolved. The number of pending posts is always shown.
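The weighting and exclusion rules above can be sketched as follows. The exponential half-life is an assumed weighting scheme chosen for illustration; the methodology states only that newer posts count for more, not the exact decay function:

```python
from dataclasses import dataclass

@dataclass
class RatedPost:
    accuracy: float   # 0.0-10.0 rating for this individual post
    age_days: int     # days since the post was published
    pending: bool     # True if ground truth has not yet emerged

def factuality_score(posts: list[RatedPost],
                     half_life_days: float = 180.0) -> tuple[float, int, int]:
    """Recency-weighted mean of resolved posts.

    Pending posts are excluded entirely -- they count neither for nor
    against the account. Returns (score, sample_size, pending_count),
    mirroring the figures shown on a scorecard.
    """
    resolved = [p for p in posts if not p.pending]
    pending_count = len(posts) - len(resolved)
    if not resolved:
        return 0.0, 0, pending_count
    # Each post's weight halves every `half_life_days` (assumed scheme).
    weights = [0.5 ** (p.age_days / half_life_days) for p in resolved]
    score = sum(w * p.accuracy for w, p in zip(weights, resolved)) / sum(weights)
    return round(score, 1), len(resolved), pending_count
```

With a recent accurate post, an old inaccurate one, and one pending post, the score lands well above the plain average of the two resolved posts, and the pending post is reported but not counted.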

What We Check For

  • Accuracy — Verified, Likely True, Mostly True, Misleading, Mostly False, False, or Pending
  • Source Quality — Verified primary sources, credible secondary sources, organisation statements, or anonymous/unknown sourcing
  • Media Integrity — Authentic, archival (disclosed or undisclosed), AI-generated (disclosed or undisclosed), manipulated, or stolen
  • Attribution — Does the account credit sources properly, or present others' work as its own?
  • Framing — Is analysis intellectually honest? Is context provided? Is the language proportionate to the evidence?
  • Stance — What geopolitical or editorial position does the account's content demonstrate? Recorded separately from accuracy.

Hard Flags

Certain integrity failures trigger hard-visibility flags that appear on the scorecard regardless of overall grade:

  • Undisclosed AI-generated media (M6) — Synthetic media presented as authentic footage without disclosure
  • Manipulated or doctored media (M7) — Authentic media edited to change its meaning or remove important context

These are not averaged into obscurity by a strong performance elsewhere. They are surfaced directly to readers.
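That surfacing rule can be sketched directly. The flag codes (M6, M7) come from the list above; the rendering logic itself is an assumption about the stated behaviour, not OSIRIS's implementation:

```python
# Hard-visibility flags from the list above.
HARD_FLAGS = {
    "M6": "Undisclosed AI-generated media",
    "M7": "Manipulated or doctored media",
}

def scorecard_lines(grade: str, flag_codes: list[str]) -> list[str]:
    """Render a card where hard flags are listed alongside the grade,
    whatever that grade is, rather than being folded into an average."""
    lines = [f"Content Integrity: {grade}"]
    lines += [f"FLAG {code}: {HARD_FLAGS[code]}"
              for code in flag_codes if code in HARD_FLAGS]
    return lines
```

Under this sketch, even an A-grade account that posted one undisclosed synthetic clip would show `FLAG M6` next to its grade.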

Independence

Accounts are not notified when they are being rated, and there is no right of reply once a rating is published. Rated accounts cannot pay to influence scores. The project operates entirely separately from any other work by the people involved in it.

The project has no financial relationship with any government, military organisation, platform, or media outlet. Ratings are not offered for sale and are not commissioned. The methodology is public specifically so that anyone can assess the scoring process for themselves.

Who It Is For

OSIRIS is for anyone who needs to make informed decisions about which social media accounts to trust when covering or following conflicts. That includes journalists and editors who cite OSINT accounts in news coverage, researchers who use them as sources, media organisations assessing their own sourcing practices, and anyone who wants to understand the information they are consuming.

What We Are Not

We are not a fact-checking organisation in the conventional sense. We do not investigate individual claims in isolation. We rate accounts based on patterns across a sample of posts, not single instances. An account that gets one thing wrong is not automatically rated poorly. An account that consistently misrepresents events, relies on unverifiable media, or presents speculation as established fact will reflect that in its score.

We are also not a political project. Our concern is accuracy and transparency, not which side of a conflict an account supports. We apply our criteria consistently regardless of which direction a bias runs.

Read the full methodology
Browse scorecards