
How OfficePoll Protects Against Bad-Faith Reviews
Anonymous feedback is only valuable if the platform actively prevents abuse. Here is how OfficePoll uses credibility weighting, automated abuse detection, cooldown periods, and crowd thresholds to make gaming the system extremely difficult.
The Problem No One Talks About
Every anonymous feedback platform faces the same uncomfortable question: what stops someone from using anonymity as a weapon?
It is a legitimate concern. A disgruntled colleague could tank your scores out of spite. A competitor could create an account just to leave a damaging review. A former manager with a grudge could use the cover of anonymity to settle a score they could never settle publicly.
Most platforms address this with vague assurances. "We have moderation." "We review flagged content." These are policies, not systems. A human moderator reading through submissions cannot detect a revenge review that sounds perfectly professional. And a platform that relies entirely on manual review will always be playing catch-up.
OfficePoll takes a different approach. We built multiple layers of automated and human protection that work together to make bad-faith reviews extraordinarily difficult to pull off, and to strip them of meaningful impact even when they slip through.
No system is perfect. We are not going to claim ours catches everything. But we are going to show you exactly how it works, because we think transparency about our defenses is more reassuring than vague promises about our intentions.
Credibility Weighting: Not All Reviews Count Equally
This is the single most important anti-abuse mechanism in OfficePoll, and it is baked into the mathematics of every report we generate.
Every reviewer on OfficePoll has a credibility tier based on their history of giving feedback across the platform. The more you contribute — reviewing multiple colleagues over time — the more weight your feedback carries. New accounts and first-time reviewers have significantly reduced influence. Long-standing, active contributors have substantially more.
What this means for someone trying to game the system: a person who creates an account to leave a single revenge review has very little impact. Their scores are automatically discounted. Meanwhile, the honest feedback from established colleagues who have been using the platform for months carries far more weight in the final report.
Credibility tiers are not just a discount on numerical scores. They also influence the AI synthesis engine. Themes raised by high-credibility reviewers are given meaningfully more prominence than those from unproven accounts. A revenge review from a brand-new account barely registers against honest feedback from trusted contributors.
The system rewards exactly the behavior we want. People who engage broadly and consistently earn more influence. People who show up once with an axe to grind get almost none.
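The exact tiers and multipliers are not published, but the effect can be sketched as a simple weighted average. Everything below, including the tier names and weights, is invented for illustration rather than OfficePoll's actual implementation:

```python
# Illustrative credibility-weighted scoring. The tier names and multipliers
# are invented for this sketch; OfficePoll's actual values are not published.
TIER_WEIGHTS = {
    "new": 0.25,         # first-time or unproven reviewer
    "established": 1.0,  # consistent contributor
    "trusted": 1.5,      # long-standing, broadly active reviewer
}

def weighted_score(reviews):
    """Each review is a (score, tier) pair; returns the credibility-weighted mean."""
    total = sum(score * TIER_WEIGHTS[tier] for score, tier in reviews)
    weight = sum(TIER_WEIGHTS[tier] for _, tier in reviews)
    return total / weight

# Five established colleagues rate someone 4/5; one brand-new account rates 1/5.
reviews = [(4, "established")] * 5 + [(1, "new")]
print(round(weighted_score(reviews), 2))  # 3.86, versus 3.5 under a plain average
```

Under a plain average, the revenge review drags the score down by half a point; under the weighted version it moves it by about 0.14, and the gap only widens as more honest reviewers contribute.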
Automated Abuse Detection: Patterns Humans Miss
Credibility tiers handle the baseline. But some patterns require more targeted detection. OfficePoll runs multiple automated abuse signals on every submission, looking at patterns across a reviewer's entire history. When suspicious patterns are detected, the system applies additional weight reductions or flags the reviewer for human review.
We look for several categories of suspicious behavior. Reviews that appear targeted — coming from accounts with no organic connection to the person being reviewed, combined with extreme scores — receive significant credibility penalties that compound with their tier discount. The result is that a bad-faith review from an unconnected account with suspicious patterns has a near-negligible impact on the final report.
We also detect patterns that suggest lazy or coordinated abuse — reviewers who give suspiciously uniform scores that do not reflect the natural variation you see in honest feedback. Real humans have uneven strengths. Someone who rates a colleague identically across every dimension is not providing thoughtful assessment, and the system treats their feedback accordingly.
These signals are not binary pass/fail checks. They are layered and compounding — multiple suspicious signals on the same reviewer result in progressively steeper discounts. And the specific thresholds and detection methods are intentionally not published, because the goal is to catch bad actors, not teach them what to avoid.
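As a rough mental model, layered penalties can be expressed as multiplicative discounts. The signal names and penalty factors below are invented for illustration; the real thresholds are deliberately unpublished:

```python
# Hypothetical compounding discounts for abuse signals. The signal names and
# penalty factors are invented; the real detection methods are unpublished.
SIGNAL_PENALTIES = {
    "no_organic_connection": 0.5,  # no social-graph link to the target
    "extreme_scores": 0.6,         # all minimums or all maximums
    "uniform_scores": 0.7,         # identical rating on every dimension
}

def effective_weight(tier_weight, detected_signals):
    """Each detected signal multiplies the weight down, so penalties compound."""
    weight = tier_weight
    for signal in detected_signals:
        weight *= SIGNAL_PENALTIES[signal]
    return weight

# A new account (tier weight 0.25, say) tripping all three signals ends up
# carrying roughly 5% of a normal reviewer's influence.
w = effective_weight(0.25, ["no_organic_connection", "extreme_scores", "uniform_scores"])
print(round(w, 4))
```

The key property is the compounding: each additional suspicious signal shrinks the remaining influence rather than adding a flat deduction, so the most suspicious submissions approach zero weight.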
Honest feedback from people who actually work with you.
Weighted by credibility. Protected against abuse. Synthesized into a report no single reviewer can distort.
Three-Month Cooldown: No Harassment Campaigns
One bad review is annoying. A sustained campaign of bad reviews is harassment. OfficePoll prevents the second scenario with a simple but effective rule: you cannot re-review the same person for 90 days.
This means that even if someone is determined to drag down a colleague's scores, they get one submission every three months — and each submission goes through the full credibility weighting and abuse detection pipeline before it influences the report.
When someone does re-review after the cooldown, their new review is treated as an additional data point — but it does not double their influence. The same credibility tier applies, the same abuse detection runs, and their voice is still just one among many in the synthesized report. Having two reviews from the same person does not mean twice the impact. It means the system has two snapshots of one person's perception, weighted the same as any other single reviewer's contribution.
The 90-day window is deliberate. It aligns with quarterly coaching cycles — the natural rhythm at which people's work behaviors actually change. If you reviewed someone in January and want to update your assessment in April, that makes sense. Genuine perceptions evolve over time. What does not make sense is reviewing the same person every week, which is what a harassment campaign would require.
The cooldown applies across all submission paths. Whether you submit feedback through someone's profile link, through a feedback round, or through the people search feature, the system tracks your review history and blocks repeat submissions until the cooldown expires. There is no way to circumvent it by using a different entry point.
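The cooldown itself is a simple rule to express. A minimal sketch, assuming the platform keys review history on (reviewer, target) pairs:

```python
from datetime import date, timedelta

COOLDOWN = timedelta(days=90)

def can_review(history, reviewer_id, target_id, today):
    """True if this reviewer has never reviewed this target, or the cooldown expired."""
    last = history.get((reviewer_id, target_id))
    return last is None or today - last >= COOLDOWN

history = {("rev_1", "target_9"): date(2024, 1, 10)}
print(can_review(history, "rev_1", "target_9", date(2024, 2, 1)))   # False: 22 days in
print(can_review(history, "rev_1", "target_9", date(2024, 4, 15)))  # True: 96 days later
```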
Crowd Threshold: One Voice Cannot Dominate
OfficePoll does not generate a report until at least five different people have submitted feedback. This is not just a data quality measure — it is a fundamental abuse protection.
When a report synthesizes five or more perspectives, no single reviewer can dominate the outcome. Even in the worst case — one reviewer with extreme scores — those scores are reduced by their credibility tier, potentially reduced further by abuse detection, diluted across multiple honest reviewers, and synthesized by an AI that identifies themes across sources rather than amplifying outliers.
The compounding effect of all these layers means a single bad-faith review has a near-negligible impact on the final report. The math simply does not allow one person to meaningfully distort what five or more honest reviewers have said.
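The threshold check is deliberately simple, with one subtlety worth sketching: it counts distinct people, so a repeat review after the cooldown does not move anyone closer to the threshold twice:

```python
MIN_REVIEWERS = 5  # reports are only generated at five or more distinct reviewers

def report_ready(reviewer_ids):
    """Counts distinct people, not submissions: re-reviews do not inflate the count."""
    return len(set(reviewer_ids)) >= MIN_REVIEWERS

print(report_ready(["a", "b", "c", "d", "d"]))  # False: "d" reviewed twice, only 4 people
print(report_ready(["a", "b", "c", "d", "e"]))  # True
```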
And importantly, individual reviews are never shown to the recipient. OfficePoll does not display a list of scores with one suspicious outlier glaring at you. The recipient sees only the synthesized report — a narrative that blends multiple perspectives into themes and patterns. One voice shouting into a crowd of five gets lost in the synthesis.
The Anonymization Pipeline: A Surprising Abuse Filter
OfficePoll's anonymization pipeline was designed to protect reviewers. But it turns out that the same system also protects recipients from certain types of abuse.
When someone submits feedback, the text passes through a multi-layer AI pipeline before anything is stored. The pipeline scrubs identifying information, neutralizes writing style, and then permanently deletes the original text. But there is a step that specifically helps with abuse: if feedback is too personally identifiable, the AI flags it and asks the reviewer to revise.
This matters because the most damaging bad-faith reviews are often the most specific ones. "She tanked the Henderson deal on purpose" or "He only got promoted because of his relationship with Sarah in HR" — these are the kinds of statements designed to hurt, and they rely on specificity for their impact. The anonymization pipeline catches exactly this type of content, because the same details that make feedback harmful also make it identifiable.
The reviewer gets a message explaining that their feedback is too specific to anonymize safely, and they are asked to rewrite it in more general terms. Most bad-faith reviewers, faced with the requirement to strip out the poisonous specifics, either give up or produce something generic enough that the credibility weighting and crowd dilution handle the rest.
And the original text? Permanently deleted. The moment feedback passes through the pipeline, the raw submission ceases to exist. There is no database of original reviews that could be mined, leaked, or weaponized. What remains is the anonymized version only — and it bears no stylistic resemblance to what was originally typed.
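The pipeline's control flow can be sketched in a few lines. The scoring and scrubbing helpers below are crude placeholders for what is actually AI-driven; only the overall shape (score, then either flag for revision or anonymize, and never persist the raw text) reflects the description above:

```python
# Structural sketch of the anonymization pipeline. The helpers are crude
# stand-ins for AI-driven steps; NAMED_DETAILS and the threshold are invented.
IDENTIFIABILITY_THRESHOLD = 0.5
NAMED_DETAILS = ("Henderson", "Sarah")  # placeholder for real entity detection

def identifiability_score(text):
    return 1.0 if any(name in text for name in NAMED_DETAILS) else 0.0

def scrub_identifiers(text):
    return text  # placeholder: the real step removes names, projects, dates

def neutralize_style(text):
    return text.lower()  # placeholder: the real step rewrites phrasing entirely

def process_submission(raw_text):
    if identifiability_score(raw_text) > IDENTIFIABILITY_THRESHOLD:
        # Too specific to anonymize safely: ask the reviewer to revise.
        return {"status": "revise", "stored_text": None}
    stored = neutralize_style(scrub_identifiers(raw_text))
    # Only the anonymized version survives; raw_text is never persisted.
    return {"status": "accepted", "stored_text": stored}

print(process_submission("She tanked the Henderson deal on purpose")["status"])
print(process_submission("Could communicate deadlines more consistently")["status"])
```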
Admin Oversight: When Automation Is Not Enough
Automated systems catch patterns. Humans catch context. OfficePoll uses both.
When the abuse detection system identifies patterns that are suspicious but ambiguous — a single-target reviewer who does have social graph overlap with their target, or someone whose ratings are extremely negative across multiple people — those reviews are flagged for admin review rather than auto-reduced.
An admin reviewing a flagged case can see the reviewer's full submission history, their credibility tier, the specific signals that triggered the flag, and the social graph context. Based on that information, they have three options:
- Dismiss — the flag was a false positive. The review stands at its current weight.
- Ghost — the reviewer's future submissions are silently de-weighted. The reviewer does not know their reviews carry reduced influence. This prevents retaliation without creating a confrontation.
- Ban — the reviewer's account is restricted from submitting further feedback.
There is also a whitelist option for the opposite case — a reviewer who was flagged by automated signals but is clearly legitimate. Whitelisted reviewers bypass all automated penalties. This prevents the system from punishing genuine reviewers who happen to have unusual patterns.
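The four outcomes can be modeled as an enum that maps an admin decision onto the weight a reviewer's submissions carry. The GHOST_WEIGHT value here is hypothetical; the point is that ghosted submissions are still accepted but carry near-zero weight:

```python
from enum import Enum

class AdminAction(Enum):
    DISMISS = "dismiss"      # false positive: review keeps its current weight
    GHOST = "ghost"          # future submissions silently near-zeroed
    BAN = "ban"              # no further submissions accepted
    WHITELIST = "whitelist"  # bypasses automated penalties entirely

GHOST_WEIGHT = 0.01  # hypothetical near-zero multiplier

def resolve_weight(tier_weight, action):
    """Maps an admin decision onto the weight a reviewer's submissions carry."""
    if action is AdminAction.BAN:
        return 0.0
    if action is AdminAction.GHOST:
        return tier_weight * GHOST_WEIGHT
    # DISMISS and WHITELIST both leave the tier weight intact; WHITELIST
    # additionally exempts the reviewer from future automated penalties.
    return tier_weight
```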
The ghost mechanism deserves special attention. In most moderation systems, banning a user is obvious — they try to post and get an error message. This tells them they have been caught, which can escalate the situation. Ghosting is quieter. The reviewer continues to submit feedback, sees no error messages, and has no indication that anything has changed. Their submissions simply carry minimal weight in report synthesis. The target never sees the impact, and the bad actor never knows their effort is wasted.
Permanent Opt-Out: The Ultimate Protection
All of the mechanisms above protect people within the system. But OfficePoll also respects a more fundamental choice: the decision not to participate at all.
Anyone can permanently opt out of OfficePoll. Once you do, no one can submit feedback about you, no one can add you to a feedback round, and we will never contact you again. It is not a pause or a temporary deactivation. It is permanent, and it is enforced at the infrastructure level — your opt-out status is checked before any feedback submission is accepted, not after.
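Because the check runs before a submission is accepted, it can be modeled as the very first gate in the submission path. A minimal sketch with hypothetical identifiers:

```python
def accept_submission(target_id, feedback_text, opted_out_ids):
    """Opt-out is the first gate: checked before any other processing runs."""
    if target_id in opted_out_ids:
        return {"accepted": False, "reason": "recipient permanently opted out"}
    # Only now would the submission proceed to cooldown checks, abuse
    # detection, and the anonymization pipeline.
    return {"accepted": True}

opted_out = {"person_42"}  # hypothetical infrastructure-level opt-out set
print(accept_submission("person_42", "some feedback", opted_out)["accepted"])  # False
print(accept_submission("person_7", "some feedback", opted_out)["accepted"])   # True
```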
This matters for abuse prevention because it gives potential targets an escape hatch that does not depend on our detection systems working perfectly. If someone believes they are being targeted and does not trust the automated protections to handle it, they can remove themselves from the system entirely. No questions asked, no waiting period, no "are you sure" confirmation loop.
Why This Is Harder to Game Than the Alternatives
It is worth putting OfficePoll's protections in context, because the alternatives — traditional reference checks and LinkedIn recommendations — have their own abuse problems that no one talks about.
Reference checks are the most gameable professional feedback system in existence. You choose who speaks about you. Of course the references are positive — you hand-picked them. The hiring manager knows this, the candidate knows this, and yet we all pretend the resulting feedback is meaningful. A system where you curate your own reviewers is not feedback. It is marketing.
LinkedIn recommendations are even worse. They are public, attributed, and reciprocal. You write something nice for me, I write something nice for you. The social pressure to be positive is overwhelming. Nobody writes "adequate at their job but difficult to work with" on a public recommendation. The format makes honest feedback structurally impossible.
Traditional 360 reviews are better but still vulnerable. They typically have no credibility weighting — a new hire's assessment counts the same as a ten-year veteran's. They rarely have cooldown periods. They often show individual responses (even if anonymized), making it easy for a single negative review to dominate the recipient's attention. And they run once a year, giving bad actors a concentrated window to cause maximum damage.
OfficePoll is not immune to abuse. No system that accepts human input ever will be. But the layered defenses — credibility weighting, automated detection, cooldown enforcement, crowd thresholds, anonymization filtering, admin oversight, and permanent opt-out — create a system where gaming requires sustained effort across multiple accounts over many months, and even then, the mathematical impact is minimal.
A revenge review on OfficePoll is not just anonymous. It is also diluted, de-weighted, pattern-checked, cooldown-limited, and synthesized into irrelevance. That is not a guarantee of perfection. It is a guarantee that we have thought carefully about every way someone might try to abuse this system, and we have built specific, measurable defenses against each one.
The goal was never to eliminate bad-faith reviews entirely. The goal was to make them so costly to execute and so limited in impact that they are not worth the effort. We think we have gotten there.