Behavior Analytics
Behavior Analytics is the planned behavioral layer of hoaxeye — a per-server inference agent that watches what happens on a player's screen and flags patterns that look statistically off. It runs off the same upstream channel that powers the operator's multi-stream view; no additional data is collected from the player. We produce verdicts; operators decide what happens next.
Roadmap · details follow. This family is in design and pilot-test scoping. We're not announcing a launch window yet — when we do, it will be paired with the privacy-policy update and the operator AVV addendum that have to land first. Email [email protected] if you want a heads-up the moment the pilot opens.
How it fits together
Four layers, one direction of travel — frames in, verdicts out. The diagram below shows the path; everything is in-flight until the verdict reaches the rule engine.
Bar widths are illustrative — they show where the data shape narrows down (frames → findings → verdicts), not throughput. Concrete cadence numbers ship with the launch announcement and the published methodology.
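To make the narrowing concrete, here is a minimal sketch of the frames → findings → verdicts path. All class names, fields, and the two-finding corroboration rule are illustrative assumptions for this page, not the real pipeline or its tuning:

```python
# Hypothetical sketch of the data shapes along the path. Frames are
# processed in-flight; only structured findings and verdicts move on.
from dataclasses import dataclass


@dataclass
class Finding:
    detection_class: str  # e.g. "camera-state-anomaly" (illustrative)
    frame_index: int
    strength: float       # 0.0 - 1.0, placeholder signal strength


@dataclass
class Verdict:
    detection_class: str
    confidence: float


def aggregate(findings: list[Finding]) -> list[Verdict]:
    """Narrow many per-frame findings down to few verdicts.

    A verdict requires corroboration across frames; the minimum of two
    findings here is an invented placeholder, not a published threshold.
    """
    by_class: dict[str, list[Finding]] = {}
    for f in findings:
        by_class.setdefault(f.detection_class, []).append(f)
    return [
        Verdict(cls, sum(f.strength for f in fs) / len(fs))
        for cls, fs in by_class.items()
        if len(fs) >= 2  # never a single-frame verdict
    ]
```

The point of the sketch is the shape change, not the math: many frames become few findings become at most one verdict per detection class.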
Detection classes
Four named classes at launch. We name the class, not the signals — publishing the signals would just be a tuning sheet for the people we're trying to detect.
- Mod-menu indicators. Visual signals consistent with third-party menu overlays at the rendering layer.
- Camera-state anomalies. Sustained still-frame patterns and free-camera signatures that deviate from expected gameplay state.
- Movement-state anomalies. Discrepancies between rendered state and expected in-engine motion.
- Pattern correlation. Cross-frame signals that only become significant in aggregate — never a single-frame trigger.
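The "significant only in aggregate" rule in the last class can be sketched as a sliding window over per-frame flags. The window size and hit count below are invented placeholders, deliberately not the real tuning values:

```python
# Illustrative cross-frame correlator: a single flagged frame can never
# trigger; only a sustained pattern inside the window can.
from collections import deque


class CrossFrameCorrelator:
    def __init__(self, window: int = 30, min_hits: int = 5):
        # Both parameters are hypothetical; real values are unpublished.
        self.hits: deque[bool] = deque(maxlen=window)
        self.min_hits = min_hits

    def observe(self, frame_flagged: bool) -> bool:
        """Record one frame's flag; return True only on aggregate significance."""
        self.hits.append(frame_flagged)
        return sum(self.hits) >= self.min_hits
```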
Modes
Same mode model as every other family. New servers ship with the family disabled at launch — the family is real, but the operator has to promote it deliberately after their own observation window.
| Mode | Behavior |
|---|---|
| observe | Detection runs and produces a verdict, but no player-facing action is taken. Verdicts are written to the audit log so operators can review tuning before going live. |
| score | The verdict contributes to the player's aggregate risk score. Action triggers only when the score, combined with other signals, crosses the operator-configured threshold — never from this check alone. |
| enforce | The verdict triggers the operator-configured action immediately (deny / queue / kick). Operators choose which actions a verdict maps to per server. |
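The mode table above can be read as a small dispatch function. The function name, the `"operator-action"` marker, and the additive scoring are assumptions made for this sketch, not the real rule-engine API:

```python
# Hypothetical dispatch over the four modes. Returns the updated risk
# score and the action to take (None when no player-facing action fires).
from enum import Enum


class Mode(Enum):
    DISABLED = "disabled"
    OBSERVE = "observe"
    SCORE = "score"
    ENFORCE = "enforce"


def handle_verdict(mode: Mode, confidence: float, risk_score: float,
                   threshold: float, weight: float = 1.0):
    if mode is Mode.DISABLED:
        return risk_score, None
    if mode is Mode.OBSERVE:
        # Verdict is audit-logged only; no player-facing action.
        return risk_score, None
    if mode is Mode.SCORE:
        new_score = risk_score + confidence * weight
        # Action fires only when the aggregate score crosses the
        # operator-configured threshold, never from this check alone.
        return new_score, ("operator-action" if new_score >= threshold else None)
    # ENFORCE: the verdict maps directly to the operator-configured action.
    return risk_score, "operator-action"
```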
Privacy & data handling
Behavior Analytics processes image data, so the privacy boundary deserves to be stated loudly rather than buried in legal copy.
- No additional data collection. The stream source is the operator's existing multi-stream channel. Behavior Analytics adds inference, not collection. If the operator turns off the multi-stream feature for a server, the family has nothing to read.
- Raw frames are not retained. The per-server agent processes them in-flight. What goes to durable storage is the structured verdict and the metadata required for an audit trail — never the underlying imagery.
- Player notice and operator AVV addendum land before launch. The public launch announcement is paired with the corresponding Privacy Policy §9 entry and the AVV addendum that operators sign — neither the launch nor any pilot opens before that.
- Retention follows the established schedule. The structured findings inherit the same Privacy Policy §9 retention windows as other detection families. A specific row is added when the family ships.
- Player appeals stay on the existing path. DSA Art. 16 / 17 / 21 via [email protected]. Same procedure as every other family — see False positives.
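To illustrate the "structured verdict plus audit metadata, never imagery" boundary, here is a hypothetical shape of what reaches durable storage. Every field name here is an assumption for illustration only:

```python
# Illustrative record of what persists after a verdict: structured
# metadata only — there is no field that could carry frame imagery.
import json
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class StoredVerdict:
    server_id: str
    detection_class: str
    confidence: float
    mode: str
    observed_at: str  # ISO-8601 timestamp for the audit trail


record = StoredVerdict("srv-example", "camera-state-anomaly", 0.82,
                       "observe", "2025-01-01T00:00:00Z")
audit_entry = json.dumps(asdict(record), sort_keys=True)
```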
What we'll publish at launch
- The concrete launch window (currently "details follow").
- The structured verdict strings the family emits.
- Latency and cadence numbers with their methodology.
- A pilot-server false-positive history block, the same way every other family has one — including the iteration story. See Backdoor scanner for the format.
- The corresponding Privacy Policy §9 entry and AVV addendum diff.
What we deliberately don't publish
- The visual signals or feature-extraction internals that drive each detection class. Publishing them would be a tuning sheet for bad actors — same logic as on the rest of the docs (see Rule engine).
- Internal model identifiers. We say "in-house AI systems" — same wording used on the False positives page, for the same reasons.
- Numeric score thresholds. Risk buckets low / medium / high only.
- Frame rates, sample windows, or any other timing detail an attacker could map to a specific evasion strategy. Cadence numbers ship at launch with the methodology.
Operator interest
If you operate a server and want an early-access slot once the pilot opens, email [email protected] with your server scale and a short description of what you're seeing. Selection is by operational need, not pricing tier — same line as on the HoaxShield roadmap page.