fraud risk model built mid-incident
when fraud hit at POPTech, worked with the data team to map every confirmed fraud user's transaction pattern - amounts, merchant categories, error codes, failure rates. built a risk score from that dataset. the model kept improving as new confirmed cases came in, with a plan to wire it into the rule engine as a live feedback loop. i defined which signals mattered, shaped what the model should look for, and validated whether the outputs made sense in a fraud context - built alongside the data team.
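a minimal sketch of what a risk score like this can look like - the signal names, thresholds, and weights below are hypothetical placeholders for illustration, not the actual POPTech feature set:

```python
# hypothetical signals and weights, for illustration only - the real
# signal set came from mapping confirmed fraud users' transaction patterns
WEIGHTS = {
    "high_risk_merchant_category": 0.3,
    "repeated_error_codes": 0.25,
    "high_failure_rate": 0.25,
    "unusual_amount": 0.2,
}

def risk_score(txn: dict) -> float:
    """weighted sum of binary fraud signals; higher means riskier."""
    signals = {
        "high_risk_merchant_category": txn.get("merchant_category") in {"gift_cards", "crypto"},
        "repeated_error_codes": txn.get("error_code_count", 0) >= 3,
        "high_failure_rate": txn.get("failure_rate", 0.0) > 0.5,
        "unusual_amount": txn.get("amount", 0) > 10_000,
    }
    return sum(WEIGHTS[name] for name, fired in signals.items() if fired)
```

in a feedback loop, each newly confirmed fraud case re-weights or adds signals, which is what "the model kept improving as new confirmed cases came in" means in practice.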
risk signals from real data
identified and defined the right signals for detection - velocity, device fingerprinting, geo clustering, error code sequences. these fed both the rule engine and the risk scoring layer at POPTech and FamPay.
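most of these signals reduce to simple computations over a user's recent history. a sketch of the velocity signal, with an assumed window and threshold (illustrative values, not the production ones):

```python
# sketch of a velocity signal: transaction count per user inside a
# sliding time window. window size and threshold are illustrative.
WINDOW_SECONDS = 600    # 10-minute lookback (assumed)
VELOCITY_THRESHOLD = 5  # flag at 5+ transactions in the window (assumed)

def velocity_signal(txn_timestamps: list[float], now: float) -> bool:
    """true when transaction velocity inside the window crosses the threshold."""
    recent = [t for t in txn_timestamps if 0 <= now - t <= WINDOW_SECONDS]
    return len(recent) >= VELOCITY_THRESHOLD
```

the same boolean-signal shape works for geo clustering or error code sequences, which is what lets one signal feed both the rule engine and the scoring layer.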
rule engine typologies from real patterns
seeded the AI vendor's rule typologies from confirmed fraud cases - not hypothetical ones. rules that come from actual data stay accurate longer and generate fewer false positives.
model validation and outcome tracking
tracked TPR, FPR, and analyst workload as indicators of whether a rule or model is doing its job. a rule that blocks everything has perfect recall and terrible precision. both matter.
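the block-everything trade-off above can be shown directly - a rule that flags every transaction gets a perfect true positive rate and the worst possible false positive rate at the same time:

```python
def tpr_fpr(is_fraud: list[bool], flagged: list[bool]) -> tuple[float, float]:
    """true positive rate and false positive rate for a rule's decisions."""
    tp = sum(y and f for y, f in zip(is_fraud, flagged))
    fn = sum(y and not f for y, f in zip(is_fraud, flagged))
    fp = sum(not y and f for y, f in zip(is_fraud, flagged))
    tn = sum(not y and not f for y, f in zip(is_fraud, flagged))
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

# a rule that blocks everything: perfect recall, worst possible FPR
labels = [True, False, False, True, False]
print(tpr_fpr(labels, [True] * 5))  # (1.0, 1.0)
```

analyst workload is the third axis: every false positive above lands in a queue, so two rules with the same TPR can have very different operational cost.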