Tag: ML model defenses

Defensive techniques for protecting machine learning models against adversarial, poisoning, evasion, and model extraction attacks. Includes adversarial training, input sanitization, differential privacy, model watermarking, query rate limiting, and monitoring of anomalous predictions at inference time.
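Of the defenses listed, query rate limiting is the most self-contained to illustrate: capping how many predictions a client may request per time window raises the cost of model extraction. Below is a minimal sketch using a per-client sliding window; the `QueryRateLimiter` class and its parameters are illustrative assumptions, not a standard API.

```python
import time
from collections import deque


class QueryRateLimiter:
    """Sliding-window limiter: each client may issue at most
    max_queries model predictions per window_seconds. Slows down
    model-extraction attacks that rely on high-volume querying.
    (Illustrative sketch, not a production implementation.)"""

    def __init__(self, max_queries=100, window_seconds=60.0):
        self.max_queries = max_queries
        self.window = window_seconds
        self._history = {}  # client_id -> deque of query timestamps

    def allow(self, client_id, now=None):
        """Return True if the client is under budget; record the query."""
        if now is None:
            now = time.monotonic()
        q = self._history.setdefault(client_id, deque())
        # Evict timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_queries:
            return False  # over budget: reject, delay, or degrade the response
        q.append(now)
        return True


# Usage: 3 queries allowed per 10-second window.
limiter = QueryRateLimiter(max_queries=3, window_seconds=10.0)
results = [limiter.allow("client-a", now=t) for t in (0.0, 1.0, 2.0, 3.0)]
print(results)  # fourth query in the window is rejected
print(limiter.allow("client-a", now=11.5))  # window has slid; allowed again
```

In practice such a limiter sits in front of the inference endpoint and is often combined with the anomalous-prediction monitoring mentioned above, so that clients probing the decision boundary can be throttled more aggressively.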