A major challenge in using machine learning for cybersecurity is that adversaries are ubiquitous, yet the vast majority of machine learning solutions are not designed with adversaries in mind. The past few years of work have led to considerable progress on defending machine learning against data poisoning, where the adversary is allowed to insert a few training examples in order to fool machine learning methods. However, most of this work is in the image domain, and the threat models assume unrealistic adversaries. Our goals include building machine learning methods that are robust to realistic adversaries in the cybersecurity domain.
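To make the data-poisoning threat model concrete, the following is a minimal, hypothetical sketch (not from the original text): an adversary inserts a single mislabeled training point into a toy 1-D nearest-centroid classifier, shifting its decision boundary so that a benign sample is misclassified. The classifier, data values, and poison point are all illustrative assumptions.

```python
# Toy illustration of data poisoning via label flipping (hypothetical
# example): one mislabeled training point shifts the decision boundary
# of a simple 1-D nearest-centroid classifier.

def centroid_threshold(data):
    """Fit a 1-D nearest-centroid classifier; return its decision threshold.

    `data` is a list of (feature, label) pairs with labels 0 or 1.
    Points above the threshold are assigned class 1.
    """
    xs0 = [x for x, y in data if y == 0]
    xs1 = [x for x, y in data if y == 1]
    c0 = sum(xs0) / len(xs0)
    c1 = sum(xs1) / len(xs1)
    return (c0 + c1) / 2  # midpoint between the two class centroids

# Clean training set: class 0 clusters near 1, class 1 near 9.
clean = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]
threshold_clean = centroid_threshold(clean)  # centroids 1.0 and 9.0 -> 5.0

# The adversary inserts one point on the class-0 side but labeled 1,
# dragging the class-1 centroid (and hence the boundary) downward.
poisoned = clean + [(1.0, 1)]
threshold_poisoned = centroid_threshold(poisoned)  # centroid drops to 7.0 -> 4.0

# A benign sample at x = 4.5 is class 0 under the clean model
# (4.5 < 5.0) but is misclassified as class 1 after poisoning (4.5 > 4.0).
print(threshold_clean, threshold_poisoned)
```

The sketch shows why even a small number of attacker-controlled training examples can matter: the learner averages over all data it is given, so a few adversarial points move the model in a direction the attacker chooses.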