As machine learning becomes a core component of malware detection, new risks emerge from adversarial manipulation. This talk explores how ML-based malware classifiers respond to targeted feature modifications. To assess their robustness experimentally, several models were trained to distinguish malicious from benign files and then tested against adversarially altered samples. The presentation covers data preparation, attack simulation, and a comparative analysis of how the models hold up under adversarial conditions.
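
As a rough illustration of the evaluation loop described above, the sketch below trains a classifier on synthetic feature vectors and measures how its detection rate drops once the most influential features of malicious samples are flipped. Everything here is an assumption for illustration: the synthetic data, the choice of RandomForestClassifier, and the naive feature-flipping attack stand in for the talk's actual dataset, models, and attack simulation.

```python
# Minimal sketch: train on labeled feature vectors, then measure how many
# malicious samples evade detection after targeted feature modifications.
# All data is synthetic; this is not the talk's actual setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for static file features (e.g. binary indicators of
# imported APIs or section properties).
X = rng.integers(0, 2, size=(2000, 50)).astype(float)
y = (X[:, :10].sum(axis=1) > 6).astype(int)  # 1 = malicious, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def evade(x, model, k=5):
    """Naive targeted attack: flip the k features the model weights most
    heavily, a crude proxy for adversarial feature modification."""
    x = x.copy()
    top = np.argsort(model.feature_importances_)[::-1][:k]
    x[top] = 1.0 - x[top]
    return x

malicious = X_test[y_test == 1]
adv = np.array([evade(x, clf) for x in malicious])

print("detection rate, clean:      ", clf.predict(malicious).mean())
print("detection rate, adversarial:", clf.predict(adv).mean())
```

The gap between the two printed rates is the kind of robustness metric the comparative analysis would report per model; a real attack would additionally constrain the modifications so the altered file remains valid and malicious.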