...In this work, we propose a method to mitigate the risk of code leakage when using LLM-based code assistants. CodeCloak is a novel deep reinforcement learning agent that manipulates prompts before they are sent to the code assistant service. CodeCloak aims to achieve two contradictory goals: (i) minimizing code leakage, while (ii) preserving relevant and useful suggestions for the developer. Our evaluation, employing the StarCoder and Code Llama LLM-based code assistant models, demonstrates CodeCloak's effectiveness on a diverse set of code repositories of varying sizes, as well as its transferability across different models. We also designed a method for reconstructing the developer's original codebase from the code segments sent to the code assistant service (i.e., the prompts) during the development process, in order to thoroughly analyze code leakage risks and evaluate CodeCloak's effectiveness under practical development scenarios.

By: Amit Finkman

Full Abstract and Presentation Materials: https://ift.tt/v7dRS4J
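To make the described flow concrete, below is a minimal sketch of the kind of prompt-manipulation loop the abstract outlines: a policy rewrites each prompt before it reaches the code assistant, balancing leakage against suggestion usefulness. All names here (CodeCloakAgent, fake_assistant, the action set, and the reward shaping) are illustrative assumptions, not the paper's actual implementation or API.

```python
"""Toy sketch of a CodeCloak-style prompt-manipulation loop.

A learned policy would choose how to rewrite each prompt; here the
'policy' is a random choice over a few simple manipulations, and the
assistant is a local stub standing in for StarCoder / Code Llama.
"""
import random
import re
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class StepResult:
    manipulated_prompt: str
    suggestion: str
    reward: float


class CodeCloakAgent:
    """Hypothetical agent whose actions are simple prompt rewrites."""

    ACTIONS: List[str] = ["rename_identifiers", "drop_comments", "keep"]

    def __init__(self, seed: int = 0) -> None:
        self.rng = random.Random(seed)

    def choose_action(self, prompt: str) -> str:
        # A trained RL policy would condition on the prompt; we sample.
        return self.rng.choice(self.ACTIONS)

    def manipulate(self, prompt: str, action: str) -> str:
        if action == "rename_identifiers":
            # Replace longer user-defined names with a generic placeholder.
            return re.sub(r"\b[a-z_][a-z0-9_]{3,}\b", "var", prompt)
        if action == "drop_comments":
            return "\n".join(line for line in prompt.splitlines()
                             if not line.lstrip().startswith("#"))
        return prompt

    def step(self, prompt: str,
             assistant: Callable[[str], str]) -> StepResult:
        action = self.choose_action(prompt)
        manipulated = self.manipulate(prompt, action)
        suggestion = assistant(manipulated)
        # Toy reward: reward getting any suggestion back, penalize the
        # amount of original content that still reaches the service.
        leaked_tokens = len(set(manipulated.split()) & set(prompt.split()))
        usefulness = 1.0 if suggestion else 0.0
        reward = usefulness - 0.01 * leaked_tokens
        return StepResult(manipulated, suggestion, reward)


def fake_assistant(prompt: str) -> str:
    """Stand-in for the remote code assistant: echoes a completion."""
    return "# suggested completion for:\n" + prompt[:40]


if __name__ == "__main__":
    agent = CodeCloakAgent()
    prompt = "def load_customer_records(path):\n    # parse the CSV\n    ..."
    result = agent.step(prompt, fake_assistant)
    print(result.manipulated_prompt)
    print(result.reward)
```

In the paper's setting, the reward would come from measuring actual leakage of the developer's codebase (e.g., via the reconstruction method mentioned above) and the quality of the returned suggestions, rather than the token-overlap proxy used in this sketch.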
source https://www.youtube.com/watch?v=Nv2rRkVvzKY