Protect Your Prompts: Protocols for IP Protection in LLM Applications
van Wyk, M. A., Bekker, M., Richards, X. L., Nixon, K. J.
arXiv.org Artificial Intelligence
LLMs, including those in the generative pre-trained transformer (GPT) family, are known to exhibit emergent properties [1]. Emergent behavior in such complex nonlinear adaptive systems manifests in a seemingly stochastic manner [3], which directly affects an LLM's responses to instructions for performing tasks given in the form of prompts. Consequently, querying an LLM repeatedly with the same prompt may yield different responses, while a tweak to a prompt may result in either no difference in the LLM's response or a significant change. In critical applications, for example in assistive surgery [4], a substantial amount of time is spent ensuring that the performance achieved is within an acceptable tolerance. Therefore, the monetary value of a well-crafted prompt (regardless of the field), painstakingly developed through trial and error over perhaps several hundred iteratively phrased versions, and possibly also exploiting a particular LLM's architecture, will be considerable [5]. This has led to the emergence of a new field, called prompt engineering, which refers to the art and science of engineering incantations that evoke the desired response from an LLM [6, 7]. This underscores a simple fact: since the end of 2022, prompts themselves have become valuable. A prompt thus does not merely represent a user's desire for an artifact that an LLM might produce; instead, it stands as a proxy for the artifact it will "unlock".
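One common source of the response variability described above is sampled decoding: at each step the model draws the next token from a temperature-scaled softmax over its logits rather than always taking the most likely token. The minimal sketch below (an illustration with made-up logits, not any particular LLM's decoder) shows why the same prompt can yield different outputs under sampling but identical outputs under greedy decoding.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token index from a temperature-scaled softmax over logits.

    temperature == 0 is treated as greedy (deterministic) decoding.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

# Illustrative (hypothetical) logits for the next token after a fixed prompt.
logits = [2.0, 1.8, 0.5]

# Greedy decoding: the same prompt always produces the same token.
greedy = [sample_token(logits, 0, random.Random(i)) for i in range(50)]

# Sampled decoding: the same prompt can produce different tokens.
rng = random.Random(0)
sampled = [sample_token(logits, 1.0, rng) for _ in range(50)]
```

Under greedy decoding every draw selects the same (highest-logit) token, whereas the sampled draws spread across several tokens, mirroring the repeated-query variability the abstract describes.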
Jun-9-2023