Agent instructions

A crucial setting of an agent is its instruction, or prompt: the defining description of how the agent should behave and what it should do.

In general, "prompt engineering" is a concept of the age of large language models (LLMs) and refers to the skill set needed to configure an LLM to behave the way you, the developer, want. Many resources offer guidance on how best to prompt an LLM. Be aware that the optimal prompt is tuned to the specific LLM that is being used.

Prompts/instructions are versioned in Neptune DXP - Open Edition. That is, whenever you edit the instruction, you save a new version of it to the database. This is done to ensure traceability of AI outputs.

By selecting Show all Prompts, you can review the different prompt versions and revert to an older version if necessary.

Package and deployment

If you change the package of an agent, the change trickles down to all prompts assigned to that agent: they move to the new package as well.

When you deploy an agent to a remote system, only the active prompt is included. Inactive prompts are not included in the deployment.

Variables

You can use variables when you write the prompt. They are filled in at runtime: each {{variableName}} placeholder is replaced with the variable's value.
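
Conceptually, this substitution works like simple template rendering. The sketch below only illustrates the idea and is not the Open Edition implementation; the function name and the regular expression are assumptions made for this example.

    // Minimal sketch of placeholder substitution, assuming prompts use the
    // {{variableName}} syntax described above. Not the platform's actual code.
    function resolvePrompt(template: string, variables: Record<string, string>): string {
      return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
        name in variables ? variables[name] : match // unknown placeholders stay untouched
      );
    }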

System variables

A number of system variables are provided, mostly drawn from the requesting user's data. You can also insert {{currentTime}} to give the AI knowledge of the current time.
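
Building on the sketch above, a system variable such as {{currentTime}} can be pictured as a value the platform fills in for you; the timestamp format shown here is only an example and may differ from what the platform produces.

    // Hypothetical usage: the platform supplies system variables such as currentTime.
    const prompt = resolvePrompt(
      "The current time is {{currentTime}}.",
      { currentTime: new Date().toISOString() } // the actual format may differ
    );
    // e.g. "The current time is 2025-03-14T09:30:00.000Z."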

Custom variables

You are free to create a custom variable and insert it into the prompt. At runtime, you must provide a value for that variable.
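
Continuing the sketch, a custom variable (here the hypothetical customerName) is resolved the same way, except that supplying its value at runtime is your responsibility.

    // Hypothetical custom variable: its value must be passed in when the agent runs.
    const instruction = resolvePrompt(
      "Address the user as {{customerName}} in your answer.",
      { customerName: "Jane Doe" }
    );
    // If no value is provided, the placeholder remains unresolved in the prompt.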