Models
With the Model tool in the Cockpit, you can set up references to AI models. These AI models need to be deployed externally to Neptune DXP - Open Edition.
| While Neptune DXP - Open Edition supports a range of established providers, such as OpenAI, AWS, and Google, you can also deploy your own language model in your network and use it. In that case, either use the OpenAI model type (many deployment providers expose a compatible API response structure) or use Ollama. |
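To illustrate why the OpenAI model type often works for self-hosted models: tools such as Ollama and vLLM expose an endpoint that follows the OpenAI chat-completions request shape. The sketch below builds such a request body; the endpoint URL and model name are placeholders for your own deployment, not values from Neptune.

```python
import json

# Hypothetical endpoint of a self-hosted model server. The path
# /v1/chat/completions follows the OpenAI API convention that many
# local deployment tools (e.g. Ollama, vLLM) also expose.
ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for an OpenAI-style chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# "llama3" is a placeholder model name for a local deployment.
payload = build_chat_request("llama3", "Summarize this ticket in one sentence.")
print(json.dumps(payload))
```

Because the request and response structure matches OpenAI's, a model reference of type OpenAI can usually point at such an endpoint unchanged.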
Set up an AI model
When setting up an AI model, you need to provide the following information:
- Input Type: The data input type of the model. If you use a well-known LLM such as GPT-* from OpenAI, the input type will be Text.
- Output Type: The data output type of the model. In the above example, it would again be Text. However, you may also set up an embedding model, which creates vector output from text input.
- Provider-specific details: Depending on the provider, you must add details such as an API token, an endpoint, or a model name.
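The fields above can be pictured as a small configuration record. The field names below are purely illustrative placeholders, not Neptune's actual configuration schema; they only show how input type, output type, and provider-specific details fit together.

```python
# Purely illustrative model reference; all field names are placeholders.
model_reference = {
    "name": "my-chat-model",
    "inputType": "Text",        # what the model consumes
    "outputType": "Text",       # what the model returns
    "provider": {
        "type": "OpenAI",
        "apiToken": "<API_TOKEN>",                # provider-specific secret
        "endpoint": "https://api.openai.com/v1",  # provider endpoint
        "modelName": "gpt-4o-mini",               # model to call
    },
}
print(model_reference["inputType"], "->", model_reference["outputType"])
```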
| When setting up an embedding model, do not forget to set the vector dimensionality in the model settings. This lets you add an index on a vector column in the Table Definition tool for faster semantic search. |
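A short sketch of why the dimensionality setting matters: a vector index requires every stored vector to have the exact dimensionality declared for the embedding model, and semantic search then ranks rows by similarity between vectors of that size. The dimensionality of 4 below is only for brevity; real embedding models typically return hundreds or thousands of dimensions (e.g. 1536 for OpenAI's text-embedding-3-small).

```python
import math

# Placeholder dimensionality; real embedding models use far more dimensions.
DIM = 4

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Rank candidates by the angle between their embedding vectors."""
    if len(a) != DIM or len(b) != DIM:
        # An indexed vector column rejects vectors whose size does not
        # match the dimensionality configured in the model settings.
        raise ValueError(f"expected {DIM}-dimensional vectors")
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Identical vectors have similarity 1.0.
print(cosine_similarity([1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0]))
```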