Updated by Dominic Bräunlein about 2 months ago
**As** an OpenProject admin
**I want to** be able to connect OpenProject to a LLM
**so that** I can use the LLM for a multitude of new features
**Acceptance criteria**
* In "Administration" settings there is a menu item called "AI"
* When the user clicks the menu item "AI", they enter the AI settings page, which contains a tab menu. At the moment only one tab item is available, and its content is shown by default when entering the AI settings.
* The first (and only) tab is "LLM settings" which contains:
* Main content, a 2 step settings workflow
* "LLM connection" is the first step of the settings in the main content. This section is open by default when no settings have been entered yet. (The second section, "Model", cannot be set while no LLM server is defined.) The section contains:
* Displayed information: "The LLM server needs to support OpenAI-API compatible endpoints."
* Boolean: "Enabled" (deactivates the LLM server settings and all LLM related AI features)
* URL Input: "Base URL" (placeholder: [`https://example.com/v1/`](https://example.com/v1/))
* Question: Host URL or host?
* Text Input: "API key"
* Only visible when setting the API key
* When editing later, the saved API key is shown masked
* "Save and fetch models"-Button
* When clicked, a request to fetch the installed models from the LLM server is sent
* `GET` [`https://example.com/v1/models`](https://example.com/v1/models)
* On error, the settings are not saved and the following possible errors are shown inline
* "SSL error": an inline error is shown underneath the "Base URL" input
* "Host could not be reached": an inline error is shown underneath the "Base URL" input
* "Invalid API key": an inline error is shown underneath the "API key" input
* Other (unexpected) errors are shown as a toast message.
* Info: This list is not complete yet
* On success
* Settings are saved, the first section is closed, and the second section "Model" is opened
* The models fetched by the request are shown in the select in the "Model" section
* Select Models
* In the next step the admin needs to select a model per capability.
* The different capabilities are:
* Embedder (required, but behind an additional feature flag: ai\_search)
* Reranker (required, but behind an additional feature flag: ai\_search)
* Generative (required)
* Each model select shows a hint that helps admins understand what they select. e.g.
* _Used by: Semantic search and duplicate detection_
* _Used by: Ask AI in documents_
* For each capability there is one select to choose a model
* Lists all models fetched from the LLM server
* There is no default model selected automatically
* The admin has to select a model
* "Save"-Button
* If no model is selected, show an inline error on the model select: Please select a model
* If a model was selected
* the LLM settings are saved
* a toast is shown mentioning the successful setup of the LLM server (?)
* The sidebar is shown
* During edit mode (after the initial setup)
* there is an additional "refresh" button.
* This button will refresh the list of models.
* The list of models should be persisted and only overwritten if the refresh was successful.
* Sidebar
* The sidebar is only shown after the LLM has been successfully set up
* The sidebar contains a "Test connection"-button
* When the user clicks the button, two requests are made
* List models
* `GET` [`https://example.com/v1/models`](https://example.com/v1/models)
* A chat completion test
* `POST` [`https://example.com/v1/chat/completions`](https://example.com/v1/chat/completions)
* `{ "model": "SELECTED_MODEL", "messages": [ {"role": "system", "content": "You are a helpful assistant that answers with one word."}, {"role": "user", "content": "Hi."} ] }`
* If the requests were successful, a success message is shown underneath the button
* If an error occurred the error is shown underneath the button
* The sidebar additionally contains a "Status and errors" section
* The status of the last request should be shown with timestamp
* The error message of the last error should be shown with timestamp
* When the LLM server settings are enabled and set up, BlockNote has basic AI features enabled.
* When disabled, those features are not available in BlockNote.
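The inline-error mapping for the "Save and fetch models" request could be sketched as follows. This is a minimal illustration, not the actual OpenProject implementation; `classify_fetch_error` and its return shape are hypothetical names chosen for the sketch.

```python
# Hypothetical sketch: map a failed GET {base_url}/v1/models to the form
# input that should show the inline error. Anything unrecognized falls
# through to a generic toast (field = None), matching the spec.
import ssl
from urllib.error import URLError


def classify_fetch_error(exc=None, status=None):
    """Return (field, message) for a failed model fetch.

    field is "base_url", "api_key", or None (None = show a toast).
    """
    # SSL problems may surface directly or wrapped in a URLError.
    if isinstance(exc, ssl.SSLError) or (
        isinstance(exc, URLError) and isinstance(exc.reason, ssl.SSLError)
    ):
        return ("base_url", "SSL error")
    # Any other network-level failure: the host could not be reached.
    if isinstance(exc, OSError):
        return ("base_url", "Host could not be reached")
    # The server answered but rejected the credentials.
    if status == 401:
        return ("api_key", "Invalid API key")
    return (None, "Unexpected error")
```

Note the ordering: `ssl.SSLError` is itself a subclass of `OSError`, so the SSL check has to run before the generic connection check.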
**Technical notes**
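A sketch of the per-capability model validation from the "Select Models" step. The capability names and the `ai_search` feature flag come from the acceptance criteria; the function itself is illustrative, assuming selections arrive as a simple capability-to-model mapping.

```python
# Hypothetical sketch: every required capability select must have a model;
# Embedder and Reranker are only required behind the ai_search flag.
def validate_model_selection(selections, ai_search_enabled):
    """Return {capability: error_message} for each required select left empty."""
    required = ["generative"]
    if ai_search_enabled:
        required += ["embedder", "reranker"]
    return {
        cap: "Please select a model"
        for cap in required
        if not selections.get(cap)
    }
```

An empty result means the "Save" button may proceed; a non-empty result maps directly onto the inline errors on the corresponding selects.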
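The "Test connection" chat-completion probe could be built as below. The endpoint path and message payload come from the acceptance criteria; `build_chat_probe` is a hypothetical helper, and sending the request (plus the preceding `GET /v1/models`) would be handled by whatever HTTP client the implementation uses.

```python
# Hypothetical sketch: construct the POST {base_url}/chat/completions
# connectivity test from the spec's example payload.
import json


def build_chat_probe(base_url, model):
    """Return (url, json_body) for the chat-completion test request."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant that answers with one word.",
            },
            {"role": "user", "content": "Hi."},
        ],
    }
    return url, json.dumps(body)
```

Normalizing the base URL with `rstrip("/")` keeps the probe working whether the admin entered the Base URL with or without a trailing slash.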
**Permissions and visibility considerations**
* Only administrators should be able to see or edit the AI settings page
* Everyone with access to edit Documents should have access to AI features in BlockNote
**Translation considerations**
* _Key terms and phrases in the key languages_
**Out of scope**
* Enable AI features per project
* Enable AI features for specific users
* Usage counter and budget limits