Updated by Dominic Bräunlein 2 days ago
# User Problem
* **OpenProject (MVP phase):** We need LLM infrastructure to build and test AI-powered features with production data. First priority is figuring out what model to use and how many users a single server can handle.
* **OpenProject (production readiness):** The infrastructure must be production-grade for our SaaS and on-premise deployments, including scalability as demand grows.
* **On-premise administrators:** Need to deploy and maintain AI capabilities within their own infrastructure and compliance boundaries.
* **Regulated-industry customers:** Operate under strict DSGVO and sector-specific regulation. Any AI that routes data externally is a non-starter.
* **Organizations with own AI infrastructure:** Have access to their own AI data centers and should be able to bring and run their own models with OpenProject.
## Problem
* Users and organizations want AI features in OpenProject, but they would need to send data to third-party LLM providers. For customers handling sensitive project data, this is unacceptable.
* Self-hosted LLMs are either underperforming or hard to set up (or both). This pushes customers toward external providers, leaving them to either accept the data-sovereignty risk or go without AI.
* Sending data to third-party LLM hosters frequently violates DSGVO and may conflict with EU AI Act requirements. Customers carry real legal risk.
* OpenProject has no LLM infrastructure.
<br>
## Pain
* **Shadow AI** – Users copy-paste project data into ChatGPT or similar tools, leaking sensitive information without IT oversight.
* **No easy-to-set-up self-hosted AI option in the market** – On-prem and regulated customers have no path to AI features that respects their data sovereignty.
# Business Case
## Reach
_About how many users, customers or potential customers currently have this problem? (Low / Worst Case)_

Especially important for customers who chose OpenProject because of data control.
## Impact
_Among relevant customers or prospects, how much value do they get from a comprehensive solution to this problem? (Conservative case)_

AI assistance is a decision driver in 2026 for project management tools. A fully self-hostable, DSGVO/EU AI Act-compliant AI stack is a genuine differentiator that reinforces why customers chose us. It also unblocks every future AI feature.
## Confidence
_What are the top risk factors that could inhibit our ability to deliver this solution? Please consider how we can mitigate these risks._

Key risks:
* Model and inference stack selection requires empirical testing.
* On-prem hardware constraints: customers may lack access to GPUs or the funds for them.
* EU AI Act compliance still evolving.
* Real-time latency requirements for interactive use cases.
## Urgency and Priority
_What is the relative priority of this opportunity in your backlog? What tradeoffs must you make? Is there a hard deadline or could this wait?_

No AI feature ships without it. Other open source products ship AI the wrong way via third-party providers. We have a window to be the first open-source project management tool that does this right.
## Solution
<br>
_How do we solve the user's problem? What is our "pain killer"? What must we achieve in the first version of the solution in order to achieve value for the user?_

#### Core Principle
All AI processing stays within the customer's controlled environment. No project data is sent to third-party LLM providers.
#### Phase 1: Research & Benchmarking Spike
* Test candidate open-weight models on our core tasks:
  * text rewriting and translation
  * summarization
  * intent classification
  * real-time assistance
* Model constraints:
  * open-weight
  * license-compatible with commercial use
  * only as large as needed
  * transparent training data provenance
  * ethical and environmental footprint considered
* Test inference runtimes (e.g. vLLM, llama.cpp) for streaming, K8s compatibility, and possibly CPU-only viability
* Benchmark on reference hardware
* **Result should be a decision document with benchmarks, selected stack, hardware requirements, license review, ethical/environmental assessment.**
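To make the spike concrete, the benchmarking harness could start as small as the sketch below. It assumes a candidate model is served behind an OpenAI-compatible endpoint (both vLLM and llama.cpp's server expose one); `BASE_URL`, `MODEL`, and the percentile helper are placeholder assumptions for the spike, not decided values.

```python
# Sketch: micro-benchmark for a candidate model behind an
# OpenAI-compatible endpoint. BASE_URL and MODEL are assumptions
# for a local test setup, to be replaced by the spike's real config.
import json
import time
import urllib.request

BASE_URL = "http://localhost:8000/v1"   # assumption: local test server
MODEL = "candidate-model"               # assumption: model id under test


def time_completion(prompt: str) -> float:
    """Send one chat completion and return wall-clock latency in seconds."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        resp.read()
    return time.perf_counter() - start


def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile over latency samples; enough precision
    for a decision document."""
    ranked = sorted(samples)
    idx = max(0, min(len(ranked) - 1, round(p / 100 * len(ranked)) - 1))
    return ranked[idx]
```

Running `time_completion` in a loop per task type and reporting p50/p95 via `percentile` would feed directly into the benchmark table of the decision document.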
#### Phase 2: Infrastructure implementation
* Model-agnostic inference layer behind an OpenAI-compatible API.
* Helm charts for GPU and CPU-only deployments, integrated into the existing Helmfile setup (not yet decided).
* API gateway with per-tenant key management, rate limiting, usage logging.
  * How keys are managed still needs to be clarified.
  * We aim to avoid building a full UI for this unless we can reuse something that already exists.
* Guardrails middleware, e.g. for prompt-injection detection with an audit log (needs clarification). This should be configurable per deployment.
* Streaming support for real-time use cases.
* Documentation with deployment guide, hardware specs, compliance checklist. <br>
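As a sketch of how the guardrails bullet above could work, the following shows a deployment-configurable prompt check that records every decision for the audit log. The phrase list, config fields, and all names here are illustrative assumptions, not a settled design.

```python
# Sketch: deployment-configurable guardrail with audit logging.
# Phrase matching is a naive stand-in for real prompt-injection
# detection; the structure, not the detector, is the point.
from dataclasses import dataclass


@dataclass
class GuardrailConfig:
    enabled: bool = True
    # Assumption: a simple blocklist; a real deployment would plug in
    # a proper classifier here.
    blocked_phrases: tuple[str, ...] = (
        "ignore previous instructions",
        "disregard your system prompt",
    )


@dataclass
class AuditEntry:
    tenant: str
    allowed: bool
    reason: str


def check_prompt(tenant: str, prompt: str, cfg: GuardrailConfig,
                 audit_log: list) -> bool:
    """Return True if the prompt may pass; always append an audit entry."""
    if cfg.enabled:
        lowered = prompt.lower()
        for phrase in cfg.blocked_phrases:
            if phrase in lowered:
                audit_log.append(AuditEntry(tenant, False, f"matched: {phrase}"))
                return False
    audit_log.append(AuditEntry(tenant, True, "ok"))
    return True
```

Keeping the config a plain dataclass makes "configurable per deployment" cheap: each Helm values file (or tenant record) can carry its own `GuardrailConfig` without any UI.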
## Out of Scope for the MVC
_What should NOT be in the minimal viable change, and can be considered for future iterations? Why? Please order them by importance._
* Fine-tuning
* Multi-model routing
* RAG and semantic search use cases ...
## Differentiation
_What do you believe will differentiate us from the current experience or the competitive experiences?_

* **Data sovereignty:** The model runs inside our or the customer's infrastructure. No third-party LLM hoster in the loop.
* **EU AI Act / DSGVO compliance by design**: Guardrails, audit logging, transparency built in from day one.
* **Right-sized, not over-sized:** The smallest model that delivers, keeping hardware realistic and energy consumption honest, with transparent energy consumption reporting.
## Next iteration
_What is the next solution that would allow us to release meaningful customer value quickly?_
* UI for token, guardrail and account management ...
* Multi-model usage <br>
# Launch and Growth
## Measures
_How will you know you solved the problem? Please list measurable, quantitative indicators (preferred) or qualitative ways you plan on assessing the solution?_
* ...
## Messaging
_If you were to write a press release, how would you describe the value to customers?_
<figure class="table op-uc-figure_align-center op-uc-figure"><table class="op-uc-table"><tbody><tr class="op-uc-table--row"><th class="op-uc-table--cell op-uc-table--cell_head"><p class="op-uc-p">Headline</p></th><td class="op-uc-table--cell"><p class="op-uc-p">OpenProject enables the secure use of AI features by providing a safe LLM infrastructure for SaaS customers and tested, easy-to-install self-hosted solutions for on-prem customers.</p></td></tr><tr class="op-uc-table--row"><th class="op-uc-table--cell op-uc-table--cell_head"><p class="op-uc-p">First Paragraph</p></th><td class="op-uc-table--cell"><p class="op-uc-p"><br data-cke-filler="true"></p></td></tr><tr class="op-uc-table--row"><th class="op-uc-table--cell op-uc-table--cell_head"><p class="op-uc-p">Customer Quote</p></th><td class="op-uc-table--cell"><p class="op-uc-p"><br data-cke-filler="true"></p></td></tr></tbody></table></figure>
## Go to market
_How are you planning on getting this into users' hands?_