Local LLMs in companies
Local LLMs enable companies to use AI while ensuring full data protection compliance. Leverage the potential of your internal data with complete data sovereignty – learn how in this blog post.

It’s no surprise: The use of artificial intelligence (AI) in German companies is on the rise. According to recent figures from the Federal Statistical Office, around 20% of companies now employ AI technologies, with larger companies adopting AI more frequently than smaller ones. The reasons preventing many businesses from integrating AI into their daily operations are clear:
Of the companies that have not yet implemented AI technologies, 18% have at least considered using them. When asked why they do not use AI, these companies cited a lack of knowledge (71%), uncertainty about legal implications (58%), and concerns about data protection and privacy (53%).
These figures highlight that the widespread adoption of AI is not just a matter of technical feasibility but also of awareness and trust-building. In this article, we aim to address some of these concerns and demonstrate how companies can harness the potential of AI technologies securely and in compliance with data protection regulations by deploying locally hosted LLMs. Many of the cited challenges already have practical solutions – especially when AI solutions are operated locally or within privacy-compliant infrastructures in Germany.
The role of data protection and legal security
Data protection and legal security are key aspects that companies must consider when implementing AI technologies. Simply entering trade secrets or personal data into services like ChatGPT is not a good idea. This is particularly relevant in Germany and Europe, where the General Data Protection Regulation (GDPR) imposes strict requirements. Businesses must ensure that any personal data processed by AI systems is protected in accordance with legal regulations.
One of the most effective ways to address concerns around data protection and security is by using locally implemented AI solutions. Running AI systems on in-house servers or within GDPR-compliant hosting services in Germany allows companies to maintain full control over their data. This not only reduces legal risks but also enhances data security and fosters trust.
That said, there are currently no open-source language models that can be operated locally while fully matching the capabilities of ChatGPT and other commercial offerings. However, there are still numerous ways to use language models while adhering to German data protection standards and maintaining data sovereignty. One of these approaches is the deployment of local LLMs.
What is an LLM?
An LLM (Large Language Model) is an AI system specifically trained to understand and generate language. It relies on vast text datasets to recognize patterns, structures, and meanings in language. Through deep learning, the model learns to establish connections between words and sentences, enabling it to provide precise, context-aware responses to user inputs.
LLMs have a wide range of applications: They power chatbots, automated translation systems, text generation, and complex analyses of large text corpora. However, they also have limitations, such as difficulties in interpreting ambiguous language or avoiding biases present in the training data.
How does an LLM behave?
There are a few characteristics that apply to all LLMs, regardless of whether the model is run locally or used as part of a public AI service:
- LLMs have a cut-off date and do not have access to real-time information.
- LLMs can hallucinate – sometimes in a very convincing way.
- LLMs are trained on publicly available information and can only draw on what they have learned from these sources.
The fact that LLMs can generate incorrect or misleading information means that AI should not be used without a basic level of knowledge within the team. Awareness of both the potential and the limitations of language models is essential – this includes data protection concerns as well as the possibility of inaccurate outputs. With proper team training, careful usage, and a privacy-friendly technical setup, nothing stands in the way of successfully integrating language models into a business.
There are now many large language models developed by various organizations and research institutions. The most well-known is OpenAI’s GPT series, which powers ChatGPT. Other major players include Meta (Llama 3), Google (Gemma), Microsoft (Phi-3), and DeepSeek, all of which have developed their own language models. Some of these models are freely available, allowing businesses and individuals to use the trained models for their own purposes.
Solution approach: local LLMs with makandra.ai as an example
At the end of 2024, we launched the makandra.ai project, developing our own AI assistant based on language models hosted locally via Ollama. Our goal: to connect our internal knowledge base, makandra cards, with a language model via a chat interface.
makandra cards encapsulates our way of working and our processes, including publicly available best practices and tips on web application development and DevOps – our core expertise. With makandra.ai, we wanted to make this vast knowledge base of thousands of cards even more accessible and useful for coding and client projects.
The key advantage: Full data control. A crucial aspect of this setup is that our internal information does not need to be trained into the model – it remains entirely on our own servers. Depending on the use case, we run different language models via Ollama.
This is how it works: Internal company data is made available to the LLM using Retrieval-Augmented Generation (RAG). In this process, embedding models convert existing documents – such as our knowledge base – into vectors that are stored in a vector database, where semantically similar documents end up close together.
When a user queries makandra.ai, the system retrieves the most relevant documents from the vector database and incorporates them into the prompt. This approach enables secure use of sensitive, internal information within an LLM.
Additionally, this method compensates for the cut-off date limitation mentioned earlier, as the LLM effectively gains access to up-to-date information.
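To make this concrete, here is a deliberately minimal sketch of the RAG flow described above, assuming the official ollama Python client with a locally pulled embedding model and chat model. The model names, the three example documents, and the in-memory list standing in for a real vector database are illustrative assumptions, not makandra.ai’s actual setup:

```python
# Minimal RAG sketch. Assumes the "ollama" Python client and locally
# pulled models; names and documents are illustrative placeholders.
import ollama
import numpy as np

documents = [
    "Use strong parameters in Rails controllers to whitelist attributes.",
    "Run database migrations before deploying new application code.",
    "Prefer Capybara selectors that mirror what the user sees.",
]

def embed(text: str) -> np.ndarray:
    """Turn a text into a vector using a local embedding model."""
    response = ollama.embeddings(model="nomic-embed-text", prompt=text)
    return np.array(response["embedding"])

# Index every document once; a real setup would use a vector database.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents closest to the query by cosine similarity."""
    q = embed(query)
    scored = sorted(
        index,
        key=lambda pair: float(
            np.dot(q, pair[1]) / (np.linalg.norm(q) * np.linalg.norm(pair[1]))
        ),
        reverse=True,
    )
    return [doc for doc, _ in scored[:k]]

question = "What should I do before deploying?"
context = "\n".join(retrieve(question))

# Incorporate the retrieved documents into the prompt, then ask the local LLM.
answer = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system", "content": f"Answer using this internal context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(answer["message"]["content"])
```

A production setup would chunk documents, persist the vectors in a dedicated vector database, and handle errors – but the principle stays the same: retrieval happens locally, and only the retrieved snippets ever reach the model’s prompt.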
Best practices for implementing local AI solutions
For companies looking to deploy local AI solutions, careful planning is essential. A well-thought-out strategy should consider existing processes, security requirements, and cultural shifts within the organization. Successful implementation relies on close collaboration between IT teams, business departments, and external specialists. Below are some best practices to guide this process:
- Choosing the right model: Select a language model that can be adapted to internal requirements. Opt for providers that offer transparent training processes and GDPR-compliant solutions.
- Secure infrastructure: Build an IT environment that meets the highest security standards, ideally in compliance with German and European data protection laws.
- Seamless integration into existing workflows: Ensure that AI solutions are smoothly embedded into business processes. Close collaboration between IT teams, domain experts, and external partners is key to success.
- Training and change management: Support AI adoption with comprehensive training for employees. This helps unlock the full potential of the technology while reducing resistance and uncertainty.

LLMs with internal data: More than just language processing
To fully leverage an LLM’s potential, integrating internal, company-specific data is often essential. With a local LLM, this is easily achievable.
By incorporating RAG (Retrieval-Augmented Generation) with proprietary data – such as internal documents, emails, or best practices – an LLM becomes a true knowledge repository, tailored to your company’s specific terminology and needs. With a local deployment, all data remains within the company, ensuring that sensitive information – such as trade secrets or customer data – is securely processed without exposure to external cloud environments.
Integrating internal data not only improves response accuracy but also enables the LLM to address industry-specific challenges and complex business inquiries. Companies benefit from a custom AI solution that is finely tuned to their exact requirements.
Use cases for local LLMs
Local AI solutions offer a wide range of applications that go far beyond the simple text generation familiar from tools like ChatGPT:
- Knowledge management: Internal knowledge databases become easier to search and utilize with AI. Employees can quickly find relevant information, best practices, and solutions from past projects.
- Software development: By integrating best practices and code examples from internal repositories, developers can access proven solutions more quickly, helping to shorten development cycles.
- Customer service and email management: AI can automatically categorize and prioritize customer inquiries. For example, thousands of emails – including those containing sensitive data – can be scanned within seconds. The AI can assess the content to determine whether an issue is low priority, such as a minor hotel room cleanliness complaint, or whether immediate action is required. This ensures that messages are not only sorted efficiently but also that sensitive information is identified and handled appropriately – without leaving the company’s secure environment (see the sketch after this list).
- Data analysis and reporting: Sensitive data from various departments can be centrally analyzed. AI generates reports or provides actionable recommendations based on the latest company data – without the need to move data outside the secure server environment.
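As a hedged illustration of the customer service use case above: a few lines of Python, again assuming the ollama client and an illustrative model name, are enough to triage an email into priority classes without the message ever leaving the company’s own servers:

```python
# Hypothetical sketch: triaging a customer email with a locally hosted
# model via the "ollama" Python client. Model name and the two-label
# scheme are assumptions for illustration.
import ollama

def triage(email_body: str) -> str:
    """Ask a local LLM to classify an email as 'low' or 'urgent'."""
    response = ollama.chat(
        model="llama3",
        messages=[
            {
                "role": "system",
                "content": (
                    "You classify customer emails. "
                    "Reply with exactly one word: 'low' or 'urgent'."
                ),
            },
            {"role": "user", "content": email_body},
        ],
    )
    return response["message"]["content"].strip().lower()

print(triage("The minibar in room 204 was not restocked."))         # e.g. "low"
print(triage("Water is leaking through the ceiling of the lobby!"))  # e.g. "urgent"
```

The same pattern scales to whole mailboxes: because the model runs locally, even messages containing personal or contractual data can be classified in bulk without touching an external cloud service.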
Conclusion
The key to success in the digital future lies in the intelligent use of data – while maintaining data privacy and legal security. Local LLMs provide an ideal solution by combining AI technology with full control over company data. Businesses that invest early in privacy-compliant AI solutions not only gain a valuable competitive edge but also position themselves as industry leaders in the long run. Additionally, targeted AI applications can help bridge talent shortages and automate administrative processes efficiently.
Since every industry and company has unique requirements, tailored local AI solutions enable precise responses to specific challenges.
We invite you to explore the potential of local AI solutions with us – whether in a workshop, a detailed analysis of your current processes, or through personalized consulting. Let’s find out how our AI solutions can help you overcome challenges and take the next step toward a secure, data-driven future.