08. FVI Experts' Breakfast
Data Security, Cybersecurity, and On-Premise LLMs in Maintenance
Key Takeaways
Topic: "Is the Enemy Listening?" – AI Security, Data Protection & Local LLMs.
In this session, Marcel Hahn and Jens Reißenweber tackled the biggest obstacle to AI in medium-sized businesses: concerns about data. Together with data protection expert Ina, they showed that AI can be operated securely – if you know how.
- The 3 Security Levels: Marcel Hahn gave a live demonstration of three ways to use AI, depending on your level of paranoia (a code sketch follows after this list):
  - Level "Comfort": ChatGPT (OpenAI) – fast and powerful, but data potentially ends up in the USA (and in model training).
  - Level "Business": Azure OpenAI (Microsoft) – the same power, but GDPR-compliant in a German data center (Frankfurt), with no training on customer data.
  - Level "Fort Knox": Local LLMs (e.g., Llama 3 on your own server or laptop) – no data leaves the house, and it even runs offline (albeit slower and more expensive).
- RAG as a Data Protection Lever: Instead of training company knowledge into the model (expensive and insecure), RAG (Retrieval-Augmented Generation) is used. The model remains "dumb" (it knows nothing about the company) but is given temporary access to internal documents for each question (see the second sketch after this list). Advantage: the data stays securely in the company's own index, not in the AI model.
- Awareness is Key: Ina (data protection) emphasized that the problem is usually not the technology but the person who copies sensitive information into the wrong chat. Companies need clear guidelines (an "AI driving license") and secure tools instead of bans – banning AI only promotes shadow IT.
- Liability & the EU AI Act: A look at the regulation shows that AI systems will receive CE markings, just like machines. Anyone deploying "high-risk" AI (e.g., in safety components) must prove compliance. ADAM prepares for this with transparency and traceability (source citations).
- Executives are Responsible: AI is a leadership matter. If the boss uses AI and leads by example, the team follows – but the boss must also know and manage the risks.
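For the technically inclined, here is a minimal sketch of what the three security levels look like in code. It assumes the official openai Python SDK (v1+) and, for the "Fort Knox" level, a local Ollama server exposing its OpenAI-compatible endpoint; the endpoint URL, deployment name, and model names are illustrative, not ADAM's actual configuration.

```python
# Minimal sketch: the same question, sent to all three deployment levels.
# Assumes the openai Python SDK (>= 1.0) and, for level "Fort Knox",
# a local Ollama server serving Llama 3 on localhost.
import os
from openai import OpenAI, AzureOpenAI

QUESTION = "Summarize the maintenance report for pump P-101."

# Level "Comfort": public OpenAI API – data is processed in the USA.
comfort = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Level "Business": Azure OpenAI in an EU region (e.g., Frankfurt),
# with no training on customer data under Microsoft's data-processing terms.
business = AzureOpenAI(
    azure_endpoint="https://my-company.openai.azure.com",  # illustrative endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# Level "Fort Knox": local Llama 3 via Ollama – nothing leaves the machine.
fort_knox = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

for client, model in [
    (comfort, "gpt-4o"),
    (business, "my-gpt4o-deployment"),  # Azure deployment name, illustrative
    (fort_knox, "llama3"),
]:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(reply.choices[0].message.content)
```

The application logic is identical at every level; only the endpoint (and thus where the data travels) changes.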
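The RAG idea from the list above can be sketched just as briefly: the model is never trained on company data; instead, relevant snippets are retrieved from an internal index at question time and passed along with the prompt, together with their source, so the answer can be traced. The toy keyword retriever and the two documents below are purely illustrative – a real system would use a vector or search index.

```python
# Toy RAG sketch: retrieve internal snippets, cite their source,
# and only then send the prompt to the model of your chosen security level.
INDEX = [
    {"source": "maintenance_manual.pdf",
     "text": "Pump P-101 requires a seal inspection every 2,000 operating hours."},
    {"source": "incident_log_2024.xlsx",
     "text": "P-101 tripped twice in March due to cavitation at low suction pressure."},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Rank snippets by naive keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(INDEX, key=lambda d: -len(words & set(d["text"].lower().split())))
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble a prompt containing only the retrieved snippets plus citations."""
    snippets = retrieve(question)
    context = "\n".join(f"[{s['source']}] {s['text']}" for s in snippets)
    return (
        "Answer using ONLY the context below and cite the source in brackets.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How often does pump P-101 need a seal inspection?"))
# This prompt is what gets sent to the model; the company documents
# themselves stay in the company's own index.
```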
Classification: Security as a Selling Point
This episode provides ammunition for the conversation with the CISO (IT security) and the works council:
- ADAM = Enterprise-Grade Security: We don't tinker with ChatGPT. ADAM runs on Azure OpenAI infrastructure (the "Business" level) or, if desired, fully on-premise (the "Fort Knox" level). We guarantee: your data does not train a competitor's AI.
- Compliance "Out of the Box": ADAM's RAG architecture makes it traceable which document each answer is based on (source citation). This curbs hallucinations and creates the audit trail that the AI Act will demand.
- Sovereignty through Technology: We free the customer from dependency on US startups. Thanks to its flexible architecture, ADAM could in principle switch models tomorrow (e.g., to a European Mistral model) without the customer having to change anything – true technological sovereignty (sketched below).
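As a hedged illustration of what such model sovereignty can look like in practice (not ADAM's actual code): the application talks to one small wrapper, and the concrete provider is picked from configuration, so swapping GPT-4o for a Mistral or local model is a configuration change, not a code change. The ADAM_PROVIDER environment variable, endpoints, and model names below are hypothetical examples.

```python
# Sketch: decouple the application from the concrete model provider.
# Which backend is used is decided by configuration, not by code.
import os
from openai import OpenAI

# Hypothetical provider table – endpoints and model names are examples only.
PROVIDERS = {
    "openai":  {"base_url": None,                        "model": "gpt-4o"},
    "mistral": {"base_url": "https://api.mistral.ai/v1", "model": "mistral-large-latest"},
    "local":   {"base_url": "http://localhost:11434/v1", "model": "llama3"},
}

def get_client() -> tuple[OpenAI, str]:
    cfg = PROVIDERS[os.environ.get("ADAM_PROVIDER", "openai")]
    client = OpenAI(base_url=cfg["base_url"], api_key=os.environ.get("LLM_API_KEY", "none"))
    return client, cfg["model"]

# Application code never names a vendor:
client, model = get_client()
answer = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Which spare parts are critical for pump P-101?"}],
)
print(answer.choices[0].message.content)
```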
Conclusion: Security is not a showstopper for AI, but a question of architecture. ADAM delivers the most secure architecture on the market.