AI Service - Security and Data Protection

We’ve built our AI Service with a core principle in mind: your data stays yours.

Our goal is to deliver powerful AI without compromising the trust that makes enterprise service management possible. Legal, HR, Security, and IT teams can be confident: our AI respects boundaries—technical, legal, and ethical.

  • No Generative Training on Customer Data: We use multiple trusted pre-trained models (such as GPT-4, Gemini, AWS-hosted models, or Llama), but we never train those models on your information. If someone types a phone number or other sensitive information into a ticket, that data is never used to "teach" the model.

  • Isolated, Encrypted Storage: All data within our system—including AI-generated responses—is stored in isolated databases with encryption at rest, meaning it is never public, shareable, or accessible to third parties.

  • Customer-Specific Model Training: Some non-generative AI capabilities require customer-specific data. When we do train models on customer data (for example, classifier or regression models), we do so on a per-customer basis: using only that customer's data, and serving that customer only. For example, we train an escalation-suggestion model that considers configured SLAs and historical resolution times for different ticket types.
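To illustrate the per-customer isolation described above, here is a minimal, hypothetical sketch (not InvGate's actual implementation): each tenant gets its own model, trained only on that tenant's tickets and looked up by tenant ID at prediction time. The class names, tenant IDs, and the toy "mean resolution time per category" model are all illustrative assumptions.

```python
from collections import defaultdict


class EscalationModel:
    """Toy model: predicts expected resolution hours as the mean of
    historical resolution times for each ticket category."""

    def __init__(self):
        self.category_means = {}

    def fit(self, tickets):
        # tickets: list of (category, resolution_hours) pairs
        sums = defaultdict(float)
        counts = defaultdict(int)
        for category, hours in tickets:
            sums[category] += hours
            counts[category] += 1
        self.category_means = {c: sums[c] / counts[c] for c in sums}
        return self

    def predict(self, category, default=24.0):
        # Fall back to a default estimate for unseen categories
        return self.category_means.get(category, default)


class PerTenantRegistry:
    """One model per customer; training data never crosses tenants."""

    def __init__(self):
        self._models = {}

    def train(self, tenant_id, tickets):
        # Trained only on this tenant's tickets
        self._models[tenant_id] = EscalationModel().fit(tickets)

    def predict(self, tenant_id, category):
        # Served only to the tenant that owns the model
        return self._models[tenant_id].predict(category)


registry = PerTenantRegistry()
registry.train("tenant_a", [("network", 4.0), ("network", 6.0), ("hardware", 12.0)])
registry.train("tenant_b", [("network", 1.0)])
print(registry.predict("tenant_a", "network"))  # 5.0
print(registry.predict("tenant_b", "network"))  # 1.0
```

The key property is that `tenant_b`'s prediction is unaffected by `tenant_a`'s history: each model sees exactly one customer's data.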

Frequently Asked Questions

Is the AI Service compliant with data privacy regulations such as GDPR?

Yes, InvGate adheres to relevant data privacy laws and regulations, including GDPR, in alignment with industry best practices. Our AI services operate on Azure, Google Cloud, and AWS infrastructure, all of which are certified for handling data under GDPR and similar frameworks. For detailed information on our data privacy compliance and AI-powered features, please refer to the following documents: