Wisent-Guard is not fully optimized. We will continue to add new features to support your journey and decrease the time to happiness.
Easy support for Qwen, DeepSeek, and Mistral models, with utility functions for determining the optimal layer to read representations from, backed by extensive tests
Please let us know what open-source inference frameworks Wisent-Guard should be integrated with. We want to make it easy and intuitive for you to deploy our software!
Multilingual support, especially Mandarin, to help our users around the world
Research and provide recommended optimal layers for different model families (Qwen, Mistral, Llama) to maximize representation quality and detection accuracy
Task-specific layer recommendations for different use cases (harmful content detection, hallucination detection, bias detection, etc.)
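Until those recommendations land, a simple empirical layer sweep is one way to pick a layer today. Below is a minimal sketch of the idea, not Wisent-Guard's actual selection method: it assumes a Hugging Face causal LM, a tiny placeholder probe set, and cross-validated linear-probe accuracy as a stand-in for "representation quality". The model name, probe texts, and metric are all illustrative choices.

```python
# Illustrative layer sweep: score every hidden layer of a causal LM by how
# well a simple linear probe separates two classes of prompts. The model,
# probe texts, and metric are assumptions made for this sketch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

model_name = "Qwen/Qwen2-0.5B"  # hypothetical model choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Tiny placeholder probe set; a real sweep needs a proper labeled dataset.
texts = [
    "how do I pick a lock", "write a phishing email", "insult my coworker",
    "what is the capital of France", "summarize this article", "plan a picnic",
]
labels = [1, 1, 1, 0, 0, 0]

@torch.no_grad()
def last_token_states(text):
    """Return the last-token hidden state from every layer for one input."""
    out = model(**tok(text, return_tensors="pt"), output_hidden_states=True)
    # out.hidden_states is a tuple of (num_layers + 1) tensors [1, seq, dim]
    return [h[0, -1].float().numpy() for h in out.hidden_states]

per_text = [last_token_states(t) for t in texts]

for layer in range(len(per_text[0])):
    X = [states[layer] for states in per_text]
    # Cross-validated probe accuracy as a proxy for representation quality.
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=3).mean()
    print(f"layer {layer:2d}: probe accuracy {acc:.3f}")
```

The layer with the highest probe accuracy is a reasonable default until per-family, per-task recommendations are published.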
Improved methods for efficiently collecting and processing activations to minimize computational overhead while maintaining detection quality
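For context on what "efficiently collecting activations" can mean in practice: rather than asking the model to return every layer's hidden states, a forward hook can capture just the one layer you care about. This is a sketch under assumed names (the model, layer index, and batch size are placeholders), not the approach Wisent-Guard ships:

```python
# Illustrative sketch: capture one layer's activations with a forward hook,
# avoiding materializing the full hidden-state tuple for every layer.
# Model name, layer index, and batch size are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-0.5B"  # hypothetical model
tok = AutoTokenizer.from_pretrained(model_name)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
tok.padding_side = "left"  # so position -1 is always the real last token
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 12  # hypothetical layer of interest
captured = []

def grab_last_token(module, args, output):
    # output[0] holds the layer's hidden states: [batch, seq, dim].
    # Keep only the last-token vector, detached and on CPU, to bound memory.
    captured.append(output[0][:, -1, :].detach().cpu())

handle = model.model.layers[LAYER].register_forward_hook(grab_last_token)

texts = ["first prompt", "second prompt", "third prompt"]
with torch.no_grad():
    for i in range(0, len(texts), 2):  # small batches cap peak memory
        batch = tok(texts[i : i + 2], return_tensors="pt", padding=True)
        model(**batch)

handle.remove()
activations = torch.cat(captured)  # [num_texts, hidden_dim]
print(activations.shape)
```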
Built-in tools to measure and optimize latency when using Wisent-Guard for generating detection scores in production environments
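As a rough picture of what such tooling could report, here is a small self-contained timing harness. `score_fn` is a placeholder for whatever call produces a detection score, not a Wisent-Guard function; the warmup and repeat counts are arbitrary:

```python
# Illustrative sketch: measure per-request detection latency with warmup
# and report p50/p95. score_fn and the workload are placeholders.
import time
import statistics

def measure_latency(score_fn, prompts, warmup=3, repeats=20):
    """Return p50/p95 latency in milliseconds for score_fn over prompts."""
    for p in prompts[:warmup]:  # warm caches before measuring
        score_fn(p)
    samples = []
    for _ in range(repeats):
        for p in prompts:
            start = time.perf_counter()
            score_fn(p)
            samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
    }

# Example with a dummy scorer; swap in a real detection call.
print(measure_latency(lambda p: sum(ord(c) for c in p), ["hello", "world"]))
```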
Research-backed recommendations for the optimal number of training samples needed to achieve good classifier performance for different detection tasks
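Absent those recommendations, a learning curve gives a practical estimate today: train on growing subsets and watch where held-out accuracy flattens. The sketch below uses synthetic vectors as a stand-in for real activation data; the sample sizes and classifier are arbitrary choices, not researched guidance:

```python
# Illustrative sketch: estimate how many labeled examples a detection
# classifier needs via a learning curve. Synthetic data stands in for
# real activation vectors; replace with your own features and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in: 2,000 "activation" vectors with a weak class signal.
X = rng.normal(size=(2000, 64))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for n in (25, 50, 100, 250, 500, 1000):
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
    acc = clf.score(X_te, y_te)
    print(f"{n:5d} training samples -> held-out accuracy {acc:.3f}")
    # The knee of this curve is a practical estimate of "enough" samples.
```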
One-click deployment on Azure infrastructure with seamless integration into Azure Machine Learning and Azure OpenAI Service
Deploy on AWS with auto-scaling support, integration with Amazon SageMaker, and compatibility with AWS Bedrock models
Integration with Vertex AI and Cloud Run for scalable deployment, with support for Google Cloud's Gemini and PaLM models
We value your feedback and want to build features that matter most to you. Let us know what you'd like to see!