Frequently Asked Questions
Traditional honeypots are static, easily fingerprinted, and high-maintenance. Beelzebub is an AI-Native Security Platform: it uses Large Language Models (LLMs) to create dynamic, high-interaction environments, such as Linux servers, API services, and MCP tools, that engage attackers for hours while remaining completely secure.
Key differentiators:
- Zero false positives - only real threats trigger alerts
- No human supervision required - fully automated operation
- High interaction without risk - LLMs provide safe, realistic responses
- Infinite scalability - a single Beelzebub instance can simulate thousands of diverse network assets instantly
Beelzebub supports both cloud and on-premises LLM deployments to ensure data sovereignty:
- OpenAI
- Ollama (for local/air-gapped deployments)
- Anthropic
- Gemini
- Grok
- OpenRouter
This flexibility allows you to choose based on your specific security requirements and corporate policies.
Beelzebub deploys in 1-2 minutes using Docker containers or Kubernetes with our official Helm chart. It works seamlessly on any cloud provider (AWS, Azure, GCP) or on-premises infrastructure, requiring zero disruption to your production workloads.
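For Docker, a deployment can be as small as a single compose file. The sketch below is illustrative only: the image name, ports, and volume path are assumptions, so check the project README for the canonical file.

```yaml
# docker-compose.yml - illustrative sketch; image name, ports, and paths are assumptions
services:
  beelzebub:
    image: m4r10/beelzebub:latest        # assumed image name
    restart: always
    ports:
      - "2222:2222"                      # decoy SSH
      - "8080:8080"                      # decoy HTTP
    volumes:
      - ./configurations:/configurations # YAML service definitions
```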
Beelzebub supports multiple protocols and can dynamically simulate various services across your attack surface:
- SSH & Telnet with realistic, AI-generated terminal interactions
- HTTP/HTTPS for web application simulation
- TCP services for custom protocol emulation
- Databases (simulated MySQL and PostgreSQL)
- MCP (Model Context Protocol) to secure enterprise AI Agents
- IoT & OT devices for industrial environments
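As a toy illustration of custom protocol emulation, a few lines of Python are enough to stand up a decoy that answers port scanners with a fake service banner. This is not Beelzebub's code, just a sketch of the underlying idea; the banner string is invented.

```python
import socketserver
import threading

# Fake banner for a decoy service; the version string is illustrative.
BANNER = b"SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.1\r\n"

class BannerHandler(socketserver.BaseRequestHandler):
    """Send a fake service banner to anyone who connects, then close."""
    def handle(self):
        self.request.sendall(BANNER)

def start_decoy(host="127.0.0.1", port=0):
    """Start the decoy on a background thread; return (server, bound_port)."""
    server = socketserver.TCPServer((host, port), BannerHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Anything that connects to the bound port receives the banner, which is enough to appear in a scanner's fingerprint; a real high-interaction sensor then keeps the session alive with AI-generated responses.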
Yes! The core framework of our AI-Native Security Platform is open source with 1,800+ GitHub stars. You can audit the code and deploy the self-hosted version for free. Our managed Enterprise platform adds critical capabilities like AI SOC automation, continuous Red Teaming, centralized management, and 24/7 SLA-backed support.
Our sensors are extremely lightweight. Minimum requirements per instance:
- CPU: 2 cores
- RAM: 4GB
- Storage: 20GB for logs and analysis
- Network: Internet access for cloud LLM API calls, or local network only for Ollama deployment
- Container runtime: Docker or Kubernetes
Supported platforms:
- Linux (Ubuntu, RHEL, CentOS)
- Container orchestration (Kubernetes, Docker Swarm)
- All major cloud providers (AWS, Azure, GCP)
Beelzebub integrates seamlessly with your existing security stack:
- SIEM integration: Splunk, Elastic Stack, Microsoft Sentinel
- Prometheus: OpenMetrics for monitoring and alerting
- RabbitMQ: For scalable event processing
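For the Prometheus integration, a scrape job along these lines should work; the job name, metrics port, and path below are assumptions, so substitute whatever your Beelzebub instance actually exposes.

```yaml
# prometheus.yml fragment - port and path are assumptions
scrape_configs:
  - job_name: "beelzebub"
    metrics_path: /metrics              # assumed OpenMetrics endpoint
    static_configs:
      - targets: ["beelzebub:2112"]     # assumed metrics port
```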
Yes! For air-gapped, defense, or high-security environments, deploy Beelzebub with local Ollama instances. This keeps all data strictly within your network perimeter while maintaining full AI capabilities. Popular models such as Llama 3 and CodeLlama work well for generating realistic sensor interactions.
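In practice this can be as simple as pulling a model with `ollama pull llama3` and pointing the honeypot's LLM settings at the local Ollama endpoint. A hedged sketch of those settings follows; the key names are modeled on the open-source repo's examples and may differ between versions, so verify them against the current docs. The `host` value is Ollama's default local chat endpoint.

```yaml
# Illustrative LLM settings for an air-gapped deployment; verify key names
plugin:
  llmProvider: "ollama"
  llmModel: "llama3"
  host: "http://localhost:11434/api/chat"   # default local Ollama endpoint
```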
Configuration requires zero coding and is handled via simple YAML files. You can create custom scenarios by defining:
- Service types (SSH, HTTP, MCP, database...)
- Response patterns and realistic system behaviors
- Vulnerability simulations to attract specific classes of attack
- Custom prompts for LLM interactions
Our documentation includes ready-to-use templates for web servers, databases, IoT devices, and more.
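A scenario definition can look roughly like the sketch below, which is modeled on the examples in the open-source repo. Treat it as illustrative rather than canonical: field names and plugin identifiers may differ between versions, so check the documentation templates before use.

```yaml
# ssh-22.yaml - illustrative decoy definition; verify fields against the docs
apiVersion: "v1"
protocol: "ssh"
address: ":2222"
description: "SSH interactive honeypot"
commands:
  - regex: "^(.+)$"            # route every attacker command to the LLM
    plugin: "LLMHoneypot"
plugin:
  llmProvider: "openai"
  llmModel: "gpt-4o"
  prompt: "You are a Linux server; reply only with terminal output."
```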
Our AI sensors are secure by design:
- LLM sandbox: Attackers interact exclusively with the AI, never with a real operating system
- No real vulnerabilities: Simulated responses prevent any actual compromise
- Isolated containers: Complete separation from your production networks
Trusted by critical infrastructure including telecommunications and financial services.
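Conceptually, the LLM sandbox means every attacker command is embedded in a prompt and answered by the model; nothing is ever passed to a real shell. A minimal Python sketch of that mediation loop follows; the prompt wording and the `llm` callable are placeholders for demonstration, not Beelzebub's actual code.

```python
def answer_attacker(command: str, llm) -> str:
    """Route an attacker's command to an LLM role-playing a server.

    The command string is only ever embedded in a prompt; it is never
    executed, so there is nothing real to compromise.
    """
    prompt = (
        "You are a Linux server. Reply only with the terminal output for "
        f"this command, inventing plausible but fake data: {command}"
    )
    return llm(prompt)

def fake_llm(prompt: str) -> str:
    """Deterministic stand-in for a real model, for demonstration only."""
    if "whoami" in prompt:
        return "root"
    return "command not found"
```

Even a destructive command such as `rm -rf /` only produces text from the model; the "server" the attacker sees exists solely inside the prompt.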
Beelzebub provides the post-breach visibility required to meet multiple compliance standards:
- NIS2 Directive: Addresses advanced lateral movement detection requirements
- DORA: Enhances operational resilience for financial services
- EU AI Act: Supports secure AI deployments by continuously validating enterprise AI models
- SOC 2: Provides continuous security controls and monitoring
- NIST Cybersecurity Framework: Strengthens the "Detect" and "Respond" capabilities
Absolutely! You can deploy our AI-Native Security Platform completely on-premises with:
- Local LLM deployment using Ollama
- On-site data processing - no cloud dependencies
- Private threat intelligence that stays strictly within your network
- Custom compliance for highly regulated industries
Perfect for government, defense, and Tier-0 environments.
Yes! We provide 30-day Proof of Concept (PoC) deployments for enterprise customers:
- Guided evaluation in your environment
- Dedicated technical support during the PoC
- Custom scenarios matching your specific threat landscape
- ROI assessment demonstrating cost savings and threat detection improvements
- Migration assistance from legacy deception solutions
Community Support (Open Source):
- GitHub issues and discussions
- Documentation and tutorials
Professional Support:
- Email support with a 24-hour response time
- Video consultation sessions
- Configuration assistance
Enterprise Support:
- 24/7 phone and chat support
- Dedicated Customer Success Manager
- On-site deployment assistance
- Custom training programs
Our customers typically see immediate ROI through:
- 60% reduction in SOC operational costs through autonomous triage
- 80% time savings for security analysts by eliminating false positives
- Faster threat detection - minutes instead of hours or days
- Reduced breach impact through instant lateral movement containment
- Compliance cost savings through automated reporting
For Developers/Technical Users:
- Star us on GitHub: github.com/mariocandela/beelzebub
- Quick start: Deploy with Docker in 2 minutes
- Join our community: Telegram channel for real-time updates
For Enterprise Customers:
- Book a demo: See our AI-Native Security Platform in action against real attacks
- PoC deployment: 30-day evaluation in your environment
- Enterprise deployment: Full implementation with dedicated support
Yes! We offer comprehensive training for our platform:
- Technical onboarding for security teams
- Executive briefings for leadership
- Custom workshops for specific enterprise use cases
- Certification programs for advanced users
- Best practices guides tailored to different industries
Absolutely! There are multiple ways to evaluate the platform:
- Open source version: Free on GitHub with core functionality
- Managed platform trial: 14-day free trial with full enterprise features
- Enterprise PoC: 30-day Proof of Concept with guided support
- Demo sessions: Live demonstration with real attack scenarios
Sales Inquiries:
- Executive Coffee Chat: https://calendly.com/mario-candela/coffee-chat
- Email: info@beelzebub.ai
Technical Support:
- GitHub Issues: For open-source questions
- Email: support@beelzebub.ai
Partnership Opportunities:
- Email: info@beelzebub.ai