GPUs in Every PoP: Inside Cato Neural Edge and the Shift to GPU-Accelerated Cloud Security

Source: DEV Community
Cato Networks just made the most aggressive architectural bet in the SASE market: NVIDIA GPUs deployed directly inside every one of its 85+ global Points of Presence. The new Neural Edge platform closes the gap between traffic inspection and AI-driven analysis by running both in the same location, in a single pass. For anyone building or operating cloud security infrastructure, this raises a fundamental architecture question that applies far beyond one vendor: where does your AI actually run?

The Problem: Inspect Here, Analyze There, Enforce Later

Most cloud-delivered security platforms inspect traffic at their PoPs using CPU-based engines (stateful firewalling, URL filtering, signature-based IPS), and that works fine for traditional workloads. But AI-driven security models, such as semantic DLP, behavioral analytics, and LLM-based threat detection, require a fundamentally different compute profile: matrix multiplication and tensor operations that CPUs handle poorly at scale. The common workaround
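The "inspect here, analyze there, enforce later" pattern can be sketched as a simple per-flow latency model. Every stage name and timing below is an illustrative assumption for the sake of the comparison, not a measurement from Cato or any other vendor:

```python
# Hypothetical latency model contrasting the split architecture
# (inspect at the PoP, analyze in a remote AI tier, enforce after the
# verdict returns) with single-pass, co-located GPU inference.
# All numbers are illustrative assumptions.

LEGACY_SPLIT = {
    "pop_inspection_ms": 1.0,    # CPU-based firewall / IPS at the PoP
    "backhaul_to_ai_ms": 40.0,   # ship flow data to a central AI tier
    "ai_analysis_ms": 5.0,       # model inference in the remote tier
    "verdict_return_ms": 40.0,   # verdict travels back before enforcement
}

SINGLE_PASS = {
    "pop_inspection_ms": 1.0,    # same CPU engines at the PoP
    "local_gpu_ai_ms": 5.0,      # GPU inference in the same PoP, same pass
}

def total_ms(stages: dict) -> float:
    """Total added latency: the sum of all sequential stages."""
    return sum(stages.values())

print(f"split path:  {total_ms(LEGACY_SPLIT):.1f} ms")  # 86.0 ms
print(f"single pass: {total_ms(SINGLE_PASS):.1f} ms")   # 6.0 ms
```

Under these toy numbers, the round trip to a remote analysis tier dominates the budget, which is why co-locating inference with inspection changes the architecture rather than just speeding up one stage.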