
The Detection at Scale Podcast is dedicated to helping security practitioners and their teams succeed at managing and responding to threats at a modern, cloud scale. Every episode is focused on actionable takeaways to help you get ahead of the curve and prepare for the trends and technologies shaping the future.
Episodes

Tuesday Feb 25, 2025
Managing security for a device that can autonomously interact with third-party services presents unique orchestration challenges that go beyond traditional IoT security models. In this episode of Detection at Scale, Matthew Domko, Head of Security at Rabbit, gives Jack an in-depth look at building security programs for AI-powered hardware at scale.
He details how his team achieved 100% infrastructure-as-code coverage while maintaining the agility needed for rapid product iteration. Matt also challenges conventional approaches to scaling security operations, advocating for a serverless-first architecture that has fundamentally changed how they handle detection engineering. His insights on using private LLMs via Amazon Bedrock to analyze security events showcase a pragmatic approach to AI adoption, focusing on augmentation of existing workflows rather than wholesale replacement of human analysis.
Topics discussed:
- How transitioning from reactive SIEM operations to a data-first security approach using AWS Lambda and SQS enabled Rabbit's team to handle complex orchestration monitoring without maintaining persistent infrastructure.
- The practical implementation of LLM-assisted detection engineering, using Amazon Bedrock to analyze 15-minute blocks of security telemetry across their stack (a rough sketch of this pattern follows the list).
- A deep dive into security data lake architecture decisions, including how their team addressed the challenge of cost attribution when security telemetry becomes valuable to other engineering teams.
- The evolution from traditional detection engineering to a "detection-as-code" pipeline that leverages infrastructure-as-code for security rules, enabling version control, peer review, and automated testing of detection logic while maintaining rapid deployment capabilities (see the example rule after this list).
- Concrete examples of integrating security into the engineering workflow, including how they use LLMs to transform security tickets to match engineering team nomenclature and communication patterns.
- Technical details of their data ingestion architecture using AWS SQS and Lambda, showing how two well-documented core patterns enabled the team to rapidly onboard new data sources and detection capabilities without direct security team involvement (an illustrative Lambda handler follows this list).
- A pragmatic framework for evaluating where generative AI adds value in security operations, focusing on specific use cases like log analysis and detection engineering where the technology demonstrably improves existing workflows rather than attempting wholesale process automation.
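To make the serverless ingestion pattern discussed in the episode more concrete, here is a minimal sketch of an SQS-triggered AWS Lambda handler that normalizes raw log events and lands them in an S3-backed security data lake. The bucket name, environment variable, and field names are illustrative assumptions, not Rabbit's actual configuration.

```python
# Minimal sketch of an SQS-triggered ingestion Lambda (illustrative only).
# Assumes the Lambda is subscribed to an SQS queue of raw JSON log events
# and that DEST_BUCKET points at an S3 bucket used as the security data lake.
import json
import os
import uuid
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
DEST_BUCKET = os.environ.get("DEST_BUCKET", "example-security-data-lake")


def normalize(raw: dict) -> dict:
    """Map a raw log event onto a common schema (field names are assumptions)."""
    return {
        "event_time": raw.get("timestamp") or datetime.now(timezone.utc).isoformat(),
        "source": raw.get("source", "unknown"),
        "actor": raw.get("user") or raw.get("principal"),
        "action": raw.get("action"),
        "raw": raw,
    }


def handler(event, context):
    """Lambda entry point for SQS batches: normalize each record and land it in S3."""
    records = []
    for record in event.get("Records", []):
        raw = json.loads(record["body"])
        records.append(normalize(raw))

    if not records:
        return {"written": 0}

    # Partition by date so downstream query engines can prune efficiently.
    key = f"logs/dt={datetime.now(timezone.utc):%Y-%m-%d}/{uuid.uuid4()}.json"
    body = "\n".join(json.dumps(r) for r in records)
    s3.put_object(Bucket=DEST_BUCKET, Key=key, Body=body.encode("utf-8"))
    return {"written": len(records)}
```

Because both core patterns are just queue-plus-function, onboarding a new data source is mostly a matter of pointing another producer at the queue rather than standing up persistent infrastructure.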
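The LLM-assisted triage idea of analyzing 15-minute blocks of telemetry with a private model on Amazon Bedrock could look roughly like the sketch below. The model ID, prompt, and output handling are assumptions for illustration; the episode does not specify Rabbit's exact prompts or models.

```python
# Rough sketch of LLM-assisted log review via Amazon Bedrock (assumptions noted inline).
import json

import boto3

bedrock = boto3.client("bedrock-runtime")

# Hypothetical model ID; any Bedrock-hosted conversational model would work here.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"


def summarize_telemetry_block(events: list[dict]) -> str:
    """Ask the model to flag suspicious activity in one 15-minute block of events."""
    prompt = (
        "You are assisting a detection engineer. Review the following security "
        "events from a 15-minute window and list anything anomalous, citing the "
        "event fields that support your reasoning.\n\n"
        + json.dumps(events, indent=2)
    )
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]
```

The point is augmentation, not replacement: the summary becomes an input to an analyst's review, not an automated verdict.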
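A detection-as-code workflow like the one described above generally pairs version-controlled rule logic with automated tests run in CI before deployment. Below is a generic, hypothetical example using CloudTrail-style field names; it is not Rabbit's actual rule format.

```python
# Hypothetical detection-as-code rule plus a unit test run in CI (format is illustrative).

def rule(event: dict) -> bool:
    """Fire when a console login succeeds without MFA (CloudTrail-style fields)."""
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("responseElements", {}).get("ConsoleLogin") == "Success"
        and event.get("additionalEventData", {}).get("MFAUsed") == "No"
    )


def title(event: dict) -> str:
    """Human-readable alert title, templated from the event."""
    return f"Console login without MFA by {event.get('userIdentity', {}).get('arn', 'unknown')}"


# Example test, runnable with pytest; peer-reviewed alongside the rule itself.
def test_rule_fires_on_non_mfa_login():
    event = {
        "eventName": "ConsoleLogin",
        "responseElements": {"ConsoleLogin": "Success"},
        "additionalEventData": {"MFAUsed": "No"},
        "userIdentity": {"arn": "arn:aws:iam::123456789012:user/example"},
    }
    assert rule(event) is True
```

Keeping rules in the same repository and review process as other infrastructure code is what enables the version control, peer review, and rapid deployment mentioned in the episode.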