Hey there,

I hope you've been doing well!

Next week I'm stoked to moderate a panel with Chainguard's founders Dan Lorenc, Kim Lewandowski, Matt Moore, and Ville Aikas! We'll discuss the most impactful developments in software supply chain security from 2025 and share predictions for 2026.
We'll also cover recent high-profile attacks, AI's role in engineering productivity, and emerging trends that every security and engineering leader should know about.
I'm a huge proponent of secure-by-default software, so I'm excited for the discussion on where things are headed and what we can do to build scalable security programs!
🗓️ Reflections

I had a bit of a surreal moment yesterday. In the morning I did a webinar with my good friend Daniel Miessler about his personal AI setup (GitHub), and a few hundred people joined our discussion and asked a ton of great questions. I then grabbed lunch at Anthropic with my bud Xiaoran Wang. Most of the rest of the day I spent writing this love letter newsletter to you, dear reader.

A few years ago I would not have expected I'd have a day like this. It's a bit humbling, I guess.

I'm going to try to carve out some time to do a deeper, longer reflection over the holiday break. Feel free to let me know if there are any annual review resources that you really like!

P.S. One of the ways I find great security content is following what people share. Feel free to connect with me on LinkedIn so your stuff shows up in my feed 👋
Sponsor

📣 7 Best Practices for Secrets Security

Take the guesswork out of secrets management. API keys, tokens, and credentials often end up scattered across repos, pipelines, and SaaS tools. This cheat sheet helps you cut through the noise, identify what matters, and build guardrails that prevent future leaks.

You'll learn:

- 7 best practices to discover, validate, and protect secrets across your SDLC
- Real-world examples and ready-to-use GitHub + Gitleaks snippets
- Tips for assigning ownership and fixing issues directly in code
- Guidance to secure vaults without slowing developers down
Whether you're cleaning up old keys or setting guardrails for AI-generated code, this guide helps your team build securely from the start.
As we've seen with recent supply chain attacks, secrets are a common initial access as well as privilege escalation vector, so it's important to lock them down!

Keeping Secrets Out of Logs
The blog version of Allan Reyes' LocoMocoSec 2024 talk. Allan argues that there's no silver bullet for keeping secrets out of logs, but shares 10 lead bullets that can work together, including: domain primitives (type-safe wrappers for sensitive data), read-once objects (that throw errors after initial use), log formatters (that redact sensitive patterns), taint checking (for tracking data flows), sensitive data scanners, and more. (A tiny sketch of two of these patterns is below, after the sponsor note.)
💡 This is a very thoughtful and detailed post, great work 👌 Maybe one of the best I've seen in this space. Related: my bud Nathan Brahms previously wrote a blog post about using a custom type + a Semgrep rule to prevent logging secrets in SQLAlchemy.

Scanning 5.6 million public GitLab repositories for secrets
Luke Marshall scanned 5.6 million public GitLab repositories with TruffleHog, discovering 17,430 verified live secrets (2.8x more than Bitbucket) and earning over $9,000 in bounties. He built an AWS Lambda/SQS-based scanning architecture that completed the scan in 24 hours and cost $770.
💡 One of the coolest parts of the methodology to me was how he automated the triage process by using an LLM with web search (Sonnet 3.7) to identify the best path for reporting a security vulnerability to each organization, searching for whether the company has a Bug Bounty Program, Vulnerability Disclosure Program, security contact, etc. Neat!

Stop Putting Your Passwords Into Random Websites (Yes, Seriously, You Are The Problem)
Another banger/facepalmer from watchTowr. Jake Knott discovered thousands of sensitive credentials and data exposed through the online code formatting tools JSONFormatter and CodeBeautify, where users inadvertently saved and shared confidential information via public links. An analysis of 80,000+ saved JSON entries found Active Directory credentials, API keys, database passwords, and PII (and much more) from organizations in sectors including government, banking, cybersecurity, healthcare, and many others.
Jake uploaded a canary token to see if malicious actors thought to look for the same exposure, and they are: the canary fired in ~48 hours, indicating attackers are already actively scraping and testing these platforms.
💡 More broadly, I like the idea: whenever you think of a clever and potentially novel attack path, put a canary token in it and see if attackers are already testing for it.

Sponsor

📣 5 Critical Microsoft 365 Security Settings You Might Be Missing

Set it and forget it? Not when it comes to M365 security. Configuration drift, admin sprawl, and risky integrations creep in over time, opening up security gaps that attackers love to exploit. This checklist from Nudge Security will help you catch common pitfalls and keep your environment secure.
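Back to keeping secrets out of logs for a sec: this isn't Allan's code, just a minimal Python sketch (all names mine) of what two of those lead bullets can look like in practice, a read-once secret wrapper plus a redacting log filter.

```python
import logging
import re


class Secret:
    """Domain primitive: wraps a sensitive value and never exposes it via str()/repr().

    Also read-once: the raw value can only be retrieved a single time.
    """

    def __init__(self, value: str):
        self._value = value
        self._consumed = False

    def reveal(self) -> str:
        if self._consumed:
            raise RuntimeError("Secret has already been read once")
        self._consumed = True
        return self._value

    def __repr__(self) -> str:  # accidental logging/printing gets a placeholder
        return "Secret(****)"

    __str__ = __repr__


class RedactingFilter(logging.Filter):
    """Log filter: best-effort redaction of token-looking strings in log messages."""

    PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key IDs
        re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"),
    ]

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for pattern in self.PATTERNS:
            msg = pattern.sub("[REDACTED]", msg)
        record.msg, record.args = msg, ()
        return True


def connect(password: str) -> None:
    """Stand-in for a real DB/API client call."""


logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
log.addFilter(RedactingFilter())

db_password = Secret("hunter2")
log.info("connecting with %s", db_password)     # logs "connecting with Secret(****)"
log.info("debug dump: api_key=sk-live-123456")  # logs "debug dump: [REDACTED]"
connect(db_password.reveal())                   # works once; a second reveal() raises
```

Regex-based redaction is inherently best-effort, which is Allan's whole point about needing multiple lead bullets: the type wrapper catches cases the filter's patterns miss, and vice versa.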
M365 is pretty complex and nuanced, I'm sure there's a lot of stuff in that checklist 👆️ I'd forget to consider.

Cloud Security

Phishing for AWS Credentials via the New 'aws login' Flow
Adan Alvarez describes how attackers can abuse the new aws login --remote command to phish for AWS credentials, bypassing phishing-resistant MFA. Basically, the social engineering attack works by tricking victims into visiting a malicious site that runs the command in the background, then having them authenticate to a legitimate AWS login page and provide the resulting verification code back to the attacker, who can exchange it for temporary credentials. Adan also shares an SCP that blocks this attack, plus guidance on detecting suspicious activity.

The log rings don't lie: historical enumeration in plain sight
Exaforce's Bleon Proko shows how attackers can use cloud logs as an intelligence gathering tool, demonstrating techniques to enumerate permissions, resources, and identities across AWS, Azure, and GCP environments.
Bleon shows how attackers with log access (CloudTrail events, Azure Activity Logs, GCP Audit Logs) can extract valuable information from audit trails (e.g. sign-ins, permission assignments, etc.) by analyzing fields like identity details, resource names, request parameters, and error messages to map the cloud environment's infrastructure and determine their permissions, without triggering additional alerts.

All Paths Lead to Your Cloud: A Mapping of Initial Access Vectors to Your AWS Environment
Palo Alto's Golan Myers and Ofir Balassiano describe two primary classes of AWS misconfigurations that create identity-driven initial access vectors: service exposure (misconfigurations allowing public access, shadow resource creation, etc. that enable unintended access) and access by design (misconfigurations of services that provide access control for AWS resources).
The post describes how Lambda functions, EC2 instances, ECR repositories, and DataSync can be inadvertently exposed through resource-based policies and network configurations, and how IAM/STS, IoT, and Cognito can be misconfigured to allow unintended access, highlighting specific pitfalls like public role assumption, IoT certificate misuse, and Cognito's default self-registration.
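Not from the Palo Alto post, but as a rough illustration of the "service exposure" class, here's a hedged boto3 sketch (function and variable names are mine) that flags Lambda resource-based policy statements granting access to any principal. The same idea extends to ECR repositories, SQS queues, etc.

```python
import json

import boto3  # assumes credentials with lambda:ListFunctions and lambda:GetPolicy

lambda_client = boto3.client("lambda")


def public_statements(policy_json: str) -> list[dict]:
    """Return policy statements that grant access to any principal ("*") without a Condition."""
    flagged = []
    for stmt in json.loads(policy_json).get("Statement", []):
        principal = stmt.get("Principal", {})
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        # A Condition (e.g. aws:SourceAccount / aws:SourceArn) usually scopes the grant,
        # so only flag statements with no condition at all.
        if is_public and "Condition" not in stmt:
            flagged.append(stmt)
    return flagged


for page in lambda_client.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        name = fn["FunctionName"]
        try:
            policy = lambda_client.get_policy(FunctionName=name)["Policy"]
        except lambda_client.exceptions.ResourceNotFoundException:
            continue  # no resource-based policy attached to this function
        for stmt in public_statements(policy):
            print(f"[!] {name} has a public resource policy statement: {json.dumps(stmt)}")
```

This only covers the resource-policy slice of service exposure; the network-exposure and "access by design" pitfalls (public role assumption, Cognito self-registration, IoT certificate misuse) need their own checks.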
Supply Chain

chainguard-dev/malcontent
By Chainguard: A tool to detect supply chain compromises using context, differential analysis, and over 14,000 YARA rules. It scans programs for their capabilities (gathers system information, evaluates code dynamically using exec(), …) to determine whether they're potentially sketchy, and can diff two versions of an app or library to determine if high-risk functionality has been added.

Inside the GitHub Infrastructure Powering North Korea's Contagious Interview npm Attacks
Socket's Kirill Boychenko describes details from tracking North Korea's "Contagious Interview" operation, which has recently deployed 197+ malicious npm packages targeting blockchain and Web3 developers through fake job interviews. The malware-serving code lives on GitHub, the latest payload is fetched from Vercel, and a separate command and control (C2) server handles data collection and tasking.
Actions pull_request_target and environment branch protections changes
GitHub is updating how GitHub Actions' pull_request_target and environment branch protection rules are evaluated for security reasons, with changes taking effect on 12/8/2025.
💡 This is great! 👏 Platform-level hardening / secure-by-default improvements FTW.

Shai-Hulud 2.0 Supply Chain Attack: 25K+ Repos Exposing Secrets
Shout-out to threat actors for being busy when I'm taking a week off 🤙 Wiz's Hila Ramati, Merav Bar, Gal Benmocha, and Gili Tikochinski describe the Shai-Hulud 2.0 (Sha1-Hulud) supply chain worm: ~700 compromised NPM packages, 25,000+ malicious repos across ~500 GitHub users, 775 compromised GitHub access tokens, 373 AWS credentials, etc.
The payload used a pretty neat persistence mechanism: it registers the infected machine as a self-hosted GitHub runner, then adds a workflow containing an injection vulnerability that lets the attacker execute arbitrary commands on the machine in the future by opening discussions in the GitHub repo.
Wiz has a part 2 with more stats and a CSV list of affected packages.
Post-mortems: The PostHog one is pretty good. See also: Postman, Zapier, and Trigger.dev.
⭐️ Jason Hall made a great point about the risk reduction of using something like OctoSTS, a GitHub app that mints short-lived GitHub tokens.
An alphabetical, incomplete list of other vendor coverage: Aikido, Datadog (IoC CSV), GitGuardian, Netskope, Orca, Palo Alto, Semgrep, Socket, Step Security, … 😪

💡 I didn't realize it at the time, but clearly this Moana song was a message from supply chain threat actors to security vendor marketing teams.
Real talk for a sec: it's great that the security community comes together to respond to ecosystem-level attacks like this, but to be honest it feels like a lot of duplicate work, and it's not clear to me what the value is of having every security company write a >80% overlapping post about the same event.
Maybe in the heat of the moment certain teams have specific findings/IoCs that others miss, so there is value to concurrent, not totally overlapping work. But it feels to me more like marketing than true novel value. Maybe I'm just being grumpy.
To be clear, I'm not saying this about any particular person or company, I think people are doing great work. There's just a part of me that wishes we could funnel all that security talent and time into preventative, longer term, solve-this-class-of-problem efforts vs. highly overlapping, reactive discussion of the same event, which will just occur again in a few months.

Blue Team

North Korea lures engineers to rent identities in fake IT worker scheme
Researchers Mauro Eldritch and Heiner García exposed North Korean IT recruitment tactics by creating a honeypot to document how Famous Chollima (part of the Lazarus group) lures developers into renting their identities for remote jobs at targeted companies, offering 10-35% of salaries in exchange. The researchers used ANY.RUN sandboxes to create a simulated environment that recorded the threat actors' activities in real time. Nice!
Meet Rey, the Admin of 'Scattered Lapsus$ Hunters'
Brian Krebs describes how he identified "Rey," a 15-year-old administrator of the Scattered LAPSUS$ Hunters (SLSH) cybercriminal group, via various operational security mistakes. Brian reached out to Rey's father, and then ended up chatting with Rey directly.
Pretty neat sleuthing process: Rey posted a screenshot with a password and a partially redacted email address → that unique password was found in the breach tracking service SpyCloud, showing it was associated with an email address that was exposed when the user's device was infected with an infostealer trojan. Rey also shared various personal life details in Telegram accounts that could then be tied to "autofill" data lifted from Rey's family PC (via the infostealer data).
💡 Companies being breached by "APTs" - Advanced Persistent Teenagers.

Introducing LUMEN: Your EVTX Companion
Daniel Koifman introduces LUMEN, a privacy-first, browser-native Windows Event Logs (EVTX) analysis platform that processes the logs entirely client-side using WebAssembly. The tool integrates 2,349 SIGMA detection rules and has advanced correlation capabilities (reconstructs process trees, user movement, privilege escalation, etc.) without requiring installation, backend servers, or log uploads.
Optional AI features: LUMEN integrates with the main AI providers to provide natural-language log queries, automated incident summaries, pattern anomaly detection, and executive-level reporting.

AI + Security

OWASP AI Testing Guide
New 250-page v1 guide (PDF) by Matteo Meucci, Marco Morana, and many others on how to threat model AI systems, plus a detailed testing methodology for AI applications, models, AI infrastructure, data, and more.

FuzzingLabs/fuzzforge_ai
By FuzzingLabs: An AI-powered workflow automation and AI agents platform for AppSec, fuzzing, and offensive security. Automate vulnerability discovery with intelligent fuzzing, AI-driven analysis, and a marketplace of security tools. It currently integrates a number of fuzzers (Atheris for Python, cargo-fuzz for Rust, OSS-Fuzz campaigns), has specialized agents for AppSec, reversing, and fuzzing, has a secret detection benchmark, and more.

gadievron/raptor
By Gadi Evron, Daniel Cuthbert, Thomas Dullien (Halvar Flake), & Michael Bargury: Raptor turns Claude Code into a general-purpose AI offensive/defensive security agent, using Claude.md and rules, sub-agents, and skills. Raptor can scan your code with Semgrep and CodeQL, fuzz binaries with AFL, analyze vulnerabilities using LLMs, generate proof-of-concept exploits, patch code, report findings in a structured format, and more.
💡 What I think is especially cool about this is that RAPTOR really leans into using Claude Code as the orchestrator/workbench and leveraging its existing functionality (custom slash commands, agents, skills) vs. trying to roll it all from scratch. Makes a lot of sense to me. And of course having tools and deterministic code for the parts that would benefit from it.

aliasrobotics/cai
By Alias Robotics: A lightweight, open-source framework that empowers security professionals to build and deploy AI-powered offensive and defensive automation. Supports 300+ AI models, built-in security tools for reconnaissance, exploitation, and privilege escalation, an agent-based architecture (modular framework design to build specialized agents for different security tasks), evaluated on HackTheBox CTFs, bug bounties, and real-world security case studies, and more.
See also their papers:

CAI: An Open, Bug Bounty-Ready Cybersecurity AI
Paper by Victor Mayoral-Vilches et al. presenting the first classification of autonomy levels in cybersecurity and introducing Cybersecurity AI (CAI). CAI achieved first place among AI teams and secured a top-20 position worldwide in the "AI vs Human" live CTF challenge, and reached top-30 in Spain and top-500 worldwide on Hack The Box within a week. The framework enabled non-professionals to discover significant security bugs (CVSS 4.3-7.5) at rates comparable to experts during bug bounty exercises.

Cybersecurity AI: Evaluating Agentic Cybersecurity in Attack/Defense CTFs
Paper by Francesco Balassone, Victor Mayoral-Vilches et al. on evaluating whether AI systems are more effective at attacking or defending in cybersecurity. Using CAI's parallel execution framework, they deployed autonomous agents in 23 Attack/Defense CTFs. Defensive agents achieve 54.3% unconstrained patching success versus 28.3% offensive initial access, but this advantage disappears under operational constraints: when defense requires maintaining availability (23.9%) and preventing all intrusions (15.2%), no significant difference exists.
💡 Having AI defensive and offensive agents play against each other on existing (Hack The Box) CTFs is neat. I like experiments like this, though I wonder if the difference in outcome could be attributed to scaffolding, prompting, etc. (e.g. maybe the defensive prompt is just written a lot better), vs. AI being inherently better at attack or defense.

AI agents find $4.6M in blockchain smart contract exploits
Anthropic researchers + collaborators evaluated AI agents' ability to exploit smart contracts by creating the Smart CONtracts Exploitation benchmark (SCONE-bench), a new benchmark of 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoff (March 2025), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million.
They also evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities; both uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. (I assume they didn't share the Sonnet 4.5 API cost because it was higher 🤔)
✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I'd love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate it if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint

P.S. Feel free to connect with me on LinkedIn 👋