Note that this is a collection of tweets about DAST (excluding any specific company pitches). In general, many companies seem unable to realize the potential of DAST because of limitations in most DAST tools. This opens up an opportunity to build new DAST tools that overcome the current problems.
Summary
- Many AppSec folks are struggling to get any real value out of commercial DAST tools. Common problems include tools failing to handle authentication properly and poor test coverage.
- OWASP ZAP and Burp Suite Enterprise are popular tools for DAST automation in DevSecOps pipelines.
- Some AppSec folks are proxying their QA stage through ZAP or Burp in order to improve the test coverage of DAST scans.
- “DAST biggest issue in modern apps is not exactly ‘testing’ or even ‘detecting’ vulns, but crawling the same website to identify the attack surface.” ~ Jeremiah Grossman
- The OWASP Attack Surface Detector uses SAST to pre-seed the attack surface for DAST scans.
Interesting Tweets about DAST
Decide what ROI looks like and make sure the tool meets your expectations. If your apps are complicated, it might be hard for a DAST scanner to cover them properly. Not convinced I have ever seen a company get real value out of DAST. Maybe just make sure your pentester uses Burp/ZAP
— Josh Grossman 👻 (tghosth) (@JoshCGrossman) June 4, 2020
I have experience with Netsparker, and. Wracked DAST. Honestly, depending on your environment and who's managing the tool, I’d just go with Burp Suite Enterprise. DAST is fairly limited anyways.
— Casey (@CaseyDunham) June 4, 2020
This may be controversial, but another point that came up w/ @clintgibler is that orgs are having trouble seeing value of DAST tools. Suspect coverage, problematic depth, FPs, and a high price tag make ppl think twice. #appsec
— Chenxi Wang (@chenxiwang) April 23, 2020
I don't think anyone is claiming 100% coverage, but a 20% coverage is just abysmal. #AppSec
— Chenxi Wang (@chenxiwang) April 23, 2020
This talk gives some examples of why good DAST coverage on modern web apps is fundamentally hard: https://t.co/pL4AWlCgQy
— Clint Gibler (@clintgibler) April 23, 2020
Many companies using C/C++ are adopting fuzzing, which is 💯. @thedavidbrumley Do you think fuzzing provides value for non-C/C++ repos e.g. Python, Java…?
Coverage isn't the goal. Finding real bugs is. And if an automatic tool can increase your code coverage by 20% without having to resource a developer to do it, that seems like a win (that static analysis can't do).
— David Brumley (@thedavidbrumley) April 24, 2020
DAST just hasn't kept up with frontend tech. We struggled to find a scanner that could authenticate and nothing understands GraphQL.
— Leif Dreizler (@leifdreizler) April 24, 2020
I have long wanted to set up Burp Suite to be the proxy for all tests used for QA and then allow it to run the attacker mode. Now that they have a headless option, I wonder how it would do. Since QA is likely hitting most endpoints, the tool knows the endpoints and params to fuzz.
— Justin Massey (@jmassey09) April 24, 2020
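Several of these tweets describe the same pattern: proxy your existing QA suite through ZAP (or Burp) so the scanner learns the endpoints and parameters your app actually uses, then run an active scan over what it recorded. A minimal stdlib-only sketch against ZAP's local JSON API — the `localhost:8080` address and the `changeme` API key are assumptions to adjust for your setup:

```python
"""Route functional tests through a local OWASP ZAP proxy, then
active-scan the endpoints ZAP recorded while they ran."""
import json
import urllib.parse
import urllib.request

ZAP = "http://localhost:8080"   # ZAP proxy address; also serves its JSON API
API_KEY = "changeme"            # must match ZAP's -config api.key=... setting


def zap_opener(zap_addr: str = ZAP) -> urllib.request.OpenerDirector:
    """An opener that sends HTTP(S) test traffic through the ZAP proxy."""
    handler = urllib.request.ProxyHandler({"http": zap_addr, "https": zap_addr})
    return urllib.request.build_opener(handler)


def api_url(component: str, action: str, **params: str) -> str:
    """Build a ZAP JSON API URL, e.g. ascan/scan to start an active scan."""
    params["apikey"] = API_KEY
    return f"{ZAP}/JSON/{component}/action/{action}/?" + urllib.parse.urlencode(params)


def run_suite_and_scan(base_url: str) -> str:
    """Replay the QA suite through ZAP, then active-scan everything it saw."""
    opener = zap_opener()
    opener.open(f"{base_url}/login")    # ...your real test suite goes here
    with urllib.request.urlopen(api_url("ascan", "scan", url=base_url)) as resp:
        return json.load(resp)["scan"]  # scan id; poll ascan/view/status with it
```

The same idea works with any test runner that honors proxy settings; pointing `HTTP_PROXY`/`HTTPS_PROXY` at ZAP is often enough, with TLS verification pointed at ZAP's CA certificate.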
Have you given @Doyensec's GraphQL scanning tool a whirl? https://t.co/tuTzAo0S5F
— Clint Gibler (@clintgibler) April 26, 2020
And re: finding bugs using Swagger/OpenAPI, thought this work on trying to automatically find IDORs was neat https://t.co/PjMDhFIB8x
I know a number of AppSec teams have tried proxying integration tests through Burp then fuzz/attack mode, and found that it provides at least some baseline coverage.
— Clint Gibler (@clintgibler) April 26, 2020
Won't necessarily help with multi-step flows, but might be worth playing with.
Noooooooo you can't just point a DAST tool at a modern web application and expect it to be able to find all the content and test it for complex vulnerabilities
— Ian Melven (@imelven) August 6, 2020
HA HA WEB SCANNER GO BRRRRRRRRRRRRRR
But they've been doing that for 15 years. Still none that I've seen actually do a good job of ingesting and merging data from outside their tool (i.e. pen-tests, DAST tools, RASP tools, etc). It's frustrating because I've seen so many refer to running scans as their VM program.
— Alyssa Miller (Speaking at All Things Open) (@AlyssaM_InfoSec) May 14, 2020
I disagree. DAST is not perfect, but as it is often the last check prior to going live, it is hugely important. If your company uses manual testing, that could be an alternative. As far as cost, use the popular open source tool #OWASP ZAProxy https://t.co/8ba6UDZdUN
— Richard Greenberg (@RAGreenberg) April 23, 2020
IAST can give some of the benefits of DAST when used in the pipeline. You get speed but can still miss out on the coverage of DAST. You can "fail forward" by pushing to production after passing SAST&DAST scans and then run an asynchronous DAST scan and fix issues on next deploy.
— W🕷ld P🎃nd (@WeldPond) February 13, 2020
That says that you don't know that your tool has failed you. It has. Did it fail to scan a SPA because it couldn't read routes? Did you have one where authentication stopped working and it ran for a week unauthenticated? Did your DAST tool ever find an IDOR vuln? 5/16
— Rebecca Deck (@ranger_cha) January 3, 2020
Previous oodles of data (defect densities, "what finds what", & measures related to remediation) lead me to an opinion:
— jOHN Steven (@m1splacedsoul) December 1, 2019
Activities (i.e. DAST, threat modeling, or tool du jour) don't tie directly to effectiveness. Effectiveness results from combining activities into capabilities.
Wonder why Vuln Management sucks? A test scan of 14 hosts with NO critical vulns takes 1hr+ and results in … a 571 page PDF. FIVE HUNDRED SEVENTY-ONE PAGES???? This is insane. pic.twitter.com/R65KpTNIx6
— Wim Remes (@wimremes) August 3, 2019
It's more that the appsec toolsets themselves are integrating into the sdlc as it exists today (highly iterative), so SAST/SCA built into the IDE, invoked at check in, etc…DAST/IAST running continuously in test or kicked off by a build tool, etc.
— Dan Kennedy 🚫 (@danielkennedy74) July 25, 2019
Whenever DAST vendors hmu I tell them to create a Segment account (free), run their tool, send me the results, and if they're good we can talk. So far nobody has done this, not sure if it's because their results are bad or if they just aren't even trying.
— Leif Dreizler (@leifdreizler) March 9, 2019
I think it depends on the team. In my experience, some only want to make sure that the DAST tool doesn't log a new finding for every time a scan is run, others do want it to be collapsed. I know devs love smaller tickets, so something with 53 things to fix across all code…
— Andrew van der Stock (@vanderaj) February 15, 2019
DAST is meant to find vulnerabilities that adversaries may exploit. SAST is meant to find those same vulnerabilities earlier in the SDLC — but does it really? What’s the best data-backed evidence available in order to support or refute this claim? My search results are sparse.
— Jeremiah Grossman (@jeremiahg) December 6, 2018
When evaluating DAST vendors we struggled to find one that could authenticate to our app, and those that did limped through spidering. That sector has not kept up with single-page React apps.
— Leif Dreizler (@leifdreizler) December 7, 2018
A novice bug bounty hunter with Burp is the new DAST.
Proxying your acceptance tests through ZAP (or Burp) provides it with surface area for attacks. I agree, it's hard for these tools to get the full grasp of the surface area. That's why it's important to use a tool that is flexible & scriptable
— Andres Hermosilla (@dandr3ss) December 7, 2018
As someone who helped build a SAST engine and also built AppSec programs, I find DAST mostly useless unless your org is using tech from 10 years ago.
— Ray (@Raybeorn) December 7, 2018
DAST is anything but useless. For example it's tremendously helpful in complex applications where the relationship between "code actually run in the browser" and "code in the repo" is anything but obvious. That's most modern apps.
— Claudio Criscione (@paradoxengine) December 8, 2018
Panel downplays value from heavy SAST/DAST tools in security automation – so what works? < Grep is the only tool used by all panelists
— bryan owen (@bryansowen) October 12, 2018
Panel #AppSecUSA2018 says cannot do DAST fast in tool chain – yes that’s true for WebApps. Hundreds of mobile app dev pipelines use @NowSecureMobile for automated DAST in 15mins post build 99% Accurate @owasp #AppSecUSA18 #NotAnAdvertJustAFact
— Reed_on_the_Run @ Home (@reed_on_the_run) October 12, 2018
This is an interesting tool, which uses the power of SAST to feed DAST tools.
— Omer Levi Hevroni (@omerlh) October 9, 2018
Thinking about it – isn't the same as using OpenAPI/Swagger to feed DAST tools? What is the difference? https://t.co/v0aMD0rlgP
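The Swagger/OpenAPI idea can be sketched without any particular scanner in mind: walk the spec and expand `servers` × `paths` into concrete URLs to pre-seed a DAST tool, instead of relying on its crawler. A hedged sketch — the field names follow the OpenAPI 3 document structure, and the actual seeding step is tool-specific (ZAP, for instance, has an openapi add-on with import actions):

```python
"""Derive seed URLs for a DAST scan from an OpenAPI 3 document."""


def seed_urls(spec: dict) -> list[str]:
    """Expand servers x paths into concrete URLs a scanner can visit.

    Falls back to bare paths when the spec declares no servers; templated
    segments like /users/{id} are left as-is for the scanner to fill.
    """
    servers = [s["url"].rstrip("/") for s in spec.get("servers", [])] or [""]
    return [base + path
            for base in servers
            for path in sorted(spec.get("paths", {}))]
```

Feeding these URLs to the scanner gives it the full declared API surface up front, which is exactly the coverage a crawler struggles to discover in a SPA or API-only backend.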
3 critical factors while automating #DAST in #DevSecOps
— Rahul Raghavan (@Rahul_Raghav) September 17, 2018
1. The Tool – What is/are the tools that map to the target tech stack
2. Deployment environment – Are scans intended for Dev, Integration or Staging
3. Time – How much time can be spared within the release cycle
Any one tool for API analysis is incomplete. A combo of IAST, DAST, Swagger, Manual investigation and other tools is what I see most mature shops (with good risk management) doing.
— Jim Manico (@manicode) July 6, 2018
It's probably high time we saw some decent comparative analysis of DAST products, unless there's already some out there. There's room for everyone in the space, I think, but I also think it would be worth a look at the different qualities. #AppSec https://t.co/DU30agOqDD
— Mike Thompson (@AppSecBloke) April 29, 2020
IME, highly depends on DAST training and complexity of target being analyzed. Scan blind and go will result in most vulns being found by SAST. If you train DAST through Selenium, Swagger, and proxy logs, the two will be relatively even. Neither will detect biz logic flaws.
— Steve Springett (@stevespringett) April 1, 2019
My only concern with ZAP is if junior level AppSec engineers will be able to set it up, embed it into the CI environment, parse the results and provide enough info to sw dev teams (attack replay) to fix the issues found. Thoughts?
— Walter Martín Villalba (@act1vand0) May 1, 2020
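One way to answer the embed-ZAP-in-CI question is to keep the pipeline-facing logic trivial: run the scan however the team likes, pull the alerts, and gate the build on a risk threshold so junior engineers only have to tune one knob. A sketch — the `risk` values mirror ZAP's alert JSON, while the gate itself and its default threshold are assumptions:

```python
"""Gate a CI build on ZAP scan results: fail when any alert meets or
exceeds a chosen risk threshold."""

# Risk levels as ZAP reports them, lowest to highest.
RISK_ORDER = {"Informational": 0, "Low": 1, "Medium": 2, "High": 3}


def failing_alerts(alerts: list[dict], threshold: str = "Medium") -> list[dict]:
    """Alerts at or above the threshold; CI fails when this is non-empty."""
    floor = RISK_ORDER[threshold]
    return [a for a in alerts if RISK_ORDER.get(a.get("risk"), 0) >= floor]


def build_passes(alerts: list[dict], threshold: str = "Medium") -> bool:
    """True when the scan produced nothing at or above the threshold."""
    return not failing_alerts(alerts, threshold)
```

The alerts list itself can come from ZAP's JSON report or its alerts API; printing `failing_alerts(...)` in the build log doubles as the summary handed to the dev team.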
I've been anecdotally tracking the effectiveness of DAST (web app security scanners) for years. @Burp_Suite 's active scanner and @ArachniScanner are still my 100% front-runners.
— Jason Haddix (@Jhaddix) December 29, 2017
Ayup. DAST biggest issue in modern apps is not exactly ‘testing’ or even ‘detecting’ vulns, but crawling the same website to identify the attack surface.
— Jeremiah Grossman (@jeremiahg) December 7, 2018
Attack Surface Detector (ASD)
— SecDec (@secdec) October 10, 2018
Provides complete picture of web app’s exposed attack surface. Output used to “pre-seed” DAST tools for more thorough pen testing. Plugin available in the Portswigger Burp BApp Store. Download https://t.co/D48XBeAGHc @Burp_Suite @BApp_Store #PenTest
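The pre-seeding step can also be done by hand: feed each endpoint that an attack-surface tool discovered to ZAP's `core/action/accessUrl` API so it lands in the site tree before the active scan starts. A sketch that only builds the API calls — the local address and `changeme` API key are assumptions:

```python
"""Build ZAP API calls that pre-seed discovered endpoints into the
site tree; issuing a GET to each one makes ZAP fetch and record it."""
from urllib.parse import quote

ZAP_ACCESS_URL = "http://localhost:8080/JSON/core/action/accessUrl/"


def seed_requests(endpoints: list[str], apikey: str = "changeme") -> list[str]:
    """One ZAP API call per pre-seeded endpoint (GET these in order)."""
    return [f"{ZAP_ACCESS_URL}?url={quote(endpoint, safe='')}&apikey={apikey}"
            for endpoint in endpoints]
```

With the site tree seeded this way, the active scan starts from the mapped surface instead of whatever the spider happened to reach.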
I expect that you'll be disappointed with the results for two reasons:
— Scott Norberg (@scottnorberg) November 11, 2019
One, I did a comparison of free DAST scanners and ZAP was about the worst. https://t.co/hCKYXiwQbz
Two, DAST scanners aren't built to work smoothly with CI/CD pipelines. https://t.co/q8sStmjZ0g
There is a lot of potential for product security teams to write passive/active scanner checks to avoid regressions or cover similar vulns – however the number of companies having dynamic testing as part of the CI/CD is still too small
— Luca Carettoni (@lucacarettoni) February 25, 2020
These problems are not unique to Burp. The DAST market is wrought with products that can't handle modern application architecture, and I don't know of a single one that can. They are trying to accomplish a near impossible task. (5/10)
— Tim Tomes (@LaNMaSteR53) September 3, 2020
DAST solutions simply cannot effectively discover modern applications, and even if they could, cannot analyze with the context that is needed to discover authentication, authorization, session management, and business logic issues. (8/10)
— Tim Tomes (@LaNMaSteR53) September 3, 2020
Slow feedback loops in SAST, DAST, penetration tests, etc. things that force a leftward movement (a reset) in the pipeline are the problem. Feedback must be immediate, and the path for getting the feedback has to connect with the developer's existing environment. /FIN
— Alyssa Miller (Speaking at All Things Open) (@AlyssaM_InfoSec) September 16, 2020