
Client-Side Authorization

“Client-side authorization (authZ)” is a well-known and critical design error. Well, it is well known to security people, but apparently not to non-security-oriented web app developers, because I keep encountering this error in production. So I wrote security reviews that said “don’t use client-side authZ,” and then discovered that there is no canonical reference for what client-side authZ is and why not to use it. If there is actually a great reference for this, please send it to me and I will revise this post to point at it.

To understand the client-side authZ problem, we have to go back, way back, to before the web was born, and look at the history of thin clients vs. thick clients. In the 1970s, computing was done by mainframe computers, which, if you were lucky[1], had green-screen terminals[2] connected to them. These thin-client terminals handled the business of typing characters and moving the cursor around the screen, and when the user had filled out the form, they could hit “submit” and the whole batch of data was sent to the mainframe for processing. This offloaded a large amount of compute from the mainframe, so that it did not have to handle every keystroke from every user.

Fat/thick clients took this even further. Before the web became popular in the mid-1990s, businesses would develop their business applications as C/C++ desktop software that provided a custom GUI for entering data into fields and forms and submitting that data to a centralized server. The PCs and the software running on them were considered trusted, and so business logic, such as “spend must not exceed the employee’s spending limit,” could be enforced in the client, and the data then simply submitted to the server.

Trouble happened when fat-client software was ‘ported’ from native PC client code to the web. New client software was written in JavaScript/HTML5 to execute in web browsers. Because it was a port, this web client enforced the same set of rules, so the server did not have to change what it was doing. Problem: browsers are not trusted software, especially on the Internet. A malicious employee can just press F12 in their browser, edit the JS code, and bypass all the business logic that was supposed to be enforced in the browser.
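To make that concrete, here is a hypothetical sketch (in TypeScript) of the kind of client-side check being described; the names (submitExpense, SPENDING_LIMIT, /api/expenses) are illustrative, not taken from any real application. Everything in it runs in the user’s browser, so a motivated user can raise the limit, delete the check, or call the API directly from the developer console.

```typescript
// Hypothetical client-side "enforcement" of a spending limit; runs in the browser.
const SPENDING_LIMIT = 500; // baked into a page the user fully controls

async function submitExpense(amount: number, description: string): Promise<void> {
  // This check lives in the user's browser: the user can edit it away in
  // devtools, change SPENDING_LIMIT, or just call fetch() from the console.
  if (amount > SPENDING_LIMIT) {
    alert("Amount exceeds your spending limit");
    return;
  }
  await fetch("/api/expenses", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ amount, description }),
  });
}
```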

So why did people think that those PC fat clients were trusted? It required that the user not have write access to the client application software, either on disk (folders such as C:\WINDOWS\system32) or in memory (having enough privileges to attach a debugger to the client application’s processes). The usual way to achieve that is to make users run as true standard users, i.e., non-admin[3]. Unfortunately, running Windows as a non-admin is more hassle than many people will tolerate, so that step is often skipped, and the client software is not as secure as had been assumed.

So, when writing web apps, and especially when porting thick-client software to the web, never attempt to enforce a business rule that matters in the browser code. Put another way: always enforce all rules on the server, and assume that all data coming into your web APIs is being sent by a maximally evil attacker.
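As a rough illustration of what server-side enforcement might look like, here is a minimal sketch using Node/Express (an assumption; the post does not prescribe any particular stack), with the route, limit lookup, and helper names all hypothetical. The point is only that the check against the spending limit happens on the server, using data from a trusted store, after treating the request body as untrusted.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Illustrative stand-in: in a real system the limit comes from a trusted
// server-side store (database, HR system), never from anything the client sent.
async function getSpendingLimit(userId: string): Promise<number> {
  return 500;
}

// Placeholder only: a real implementation must resolve the user from a
// server-validated session or token, never from client-supplied data.
function authenticatedUserId(req: express.Request): string {
  return "employee-123";
}

app.post("/api/expenses", async (req, res) => {
  const userId = authenticatedUserId(req);
  const amount = Number(req.body?.amount);

  // Treat everything in the request as attacker-controlled.
  if (!Number.isFinite(amount) || amount <= 0) {
    return res.status(400).json({ error: "invalid amount" });
  }

  // The rule that matters is enforced here, on the server, regardless of
  // what the browser-side code did or did not check.
  const limit = await getSpendingLimit(userId);
  if (amount > limit) {
    return res.status(403).json({ error: "amount exceeds spending limit" });
  }

  // ... record the expense ...
  return res.status(201).json({ ok: true });
});

app.listen(3000);
```

The client-side check from the earlier sketch can still exist for usability (instant feedback in the form), but it is never the thing standing between the attacker and the action; the server-side check is.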

[1] If you were not lucky, you had punch cards.

[2] Such as the IBM 3270 https://en.wikipedia.org/wiki/IBM_3270 or the DEC VT100 https://en.wikipedia.org/wiki/VT100

[3] https://blogs.msdn.microsoft.com/aaron_margosis/2004/06/17/the-easiest-way-to-run-as-non-admin/