Respect XSS had a nice blog post on a stored XSS vulnerability that existed a while back on the Mozilla Add-Ons website. Stored XSS is where the attack payload is stored server-side, likely in a database table, and is loaded from the database over and over, every time any user goes to a particular page. The issue was with the Name field: if you entered html/javascript into the Name field, it was actually stored in a database table (probably in a name column), and on the next page the name column was pulled out, still unsanitized, and displayed as the title of the page. So an attacker could name their collection with javascript instead of a real name, and anytime a person went to that collection the javascript would execute. Classic XSS. This could lead to session or cookie theft, keyloggers, malware installation, browser hijacking, and much more. XSS is bad, and as that blog says, "Respect" it.
What I thought was worth discussing a bit further, though, were 2 other XSS fixes (albeit simple ones) that the same blogger found and mentioned at the bottom of the post. There were 2 github check-ins worth looking at.
1.) forms.js fix
The above change is in the populateErrors function, which loops through each error message and prints it out. This appears to be related to user input validation: if user input was incorrect, an error would display, and the error display field was actually vulnerable to XSS, meaning javascript could be executed. The fix is very simple: any output that is controllable by a user in any way should first be escaped/sanitized. This project uses the _.escape function to escape the user input before displaying it.
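To illustrate the pattern (a minimal sketch, not the project's actual code), here escapeHtml is a hand-rolled stand-in for underscore's _.escape, and renderErrors is a hypothetical version of the error loop:

```javascript
// Minimal stand-in for underscore's _.escape: replace the characters
// that enable HTML/JS injection with their entity equivalents.
// Note: '&' must be replaced first so earlier entities aren't re-escaped.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}

// Hypothetical error renderer in the spirit of populateErrors:
// escape every user-influenced message before it hits the page.
function renderErrors(errors) {
  return errors.map(function (msg) {
    return '<li>' + escapeHtml(msg) + '</li>';
  }).join('');
}
```

With this in place, a payload like `<script>alert(1)</script>` is rendered as inert text rather than executed.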
2.) mobile-search-results.html fix
The above change fixes an issue where the search keyword was being displayed on the search results page unsanitized, and thus was vulnerable to XSS. This project uses Swig javascript templates, which is why you see all the curly braces and percent signs like {%. The fix takes the doc.url (the current url) and encodes it with the encodeURI() javascript function, which encodes the malicious characters that enable XSS before the value is displayed.
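As a rough sketch of the idea (hypothetical function name, not the template's actual code), the user-supplied value gets encoded before being echoed back:

```javascript
// Hypothetical sketch: percent-encode a user-supplied search keyword
// before echoing it back into the results page.
function safeSearchHeading(keyword) {
  // encodeURI() percent-encodes characters like < and > that an
  // attacker needs in order to break out into markup.
  return 'Search results for: ' + encodeURI(keyword);
}
```

Note that encodeURI() is meant for URL contexts; for values rendered into plain HTML, an HTML escaper like the _.escape call from the first fix is the more general tool.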
It's just good sometimes to see how simple it is to fix XSS, so take the time (and make the time) to remediate quickly if you find one.
More about neonprimetime
Top Blogs of all-time
Top Github Contributions
Copyright © 2016, this post cannot be reproduced or retransmitted in any form without reference to the original post.
Thursday, March 17, 2016
Unvalidated Redirects
Today there was a Krebs article on spammers abusing .gov domains. The main focus of the article is on urls like the one below, which he referred to as "open redirects": they take a url as a parameter and redirect the user to that url.
http://www.mywebsite.com/redirect.aspx?url=www.evilwebsite.com
I thought it was worth talking a bit more about this situation. The OWASP Top 10, a very popular list of the top 10 web vulnerabilities that web developers should look for and remediate, describes this as an Unvalidated Redirect. Since it's listed in the OWASP Top 10, this is something that, for example, PCI compliance standards require to be checked for and remediated immediately if identified. (E.g. PCI DSS says "6.5.1 through 6.5.10 were current with industry best practices when this version of PCI DSS was published. However, as industry best practices for vulnerability management are updated (for example, the OWASP Guide, SANS CWE Top 25, CERT Secure Coding, etc.), the current best practices must be used for these requirements.").
There are a few scenarios I can envision where developers may need a "redirect" parameter ...
1.) Login Page Returns - If the website you're building has an area where you log in, a user may try to reach a page when they're not logged in yet or their session has expired. As a developer you redirect the user to the login page, but you want a "url" parameter that can return the user to the page they were on after successfully authenticating.
2.) Workflow Navigation - If the website has a complicated workflow or user flow and you need to know which page to go to next, you might need a url parameter. For example, you may need to go from a product page, to a shopping cart page, to a checkout page, to a receipt page, but users can go back and forth and perhaps you inject upsell pages, etc.
3.) Click Tracking - If you are sending a user to another website (perhaps for an advertisement, or to a 3rd party, etc.) and you want to track who clicked through, or how many times a page was clicked, in order to collect revenue or gather analytics, then you might want a centralized "redirect" page that takes a url, records the analytics/statistics, and then redirects to the final location.
Having a url parameter like the above, for any reason, without validating it can create multiple security concerns such as ...
1.) Impact Trust - Users are getting security education from their companies. They have been trained to hover over hyperlinks before they click. Users generally look at one thing: the primary domain name. If they see www.mywebsite.com, and it's spelled correctly, then they'll trust it. The problem with unvalidated redirects is that the primary domain is your good one, www.mywebsite.com, so a user hovers and trusts it, but when they click they are actually taken to an untrusted evil site.
2.) Phishing - A common practice is for an attacker to clone or make a copy of your website and host it on their own website. For example, they might make a copy of www.mywebsite.com (copy the pages, images, styles, content, etc.) and host it on another similar url such as www.my-website.com. Then they'll use your vulnerable redirect page
http://www.mywebsite.com/redirect.aspx?url=www.my-website.com
Then they send it in an email saying "Urgent, please re-enter your credentials". The user sees the email, hovers over the link, sees that it's from the trusted www.mywebsite.com, clicks it, and ends up on the evil website (www.my-website.com) that looks identical (cloned) to the trusted site. The only difference is the hyphen in the url. The user doesn't notice the hyphen since everything else looks identical, so the user enters their username & password, and since the user is actually on the attacker's website, the attacker has just stolen the user's credentials.
3.) Install Malware - Similar to the phishing example, if the attacker can get the user to hover over the trusted url, but be redirected to the attacker's website, the attacker can write javascript or some plugin code on their evil site that installs malware onto the user's workstation. And the user really did nothing wrong, because they thought they were clicking on the trusted website link.
This issue can be fixed by web developers with a few simple methods ...
1.) Eliminate User Controlled Redirects - The obvious solution is to eliminate url parameters that are editable by the user. For example, don't use query strings or cookies; instead use server-side session variables, database tables, or some other storage that prevents a user from modifying the value at all.
2.) Url Maps - If the parameter cannot be eliminated, then another option is creating url maps. Basically you have a database table or similar that holds all possible urls that can be redirected to, and each is given an identifier:
1 - http://www.google.com
2 - http://www.ebay.com
3 - http://www.amazon.com
Then you use the identifier (1,2,3) to redirect such as
http://www.mywebsite.com/redirect.aspx?url=2
Then the user would get redirected to www.ebay.com because url #2 was chosen. This way the attacker can only choose urls that you control and allow (1, 2, or 3) and cannot inject any evil urls.
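A minimal sketch of the url-map approach (hypothetical names, assuming the map lives server-side where the user can't touch it):

```javascript
// The query string carries only an identifier; the server resolves it
// against a fixed, server-controlled table of allowed destinations.
const urlMap = {
  1: 'http://www.google.com',
  2: 'http://www.ebay.com',
  3: 'http://www.amazon.com'
};

function resolveRedirect(idParam) {
  const target = urlMap[idParam];
  // Unknown identifiers never redirect; fall back to a safe default.
  return target || '/';
}
```

Here resolveRedirect('2') yields the ebay url, while resolveRedirect('http://www.evilwebsite.com') falls back to '/', because an attacker can only pick from keys you defined.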
3.) Domain Whitelisting - Another valid solution besides url mapping is to whitelist domains. You have a database table or regular expression that every url is run through BEFORE the redirect actually occurs.
Sample Regex: (?i)^https?:\/\/www\.(mywebsite|google|ebay|amazon)\.com
On the server-side you would take the url that was passed into the parameter and validate it matches the whitelist or regular expression prior to redirecting. This way, again, an attacker can only enter a url that matches the ones you allow, and everything else is blocked. PLEASE NOTE this also applies to same-origin redirects. Even if your parameter is just redirecting to relative paths, you should still perform whitelisting or validation to ensure a.) that the attacker hasn't found a way to redirect away from your origin, and b.) that the attacker hasn't found a way to upload a new page to your website and redirect to that new page.
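A sketch of that server-side check (hypothetical function name; this version also requires the match to end right after the domain, so lookalike hosts such as www.google.com.evil.com are rejected):

```javascript
// Whitelist check: only redirect when the url matches an allowed domain.
// The trailing (\/|$) forces the allowed domain to be the full hostname.
const allowed = /^https?:\/\/www\.(mywebsite|google|ebay|amazon)\.com(\/|$)/i;

function validateRedirect(url) {
  // Returning null signals the caller to skip the redirect entirely.
  return allowed.test(url) ? url : null;
}
```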
Thus the Krebs article was very relevant to a common issue that impacts many websites and web developers. Please take the time to check for these types of vulnerabilities, take them seriously, and remediate them.
Tuesday, March 1, 2016
Developing Pedagogical Visualizations of Dense Matrix Operations on Interconnection-network SIMD Computers
Throwback Tuesday Developing Pedagogical Visualizations of Dense Matrix Operations on Interconnection-network SIMD Computers
Don't Write your own XSS Filter
There was a recent blog by Sjoerd Langkemper that walked through bypassing XSS filters. It's a great example of why, as a web developer, you should NOT write your own XSS filter, but instead use a trusted and vetted security library written and reviewed by the pros. By custom XSS (or SQLi) filter, I mean you should not try to write your own regular expressions, pattern matching, character blacklists, etc. It's just too complex and you're bound to miss something or make a mistake. You need to use a library that everybody else has reviewed and that is known to be correctly written and secure.
In the blog he provides a great example. There was a regex written to remove this malicious pattern
(javascript\s*:)
And it would work great if the attacker followed the traditional pattern and entered malicious code like this
<a href="javascript:alert('test')">link</a>
But what if the attacker varied a little bit and URL encoded the letter s?
<a href="java&#115;cript:alert('xss')">link</a>
Uh-oh, your attacker just bypassed your XSS filter and your website is vulnerable to XSS.
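You can watch the bypass happen by running the filter's regex against both payloads (in html, &#115; is simply the character reference for the letter s, which the browser decodes before following the link):

```javascript
// The naive filter looks for the literal scheme "javascript:".
const filter = /javascript\s*:/i;

const classic = "<a href=\"javascript:alert('xss')\">link</a>";
const encoded = "<a href=\"java&#115;cript:alert('xss')\">link</a>";

console.log(filter.test(classic)); // true: the filter catches this one
console.log(filter.test(encoded)); // false: the payload slips through,
                                   // yet the browser still executes it
```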
Here's another example of a decent regex to block javascript event attributes.
(ondblclick|onclick|onkeydown|onkeypress|onkeyup|onmousedown|onmousemove|onmouseout|onmouseover|onmouseup|onload|onunload|onerror)=[^<]*(?=\>)
But guess what, you missed one (or probably many). What about onmouseenter?
<div onmouseenter="alert('xss')">
Please trust me when I say you can't do it yourself. I would never attempt it and you shouldn't either. Use a trusted library that covers all these scenarios and has thought of all the things you would have forgotten.
HTTP Login Pages with HTTPS Posts
A while back Troy Hunt talked about HTTP login forms that post to HTTPS. The long story short is that these are still insecure. As a web developer, don't be fooled into thinking that just because you're POSTing to HTTPS your customers are safe. No, you need an HTTPS login form/page or you're at risk. The HTTPS POST may prevent sniffing because the traffic is encrypted, but with an insecure HTTP form posting to HTTPS you are still at risk of man-in-the-middle. With a man-in-the-middle attack, the form action url could be tampered with and changed so your credentials get posted to some attacker website instead of the real one.
Now Firefox will finally make this even clearer by warning users if they're logging in on a website with this insecure configuration.
EMET Blog
DFIR wrote a good, simple-to-read blog post about EMET, Microsoft's tool that blocks things like buffer overflow exploits in userland.
Insecure Direct Object Reference 101
As a web developer, have you ever gone through a code review or used the OWASP Top 10, gotten to "Insecure Direct Object Reference", and wondered what it means?
Well, Adam Logue recently posted a blog about a real world example of Insecure Direct Object Reference going bad.
The blog talks about a vulnerability they discovered on TGI Fridays' mobile website. There was an HTTP GET request sent to the TGI Fridays server that passed a parameter (shown below) called 'acctid'.
GET /alchemy-master/ws/TgifAccountActivity.asmx/AccountActivity?stoken=8970853507518770&acctid=123213123
This 'acctid' was the account id of the user and could be used to redeem free food at the restaurant. Thus all an attacker had to do was replace their account id with some other user's account id, and then instead of redeeming their own points, they would be redeeming somebody else's. Thus FREE FOOD! This is a great example of an Insecure Direct Object Reference. Poor programming.
Here are 2 ways this could've been prevented if you were the web developer writing the code.
a.) Check access. Before committing those changes to the database, confirm ... does the account id match the user logged in? If no, deny.
b.) Use indirect object references. In this example, let's say the logged-in user has 3 gift card account numbers (1=43554345, 2=344234, 3=4444422). Instead of passing the actual account numbers as query string parameters (43554345, 344234, 4444422), pass indirect/mapped references such as the numbers 1, 2, or 3 ... and then on the server unmap them to determine that gift card 1=43554345, 2=344234, and 3=4444422. This way the attacker could only inject the numbers 1, 2, or 3, which all belong to this user, and thus could not inject an account number of another user.
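A rough sketch combining both ideas (hypothetical names; the mapping would live in the server-side session, never in the page):

```javascript
// The session holds the indirect map: index -> this user's real account numbers.
const session = {
  user: 'alice',
  giftCards: { 1: 43554345, 2: 344234, 3: 4444422 }
};

function redeem(session, indirectId) {
  const acctid = session.giftCards[indirectId];
  if (acctid === undefined) {
    // The id doesn't map to one of THIS user's cards: the access check fails.
    throw new Error('access denied');
  }
  return 'redeeming account ' + acctid;
}
```

An attacker tampering with the parameter can only ever name indexes 1-3, all of which resolve to their own accounts, so the check and the indirection together close the hole.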
ModSecurity Virtual Patching 101
There is a great article by the High-Tech Bridge Security Research team about the open source WAF ModSecurity. I thought it'd be interesting to cover a few of the topics they mentioned at a high level.
Have you ever had a scenario where a security vulnerability was identified (perhaps by a scanner, an outside researcher, etc.) but you were unable to immediately patch it? Perhaps you were in the middle of a large project and had no resources. Perhaps the vulnerability was in a fragile, high risk area of the site and numerous hours or days of testing are required. Perhaps the site is hosted/built by a 3rd party and you have to deal with formalities and other delays. A possible solution to any of these problems is to apply a temporary "virtual patch" with your WAF in order to block the attack from occurring until you get the developers to build & test the real patch. Remember you still want to perform real patching; virtual patching should only be temporary, because a WAF is just another layer, and that layer could have vulnerabilities or weaknesses of its own (such as WAF bypasses). Thus the only real way to prevent exploitation is to apply a full patch.
But for the temporary fix, you might be wondering ... what does a virtual patch look like? Essentially you write a rule (think of it as similar to a Snort IDS/IPS rule) that restricts what data can be utilized on the website, hopefully allowing the good data and blocking the attacker's data.
XSS Example
Exploit Url: http://www.mysite.com/product.aspx?productid=alert(document.cookie)
Virtual Patch:
SecRule REQUEST_FILENAME "/product.aspx" "phase:2, t:none, t:normalisePath, t:lowercase, t:urlDecodeUni, chain, deny, log, id:1001"
SecRule ARGS_GET:productid "!^[0-9]+$" "t:none"
To explain further, let's say in the example above that you confirm the productid parameter on the product.aspx page is vulnerable to XSS but you cannot apply a permanent patch yet. Thus you want to create a temporary WAF virtual patch to block attackers from exploiting it. The 'SecRule' keyword allows you to analyze and act upon variables. You'll notice there are 2 lines, so we are analyzing 2 variables. The 1st is the 'REQUEST_FILENAME' variable, which holds the name of the file being requested. In this case we validate that it's the product.aspx page. Then we can set a bunch of actions. The first one I want to point out is the word 'chain'. This indicates that multiple 'SecRule's are getting chained together (in this case our 2 lines / 2 variables being compared). It also says 'deny' and 'log', which means if the chained rules match, we deny the request and log it. Just like a Snort rule, there is an 'id' for tracking. There are also several actions that start with the letter 't', which stand for transformation functions: 'none' starts you with a clean slate, 'lowercase' does all the comparisons in lowercase, 'normalisePath' eliminates things like double slashes, and 'urlDecodeUni' decodes URL-encoded input, including Unicode %u encodings. The other action in the first line is 'phase:2', which tells the WAF to evaluate the rule against the request body. Phase 1 is the request headers, phase 2 is the request body, phase 3 is the response headers, phase 4 is the response body, and phase 5 is logging. Specifying the right phase matters for performance, since the rule only runs at that point in the transaction.
The second line is another 'SecRule' on a variable called 'ARGS_GET'. More specifically, it's comparing the value of the 'productid' query string argument. This line creates a whitelist to basically allow the good data and block the attacker's bad data. In this case it provides a regular expression that says productid can only contain numbers (1 to many). Thus by allowing only numbers, the WAF will 'deny' the request and 'log' it if anybody tries to pass anything other than numbers into the productid parameter. Just like that, you've prevented the XSS.
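In plain code, the chained pair of rules boils down to something like this (a sketch of the logic only, not how ModSecurity is implemented):

```javascript
// Deny requests to /product.aspx whose productid is not purely numeric;
// everything else passes through untouched.
function virtualPatch(filename, productid) {
  const onVulnerablePage = filename.toLowerCase() === '/product.aspx';
  const idIsNumeric = /^[0-9]+$/.test(productid);
  return (onVulnerablePage && !idIsNumeric) ? 'deny' : 'allow';
}
```

So virtualPatch('/product.aspx', '42') is allowed, while virtualPatch('/product.aspx', "alert(document.cookie)") is denied.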
SQLi Example
Exploit Url: http://www.mysite.com/search.aspx?keyword=value';insert+into+user+('admin','password');--
Virtual Patch:
SecRule REQUEST_FILENAME "/search.aspx" "phase:2, t:none, t:normalisePath, t:lowercase, t:urlDecodeUni, chain, deny, log, id:1002"
SecRule ARGS:keyword "'" "t:none, t:urlDecodeUni"
Just to provide a second example, above is a url you've identified as having a keyword parameter vulnerable to SQL injection. In the case above, the attacker terminates the keyword value in SQL with the apostrophe, then inserts an admin user into the user table, then comments out the rest of the SQL. To prevent this we chain 2 'SecRule's again. We first check that we're on the vulnerable 'search.aspx' page, and we're going to 'deny' and 'log' again.
The second line then looks for the 'keyword' query string parameter, and if it contains an apostrophe or any url-encoded variation of one, it will 'deny' the request. Thus you've temporarily prevented the SQL injection.
The article has many more great examples of how to block things like CSRF, Path Traversal, etc.
More about neonprimetime
Top Blogs of all-time
Top Github Contributions
Copyright © 2016, this post cannot be reproduced or retransmitted in any form without reference to the original post.
Have you ever had a scenario where a security vulnerability was identified (perhaps by a scanner, or an outside resources, etc.) but you were unable to immediately patch it. Perhaps you were in the middle of a large project and had no resources. Perhaps the vulnerability was in a fragile high risk area of the sites and numerous hours or days of testing are required. Perhaps the site is hosted/built by a 3rd party and you have to deal with formalities and other delays. A possible solution to any of these problems would be to apply a temporary "virtual patch" with your WAF in order to block the attack from occurring until you get the developers to build & test the real patch. Remember you still want to perform real patching, your virtual patching should only be temporary because WAFs are just another layer, and that layer could also have vulnerabilities or weaknesses of their own (such as WAF bypasses). Thus the only real way to prevent exploit is to perform a full patch.
But for the temporary fix, you might be wondering ... what does a virtual patch look like? Well essentially you can write a rule (think of it as similar to a SNORT IDS/IPS rule) that restricts what data can be utilized on the website to hopefully allow the good data and block that attackers data.
XSS Example
Exploit Url: http://www.mysite.com/product.aspx?productid=alert(document.cookie)
Virtual Patch:
SecRule REQUEST_FILENAME "/product.aspx" "phase:2, t:none, t:normalisePath, t:lowercase, t:urlDecodeUni, chain, deny, log, id:1001"
SecRule ARGS_GET:productid "!^[0-9]+$" "t:none"
To explain further, let's say in the example above that you confirm the productid parameter on the product.aspx page is vulnerable to XSS but you cannot apply permanent patch yet. Thus you want to create a temporary WAF virtual patch to block attackers from exploiting it. The 'SecRule' keyword allows you to analyze and act upon variables. You'll notice there are 2 lines thus we are analyzing 2 variables. The 1st is the 'REQUEST_FILENAME' variable and it holds the name of the file being requested. In this case we validate that it's the product.aspx page. Then we can set a bunch of actions. The first one I want to point out is the word 'chain'. This indicates that there are multiple 'SecRule's that are getting chained together (in this case our 2 lines/2 variables we're comparing). Also it says 'deny' and 'log' which means if these chained rules match we are denying and logging it. Just like a snort rule there is an "id" also for tracking. There are also a bunch that start with the letter 't' which stand for transformation functions. 'none' starts you with a clean slate, then it's saying do all the comparisons in 'lowercase', and use the 'normalisePath' to eliminate any double slashes, and use unicode with 'urlDecodeUni'. The other action in the first line is 'phase:2' which indicates for the WAF to look at the Request. Phase 1 is the request headers, Phase 2 is the request, Phase 3 is the Response headers, phase 4 is the Response, and Phase 5 is logging. The phase is for performance.
The second line is another 'SecRule' on a variable called 'ARGS_GET'. More specifically, it's comparing the value of the 'productid' query string argument. This line creates a whitelist to allow the good data and block the attacker's bad data. In this case it's providing a regular expression that says productid can only contain numbers (one or more digits). Since the pattern is negated with '!', the WAF will 'deny' the request and 'log' it if anybody tries to pass anything other than numbers into the productid parameter. Just like that you've prevented the XSS.
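To make the whitelist logic concrete, here is a minimal Python sketch of what the second rule's check amounts to. This is only an illustration (the function name and test values are hypothetical); in reality ModSecurity itself performs the match against the negated pattern "!^[0-9]+$".

```python
import re

# Stand-in for the ARGS_GET:productid whitelist in rule id:1001.
# The WAF denies the request when productid is anything other
# than one or more digits.
PRODUCTID_WHITELIST = re.compile(r"^[0-9]+$")

def is_blocked(productid: str) -> bool:
    """Return True if the virtual patch would deny this productid value."""
    return not PRODUCTID_WHITELIST.match(productid)

print(is_blocked("12345"))                                    # legitimate id -> False (allowed)
print(is_blocked("<script>alert(document.cookie)</script>"))  # XSS payload  -> True (denied)
```

Note the pattern is anchored with '^' and '$' so a payload can't hide by merely containing some digits; the entire value must be numeric to pass.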
SQLi Example
Exploit Url: http://www.mysite.com/search.aspx?keyword=value';insert+into+user+values+('admin','password');--
Virtual Patch:
SecRule REQUEST_FILENAME "/search.aspx" "phase:2, t:none, t:normalisePath, t:lowercase, t:urlDecodeUni, chain, deny, log, id:1002"
SecRule ARGS:keyword "'" "t:none, t:urlDecodeUni"
Just to provide a second example, above is a url where you've identified the keyword parameter as vulnerable to SQL injection. In the case above, the attacker terminates the keyword value's string literal with an apostrophe, inserts an admin user into the user table, and then comments out the rest of the SQL with '--'. To prevent this we chain 2 'SecRule's again. We first check that we're on the vulnerable 'search.aspx' page, and we're going to 'deny' and 'log' again.
The second line then looks at the 'keyword' query string parameter, and if it contains an apostrophe (or any URL-encoded variation of one, thanks to 't:urlDecodeUni'), it will 'deny' the request. Thus you've temporarily prevented the SQL injection.
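The apostrophe check above can be sketched in Python as follows. This is a rough stand-in, not the real WAF behavior: the function name is hypothetical, and 'unquote_plus' only approximates 't:urlDecodeUni' (it handles standard %XX decoding but not IIS-style %uXXXX Unicode encoding, which ModSecurity also decodes).

```python
from urllib.parse import unquote_plus

# Stand-in for rule id:1002's ARGS:keyword check. The t:urlDecodeUni
# transformation means the value is URL-decoded before matching, so an
# encoded apostrophe (%27) is caught as well as a literal one.
def is_blocked(keyword: str) -> bool:
    """Return True if the virtual patch would deny this keyword value."""
    decoded = unquote_plus(keyword)
    return "'" in decoded

print(is_blocked("blue widgets"))        # normal search      -> False (allowed)
print(is_blocked("value%27;insert"))     # encoded apostrophe -> True (denied)
```

This is a blacklist rather than a whitelist, so it is narrower protection than the productid rule: it only stops injections that rely on the apostrophe, which is why it's a temporary measure until the real fix (parameterized queries) ships.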
The article has many more great examples of how to block things like CSRF, Path Traversal, etc.
More about neonprimetime
Top Blogs of all-time
Top Github Contributions
Copyright © 2016, this post cannot be reproduced or retransmitted in any form without reference to the original post.
Labels:
ModSecurity,
Patching,
SQL Injection,
SQLi,
Virtual Patching,
WAF,
Web Application Firewall,
XSS
Get-Hotfix PowerShell
I thought this PowerShell command was simple but useful.
$> Get-Hotfix KB958488
It will look up a hotfix on the current computer/server you're on and tell you whether it's installed, and if so, on what date and by whom. If you leave off the KB number, it will just list out all hotfixes already installed. Pretty helpful! There's more info at TechNet.
Copyright © 2015, this post cannot be reproduced or retransmitted in any form without reference to the original post.