Thursday, September 24, 2015

The Intranet of things

The "intranet" of things is real, it's here, beware!

Why am I saying intranet and not internet?

Read this Krebs article on the Target breach. The best and scariest quote is "able to communicate directly with cash registers in checkout lanes after compromising a deli meat scale located in a different store".


Copyright © 2015, this post cannot be reproduced or retransmitted in any form without reference to the original post.

TLS Best Practices Guide Summary

The TLS Deployment Best Practices from SSL Labs is a good read. Thought it might be good to summarize some of it.

Private Key Strength - In order of strength it's 1024-bit RSA (weak, should be replaced), then 2048-bit RSA (should be safe for a while), and then 256-bit ECDSA (best right now if your clients support it).

Private Key Protection - Guard it with your life, password protect, minimal access, and revoke if compromised.

Private Key Source - Get your keys from a reputable large CA that has strong security posture and services available like simple revocation.

Hash Function - SHA1 is weak and should be replaced immediately; SHA2 is the current standard, as long as your users can support it.

Protocols - SSL (v2 and v3) are considered broken and should not be used. TLS v1.0 is still considered acceptable, but v1.1 and v1.2 have advantages that will soon lead to the decommissioning of v1.0.

Cipher Suites - You should only be using suites with 128-bit encryption or stronger. Also note: Anon key exchange suites provide no authentication, NULL suites provide no encryption, Export key exchanges are broken and should not be used, and RC4 is broken and should not be used.

Cipher Suite Selection - Clients submit a list of supported suites and the server chooses one. Having the suites ordered by strength on the server is important so the highest security available is utilized.
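
A toy sketch of that selection logic (the suite names and preference list are just illustrative examples, not a recommended configuration):

```python
# Hypothetical server preference list, ordered strongest first.
SERVER_PREFERENCE = [
    "ECDHE-RSA-AES256-GCM-SHA384",
    "ECDHE-RSA-AES128-GCM-SHA256",
    "AES128-SHA",
]

def choose_suite(client_offer):
    """Pick the first suite in the *server's* order that the client also supports."""
    for suite in SERVER_PREFERENCE:
        if suite in client_offer:
            return suite
    return None  # no overlap -> handshake failure

# Even if the client lists a weak suite first, the server's ordering wins.
print(choose_suite(["AES128-SHA", "ECDHE-RSA-AES128-GCM-SHA256"]))
# -> ECDHE-RSA-AES128-GCM-SHA256
```

If the server instead honored the client's ordering, any client (or downgrade attacker) could steer the connection to the weakest mutually supported suite.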

Forward Secrecy - This is a feature where TLS does not derive session keys from the server's private key, but instead negotiates a fresh ephemeral key for each session. The benefit is that if the server's private key is later compromised, historical traffic cannot be decrypted because each session used a different key.
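
The idea can be sketched with a toy Diffie-Hellman exchange (a tiny demo prime, far too small for real use; real TLS uses large groups or elliptic curves):

```python
import secrets

# Demo prime and generator -- illustration only, not cryptographically useful.
P, G = 23, 5

def new_session():
    a = secrets.randbelow(P - 2) + 1   # server's ephemeral secret
    b = secrets.randbelow(P - 2) + 1   # client's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)  # public values exchanged in the handshake
    # Each side combines its own secret with the other's public value...
    return pow(B, a, P), pow(A, b, P)

# ...and both arrive at the same session key, which never crossed the wire.
server_key, client_key = new_session()
assert server_key == client_key
```

The ephemeral secrets are thrown away after the session, so capturing the traffic plus later stealing the server's long-term key still doesn't let you decrypt it.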

Renegotiation - It's ok for a Server to initiate a re-negotiation of TLS settings, but there is no reason a Client should. Thus configure the server to ignore Client re-negotiation requests.

TLS & HTTP Compression - They're insecure, and most clients don't support TLS compression anyway, so disable it.

Mixed Content and HSTS - Avoid mixed content of HTTP and HTTPS. Go all or nothing to HTTPS. Otherwise it leads to user confusion, mis-trust, and also some known vulnerabilities related to mixes. Use HSTS (Strict Transport) which tells the browser your site is only ever HTTPS.
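
For example, a minimal sketch of emitting the HSTS response header (the max-age value here is illustrative, one year in seconds):

```python
# Hypothetical helper that builds the HSTS response header.
def hsts_header(max_age_seconds=31536000, include_subdomains=True):
    value = "max-age=%d" % max_age_seconds
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

print(hsts_header())
# -> ('Strict-Transport-Security', 'max-age=31536000; includeSubDomains')
```

Once a browser has seen this header over HTTPS, it rewrites future http:// requests to that site to https:// on its own.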

Cookies - Cookies must be secured properly by the developer or else it can render your HTTPS ineffective in protecting session and other sensitive information.
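
As a sketch, Python's standard library can set the two flags that matter most here: Secure (only sent over HTTPS) and HttpOnly (not readable from javascript). The 'sessionid' name and value are hypothetical:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionid"] = "abc123"
cookie["sessionid"]["secure"] = True     # never sent over plain HTTP
cookie["sessionid"]["httponly"] = True   # not accessible to scripts
print(cookie.output())
```

Without those flags, a single plain-HTTP request or an XSS bug can leak the session id, no matter how well the TLS layer itself is configured.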

EV Certs - You can go one step beyond and get an Extended Validation certificate, which goes through extra verification checks and is harder to forge.

Public Key Pinning - Another above and beyond step that specifies which CAs can issue certificates for your websites.

ECDSA Certs - Elliptic curve certificates provide more strength from smaller key sizes, which increases performance.

Validate your settings - Run recurring tests, like the SSL Labs scan, to ensure your settings remain secure.

Keep in mind this information is accurate as of the Dec 2014 publishing of the best practice guide and new information may have surfaced since then that changes some of these stances.


Wednesday, September 23, 2015

WAF Is Not an Excuse to Ignore a Vulnerability

WAFs are great, they can add an extra layer, and they can make attacks more difficult. But they are not the end-all-be-all. They have their flaws and thus as a developer you still need to write secure code and fix known open vulnerabilities. I thought it'd be interesting to review some of the concepts found in the Bypass WAF Cookbook to illustrate how this can be.

.NET specific % symbol - Some versions of IIS/ASP allow the % character in the url but actually ignore it when processing. Therefore if your url contained 'sele%ct * from users' then IIS/ASP will actually just run 'select * from users'. Why is this a problem? If you wrote a Snort, IDS, or WAF regex rule to search for the word 'select', then 'sele%ct' may not match, but will still run in IIS/ASP, so you just found a way to possibly bypass the WAF (if it can't handle that) and perform some sql injection attacks!
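
A quick Python sketch of why the naive signature misses (the payload and the one-line "normalization" are illustrative stand-ins for the WAF rule and the IIS/ASP behavior):

```python
import re

RULE = re.compile(r"select", re.IGNORECASE)  # naive WAF/IDS signature

payload = "sele%ct * from users"
assert RULE.search(payload) is None          # the WAF sees nothing suspicious

# A backend that drops stray '%' characters executes the normalized string:
normalized = payload.replace("%", "")
assert RULE.search(normalized) is not None   # ...which is a real 'select'
```

The mismatch is the whole bypass: the WAF inspects one string, the backend executes a different one.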

.NET specific %u symbol - Some versions of IIS/ASP allow %u to specify unicode characters instead of ascii. Therefore if your url contained 'sel%u0065ct * from users' then IIS/ASP will actually just run 'select * from users', because %u0065 is unicode for 'e'. Why is this a problem? If you wrote a Snort, IDS, or WAF regex rule to search for the word 'select', then 'sel%u0065ct' may not match, but will still run in IIS/ASP, so you just found another way to bypass the WAF. Now WAFs may be getting smarter and learning tricks like this, but it's difficult if not impossible to capture all these scenarios. Like the author mentioned, a windows firewall bypass was found where, in multibyte unicode sets, multiple codes sometimes resolve to the same character, so %u0065 and %u00f0 might both resolve to 'e'.
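
Here's a minimal sketch of that %u decoding mismatch (the decoder is a simplified stand-in for what those IIS/ASP versions did, not their exact behavior):

```python
import re

def decode_percent_u(s):
    # Simplified stand-in for the non-standard %uXXXX decoding.
    return re.sub(r"%u([0-9a-fA-F]{4})",
                  lambda m: chr(int(m.group(1), 16)), s)

payload = "sel%u0065ct * from users"
assert "select" not in payload                 # signature misses the raw bytes
assert "select" in decode_percent_u(payload)   # but the backend sees 'select'
```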

Apache specific http methods - Some versions of Apache are too lax in their http method syntax; you don't even need the word 'GET' in the request and Apache will still perform one. Thus if your rule specifically looks for the 'GET' keyword, it won't match this malformed request, yet Apache will still serve it ... bypass!

PHP specific normalization issues - Some versions of PHP may parse the Content-Type header in strange ways, which can trick the WAF into thinking the request is for an image while PHP processes a non-image request.

HTTP parameter method changes - There are usually multiple ways to submit parameters, like GET, POST, and Cookies on a website. Sometimes a WAF may look only for GET and POST and thus you can use a Cookie to submit the same parameter and bypass the WAF.

Content-Type header changes - The WAF is inline and has to take performance-related shortcuts, so it may decide to filter out or ignore certain types of data. It's therefore possible to bypass the WAF by tricking it into thinking the request is ignorable ... such as switching the Content-Type to 'multipart/form-data' (a method for transferring bulk form data to a server).

Parameter pollution - Another trick is to send the same parameter multiple times, like 'a=1&a=select * from users'. Now which 'a' will the WAF look at? The first or the last? And which will the web server use? If you can find a mismatch such that the WAF picks one but the Web Server picks the other, then you have a bypass!
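
A small Python sketch of that mismatch (assuming a hypothetical WAF that only inspects the first occurrence while the backend uses the last):

```python
from urllib.parse import parse_qs

qs = "a=1&a=select * from users"   # the same parameter submitted twice
values = parse_qs(qs)["a"]

waf_view = values[0]       # a WAF that only inspects the first 'a'
server_view = values[-1]   # a backend framework that takes the last 'a'

assert "select" not in waf_view    # WAF sees a harmless '1'
assert "select" in server_view     # server executes the injected value
```

Different platforms genuinely disagree on which duplicate wins, which is why this class of bypass keeps working.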

Database tricks - If you have a WAF rule that's looking for a space followed by the word union, you may find bypasses using characters that databases (like mysql) accept in that position, like '\N' ... such that a URL like '\Nunion select * from users' passes by the WAF rule, but the web server still processes it as a valid statement. Of course you can do other tricks too that involve string manipulation functions like CONCAT, SUBSTR, etc. It's not likely the WAF can understand them all, yet the database will know exactly what to do with them.

Performance Bypass - Another concept is that WAFs usually have a timeout period or some performance threshold; if they can't finish analysis in X period of time, they give up and let the request through. Thus if you can find a way to submit a larger or slower-than-normal request that the WAF skips but the web server takes the time to process, you just found a bypass!

Application Layer IP Filtering - Another concept is that some WAFs allow certain IPs to bypass inspection and go directly to the Web Server (perhaps your corporate assets, for performance reasons, etc.). The problem is that some of those headers or attributes can be spoofed, tricking the WAF into thinking you're coming from a different ip (such as using x-forwarded-for, etc.). If you can trick it into thinking you're one of the allowed ips, then bypass!
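
A sketch of that flawed trust decision (the allowlist, header handling, and addresses are all hypothetical):

```python
# Hypothetical allowlist of IPs permitted to skip inspection.
TRUSTED_IPS = {"10.0.0.5"}

def waf_allows_bypass(headers, socket_ip):
    # Flaw: trusting a client-supplied header over the real socket address.
    claimed_ip = headers.get("X-Forwarded-For", socket_ip)
    return claimed_ip in TRUSTED_IPS

assert not waf_allows_bypass({}, "203.0.113.9")  # outsider, no spoofing
assert waf_allows_bypass({"X-Forwarded-For": "10.0.0.5"}, "203.0.113.9")
```

The fix is to make bypass decisions only on data the client can't forge, like the actual source address of the TCP connection.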

There is no simple fix, as you can see the bypasses could occur at the WAF itself, at the Web Server (IIS/Apache), at the language (PHP), at the database (mysql) or anywhere in between. Thus you can't trust WAFs as your only saving grace. Instead you should implement your WAF as 1 layer in your defense that would also include Firewall, IPS, secure coding, etc.


Securing SSL with

I thought this tool seemed pretty simple and handy.

The usage is simple for a linux shell prompt...

And the output is easy to read.

I could see this tool being used by developers as a scorecard to show how well your website HTTPS is configured. Green results being good, red being bad.


Wednesday, September 2, 2015

Cross Site Scripting (XSS) that Port Scans

I wrote a post last year that told you Cross Site Scripting (XSS) is worse than you think. Well, here is another example of why that is true and why, as a developer, you have to take it seriously.

The big question is, did you know that you can essentially perform a port scan (functionality that a tool like NMAP provides) via javascript? Say What?!?!

If that's true, that means when you find an XSS vulnerability in a website, you can inject javascript that will run a port scan. So what you say? As an attacker, I'm outside your network. Where's your end user? They're inside on your network. As an outside attacker I can only port scan your DMZ which is exposed to the outside world. I could never port scan your internal network. Or could I?

In most cases, I'm guessing your end user on your internal network could probably run a port scan of surrounding devices/workstations/servers, right? Guess what. If I can get your user to browse to any website that has an XSS flaw, I can have javascript run inside your user's browser, on your user's workstation, inside your network, and use their workstation to launch a port scan across other internal devices. And I could even return the results by simply also injecting an iframe that calls back to my web server with the scan results.

That sounds like a problem!

In this blog post I'm going to skip past the concept of XSS. You can learn more about what causes it, how to fix it, and how to exploit it at OWASP. What I'm going to do in the rest of the post is show you the more interesting topic of how a port scan could occur in javascript.

At a high level, it's super simple. I'm basically going to make an HTTP request to the ports I want and the devices I want. I can do that by creating html image tags (img) that have a url (src) of http://targetIP:targetPort/testfile.jpg

If the port replies back with a response of any kind, I know it's up. If the port fails to connect or I get no response, I'm going to assume it's down.

Now granted, this method is not anywhere near as reliable as NMAP, but if life gives you lemons why not make lemonade out of it? What do I mean by that? Well, this type of scanning is at the application level. If the application/service running on that port is coded to spit back a response when it receives something that looks like HTTP, then great! But of course, there are many applications that won't respond to malformed or incorrect requests. Thus this method isn't perfect, since it will mistakenly think that some ports are closed, when in actuality they have an application running that simply won't provide a response.

The other primary downfall of this method is that all major browsers now block http requests to well known ports (like 22-ssh, etc.) so those will always be reported as closed via this method.

But, lemons to lemonade, right? So simply look at the bright side ... you are able to perform a semi-accurate port scan on an internal network without being in that network. That's more information than the attacker had to begin with, so I'm sure they'll take it and run. You'll probably end up identifying unusual, unconventional, or non-standard ports being used ... but hey, many times those are the interesting applications anyway.

So, let's get to the port scanning in javascript.

My setup is 2 devices...
- Victim to XSS that unknowingly performs a port scan (virtual box windows desktop)
- Nearby Server that gets scanned (virtual box Kali linux)

The Victim simply has a web browser that loads an html file with the attacker's malicious XSS.

The Nearby Server has SSH running but on a non-standard port (7722), but if I'm the attacker I don't know that yet.

So I find a website vulnerable to XSS, I inject the following evil javascript into a website. I send the link to the victim in an email. They open the email, the vulnerable site loads, and the javascript I injected runs on that victim's browser. But I'm not interested in just hijacking sessions or social engineering the user with XSS, I also want to generate a network map of what else is on the internal network. So perhaps I write a script that cycles through several of the RFC 1918 ips and performs a port scan on them and returns me the results in an iframe http post back to my external server (which will likely get out since port 80 outbound isn't blocked).

As an example I've created a simple javascript method called scan that hits a range of ports on 1 ip.

Scanner.scan(Scanner.printresult, '',7720,7725);

The method is simply looping thru each port in the range and calling the primary method that scans 1 host, 1 port.

Scanner.scan =
  function (callback, hostIP, hostMinPort, hostMaxPort) {
   for (var currentPort = hostMinPort; currentPort <= hostMaxPort; currentPort++){
    Scanner.oneHost(callback, hostIP, currentPort);
   }
  };

The primary function is the funnest part. It simply declares an html img (<img src='' />). Then if the image loads, that means I got a response (might have been http 200, or 302, or 404 ... doesn't matter ... I just know it responded) and thus I know it's open. If it doesn't load but instead hits my timeout period, then I assume it's closed (although as discussed above it might not truly be closed).

var Scanner = {};
Scanner.oneHost =
  function (callback, hostIP, hostPort) {
   var timeout = 200;
   var imageUsedAsScanner = new Image();
   // Any response at all (error or load) means something answered on that port
   imageUsedAsScanner.onerror = function () {
    if (!imageUsedAsScanner) return;
    imageUsedAsScanner = undefined;
    callback(hostIP, hostPort, 'open');
   };
   imageUsedAsScanner.onload = imageUsedAsScanner.onerror;
   imageUsedAsScanner.src = 'http://' + hostIP + ':' + hostPort + '/testfile.jpg';
   // If nothing answered before the timeout, assume the port is closed
   setTimeout(
    function () {
     if (!imageUsedAsScanner) return;
     imageUsedAsScanner = undefined;
     callback(hostIP, hostPort, 'closed');
    }
   , timeout);
  };

So to wrap this up ... I run this javascript and print the results. Per the screenshots below notice that port 7722 (which was open) returned a "reset" while the other closed ports just refused the connection. Boom, I can port scan with XSS!

Why do we care? Developers, you have to make sure you're taking XSS seriously! Even something as trivial as an XSS flaw can lead to full network compromise. Security professionals should also care, as they need to monitor and pay attention to those local-to-local TCP scanner alerts, as they could be coming from an unexpected source: XSS!

NOTE: This example above was created with the intention to educate and create awareness only. This information should not be utilized for anything malicious.


Tuesday, September 1, 2015

An Exploit: From Developer to Attacker, a Tale of PHP and Metasploit

I thought it'd be helpful to illustrate how a vulnerability goes from the code written by a developer to the metasploit module that makes exploitation easy for script kiddies. This post is helpful for developers, as they might see just how simply script kiddies can own your box. This post might also be helpful for sys admins so they can understand how exploitation occurs and maybe what to look for. This post may also be helpful to security professionals to simply understand the basics of what an attacker is doing.

For this setup, I have created 2 linux boxes in virtual box (both set to Host-Only adapter so they have their own ips and can talk to each other). 1 has kali linux and plays the role of the attacker. 1 has ubuntu server and plays the role of the victim's public-facing web server. The website will be written in php. The vulnerability will be poorly written code with a command injection flaw. The exploit will be written in ruby as a metasploit module.

Please don't get stuck focusing on the code / vulnerability (as it's simple and silly, although many times mistakes aren't much more complex, just basic programming 101 mistakes). Please also don't focus on the exploit / metasploit module (as it's also simple and could be enhanced to be more powerful). Instead please sit back and try to grasp/visualize the 10,000-foot overview of how a developer writing php code can turn into a script kiddie rooting your web server.

Now that I've set the stage, let's have some fun!

I have a website running apache (/etc/init.d/apache2 start). In the root folder of the website I have a 'hidden' administrative page that my perhaps well-intentioned webmaster decided to write so that he can work from home. Per the screenshot below it allows him to run commands like "ps -aux" if the website is running slow and determine what might be the cause.

The disgusting code looks something like this

Notice the code uses the extremely unsafe shell_exec method, it performs no input validation, and the page doesn't even require authentication. This leads to a command injection vulnerability because I can basically run any shell command I want under the same privileges that the website runs under (www-data). Security by obscurity is perhaps the only defense here and we all know that doesn't cut it. There are 100 more reasons I could list for why this is the worst idea anybody could ever come up with, but that is for another day. Also if interested the nl2br method simply converts newlines to html breaks so the output is prettier.
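
The same class of flaw is easy to reproduce in any language. Here's a Python analogue of that shell_exec pattern as a sketch; the function name and input are purely illustrative, not the actual php code from the post:

```python
import subprocess

def run_admin_command(user_input):
    # Flaw: untrusted input concatenated straight into a shell command.
    return subprocess.check_output("echo " + user_input, shell=True, text=True)

# A ';' lets the attacker chain an arbitrary second command onto the echo:
out = run_admin_command("hello; echo INJECTED")
print(out)
```

On a Linux shell the attacker-controlled text after the ';' runs as its own command, with whatever privileges the web application holds.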

Now the webmaster doesn't just host one website; he actually provides a standard platform for all his customers and hosts this same codebase on many webservers. So this vulnerability is actually widespread across many sites on the Internet. Mr. evil attacker is recon'ing and scanning and by happenstance comes across this vulnerability. He wants to share it with the world, so he writes a metasploit module in ruby. Below you can see that ruby module, and in it the payload is 'cmd' or a command prompt. And the scary part to me is that the actual exploit code is really short. Look at that 'exploit' method at the end; it's really, really short, as many of these modules are. This one simply calls an HTTP GET and passes the payload into the AdminCommand query string. Take a deeper look at the code below. Seriously, it's not very complex. In fact it's almost too easy.

The attacker saves this ruby file into the exploits folder (~/.msf4/modules/exploits/neonprimetime_web.rb). Then the attacker loads the metasploit console (./msfconsole). Once at the msf command prompt, the fun begins. The script kiddie can type 'use exploit/neonprimetime_web' and set the RHOST to the ip of the website he wants to attack. Then type 'exploit'.

Let's also look below at the Ubuntu web server now. Oh Crap! There appears to be a connection going back to the script kiddie on port 4444!

Let's look at the Apache logs. What the? That's an ugly looking bash command. No script kiddie could understand that. That's the beauty of Metasploit. It has all the pre-built payloads that open up the reverse shells and get you access into things. It does it all for you.

Script kiddie isn't done, he can run commands on YOUR web server!

Oh boy, from the ubuntu web server we can packet capture and see the commands run and the results sent back.

Script kiddie has the permission of the account running the web service. I think you're in trouble.

Now if the account running apache has minimal privileges, the attacker may have to work harder to find an unpatched privilege escalation vulnerability (buffer overflow or something) on your server and gain root. But believe me, that's not going to be hard. There are open source scripts that kiddies can run on a victim server that will crawl thru the server and literally spit out recommendations of which privilege escalation vulnerability will work for that victim. If the website is already running as root or some privileged account, well then it's probably all over, you already handed him the keys to the kingdom. He's going to create something that persists, and probably pivot to the rest of the servers on your network.

I hope you enjoyed the walk-through and it has opened your eyes to just how important writing good code, following best practices, and patching vulnerabilities is. The bad guys are good at this, they've made it mundane, simple, and repeatable. You have to work hard to protect your things.

NOTE: This example above was created with the intention to educate and create awareness only. This information should not be utilized for anything malicious.


Metasploit - Import Custom Exploits

Metasploit import custom exploit
# cd ~/.msf4/modules/
# mkdir ~/.msf4/modules/exploits
# wget
# ls
# msfconsole
msf>use exploit/theexploit


Metasploit says 'Database not connected or cache not built, using slow search'

Metasploit search warnings
msf>search platform:win
[!] Database not connected or cache not built, using slow search
#service postgresql start
#service metasploit start
msf>search platform:win


Website Fuzzing 101

You have a web application. You want to see if there are any buffer overflows, DoS conditions, or other oddities, or you're just interested in determining how well your developers are validating input. One possible way is SPIKE, run from a linux environment.

A spike script tells SPIKE what requests to send. In the example below I'm crafting an http request to send to a test web server, except that the query value will be fuzzed with a bunch of random data.
s_string("GET /?q=");
s_string_variable("fuzzme");
s_string(" HTTP/1.1\r\n");

Kick off SPIKE...
./generic_send_tcp TESTSERVER 80 ~/scriptfile.spk 0 0

And watch the requests fly out! Then take a look at your application logs, and anytime the website crashed or generated scary buffer overflow, null reference, database, or other errors, make sure to review that part of the code and patch it so that it handles the fuzz data in a more proper manner. Your website should be able to gracefully handle any data thrown at it.
Might be good to tail the apache access logs...
tail -f /var/log/apache2/access.log
