What are they trying to exploit?

I’m seeing in my logs several attempts to append the url with things like …


… what are they trying to exploit?

[quote]I’m seeing in my logs several attempts to append the url with things like … what are they trying to exploit?[/quote]
What is/are the url/s?


An example would be…

Thanks for the link. I looked the site over, inspected the code, etc. and don’t have the slightest clue what someone might be trying to exploit with this code. :frowning:

I’m not even sure without looking at the logs (ip addresses, time/frequency, surrounding log entries, referrer, etc.) that it is an exploit attempt. It is certainly nothing obvious. If there is a clue to be had, it lies somewhere deeper in the analysis of your logs.

If you navigate to the url in Firefox you will see that the page still loads, but your display is changed considerably. Most notably, your “Movie Poster Image” display is suppressed. In fact, trying the string on the end of other urls slightly restyles the page, suppressing all the graphics in the main content section of the page (header, sidebars, etc. are ok, but other images are stripped). Is it possible that this is to facilitate a screenreader or link robot? Is it the result of a web-fetching script/bot that only wants to read the text?

I believe it is “dorking” with the javascript on your page, or some re-write rules in place on your site, or some other “site specific” condition, as appending those characters to a different random url that is not on your site does not have the same effect. Weird. Possibly the result of the behavior of your stm31.js, but who knows.

It is interesting to see how it causes the display to change, but I don’t see an obvious exploit at work here, as the only impact I can identify is the change of the display.

I’m sorry I don’t have any real insight into this behavior. Hopefully some javascript wizard can make more sense of this than I can.


I’d avoid using any script that is allowing random user-supplied path_info, query strings, etc… to go unchecked and modify the page.
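That advice can be sketched in a few lines. This is a hypothetical check, not code from the site in question: before a script acts on the request path, reject anything beyond a plain page name (the `;!--`-style junk here is just a placeholder, since the actual appended string was elided above).

```python
# Minimal sketch of validating a request path before a script uses it.
# The pattern and page extensions are assumptions for illustration.
import re

# Accept only "/" plus an optional simple page name ending in .html or .shtml
ALLOWED = re.compile(r"^/[A-Za-z0-9_\-]*(\.s?html)?$")

def is_safe_path(path_info: str) -> bool:
    """Return True only for a plain page path with nothing appended."""
    return bool(ALLOWED.match(path_info))

print(is_safe_path("/index.shtml"))       # plain page name: accepted
print(is_safe_path("/index.shtml/;!--"))  # appended junk: rejected
print(is_safe_path("/../secret"))         # traversal attempt: rejected
```

Anything that fails the check can be answered with a 404 or a redirect to the canonical page, rather than letting the stray path data influence the output.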


It appears unrelated to a script. Happens on any page with or without scripts. I thought it probably had something to do with the .htaccess. However, removing .htaccess has no effect.

So, doing some experimenting…

Put up a file with .html as the extension. The file viewed fine when entered normally. Viewing it as .html/ though got a 404.

The same file with .shtml as the extension views fine when the URL is entered normally, but when the trailing slash is entered the server apparently gets confused as to where it actually is in the tree and what it is looking at. It treats the URL as both a directory and a page, so all the relative URLs get resolved incorrectly. For example, a relative link to the main index page written as index.shtml would become /testpage.shtml/index.shtml. This is how the CSS is getting stripped: the browser asks for the stylesheet in the wrong location, and the server can’t find it there.
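The mis-resolution described above is easy to reproduce with Python’s standard URL-joining rules (the host and page names here are placeholders, not the actual site): once the page URL ends in a slash, the browser treats it as a directory and resolves every relative link one level too deep.

```python
# Demonstration of how a trailing slash changes relative-URL resolution.
# example.com and testpage.shtml are placeholders for the real site.
from urllib.parse import urljoin

# Normal URL: relative links resolve against the site root.
normal = urljoin("http://example.com/testpage.shtml", "index.shtml")

# Trailing slash: the page is treated as a directory.
slashed = urljoin("http://example.com/testpage.shtml/", "index.shtml")

print(normal)   # http://example.com/index.shtml
print(slashed)  # http://example.com/testpage.shtml/index.shtml
```

The same thing happens to relative stylesheet and image links, which is why the CSS and graphics disappear while the page itself still loads.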

It should be treating all the extensions like the .html/ example where it sees there is no such directory and tosses an error. I’d also be happy with it seeing the URL isn’t formed correctly and dumping the trailing slash but that isn’t happening either.
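If the host permits it in .htaccess, Apache’s core `AcceptPathInfo` directive should produce exactly that behavior: it refuses trailing path info outright, so /page.shtml/anything returns a 404 just like the .html case. This is a sketch, untested against this particular server:

```apache
# Assumed .htaccess fragment: reject trailing path info on .shtml pages
# so /testpage.shtml/junk returns 404 instead of serving the page.
<Files "*.shtml">
    AcceptPathInfo Off
</Files>
```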

Not sure how to deal with this without rewriting every page to use full URLs so it always delivers the right page and correctly rendered.
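One possible alternative to rewriting every page: a mod_rewrite rule that redirects any request with extra path segments after a .shtml page back to the page itself, so the browser ends up at the canonical URL and relative links resolve correctly again. This is an untested sketch and would need checking against the host’s rewrite setup:

```apache
# Hypothetical .htaccess workaround (untested on this host):
# 301-redirect /page.shtml/anything back to /page.shtml.
RewriteEngine On
RewriteRule ^(.+\.shtml)/.* /$1 [R=301,L]
```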

Or perhaps I shouldn’t worry about this since it seems to be just one person screwing around and I haven’t been seeing this anywhere else in the logs.

Your additional research explains a lot. I never thought to try it on a page with “no scripts”, as I saw the javascript in your “theme” and just assumed that all pages would have that javascript on them. It almost makes me wonder if it is worth a note to tech support (something might be messed up in their Apache instance?).

Since you indicated that removing the .htaccess had no effect on the behavior (and I assume you cleared your cache completely - on my Firefox, a simple “refresh” was not sufficient!), that would eliminate re-write rules. Re-write rules creating the situation would have been my best guess.

If your logs indicate that it is a single user, it might not be too critical, but I’d be suspicious that that “user” might be a bot and inspect my logs thoroughly enough to make sure that is not the case (and I’m sure, from your post, that you have probably done that). Good luck with your research, and if/when you get it sorted I would love to learn what caused it :wink: