What about cron as a hack monitor?


Picking up from another thread, I’ve become intrigued by the following code:

What’s interesting about this command is that, if placed in a shell script and run as a cron job, it could be ‘silent’ until something bad happened to your account. I’m wondering whether other commands could be put together which are ‘silent’ (i.e. no output) when conditions are normal, but create output, and therefore an email, when things are abnormal. Cron would be perfect for this since the commands could be run once a day and report only when there is an obvious problem. Any thoughts on other commands?





I’ve looked at this wiki, and it’s really cool. But I was specifically thinking of commands that are ‘silent’ when run normally, and therefore produce no email from a cron job. While these are great shell scripts, they send output which is likely to be ignored if it comes every day. I was really trying to have an email with that ‘oh crap!’ factor; it should normally never arrive.



create a git repo of your account. run a cronjob of ‘git status | grep Changes’
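For the git-illiterate, here’s a hedged sketch of how that pipeline behaves, run in a throwaway repo rather than a real account — every path and filename below is made up for illustration:

```shell
#!/bin/sh
# Demo of the 'git status | grep' idea in a throwaway repo; the repo,
# paths, and filenames here are invented, not a real account.
demo=$(mktemp -d)
cd "$demo" || exit 1
git init -q
git config user.email demo@example.com
git config user.name "Demo"
echo "<?php // site file" > index.php
git add .
git commit -qm "baseline snapshot"

# Clean tree: grep matches nothing and prints nothing, so cron sends no email.
clean_report=$(git status | grep modified || true)

# Simulate tampering: now the pipeline produces output, so cron mails it.
echo "// injected line" >> index.php
dirty_report=$(git status | grep modified || true)

echo "clean run output: '$clean_report'"
echo "after tampering:  '$dirty_report'"
```

Note that grep exits non-zero when it finds nothing, which is harmless here; in an actual crontab entry, the empty output is exactly what keeps the mailbox quiet.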


I personally like a daily report. I’ve actually been thinking about taking bobocat’s scripts and parts of yours to make a daily report that includes things I might want to look into further. So far, though, I haven’t done a lot other than conceptualize. The trick is in fact to suppress details that are normal, so that when the report arrives it includes only things that need to be looked into or checked further.


I simplified it for the wiki. I use a series of colours to tag low, medium, and high threat changes. Most of my report comes greyed out, with just the key potential problems in red. The only way to get to that stage is to run a full report and start adding files to the low-threat list. It gets better over time.

You can configure git to do the same with the .gitignore file, but it’s either ignore or not, whereas I want to monitor. Usually I just open the report, look for red, and delete…


Got the second part about adding to the cronjob, but any info on the first part for the git-illiterate?



Um, Google?


Yup, tried google and the dreamhost wiki, but I’m not really trying to create a code repository, am I? This is where git instructions become sort of overwhelming. If I just want to stick my account into a git repository, then it’s really not a project. From the dreamhost wiki, there are quick instructions for creating a local project:

# Create the local repository
[local ~]$ cd project
[local project]$ git init
[local project]$ touch .gitignore
[local project]$ git add .
[local project]$ git commit
I assume this creates a local empty project. From my repository experience (or lack of it) I’m not confident that I want to wade into what would happen if I did this on /home/myuser/. Is this what you’re suggesting? Then use .gitignore to ignore files you expect to change?

ps. I loved the smart-assed google link!


It’s not only what I’m suggesting, it’s almost an official DH recommendation: http://discussion.dreamhost.com/thread-134372-post-150533.html#pid150533


Ok, after lots of experimentation and googling, I think I have a working understanding of how to do this. In my web root directory /home/myuser/mywebsite.com/ I ssh’d in and typed:

git init
touch .gitignore
git add .
git commit -am "saving my butt"

Now a cron job could issue the command ‘git status | grep Changes’ and changes would pop out.

One issue and one question. The issue: approved changes would need to be recommitted, so:

git add .
git commit -am "these files are ok"

And the question: would git list the modified files?

do I have the structure basically correct?


You can have an entire program issue no output unless certain conditions are met.

A good beginning would be to collate what you wish to monitor and go from there.


I’m wondering if there is a nOOb cron job that could alert the clueless when they are attacked and is otherwise silent. So far I’ve stolen the idea of checking for other-writable directories, and of watching file structure changes in the webroot (although one plugin update on WordPress and the user would probably be overwhelmed). Since I’m proposing cron, some setup would be OK, like the initialization files suggested in bobocat’s wiki about intrusion detection. But my thought was a real wake-up email, one that only arrives when something weird happens. Can anyone think of additions to these two commands? I’ve been noodling with crunching the error log with awk or grep and filtering out errors that match known criteria, so all that’s left are legitimate error log warnings, although even that may be too chatty for the nOOb criteria I’ve set.
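On the error-log idea, one hedged sketch — the log lines and patterns below are invented for illustration, and a real job would point at the account’s actual log path: keep a file of known-benign patterns and use grep -v -f to drop them, so only unexpected lines survive and generate mail.

```shell
#!/bin/sh
# Demo: filter routine noise out of an error log so the command is silent
# (no cron email) unless something unexpected appears. All file contents
# and patterns here are made up.
work=$(mktemp -d)

# Patterns for errors we consider routine noise.
cat > "$work/benign.txt" <<'EOF'
File does not exist: .*favicon.ico
client denied by server configuration
EOF

# A fake error log: two routine lines, one genuinely interesting one.
cat > "$work/error.log" <<'EOF'
[error] File does not exist: /home/myuser/mywebsite.com/favicon.ico
[error] client denied by server configuration: /home/myuser/private
[error] Premature end of script headers: wp-login.php
EOF

# -v inverts the match, -f reads patterns from a file: only lines matching
# none of the benign patterns survive.
leftovers=$(grep -v -f "$work/benign.txt" "$work/error.log" || true)
echo "$leftovers"
```

With a real log, the cron job would simply be this one grep: on a quiet day it prints nothing and no email goes out.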



I think it’s an interesting question whether security-checking cron jobs ought to be silent when there’s nothing wrong. If it were me, I would start worrying about whether the cron job had been accidentally stopped. I won’t say more than that here; maybe my worry is too abstract, and I don’t want to derail your technically interesting thread.

I’ll take this up in the thread which I started, where people who fundamentally disagree with me (there do seem to be some!) can more easily ignore it.



[quote=“kelly7552, post:13, topic:57283”]
But my thought was a real wake up email, one that only arrives when something weird happens.[/quote]

What you are proposing is something that not only monitors for changes, but also analyses the changes and only reports changes which are unauthorised? And automated?

Let me know what you find. I think people would pay money for that.


What you’re after would be quite straightforward, but you need to map out the elements you really need to track and go from there. Calling a plethora of tests across your entire system would be far from economically sound. For example, you could feasibly scan your entire structure for embedded exploits on an hourly basis - but it would be a very heavy burden resource-wise, given that such testing procedures are CPU-intensive.

Git is good for static content, but any site undergoing active changes (such as a plugin update or install) within a CMS-type website will echo status changes quite often, which might be overly annoying to some users running a dynamic site.

Using your WP site as an example, I’d be inclined to monitor just a very few core static files (index, config, include, htaccess…) with a run-often monitoring job - e.g. every 5 minutes. This could act as your email warning mechanism for a hands-on check, or as a trigger for an automated and more thorough “secondary stage” test procedure, or even perhaps a full file scan.
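A minimal sketch of that “few core files” check, assuming md5sum is available on the host; the file list is illustrative, and a real job would keep the baseline file somewhere outside the webroot:

```shell
#!/bin/sh
# Sketch of a small core-file monitor using checksums, suitable for a
# frequent cron job. Paths and filenames are assumptions; demoed here
# in a temp directory standing in for the site.
site=$(mktemp -d)
base=$(mktemp)
echo "<?php // config" > "$site/wp-config.php"
echo "RewriteEngine On" > "$site/.htaccess"

# One-time setup: record a baseline checksum of the core files.
( cd "$site" && md5sum wp-config.php .htaccess > "$base" )

# The cron check: --quiet prints only files that FAILED verification,
# so a clean run is silent and produces no email.
quiet_run=$(cd "$site" && md5sum -c --quiet "$base" 2>&1 || true)

# Tamper with a core file and check again: now there is output.
echo "// injected" >> "$site/wp-config.php"
noisy_run=$(cd "$site" && md5sum -c --quiet "$base" 2>&1 || true)

echo "clean check:  '$quiet_run'"
echo "after tamper: '$noisy_run'"
```

Compared with the git approach, this watches only the handful of files named in the baseline, so CMS churn elsewhere never triggers it.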


Don’t be confused, I think it’s a perfectly good option to have a cron job that does security checks and emails you, but I know from experience that you become either numb or uncaring when wading through lots of cron output, especially if you’re a nOOb who is just worried about blogging about cheese heads in Wisconsin. Real sysadmins should be looking for crap. Unfortunately, DH is attractive to people who can’t log in to a unix machine and have zero background in sysadmin issues.

On the same topic as Hardening Wordpress, I’m wondering whether I could start a wiki with this wake-up concept for the many DH customers who don’t know how to ask the question. Right now, I’m not convinced the idea is a good one.

Cool! I was just thinking that the Git solution might be a good one, with one major modification: limit the files to things nOObs don’t touch. If I focus on, say, WordPress, and I’m thinking about putting specific files into git, maybe a list like:

If I stay away from media files and plugins which get updated, and I add a test for other-writable directories, I may have a system that would only false-trigger on a WP theme update (fairly rare) or a WP update (probably more often). These would normally be installed by the admin user that’s getting the email, so all might be good.

The holes would be plugin attacks, successful login attacks, and compromised ftp passwords, and compromised WP database. Htaccess would catch some of the holes if the troll modified it.

I’d need a reset shell script to reset git after one of these events. Maybe I could write a wp plugin to reset it :slight_smile:
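The reset script could be as small as one commit. A hedged sketch (paths and messages invented), demonstrated in a throwaway repo so the before/after is visible:

```shell
#!/bin/sh
# Sketch of a git re-baseline step: after an intentional change, 'commit -am'
# snapshots all tracked files so 'git status | grep modified' goes silent
# again. The repo and filenames below are throwaway demo material.
repo=$(mktemp -d)
cd "$repo" || exit 1
git init -q
git config user.email demo@example.com
git config user.name "Demo"
echo one > wp-config.php
git add .
git commit -qm "baseline"

echo two >> wp-config.php            # an approved, intentional edit
git commit -qam "approved changes"   # re-baseline the tracked files

after=$(git status | grep modified || true)
echo "after re-baseline: '$after'"
```

A real version would just be the cd and the commit line, run by hand after you’ve reviewed the alert email.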


Too right. I know users who become “ad blind” very quickly. *raises eyebrow at mirror*

If we’re mailed only when certain flags return positive then we know we should act on it.


Ok, I think I’m on to an interesting solution. This would be limited to WordPress users. I’ve been running it all day, and other than the fact that I currently have a DH bug that is spam-filtering my cron email, I know it works. Here is what I set up:

cd $home
cd /home/myuser/mywebsite.com/
git init
touch .gitignore
git config --global user.name "Bill Kelly"
git config --global user.email mymail@mywebsite.com
git add .htaccess
git add *.php
git add ./wp-admin/*.php
git add ./wp-admin/includes/*.php
git add ./wp-admin/network/*.php
git add ./wp-admin/user/*.php
git add ./wp-content/themes/.
git add ./wp-includes/*.php
git commit -am "saving my butt"

Now in the same mywebsite.com directory we’d have a file daily.sh:

#!/bin/sh
cd /home/myuser/mywebsite.com/
find . -type d -perm -o=w
git status > /tmp/gitstatus
grep modified /tmp/gitstatus
grep deleted /tmp/gitstatus
rm /tmp/gitstatus

I experimented with deleting files and changing files, and I received a warning for both. I’d also need a “refresh git” shell script for when you change something and want to stop the emails.

So, nothing until one of these files is deleted or changed.

Questions: 1) I wrote to a temp file since I needed to grep for both modified and deleted, and I’m a grep/awk dolt. Is there an easier way that pipes it instead of writing a temp file?

2) I think .gitignore is superfluous, but I don’t know git that well. Is it?

3) Are CSS and JS also areas that should be monitored? Do they have the same potential for insertion that php has?

Inquiring minds want to know!
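On question 1, a hedged sketch of a temp-file-free variant: grep accepts multiple patterns via -e, so both words can be caught in a single pass (demoed in a throwaway repo; all paths and filenames are made up):

```shell
#!/bin/sh
# Demo: one grep with two -e patterns replaces the /tmp/gitstatus temp
# file. The repo and files below are throwaway demo material.
repo=$(mktemp -d)
cd "$repo" || exit 1
git init -q
git config user.email demo@example.com
git config user.name "Demo"
echo a > keep.php
echo b > doomed.php
git add .
git commit -qm "baseline"

echo changed >> keep.php   # a modification
rm doomed.php              # a deletion

# Matches lines containing either word, in one pipeline, no temp file.
report=$(git status | grep -e modified -e deleted || true)
echo "$report"
```

`grep -E 'modified|deleted'` would work equally well; either way, daily.sh shrinks to a cd and one pipeline.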



I’ve been thinking about this thread quite a bit. I made the point above that I enjoy having a daily report; in addition to what I said elsewhere, it’s nice to get just because you know the system is still working, i.e. if you stop getting the emails, that is also something to investigate. The counterpoint has also been made along the way, though, that complacency could in fact set in and the emails might become something never read. Therefore, when a problem occurred, it could be missed because the emails are not being reviewed anymore. Valid point.

On the other hand, a script that is normally silent except in an alarm condition could in fact be circumvented by a sly hacker, or broken by unrelated causes. The alarm doesn’t work then either. And since no email output means things are good, we could miss the alarm here as well.

Building a better mousetrap:

User A: Build “the script” as a “report” on a hidden URL of the main site, protected by .htaccess and/or some other means, but built to be served by apache as a private portion of the site. That report could be simple or complex and human readable if the site owner should navigate to the page (Debugging and/or just a random check on the site).

User B: On the same machine, a different machine, or even a different host. This user serves no websites and exists only to run a second script via cron. This second script makes an HTTP request to the URL of the script built on user A, thus causing the “check” to occur, and the output from the first script is collected. The second script then scans the text received for a series of things that must be present AND absent, and declares either “alarm” or “no alarm”. On “alarm” or timeout/error (no output received), “alarm output” is generated; else (i.e. “no alarm”) it exits quietly.
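The user B scan logic could look something like this sketch. The sentinel strings (STATUS-OK, PROBLEM) and the report contents are invented for illustration, and the HTTP fetch is stubbed out with local strings so the must-be-present/must-be-absent logic is the focus; a real run would fetch the page first, e.g. `report=$(curl -fsS --max-time 30 "$REPORT_URL") || report=""`.

```shell
#!/bin/sh
# Sketch of the user-B "alarm / no alarm" decision. Sentinel names are
# assumptions; a healthy user-A report is expected to contain STATUS-OK
# and to contain no PROBLEM lines.
check_report() {
  # Must-be-present: a healthy user-A script always emits this sentinel,
  # so its absence covers timeouts, errors, and tampering alike.
  if ! echo "$1" | grep -q "STATUS-OK"; then
    echo "ALARM: sentinel missing (timeout, error, or tampering)"
    return 1
  fi
  # Must-be-absent: any line the user-A script flags as a problem.
  if echo "$1" | grep -q "PROBLEM"; then
    echo "ALARM: report flags a problem"
    return 1
  fi
  return 0   # healthy: print nothing, so cron sends no email
}

healthy_out=$(check_report "daily report ... STATUS-OK")
empty_out=$(check_report "") || true
flagged_out=$(check_report "STATUS-OK PROBLEM: writable dir found") || true
echo "healthy: '$healthy_out'"
echo "empty:   '$empty_out'"
echo "flagged: '$flagged_out'"
```

Because the healthy case prints nothing, this preserves the thread’s silent-until-alarm property while still failing loudly when user A stops responding.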

The next thing to avoid is sole dependency on email for the “alarm output” generated. The entire output collected by the user B script could in fact be emailed as the first line of notification, but since we know that email is unreliable and/or might be delayed, there should also be a second method of notifying the site owner. My suggestion for the second method would be to have the script ALSO use the twitter API to send a “direct message” type tweet to the site owner (by default, i.e. if the user has not unchecked the option in their twitter settings, receipt of this type of tweet will also cause twitter to generate yet another email notification). With all of these notification methods employed, the message to check on the site because an alarm condition exists should be quickly received even if the primary email is delayed or lost.

Just a few thoughts. It’s starting to become more complex, but it seems to be a better way, since the hacker would be unable to stop the cron job from running, and if the script in user A was in fact tampered with, the alarm would most likely be triggered anyway, because the user B script would be actually analyzing the output of the user A script.