Using cron isn’t terribly difficult; the harder question is probably which tool to use to fetch the page. IMO cron is the best/easiest way to schedule it.
What I do:
- Edit a text file containing the cron entries.
- Upload the text file.
- Then, in a shell, execute: “crontab textfile”
- If you need to check what the current entries are, execute “crontab -l”
- And to save them back to a file, “crontab -l > textfile”
cron entries are space-delimited parameters. The first five are minute, hour, day of month, month, and day of week. Everything after that is the command to execute.
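As a quick reference, the five time fields line up like this (a generic illustration; the command path is just a placeholder):

```
# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6, Sunday = 0)
# │ │ │ │ │
  30  4 * * 1 /path/to/command
```

This entry would run /path/to/command every Monday at 4:30 AM.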
So you might want something like this:
0 2 * * * cd ~/mirror && wget -p --convert-links http://www.server.com/dir/page.html
This would run every day at 2 AM (server time). Per the wget documentation, -p downloads the HTML document plus the resources needed to display it (CSS, images), and --convert-links rewrites the links so the copy works locally; the files end up under ~/mirror/www.server.com/dir
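One optional tweak, assuming you don’t want a nightly email: cron mails any output a job produces to the account owner, and wget writes a transfer log to stderr, so you can add wget’s -q (quiet) flag to suppress it:

```
# Same job, but with -q so cron doesn't mail the wget transfer log every night
0 2 * * * cd ~/mirror && wget -q -p --convert-links http://www.server.com/dir/page.html
```

Note that -q also hides error messages; drop it again if the job ever misbehaves and you need to see why.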