I want to write a PHP program to grab a report that changes each day, so I can save a copy in a DH directory for a permanent record. The URL is: http://www.usbr.gov/pn/hydromet/yakima/yakstats.txt. I started writing a little PHP program to hand the URL to curl, explode the result on newline characters into an array, then write the array to a file in a directory, with the file name derived from the current system date. But then I thought: hey, it's already just a text file. Do I need to get this complicated? Is there a way to just copy the text file from the original location to my directory? A shell script, even?
crontab + cURL + some Perl for the filename?
Don't even need Perl for that; a bash script can derive a filename that includes the date, etc.
You are obviously more grizzled than I! I don't know bash scripting, though, so there's a new thing to learn.
Use lynx in a bash script to get the text file, then save it to a directory with filename + sys date string?
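A minimal sketch of that idea, assuming lynx is installed (the date format and output location are illustrative choices, not anything from the thread):

```shell
#!/bin/sh
# Fetch the report with lynx and save it under a date-stamped name
# in the current directory, e.g. 20240115_yakstats.txt.
url="http://www.usbr.gov/pn/hydromet/yakima/yakstats.txt"
out="$(date +%Y%m%d)_yakstats.txt"

if command -v lynx >/dev/null 2>&1; then
    # -source dumps the raw document rather than a rendered page,
    # which is what you want for a plain .txt file.
    lynx -source "$url" > "$out"
else
    echo "lynx not installed" >&2
fi
```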
You only need two lines:

wget http://www.usbr.gov/pn/hydromet/yakima/yakstats.txt
mv yakstats.txt "`date +%Y%m%d_%H`_yakstats.txt"
Or, as a one-liner (wget's -O writes straight to the dated filename, so no mv is needed):

wget -O "`date +%Y%m%d_%H`_yakstats.txt" http://www.usbr.gov/pn/hydromet/yakima/yakstats.txt
Oh, yeah, wget! Thanks, guys.
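Putting the crontab suggestion from earlier together with wget, a wrapper script might look like the sketch below. The archive directory, script name, and schedule are all assumptions; adjust to taste.

```shell
#!/bin/sh
# grab_yakstats.sh -- save a dated snapshot of the Hydromet report.
# The archive directory is an illustrative assumption.
archive="$HOME/yakstats"
mkdir -p "$archive"

out="$archive/$(date +%Y%m%d_%H)_yakstats.txt"

# -q keeps cron mail quiet; the fallback message goes to stderr
# so a failure still shows up in cron's output.
wget -q -O "$out" "http://www.usbr.gov/pn/hydromet/yakima/yakstats.txt" \
    || echo "download failed (no network?)" >&2
```

A crontab entry such as `0 6 * * * /path/to/grab_yakstats.sh` (a hypothetical path) would then grab a fresh copy every morning at 06:00.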