How to fetch a url with curl or wget silently

Cron jobs need quiet operation; if a command generates any output, cron mails it to you. So if you want to fetch a file silently with wget or curl, use a command like one of these:

curl --silent --output output_filename http://example.com/urltofetch.html

wget --quiet --output-document output_filename http://example.com/urltofetch.html

There are shorter versions of these options, but the long option names make code or cron jobs easier to understand if you come back to them later. Be aware that urls with “&” in them can confuse wget at least, so depending on your shell (bash, csh, tcsh), you may need to put single or double quotes around the url.
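
In the shell, an unquoted “&” puts the rest of the command in the background, so quoting the whole url is the safe move. A minimal sketch (the query parameters here are just placeholders):

curl --silent --output output_filename 'http://example.com/urltofetch.html?id=1&type=full'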

30 Responses to How to fetch a url with curl or wget silently

  1. I have nothing to say other than “Hey Look I’m the first comment”

    It’s the little things that keep me going.

  2. Of course if you want it deathly quiet, just add ‘>/dev/null 2>&1’ to the end…

  3. PHP executable works well too. (It’s what I prefer to use.)

  4. If you want really, really silent wget/curl, then add a “>/dev/null 2>&1” … but it is often a good idea for errors/exception conditions to show up *somewhere* in case things go awry. Along those lines, wget has an “--append-output” option that may be useful (see the sketch below).
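
    For instance, a minimal sketch that keeps cron quiet while still recording each fetch and any errors in a log file (the log path is just an example):

    wget --no-verbose --append-output=/var/log/fetch.log http://example.com/urltofetch.html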

  5. Good points, alek and Steve.

  6. Thanks Matt, that’s my first file in my new HOWTO folder 🙂

  7. I tend to do these sorts of automated jobs as perl scripts.

    I can then make sure the perl logs its actions to a logfile for later debugging – it’s also a little clearer when you or a third party comes back to it in 6 months’ time.

    The last one I did was a handler for parsing our incoming SMSes (delivered as xml) into our SMS campaign manager.

    BTW get the Perl Cookbook from O’Reilly; it saves so much time and has loads of useful tools you can steal^h^h^h^h^h adapt.

  8. After some more howtos you can change your blog title to:

    Matt Cutts: Gadgets, Google, HowTo and SEO

    🙂

  9. I understand the ‘> /dev/null’ part, but what is the purpose of the ‘2>&1’?

  10. I simply use file_get_contents() for everything GET; I save curl for POST.

  11. feddy: 2>&1 redirects stderr to stdout so that everything ends up in stdout and therefore to /dev/null…
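
    For instance, order matters with these redirections; a minimal sketch (the url is a placeholder):

    wget --quiet http://example.com/page >/dev/null 2>&1   # both stdout and stderr are discarded
    wget --quiet http://example.com/page 2>&1 >/dev/null   # stderr still reaches cron, because it was duplicated before stdout was redirected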

  12. Matt Sandy, do you know how you could add proxy/tor support with file_get_contents()?

  13. feddy, as far as I know you can’t use a proxy with that function, but if you really need more functionality then go about it the curl way.

  14. Another useful one to know is: wget --spider

    I have some protected pages that are inside my framework that need to be run at intervals; --spider makes wget behave as a web spider (it won’t download any pages, it’ll just check to see if they are there).

    You can also disable output by passing everything to /dev/null

    * * * * * wget --spider http://www.example.com >/dev/null 2>&1

  15. I had another problem. I had blocked curl, wget and every other suspicious-looking bot from my site, so the only way to run a cronjob like this was to use the -A option, which sets the user-agent header.

    eg.

    10 * * * * curl -A Firefox http://……. > /dev/null 2>&1

  16. Oh… for a min there I thought wget --spider http://example.com would give me all the links spidered right from the default site page. Something like Google’s site:http://example.com

  17. wget is a program with incredible untapped potential for most people

    I personally like wget --delete-after http://website.com

    it deletes the output after the execution; not as nice as --quiet or >/dev/null 2>&1, but still very powerful.

    also wget will work with tor; it’s just a question of having the tor proxy set up right on your server and digging for the additional options (a rough sketch follows).
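
    For instance, a rough sketch assuming Privoxy is listening on its default port 8118 and forwarding to Tor (the port and url are assumptions):

    http_proxy=http://127.0.0.1:8118 wget --quiet --output-document output_filename http://example.com/urltofetch.html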

  18. Verbosity can be good. Wget or curl with their respective “quiet” options will silence some output from those scripts but not all; they will still likely show critical errors, which is why you may want the redirects to /dev/null. However, we often see cases where you need some errors but not others. wget has a -nv flag that is not verbose but not quiet either.

    You can also use /etc/cron.d/filename on most linux systems to fine-tune your cron. You can specify a mail address within the file you place in this directory, which can be useful to alert someone in case of a problem.

    Also, don’t overlook security. Run your crons with a user with as few privileges as possible. If you simply need to wget a file, then a normal user with no login privileges will often suffice.

    Also, don’t forget the --tries=number option. This will have wget retry in case of a failure. Note the default is 20 retries, though fatal errors such as “connection refused” or a 404 are not retried. There is also --retry-connrefused, which will retry even when a connection is refused, useful for overloaded URLs.

    There is also the --timeout option. Always use this option if you are fetching URLs frequently. The default read timeout is 900 seconds. That’s 15 minutes! I’ve seen many servers with dozens of crons piled up because they are polling every 5 minutes but the server is slow, so they are waiting 10 minutes or so to get the data. The problem quickly snowballs out of control.

    In brief, we recommend:
    1. use the least-privileged user possible for the user running the cron.
    2. explicitly set timeouts that work with your application.
    3. decide what level of error reporting you need and use -q, -nv and/or /etc/cron.d as required (a rough /etc/cron.d sketch follows at the end of this comment).

    These tips are mostly for wget but curl has many of the same options.

    Finally, one more security tip. We often create a “wgetforuser”, which is a copy of wget with permissions that users can use, and we set the main wget to be usable only by root. This helps mitigate (but does not prevent) some attacks where a wget command is passed into an insecure web application.
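
    A rough /etc/cron.d sketch pulling these suggestions together (the filename, user, schedule, and url are all just placeholders):

    # /etc/cron.d/fetch-example: runs as a low-privilege user, with an explicit timeout and a single try
    MAILTO=alerts@example.com
    */5 * * * * fetchuser wget --quiet --tries=1 --timeout=30 --output-document=/dev/null http://example.com/urltofetch.html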

  20. I’m hoping I can use “wget http://…?arg=value&pwd=password” in crontab to call a server I wrote to get a particular action done (it sends me the results in an email).

  21. How do I fetch a url and save it in another folder rather than the existing/root folder?

  22. @David

    To get into a .htpasswd protected folder, use:
    --http-user=xxxxx --http-passwd=xxxxx

    Cheers,
    Robert

  23. thnxxx

  24. --output-document=/dev/null

  25. Hi, I have a problem with the wget command. I want to get the file just once, but I can’t get the command to do that.

    wget --tries=1 http://xxx.com
    means it retries 1 time

    wget --tries=0 http://xxx.com
    means it retries an indefinite number of times

    Could you tell me how I can use the wget command without any retries?

    Thanks

  26. I tried wget --debug with --tries=1 and wget appeared to try only once and then give up, so as far as I can see, tries=1 means just that.

  27. using >/dev/null 2>&1 works a treat, thanks

  28. Best option is wget -O - -q http://example.com/urltofetch.html >/dev/null 2>&1 in order NOT to download anything.

    @Jame: by default wget doesn’t retry at all. If you use the -t switch without a number it will default to 20 retries, and if you use it with 0 it will retry an infinite number of times.

  29. For the record, curl --silent will be totally silent and not even show error messages. One needs to use --show-error for that.
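
    For example, a minimal sketch that stays quiet in cron but still records failures (the log path is just an example):

    curl --silent --show-error --output output_filename http://example.com/urltofetch.html 2>>/var/log/fetch-errors.log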

  30. I tried all the hints on this site, but it still did not work: I got zero-byte files in /root, messages, or a logfile, but I want a really silent wget cronjob.

    For me it works as follows: wget -O /dev/null "http://www.example.com" >/dev/null 2>/dev/null
