Transferring big files


#1

Hey guys,

Trying to move my sites across from ev1 (The Planet) to DreamHost…
the problem is I’ve tarballed my data up and it’s over 2 GB.
Now, wget isn’t going to allow this…

So I shell into my DreamHost server and FTP to the ev1 server, but the transfer just times out… I’m getting a “connection lost” error. The same thing happens if I upload the file from ev1 to DreamHost.

Can someone shed any light on how I can transfer this?

I’ve done this move to another server fine… so I’m thinking I need a new way of transferring this big file.

cheers!
thanks in advance


#2

If your other server supports SSH, then try ‘scp.’
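
For example, from a shell on the DreamHost side, something like this should pull the file over (hostname and path are just placeholders):

scp user@old-server.example.com:/path/to/sites.tar.gz ~/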

Or use ‘split’ to break the file up into smaller pieces and ‘cat’ to concatenate them back together again on the other end.
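
That would look roughly like this (filenames and chunk size are just placeholders):

split -b 500M sites.tar.gz sites.part.
# transfer each sites.part.* piece, then on the other end:
cat sites.part.* > sites.tar.gz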

Is your tarball compressed? That might help a bit.
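
If it isn’t, compressing it first (filename is a placeholder) may shave off a fair amount before you transfer:

gzip sites.tar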

-Scott


#3

Use curl:

curl -C - -L -O URL

or simply

curl -C - -O URL

-C - resumes an interrupted download where it left off, -L follows redirects, and -O saves the file under its remote name. See man curl for the details.




#4

curl relies on the web server on the other end. More than likely, that Apache server can’t serve a file greater than 2 GB; older 32-bit Apache builds couldn’t, and large-file support only became the default in Apache 2.2.

-Scott


#5

Alternatively, rather than trying to transfer the tarball, just transfer the site without archiving it. It may take longer, but it probably won’t give you any problems with file size.

Just wget -rc ftp://path_to_site/* (-r recurses into directories, -c resumes partially transferred files if the connection drops).
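
For example (hostname and credentials are placeholders; quote the URL so your shell doesn’t try to expand the *):

wget -rc "ftp://user:password@old-server.example.com/sites/*"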



#6

That can happen, yes, but curl also understands FTP and other protocols.

One great trick is not to transfer a tarball at all, but to pipe tar over an SSH connection…

You have a tar.gz file on a remote machine and want to extract it to the machine you are currently logged into:

ssh user@host "cat /path/file.tar.gz" | tar zpvxf - -C /DestPath

Another great one is

tar

tar is usually used for archiving, but what we are going to do in this case is create the archive and pipe it straight over an SSH connection. tar handles large file trees quite well and preserves all file permissions, including ACLs on the UNIX systems that use them, and it works quite well with symlinks.

The syntax is slightly different, as we are piping it to ssh:

tar -cf - /some/file | ssh host.name tar -xf - -C /destination

or, with compression:

tar -czf - /some/file | ssh host.name tar -xzf - -C /destination

The -c switch tells tar to create an archive, and -f - tells it to write the new archive to stdout.

The second tar command uses the -C switch to change directory on the target host; -f - there makes it read the archive from stdin, and the -x switch extracts it.

The second variant adds the -z option, which gzip-compresses the stream on the way out and decompresses it on the way in, cutting down the time the transfer takes over the network.

Some people may ask why tar is used: it is great for large file trees because it just streams the data from one host to another, without intensive per-file operations on the tree.

If using the -v (verbose) switch, be sure to include it only on the second tar command; otherwise you will see the output doubled.
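
For example, to watch the files as they land on the destination (paths are placeholders again):

tar -czf - /some/file | ssh host.name tar -xzvf - -C /destination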

Regards

