Server :: Scp Truncate Text File Busy - Copying File Is Not A Running Binary?
Jun 14, 2010
I am having problems with scp during a backup operation. I added a ps -ef before and after the scp step used during the backup. The backup is a script that backs up a Zimbra server. I am including the code segment that I am having problems with:
# DRCP Section. To scp newly created archives to a remote system
if [ "$DRCP" = "yes" ]
I've got a rather large CSV file (~700 MB) which I know consists of lines of 27-character alphanumeric hashes; no commas or anything fancy. Somehow, during its migration from Windows to Linux (via WinSCP and then a few regular scp transfers), it has been converted into some kind of binary format I am unfamiliar with. If I open the file in vi, everything appears fine, and it says [converted] at the bottom, although I know it's not a line-endings issue (and dos2unix doesn't help). If I head the file, it looks proper except for a couple of stray characters at the beginning of the first line. If I open the file in nano, however, I see those stray characters at the start and then "^@" before every character (even newlines and EOF).
If I try to re-save or copy the file (say via head file.csv > short.txt), this special encoding is preserved. I copied the first ten lines out of vi (which displays the file properly) into my Windows clipboard via my SSH client, then pasted them into a new text file, test.txt. This file is visually identical when opened in vi (and similar through head, minus the stray characters), although it's roughly half the filesize. I have no idea what format this once-text file got converted to (it's notoriously hard to search the internet for symbols), but surely there must be some way to convert it back.
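A "^@" (NUL byte) in front of every character, plus a pasted copy that comes out at roughly half the size, is the classic signature of UTF-16 text with a byte-order mark rather than real binary corruption. A hedged check and conversion, assuming that diagnosis is right:

Code:
file file.csv    # should report something like "Little-endian UTF-16 Unicode text"
iconv -f UTF-16 -t UTF-8 file.csv > file.utf8.csv    # -f UTF-16 reads the BOM to pick the endianness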
I've got a text file with a list of .gz files. These .gz files are in various subdirectories of one parent directory, and I've hacked a little script together to copy them from their current location to a new one and write any it can't find to /home/user/not_found, but for the life of me I can't get it to run properly!
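The script itself wasn't posted, so here is a minimal sketch of one way to do it; the list file name, parent directory, and destination below are all placeholder names:

Code:
#!/bin/bash
# list.txt, /parent, and /new/location are placeholders
while IFS= read -r name; do
    path=$(find /parent -type f -name "$name" -print -quit)
    if [ -n "$path" ]; then
        cp "$path" /new/location/
    else
        echo "$name" >> /home/user/not_found
    fi
done < list.txt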
I'm trying to output a list of running processes via a shell script. At the moment I have this, which outputs the processes to a text file called out:
echo $(ps aux) >>out
The problem is that the processes all come out as one big block of text, which makes the file hard to read. Does anyone know how to write the output to a text file at one process per line? I know it's probably simple, but I'm very new to Linux.
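The culprit is the command substitution: the shell splits the output of ps aux into words and echo glues them back together with single spaces, destroying the newlines. Writing the output directly preserves one process per line:

Code:
ps aux >> out    # or "ps aux > out" to overwrite rather than append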
There is a server running Squid and DansGuardian as a proxy for the local network. Everything is working fine, but I have seen that from time to time DansGuardian dies and fails to respond to shutdown or restart commands, and this is because the binary at /usr/local/sbin/dansguardian becomes empty. There are multiple copies, so copying another one named dansguardian.2 over dansguardian fixes it, and DansGuardian then works normally. I looked in dmesg and /var/log/messages but found nothing. It was compiled from source, not installed from precompiled binaries, and runs on CentOS 5.4 Final.
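Until the root cause turns up, the manual fix can be automated; a hedged sketch of a watchdog (run from cron, for instance) that restores the binary whenever it turns up empty, mirroring what the poster already does by hand:

Code:
# [ ! -s FILE ] is true when FILE is missing or zero bytes
if [ ! -s /usr/local/sbin/dansguardian ]; then
    cp /usr/local/sbin/dansguardian.2 /usr/local/sbin/dansguardian
fi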
I want to know how I can cross-compile SMS Server Tools for an embedded computer and make just one binary file for it, or alternatively how I can change the default locations of all of its files (its daemon, its shared object files, and so on) and gather them into one directory to run from. Let me explain it better: I have an embedded computer running Linux whose file system is read-only, so I cannot add any files to /usr, /lib, etc. All I can do is mount an SD memory card, copy my programs to it, and run them from there. As I understand it, I have two choices: either build one big statically linked binary for each program, which I am doing now and which is not a suitable solution, or find a way to change the default location of my program's shared object files. What can I do to solve this problem?
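One common way to run a dynamically linked program from removable media is to bundle its shared objects next to the binary and point the dynamic linker at them; a hedged sketch, with /mnt/sdcard/smstools as a placeholder mount point (ldd lists which .so files the binary actually needs):

Code:
mkdir -p /mnt/sdcard/smstools/lib
cp smsd /mnt/sdcard/smstools/
cp libfoo.so.1 /mnt/sdcard/smstools/lib/    # repeat for each library "ldd smsd" lists
# Point the dynamic linker at the bundled libraries before launching
export LD_LIBRARY_PATH=/mnt/sdcard/smstools/lib
/mnt/sdcard/smstools/smsd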
I've had Ubuntu 11.04 installed on my desktop since its release. Up until an hour ago, it was working fine. I clicked on an update from the update manager, and now booting into graphical mode is completely broken (the start-up hangs at 'Checking battery state ... [OK]'). I restarted my computer, booted into safe mode, and launched the terminal. This all works fine. I then typed the following into the command prompt, hoping that I would be able to start things manually:
Code:
sudo gdm start
Instead, it spat out this:
Code:
gdm-binary: WARNING: Unable to load file '/etc/gdm/custom.conf'. No such file or directory.
gdm-binary: WARNING: Unable to find users: no seat-id found.
gdm-binary: WARNING: GdmDisplay: display lasted 0.070467 seconds
The last line was printed about 8 times, with slightly different times, before it gave up and failed. Some information which might help: I have GNOME 2, Unity, and KDE (not sure which version) installed. My graphics card is the GTX 275, and I have the Nvidia driver 275.21. So yeah, I think the update has moved custom.conf somewhere, but I have no idea how to fix it. I have a graphics programming assignment due on Friday and would be eternally grateful if I could get this fixed well before then.
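gdm normally treats custom.conf as optional overrides, so if the update really did remove it, two hedged things worth trying from the recovery console are recreating the file (empty should mean defaults) and reinstalling the gdm package to restore anything else the update clobbered:

Code:
sudo touch /etc/gdm/custom.conf
sudo apt-get install --reinstall gdm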
In a project I'm working on with a few other people, I got the task of writing an assembler. The last thing I do is convert the commands into a binary representation and write it to a file. Now one of my teammates said he'd like to be able to "reference" the code from within another program. He said he'd be able to do this if the file I output is a Linux object file; I'm thinking it'd also work as an executable. Anyway, he'd like to be able to grab the file and reference the binary by address. I'm still fuzzy on this, so if you're confused by what I said here, please tell me and I'll ask him for better details. I'm aware that gcc can compile files to .o, but that's only for C/C++, and my file is just binary. I'm also aware of ld, but I haven't seen a use of it that helps me. I'm happy to hear suggestions as to what I can do. If nothing else, I'll implement a few functions to grab the bits and hand them to him in an array or something.
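For the record, GNU objcopy can wrap an arbitrary binary blob in an ELF relocatable object that C code can link against and address directly; a hedged sketch, with program.bin as a placeholder file name and x86-64 as the assumed target:

Code:
# objcopy synthesizes _binary_program_bin_start/_end/_size symbols from the input file name
objcopy -I binary -O elf64-x86-64 -B i386:x86-64 program.bin program.o
# C side: extern const char _binary_program_bin_start[]; gives the blob's address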
I'm hoping to set up a cron job that takes a file and copies it to a remote password-protected FTP server. I've got a command that formats the file with the correct name, and I've put it in the anacron file in /etc/cron.d (which I think is right; I haven't tested it yet). I'm not sure how to copy the file to the remote server, though. I do have the FTP server bookmarked in my Places menu, so is there a simple way of supplying a file path that will put it straight into that folder? The only problem I can see with this is that the connection won't be open continuously, so it would need to be re-opened when needed (I could presumably save the password in the keyring so that I don't need to be there to type it in).
Or maybe I should set up a cron job that connects to and mounts the FTP server a minute before it has to copy the file over?
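For a non-interactive upload from cron, a scriptable client sidesteps the bookmark-and-mount question entirely; a hedged sketch using curl's FTP upload support, where the host, credentials, and paths are all placeholders (the trailing slash on the URL keeps the original file name):

Code:
curl -T /path/to/file.dat "ftp://HOST/target/folder/" --user "USER:PASS"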
I'm planning to copy a production MySQL InnoDB file from one server to another, and the file is around 300 GB. As the file is changing all the time, I have to shut down the MySQL instance and copy the large data file to the other server as quickly as possible. I need a way to speed up the copy; I'm wondering whether there's a way to copy the file block by block, skipping any block whose content is already identical on the destination side.
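rsync does essentially this: when an older copy already exists at the destination, it transfers only the blocks whose checksums differ. A hedged sketch (the data file path and host are placeholders); a common trick is to pre-seed a copy while MySQL is still running, then shut it down and let a short final pass fix up whatever changed:

Code:
# First pass while MySQL runs, then shut MySQL down and run it again;
# the second pass sends only the blocks that changed
rsync -av --inplace --progress /var/lib/mysql/ibdata1 user@otherhost:/var/lib/mysql/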
I checked the 'Run executable text files when they are opened' option in Nautilus preferences. I have noticed that files such as .sh and .bin launch by simply clicking on them (which is great). However, I have also noticed that an ordinary .txt or .html file must not be marked as executable in order for a click to open it in gedit or Firefox respectively; otherwise you must right-click and choose Open With every time. Which file types need execute permissions, and which file types never do?
When I try to copy PDF files from one folder to another, I get this error: "Error while copying "2004-SNUG-Europe-paper_...log_DPI_with_SystemC.pdf". There was an error copying the file into /media/CCDCE66BDCE64F70/Backup Master/Heterogeneous_cosimulation/Documentation. Error splicing file: Input/output error". What is the reason for this error, and how can it be fixed?
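"Input/output error" on a copy usually means the kernel hit a hardware-level read or write failure rather than a permissions problem; two hedged first checks (the device name is a placeholder):

Code:
dmesg | tail           # look for ATA/USB read or write errors from the kernel
smartctl -a /dev/sdX   # SMART health of the suspect drive (smartmontools package)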
I have a 7.2 GB file (a VMware virtual machine image) that I am trying to copy from its original location to another folder, or to an external hard drive. Each time I try, I get the following error once the copy reaches exactly 1.4 GB:
Error reading from file: input/output error
and I have to either Cancel or Skip.
I've tried splitting the file into smaller pieces, but that didn't work: I get the same error whenever I try to compress, split, or do anything else with this file. How can I copy it?
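A read error at exactly the same offset every time usually points at bad sectors under the source file. GNU ddrescue (often packaged as gddrescue) exists for this case: it copies everything readable and logs the unreadable regions instead of aborting. A hedged sketch with placeholder file names:

Code:
# The map file rescue.log lets ddrescue resume and retry bad areas later
sudo ddrescue -v machine.vmdk /media/external/machine.vmdk rescue.log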
Assuming I have two files, one large and one small, I want to write the smaller file into the large file at some offset without overwriting the remaining part of the larger file.
Both are binary files, and the large file can become very large, so I want to avoid rewriting the whole file, as that would take some time. Is there a standard Linux console utility to do this, or do I need to write it myself?
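dd can do this in place: conv=notrunc stops it truncating the large file, and seek sets where the write starts. A hedged sketch; with bs=1 the offset is counted in bytes, which is simple but slow, and a larger block size works whenever the offset is a multiple of it:

Code:
# Overwrite large.bin starting at byte 4096 with the contents of small.bin,
# leaving everything after the written region untouched
dd if=small.bin of=large.bin bs=1 seek=4096 conv=notrunc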
I have a log file which is continuously being accessed. I want to delete the first line without disturbing the file. Is that possible? The issue is that the log file is being provisioned with ^@^@^@ (NUL) characters in the first line, occupying huge space, so I need to get rid of them. I don't have time to chase the root cause; I just need a script to reclaim the space.
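One hedged caveat before the one-liner: sed -i rewrites the file under a new inode, so a process that already holds the log open will keep writing to the old copy; it is safest to run this while the writer is stopped or the log is rotated (the path is a placeholder):

Code:
sed -i '1d' /var/log/app.log   # delete only the first line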
I'm trying to pipe a text file to sendmail. The command I'm using on the sendmail server is: [root@sendmail-server test]# sendmail to-email-address@relay_server-address < test2.txt. I'm doing this because I was doing the same thing from an aliases file just fine until about three weeks ago; the aliases file suddenly stopped working after the relay server received an inordinate amount of email from that From: address and for that To: address.
I am trying to copy a 7.3 GB .iso file to an 8 GB USB stick, and I get the following error when the copy hits 4.0 GB:
Error while copying "xxxxxx.iso". There was an error copying the file into /media/6262-FDBB. Error splicing file: File too large. The file is to be used by a Windows user, and I'm just trying to do a simple copy, not a burn-to-USB or anything fancy. Using 10.04.1 LTS, AMD dual core, all latest patches.
I have a .txt file with ~50,000 lines of numbers, generated by a mathematics program. From this file, I need roughly lines 1,100 to 16,000 (these line numbers are always the same, by the way, which may make the solution easier) copied into another file, where roughly lines 500 to 15,000 (also the same every time) should be overwritten by those lines. I haven't found or come up with anything that works yet; mostly I find solutions that copy everything from one file to another, but nothing that overwrites part of a file with part of another.
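Since both line ranges are fixed, head, sed, and tail can splice the two files together; a hedged sketch using the approximate numbers from the post, with fileA.txt as the source and fileB.txt as the file being partially overwritten:

Code:
head -n 499 fileB.txt           > merged.txt   # keep target lines 1-499
sed -n '1100,16000p' fileA.txt >> merged.txt   # insert source lines 1100-16000
tail -n +15001 fileB.txt       >> merged.txt   # keep target from line 15001 on
mv merged.txt fileB.txt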