General :: Script To Crc32 Check If Multiple Files Are Corrupt Or Not?
Dec 2, 2010
I want to write a script that scans a folder to see which files are corrupt and, if they are, moves them to another folder. Every file has a CRC32 hash in its filename that I want to check is correct. Something like this:
Code:
Filename: . . . . .filename S01.E01 [CRC32Sum].mkv
Should have CRC: . CRC32Sum
[code]...
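A minimal sketch of such a script, assuming bash and that the hash is 8 hex digits in square brackets in the filename; python3's zlib is used for the CRC so no extra package is needed, and the `corrupt` destination folder is my own invention:

```shell
#!/bin/bash
# Sketch: verify the CRC32 embedded in each filename; move mismatches
# into a "corrupt" subfolder (the folder name is an assumption).

crc32_of() {
    # Print the CRC32 of a file as 8 lowercase hex digits.
    python3 -c 'import sys, zlib
print(format(zlib.crc32(open(sys.argv[1], "rb").read()) & 0xffffffff, "08x"))' "$1"
}

scan_dir() {
    # Check every file in $1 whose name contains [XXXXXXXX].
    local dir=$1 f expected actual
    mkdir -p "$dir/corrupt"
    for f in "$dir"/*\[*\]*; do
        [ -f "$f" ] || continue
        expected=$(printf '%s\n' "$f" \
            | sed -n 's/.*\[\([0-9A-Fa-f]\{8\}\)\].*/\1/p' \
            | tr '[:upper:]' '[:lower:]')
        [ -n "$expected" ] || continue   # no CRC in the name: skip
        actual=$(crc32_of "$f")
        if [ "$actual" != "$expected" ]; then
            echo "CRC mismatch: $f (expected $expected, got $actual)"
            mv -- "$f" "$dir/corrupt/"
        fi
    done
}

# Usage: scan_dir /path/to/videos
```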
I have a really elaborate website project and I'd like to check all the code into an SVN repository WITHOUT checking in all the .jpg, .pdf, .sql, .mobi files, etc. I have Subclipse installed on my local machine as part of Eclipse, but downloading the entire site and doing it file-by-file would take a very long time. I need to:
1) Check in certain file types in a directory (e.g., /home/foo/public_html) while excluding certain file types (e.g., *.jpg, *.pdf, etc.) and subdirectories (e.g., exclude /home/foo/public_html/images) in that directory.
2) Specify that this initial check-in of my entire project goes into the /trunk folder in my SVN repository -- and not the root, not a 'branch' subdirectory, etc. Also, I need the directory structure of my files to be preserved; that is to say, I don't want /includes/conf/config.php to end up at the root of my trunk folder in the repository.
3) Specify a comment that is applied to all of the files I'm checking in, e.g., "this is the initial checkin dated 12/24/2009, happy holidays!"
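A sketch of one way to do all three at once; the repository URL is hypothetical, and it assumes rsync and the svn client are installed. Check out the (empty) trunk, copy in only the wanted files, then add and commit with a single log message:

```shell
# Sketch; the repository URL is hypothetical and the exclude list
# should be adjusted. rsync preserves the directory structure, which
# keeps /includes/conf/config.php at its proper place under trunk.
initial_import() {
    # $1 = project root, $2 = trunk URL, $3 = log message
    local src=$1 url=$2 msg=$3 wc
    wc=$(mktemp -d)
    svn checkout "$url" "$wc" || return 1
    rsync -a \
        --exclude='*.jpg' --exclude='*.pdf' \
        --exclude='*.sql' --exclude='*.mobi' \
        --exclude='images/' \
        "$src"/ "$wc"/
    (cd "$wc" && svn add --force . && svn commit -m "$msg")
}

# initial_import /home/foo/public_html http://svn.example.com/repo/trunk \
#     "this is the initial checkin dated 12/24/2009, happy holidays!"
```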
I am trying to compress a folder and the contents within, while keeping the permissions the same. I then need to check whether the compressed file is corrupt or not; based on that result, I need to transfer the file.
Code:
cd /home/ops/Desktop/temp
tar czvpf backup-"$(date +%d-%b-%y)".tar.gz /home/ops/Desktop/dir1
gunzip -t backup-"$(date +%d-%b-%y)".tar.gz
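Note that `gunzip -t` only verifies the gzip layer; listing the archive with tar also walks the tar structure itself. A sketch of the whole flow (the scp destination is hypothetical):

```shell
# Sketch: create the archive with permissions preserved (-p), verify
# it, and only transfer on success.
backup_and_verify() {
    # $1 = directory to archive, $2 = archive path
    tar czpf "$2" "$1" || return 1
    # `tar tzf` reads every member, so truncation or corruption
    # makes it exit non-zero.
    tar tzf "$2" > /dev/null || return 1
}

# backup="backup-$(date +%d-%b-%y).tar.gz"
# if backup_and_verify /home/ops/Desktop/dir1 "$backup"; then
#     scp "$backup" ops@backuphost:/backups/   # hypothetical destination
# else
#     echo "archive corrupt, not transferring" >&2
# fi
```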
I have 30 Linux PCs running. I need to check the performance of all the PCs (memory, RAM and process usage) with a single command or in GUI mode. On Solaris we have the perf script to check performance in GUI mode; is there the same kind of thing for Linux?
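There's no single built-in equivalent that I know of, but as a sketch, a loop over SSH can pull the basics from every host. This assumes passwordless SSH to each PC and a hosts.txt file listing one hostname per line (both assumptions):

```shell
# Sketch: collect load, memory and the top CPU consumers from each PC.
gather_stats() {
    # $1 = file with one hostname per line; assumes passwordless SSH
    local host
    while read -r host; do
        echo "== $host =="
        ssh -o BatchMode=yes "$host" \
            'uptime; free -m | sed -n 2p; ps -eo pcpu,pmem,comm --sort=-pcpu | head -5'
    done < "$1"
}

# gather_stats hosts.txt
```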
I recently copied about 1TB of videos from one drive to another. Now many of my videos (a lot, but not all) are choppy or just end up freezing (the .mkv files seem to be the ones freezing). Is it possible that transferring such a large amount of data damaged my videos?
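It's easy to check whether the copies actually differ from the originals (a sketch; the mount points are hypothetical): checksum the source tree, then verify the copy against it.

```shell
# Sketch: print every file in the copy whose checksum differs from
# the original; returns non-zero if anything differs.
verify_copy() {
    # $1 = source dir, $2 = destination (copy) dir
    local list
    list=$(mktemp)
    (cd "$1" && find . -type f -print0 | xargs -0 -r md5sum) > "$list"
    (cd "$2" && md5sum -c --quiet "$list")
}

# verify_copy /mnt/old_drive/videos /mnt/new_drive/videos
```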
I'm running beta 2 of 10.04 x64 and I've noticed a problem where, for some reason, whenever I download a file it always ends up corrupt even though it reports 100% complete.
I tried using wget instead to download the file, and that worked fine.
I've removed the ubufox package and it's now working fine.
Is anyone else having this problem?
I want to be 100% sure that it's not my fault and really is a bug before I submit it.
I have virtual servers on a virtual hard disk. After a system crash, when I try to access the servers I get the error: "Failed to start virtual machine fileserver: medium /rootfiles/vdi/fileserver.vdi is not accessible. VD: error VERR_MEDIA_NOT_RECOGNIZED opening image file /rootfiles/vdi/fileserver.vdi (VERR_MEDIA_NOT_RECOGNIZED). Result code: NS_ERROR_FAILURE." Our company network has also gone down and no internet is coming in, so if I want to install DHCP I need the DHCP rpm package for Fedora 12 (kernel 2.6.31.9-174.fc12.x86_64).
I recently installed the latest version of Ubuntu to set up a home media server. Previously, I was running a Windows Home Server. I added a new HD and mounted it during installation. This was ext2. I then mounted my 2 drives that had NTFS partitions on them from the Windows Home Server and copied all of my files over to the new ext2 partition. Everything looked great. I then re-partitioned my two old Windows Home Server drives to ext3 partitions to set up a new file system for storage.
This is when the bad news became evident. While all of my .jpg, .mpeg, etc. files are there on my ext2 file system, I cannot open them anymore. All programs (IE, Paint, Windows Media Player, iTunes, Photoshop, MS Office, etc.) are unable to open files they normally process. Did I somehow lose part of the files (a header, an attribute or something) during this copy process?
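One quick way to tell whether the data or the applications are the problem (a sketch; the path is hypothetical) is `file`, which identifies content by its magic bytes rather than its extension. A healthy photo reports something like "JPEG image data"; a zeroed or garbage copy typically comes back as plain "data":

```shell
# Sketch: `file -b` prints what the content actually is, regardless
# of the filename extension.
check_type() { file -b "$1"; }

# check_type /mnt/media/photos/holiday.jpg
```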
Discovered today that my /usr/share directory is missing or corrupt. All fields (permissions, ownership, inode, etc.) read '?'. Will an fsck from single-user mode fix this? I've tried rebooting several times, etc. All other system functions seem to be OK, except that all the files in /usr/share are missing, the directory 'share' blinks red, and obviously Dovecot will not run - which is keeping our IMAP from running.
I am translating some po-files and I would like to run a spell checker over them. I have Ubuntu 10.10 and use gtranslator. As far as I know, gtranslator can't spellcheck the whole file.
I tried ispell: $ ispell lordsawar-0.2.0-pre4.de.po - this doesn't work, as English and German strings, as well as some programming-relevant comments, appear in the .po file.
Do you know any program running on Ubuntu which can spell check po-files?
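I don't know of a dedicated one out of the box, but gettext's msgexec can be combined with aspell (a sketch; it assumes aspell and the German dictionary package aspell-de are installed). msgexec feeds each translated msgstr - and only the msgstr, so the English source strings and the comments are skipped - to a command, and `aspell list` prints the words it doesn't recognize:

```shell
# Sketch: list suspected misspellings in the translations only.
spellcheck_po() {
    # $1 = .po file, $2 = aspell language code (e.g. de)
    msgexec -i "$1" aspell list --lang="$2" | sort -u
}

# spellcheck_po lordsawar-0.2.0-pre4.de.po de
```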
I downloaded the first Lenny DVD for amd64 and burned it, but when trying the install on my laptop (Gateway NV5389u) I can't get past the "installing base system" step: I get an error that some files are corrupt / cannot be read from the DVD. I am wondering whether there's a way I can download a minimal version, or just the files needed for the base system installation, and then use the same DVD to install the packages, because I have a terribly slow internet connection - it took me a whole 2 days to download, and I surely can't stand any more of it.
I plugged my external hard disk into my computer and it gives me this message:
Quote:
Unable to Mount: Error mounting: mount exited with exit code 13:
ntfs_attr_pread_i: ntfs_pread failed: Input/output error
Failed to read of MFT, mft=6 count=1 br=-1: Input/output error
Failed to open inode FILE_Bitmap: Input/output error
I've been using Ubuntu on my fileserver for quite a while now, and I've always really had this problem, but I want to finally address it and get it fixed. At seemingly random points (when my fileserver is under stress - typically while I'm writing lots of data to it), my fileserver will crash. It generally completely crashes, not responding to any further file requests or any of my SSH commands, and must be reset hard (typically by flipping the power switch). After such an occasion, I end up with some corrupted files. It seems to corrupt a large array of files (it's not an isolated issue - for example, it corrupts files that were not being accessed anywhere near the time it crashed, including files that had never been accessed during that period of uptime). The files don't get completely smashed, but they're definitely corrupted (artifacts in images, skips in audio and video files, often complete failure of binary files such as virtual hard drives or disc images).
I'm using Ubuntu Server 11.04, but similar issues to this happened for me in 10.04 LTS (in fact, I upgraded to try to solve them). I'm using mdadm to create an 8-drive raid6 array. The drives are 1.5 TB each, mostly Samsung HD154UI, but with a WD drive in there too (sorry, I can't find the model number at the moment). The hard drives themselves appear to be working fine - SMART reports no issues with any of them, mdadm says they're all up, and I have no reason to believe that the drives are at fault here (although I can conduct further tests if necessary). I've posted about this problem before here and here. In these cases, the issues seemed to be with XFS - in fact, I switched from XFS to ext4 on my RAID array because I simply believed XFS to be unstable. Unfortunately, this issue occurs with ext4 as well, so I'm fairly certain it's an mdadm issue. Here is the output of "cat /proc/mdstat", for those interested:
I have a system with Voyage Linux (Debian-based) as my OS, running on a compact flash card. Some files appear to be corrupt on it. Whenever I run an ls, cp, mv or rm command on these files I get the message "Stale NFS file handle". I actually had the problem on 2 identical systems. I fixed the first one by attaching the CF card to another Linux system and then running e2fsck -f -v /dev/sdb1. That got rid of the bad file.
My problem is that I won't be able to do that every time. I'm going to have several of these systems in different places without direct access to them, so I'm looking for a solution that works on the system itself. Running e2fsck on a mounted filesystem seems to be a bad idea from what I've read, but I tried anyway and it did not get rid of the file. I also tried running tune2fs -c 1 /dev/hda1 and rebooting, which is supposed to make e2fsck run at the next boot (not 100% sure here), but that didn't seem to work.
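Since e2fsck shouldn't run on a mounted filesystem, the usual trick is to schedule the check for the next boot. A sketch (Debian-style sysvinit; /dev/hda1 is the device from the post):

```shell
# Sketch: force a full e2fsck of the root filesystem at the next boot.
schedule_fsck() {
    # sysvinit boot scripts check for this flag file:
    touch /forcefsck
    # belt and braces: make the mount count trigger a check too
    # (-c 1 = check after every mount)
    tune2fs -c 1 /dev/hda1
}

# Run as root, then reboot.
```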
In order to download files from a particular website, I have to include a header containing the text of a cookie, to indicate who I am and that I am properly logged in. So the wget command ends up looking something like:
Code:
wget --header "Cookie: user=stringofgibbrish" http://url.domain.com/content/porn.zip
Now, this does work in the sense that the command does download a file of the right size that has the expected name. But the file does not contain what it should - the .zip files cannot be unzipped, the movies cannot be played, etc. Do I need some additional option, like the "binary" mode in the old FTP protocols? I tried installing gwget; it is easier to use, but has no way to include the --header stuff, so the downloads never happen in the first place.
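There is no binary/text mode in HTTP, so one thing worth checking (a sketch; the URL and cookie string are the placeholders from above) is what actually came back. If the server rejected the cookie, it often returns an HTML login or error page saved under the .zip name, which `file` will reveal:

```shell
# Sketch: download with the cookie header, then inspect the result.
fetch_and_inspect() {
    # $1 = URL, $2 = output file
    wget --header "Cookie: user=stringofgibbrish" -O "$2" "$1" || return 1
    file -b "$2"     # expect "Zip archive data", not "HTML document"
}

# fetch_and_inspect http://url.domain.com/content/porn.zip download.zip
```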
There is a folder, initially empty. Whenever I drag a new file into it, it should echo "there is file in there" and keep monitoring the folder. How can I do that?
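A sketch using inotifywait from the inotify-tools package (assuming it is installed): with -m it monitors forever, printing a line for every file created in, or moved into, the folder.

```shell
# Sketch: announce every new file and keep watching.
watch_folder() {
    inotifywait -m -e create -e moved_to --format '%f' "$1" |
    while read -r name; do
        echo "there is file in there: $name"
    done
}

# watch_folder /path/to/folder    # blocks; stop with Ctrl-C
```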
I have backup_server and application_server. backup_server has a directory AAA. I need to check from application_server whether any new files were created today in the AAA directory, and if yes, whether all the files were created today or only some of them.
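A sketch of the check; note that ext filesystems don't record a creation time, so modification time since midnight (find's -daystart -mtime 0) is the usual stand-in, and the AAA path is hypothetical:

```shell
# Sketch: count today's files vs. all files in a directory.
count_today() {
    find "$1" -maxdepth 1 -type f -daystart -mtime 0 | wc -l
}
count_all() {
    find "$1" -maxdepth 1 -type f | wc -l
}

report() {
    local today total
    today=$(count_today "$1")
    total=$(count_all "$1")
    if [ "$today" -eq 0 ]; then
        echo "no new files today"
    elif [ "$today" -eq "$total" ]; then
        echo "all files were created today"
    else
        echo "partial: $today of $total files created today"
    fi
}

# From application_server, run the find remotely (path hypothetical):
# ssh backup_server 'find /path/to/AAA -maxdepth 1 -type f -daystart -mtime 0 | wc -l'
```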
I have a license server running on my server. I would like to write a small status script to check whether the software is running. My software includes 3 daemons:
1) daemonA 2) daemonB 3) daemonC
My script should check whether each of these daemons is running. If all daemons are running, the script should print the short output "License server is running"; if one of the daemons is not running, the output should be "License server is not running". Is it possible to write a small loop to check this? Say, the loop takes the next daemon name from the pool of daemons and checks whether it's running. Sometimes I need to check more than three daemons of one program, and I don't know how to write a good script for this. Maybe somebody could help me with a loop that I could also reuse in the future for daemonD, daemonE, daemonF, etc. - if all daemons in the pool are running, then... "Software is running".
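A sketch of such a loop; daemonA/daemonB/daemonC are the names from the post, and the pool extends naturally with daemonD, daemonE and so on:

```shell
# Sketch: report "running" only if every daemon in the pool is up.
check_daemons() {
    # $1 = space-separated pool of daemon process names
    local d
    for d in $1; do
        if ! pgrep -x "$d" > /dev/null; then
            echo "License server is not running ($d is down)"
            return 1
        fi
    done
    echo "License server is running"
}

# check_daemons "daemonA daemonB daemonC"
```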
I am fairly new to Linux and need some help comparing more than 2 files. I am trying to come up with something that will compare at least 10+ different files to a master file and give me an output of what is missing.
An example would be: compare a.txt, b.txt, c.txt and d.txt each to the master.txt file, then output the text missing from each file into a new file.
I came across the comm and diff commands - am I looking in the right place, or is there a much easier way of doing this?
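comm is a good fit here; it needs sorted input, so sort both sides on the fly (a sketch, assuming bash for the process substitution):

```shell
# Sketch: print the lines that are in the master file but missing
# from another file (comm -23 = lines unique to the first input).
missing_from() {
    # $1 = master file, $2 = file to compare against it
    comm -23 <(sort "$1") <(sort "$2")
}

# for f in a.txt b.txt c.txt d.txt; do
#     missing_from master.txt "$f" > "missing_from_$f"
# done
```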
I have a directory with hundreds of HTML files. For all the files I have to:
- delete all the rows from the beginning of the file to the sentence "<img src="immagini/_navDxBottom.gif" />";
- delete all the rows from the sentence "<br clear="right" />" to the end of the file.
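A sketch with GNU sed editing in place (try it on a copy first): the first range deletes from line 1 through the line containing the nav image, inclusive, and the second from the `<br clear="right" />` line through the end of the file.

```shell
# Sketch: strip the header and footer markup from one HTML file.
strip_html() {
    sed -i \
        -e '1,/<img src="immagini\/_navDxBottom.gif" \/>/d' \
        -e '/<br clear="right" \/>/,$d' \
        "$1"
}

# for f in /path/to/htmldir/*.html; do strip_html "$f"; done
```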