Ubuntu :: Copying Large File Blocks Externally Causes Disk Full

Oct 15, 2010

This has happened several times now, with 9.10 and 10.04. I back up my photos periodically to external drives using Nautilus. At the next attempted login, GNOME won't start, and sometimes I get a "power manager incorrect installation" error.

The first time this happened I was stumped and eventually did a clean install. The second time, I found advice elsewhere in this forum to solve it by emptying root's trash, which did the trick. This time, however, root's trash has nothing in it, and the two users' trash folders were insignificant (I emptied them all anyway with rm -r). I tried looking for enormous directories but couldn't find a smoking gun. I would rather not end up doing another clean install - a painful and extreme solution. I'm continuing to look for a fix for the immediate problem, but my question really is: what causes this, and how do I prevent it in the future? I've run Computer Janitor regularly and ran apt-get clean, but no help. Should I do all my large-scale copying from the terminal? I'm not a total noob, but close.
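
A footnote on hunting the culprit: the "power manager incorrect installation" message at login is itself a classic symptom of a completely full root filesystem on Ubuntu of this era. A sketch for finding what is eating the space (paths are examples; -x keeps du from descending into the external drives):

Code:
# Largest directories on the root filesystem only, biggest first (sizes in MB)
sudo du -xm --max-depth=2 / 2>/dev/null | sort -rn | head -n 20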

View 9 Replies



Ubuntu :: Copying A Large File From The Network?

Feb 17, 2010

I am trying to copy a file from a network resource on my Local Area Network, which is about 4.5 GB. I copy this file through the GNOME copying utilities by first going to Places --> Network and then selecting the Windows share on another computer on my network. I open it and start copying the file to my FAT32 drive with Ctrl+C and Ctrl+V. It copies well up to 4 GB and then it hangs.

After trying it almost half a dozen times I got really annoyed, left it hung, and went to bed. Next morning when I checked, a message box saying "file too large to write" had appeared.

I am very annoyed. I desperately need that file. It's an ISO image and it is not damaged at all; it copies fine to any Windows system. Also, I have sufficient space on the drive to which I am trying to copy the file.
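
A note on the likely cause, since the thread leaves it open: FAT32 cannot store any single file of 4 GiB or larger, which matches the hang at exactly 4 GB. One workaround sketch, with example file names, is to split the ISO into FAT32-safe pieces and rejoin it on the destination:

Code:
# 2 GiB pieces fit comfortably under the FAT32 per-file limit
split -b 2G image.iso image.iso.part_
# later, on a filesystem without the limit:
cat image.iso.part_* > image.iso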

View 8 Replies View Related

Debian Configuration :: Jessie LVM - Full Disk / Large Logs And GParted

Sep 23, 2015

So, my issues since upgrading to Jessie seem to compound: when I fix one issue, two more arise. Right now I have a full system disk and no idea how it got that full, so I started poking around. I ran:

Code:
find / -type f -size +50M -exec ls -lh {} \; | awk '{ print $NF ": " $5 }'

Found a few files I could delete, and did, but I also found:

Code:
/var/log/syslog.1: 33G
/var/log/messages: 33G
/var/log/user.log: 33G

What I find strange is that they're all exactly 33G each, which accounts for the missing 99GB. I deleted them, however, and only recovered 27GB. What's weird is that when I type df -h I get:

Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/dm-0       106G   74G   27G  74% /
udev             10M     0   10M   0% /dev
tmpfs           3.2G  9.7M  3.2G   1% /run
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda1       228M   27M  189M  13% /boot
/dev/sdb1       1.9T   62G  1.8T   4% /media/ntfs
tmpfs           1.6G     0  1.6G   0% /run/user/0
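
A guess at why deleting 99G of logs freed only 27G (not confirmed in the thread): a daemon such as rsyslog may still hold the deleted files open, so their blocks are not released until it restarts. A check sketch:

Code:
lsof +L1                   # lists deleted-but-still-open files
systemctl restart rsyslog  # releases the space if syslog is the holder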

What are the tmpfs entries and how can I reclaim that space? And what is /dev/dm-0, and why is it taking up so much space?
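
For orientation (general LVM background, not from the thread): /dev/dm-0 is the device-mapper node behind an LVM logical volume, here evidently the root LV, and tmpfs entries are RAM-backed filesystems that consume no disk at all. A sketch to map the names:

Code:
lsblk        # shows dm-0 nested under its physical disk
dmsetup ls   # device-mapper names and numbers
lvs          # logical volumes, their VGs and sizes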

I have 2 LVGs. Here is vgdisplay -v:

Code:
root@SETV-007-WOWZA:~# vgdisplay -v
    DEGRADED MODE. Incomplete RAID LVs will be processed.
    Finding all volume groups
    Finding volume group "WOWZASERVER"

[Code] ....

After deleting the log files, I was able to regain access to my GDM session. But I still can't find out what /dev/dm-0 is, or where all of the 75 GB is being taken up.

I just noticed, however, that even though I can access the drive fine via file browser, terminal, and web services (our Wowza), when I enter GParted I get this error for sda, my primary OS drive!

Code:
Libparted Bug Found!

Error informing the kernel about modifications to partition /dev/sda2 -- Invalid argument. This means Linux won't know about any changes you made to /dev/sda2 until you reboot -- so you shouldn't mount it or use it in any way before rebooting

Now that I'm in gParted I see 3 partitions: [URL] ....

It now reports that I have used ALL of my disk space.

After deleting the logs and a fresh reboot, this is what df -h outputs:

Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/dm-0       106G  8.7G   92G   9% /
udev             10M     0   10M   0% /dev
tmpfs           3.2G  9.8M  3.2G   1% /run
tmpfs           7.9G   80K  7.9G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock

[Code] ....

What the heck is going on?

View 0 Replies View Related

Ubuntu :: Root Filesystem Fills Up When Copying A Large File?

Mar 17, 2010

I was just copying a large (50GB) file from one mounted partition to another mounted partition (a USB drive), but before the operation completed, my root filesystem, on a separate partition, filled up. Because it filled up, I also couldn't get past the login when I rebooted; I think this is because there is no room to write temporary files. I'm expanding the root partition to temporarily fix this. How can I avoid my root filesystem filling up when copying a massive file between mounted partitions? It seems the file is being cached in root during the transfer.
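
Worth noting: the page cache lives in RAM, not on the root partition, so caching alone would not fill the disk. A more common cause (an assumption, not confirmed here) is that the destination drive was not actually mounted, so the copy landed in the mount-point directory on the root filesystem. A one-line check, with an example path:

Code:
# If this prints the root filesystem instead of the USB drive,
# the copy has been writing into the bare mount-point directory
df -h /media/usb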

View 3 Replies View Related

Ubuntu Servers :: System Crash When Copying Large File

Jun 15, 2010

I am having a bit of a problem with my Ubuntu Server 10.04 install. I think it might be a kernel problem. Basically, what happens is that when I copy a large file (a 160GB disk image) to my drive, the system consistently crashes after about 60GB of the file has been transferred. It doesn't matter if I am sending the file using CIFS or over SSH. Checking syslog (paste dump here), these flush errors always appear shortly before the crash occurs. The destination filesystem is a hardware RAID 10 array with 2TB of space, formatted as EXT4.
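
Not a confirmed fix, but a tuning sketch sometimes used when huge writes stall a box: shrink the dirty page-cache thresholds so writeback happens in smaller bursts (values are illustrative):

Code:
sudo sysctl -w vm.dirty_background_ratio=5   # start background writeback sooner
sudo sysctl -w vm.dirty_ratio=10             # cap dirty pages at 10% of RAM
# add the same keys to /etc/sysctl.conf to persist across reboots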

View 7 Replies View Related

Server :: Speed Up Single Large File Copying?

Apr 22, 2010

I'm planning to copy a production MySQL InnoDB file from one server to another, and the file size is around 300GB. As the file keeps changing all the time, I have to shut down the MySQL instance and copy the large data file to the other server as quickly as possible. I have to find a way to speed up the copying... I'm wondering whether there's a way to copy the file block by block, and if a destination-side block already has the same content, skip it.
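
rsync's delta algorithm does essentially this: with --inplace it overwrites only the blocks whose rolling checksums differ from what the destination already holds. A sketch (host and paths are examples):

Code:
# Stop mysql first so the file is quiescent, then send only changed blocks
rsync -av --inplace --partial /var/lib/mysql/ibdata1 otherhost:/var/lib/mysql/

The big win comes on the second and later runs, when most blocks already match; the first transfer still sends everything.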

View 4 Replies View Related

Ubuntu :: Error "File Too Large" Copying 7.3gb File To USB Stick

Nov 24, 2010

I am trying to copy a 7.3 GB .iso file to an 8 GB USB stick and I get the following error when it hits 4.0 GB:

Error while copying "xxxxxx.iso". There was an error copying the file into /media/6262-FDBB. Error splicing file: File too large.

The file is to be used by a Windows user, and I'm just trying to do a simple copy, not a burn to USB or anything fancy. Using 10.4.1 LTS, AMD dual core, all latest patches.
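
The error is FAT32's hard per-file limit of 4 GiB (the short uppercase volume label /media/6262-FDBB is typical of FAT). If the stick only needs to work on Windows, reformatting it as NTFS removes the limit. A sketch; the device name is an example, and this erases the stick:

Code:
# Replace sdX1 with the stick's real partition - double-check with df first!
sudo umount /dev/sdX1
sudo mkfs.ntfs -Q -L USBSTICK /dev/sdX1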

View 2 Replies View Related

Debian Programming :: Copying A File Full Path Name To Clipboard

Jul 26, 2013

How do you set up a command just to copy a file's full path name (%F) onto the clipboard? (I can't seem to get this without copying the contents of the file.)
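
One approach (a sketch, assuming xclip is installed and %F is the file manager's placeholder for the selected file): pipe the path, not the file, into the clipboard.

Code:
#!/bin/sh
# copypath.sh - wire this up as the custom command, e.g. "copypath.sh %F"
# readlink -f makes the path absolute; printf avoids a trailing newline
printf '%s' "$(readlink -f "$1")" | xclip -selection clipboard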

View 2 Replies View Related

General :: Grep - Manipulating Large Text File Full Of Records

Nov 26, 2010

I'm trying to manipulate a large text file full of records (metadata - one complete record per line). I need to delete every line on which certain words appear - there are five different words, all pretty simple all-caps strings with occasional whitespace. I tried using grep -v, which worked a treat, but only string-by-string. Ideally I'd like to run this as grep -v -f, where the file targeted by the -f contains the strings I need to match in order to delete the lines they're in.

i.e. grep -v -f filecontainingSTRINGS.txt targetfile.txt > outputfile.txt

When I try this, however, I don't get any matches - or more specifically, no changes are made in the output file. It works fine if there's only one string in filecontainingSTRINGS, but it doesn't work if there's more than one (I'm using newline as the delimiter). (Also my machine doesn't recognise /usr/xpg4/bin/grep - no idea what that's all about!)
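
Two usual suspects worth ruling out (guesses, since the thread gives no answer): the pattern file may carry DOS line endings, so every pattern ends in an invisible carriage return, and plain -f treats each line as a regex rather than a fixed string. A sketch using the thread's own file names:

Code:
# Strip carriage returns in case the file was edited on Windows
tr -d '\r' < filecontainingSTRINGS.txt > patterns.txt
# -F = fixed strings, -v = invert match, -f = patterns from file
grep -v -F -f patterns.txt targetfile.txt > outputfile.txt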

View 5 Replies View Related

OpenSUSE :: Remove Large File On External Disk?

Jan 27, 2010

1. An external hard disk with a VFAT (FAT32) filesystem has a contiguous 23GB file (an old HD disk image). It is too large to 'remove to wastebasket', and unlike MS Windows, remove-to-wastebasket does not sense the file size and simply wipe the file's index entry.

How to remove a large file in SUSE 11.2?
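
Deleting from a terminal bypasses the wastebasket entirely, so the file's size stops mattering. A sketch with an example path:

Code:
# Permanent removal - no trash, no undo
rm -- '/media/disk/old-hd-image.img'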

View 9 Replies View Related

Ubuntu :: Everything Freezes When Copying Large Amount Of Data?

May 20, 2010

Well, when I copy a large amount of data, applications other than Nautilus freeze until the copy is done...

So, what can I do? Because when backing up data this is really annoying =/
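
One mitigation worth trying (a sketch, not a confirmed fix): run big copies from a terminal at idle I/O priority so the desktop keeps its share of disk bandwidth:

Code:
# -c 3 = idle class: the copy touches the disk only when nothing else needs it
ionice -c 3 cp -r /home/user/photos /media/backup/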

View 6 Replies View Related

Red Hat / Fedora :: Full Disk Encryption DD - How To Access Data In DD File

Feb 12, 2010

I am investigating full disk encryption and have made a dd copy of a hard drive that has been encrypted; this dd file is stored on my computer for analysis.

First question is - does anyone know how I can access the data in this dd file even though it's been encrypted?

Second question - is there a dd command to image the system's memory? I ask this because when a system is turned on, you need a password to get past the pre-boot authentication stage. From what I understand, this password is loaded into RAM when power is applied to the system. Would making a copy of the memory also copy the password?
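
On the first question: the image stays ciphertext unless you have the passphrase or key; if the scheme was LUKS (an assumption, the post never names it), the image can be attached and unlocked like a real device. On the second: /dev/mem is restricted on modern kernels, so memory acquisition normally needs a dedicated tool rather than dd. A sketch for the LUKS case:

Code:
sudo losetup /dev/loop0 disk.dd                # attach the image as a block device
sudo cryptsetup luksOpen /dev/loop0 evidence   # prompts for the passphrase
sudo mount -o ro /dev/mapper/evidence /mnt     # read-only for analysis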

View 5 Replies View Related

General :: Copying Large Number Of Files From One Directory To Another

Feb 10, 2010

I've a directory containing around 2.8 lakh (280,000) files. I want to move them to another directory. If I use cp or mv I get an 'argument list too long' error. If I write a script like

for file in *; do
    cp -- "$file" "$dest"    # $dest = destination directory
done

then, because it runs one cp per file, its performance degrades. How can I do this?
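
The 'argument list too long' error comes from the shell expanding * into one enormous command line; tools that batch the arguments themselves avoid both the error and the per-file cp. Two sketches with example paths:

Code:
# find batches arguments up to the kernel limit; -t names the target directory
find /source/dir -maxdepth 1 -type f -exec cp -t /dest/dir {} +
# or let rsync walk the tree itself
rsync -a /source/dir/ /dest/dir/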

View 7 Replies View Related

General :: Copying Large Number Of Files In Windows?

Mar 15, 2011

I am facing a problem copying a very large number of files - 18 lakh (1,800,000) - from my personal hard disk to another hard disk. Each file is very small and the whole folder is around 3.95 GB. Copying with the copy dialog Windows provides is frustrating, and I am not even able to compress the files: it gives me an error that the drive is not readable. The other problem is that I am not able to open this drive in Linux; it shows an error telling me to run chkdsk in Windows, and Windows disk check is also not able to repair the drive and goes into some unsolvable mode. Is there any way to open the disk with errors in Windows and, if not, any way I can copy the data faster? The exact error is: "Disk labled EDU is corrupt go to windows and chkdsk /f there and reboot into window 2 times."
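
If the stuck drive is NTFS, the ntfs-3g package ships a small repair helper that clears the dirty flag and journal so Linux will mount it again (a sketch, assuming NTFS and an example device name; it is no substitute for a full chkdsk):

Code:
sudo umount /dev/sdb1            # example device
sudo ntfsfix /dev/sdb1           # reset the journal and dirty flag
sudo mount -o ro /dev/sdb1 /mnt  # then copy the data off read-only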

View 3 Replies View Related

Fedora Hardware :: Low Transfer Rate When Copying Large Files Over Wireless

Jan 11, 2010

I just bought a HP 3085dx laptop with an intel 5100 agn wireless card.
The problem: copying a big file over wireless to a computer hardwired to the router over gigabit Ethernet only gives an average 3.5MB/second transfer rate. If I do the same copy from my wireless-n MacBook Pro to the same computer, I get a transfer rate of about 11MB/sec. Why the big difference? I noticed the HP always connects to the 2.4 GHz band instead of the 5 GHz bands...

On the HP:
[jerry@bigbox ~]$ ifconfig wlan0
wlan0 Link encap:Ethernet HWaddr 00:24:D6:36:AC:C4
inet addr:192.168.1.75 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::224:d6ff:fe36:acc4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:639243 errors:0 dropped:0 overruns:0 frame:0
TX packets:1293049 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:53832795 (51.3 MiB) TX bytes:1888619922 (1.7 GiB)

[jerry@bigbox ~]$ iwconfig wlan0
wlan0 IEEE 802.11abgn ESSID:"<censored>"
Mode: Managed Frequency:2.412 GHz
Access Point: 00:24:36:A7:27:A3
Bit Rate=0 kb/s Tx-Power=15 dBm
Retry long limit: 7 RTS thr: off Fragment thr:off
Power Management: off
Link Quality=70/70 Signal level=-8 dBm Noise level=-87 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0

I am not getting any errors. I don't know why the bit rate is not known. My AirPort Extreme base station typically reports that the 'rate' for the HP is 250~300 Mbit/s and about the same for the MacBook Pro. The HP is about 6 inches away from the base station. Is there any way to get the rascal to go faster? Is there any way to get the rascal to use the 5 GHz band?
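
A sketch of how to probe the 5 GHz question with the classic wireless-tools commands (interface name from the post; the channel is an example, and NetworkManager may re-associate on its own):

Code:
iwlist wlan0 frequency           # which bands/channels the card supports
sudo iwconfig wlan0 freq 5.18G   # ask to associate on channel 36 (5 GHz)

If the router allows it, giving the 5 GHz radio its own SSID is often the more reliable way to pin a client to that band.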

View 3 Replies View Related

General :: Cp Adds Exclamation Points When Copying Very Large Text Files?

Jul 13, 2009

For my research I have some very large files that are basically millions of lines of ten columns of numbers. These files can be up to 5 GB in size. Recently I noticed that when I made a copy of one of my files, some exclamation points appeared in it where there should not be any: in front of random numbers throughout the file. Making another copy of the file would produce exclamation points in front of different numbers in different parts of the file. Doing this many times has given me up to four exclamation points in different parts of the file. Sometimes the file copies just fine without producing any extraneous exclamation points.

Additionally, I have occasionally seen a "^K" where there should be a newline (the data that should have been on the next line was instead on the previous line with a ^K in front of it) in copies of my files. I don't know if this is related or not.
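
An observation the thread doesn't make: newline is byte 0x0A and "^K" (vertical tab) is 0x0B, a single flipped bit, which points toward flaky RAM or a failing cache rather than a software bug. Checksums make the corruption easy to catch (a sketch; running memtest86+ from the boot menu would be the usual next step):

Code:
# The hashes differ whenever a copy was silently corrupted
md5sum original.dat copy-of-original.dat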

View 7 Replies View Related

Hardware :: Should Use Disk With Report Of Bad Blocks

Feb 4, 2010

I'm setting up a MySQL replication system. I usually maximize what's left in my stock before I buy, so I run badblocks on every second-hand hard drive to make sure it is in good condition before I install an operating system.

I picked up a 200GB SATA hard disk in the stock room and tested it using badblocks. I left it in the office running the test and checked it the next day. The report says it found 30 bad blocks.
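
Two checks that help with the keep-or-toss decision (a sketch; the device name is an example): whether the drive has already remapped sectors, and whether the bad-block count grows between passes. A growing count usually means the disk is on its way out.

Code:
# Reallocated/pending sector counts from the drive's own SMART data
sudo smartctl -A /dev/sdb | grep -i -E 'realloc|pending'
# Non-destructive read test; rerun later and compare the counts
sudo badblocks -sv /dev/sdb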

View 2 Replies View Related

Software :: How Many 4096 Byte Blocks On Disk

Aug 13, 2011

I want to find out how many 4096-byte blocks there are on my disk. I used df -B 4096 and it gave me a number, but I'm not sure it's correct, as I can use dd to read past what should be the final block.

So I do df -B 4096 and it reports this result:

Code:
Filesystem 4K-blocks Used Available Use% Mounted on
/dev/sda1 15618840 13190294 1635137 89% /

But when I use dd to go past that block, it doesn't report an error or anything. The command I'm using is

Code:
dd if=/dev/sda1 bs=4096 count=1 skip=15618841

How can I know that I'm really reading the very last block on the drive?
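
df counts the filesystem's blocks, not the partition's, and a partition can be larger than the filesystem inside it, which is why dd reads happily past df's total. Asking the kernel for the device size settles it (a sketch):

Code:
sudo blockdev --getsize64 /dev/sda1                         # partition size in bytes
echo $(( $(sudo blockdev --getsize64 /dev/sda1) / 4096 ))   # as 4096-byte blocks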

View 2 Replies View Related

Ubuntu :: Disk Full - Can't Free Any Disk Space

Jan 2, 2010

I'm running mythbuntu 9.04 and am having an issue with disk space.

I try to 'rm' various log files, but the space I free up lasts less than a minute before the disk reports as full once more.

df -Th | sort gives:

Quote:

/dev/sda1 ext3 8.3G 7.9G 0 100% /
/dev/sda6 ext3 138G 125G 6.3G 96% /music
/dev/sda7 xfs 783G 617G 167G 79% /videos
/dev/sdb2 xfs 344G 242G 103G 71% /recordings

[Code]....

There's nothing enormous in /var/log and my trash and the root trash are empty.

Also, why are the size and used fields not the same even though 100% usage is being reported on sda1?
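
The size/used gap is almost certainly ext3's reserved blocks: by default 5% of the filesystem is held back for root, and 5% of 8.3G is roughly the missing 0.4G. A sketch to confirm, and to trim the reserve if wanted:

Code:
sudo tune2fs -l /dev/sda1 | grep -i 'reserved block'   # show the reserve
sudo tune2fs -m 1 /dev/sda1                            # lower it to 1%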

View 7 Replies View Related

OpenSUSE Hardware :: Rescued Disk - What To Do With The Bad Blocks That Were Not Copied

Oct 13, 2010

My old system disk almost failed me. I dd_rescue'd the disk onto a new one (_don't_ use dd_rhelp; that one took days and seemed to forget what it had scanned more than once). The condition of the disk was not very good, so naturally I expected the subsequent fsck.ext3 to bomb half of the new disk. But evidently only a few inodes were affected, and I lost only about 10 files completely (mostly on the root partition and not the home partition - yeay!).

However, I then ran a badblocks and put all the bad blocks into a file. I could use this to run fsck but I fear that

1. It could undo or even redo worse than what the first fsck run did, and

2. given the list of bad blocks, fsck might on the one hand rescue/flag damaged files (which is what I want), but I don't want it to flag the bad blocks on the new disk as well (they are not bad blocks anymore, just copies of bad blocks).

So, how can I single out the problematic files using my list of bad blocks (so I can then check one by one whether they can be rescued) without flagging the supposedly bad blocks on the new disk?
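
debugfs can answer exactly this without modifying the filesystem: icheck maps block numbers to inodes, and ncheck maps inodes to path names. A sketch (block and inode numbers are examples, and it assumes the badblocks run used the filesystem's block numbering):

Code:
# Which inodes own these blocks?
sudo debugfs -R "icheck 123456 123789" /dev/sda2
# Which files are those inodes?
sudo debugfs -R "ncheck 8021 8347" /dev/sda2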

View 3 Replies View Related

General :: Implications Of Bad Blocks When Reinstalling A Disk Image

Mar 17, 2010

Let's say I'm using one of those PCs that uses a SSD flash drive in place of a more regular HDD.

Say I burn my favorite .iso distro and install it on this PC. I install my favorite applications and seek out and install any missing drivers and generally tweak the system like you do. When I am finally happy with it, I make an image of this installation to an external USB drive.

Now, say 9 months later some of those SSD blocks have gone bad because they were erased too often. They're no longer usable. Also, because I'm a sloppy person who can't be bothered to delete redundant stuff and run make-cleans and so forth, the disk is getting pretty cluttered and takes longer and longer to do stuff.

I decide the obvious solution is to remove and save any data I need to keep, then just over-write the disk with the image I made 9 months earlier.

The question is: will the firmware be smart enough to re-map my incoming image to avoid these bad blocks on the SSD? Or am I going to wind up with some parts of the image being located on bad areas of the SSD?

View 2 Replies View Related

Ubuntu Multimedia :: Large *.mp4 Files In Gtkpod - Hangs At "Copying Tracks" At 0%

Jul 27, 2010

I am dual booting XP and Ubuntu 10.04, but in the future I will be getting a new machine and I will only be running Ubuntu and won't have access to iTunes. Because I have an iPod Touch, I have been trying to find workarounds for syncing everything that iTunes took care of in the past. One problem I have is managing movies. I have looked through various media players/iPod management tools (Amarok, Rhythmbox, gtkpod) and I am using Rhythmbox to sync my music and and attempting to use gtkpod to sync my movies.

gtkpod is able to sync songs (tested with a test clip a few minutes long) and short *.mp4 files (15MB, I know for sure, from a test). I am unable, however, to get it to sync a movie (~700MB). I am able to drag it onto my iPod in gtkpod, but when I try to save the changes and write the files, it hangs at "Copying Tracks" at 0%. It eventually crashes during the couple of times I have tried to wait it out. So this being my situation, my question is: is there a size limit to the *.mp4 files I can sync to my iPod Touch via gtkpod? Are there any other tools I can use to sync videos to my iPod?

View 9 Replies View Related

OpenSUSE :: Dolphin Losing Files When Copying Many Files Or Large Folders?

Feb 14, 2010

I've discovered that Dolphin seems to lose random files when copying many large folders.

I first noticed this a few months ago when I tried to copy my music library from one folder to another on the same HDD. It consisted of around 600 folders and 6500 files. During the copy there were no errors but after the copy I found that some of the newly copied folders were missing files. I put it down to human error or a glitch.

Yesterday I tried to copy 13 folders containing rips of some of my DVDs. Each folder basically had one film of either 700MB or 1.4GB. Again no errors showed up during the copy but I found 3 of the newly copied folders were empty.

It's not so critical with music or films but I can't afford to lose work data like this.

Has anyone experienced or seen a similar problem with Dolphin? I'm going to have to do some more extensive testing but this is not good.

The first time I noticed the problem I was running KDE4.3.4 (I think) and now the latest was with KDE4.4.0.
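
Until the cause is found, verifying every copy is cheap insurance. A sketch that reports missing or differing files after a copy (paths are examples):

Code:
# Print anything that differs between the two trees
diff -rq /source/music /destination/music
# Or copy with rsync and let a checksum pass confirm the result
rsync -a --checksum /source/music/ /destination/music/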

View 9 Replies View Related

Server :: Difference After Copying Large Directory To A New Directory?

Apr 4, 2010

I'm running a RHEL 5 server. The ABC directory is 57GB; after taking a backup on the same disk, the copy ABC.bkp shows 56GB. I used the command below to copy: # cp -r ABC ABC.bkp (different sizes after copying). I checked both directory sizes with #du -sh <ABC> and du -ks <ABC.bkp>; in both GB and KB there is a sizeable difference (200MB). Why does this happen when copying? What is the solution? What is the correct way to copy one directory to a new directory exactly?
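
Some of that gap is expected: plain cp -r drops hard links and can allocate sparse regions differently, and block-allocation rounding alone shifts du's numbers. Two sketches, one comparing logical rather than on-disk size, one copying with more preserved:

Code:
du -sh --apparent-size ABC ABC.bkp   # logical sizes usually match
cp -a ABC ABC.bkp                    # preserves permissions, times, links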

View 4 Replies View Related

Ubuntu :: Cloning A Large Disk With Little Actual Used Space

Aug 10, 2010

I need to clone a laptop drive to a desktop drive. The laptop disk is 150 GB; however, only about 8 GB is used. Is it possible to clone this disk to a smaller drive?
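
A block-for-block dd clone will not fit, but a file-level clone moves only the 8 GB actually used (a sketch, assuming the smaller target is already partitioned, formatted and mounted; the bootloader must be reinstalled afterwards):

Code:
# Preserve permissions, ACLs, xattrs and hard links
sudo rsync -aAXH /mnt/laptop/ /mnt/desktop/
# Then update /etc/fstab UUIDs and reinstall GRUB on the new disk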

View 3 Replies View Related

Ubuntu / Apple :: Data Size Too Large For Disk?

Apr 28, 2011

I am trying to burn a Mac OS X 10.5 install disc from a 6.7GB .dmg disk image. I thought I would be using two 4.7GB DVD-Rs for this burn, hoping that when the first was full it would ask for another to finish. Instead I get a message that the DVD will not hold the chosen DMG file.

Can I do anything besides buy a dual layer DVD that would hold the whole file?

View 1 Replies View Related

Ubuntu :: Installer Encountered Error Copying Files To Hard Disk

Oct 18, 2010

I have about 170 gigabytes free at the end of my hard disk. I have Windows 7 and SUSE Linux installed on the machine. When I try to install Ubuntu, I create the partitions manually, because I want to add it as a third operating system on this PC. Anyway, I create four partitions: /boot, /, /var and /home. It automatically chose to install the boot loader on sda, not sda9, even though /boot was sda9. I click install.

It gives me this message "The installer encountered an error copying files to the hard disk:
[Errno 5] Input/output error

This is often due to a faulty CD/DVD disk or drive, or a faulty hard disk. It may help to clean the CD/DVD, to burn the CD/DVD at a lower speed, to clean the CD/DVD drive lens (cleaning kits are often available from electronics suppliers), to check whether the hard disk is old and in need of replacement, or to move the system to a cooler environment." I burned another CD and did the same - the same problem.

I tried creating the partitions at the end of the hard disk instead of the beginning, although I am sure there is no error in the hardware, but got the same message. Lastly, I changed the boot partition to be created on sda9. The same problem every time. I downloaded Linux Mint, another operating system, and went through the same steps; the same error message appeared. By the way, the boot ends up damaged after restarting and I have to repair it from the SUSE Linux CD.
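
Errno 5 during install is usually a bad ISO, burn, or optical drive rather than anything about partition layout. Verifying the download and the disc is the standard first step (a sketch; the ISO name is an example):

Code:
# Compare against the hash published on the release page
md5sum ubuntu-10.10-desktop-i386.iso
# The boot menu's "Check disc for defects" entry tests the burned CD itself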

View 1 Replies View Related

Red Hat / Fedora :: RAID And Disk Transfer For Copying Contents

May 5, 2011

I had a Dell 1950 with a 300 GB RAID disk. Now I have purchased a Dell 2950 with 450 GB (6 disks, 3 RAID pairs). I want to pull the old 300 GB disks out of the 1950 and put them in the 2950 (temporarily) to copy all the contents to the new system. How do I know which HDDs to pull out of the 2950 so that I can replace them with the 300 GB HDDs and mount them? I don't know the RAID setup (I know Unix alone, not RAID commands). Is this possible? How do I do it?

View 5 Replies View Related

Server :: Mysql Optimization -- Copying To Tmp Table On Disk?

Nov 16, 2010

We have a CentOS 5.5 x86_64 machine with 8 GB RAM running MySQL 5.0.91-community. Currently we have a very high number of Created_tmp_disk_tables (31k in 4.5 hours!!!). I've read suggestions that we need to increase tmp_table_size, and we've set tmp_table_size to 64M, but this Drupal module's query still causes MySQL to create its tmp table on disk:

Code:
SELECT DISTINCT node.nid AS nid,
comments.subject AS comments_subject,

[code]...
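
Two details worth checking that the post doesn't mention (assumptions on my part, but standard MySQL behavior): the in-memory limit is the smaller of tmp_table_size and max_heap_table_size, so raising only one does nothing, and in 5.0 any temporary table containing TEXT/BLOB columns goes to disk regardless of size. A my.cnf sketch:

Code:
[mysqld]
# Both must be raised; the effective limit is the minimum of the two
tmp_table_size      = 64M
max_heap_table_size = 64M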

View 4 Replies View Related

Debian :: Manually Partitioning And Formatting Large Hard Disk

Jan 5, 2016

What is the recommended method these days for command-line partitioning and formatting of terabyte-size hard disks?

It was easy to keep up when I was working or had access to hardware for re-purposing, but that has all dried up and my knowledge has been left behind. The problems are with new, recent hardware.

Following a crash caused by a now-detectable faulty stick of RAM, I've lost one of my data hard disks, and my fiddling with the replacement keeps producing various errors/warnings, mainly about GPT not being supported; this message is still present despite trying fdisk, cfdisk, gpart, gparted, and(?).

The system is an ASUS mobo using SATA drives (root 500GB: MBR + 3 partitions: /, swap, /home), and two 2.4TB drives with single partitions.
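
For disks over 2 TiB the old MBR label runs out of sector addresses, so a GPT label is required, and the GPT-aware tools are parted and gdisk; older fdisk builds are exactly the ones that print "GPT not supported" warnings. A sketch with an example device:

Code:
sudo parted /dev/sdb mklabel gpt
sudo parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
sudo mkfs.ext4 -L data /dev/sdb1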

View 5 Replies View Related






