Ubuntu :: Pulling Of Large Files From A Mounted Directory Into RAM?

Oct 18, 2010

I'm having a bit of an issue with Lucid installed via Wubi. I stuck the OS on its own partition (30 GB in size), and don't store any large files in the Ubuntu file system (when I download something large I move it to another hard drive). I don't have anything wacky or esoteric installed on my system.

I've been consistently having a problem where, after a few hours or a few days of being booted up, Ubuntu warns me that my available HD space is dangerously small. The amount of available HD space Ubuntu sees then shrinks from a few GB to nothing within a few minutes, and the only way I can seem to solve this is to reboot. Taking a closer look at what's happening, my Home folder balloons in size until there's no more writable space recognized. But there are no files being created or added to, so it looks like there's a bug of some sort. This SEEMS to be correlated with watching videos (or maybe it's the pulling of large files from a mounted directory into RAM? My videos are all on another HD, as mentioned before). I can generally go a few days without getting the "low space" message, but I can't seem to make it through a full 2-hour movie without getting the error.
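
Since no files are visibly being created or growing, it may help to watch where the space actually goes while a video plays. A diagnostic sketch (paths and interval are arbitrary):

Code:
# every 30s: free space on /, then the five biggest items in $HOME
watch -n 30 'df -h / ; du -xsm ~/* ~/.[!.]* 2>/dev/null | sort -n | tail -n 5'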

View 3 Replies



General :: Copying Large Number Of Files From One Directory To Another

Feb 10, 2010

I've a directory containing around 2.8 lakh (280,000) files. I want to move them to another directory. If I use cp or mv then I get an 'argument list too long' error. If I write a script like

for file in $(ls); do
    cp "$file" {destination}
done

then, because of the ls command, its performance degrades. How can I do this?
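
For the record, the usual way around both problems is to let find hand the names to the command in batches, instead of expanding a glob or parsing ls. A sketch with placeholder paths (mv -t needs GNU coreutils):

Code:
# move files in batches without building one huge argument list
find /path/to/source -maxdepth 1 -type f -exec mv -t /path/to/destination {} +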

View 7 Replies View Related

Fedora :: NFS Mounts But No Files In Mounted Directory

Feb 10, 2011

I am trying to get NFS mounts working properly using autofs, but I'm only getting partial results. I have an auto.master set up for indirect mounts and a list of map files for each entry. Forgot to mention that when I mount the directory using /etc/fstab or set up a direct mount in /etc/auto.master, I can see files in the mounted directory...
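
For reference, a minimal indirect-map setup looks something like this (server name, export, and mount point are placeholders):

Code:
# /etc/auto.master
/mnt/nfs    /etc/auto.nfs    --timeout=60

# /etc/auto.nfs -- the key becomes a subdirectory of /mnt/nfs, mounted on demand
data    -rw,soft,intr    fileserver:/export/data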

View 10 Replies View Related

OpenSUSE :: Pulling Active Directory Attributes?

Jun 7, 2011

I have tried using Likewise, but I came across this yesterday: when you install Likewise only on a Linux, Unix, or Mac computer and not on Active Directory, you cannot associate a Likewise cell with an organizational unit, and thus you have no way to define a home directory and shell in Active Directory for users who log on to the computer with their domain credentials. I am trying to pull attributes from Active Directory, namely homeDirectory.
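
One way to read the attribute directly, as a sketch, assuming the OpenLDAP client tools are installed and you can bind with domain credentials (host, bind account, and user are placeholders):

Code:
ldapsearch -LLL -H ldap://dc.example.com -D 'binduser@example.com' -W \
    -b 'dc=example,dc=com' '(sAMAccountName=jdoe)' homeDirectory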

View 1 Replies View Related

Fedora :: Back Up Files From Home And From Another Mounted Directory On System?

May 28, 2010

I am using Back In Time to back up files from home and from another mounted directory on my system (NTFS). The back-ups are occurring automatically and appear to be complete, but I cannot delete old back-up snapshots in the backintime GUI. I also cannot delete the snapshots with sudo nautilus, or as root in a terminal (with rmdir). My drive is filling up, and rather than uninstalling Back In Time I would like to simply delete the unneeded snapshots. How can I delete these files? Is there an rsync file that I should configure to delete these? My expectation of backintime was that it would back up at the requested frequency and not create complete duplicate copies of the files, but use symbolic links to unchanged files. How can I verify if this is the case? Does the cron file control this?
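
For what it's worth, Back In Time uses rsync with hard links (not symlinks) for unchanged files, and it normally marks old snapshot directories read-only, which would explain why rm and rmdir fail; making a snapshot writable first (chmod -R u+w on it) is one workaround, though that's an assumption about this setup. A quick way to verify the hard-link behaviour (paths are placeholders): identical inode numbers mean the two snapshots share one copy on disk.

Code:
ls -li /path/to/snapshot1/some/file /path/to/snapshot2/some/file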

View 1 Replies View Related

General :: Searching A Directory And Pulling Out Filenames With A Certain Pattern?

May 17, 2010

I would like to search a specific directory and pull out filenames that have this pattern: "_bsc_". Then I want to do some processing and move the file to another directory.
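
A minimal shell sketch (source and destination directories are placeholders):

Code:
# handle every file whose name contains _bsc_, then move it
for f in /path/to/source/*_bsc_*; do
    [ -e "$f" ] || continue    # glob matched nothing
    # ... do the processing on "$f" here ...
    mv -- "$f" /path/to/destination/
done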

View 5 Replies View Related

Server :: Difference After Copying Large Directory To A New Directory?

Apr 4, 2010

I have a RHEL-5 server. The ABC directory size is 57GB; after taking a backup on the same disk under the name ABC.bkp it shows 56GB. I used the command below to copy/backup: # cp -r ABC ABC.bkp (different sizes after copying). I checked both directory sizes with du -sh ABC and du -ks ABC.bkp. In both GB and KB there is a lot of difference (200MB). Why does this happen when copying? What is the solution? What is the correct way of copying one directory to a new directory exactly?
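
Size drift after cp -r is usually benign: cp -r expands sparse files, duplicates hard-linked files, and the two trees' directory blocks are counted differently, so 200MB on 57GB proves nothing by itself. Two sketches worth trying: verify content instead of size, and use an archive-style copy.

Code:
# compare the trees' contents directly
diff -r ABC ABC.bkp
# archive copy: preserves links, attributes, and sparseness
cp -a ABC ABC.bkp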

View 4 Replies View Related

Ubuntu :: Mounting Directory Within Already Mounted Directory?

Aug 9, 2011

I am curious whether mounting a directory inside an already mounted directory is considered safe? I have done this on an Ubuntu server before but never thought to ask if it could cause problems. An example:

/dev/mapper/Raid5-VMStorage on /var/lib/libvirt/images
/dev/mapper/Raid1-SpareStorage on /var/lib/libvirt/images/Workstations
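
For what it's worth, nested mounts like this are routine; the main caveat is ordering, since the outer filesystem must be mounted before the inner one. In /etc/fstab that simply means listing the outer mount first (filesystem type and options below are assumptions):

Code:
/dev/mapper/Raid5-VMStorage     /var/lib/libvirt/images                ext4  defaults  0  2
/dev/mapper/Raid1-SpareStorage  /var/lib/libvirt/images/Workstations   ext4  defaults  0  2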

View 3 Replies View Related

Ubuntu :: Back Up A Large Directory ( 13 GB ) To DVD?

Jan 21, 2010

If I wanted to back up a large directory (13 GB) to DVD, what would be the best way to do this? Basically, what is the easiest way to make an archive that is split into volumes small enough to burn to disc?
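
One common approach, as a sketch (path and slice size are placeholders; about 4.3GB fits a single-layer DVD):

Code:
# write a compressed archive in DVD-sized pieces
tar -czf - /path/to/dir | split -b 4300m - backup.tar.gz.part_
# restore later by concatenating the pieces
cat backup.tar.gz.part_* | tar -xzf -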

View 3 Replies View Related

Ubuntu :: Command With The -r Option To Compare A Large Number Of Files And Files In Subdirectories

Jun 16, 2011

I am using the diff command with the -r option to compare a large number of files and files in subdirectories. My main interest is to find out which files have been changed, not what the actual changes are, and since a lot of files have been changed, it would be a lot easier to view the file names only. Is there an option for diff that might do this, or does there exist a similar tool/command that could do the job?
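
diff has exactly this built in:

Code:
# -q/--brief prints only the names of files that differ; -r recurses
diff -rq dir1 dir2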

View 1 Replies View Related

Ubuntu :: Encrypted Private Directory Too Large?

Oct 9, 2010

This thread was nearly titled "The volume Filesystem root has only 128 KB free space remaining", then I discovered the cause: my Encrypted Private Directory had grown to 20GB, eating all the free space on my Ubuntu system partition. Here's what happened: all was well with my system last night; I left it downloading 2 GB of files from the internet to an NTFS drive and returned to low-space errors this morning. I checked, and nothing had been downloaded to my Ubuntu partition; even if it had, it could have handled the 2GB without issue. Did some reading on here, and the first step I tried found the problem:

Code:
mark@media:~$ df -h
Filesystem Size Used Avail Use% Mounted on

[code]....
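
In case it helps anyone else hitting the same symptom, this is a quick way to see which top-level item in home is eating the partition (a sketch):

Code:
# sizes in MB, one filesystem only, biggest last
du -xm --max-depth=1 ~ 2>/dev/null | sort -n | tail -n 10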

View 2 Replies View Related

OpenSUSE :: Dolphin Losing Files When Copying Many Files Or Large Folders?

Feb 14, 2010

I've discovered that Dolphin seems to lose random files when copying many large folders.

I first noticed this a few months ago when I tried to copy my music library from one folder to another on the same HDD. It consisted of around 600 folders and 6500 files. During the copy there were no errors but after the copy I found that some of the newly copied folders were missing files. I put it down to human error or a glitch.

Yesterday I tried to copy 13 folders containing rips of some of my DVDs. Each folder basically had one film of either 700MB or 1.4GB. Again no errors showed up during the copy but I found 3 of the newly copied folders were empty.

It's not so critical with music or films but I can't afford to lose work data like this.

Has anyone experienced or seen a similar problem with Dolphin? I'm going to have to do some more extensive testing but this is not good.

The first time I noticed the problem I was running KDE4.3.4 (I think) and now the latest was with KDE4.4.0.
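
Until the cause is found, an independent check after each big copy is cheap insurance. A sketch (trailing slashes matter to rsync; -c does full checksums, so it is slow but thorough):

Code:
# dry run: list anything in the source that is missing or different in the copy
rsync -rcn --out-format='%n' /path/to/source/ /path/to/copy/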

View 9 Replies View Related

Software :: Differentiate Two Large Text Files Using Shell Script / Files Are Like Below?

Jan 20, 2009

I want to automate this using a script. How can I automate it? The files are like below:

File1:
s.no# 1 name:aaaaaa
city:abcd

[code]...
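
Only a fragment of the format survives above, but if each record can be reduced to one line, the standard tools already compare large files well. A sketch, assuming bash for the process substitution:

Code:
# lines present in File1 but not in File2, and the reverse
comm -23 <(sort File1) <(sort File2) > only_in_file1.txt
comm -13 <(sort File1) <(sort File2) > only_in_file2.txt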

View 1 Replies View Related

Ubuntu :: Copying Files To A Directory And Skip The Files That Already Exist In The Directory?

Jun 30, 2011

How would I go about copying files to a directory, yet skip the files that already exist in the directory, and also remove the files that are already in the directory? For example:

Code:

$ls /dir1
img001.jpg
img002.jpg

[code]....

Now i would like to copy from dir1 to dir2, but the contents of dir2 would be:

Code:

$ls /dir2
img003.jpg

View 7 Replies View Related

General :: Large Directory With Wget With Two Links Pointing At Same Thing

Mar 19, 2011

I'm trying to crawl a directory on a website and basically download everything in it. The structure is simple enough (though there are also multiple folders), but there is one thing that makes wget choke up: both of the links work, but they are both the same thing, so wget will download the same file twice. How can I make wget ignore the first one? What I've tried so far doesn't seem to actually do anything; it will still download the duplicate URLs.
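
If the duplicates are the usual Apache index sort links (?C=N;O=A and friends), newer wget builds can skip them by pattern. A sketch, assuming a wget version that has --reject-regex:

Code:
# -r recurse, -np don't ascend to the parent; skip any URL containing "?C="
wget -r -np --reject-regex '\?C=' http://example.com/somedir/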

View 1 Replies View Related

General :: Splitting Large Directory Over Multiple Blank DVDs?

May 19, 2010

I am currently trying to copy a directory of roughly 400GB to DVD and have gotten myself stuck. I tried to tar and then split; however, I don't have enough room on my hard drive to make a compressed tar and split it up and then burn to disk, so I need a way to tar and compress the directory, split it, and burn to disk every 4.3GB.

I went ahead and installed DAR as an alternative, as I hear it is designed for this type of task, but I can't figure out which way is heads or tails.

My OS is the newest version of Ubuntu 10.
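
dar really is built for this: it writes fixed-size slices directly and can pause after each one, so each slice can be burned and deleted before the next is created, with no full-size intermediate archive. A sketch (archive basename, source directory, and scratch path are placeholders):

Code:
# -c archive basename, -R source root, -s slice size, -z compress, -p pause per slice
dar -c /scratch/backup -R /path/to/dir -s 4300M -z -p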

View 5 Replies View Related

Software :: Combine Remote Shares To One Large Contiguous Directory?

Feb 17, 2010

I know it is possible to do... but I am not sure how to go about the whole thing. Here's the scenario. I run a lab. Lots of PCs. As time goes by, the older ones dont have the memory or disk space to run more modern apps. But I want to put them to use...

What I am trying to do, and have started, is the following: 1. Install Linux on a bunch of them and make a share on each. I've already installed FreeNAS on four machines (let's call these machines ClientA, ClientB, ClientC, and ClientD) and have made all the available disk space shareable.

2. Install Linux on a fifth machine (call this Machine1), and on this machine combine over-the-network all the shares from ClientA, ClientB, ClientC, and ClientD into one large "virtual" directory on Machine1. I know this is do-able, but what I hope to have is the total disk space from all the machines in step 1 combined for the purposes of saving files. Not sure which file system to use. For example, if the other four machines have 2GB of space each, I want to be able to save a 7GB file.

3. And then allow sharing of this one large directory using Samba.

4. Then allow lab users (not on any of the above mentioned machines) to access the Samba-enabled large shared directory on Machine1 to read and write files. The user will have no idea that the file[s] is/are not on Machine1, or that it may be segmented in some way, nor should they care.

I understand the risks (if any one machine of ClientA, ClientB, ClientC, and ClientD goes down, I probably lose everything). I am considering throwing mirroring into the mix (mirror Machine1's large directory), but that can wait.

So in the above scenario, what file system can I use on Machine1 to combine all the shares from ClientA, ClientB, ClientC, and ClientD to make one large "virtual" directory?

I've looked at UnionFS, but from my understanding while it combines directories, the maximum file size is the size of the largest share. Is this true?
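
As far as I know that limitation is real: union-style filesystems, and poolers like mhddfs, place each whole file on a single branch, so no file can exceed the free space of the largest member. Spanning one 7GB file across 2GB nodes needs a striping filesystem (GlusterFS, for example) instead. For comparison, a pooling sketch with placeholder mount points:

Code:
# one pooled tree; each file still lives entirely on one of the four shares
mhddfs /mnt/clientA,/mnt/clientB,/mnt/clientC,/mnt/clientD /srv/pool -o allow_other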

View 3 Replies View Related

Server :: Extremely Large Metadata Size For Directory On An Ext4 Filesystem?

Jun 24, 2011

I am running CentOS 5.5 with a 14TB ext4 volume. We are sharing out a few sub-directories via NFS. Our customer was doing some penetration testing with our web app that writes to one of those NFS shares. We are not sure if they did something to cause the metadata to grow so large or if it is corrupt. Here is the listing:

drwxrwxr-x 1 owner owner 470M Jun 24 18:15 temp.bad

I guess the metadata could actually be that large; however, we have been unable to perform any operations on that directory to determine if it is just loaded with files or corrupted. We have not run an fsck on the volume because we would need to schedule downtime for our customers to do so. Has anyone come across this before?
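
One low-impact check before scheduling an fsck, as a sketch: plain ls sorts and stats every entry and can appear to hang on a directory this size, but with sorting disabled it just streams the entries.

Code:
# -f disables sorting (and implies -a), so this works even on a huge directory
ls -f /path/to/temp.bad | wc -l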

View 2 Replies View Related

Ubuntu :: NFS Mounted Directory \ The Owner Is The Default Superuser Of The System?

Nov 8, 2010

I can mount an NFS directory as a regular user (one which doesn't have sudo rights) because a suitable entry (i.e. with the user option) is defined in the /etc/fstab file. But when I mount it, I am not the owner of it! The owner is the default superuser of the system, so I don't have write permissions in the mounted directory.
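
For context, NFS ownership does not come from whoever runs mount: the server reports each file's numeric UID/GID, so the export stays owned by root (or whoever owns it server-side) regardless of who mounts it; write access has to be granted on the server, for example by matching UIDs there. A typical user-mountable fstab line, for reference (server and paths are placeholders):

Code:
fileserver:/export/data    /mnt/data    nfs    rw,user,noauto    0 0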

View 4 Replies View Related

OpenSUSE Install :: 11.1 - Two Devices Mounted On One Directory

Jan 11, 2010

I have 3 disks in my PC which are partitioned identically, as I use them for RAID 1. The first partition on every disk is a simple ext2 partition for booting; no RAID there. So I mount them as /boot, /boot2 and /boot3, and I can back up my /boot to the other boot directories. That worked for some months, and this morning I just wanted to check whether all directories have enough free space left. So I did a df -h and got this:

Code:
Dateisystem Größe Benut Verf Ben% Eingehängt auf
/dev/mapper/system-root
6,0G 301M 5,4G 6% /
udev 1,5G 292K 1,5G 1% /dev
/dev/sdb1 122M 29M 87M 25% /boot
/dev/sdc1 38M 21M 16M 59% /boot2
...
/dev/sda1 122M 29M 87M 25% /boot
As you can see /dev/sda1 and /dev/sdb1 are both mounted on /boot

Here is what mount says:
Code:
/dev/sdb1 on /boot type ext2 (ro,acl,user_xattr)
/dev/sdc1 on /boot2 type ext2 (ro,acl,user_xattr)
...
/dev/sda1 on /boot type ext2 (ro,acl,user_xattr)

This is no problem for me, as I could just remount it correctly, but I would like to know if this problem is known. I have not changed anything yet, and this PC is a server which is running 24/7, so I can deliver more debugging information if someone is interested.
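
Worth checking whether the duplicate is real or cosmetic: df reads /etc/mtab, which can accumulate stale entries, while /proc/mounts is the kernel's own list. A diagnostic sketch:

Code:
# compare the kernel's view with the mtab file that df relies on
grep boot /proc/mounts
grep boot /etc/mtab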

View 9 Replies View Related

General :: Setting ACL For A Temporarily Mounted Directory?

Sep 8, 2011

I wanted to set an ACL for a directory. For that, the device has to be mounted with the acl option on that directory.

But I do not want to add the acl mount option in /etc/fstab. So I am temporarily mounting the device on a temporary directory with acl, setting the ACL, and then unmounting it. Then I mount it on the original directory.

The code is below:

tmp="/tmp1/backup"
orig="/mnt1/backup"
dev="/dev/sda2"
mkdir -p $tmp

[Code]....

The group is being changed, but the ACL is not set for the directory.
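
One thing that may simplify this: ACLs are stored in the filesystem's inodes, not tied to a mount point, so remounting in place avoids the temporary mount entirely. A sketch (group name is a placeholder):

Code:
mount -o remount,acl /mnt1/backup
setfacl -m g:backupgroup:rwx /mnt1/backup
getfacl /mnt1/backup    # verify the ACL took effect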

View 1 Replies View Related

Server :: Changing Permissions On NFS Mounted Directory?

Apr 19, 2011

I have a server running RHEL6 and a virtual machine also running RHEL6. I created a directory /home/data on the server and another on the VM. When I mount the host directory on the VM, I am not able to change the ownership/permissions through the VM no matter what. The ownership is set to "nobody" and I can't even change it to root.
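
Two usual suspects, for what it's worth: NFS root squashing (root on the client is mapped to nobody unless the export allows otherwise), and on NFSv4 an idmapper domain mismatch that maps every user to nobody. The export-side fix for the first looks like this (path and client name are placeholders):

Code:
# /etc/exports on the host, then re-export with: exportfs -ra
/home/data    vmguest.example.com(rw,sync,no_root_squash)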

View 4 Replies View Related

Programming :: Deleted Files In Directory With So Many Files Without Deleting Directory Itself

Nov 14, 2010

There are millions of files in many directories. Whenever I try rm *, or find, or xargs, they say 'argument list too long' and exit. How can I delete the files in a directory with so many files without deleting the directory itself?
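
The error comes from the shell expanding * into one enormous argument list, not from the tools themselves; letting find walk the directory avoids building that list. A sketch:

Code:
# delete the regular files without ever expanding a glob
find /path/to/dir -type f -delete
# equivalent for older find versions without -delete
find /path/to/dir -type f -print0 | xargs -0 rm -f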

View 3 Replies View Related

General :: Where Is /boot Directory Mounted For Multiple Kernels

Oct 4, 2010

I'm running Fedora 12 - Linux 2.6.32.21 with a boot partion on /dev/sdb3 of a hard disk.

I downloaded a vanilla kernel version 2.6.35.4 and have built it and run it successfully. I built this kernel to play with building device drivers.

My grub configuration uses the same root filesystem for my Fedora installation as for my vanilla 2.6.35.4 kernel; both use the LVM root filesystem. (/dev/sdb4 /dev/sdb5 /dev/sdb6)

When I'm running Fedora 12 (2.6.32.21) I can see the files in /boot, which contains my kernel, system-map, initramfs, grub directory, etc. I also see my vanilla kernel 2.6.35.4 and its associated support files (map, initramfs, etc.)

My question is: when I boot into my vanilla 2.6.35.4 kernel and look in /boot, I only see my vanilla kernel and its associated support files. No grub, no Fedora kernel. If I do a df -a, I see that /dev/sdb3 is not mounted like it is when I'm running my Fedora kernel. I'm confused as to what is going on here. Can anyone shed some light on this?
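
A quick way to confirm what is happening, as a sketch: if /boot shows up as part of / rather than as its own mount, you are looking at the underlying directory on the root filesystem instead of /dev/sdb3, which usually points at the fstab entry not being applied (or the mount failing silently) under the vanilla kernel.

Code:
df -h /boot           # which device actually backs /boot right now?
grep boot /etc/fstab  # is there an entry that should mount /dev/sdb3?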

View 5 Replies View Related

Software :: SSHFS - Permissions Denied On Mounted Directory

Jul 8, 2011

I have a problem with sshfs. I want to share a binary with some other computers, but I only want them to be able to execute it (no read/write). So, on my main server, I chown root:root bin and chmod 701 bin. That works nicely on the main server; local users can execute bin without read/write... But when I mount the directory using sshfs, users can't exec/read/write...

SSHFS version 2.2
FUSE library version: 2.8.4
fusermount version: 2.8.4
using FUSE kernel interface version 7.12
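
As far as I understand FUSE over sftp, mode 701 cannot work here: to execute the binary the client has to read its contents across the wire first, and the server-side sftp process runs as the connecting user, so execute-only permission blocks the transfer itself. The closest compromises, as a sketch (the binary becomes readable):

Code:
chmod 755 bin            # readable and executable: works over sshfs
chmod 711 /path/to/dir   # hides the directory listing instead, if that is the goal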

View 9 Replies View Related

CentOS 5 :: NFS Mounted Directory Owned By Avahi-autoipd

Apr 15, 2009

I'm having trouble with an NFS mounted directory on a newly installed 5.3 server.

The directory mounts fine using this command:

However I cannot write to the directory despite it being mounted RW and having read-write access on the host (a Netapp Filer). If I check permissions on the mounted directory, I see it is owned by 'avahi-autoipd':

I have disabled the avahi-daemon in chkconfig, although it wasn't running to begin with. Any idea why this directory would be owned by avahi-autoipd and whether or not that has anything to do with why I can't write to the mounted directory?
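
This is usually a numeric-UID coincidence rather than anything avahi is doing: NFS transmits raw UIDs, and the directory's owner UID on the filer happens to match avahi-autoipd's UID in the local passwd file. A quick check, as a sketch:

Code:
ls -ln /path/to/mountpoint    # raw numeric owner UID/GID as the filer reports them
getent passwd avahi-autoipd   # which UID that name maps to on this machine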

View 3 Replies View Related

Ubuntu :: Rsync Really Slow On Large Files?

Mar 1, 2010

I have Ubuntu on both my laptop and desktop machines, both are connected to the same network. I back up the laptop to the desktop by running the following on the laptop:

rsync -avv --stats /home/alisdt alisdt@xxx.xxx.xxx.xxx:/home/alisdt/laptop_backup

(with the IP address of the desktop instead of the many x, obviously). Whenever rsync hits a large file (greater than a few MB), the network use rapidly drops to ~60KB/s (that's kilobytes, not bits). When I copy the same file to the same place using scp, I get > 500KB/s throughout the transfer. Things I've tried:

* mounting the desktop home dir on the laptop using SSHFS -- a simple file copy is fast, rsync is still slow
* ditto with NFS
* rsync --whole-file option, in case the delta-transfer algorithm was choking on large files
* rsync --inplace option
* HPN-SSH (http://www.psc.edu/networking/projects/hpn-ssh/) to enable dynamic window and unencrypted bulk transfer, just in case it was some ssh bottleneck

I think it's either an rsync application problem, or a network problem that is only affecting rsync. Any ideas, or other ideas of what I can try to debug? In case it's relevant, I'm using 9.04 on both machines. (A standing bug prevents me from upgrading the laptop, and I haven't bothered to upgrade the desktop.)
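
One more isolation step that may be worth trying: run rsync against a native rsync daemon on the desktop, taking ssh out of the path entirely. A sketch (module name is a placeholder; the xxx stand in for the desktop's IP as above):

Code:
# /etc/rsyncd.conf on the desktop, then start it with: rsync --daemon
#   [laptop_backup]
#       path = /home/alisdt/laptop_backup
#       read only = false
# on the laptop:
rsync -avv --stats /home/alisdt rsync://xxx.xxx.xxx.xxx/laptop_backup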

View 3 Replies View Related

Ubuntu :: Moving Large Amounts Of Files

Mar 6, 2010

I am trying to move a large number of files (over 30k and 86GB) to another HDD but I get an 'Argument list too long' error. I tried rsync, cp and mv and still get the same error.
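
Same root cause as the classic glob problem: the shell, not the tool, is building a 30k-entry argument list. Passing the names through find sidesteps it. A sketch (mv -t needs GNU coreutils):

Code:
# move everything inside src to dest without expanding a shell glob
find /path/to/src -mindepth 1 -maxdepth 1 -exec mv -t /path/to/dest {} +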

View 1 Replies View Related

Ubuntu :: Cups: Large Files Do Not Print?

Nov 29, 2010

I have a problem with CUPS on a Lucid/64 machine.

"Unable to write print data: Broken pipe"

The PDF file to print has a size of 4.7MB. After sending the file to the printer, the size of the file is more than 18MB.

We use a Xerox WorkCentre 7232, which is connected to CUPS via

socket://ip_adress:9100

The same configuration had been working fine for several years with Hardy.

CUPS refuses to print large files; when splitting up the print file, everything works fine.
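
To rule out a server-side cap, it may be worth checking whether cupsd.conf limits request sizes and watching the error log while the job runs. A diagnostic sketch (0 means unlimited):

Code:
# /etc/cups/cupsd.conf
MaxRequestSize 0
# then restart cups and watch the job:
tail -f /var/log/cups/error_log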

View 4 Replies View Related

Ubuntu :: File System Format For Mac OSX 10.5 For Large Files?

Sep 19, 2010

Is there a file system that both Mac OS X 10.5 and Linux can read/write for large files (like 4GB files)? My desktop is Ubuntu and I run most things from there, but I want to back up my MacBook and Linux box on the same external hard drive. It seems there are some (paid) apps for Mac that will mount NTFS, but I'm wondering if there is just a shared file system that will work for both.
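
One option that fits this combination: HFS+ with journaling disabled is writable from stock OS X and from Linux's hfsplus driver, and unlike FAT32 it handles files over 4GB. A sketch, assuming the hfsprogs package is installed and /dev/sdX1 stands in for the real partition:

Code:
# format the external partition as non-journaled HFS+
sudo mkfs.hfsplus -v "SharedBackup" /dev/sdX1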

View 9 Replies View Related






