Software :: Backup Size Limiting - BackInTime Doesn't Have An Option To Limit Disk Space Used
Oct 15, 2010
I have BackInTime backing up my computer to a RAID cluster. The problem is that BackInTime doesn't have an option to limit disk space used. I also use this drive as a fileserver, and need to be able to keep some space open for that.
Is there a way that I can limit the amount of space a specific folder can take up? Alternatively, is it possible to create a disk image that only takes up the space actually written into it, but can automatically expand up to a certain size? It would work similarly to the Mac sparse bundle format.
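One approach that behaves a lot like a sparse bundle is a sparse loopback image: the image file only consumes as much space on the RAID as has actually been written into it, but the filesystem inside it can never grow past the size it was created with. A minimal sketch, assuming a 200 GB cap and made-up paths:
Code:
# create a sparse 200 GB image (blocks are allocated only as data is written)
sudo truncate -s 200G /srv/backup.img
sudo mkfs.ext4 -F /srv/backup.img
sudo mkdir -p /mnt/backintime
sudo mount -o loop /srv/backup.img /mnt/backintime
Point BackInTime at /mnt/backintime and its snapshots can never take more than 200 GB of the RAID. One caveat: space freed inside the image is not automatically handed back to the host filesystem unless the loop device supports discard and you run fstrim, so treat the cap as spent once it is reached.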
How can I limit disk space usage for one user? For example: user john123 can only use 100 MB of my hard disk, and user jake155 can only use 250 MB of my hard disk.
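For per-user limits the standard mechanism is filesystem quotas. A rough sketch on a Debian/Ubuntu-style system, assuming the users' data lives on /home (the package name, mount point and exact numbers are all adjustable):
Code:
# add "usrquota" to the /home entry in /etc/fstab first, then:
sudo apt-get install quota
sudo mount -o remount /home
sudo quotacheck -cum /home            # build the initial quota files
sudo quotaon /home
# setquota -u user soft hard isoft ihard fs   (block limits are in 1 KB blocks)
sudo setquota -u john123  92160 102400 0 0 /home
sudo setquota -u jake155 245760 256000 0 0 /home
Running repquota /home afterwards shows each user's usage against the limits.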
The limit-bandwidth options in U1 don't work for me. I check the checkboxes and change the values, but when I come back later they are unchecked again, with the default values filled back in.
I'm working on a few servers running CentOS and using Postfix. I don't know what the exact problem is, but we are having problems with the disk space being maxed out at 100 GB. What we think is happening is that Postfix is either caching or logging all the emails we send out. We sent 250k emails (500 KB apiece) over the weekend and had trouble with that quantity. It seems some of those emails were queued up for retry, but we didn't have sufficient disk space for that? Something broke; I'm not sure what.
What I want to do is find and change the config setting that controls Postfix's email retrying, and possibly limit it (not sure if this will fix my problem). Or turn off or limit however Postfix logs/caches emails, so that it won't take up all the disk space when messages are queued for retry. Again, I'm totally lost here, on both what's going on and how to fix it, and I'm not sure what more information is needed to address this problem.
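On the retry side specifically, Postfix keeps undeliverable mail in the deferred queue and keeps retrying it for maximal_queue_lifetime (5 days by default), which is usually what fills the disk after a big send. A hedged sketch of the main.cf knobs (the one-day values are just examples):
Code:
# /etc/postfix/main.cf
maximal_queue_lifetime = 1d    # give up on deferred mail after one day
bounce_queue_lifetime  = 1d    # same for undeliverable bounce messages
# then reload: sudo postfix reload
To see what is actually eating the space, postqueue -p (or mailq) lists the queue, and du -sh /var/spool/postfix /var/log/maillog* compares the queue against the logs; postsuper -d ALL deferred deletes everything in the deferred queue if it is safe to drop. The logs themselves are rotated by logrotate, so they are normally a separate problem from the queue.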
I have been using BackInTime so far. Now I am switching to 64-bit and have installed BackInTime again. I first received some kind of warning that the snapshots were being converted to a new format, but I still cannot take any snapshots with it; I am simply told BackInTime could not make any backups.
I have 2 directories in my home folder that I would like to set a size limit on. The directories are ~/backup and ~/temp. Is there an easy way to limit the size of a directory without having to make partitions?
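If the filesystem supports project (directory) quotas, this can be done without partitions; the sketch below assumes /home is XFS mounted with the prjquota option (ext4 also supports project quotas on recent kernels), and the project names, IDs and limits are made up:
Code:
# /etc/projects: map project IDs to directories
#   10:/home/me/backup
#   11:/home/me/temp
# /etc/projid: give the projects names
#   backup:10
#   temp:11
sudo xfs_quota -x -c 'project -s backup' /home
sudo xfs_quota -x -c 'project -s temp' /home
sudo xfs_quota -x -c 'limit -p bhard=20g backup' /home
sudo xfs_quota -x -c 'limit -p bhard=5g temp' /home
Failing that, a fixed-size image file mounted on each directory gives the same effect on any filesystem.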
Something is using up a huge amount of my disk space, about 10 GB, and I cannot determine what it is. When I look at my disk usage in System Monitor it says I have used about 25 GB, but when I scan the directory in Disk Usage Analyzer the entire filesystem usage is 15 GB.
I used PhotoRec to recover lost files and it brought up 70 GB worth of files. When I was done looking through them I deleted these files, but they still seem to be taking up my disk space. When I try to access my trash bin as root I get a message that reads: "The folder contents could not be displayed. Sorry, could not display all the contents of 'trash': operation not supported." If I open my trash bin when I'm not root, the bin is empty.
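A couple of hedged pointers: files deleted through a root file manager usually land in root's own trash rather than yours, and du run as root shows where the space actually went. The trash path below is the usual one on newer GNOME releases (/root/.Trash on older ones):
Code:
# which top-level directories hold the space (stay on one filesystem)
sudo du -xh --max-depth=1 / | sort -h
# inspect root's trash and, if it really is the recovered files, empty it
sudo ls /root/.local/share/Trash/files
sudo rm -rf /root/.local/share/Trash/files/* /root/.local/share/Trash/info/*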
In trying to solve a friend's lack of foresight, I have currently disabled my system. I was using dd_rescue to make a copy of a drive with a corrupt and unfixable partition table. I was a fool: I had a drive mounted at /media/Storage, but ran the backup to /media/storage. Thus dd_rescue completely filled my primary drive before informing me that there was a problem. I don't really trust myself with command-line work, so I foolishly sudo'ed Nautilus and deleted the folder /media/storage. Unfortunately, I didn't realize it at the time, but the available space on the drive still read 0 bytes. I tried running sudo apt-get clean from a terminal, but for some inane reason the laptop screen won't support the display setting for the terminal login, so I just had to hope that I was doing it right. I wasn't, and decided to try working from a live CD so I could see what I was doing. The folder /root/.Trash/ doesn't exist on Ubuntu's install drive, and I can't figure out why the properties of the drive say "Contents: 241310 files, 3.7 GB" but also "Total capacity: 52.8 GB. Free space: 0 bytes".
Any suggestions on how I can get this to shake out?
I'm having trouble making a backup of my system through BackInTime. I use the sudo option because I need to back up my MySQL and Apache configurations in addition to the usual home directories. My backup runs to an external hard drive.
Now every time I run the backup, it takes 45-90 minutes to do its thing, but at the end a message appears in the bottom-left corner saying that the backup couldn't be done.
I have tried several things, like making more space on the hard drive and running without sudo, but to no avail.
I recently installed Bio-Linux 5.0 as a dual-boot system with XP for some bioinformatics applications, but I'm having some problems with the amount of disk space that can be allocated specifically to the Ubuntu install.
I've been using blastclust to analyse some very large data sets, and it keeps crashing because the filesystem runs out of disk space.
When I installed Bio-Linux 5.0 from the live CD, the maximum size I could allocate to the install was 30 GiB, and I haven't been able to find a way to change this.
I've tried using System -> Administration -> Partition Editor from the live CD, and can view and delete the partitions, but I can't find a way to specifically alter the disk space allocation for Ubuntu.
How do I increase the filesystem size to larger than the current 30 GiB?
I have set up a Squid server. My cache directories are configured with the following statements: cache_dir ufs /Cache1/squid 10000 16 256 and cache_dir ufs /Cache2/squid 10000 16 256. Now the problem is that /Cache1 and /Cache2 have each grown to about 8 GB, and in the near future they will reach the maximum limit of 10 GB. I just want to know whether or not I need to delete the contents of these directories myself.
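For what it's worth, Squid is designed to manage this itself: when a cache_dir approaches its configured size, old objects are evicted rather than letting the directory overflow, and the low/high watermarks control when that replacement kicks in. The relevant squid.conf lines (the watermark values shown are the defaults):
Code:
cache_dir ufs /Cache1/squid 10000 16 256
cache_dir ufs /Cache2/squid 10000 16 256
cache_swap_low  90     # start replacing objects at 90% of the configured size
cache_swap_high 95     # replace more aggressively above 95%
So there should be no need to delete anything by hand; if you ever do want to clear a cache_dir, stop Squid, remove its contents, and recreate the directory structure with squid -z.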
I was just testing specifying a file size limit for a user, and added the following to /etc/security/limits.conf: bob soft fsize 100. This basically should say not to allow bob to create any file greater than 100 KB in size.
But the interesting thing is, if bob already has any file greater than 100 KB, he isn't even allowed to log into the system, either from the console or over SSH, and nothing is written to the logs. How do I configure it so that bob can log in even though he has files greater than 100 KB, but still can't create files greater than 100 KB?
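One thing worth knowing when testing this: fsize maps to RLIMIT_FSIZE, which caps any file a process may write or extend, not just newly created files, and a write past the soft limit kills the process with SIGXFSZ, which is why a login session that touches an oversized file (shell history, session logs, and so on) can die on the spot. A sketch of the syntax with an explicit hard limit, plus how to verify it once logged in (the numbers are only examples):
Code:
# /etc/security/limits.conf   (fsize values are in KB)
bob    soft    fsize    100
bob    hard    fsize    102400
# verify as bob:
#   ulimit -S -f    -> 100
#   ulimit -H -f    -> 102400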
I'm looking for a free backup solution that works client-server in both environments, Linux (server) and Windows (client). In my case, I want to give each remote Windows client a disk space quota on my Linux server.
This is the magic file that killed 20 Linux nodes today, and of course I want to ask: what can I do to limit the size of the nscd.log file? I tried to find help in the man pages for nscd and nscd.conf, but there is nothing about log size (just the paranoia mode with auto-restarting, but that sounds ugly; I just need to limit the log file size).
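nscd itself offers no size limit for its log file in nscd.conf, so the usual workaround is to rotate it externally with logrotate. A minimal sketch; the path /var/log/nscd.log and the numbers are assumptions, adjust to your setup:
Code:
# /etc/logrotate.d/nscd
/var/log/nscd.log {
    size 50M
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
copytruncate lets the file be rotated without restarting nscd; alternatively, lowering debug-level in nscd.conf cuts down how much gets written in the first place.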
Total newbie running Win 7 and Lucid Lynx 64-bit on a shared partition. Ubuntu keeps reporting low disk space. I've read dozens of postings, looked at GParted and done some resizing, but it's still not right. I had to remove everything I could last night to free up space. Disk Utility shows I have an 18 GB root.disk; GParted shows the partition has 204 GB available. The space is there in the partition, so how do I get the root size to increase?
I have a Fedora 9 server. It is used purely as a dedicated server. Until recently I never came close to my allowed bandwidth of 1 TB, but I expect that may change in the near future because I will be adding many files for downloading. I have Apache 2.2.9, PHP 5.2.6, MySQL 5.0.51 and Webmin 1.441.
The most critical thing is monitoring total bandwidth and then running a job, probably via cron, to change a folder's name and stop downloads before a critical point is reached and my sites are shut down. I would also eventually like to limit member downloads so that all members of the sites get a chance to download and one person doesn't use all the bandwidth. I expect that would be possible by using PHP to log the bandwidth used by members; I know PHP, but I don't know how to get the bandwidth figures with it.
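One low-tech way to get the numbers without touching PHP is to let Apache log the bytes sent and have a cron job add them up. A rough sketch, assuming mod_logio is loaded; all paths and names are made up:
Code:
# httpd.conf: a log containing only bytes sent per request (%O needs mod_logio; %b works without it)
LogFormat "%O" bytesonly
CustomLog /var/log/httpd/bytes_log bytesonly
Code:
#!/bin/sh
# cron job: disable the download folder once the period's traffic passes a threshold
LIMIT=900000000000   # ~900 GB, leaving headroom under the 1 TB cap
USED=$(awk '{ sum += $1 } END { printf "%.0f", sum }' /var/log/httpd/bytes_log)
if [ "$USED" -gt "$LIMIT" ] && [ -d /var/www/html/downloads ]; then
    mv /var/www/html/downloads /var/www/html/downloads.disabled
fi
Rotate bytes_log at the start of each billing month so the sum matches the period; per-member accounting would still need PHP (or a logged username per request) on top of this.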
We use VxVM and VxFS on HP-UX and I've used them in the past on Solaris. I've found they are available as Storage Foundation v5.1 for 64-bit RHEL (in fact there's even a Basic version, free on 2-processor systems). Previously we'd run into a 2 TB limit for filesystems on the older versions we have on HP-UX. The data sheets at Symantec are pure marketing fluff. Does anyone know what the filesystem size limit is for 5.1 on Linux?
Why is it in Linux that there is a stack size set by default? And why is it so small? (My system is set to 8192 kbytes.) And why is there a default limit on the stack size when the max memory and virtual memory size are, by default, unlimited? (Aren't they both fed from the same place ultimately?)
Reason I ask: I want to use recursive functions in my programming a lot more. Problem is, if the language (or implementation) doesn't happen to support tail-call elimination, then I can be pretty well certain that the first huge problem thrown at my function is going to kill my program, because the stack size limit will be reached quickly. Obviously I can change the stack size limit on my own computers, but it doesn't feel so great knowing that most of the people who copy and execute my code will probably have overlooked this. Anyway, does anyone know: is this small default stack size limit just one of those historical artifacts, or is there some technical reason for it?
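For what it's worth, the limit is only a per-process soft limit that you (or the program itself, via setrlimit) can raise up to the hard limit; nothing is baked into the kernel at 8 MB. Checking and raising it for a deep-recursion run looks like this (the values and program name are examples):
Code:
ulimit -s              # current soft limit in KB, typically 8192
ulimit -Hs             # hard limit, often "unlimited"
ulimit -s 65536        # raise to 64 MB for this shell and its children
./my_recursive_program # hypothetical program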
A client brought in a 160 GB external HDD and wanted to get the files off it. There appeared to be no partitions on the disk, but I thought it may have been formatted to use the whole disk. I tried to mount it as the various filesystem types the client thought it may have been, to no avail.
I ran TestDisk on it, which told me that it previously had a Mac partition table and a 210 GB partition on it (which is larger than the disk). Could anyone enlighten me as to whether this is even possible, and if so, how I could retrieve the data?
How do you put a limit in place for file caching in Suse 11.4?
My PC becomes unusable on a regular basis with minimal CPU usage, because I can't open new applications. There are no error messages or anything; the new apps just don't open.
free -m shows the vast majority of my memory is used by cache:
Code:
             total       used       free     shared    buffers     cached
Mem:          3185       1048       2137          0         38        503   (this becomes max)
-/+ buffers/cache:         506       2679
Swap:         2055          0       2055
I've done ... echo 3 > /proc/sys/vm/drop_caches
Sometimes an app will open after this but the system becomes unstable, locking up regularly.
I'm not sure why the default maximum is 100% file cache, but I'd like to put a sane file cache limit in place, like 40% or something. I've put limits like this in place in the past using a percentage, and I've poked around but I don't see the setting.
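As far as I know, a stock kernel has no single "page cache may use at most N%" setting; the percentage-style knobs apply to dirty (not-yet-written) pages, and cache reclaim is tuned indirectly. A hedged sketch of the sysctls people usually reach for (the values are illustrative, not recommendations):
Code:
# /etc/sysctl.conf   (apply with: sudo sysctl -p)
vm.dirty_background_ratio = 5    # start background writeback at 5% of RAM dirty
vm.dirty_ratio = 20              # block writers once 20% of RAM is dirty
vm.vfs_cache_pressure = 150      # reclaim dentry/inode caches more aggressively (default 100)
vm.min_free_kbytes = 65536       # keep ~64 MB free so new allocations don't stall
Note that the -/+ buffers/cache line in the free output above still shows about 2.6 GB free, and cached memory is reclaimed automatically when applications need it, so the cache itself may not be what is blocking new apps from starting.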
I have a large file (deflated size: 602191947) that is not saved in my Ubuntu One account. On syncing, the file is uploaded and eventually reaches 602191947, and then nothing more happens to it, but syncing the following files in the queue continues successfully. I have tried manual upload with the same result. The file is still marked as 'uploading' even after several tries, log-ins/log-outs, and reboots. So I was just wondering whether there is a file size limit; I can't seem to find information regarding this.
Is there some way to limit the download size of updates for Ubuntu? At the moment, Update Manager shows that I have some 300 MB worth of downloads. I can't find any way to deselect many updates at once either.
I really don't understand what's happening. I make a 3.5 TB RAID array in Disk Utility, yet it ends up with one partition of 3 TB and 500 GB left over as free space! Why is that? I thought ext4 could handle huge partition sizes.
A possibly preposterous question. I am aware that you can designate a swap file or swap partition on your hard drive that Linux uses as "memory". Suggested sizes for the swap file that I've seen range up to about 1024 MB. Is there a limit to the swap file size that you can set? Basically I am running a Perl script that processes a massive file (DNA sequence data), etc., and requires around 48 GB of memory to run, maybe a bit less. So, would it be possible to set a swap file to a massive, ridiculous size (~60 GB or whatever) and successfully run such a script on a desktop? Yes, I am aware that it would massively slow down the process. The thing is, if the Perl script normally completes in about half an hour and I can get it working on a desktop, I don't mind if it takes days or weeks to complete. I really don't. That's because it takes days or weeks to get access to a computer with the required grunt to do it. So, is this a stupid idea? Is it even possible? If so, given a Perl script that normally completes in half an hour on a 48 GB system, would it take days? Weeks? Decades?
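On the mechanics: yes, swap files far larger than 1 GB are allowed on current kernels, and creating one takes only a few commands. A minimal sketch for a 60 GB swap file at /swapfile (path and size are examples):
Code:
sudo dd if=/dev/zero of=/swapfile bs=1M count=61440   # must not be sparse, so dd rather than truncate
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # make it permanent
How much slower it runs depends almost entirely on the script's access pattern: mostly-sequential passes over the data may only be several times slower, while random access to a 48 GB working set on a spinning disk can indeed stretch into days.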
I have a 1TB external hard drive. I would like to create in it 10 folders:
Code:
folder1  folder2  folder3  folder4  folder5  folder6  folder7  folder8  folder9  folder10
I would then like to permanently mount each folder to its machine (I have 10 machines connected through a switch, so each machine will have a folder that is mounted to ONE of the 10 folders in the external hard drive).
My questions: (1) Is this a good configuration? Are there better ideas to give individual machines more space without replacing their hard drives? (2) How do I limit each one of the folders ('folder1', 'folder2', ..., 'folder10') to a size of 100 GB? I don't want one folder (say, 'folder1') to grow and 'steal' the space designated for the other folders.
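One way to get hard 100 GB boundaries without carving the disk into ten fixed partitions is LVM: make the external drive a physical volume, create ten logical volumes of equal size, and export each one to its machine (NFS shown here; the device name, volume names and IPs are all assumptions):
Code:
# on the machine the 1 TB drive is attached to
sudo pvcreate /dev/sdb
sudo vgcreate extvg /dev/sdb
for i in $(seq 1 10); do
    sudo lvcreate -l 10%VG -n folder$i extvg      # 10% of the VG each (~93 GiB on a nominal 1 TB drive)
    sudo mkfs.ext4 /dev/extvg/folder$i
    sudo mkdir -p /export/folder$i
    sudo mount /dev/extvg/folder$i /export/folder$i
done
# /etc/exports: one line per client, e.g.
#   /export/folder1  192.168.0.101(rw,sync,no_subtree_check)
sudo exportfs -ra
Because each machine sees its own filesystem, no folder can grow into another's space, and LVM makes it easy to resize an individual volume later if one machine genuinely needs more.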