I have a large number of folders, each containing quite a few files of varying sizes (from a few bytes to 400 KB or so), mostly smaller ones. I need to get the actual size (not the disk usage) of these folders. Is there any way to do this with a command like 'du'?
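For what it's worth, GNU du can report apparent size directly; a minimal sketch (the path is a placeholder):
Code:
# report the apparent (file-content) size rather than allocated disk blocks
du -sh --apparent-size /path/to/folder

# same, but in exact bytes (-b implies --apparent-size)
du -sb /path/to/folder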
I am trying to figure out the "actual" disk size used by my system from the output of "df -h". I am not taking the 2 GB of shared memory into consideration here, as it is a sort of virtual shared memory and is not physically allocated. Is that correct?
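If it helps, GNU df can leave the virtual filesystems out of the report entirely; a minimal sketch:
Code:
# exclude tmpfs/devtmpfs (in-memory, not physically allocated) from the totals
df -h -x tmpfs -x devtmpfs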
I've been working on getting another OS installed on my computer for one of my classes (OS-specific assembly instructions). To get this OS running, I had to start using a GPT rather than an MBR table. I backed up my Ubuntu partition (ext4) using the old-fashioned dd command. I've since been able to get everything working again after a dd restore.
The problem is that my original Ubuntu partition was only about 50 GB and the dd image only takes up 40 GB. After I restored the image to the new drive (146 GB), gparted is reporting 119 GB used and only 26 GB free. What can I do to reduce the size of my install to 40 GB again?
When looking at the disk in baobab, it says that the filesystem is only 47.2 GB and that only 20.9 GB has been used. This is likely what the old partition's breakdown was. So my new question is: how can I make the filesystem capacity (47.2 GB) equal that of the partition it is on (146 GB)?
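A minimal sketch of the usual fix, assuming the restored partition is /dev/sdb2 and is unmounted (the device name is an assumption; substitute your own):
Code:
sudo e2fsck -f /dev/sdb2   # resize2fs insists on a clean filesystem first
sudo resize2fs /dev/sdb2   # with no size argument it grows to fill the partition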
I have some issues running TwinView with my laptop and an additional monitor. I currently have the nvidia settings such that the monitor is the primary screen. However, whenever I click full screen in videos or another application, it maximizes on my laptop screen. Also, when I maximize videos, the screen fills up, but the actual video size doesn't tend to increase (it doesn't stretch to fit). Is this an Adobe problem? Ubuntu 10.04, NVIDIA driver v195, Dell Latitude D630 laptop.
I multi-boot several Linux distributions with an assortment of additional data partitions. I get frustrated whenever fsck is forced during boot. (It ONLY happens when I'm in a hurry, don't you know...) So I wrote a script to automate forced fscking when I do have the time. (And/or while I'm doing something else in another workspace.)
Because I multi-boot, I've learned that udev doesn't always assign the same device name to each drive for all distributions. I've had the same partition identified as hda5, sda5, & sdb5 by different distributions (without doing anything to affect the boot order). So my solution is to keep a list of partitions in a specific file on each distro, with valid device names according to that distribution's udev process. Actually, I'd use LABEL= instead, but the labels don't show up in /etc/mtab, and I like to make sure a partition isn't mounted before I try to fsck it.
I can make this work in a for loop using cat. But I've seen so many things about NOT using cat that I wanted to rebuild my script. I can make this work with a redirect instead of cat via a while loop, but I "LIKE" old-style for loops. I just can't seem to find a way to make a redirect work with one. I thought this might make a good first "LinuxQuestions.org" question. I'm also open to any other suggestions on better/alternative methods... Is it possible to redirect from a file into an actual for loop?
My script is as follows:
Code:
#!/bin/bash
# FsckEm: a script to force file system checking on unmounted ext2/ext3/ext4
# partitions in a preselected list. FsckEm accepts no options. Partition
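For reference, a minimal sketch of both styles, assuming the partition list lives in a file such as /etc/fsckem.list (a hypothetical path):
Code:
# while loop with a redirect (the usual recommendation):
while read -r part; do
    fsck -f "$part"
done < /etc/fsckem.list

# old-style for loop: 'for' itself cannot read from a redirect,
# but bash's $(< file) gets the same effect without cat:
for part in $(< /etc/fsckem.list); do
    fsck -f "$part"
done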
I'm confused about the "hard link" feature. I've been learning from my UNIX Academy DVD training that there can be many hard links to a file, and each of them is an effective filename for the associated data. So let's assume that we have some very sensitive data in a file we want deleted, and the file has 20 links. I "delete" the file, but in fact I have deleted only one "name" for it. My understanding from the training is that the data is still there until we delete the last associated hard link. But how can I find the names of all of them? If we have the names, they can be removed one by one. Or maybe there's a command that can trace all the "names" and remove them at once?
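If it helps, find can locate every name pointing at the same inode; a minimal sketch (paths are placeholders, and since hard links cannot cross filesystems, -xdev keeps the search on the one filesystem):
Code:
# list every directory entry (hard link) that shares the file's inode
find /mountpoint -xdev -samefile /path/to/sensitive_file

# equivalent, using the inode number reported by 'ls -i', removing as it goes
find /mountpoint -xdev -inum 1234567 -exec rm -i {} \;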
I use Amarok 2 and I have a lot of files that are titled "Track #.mp3". In Amarok I have changed them so they display as the real songs, but the actual files are still the same. Is there a way to change the actual file names using Amarok to match the tags I have inside of Amarok? The reasons why I'd want to do this:
1. If my home folder becomes corrupt, I don't have to redo hundreds of songs (I have a backup, but nonetheless).
2. If I ever decide to use another program, or if I'm in W7 using Media Player Classic, it'd be nice to have it recognize the correct files without having to double up on the tag editing.
If this isn't possible, I'm going to wishlist it, because I think it's useful functionality; having a bunch of Track # files is a pain but impossible to get around when you have a lot of mix CDs.
I scanned a document with XSane and saved it as a PDF. The PDF shows up great, but there is extra white space at the bottom of the document. How do I get rid of the white space and make the document the actual legal size?
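One hedged approach is to re-page the scan onto true legal-size media with Ghostscript (filenames are placeholders):
Code:
# force legal media (8.5 x 14 in) and scale the page content to fit it
gs -sDEVICE=pdfwrite -sPAPERSIZE=legal -dFIXEDMEDIA -dPDFFitPage \
   -o legal.pdf scanned.pdf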
I am browsing our repository and I want to get this folder, but all of the files there have ",v" at the end of their filenames, and if you open each file, it has some version-control header data written before the actual content of the file. I want to extract the actual content, turning my_file.c,v --> my_file.c. Is there a command to do this?
I am getting lock errors and permission errors, so I cannot check out manually using CVS.
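Those ,v files are RCS-format archives (CVS stores its history in them), so if you can read the repository files directly, the plain RCS co tool can strip the version-control headers without a CVS checkout; a minimal sketch:
Code:
# check out the head revision to stdout, minus the RCS metadata
co -p my_file.c,v > my_file.c

# batch form for a whole directory of ,v files
for f in *,v; do
    co -p "$f" > "${f%,v}"
done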
#
# insert the line '-A INPUT -i eth0 -j ACCEPT'
# in iptables
#
/-A INPUT -i eth0 -j ACCEPT/a -A INPUT -i edge0 -j ACCEPT
But when I run sed -f script iptables, it just echoes it to the screen with my line added, and doesn't write it into the actual file. Does anyone know what I am doing wrong?
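If the goal is to change the file itself, a minimal sketch (GNU sed supports in-place editing; otherwise redirect to a temp file):
Code:
# GNU sed: edit in place, keeping a backup copy with a .bak suffix
sed -i.bak -f script iptables

# portable alternative: write to a new file, then swap it in
sed -f script iptables > iptables.new && mv iptables.new iptables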
I was just testing specifying a limit on file size for a user, and have added the following to /etc/security/limits.conf:

bob soft fsize 100

This basically should have said not to allow bob to create any file greater than 100 KB in size.
But the interesting thing is, if bob already has any file greater than 100 KB in size, it doesn't even allow him to log into the system, either from the console or via SSH. Also, nothing is logged. How do I configure it so that bob can log into the system even though he has a file greater than 100 KB (but is still not allowed to create files greater than 100 KB)?
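As an aside, the limit itself can be reproduced in a throwaway shell without touching limits.conf; a minimal sketch (bash's ulimit -f counts 1024-byte blocks):
Code:
ulimit -f 100                                  # ~100 KB cap for this shell only
dd if=/dev/zero of=testfile bs=1024 count=200  # killed by SIGXFSZ past 100 KB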
Does lvresize with the --resizefs option resize the Logical Volume and then resize the file system? I mean, do we not need to use resize2fs? I looked at the man pages, but they don't explain this option.
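For reference, a minimal sketch of the combined call (the volume names and size are assumptions):
Code:
# grow the LV by 5G and resize the filesystem in the same step (-r = --resizefs)
lvresize -r -L +5G /dev/vg0/lvhome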
There seems to be a limit of 2 GB on files. I'm trying to add my music files and have about 7 GB. Is there a way to make a file that will allow the extra info?
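If the 2 GB cap can't be lifted, one common workaround is to split the data into pieces under the limit and rejoin them later; a hedged sketch with GNU split (sizes and names are placeholders):
Code:
# cut the archive into pieces that stay under the limit
split -b 1900M music.tar music.tar.part_

# later, reassemble it
cat music.tar.part_* > music.tar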
Firstly, I did perform a search on this problem in these forums, but didn't quite get what I was looking for, so I hope I don't get yelled at for making a duplicate post. I used rsync to back up my webroot to another *nix machine. du -hs gave me 1.3 G on the source machine and 1.1 G on the backup machine. I tried to compare the individual files and noticed a trend: the files on the backup machine were always smaller than the files on the source machine. The source uses a SATA drive, the destination uses IDE. So this time I rsynced locally to another folder on the source machine. Same size anomaly. Then I did a simple cp file ~/file: same size anomaly. So it's not an rsync issue.
I took a file and ran md5sum on both the source file and the destination file. To my surprise, even though the file sizes were different, they had the same md5sum. Now, let it be known that the source machine is a production server and the dir I rsynced was in use, serving pages to the web. I googled about this and came up with things like open file descriptors and holes. I don't understand this stuff and was wondering if that is really the case here. What are those, if so? And is my backup copy 100% identical? There are thousands of files and I ran md5sum on only a couple. Can I take comfort that when the time comes, I can restore from my backup without any problems?
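Sparse files (holes) could explain it: a copy may allocate a different number of disk blocks while the byte-for-byte content stays identical, which is why md5sum matches. A quick way to compare allocation against content length (the filename is a placeholder):
Code:
du -h somefile                   # blocks actually allocated on disk
du -h --apparent-size somefile   # byte length the application (and md5sum) sees
ls -ls somefile                  # first column is the allocated block count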
I have this directory with multiple images, 'pics', and its size is 20 MB. I want to make a .zip or .rar package of this directory but with an increased size, so the .zip/.rar file will be 100 MB, and then when you extract it the file size is the original 20 MB.
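One hedged trick: prepend incompressible junk to a normal zip and let zip -A repair the internal offsets, so the archive grows but the extracted files don't (sizes and names are placeholders):
Code:
zip -r pics.zip pics                        # normal ~20 MB archive
dd if=/dev/urandom of=junk bs=1M count=80   # 80 MB that won't compress away
cat junk pics.zip > pics_padded.zip         # pad in front of the archive
zip -A pics_padded.zip                      # adjust offsets for the prepended data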
I have copied some videos from my camcorder and the size seems a bit excessive: 140 MB for 3 minutes! Is there a program I can use to save the .mp2 file as .mpg and reduce the size?
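One hedged option is ffmpeg; the bitrates below are illustrative guesses, since they are the knob that shrinks the file:
Code:
# re-encode at a lower video/audio bitrate (values are placeholders)
ffmpeg -i input.mp2 -b:v 1200k -b:a 128k output.mpg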
Multiple dirs full of MP3s, all strictly encoded with exactly the same parameters (CBR 128 kbps, Joint-Stereo, etc.). Is it possible to determine the total playing time (to within ~98% accuracy) by some formula based on the total file size? I say ~98% accurate since ID3 tags do consume a small amount of space.
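In principle, yes: at a constant 128 kbps, every second of audio occupies 128,000 / 8 = 16,000 bytes, so seconds = bytes * 8 / 128,000. A rough sketch using GNU du (the glob is a placeholder for your directories):
Code:
# total apparent size in bytes, then minutes at 128 kbps CBR
bytes=$(du -cb */*.mp3 | tail -n1 | cut -f1)
echo "$(( bytes * 8 / 128000 / 60 )) minutes (approx., ignoring ID3 overhead)"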
I'm trying to write a script that takes two arguments: the first argument is a number, and the second argument is a filename. The shell script should indicate whether the file's size is BIGGER or SMALLER than the number provided. This is what I have so far; am I on the right track? I'm hoping it's just a problem with my if command.
if [ $1 -h $2 ]
then
    echo "$1 is bigger than $2"
else
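For comparison, a minimal corrected sketch: test has no -h numeric operator (-h checks whether a file is a symlink), so the size has to be fetched first and then compared with -gt (GNU stat assumed; the script name is a placeholder):
Code:
#!/bin/bash
# usage: ./sizecheck NUMBER FILENAME   (NUMBER in bytes)
size=$(stat -c %s "$2")        # file size in bytes
if [ "$size" -gt "$1" ]; then
    echo "$2 is BIGGER than $1 bytes"
else
    echo "$2 is SMALLER than $1 bytes"
fi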
I am using classpath-0.98, jamvm-1.5.4, and an ARM9 Cortex processor. My question: after installing Classpath and JamVM on the ARM9, I end up with a FAT file of around 30 MB. Can someone give me some tips on how to reduce the size of this FAT file as much as possible?
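Hedged, generic trims that sometimes help with embedded images (the tool names and package choices are assumptions about your particular build):
Code:
# strip debug symbols from the native binaries with the cross toolchain
arm-linux-strip jamvm libjvm.so

# drop Classpath packages the application never loads from glibj.zip
zip -d glibj.zip 'gnu/CORBA/*' 'javax/swing/*'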
I am trying to write a shell script to find the size of a particular log file; if the log size grows, the script should mail the changes to the administrator or any user. So the script should monitor the log file continuously at a time interval. How can I do that?
I tried some code to find the file size, but it throws an error saying "command not found".
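A minimal sketch of such a monitor (the log path, recipient, and interval are placeholder values; assumes a working mail command such as mailx, and GNU stat):
Code:
#!/bin/bash
LOGFILE=/var/log/myapp.log     # placeholder path
ADMIN=admin@example.com        # placeholder recipient
INTERVAL=60                    # seconds between checks

last=$(stat -c %s "$LOGFILE")
while true; do
    sleep "$INTERVAL"
    now=$(stat -c %s "$LOGFILE")
    if [ "$now" -gt "$last" ]; then
        echo "$LOGFILE grew from $last to $now bytes" |
            mail -s "log file growth" "$ADMIN"
    fi
    last=$now
done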
What limits a file to some maximum size, depending on the operating system? I do not exactly understand this. If you have the storage space, what else can be the limitation? You should be able to store as much data as you want, the way you want (even in a single file), unless you run out of storage space.
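For illustration, one concrete constraint is the width of the size field in the filesystem's on-disk metadata; FAT32, for instance, records file length in a 32-bit field, regardless of how much free space the disk has:
Code:
# a 32-bit size field caps any single file at 2^32 - 1 bytes
echo $(( 2**32 - 1 ))   # 4294967295 bytes, i.e. just under 4 GiB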
I am looking for a way to configure rTorrent to stop downloading each torrent after it has downloaded x amount. For example, specify 15 MB, and as soon as the torrent reaches that size, have it finish downloading the pieces it has requested and then start seeding the partially completed download. The reason for this is that I'm trying to come up with a way to build ratio on a site where torrents are added very fast and at a very high frequency.
I download and add the torrents to rTorrent automatically via RSS, but I only want to download a small amount and seed that small piece while there are still a lot of people in the swarm (the swarm drops off very quickly), coming out with a positive ratio from that small piece and beating the ratio clock, so to speak. I thought it would be an interesting, albeit somewhat impractical, exercise in shell scripting, if rTorrent can be hooked into like that; documentation is sparse in some areas.
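A heavily hedged sketch of the polling idea over rTorrent's XML-RPC interface (assumes XML-RPC support is compiled in and reachable at the URL below, that the xmlrpc-c command-line client is installed, and that the method names, which follow the older rTorrent API, match your build):
Code:
#!/bin/bash
RPC=http://localhost/RPC2      # assumed XML-RPC endpoint
LIMIT=$((15 * 1024 * 1024))    # stop after ~15 MB downloaded

# iterate over the 40-char info hashes of loaded torrents
for hash in $(xmlrpc "$RPC" download_list "" main | grep -oE '[0-9A-F]{40}'); do
    done_bytes=$(xmlrpc "$RPC" d.get_completed_bytes "$hash" | grep -oE '[0-9]+')
    if [ "$done_bytes" -gt "$LIMIT" ]; then
        xmlrpc "$RPC" d.stop "$hash"   # halt the download, keeping what it has
    fi
done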