In a book I read that the cmchk command is used to get the disk block size, but in Ubuntu it is not available. Can somebody tell me what its equivalent is in Ubuntu?
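cmchk appears to be an SCO-era command with no direct counterpart in Ubuntu, but roughly equivalent information can be pulled out with standard tools; a hedged sketch, where /dev/sda1 is just a placeholder for the device in question:
Code:
sudo blockdev --getbsz /dev/sda1                     # block size the kernel uses for this device
sudo dumpe2fs -h /dev/sda1 | grep -i 'block size'    # filesystem block size (ext2/3/4 only)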
But then again, it doesn't calculate the actual file size, but rather a size aligned to 1024 bytes, just as Windows does with a 4096-byte cluster size. Is there a way to calculate the actual file size, e.g. 1021 bytes?
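If the goal is just the exact byte count rather than the allocated size, a small sketch with GNU coreutils (the file name is a placeholder):
Code:
stat -c %s somefile        # exact size in bytes, e.g. 1021
du -b somefile             # --bytes: apparent size rather than allocated blocks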
I have reformatted my hard drive with a 64K allocation size for better performance on my WDTV HD media player (it deals with large files). When I mount this drive on Linux, mount tells me "blksize=4096". If I keep writing files using this default setting (blksize=4096) to my NTFS-formatted hard drive, will my WDTV still benefit from the performance improvement of the 64K allocation size? Should I try to mount my hard drive with a larger blksize? I did some research on Google but couldn't find an option to increase the blksize when mounting a pre-formatted NTFS drive.
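For what it's worth, one way to see what block size the kernel actually reports for the mounted volume (the mount point below is a placeholder):
Code:
stat -f /media/wdtv        # prints the block size and fundamental block size of the mounted filesystem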
I'm trying to understand dd, and I'd like to know why we frequently have to specify a read/write block size, as in "dd bs=1024" or "dd ibs=512". If it executes the operation byte by byte, isn't that irrelevant? What is this block size, then?
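dd does not copy byte by byte; bs (or ibs/obs) is how much it reads and writes per system call. A small illustration: both commands below produce the same 1 MiB file, but the first issues 2048 separate 512-byte reads and writes while the second issues a single 1 MiB read and write.
Code:
dd if=/dev/zero of=/tmp/small-blocks.bin bs=512 count=2048
dd if=/dev/zero of=/tmp/one-block.bin bs=1M count=1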
How do you go about getting the raw size of a block device under Linux from within a C program? And I mean the raw size of the block device itself, not a file system that may or may not be installed on it. And I'd like to be able to get the raw size of any block device, from hard drives (e.g., /dev/sda) to LVM partitions (/dev/mapper/vg0-home) to loop devices to anything else that is a Linux block device.
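A minimal C sketch of one common approach, the BLKGETSIZE64 ioctl, which works the same way for whole disks, partitions, device-mapper volumes, and loop devices (error handling kept to a minimum):
Code:
/* print the raw size, in bytes, of any block device passed on the command line */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>          /* BLKGETSIZE64 */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    uint64_t bytes = 0;
    if (ioctl(fd, BLKGETSIZE64, &bytes) != 0) {   /* device size in bytes */
        perror("ioctl(BLKGETSIZE64)");
        close(fd);
        return 1;
    }

    printf("%s: %llu bytes\n", argv[1], (unsigned long long)bytes);
    close(fd);
    return 0;
}
An alternative that avoids the ioctl is lseek(fd, 0, SEEK_END), which also returns the device size in bytes for block devices.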
I use dd in its simplest form to clone a hard drive: dd if=INPUT of=OUTPUT. However, I read in the manpage that dd has a blocksize parameter. Is there an optimal value for the blocksize parameter that will speed up the cloning procedure?
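A rough way to find a sweet spot is simply to time reads at a few candidate block sizes before cloning; a hedged benchmark sketch, assuming the source disk is /dev/sdX (nothing is written to it):
Code:
for bs in 4K 64K 1M 4M; do
    echo "bs=$bs"
    dd if=/dev/sdX of=/dev/null bs=$bs count=1024 iflag=direct 2>&1 | tail -n 1
done
iflag=direct (GNU dd) bypasses the page cache so repeated runs are not just measuring RAM; compare the MB/s figures dd prints for each block size.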
I am doing some benchmarking of ext4 performance on CompactFlash media. I created an ext4 filesystem with a block size of 65536; however, I cannot mount it on ubuntu-10.10-netbook-i386 (it already mounts ext4 filesystems with a 4096-byte block size). From my reading on ext4, it should allow such a large block size. I want to hear your comments.
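One check worth doing first, since (as far as I know) ext4 on these kernels can only be mounted when the filesystem block size is no larger than the CPU page size; the device name below is a placeholder for the CF card partition:
Code:
sudo tune2fs -l /dev/sdb1 | grep 'Block size'   # block size recorded in the superblock
getconf PAGE_SIZE                               # 4096 on i386, which would explain the mount failure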
I have reformatted my hard drive with a 64K allocation size (formatted on Windows with the 64K setting) for better performance on my WDTV HD media player (it deals with large files). When I mount this drive on Linux, the properties tell me "blksize=4096". If I keep writing files using this default setting (blksize=4096) to my NTFS-formatted hard drive, will my WDTV still benefit from the performance improvement of the 64K allocation size? I am confused: does it have anything to do with "blksize=4096"? Should I try to mount my hard drive with a larger blksize? I did some research on Google but couldn't find an option to increase the blksize when mounting a pre-formatted NTFS drive.
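As far as I can tell, blksize=4096 is just the I/O block size ntfs-3g reports to the kernel; the on-disk cluster size chosen at format time is a separate thing. If ntfsprogs/ntfs-3g is installed, something like the following should confirm the 64K clusters are still there (the device name is a placeholder, and I am going from memory on the -m flag):
Code:
sudo ntfsinfo -m /dev/sdb1 | grep -i cluster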
I have one hard disk (call her HDA) that contains nothing but a single ext4 partition holding a backup of all my important data. Last night I did a clean install of Ubuntu 10.10 on my primary hard disk (call her HDB) and from there proceeded to upgrade directly to Ubuntu 11.04. In 10.10 I was able to read HDA just fine. However, after the upgrade I can no longer mount this drive. When mounting from the file browser:
Code:
Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sda,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

The end of dmesg said the following:
Code:
dmesg | tail
[ 82.130904] EXT4-fs (sda): bad geometry: block count 122096646 exceeds size of device (122096381 blocks)
So my hard disk's filesystem has a block count greater than the size of the device. I've done my background searching on this and tried a command-line utility I'd never heard of before:
Code:
# sudo e2fsck /dev/sda
e2fsck 1.41.14 (22-Dec-2010)
The filesystem size (according to the superblock) is 122096646 blocks
The physical size of the device is 122096381 blocks
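A hedged way to cross-check the two figures e2fsck prints, assuming the filesystem really does sit on the whole disk /dev/sda as shown above:
Code:
sudo blockdev --getsize64 /dev/sda                        # raw device size in bytes
echo $(( $(sudo blockdev --getsize64 /dev/sda) / 4096 ))  # divided by the 4096-byte ext4 block size; should land near 122096381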
This is as far as I've gotten. This drive holds over a decade's worth of work for me and is extremely valuable. I really didn't think that the Ubuntu upgrade process would touch this drive, seeing as the Ubuntu install was contained on an entirely different drive. What do I need to do to restore my drive to working status?
I want to generate a temporary random list of the files in a directory and then determine the total size of an arbitrary block of files from this list (say 1-25 or 26-50) and add their names to a file along with some other info for each name. I can generate a random list with file sizes like this: ls -l | sort -R | cut -d " " -f 6, but I'm not sure how to add up the sizes of just a certain block of these files and at the same time save the file names.
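A rough sketch of one way to do it, assuming GNU coreutils and file names without embedded newlines; the output file name and the 1-25 range are just placeholders:
Code:
shuf -e * > /tmp/randomlist                     # temporary random ordering of the files in this directory
sed -n '1,25p' /tmp/randomlist | while read -r f; do
    stat -c '%s %n' "$f"                        # size in bytes, then the file name
done | tee block-1-25.txt | awk '{sum += $1} END {print "block total:", sum, "bytes"}'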
I am new to Ubuntu and I really don't have any background in setting up a server. To be frank, I am still a student learning Ubuntu Server and how to build and configure it.
My problem is that whenever I type the command /etc/vsftpd.conf, an error message says: -bash: /etc/vsftpd.conf: Permission denied
I am still discovering the commands for the vsftpd server. By the way, I am running the server in VMware.
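The error itself is expected: /etc/vsftpd.conf is a configuration file, not a program, so the shell refuses to execute it. A hedged sketch of the usual workflow instead:
Code:
less /etc/vsftpd.conf              # just view the config
sudo nano /etc/vsftpd.conf         # edit it (needs root)
sudo service vsftpd restart        # pick up the changes afterwards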
Can someone tell me the iptables command to add a rule to the RH-Firewall-1 chain that blocks the FTP port? The FTP server is configured on a public IP address. I searched Google but didn't find the exact command for an iptables rule in the RH-Firewall-1 chain.
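A hedged sketch, using the chain name from the post (on a stock Red Hat firewall the chain is usually called RH-Firewall-1-INPUT, so adjust if needed):
Code:
iptables -I RH-Firewall-1 -p tcp --dport 21 -j REJECT   # FTP control channel
iptables -I RH-Firewall-1 -p tcp --dport 20 -j REJECT   # FTP data channel (active mode)
service iptables save                                   # persist the rules on RHEL-style systems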
I'm trying to find a command or program that shows which files and folders are taking up the most space on the hard drive, much like TreeSize on Windows. Is there an equivalent on Linux?
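Two common options, sketched with GNU tools; ncdu is a separate package that may need installing first:
Code:
sudo du -h --max-depth=1 / 2>/dev/null | sort -hr | head -n 15   # biggest top-level directories first
ncdu /                                                           # interactive curses browser, closest to TreeSize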
Code:
#!/bin/bash
cmd1=$(cat /var/log/messages | grep -e 'blocked for more than 120 seconds' | cut -c 55-62)
if $cmd1 != 0; then echo 'okay'; fi
However, I'm messing up somewhere: bash attempts to evaluate the elements in cmd1. When I try to run this script it complains, saying:
Quote:
test1.sh: line 5: blocked: command not found
I am open to alternatives. My intent is to replace cat /var/log/messages with dmesg, so I can attempt to determine if a problematic application I use encounters a blocked state (unresponsive for more than 120 seconds).
Should I be using a different test condition? I tried something like:
Code:
# this declares cmd1 as an array
cmd1=($(cat /var/log/messages | grep -e 'blocked for more than 120 seconds' | cut -c 55-62))
# attempt to determine if number of elements in array is greater than zero
if ${#cmd1[@]} > 0; then echo okay; fi
But I get the same error... what am I doing wrong?
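For comparison, a hedged rework of the test (certainly not the only way to write it): let grep -c do the counting, quote the expansion, and compare numbers with -gt inside [ ], since a bare > in that position is treated as a redirection rather than a comparison:
Code:
#!/bin/bash
count=$(grep -c 'blocked for more than 120 seconds' /var/log/messages)
if [ "$count" -gt 0 ]; then
    echo 'okay'
fi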
I ran ls -lh on the same tarball on a RHEL 3 box and a RHEL 5.3 box. The sixth column (the size) of the ls -lh output showed 6.3G on the RHEL 5.3 box but 16E on the RHEL 3 box. Both machines use the ext3 filesystem.
Below is the output for RHEL 3 and RHEL 5.3, respectively:
2.0T -rwxr-xr-x 1 root root  16E Oct 20 10:34 bac.tar.bz2
6.3G -rwxrwSrwx 1 root root 6.3G Oct 20 10:34 bac.tar.bz2
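One way to see which of the two listings is bogus is to ask each box for the exact byte count straight from the inode; a small sketch:
Code:
stat -c '%s %n' bac.tar.bz2          # exact size in bytes
du -h --apparent-size bac.tar.bz2    # logical length, as opposed to allocated blocks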
I'm using fc14 and the SG driver to test some SCSI (SAS) targets. In doing so, I'm bumping up against what appears to be a 512KB maximum transfer size per command. Transfers up to 4MB sometimes work, but often they result in ENOMEM or EINVAL returned from the write() function in the SG driver. I could not find any good documentation on how the SCSI system in Linux works so I've been studying the source for drivers in drivers/scsi.
I see that there is a scsi_device struct that contains a request_queue struct that contains a queue_limits struct that contains an element called max_sectors. The SG driver seems to use this to limit the size of the reserve buffer it is willing to create. I see that there are several constants used to initialize max_sectors to 1024 which would result in the 512KB limit I see (with targets having 512 byte sectors). At this point I have several questions:
1) When the open() function for the sg driver gets called, who initializes the scsi_device struct with the default values?
2) Can I merely change the limits struct to arbitrary values after initialization and cause the SG ioctls to set the reserve buffer to allow greater values?......
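Related to question 2, a hedged experiment rather than a definitive answer: the block layer exposes the queue limits in sysfs, and the soft cap can be raised up to the hardware ceiling without touching driver code (the sdb name below is a placeholder for the SAS target):
Code:
cat /sys/block/sdb/queue/max_hw_sectors_kb                  # ceiling reported by the HBA driver (read-only)
cat /sys/block/sdb/queue/max_sectors_kb                     # current soft limit, often 512
echo 1024 | sudo tee /sys/block/sdb/queue/max_sectors_kb    # raise the soft limit up to the hardware ceiling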
F13 with the default Ghostscript 8.71. Trying to crop an EPS file. The window size adheres to the command, but the location (offset) never does. Tried several different layouts and none work.
Quote:
ps2pdf -dEmbedAllFonts=true -dProcessColorModel=/DeviceCMYK -dPDFSETTINGS=/prepress -r300 -g1086x1201 -dPDFOPT someeps.eps test.pdf
yields the same as
Quote:
ps2pdf -dEmbedAllFonts=true -dProcessColorModel=/DeviceCMYK -dPDFSETTINGS=/prepress -r300 -g1086x1201+100+100 -dPDFOPT someeps.eps test.pdf
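A hedged alternative to try, calling gs directly (ps2pdf is only a wrapper around it): -gWIDTHxHEIGHT fixes just the raster size, and the +100+100 offset part seems to be ignored, so shifting the content is normally done with the PageOffset page-device parameter instead (values are in 1/72-inch points; flip the signs to move the crop window the other way):
Code:
gs -o test.pdf -sDEVICE=pdfwrite -dPDFSETTINGS=/prepress -r300 \
   -dEmbedAllFonts=true -dProcessColorModel=/DeviceCMYK \
   -g1086x1201 -dFIXEDMEDIA \
   -c '<</PageOffset [-100 -100]>> setpagedevice' \
   -f someeps.eps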