General :: 512KB Maximum Transfer Size Per Command?
Apr 7, 2011
I'm using Fedora 14 (fc14) and the SG driver to test some SCSI (SAS) targets. In doing so, I'm bumping up against what appears to be a 512KB maximum transfer size per command. Transfers up to 4MB sometimes work, but they often result in ENOMEM or EINVAL returned from the SG driver's write() function. I could not find any good documentation on how the SCSI subsystem in Linux works, so I've been studying the driver source in drivers/scsi.
I see that there is a scsi_device struct containing a request_queue struct, which in turn contains a queue_limits struct with an element called max_sectors. The SG driver seems to use this to limit the size of the reserve buffer it is willing to create. Several constants initialize max_sectors to 1024, which would produce exactly the 512KB limit I see (with targets using 512-byte sectors). At this point I have several questions:
1) When the open() function for the sg driver gets called, who initializes the scsi_device struct with the default values?
2) Can I merely change the limits struct to arbitrary values after initialization and cause the SG ioctls to set the reserve buffer to allow greater values?......
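For what it's worth, on 2.6-era kernels this limit is also exposed through sysfs, so it can be inspected, and often raised up to the hardware ceiling, without touching driver code. A minimal sketch, assuming the target appears as /dev/sdb (hypothetical device name):

Code:
# Current soft limit and the hardware ceiling, both in KB
cat /sys/block/sdb/queue/max_sectors_kb
cat /sys/block/sdb/queue/max_hw_sectors_kb

# Raise the soft limit toward the hardware maximum (needs root);
# values above max_hw_sectors_kb are rejected with EINVAL
echo 2048 > /sys/block/sdb/queue/max_sectors_kb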
When I try to delete a file (move to trash), it says: "The trash has reached its maximum size! Clean the trash manually." But when I click on the trash icon on the desktop, it is empty. Where is the trash? Where can I delete these files?
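A sketch of where to look, assuming a desktop that follows the freedesktop.org trash specification (GNOME, KDE, Xfce):

Code:
# Per-user trash lives here under the freedesktop spec
du -sh ~/.local/share/Trash/files

# Files trashed from other mounted volumes land in a per-volume
# directory instead, e.g. /media/<volume>/.Trash-$(id -u)

# Empty the trash manually (removes the files and their metadata)
rm -rf ~/.local/share/Trash/files/* ~/.local/share/Trash/info/*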
First of all, I'm booting from a large (900MB) MEMDISK using PXE. Due to our environment, I cannot decrease the size, nor move files to an NFS/iSCSI/... environment. Everything needs to be in that MEMDISK.
Now, when I try to run the OS, I run out of vmalloc space. How do I increase it to a value that allows such a large image to be mapped? I tried the parameter "vmalloc=1280M", but with that parameter I don't get past the "Booting the kernel" screen.
Memory should not be an issue, since the machines have at least 2GB of RAM (900MB MEMDISK + 256MB for other kernel needs + 768MB for userspace). The machines have a Pentium 4 Extreme Edition processor with Hyper-Threading and SSE2, but no EM64T.
How can I boot the system and get past that message? Decreasing the MEMDISK size is not possible either; it is already as small as we can get it with our userland + kernel + modules.
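One likely explanation, stated as an assumption: without EM64T this is a 32-bit kernel, where vmalloc space is carved out of the kernel's 1GB share of the 4GB address space (the default 3G/1G split), alongside the direct-mapped lowmem. vmalloc=1280M therefore cannot fit at all, which would explain the hang, and even values near 1GB leave almost no lowmem. A sketch of what to check and try:

Code:
# How much vmalloc space the running kernel actually has
grep -i vmalloc /proc/meminfo    # VmallocTotal / VmallocUsed

# On the PXE/syslinux APPEND line, ask for a value that still fits
# in the 1GB kernel window, e.g.:
#   vmalloc=512M

# If the MEMDISK really needs ~1GB of vmalloc, a kernel built with
# CONFIG_VMSPLIT_2G (a 2G/2G user/kernel split) is probably required.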
How do I change the size of the available shared memory on Linux? Evidently 4GB is not enough for what I am doing (I need to load a lot of data into shared memory; my machine has 8GB of RAM).
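Assuming it is System V shared memory that is running out, the limits are kernel tunables. A minimal sketch (values sized for an 8GB cap):

Code:
# Current limits: max single segment size (bytes) and
# total shared memory (in 4KB pages)
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmall

# Raise them at runtime, as root: 8GB segment cap, 8GB total
sysctl -w kernel.shmmax=8589934592
sysctl -w kernel.shmall=2097152

# Persist across reboots by adding to /etc/sysctl.conf:
#   kernel.shmmax = 8589934592
#   kernel.shmall = 2097152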
I created a VM disk image with kvm-img, but I forget what the maximum size of that disk image was when I created it. Currently its size is 6.2G. I want to install some large packages in that VM, so I want to make sure the disk image can expand to an adequate size.
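kvm-img is essentially a renamed qemu-img, so the image metadata should answer this directly. A sketch, assuming the image lives at /var/lib/libvirt/images/disk.img (hypothetical path):

Code:
# "virtual size" is the maximum the guest can ever use;
# "disk size" is what the (possibly sparse/qcow2) file occupies now
kvm-img info /var/lib/libvirt/images/disk.img

# If your version supports it, grow the virtual size (e.g. by 10GB);
# the guest's partition table and filesystem must then be grown separately
kvm-img resize /var/lib/libvirt/images/disk.img +10G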
The aim of this script is that when the folder reaches 20M, attributes are set on that particular folder so that no new files or folders can be created in or copied to that sample folder. But whenever I copy a file larger than 20M into the folder, it gets copied in full and only then are the attributes applied. I don't want that to happen: when the folder reaches its maximum, the current write operation to that folder should be stopped automatically with an error.
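A polling script can only react after the copy finishes; for the kernel to refuse the write mid-copy, the folder needs a real size limit. One way, sketched as an assumption about your setup, is to back the folder with a fixed-size filesystem image so writes fail with ENOSPC at exactly 20M:

Code:
# Create a 20MB file-backed filesystem and mount it on the folder
dd if=/dev/zero of=/var/samplefolder.img bs=1M count=20
mkfs.ext3 -F /var/samplefolder.img
mount -o loop /var/samplefolder.img /path/to/samplefolder

# Any copy that would push usage past 20MB now aborts immediately
# with "No space left on device" instead of completing first.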
I have set up a Squid server. My cache directories have been set up with the following statements:

cache_dir ufs /Cache1/squid 10000 16 256
cache_dir ufs /Cache2/squid 10000 16 256

Now the problem is that /Cache1 and /Cache2 have each reached about 8GB, and in the near future they will reach the maximum limit of 10GB. I just want to know whether or not I need to delete the contents of these directories myself.
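For reference, the numeric fields in that directive are the cache size in MB and the number of first- and second-level subdirectories. Squid evicts old objects on its own (LRU by default) once a cache_dir approaches its limit, so manual deletion should not be needed. A commented sketch of the relevant squid.conf lines:

Code:
# cache_dir <type> <path> <size-MB> <L1-dirs> <L2-dirs>
cache_dir ufs /Cache1/squid 10000 16 256
cache_dir ufs /Cache2/squid 10000 16 256

# Optional: when replacement starts/stops (percent of cache_dir size)
# cache_swap_low 90
# cache_swap_high 95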
This seems like a relatively simple question, but the answer seems to elude everyone: what is the MAXIMUM SIZE of a Linux loopback device (not counting any filesystem-specific limitations)? Is it the maximum size of a Linux block device?
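One empirical way to probe this, sketched with a sparse backing file so no real disk space is consumed (assumes a reasonably recent coreutils for truncate):

Code:
# Back a loop device with a sparse multi-terabyte file
truncate -s 4T sparse.img
losetup /dev/loop0 sparse.img

# Report the size the kernel actually exposes, in bytes
blockdev --getsize64 /dev/loop0
losetup -d /dev/loop0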
Is there any maximum limit to heap memory allocation? My program is in Perl and I am using a Solaris system. When I ran "pmap pid" (pid = my process ID), it showed a number of heap mappings, all of them gigabytes in size. This single process is eating up most of the physical memory. Is this normal, and is there any way to get the heap size down?
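A sketch of how to inspect and cap the heap on Solaris (the ulimit built-in behaves the same way on Linux):

Code:
# Detailed per-mapping breakdown; heap lines are tagged [ heap ]
pmap -x $pid | grep -i heap

# Cap the data segment (heap) for processes started from this shell,
# e.g. at 2GB; allocations beyond this fail instead of growing
ulimit -d 2097152    # value is in KB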
I'm wondering what the maximum heap space is that a process can use (not necessarily via a single malloc()) on Ubuntu x86_64, and which parameter determines that size.
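On x86_64 the virtual address space is so large that, in practice, the binding limits are the per-process resource limits and the kernel's overcommit policy. A sketch of where to look:

Code:
# Per-process caps: "Max address space" and "Max data size"
grep -Ei 'address|data' /proc/self/limits

# Shell-level view of the same limits (values in KB)
ulimit -v    # virtual memory
ulimit -d    # data segment (heap)

# System-wide overcommit policy: 0 heuristic, 1 always, 2 strict
cat /proc/sys/vm/overcommit_memory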
I have an old computer with several partitions, one of which has Slackware Linux installed; everything is included there (root and a swap file), and its size is almost 4GB. Now I have a new laptop, and I don't really want to reinstall Linux on it; I simply want to transfer everything from the old computer to the new one. The new hard disk is almost 12GB, and I want to use all of it for Slackware. I will recompile a new kernel on the old computer for the new one. I'm thinking of using dd to make an image; this command should be good, I think:

dd if=/dev/hda3 of=./linux_slackaware.img bs=4096 conv=noerror

I would use ZipSlack on the MS-DOS partition (hda2) to run this command, and it will produce a 4GB partition image. Now I ask: is it possible to transfer and adapt this partition image onto a partition of a different size? The new one is 12GB. What are the right dd parameters?
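dd by itself cannot grow a filesystem; the usual pattern is to restore the image onto the larger partition and then expand the filesystem to fill it. A sketch, assuming the new root partition is /dev/hda1 on the laptop (hypothetical name) and the filesystem is ext2/ext3:

Code:
# Write the 4GB image onto the larger (12GB) target partition
dd if=./linux_slackaware.img of=/dev/hda1 bs=4096

# Check the filesystem, then grow it to fill the whole partition
e2fsck -f /dev/hda1
resize2fs /dev/hda1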
Code:
#!/bin/bash
cmd1=$(cat /var/log/messages | grep -e 'blocked for more than 120 seconds' | cut -c 55-62)
if $cmd1 != 0; then echo 'okay'; fi
However, I'm messing up somewhere... bash attempts to evaluate the elements in cmd1. When I try to run this script, it complains:
Quote:
test1.sh: line 5: blocked: command not found
I am open to alternatives. My intent is to replace cat /var/log/messages with dmesg, so I can attempt to determine whether a problematic application I use encounters a blocked state (unresponsive for more than 120 seconds).
Should I be using a different test condition? I tried something like:
Code:
# this declares cmd1 as an array
cmd1=($(cat /var/log/messages | grep -e 'blocked for more than 120 seconds' | cut -c 55-62))
# attempt to determine if the number of elements in the array is greater than zero
if ${#cmd1[@]} > 0; then echo okay; fi
But I get the same error... what am I doing wrong?
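For what it's worth, the failure in both attempts is that `if` executes its argument as a command, so the expanded text of cmd1 gets run; a comparison needs a test command such as [ ... ] or (( ... )). A minimal corrected sketch, assuming the goal is just to detect whether any matching lines exist:

Code:
#!/bin/bash
# Count matching lines; grep -c prints 0 when nothing matches
count=$(grep -c 'blocked for more than 120 seconds' /var/log/messages)

# Arithmetic test instead of executing $count as a command
if (( count > 0 )); then
    echo 'okay'
fi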
I am trying to figure out the actual size of files and directories on a CentOS 5 server. When I do an ls -l I see, for example, 4096 for the directory /Data, but once inside the directory an ls -l shows much larger file sizes. How do I get the actual size of a directory's contents to show up?
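ls -l reports only the size of the directory entry itself (typically one 4096-byte block), not what it contains; du walks the tree. A sketch:

Code:
# Total size of everything under /Data, human-readable
du -sh /Data

# Per-subdirectory breakdown, one level deep
du -h --max-depth=1 /Data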
I've spent hours trying to scan and then shrink a multipage PDF document without losing readability. This is the first time I've ever needed to do this! (I had to scan each page as ".jpg" in order to email it and open it on another computer, so I could not scan to PDF directly, which I think is why each page was so large; lower DPIs made the text too blurry.) I found a great tip on UbuntuGeek... anyone can do this if Ghostscript is installed:
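The commonly cited Ghostscript recipe for this, offered here as an assumption since it may not be the exact tip referenced, downsamples the embedded scans while keeping the text readable:

Code:
# /ebook targets ~150dpi images; /screen is smaller, /printer larger
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
   -dNOPAUSE -dBATCH -dQUIET \
   -sOutputFile=shrunk.pdf input.pdf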
Having a bit of an issue with Debian Squeeze and transferring files to the Sony PSP. I hook the PSP up to a USB port and Debian mounts it. I drag a 125MB mp4 to the video folder, and the copy window takes about 10 seconds to transfer it. I exit USB mode and there is no video there. I go back into USB mode, look at the video folder on the PSP memory stick, and there is no video: it vanished. Another time, after the copy progress window closed, I right-clicked the PSP and unmounted it.
It errored, saying the device was busy and could not be unmounted. Looking at the light on the PSP, I saw the memory stick was still being written to. I waited for the light to stop flashing, about a minute or so, and then was able to unmount it. Going to PSP video, there was the video, ready to be watched. Debian isn't accurately showing the copy progress: it shows complete when it isn't, and I have to watch the light on the PSP to know when the copy has truly finished.
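That behavior is consistent with write-back caching: the file manager reports the copy done once the data is in the kernel's page cache, while the slow flush to the memory stick continues in the background. A sketch of forcing the flush before unplugging (paths hypothetical):

Code:
# Copy, then block until all dirty buffers reach the device
cp video.mp4 /media/PSP/VIDEO/
sync

# umount also flushes; it returns only once write-back is complete
umount /media/PSP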
I am trying to transfer a file from my live Linux machine to a remote Linux machine. It is a mail server, and a single .tar.gz file contains all the data, but the transfer stops partway through. How can I troubleshoot this? Is there a better way to transfer a huge 14GB file over a network/VPN/WAN link? The speed is 1Mbps; on a retry, it copies the rest of the file.
[root@sa1 logs_os_backup]# less remote.log
Wed Mar 10 09:12:01 AST 2010
building file list ... done
bkup_1.tar.gz
deflate on token returned 0 (87164 bytes left)
rsync error: error in rsync protocol data stream (code 12) at token.c(274)
building file list ... done
code....
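Since the log shows rsync already in use, a resume-friendly invocation is probably the simplest fix: --partial keeps the partly transferred file so a rerun continues from it instead of starting over. A sketch (host and destination path hypothetical):

Code:
# Resumable transfer over ssh; rerun the same command after a drop
rsync --partial --progress --compress -e ssh \
    bkup_1.tar.gz user@remote:/backup/

# On a 1Mbps link, capping bandwidth can also reduce timeouts:
#   add --bwlimit=100   (KB/s)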
I have been learning Linux for a while now, but Linux continuously surprises me with new things nearly every day... Today I met a really strange problem: the command "ls" indicates that the size of some directories is ZERO, such as /home.
However, there is a directory inside /home, which contains many files/directories.
Even worse, when I tried to create a file under /home, I got a "permission denied" error.
By the way, /home is within the local file system, not an NFS share.
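A sketch of checks that usually explain a zero-size, unwritable /home, for example an automount point that isn't mounted, or a filesystem remounted read-only:

Code:
# Is /home its own mount, and with what options?
mount | grep /home
df -h /home

# Permissions and ownership on the mount point itself
stat /home

# Recent kernel messages about a filesystem going read-only
dmesg | tail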