I am doing an analysis of Postfix, qmail and Sendmail, comparing their performance. I need to send mails of size 10 MB, 50 MB and 75 MB and measure the time taken to send each mail to different users. I first used telnet, but attaching files is very hard there. Then I tried Thunderbird, but its attachment size limit is just 5 MB. Is there a way to send attachments that large?
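For timing runs like this, a scriptable command-line mailer is much easier than Thunderbird. A minimal sketch using mutt (assuming it is installed and set up to relay through the MTA under test; the payload path and address are placeholders):
Code:
# generate a 10 MB test payload (repeat with count=50 / count=75 for the larger mails)
dd if=/dev/urandom of=/tmp/test-10M.bin bs=1M count=10
# attach it and time the hand-off to the local MTA
time mutt -s "10MB test" -a /tmp/test-10M.bin -- user@example.com < /dev/null
The same command can be looped over different recipients to compare the time taken per user.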
I finally got my e-mails fetched and downloaded to local disk. The next thing I would like to do is strip out the attachments and save each one under its original file name and extension. An e-mail like this, for example:
Code:
From root Thu Apr 7 17:21:34 2011
Delivered-To: ted_chou12@tedchou12.cz.cc
Received: from pop.gmail.com [74.125.155.109]
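One common way to pull the attachments out of saved messages while keeping their original file names is munpack from the mpack package (ripmime works similarly). A minimal sketch, assuming each fetched message is saved as its own file under ~/mail-dump (a hypothetical path):
Code:
mkdir -p ~/attachments
for msg in ~/mail-dump/*; do
    # munpack decodes the MIME parts and writes each attachment,
    # under its original file name, into the -C directory
    munpack -f -q -C ~/attachments "$msg"
done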
I'm wondering what the maximum heap space a process can use (not necessarily via a single malloc()) is on Ubuntu x86_64. Which parameter determines this size?
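In practice the ceiling is the process's virtual address-space limit (RLIMIT_AS) together with the kernel's overcommit policy, rather than a heap-specific knob. A quick way to inspect and cap it from a shell (the 4 GB figure is only an example):
Code:
ulimit -v                             # current address-space limit in kB ("unlimited" by default)
ulimit -v 4194304                     # cap this shell and its children at 4 GB of virtual memory
cat /proc/sys/vm/overcommit_memory    # 0/1/2: how far the kernel lets allocations exceed RAM+swap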
When I try to delete a file (move it to the trash), it says "The trash has reached its maximum size! Clean the trash manually." But when I click on the trash icon on the desktop, it is empty. Where is the trash, and how can I delete these files?
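On most current desktops the trash actually lives under ~/.local/share/Trash (older setups sometimes use ~/.Trash), so emptying it by hand is just a matter of removing its contents, for example:
Code:
du -sh ~/.local/share/Trash                                     # see how big it really is
rm -rf ~/.local/share/Trash/files/* ~/.local/share/Trash/info/*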
This seems like a relatively simple question, but the answer seems to elude everyone: What is the MAXIMUM SIZE of a Linux loopback device (not counting any specific filesystem limitations)? Is it the maximum size of a Linux block device?
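One way to probe it empirically is to attach a large sparse file as a loop device and ask the kernel how big the resulting block device is (the sparse file itself is capped by the host filesystem's maximum file size, so 1T is used here as a safe example):
Code:
truncate -s 1T /tmp/huge.img             # sparse file, consumes no real disk space
sudo losetup /dev/loop0 /tmp/huge.img
sudo blockdev --getsize64 /dev/loop0     # size of the loop device in bytes
sudo losetup -d /dev/loop0               # detach when done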
Is there any maximum limit to heap memory allocation? My program is in Perl and I am using a Solaris system. When I ran "pmap pid" (pid = my process ID) it showed a number of heap segments allocated, all of them gigabytes in size. This single process is eating up most of the physical memory. Is this normal, and is there any way to get the heap memory size down?
First of all, I'm booting from a large MEMDISK (900MB) using PXE. Due to our environment, I cannot decrease the size, nor move files to an NFS/iSCSI/... environment. Everything needs to be in that MEMDISK.
Now, when I try to run the OS, I run out of vmalloc space. How do I increase it to a value that allows such a large image to be mapped? I tried the parameter "vmalloc=1280M", but with that parameter I don't get past the "Booting the kernel" screen.
Memory should not be an issue, since the machine(s) have at least 2GB RAM. (900MB MEMDISK + 256MB for other kernel stuff + 768MB for user stuff). The machine(s) have a Pentium 4 Extreme Edition processor, with hyperthreading and SSE2, but no EM64T.
How can I boot the system and get past that message? Decreasing the MEMDISK size is not possible either; it is already as small as we can get it with our userland + kernel + modules.
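As a data point, it is worth confirming how much vmalloc space the kernel actually has and whether the parameter reached it at all; on 32-bit, vmalloc is carved out of the roughly 1GB kernel mapping, so a value somewhat smaller than 1280M may boot where 1280M does not:
Code:
grep -i vmalloc /proc/meminfo    # VmallocTotal / VmallocUsed / VmallocChunk
cat /proc/cmdline                # confirm the vmalloc= parameter was actually passed to the kernel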
How do I change the size of the available shared memory on Linux? Evidently 4GB is not enough for what I am doing (I need to load a lot of data into shared memory; my machine has 8GB of RAM).
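If "shared memory" here means System V shared memory (shmget/shmat), the ceiling is set by the kernel.shmmax and kernel.shmall sysctls. A sketch for raising them to 6GB on an 8GB box (the values are only an example):
Code:
sudo sysctl -w kernel.shmmax=6442450944    # max size of a single segment, in bytes (6 GB)
sudo sysctl -w kernel.shmall=1572864       # max total shared memory, in 4 kB pages (6 GB / 4096)
# make the change persistent across reboots
echo "kernel.shmmax=6442450944" | sudo tee -a /etc/sysctl.conf
echo "kernel.shmall=1572864"    | sudo tee -a /etc/sysctl.conf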
I'm using fc14 and the SG driver to test some SCSI (SAS) targets. In doing so, I'm bumping up against what appears to be a 512KB maximum transfer size per command. Transfers up to 4MB sometimes work, but often they result in ENOMEM or EINVAL returned from the write() function in the SG driver. I could not find any good documentation on how the SCSI system in Linux works so I've been studying the source for drivers in drivers/scsi.
I see that there is a scsi_device struct that contains a request_queue struct that contains a queue_limits struct that contains an element called max_sectors. The SG driver seems to use this to limit the size of the reserve buffer it is willing to create. I see that there are several constants used to initialize max_sectors to 1024 which would result in the 512KB limit I see (with targets having 512 byte sectors). At this point I have several questions:
1) When the open() function for the sg driver gets called, who initializes the scsi_device struct with the default values?
2) Can I merely change the limits struct to arbitrary values after initialization and cause the SG ioctls to set the reserve buffer to allow greater values?......
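Not an answer to who initializes the struct, but as an experiment the same limit can be poked from user space through sysfs to see whether larger SG transfers start to succeed (sdX is a placeholder for the disk behind your sg node):
Code:
cat /sys/block/sdX/queue/max_hw_sectors_kb        # hard limit reported by the HBA/driver
cat /sys/block/sdX/queue/max_sectors_kb           # current soft limit (often 512)
echo 4096 > /sys/block/sdX/queue/max_sectors_kb   # raise the soft limit (capped by max_hw_sectors_kb)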
I created a VM disk image with kvm-img, but I forgot what the maximum size of that disk image was when I created it. Currently its size is 6.2G. I want to install some large packages in that VM, so I want to make sure the disk image can expand to an adequate size.
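The virtual (maximum) size is stored in the image itself, and kvm-img info (kvm-img is essentially a renamed qemu-img) will print it; if it turns out to be too small, qemu-img resize can grow it, assuming your version supports it. The image path below is only a placeholder:
Code:
kvm-img info /var/lib/libvirt/images/myvm.qcow2
#   virtual size: 20G (21474836480 bytes)   <- the maximum the guest disk can grow to
#   disk size: 6.2G                         <- space currently used on the host
qemu-img resize /var/lib/libvirt/images/myvm.qcow2 +20G   # grow the ceiling; the guest filesystem must be grown separately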
The aim of this script is that when the folder reaches 20MB, attributes are set on that particular folder so that no new files or folders can be created in it or copied to it. Whenever I copy a file of more than 20MB to the folder, it gets copied in full and only then are the attributes applied. But I don't want this to happen: when the folder reaches its maximum, the current write operation to that folder should be stopped automatically with an error.
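One way to get exactly that behaviour (the write fails the moment the limit is hit, instead of attributes being applied afterwards) is to back the folder with a fixed-size filesystem image mounted on it, so the 20MB cap is enforced by the filesystem at write time. A minimal sketch with placeholder paths:
Code:
dd if=/dev/zero of=/var/samplefolder.img bs=1M count=20    # 20 MB container file
mkfs.ext2 -F /var/samplefolder.img                         # create a filesystem inside it
mkdir -p /home/user/samplefolder
mount -o loop /var/samplefolder.img /home/user/samplefolder
# any copy that would exceed ~20 MB now fails with "No space left on device" as it happens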
I have set up a Squid server. My cache directories have been configured with the following statements:
Code:
cache_dir ufs /Cache1/squid 10000 16 256
cache_dir ufs /Cache2/squid 10000 16 256
Now the problem is that /Cache1 and /Cache2 have each grown to about 8GB and in the near future will reach the 10GB maximum. I just want to know whether I need to delete the contents of these directories or not.
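For what it's worth, Squid normally handles this itself: once a cache_dir approaches its configured size it starts evicting old objects, governed by the cache_swap_low/high watermarks, so manual deletion should not be needed. The relevant knobs (shown here at their usual defaults, purely as an example) are:
Code:
cache_swap_low  90     # start evicting objects when a cache_dir is 90% full
cache_swap_high 95     # evict aggressively above 95%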
How do I unblock e-mail with .zip attachments? All attachments should come through to my e-mail. I get the message below when people send e-mail to my domain:
Warning: This message has had one or more attachments removed Warning: (the entire message). Warning: Please read the "Apex-Attachment-Warning.txt" attachment(s) for more information.
This is a message from the MailScanner E-Mail Virus Protection Service The original e-mail message contained potentially dangerous content, which has been removed for your safety. The content is dangerous as it is often used to spread viruses or to gain personal or confidential information from you, such as passwords or credit card numbers.
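Assuming the filter really is MailScanner (the warning text suggests it), attachment blocking is driven by its filename/filetype rules files, and letting .zip through is typically a matter of turning the matching rule from deny into allow and restarting MailScanner. A hedged sketch; the exact path and rule text vary by distribution:
Code:
# /etc/MailScanner/filename.rules.conf -- fields are tab-separated:
# action<TAB>regex<TAB>log text<TAB>user report text
allow	\.zip$	-	-
# then restart MailScanner so the rule change is picked up
service MailScanner restart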
With three 1.5TB, 7200RPM drives in RAID0, they thrash. And yet the network out (four gigabit ports LAG'd together into a single 4-gigabit connection) can't even push one gigabit per second.
Here's what I've done so far:
I've enabled jumbo frames on the bond: ifconfig bond0 mtu 9000
Tweaked SAMBA performance:
Tweaked hdparm:
I haven't enabled jumbo frames on the switch but I'm almost sure that won't help me much after trying all this.
I'm running out of ideas here guys. The clients connected are pulling down images in both Ghost and WIM (ImageX) format. Large files too, upwards of 12 gigabytes.
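For comparison with what has already been tried, these are the kinds of settings usually meant by "tweaking Samba and hdparm" for large sequential transfers (typical examples only, not necessarily what was used here; results vary a lot by hardware):
Code:
# /etc/samba/smb.conf, [global]
socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
read raw = yes
write raw = yes
# read-ahead on each RAID0 member disk (sda is a placeholder), in sectors
hdparm -a 1024 /dev/sda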
I'm writing a client-server program. There are more than 500 clients. I start a thread to process and respond to each client, and the processing requires some MySQL queries. I'm looking for any possible hazards on my server!
1- Any limitation on "Maximum Simultaneous Socket Connection"?
2- Any limitation on using MySQL?
3- As sockets on Linux are files, any limitation on the number of sockets or threads?
I'm using a Linux server (Centos or Fedora or Ubuntu) and clients are both Linux and Windows.
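A quick checklist of where those limits actually live on a stock Linux box (the printed values are what to compare against the expected 500+ connections):
Code:
ulimit -n                            # per-process open-file limit; every socket counts against it
cat /proc/sys/net/core/somaxconn     # ceiling on the listen() backlog
cat /proc/sys/kernel/threads-max     # system-wide thread limit
mysql -e "SHOW VARIABLES LIKE 'max_connections';"   # MySQL's own cap (around 100-151 by default)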
I'm running nginx for static files and as a proxy server for a Comet IM server on Ubuntu Jaunty. Under high load I'm hitting a limit of 1024 file descriptors. I've tried increasing this limit but still can't get past 1024. Does "more /proc/sys/fs/file-nr" give me the global count of used file descriptors? Why do I see a maximum of 1024 open file descriptors in /proc/sys/fs/file-nr if this is the global count for the machine and each user should have at least 1024 allowed file descriptors by default? Is there a way to increase the limit while the server is running?
Some relevant info on my server:
sudo more /proc/sys/fs/file-nr
1024	0	38001
sudo sysctl fs.file-max
fs.file-max = 38001
sudo nano /etc/security/limits.conf
...
* hard nofile 30000
* soft nofile 30000
I also added this to /usr/local/nginx/conf/nginx.conf:
worker_rlimit_nofile 10240;
and uncommented the following line in /etc/pam.d/su:
session required pam_limits.so
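One thing worth checking: /proc/sys/fs/file-nr is the system-wide count, but the 1024 being hit is almost certainly the per-process limit of the nginx workers, and limits.conf only applies to PAM login sessions, not to daemons started by init. A quick way to confirm, plus the typical nginx-side settings (values are examples; nginx must be restarted to pick them up):
Code:
# what limit did the running workers actually inherit?
cat /proc/$(pidof nginx | awk '{print $1}')/limits | grep "open files"
# in nginx.conf:
#   worker_rlimit_nofile 10240;
#   events { worker_connections 8192; }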
I'm out in a village and we get more powercuts than I like, and recovering the journal on my server is getting rather irritating. I'm looking into getting a cheap UPS, the powercuts will last usually a maximum of 30 seconds, so I only need a few minutes.
I've been looking at:
Plexus V 500VA UPS
Plexus V 1200VA UPS
APC SUA750I Smart-UPS 750VA
I know near to nothing about these things, my question is will those work with a machine with a 700W PSU? How do you know? The 500VA doesn't really mean much to me. Ideally I'd like to get my desktop on there too, but that's more for convenience than anything.
Will any of these do the job? Any Linux compatibility issues I should plan for? Any recommendations from personal experience are greatly welcome.
Edit: I will be happy with a UPS that can inform Linux that power is down and get the server to shut down cleanly straight away; I'm more interested in a clean shutdown than in maintaining power to use the machines during the outage.
Edit 2: Can UPS devices be piggybacked onto one another to provide extra uptime? I.e. could I run two of them so that when the first runs down, the second carries on?
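On the Linux side, the APC unit at least is well supported by apcupsd, which can start a clean shutdown as soon as the power drops rather than waiting for the battery to run flat. A minimal sketch of the relevant /etc/apcupsd/apcupsd.conf settings, assuming a USB-connected Smart-UPS:
Code:
UPSCABLE usb
UPSTYPE usb
DEVICE
TIMEOUT 30     # seconds on battery before a clean shutdown is initiated
# restart apcupsd afterwards and verify with: apcaccess status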
OK, I have the original install: a 170GB partition mounted on /.
I need the following: the OS on /, 30GB; data mounted on /data, 140GB.
fdisk would have been a joke, but the box was set up with LVM, so I am learning on the fly. I was able to shrink the logical volume using:
lvreduce -L 30G /dev/mapper/server-root
vgdisplay shows:
VG Size               169.76 GiB
Alloc PE / Size       9453 / 36.93 GiB
Free PE / Size        34005 / 132.83 GiB
Now, per my DBA, I need that ~130GB on a separate 'partition', and I am not 100% sure of the next step. I am reading up on vgcreate, lvcreate, etc.
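The next step is just carving a new logical volume out of that free space, putting a filesystem on it and mounting it (assuming the root filesystem was shrunk with resize2fs before that lvreduce; if not, that is worth checking first). A sketch using the VG name implied by /dev/mapper/server-root; the LV name, size and mount point are placeholders:
Code:
lvcreate -L 130G -n data server            # new LV "data" in volume group "server"
mkfs.ext4 /dev/server/data                 # or ext3/xfs, per the DBA's preference
mkdir -p /data
mount /dev/server/data /data
echo "/dev/server/data  /data  ext4  defaults  0 2" >> /etc/fstab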
I really don't understand what's happening. I made a 3.5TB RAID array in Disk Utility, yet it ends up with one partition of 3TB and another 500GB left free! Why is that? I thought ext4 could handle huge partition sizes.
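It may help to look at what Disk Utility actually wrote to the array: the partition-table type (an msdos/MBR label cannot address much beyond 2TiB per partition, GPT can) and the exact partition boundaries. For example, with /dev/md0 as a placeholder for the RAID device:
Code:
parted /dev/md0 print                  # shows "Partition Table: msdos" or "gpt" plus partition sizes
parted /dev/md0 unit GB print free     # shows any unallocated gaps explicitly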
I have a server set up running VirtualBox and several Windows guests. I'm running the box headless and start the Windows machines using
Code:
sudo vboxheadless -s "WinXP Pro SP3"&
via an SSH session from my MacBook Pro in Terminal. I then connect to the virtual machines using MS Remote Desktop from the MacBook Pro. All of this works perfectly, except that I've run out of space on my partition for virtual machines and need to create a few more. I have plenty of room on the HDD, but when first installing Ubuntu Server I only partitioned and formatted about 1/4 of the drive. Is it possible to run a command in my SSH session to partition the unused portion of the HDD, format it, and expand my current partition into that space? Or do I have to use something like GParted Live, boot from the CD and do the partitioning there?
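The first part can be done entirely over SSH: creating a new partition in the unallocated space, formatting it and mounting it (for instance as the place where new VM images live) needs no live CD. Only growing the existing root partition in place generally requires something like GParted Live, unless the disk is on LVM. A hedged sketch with placeholder device names:
Code:
sudo fdisk /dev/sda          # interactively add a new partition (say /dev/sda3) in the free space
# the kernel may need "sudo partprobe" or a reboot before the new device node appears
sudo mkfs.ext4 /dev/sda3
sudo mkdir -p /vm
sudo mount /dev/sda3 /vm     # then add it to /etc/fstab and point VirtualBox's VM folder at it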
Desperate to reduce the RAM usage of my tiny VPS running Ubuntu 9.04 and Apache 2.2.11, I saw this advice: on Linux, each child process will use 8MB of memory by default. This is probably unnecessary. You can decrease the overall memory used by Apache by setting ThreadStackSize to 1MB.
So I gave the suggestion a try. But when I appended ThreadStackSize 1000000 inside the <IfModule mpm_prefork_module> section of my /etc/apache2/httpd.conf and restarted Apache, it failed with this message: Invalid command 'ThreadStackSize', perhaps misspelled or defined by a module not included in the server configuration.
So I figured out that the relevant module is neither enabled nor available in my Apache2 setup. Now I am wondering whether there is a way to decrease the ThreadStackSize without having to compile Apache from source. If not, what should I do?
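For what it's worth, ThreadStackSize is only understood by the threaded MPMs (worker/event); under mpm_prefork there are no threads, so the directive is always rejected, and the 8MB in question is really the per-process stack limit the children inherit. Without recompiling, the usual options are to switch to the worker MPM package or to lower that stack limit, e.g. to 1MB, via /etc/apache2/envvars (a hedged example; the file is sourced by apache2ctl on Debian/Ubuntu):
Code:
# /etc/apache2/envvars
ulimit -s 1024    # stack size in kB inherited by every Apache child process
# alternatively, switch MPMs: apt-get install apache2-mpm-worker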
I've just installed Ubuntu Server 9.04 (after having installed 9.10, having problems with it, and uninstalling it). Mostly, 9.04 is working well so far, but for one nuisance: the font is huge.
Well, okay, not huge, but big. On my other machine, running Ubuntu 9.04 desktop, same size monitor, I have the resolution set to 1440x900 which gives me 46 lines on the CLI (with the window maximized, but not full-screen). On the server machine, however, I'm getting only 25 lines -- and there's not even a window title-bar, menu bar, or panels taking up any of the landscape.
So my question is this: Not having a GUI nor any of the associated display-management software, how can I set the screen resolution or otherwise get my display font smaller, using the CLI?
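Two CLI-only approaches that usually work on a GUI-less server: pick a smaller console font via console-setup, or boot with a framebuffer mode so the text console runs at a higher resolution. Sketches of both (9.04 still uses GRUB legacy, so the kernel line lives in /boot/grub/menu.lst; the kernel file name and UUID below are placeholders):
Code:
sudo dpkg-reconfigure console-setup    # choose a smaller font face/size interactively
# or append a framebuffer mode to the kernel line in /boot/grub/menu.lst:
kernel /boot/vmlinuz-2.6.28-11-server root=UUID=... ro quiet vga=791    # 791 = 1024x768, 16-bit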