In my production setup, I have three servers using the same mount point. However, I see that the IOPS are low. Does this kind of architecture have any impact on IOPS? If it is neutral, how can I tune my setup for better IOPS?
I have a RHEL5 server that hosts an Apache SSL proxy and about 20 Tomcat instances. Lately we've had latency issues on the system that I can't pin down. In trying to diagnose whether the local HD is being over-utilized, I started gathering disk utilization stats using iostat and sar. For sar, I'm using the "tps" metric, and for iostat I'm combining reads and writes per second for the raw disk device, sda. When I put the stats into Excel, the profiles of the graphed data points match up for the most part, but sar is reporting values for the same data points that are orders of magnitude higher. Can anybody give me a hint as to why one tool would report the same data differently when (as far as I know) both of them pull their disk I/O stats from the same place?
How should I calculate the IOPS for my system? I had understood that IOPS could be calculated by monitoring iostat: the r/s (reads/sec) and w/s (writes/sec) combined basically give you the peak IOPS. However, having monitored iostat for my app during a performance test, the figures I'm seeing lead me to believe that iostat may not be a reliable way to calculate IOPS. I get values of over 10,000 for w/s! I didn't think this was possible given the storage system I have (4 x 7500rpm SATA disks in RAID 5).
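For what it's worth, bursts far above raw spindle IOPS are likely normal here: the page cache and the controller's write cache absorb writes, and the kernel counts completed (and merged) requests, not head movements. You can also compute device-level IOPS yourself from the kernel counters instead of trusting a tool's summary. A minimal sketch, diffing /proc/diskstats over a short interval (the device name defaults to the first listed device; pass e.g. "sda" as the first argument to pick your disk):

```shell
#!/bin/sh
# Sketch: measure device-level read/write IOPS by diffing /proc/diskstats.
# Fields 4 and 8 of each line are reads and writes completed since boot.
DEV=${1:-$(awk 'NR == 1 { print $3 }' /proc/diskstats)}
INTERVAL=${2:-2}   # seconds to sample

snap() { awk -v d="$DEV" '$3 == d { print $4, $8 }' /proc/diskstats; }

set -- $(snap); r1=$1; w1=$2
sleep "$INTERVAL"
set -- $(snap); r2=$1; w2=$2

echo "$DEV: r/s $(( (r2 - r1) / INTERVAL ))  w/s $(( (w2 - w1) / INTERVAL ))"
```

If the numbers from this match iostat's r/s and w/s, the tool is reporting faithfully and the surprise is in the caching, not the measurement.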
Right now I have an HP DL180 server with a 130 GB hard disk (after RAID 0+1) and 8 GB of RAM. I want to configure a domain controller server for my office of 200 to 300 users. What partition sizes should I specify on the 130 GB hard disk, and is that going to be sufficient for me?
I am a bit confused about the /usr, /var, and /boot partitions, as I need to manage the 130 GB carefully.
If I go with 4 GB of swap and the remainder for /, will that be fine? Or do I need to specify separate partition sizes for /tmp, /var, /usr, and so on?
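To make the question concrete, here is one possible split for a 130 GB disk on a server of this kind. The sizes are purely illustrative assumptions, not requirements; "4 GB swap plus everything in /" also works, it just means a runaway /var or /tmp can fill the root filesystem:

```
/boot    200 MB    kernels and bootloader
swap       4 GB
/         20 GB    base system (includes /usr if not split out)
/var      30 GB    logs, spools, databases that grow over time
/tmp       5 GB    separate so a runaway job cannot fill /
remainder ~70 GB   /home, or left unallocated in LVM to assign later
```

Using LVM and leaving the remainder unallocated is the flexible choice, since any of these can then be grown later.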
I installed openSUSE Linux Enterprise Server 10.2 SP2 (x86_64) and successfully added extra packages during the install, but once inside the system I cannot configure Ethernet. Although I have two Ethernet ports, no ports are found. I installed some drivers but still cannot find the Ethernet devices. When I run ifconfig, I get only this:
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
How do I detect the Ethernet card? I have updated more packages and configured the network card, but it still does not connect.
I'm new to setting up Linux servers. I've set up an Ubuntu 10.10 server along with CUPS, and I'm using Webmin to talk to the server. I have an HP PSC 1315 multifunction printer connected via USB to the server. Using the CUPS web interface, I am able to get the server to detect the connected printer, and it identified the HP PSC 1310 Series drivers.
When I print a test page from the server's screen, the print job goes through OK, and its size is about 5 KB.
I then set up a Samba share to allow my Windows 7 machine to use the printer. Windows 7 is able to pick up the shared printer correctly, and I used the default HP 1310 Series drivers. When I tried to send a test page to the printer, that single page ended up being 3,887 KB, and a single-page Word document I printed ended up being over 7 MB.
Has anyone had any experience using SUA (Services for UNIX Applications) rsync to "pull" files down to a Win2k3R2 server from a Linux rsync host? I was trying to use Cygwin rsync until I found out from the Cygwin folks that their port of rsync was "flaky" and would fail intermittently for no apparent reason. They suggested I use SUA or SFU for rsync services.
I've looked for, and am still looking for, any experience using SUA rsync to copy files down from a Linux rsync host to the Windows host. Also, if you have done this successfully, do you have any pointers or caveats you can share on how you got it working? What I am basically looking to do is copy files and subdirectories from a Linux host via rsync to some static location on a Windows server on a scheduled basis, so that I can back up the Windows server to tape using Symantec's Backup Exec application.
I'm doing it this way to avoid deploying the Remote Agents for either Linux or Windows on the target hosts. As an alternative, I've seen reference to a product called DeltaCopy, which uses a native Windows rsync port together with the native Linux rsync to do what I need. I realize this is not a strictly Linux question but more of a hybrid, as I'm moving data between Windows and Linux hosts. So, if this is too Windows-y a question, please say so and I'll withdraw it.
We recently moved our application and database from UNIX to Linux, so now the application server (WebLogic 8.1) and the Oracle 9i database are both on Linux. Previously, a process on the application server (which accesses the database for data) took only one hour to run on UNIX; after moving to Linux, the same process takes 4 to 5 hours.
But when the queries are run individually on the database, they are quicker than on UNIX. Our admin tried changing the kernel parameters on the database server, but it is still the same.
Have any of you moved from paid Red Hat Linux to CentOS? What were your experiences of moving from a paid Linux to the unpaid CentOS? When do you suggest a person use paid Linux, and when unpaid?
I have learnt that network-locked Huawei modems may be unlocked to use any SIM card by getting a special unlock code; the modem should ask for it when a "foreign" SIM card is inserted. This procedure works well in Windows, but in Linux, where I use wvdial, I don't get prompted for this unlock code. Does anyone know how to enter the unlock code in Linux using any Linux tool (Gammu, Gnokii, Minicom, etc.)?
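On many Huawei modems the unlock code can reportedly be entered directly over the modem's control serial port with an AT command; the exact syntax is firmware-dependent, so treat the following terminal-session fragment as an assumption to verify against your model (the code shown is a placeholder, and /dev/ttyUSB0 may differ on your system):

```
# Connect to the modem's control port, e.g.:  minicom -D /dev/ttyUSB0
AT^CARDLOCK?            # query the lock state (commonly reported syntax)
AT^CARDLOCK="12345678"  # submit the unlock code (placeholder value)
```

Gammu and Gnokii also accept raw AT commands, so either could be used instead of Minicom if it is already set up for the device.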
A complete backup using tar consumes a lot of time, so is there any way to take incremental backups using tar? I also want to take incremental dumps of my databases. Any suggestions and links would be very helpful; I keep googling for this but could not find anything exact.
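GNU tar supports incrementals directly via `--listed-incremental` (`-g`): a snapshot file records what was already saved, and reusing it makes each later archive contain only what changed since the previous run. A self-contained sketch using temp directories (paths are illustrative):

```shell
#!/bin/sh
# Sketch: incremental backups with GNU tar's --listed-incremental (-g).
WORK=$(mktemp -d)
mkdir -p "$WORK/data"
echo "one" > "$WORK/data/a.txt"

# Level 0 (full) backup -- creates the snapshot file.
tar -czf "$WORK/full.tar.gz" -g "$WORK/snapshot.snar" -C "$WORK" data

sleep 1                               # ensure a later mtime for the new file
echo "two" > "$WORK/data/b.txt"

# Level 1 (incremental) backup -- reuses the same snapshot file.
tar -czf "$WORK/incr.tar.gz" -g "$WORK/snapshot.snar" -C "$WORK" data

tar -tzf "$WORK/incr.tar.gz"          # lists the directory and only the changed file
```

For the databases, the usual approach is to dump to a file (e.g. with mysqldump or pg_dump) and feed that file into the same tar scheme; most databases also have native incremental mechanisms (such as MySQL binary logs) worth looking at separately.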
I have three Linux systems, named system1, system2, and system3, each configured for running applications. I have around 100 GB of space on system3 under /usr, with not much of it used. System1 has very little space left, yet it gets most of the hits and needs a proper backup; it is quite old and its partitions were not planned properly. So I want to use the disk with more space for the backup requirements.
I have CentOS 5.9 running on my server, and I have to take a backup of the entire data from a different server; this one I want to make the backup server. I need some information about tape drives:
1. Which tape drive is good and compatible with Linux (CentOS)? Please send me a link.
2. How do I take a backup onto the tape drive? It would be good if you could send any documentation.
3. Is there any backup software that is open source?
I am looking for some monitoring tools (disk usage, memory usage, CPU, etc.) for my Linux machines. I came across two tools, Cacti and Splunk. Which one is better? It would be nice if you could also let me know the reason.
The question that I am posting here is quite interesting, as it was asked in an interview I attended today, and honestly, I could not provide a solution. OK, here goes the problem statement: design a web interface that has three text fields:
IP Address:
Subnet Mask:
Default Gateway:
And a Submit button.
When we click the Submit button, the entries must be written to the relevant files, and then the network service must be restarted to bring the new IP address, subnet mask, and default gateway into effect. As we all know, these settings can only be changed by the root user or a user who has those privileges. The complete web interface needs to be done in PHP; some shell script can be used if required.
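One common split is to have the PHP page only validate the three fields and then invoke a small privileged helper script (e.g. via a restricted sudoers entry). A minimal sketch of that helper, assuming a Red Hat-style ifcfg layout; it writes to a temp file here so the sketch is safe to run, where the real script would target /etc/sysconfig/network-scripts/ifcfg-eth0:

```shell
#!/bin/sh
# Sketch of the privileged helper the PHP page would call with the
# three submitted values. Defaults below are placeholder examples.
IP=${1:-192.168.1.10}
MASK=${2:-255.255.255.0}
GW=${3:-192.168.1.1}
CFG=$(mktemp)   # in production: /etc/sysconfig/network-scripts/ifcfg-eth0

cat > "$CFG" <<EOF
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=$IP
NETMASK=$MASK
GATEWAY=$GW
EOF

# In production the helper would then run:  service network restart
cat "$CFG"
```

Validating the inputs in PHP (and again in the helper) matters here, since the values end up in a root-written file.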
I am looking for some advice on a Linux home server. I'd like it to be low power and reliable. I have searched and found a lot of information online already, but it's too hard to choose due to the many options and opinions.
The main reasons for me to get a home server are (more or less in order of priority): I'd like to have my files centrally, so I can reach them (or stream them) from my laptop, my PC, my phone, from work, etc.; I'd like to have my own e-mail server for my own domain(s), and perhaps also Jabber; I'd like to have my own web server to develop on, and perhaps even to host a website; and I might play around with home automation in the future. Of course I am not here to ask questions regarding the software I would need, but I'd like to get a nice low-power (I am a bit of an environmentalist, but not extreme) and reliable server.
I also get the impression that many people like AMD Geode processors, and the choice is certainly not limited to these products; I am here for suggestions. It's not a necessity that the server be small. Actually, I think I'd like something with two disks, so I can use RAID (no experience, but it makes sense to do), so the B3 loses to the Via here. Also, I like a bit of flexibility, so in the future I would prefer a home network with a separate modem, router, server, and WAP over an all-in-one solution. Perhaps even have the hard disks in a separate case? Actually, I think it makes sense to place a server out of sight (aesthetics, theft, etc.), while a WAP should of course be fairly visible.
At the moment, I do not have lots of files to store, but I think it makes sense to get decent capacity anyway (at the very least 500 GB and expandable).Making a good choice is more difficult for me than spending the money. I'd be happy to spend something around 400 - 600 for a good product. I am able to put components together myself if that would give me a much better product for the same amount of money (did it before, but not so often).
I want to know how to make my old computer into a Linux server that I can keep at home, so that when I go to college I can take my new computer. Right now I have a Core 2 Duo with 4 GB of RAM, a 640 GB HD, a 600 W PSU, and a 9800 GTX, in a Cooler Master case. How can I make it run as a Linux server all the time that can host things for me? I also want to set up VNC through SSH (I think that's what it's called).
We are looking at installing a Linux server in the office. The requirements are: all sent items of every user must be sent as a BCC (or a copy) to a common management ID, and a copy of all sent items must be saved. There would be only one e-mail ID on the remote server, and mailman would then have to distribute the e-mails when it downloads them. Is this possible on Linux? If yes, please suggest which e-mail software would support this. Has anyone done this before, with SpamAssassin etc.?
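As one data point: if the MTA chosen ends up being Postfix (the question leaves the software open), the "copy of everything to a management ID" requirement is a single configuration line; the address below is a placeholder:

```
# /etc/postfix/main.cf -- address is a placeholder for your management ID
always_bcc = management-archive@example.com
```

Postfix also has `sender_bcc_maps` for per-sender copies if only some users' mail should be archived. SpamAssassin integrates with Postfix independently of this setting.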
I am unable to ssh to a Linux box from other Linux boxes; I also tried PuTTY from Windows. I do get the password prompt instantaneously, though. So far, by comparing logs with another server, I have only been able to figure out that "debug2: callback start" does not appear in the ssh -vvv logs.
I installed openSUSE 11.3 on my laptop, a Toshiba Satellite L505-13w (Core i5 2.27 GHz, 4 GB RAM, 1 GB ATI display). I can't measure the CPU temperature. I tried "acpi -t" and sensors, but nothing happened. I also tried the system-information widgets from the Plasma menu, and I still can't see my CPU temperature. Can anyone help me with this problem? I want to see my CPU temperature.
I'm trying to get our Linux servers to use Active Directory (AD), and I have gotten our Linux (RHEL 5) server to fetch users and groups from AD. Now I'd like to add computers (and groups of computers) to AD, and have our Linux boxes make use of this info. Does anyone know how to get our Linux boxes to understand computer and computer-group objects in AD?
I am looking to encrypt certain outgoing e-mails on my Linux server using OpenPGP. Currently I have GPG set up with a public key; however, encrypting outgoing e-mails proved to be harder. After a bit of research I found GNU Anubis, which acts as a middleman between the MUA and the MTA by encrypting e-mails before they reach my MTA (Sendmail). However, I am having a bit of a problem with the configuration of bind and remote-mta, as specified by Anubis. I have the Sendmail service running on port 25, and I want to leave it there, but I have configured my php.ini SMTP port to 24, so mail goes through port 24 first and Anubis then forwards the e-mails via remote-mta to port 25. Here are my Anubis configs:
With all of that set, I can't seem to get even basic modification of e-mails to work (trying to change a certain subject to something else, just to see that Anubis is doing anything). E-mails do still go through with port 24 as the SMTP port, just unmodified.
I have a proxy/gateway server with X routable addresses and X clients, each connecting to its corresponding address on my server. All clients have public static IPs. I need something like the output of 'pktstat -1 -w 10 -B -i eth0 -n -P -t -T', but something that would indicate the biggest 'traffic hogs' among my clients.
Something like:

126.96.36.199 <-> my.public.ip.1   1344KB/s up   289KB/s down
188.8.131.52  <-> my.public.ip.2   1203KB/s up   200KB/s down
With this output, I can limit the traffic passing through my server using a bandwidth limiter on my.public.ip.1 and my.public.ip.2. Pktstat only shows the total traffic to and from the respective IPs gathered over a 10-second interval (-w 10). I would like something that indicates the bandwidth per IP more precisely; I don't want to divide the total traffic by 10 (seconds). Please note that this will go in a cron job, so interactive tools like iftop are useless (I would like something like a text screenshot of iftop from which I could extract the needed information).
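One cron-friendly alternative is per-IP iptables accounting: add one rule per client and read the byte counters with `iptables -nvx -L` on each cron run. A sketch of the parsing side; since the real command needs root and live traffic, the sample output below is fabricated for illustration (the rule-creation commands are shown only as comments):

```shell
#!/bin/sh
# Accounting rules would be created once, e.g.:
#   iptables -N ACCT
#   iptables -I FORWARD -j ACCT
#   iptables -A ACCT -s 126.96.36.199
#   iptables -A ACCT -s 188.8.131.52
# The cron job would then run:  iptables -nvx -L ACCT
# Hypothetical sample of that output, used here so the parse is demonstrable:
sample='    pkts      bytes target     prot opt in     out     source               destination
   10231   9876543            all  --  *      *       126.96.36.199        0.0.0.0/0
    4410   1234567            all  --  *      *       188.8.131.52         0.0.0.0/0'

# Column 2 is the byte counter, column 7 the source IP; sort to find the hog.
top=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $2, $7 }' | sort -rn | head -n 1)
echo "biggest hog: $top"
```

Counters are cumulative, so the cron job would store the previous reading and subtract (or zero them each run with `iptables -Z ACCT`) to get per-interval bandwidth.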
Is there any good way to measure USB drive speed in a repeatable and effective way? I know that what I "copy", how I "copy", and the filesystem in use all make a difference in anything I try to measure. I know that I can: choose some file or folder tree; use various tools (cp, tar, rsync, etc.) to move the data; use the time command to collect details; and do this more than once.
One "benchmark" I've used in the past was to create a source tree for some application and run make clean; time make in that tree: lots of I/O of various sorts, some computation, and very repeatable. Since I'm not seeking a gold-standard benchmark, I can do what I like, but I'd like to do something that is (a) meaningful in general, for myself and others, and (b) not a reinvention of what probably already exists.
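The simplest existing tool for the raw-throughput part is dd, which prints its own MB/s figure. A sketch; the output path is a placeholder (on the real drive you would point it at a file on the USB mount and add `oflag=direct` to bypass the page cache), and the sketch writes a small temp file so it is harmless to run:

```shell
#!/bin/sh
# Sketch: repeatable raw write test with dd. On the actual drive:
#   dd if=/dev/zero of=/mnt/usb/testfile bs=1M count=256 oflag=direct
# Temp-file version, safe anywhere:
OUT=$(mktemp)
COUNT=16               # 16 MiB total at bs=1M

dd if=/dev/zero of="$OUT" bs=1M count=$COUNT conv=fsync 2>&1 | tail -n 1
# dd's final line reports bytes copied, elapsed time, and throughput.
# For the read side, drop caches first (as root):
#   echo 3 > /proc/sys/vm/drop_caches
# then:  dd if=/mnt/usb/testfile of=/dev/null bs=1M
```

`conv=fsync` makes dd flush before reporting, so the figure reflects the device rather than the cache; run it several times and average. For filesystem-level (many small files) behavior, tools like bonnie++ and fio already cover what a hand-rolled cp/tar loop would measure.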