Server :: JFS On Large LVM Volume (35TB) Fails
Apr 25, 2010
I am running Debian Lenny 2.6.26-2-amd64 and I have the following setup:
Three RAID6 volumes, each 19TB in size (md0, md1, md2).
I would like to join the three of them into one large LV (57TB in total) formatted with a JFS filesystem.
I installed:
jfs_mkfs version 1.1.14, 06-Apr-2009
lvm version
LVM version: 2.02.39 (2008-06-27)
Library version: 1.02.27 (2008-06-25)
Driver version: 4.13.0
Prerequisite: the md devices are assembled and have synced:
Code:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] .....
I ran jfs_mkfs on each of the three md devices and mounted them to run some checks; they worked without a problem, so I am pretty sure the drives/RAIDs are OK. Then I created the VG "pod" with the three devices as members: .....
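(Presumably along these lines; reconstructed from memory, not the original commands:)
Code:
pvcreate /dev/md0 /dev/md1 /dev/md2
vgcreate pod /dev/md0 /dev/md1 /dev/md2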
Then I created a 40TB LV for testing
Code:
lvcreate -L 40T pod
which gave me this
Code:
lvscan
ACTIVE '/dev/pod/lvol0' [40.00 TB] inherit
Then I went on to format the volume using
Code:
jfs_mkfs /dev/pod/lvol0
Still - everything fine.
When trying to check the partition with jfs_fsck I get:
Code:
ujfs_rw_diskblocks: disk_count is 0
Unrecoverable error writing M to /dev/pod/lvol0. Cannot Continue.
The really odd thing is that I don't seem to get the error with a 25TB LV; there, everything looks fine. There is nothing in syslog that would point to the source of this error. Also, if I mount the 40TB partition, everything is fine, but when I try to write to it, the whole system hangs and there is no way to recover.
View 9 Replies
Feb 16, 2011
We had to reboot a server in the middle of a production day due to 99% iowait (lots of processes in deep sleep waiting for disk iops). That's never happened to us before. It had been 363 days since the last fsck, so it started automatically on reboot. It hung at 4.8% on a 2TB LVM2 volume for about an hour. I killed the fsck and rebooted the server. The second time, it went past that point and is currently at about 62%. First, what causes e2fsck to hang like that? Second, is there any danger in killing e2fsck, rebooting, and starting it again?
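For context, the schedule that triggered the automatic fsck can be read without touching anything (device name hypothetical):
Code:
tune2fs -l /dev/mapper/vg0-lv0 | grep -i -e 'mount count' -e interval -e 'next check'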
View 1 Replies
Dec 8, 2009
We're load testing some of our larger servers (16GB+ RAM), and when memory starts to run low they trigger the OOM killer instead of swapping. I've checked swapon -s (which says we're using 0 bytes out of 16GB of swap), I've checked swappiness (60), and I've tried upping the swap to 32GB, all to no avail. If we pull some RAM and configure the box with 8GB of physical RAM and 16 (or more) GB of swap, sure enough it dips into swap and is more stable than a 16GB box with 16 or 32GB of swap.
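For the record, the read-only checks I've been running (sysctl names as I understand them):
Code:
swapon -s                                                      # confirm swap is active
sysctl vm.swappiness vm.overcommit_memory vm.min_free_kbytes   # current VM tunables
dmesg | grep -i -e oom -e "out of memory"                      # OOM-killer traces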
View 6 Replies
Dec 22, 2010
I'm asking for advice about setting up a large volume: I have two 1TB disks and I want to merge them into a single volume/partition. I am in doubt whether to set up LVM, a RAID0 device, or both. I know that RAID0 has no redundancy, but I will keep a backup on other media, so I can take advantage of striping for I/O performance. On the other hand, LVM lets me easily manage and expand the volume in the near future. Am I correct? Anyway, I don't know if I can even set up both, and in which order; first LVM, then RAID, I suppose (see the sketch below).
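(For what it's worth, everything I've read layers it the other way around: RAID first, then LVM on top of the md device. A rough sketch, with hypothetical device names:)
Code:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0                        # the md device becomes the LVM physical volume
vgcreate vg_data /dev/md0
lvcreate -l 100%FREE -n lv_data vg_data
mkfs.ext3 /dev/vg_data/lv_data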
View 8 Replies
Dec 12, 2010
I am concerned about my tweeters in relation to the signal coming from the computer. I have the machine's LINE OUT fed into my preamplifier (a hardware audio component) all the time. The maximum levels of the tuner, tape recorder, turntable, etcetera are all "equalized"; that is, they all more or less deliver the same output level to the preamp. The difference in level between one CDDA and another is always minimal. The same goes for vinyl (not quite so minimal). As for radio broadcasts, one station may be transmitting with great power and make the differences larger, but still no harm to the tweeters, relatively speaking. Mine are 50W four-way loudspeaker systems (forgive the word, I used to call them baffles).
The power amplifier is 30W per channel so, theoretically, it cannot damage the speakers. But the preamp (Yamaha C-2) can deliver a very large signal, and when that signal is unduly large, the woofer and mid-range drivers won't suffer, but the enormous resulting distortion can damage the tweeters. Some audio files have the signal recorded at very high levels, and I could launch aplayer, play or mplayer, or any GUI player, or receive an audible email notification, while amixer's output controls are near the top, and put my tweeters in danger. I think the core of the problem is the huge variation in level between different audio files.
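(What I'm considering as a stopgap is simply capping the mixer before anything plays; a sketch, with an arbitrary safety margin:)
Code:
amixer set Master 70%    # cap the output level before any player starts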
View 14 Replies
Feb 3, 2010
I am programmatically doing a large FTP upload (about 5GB of 2-3MB files) from my Windows 7 machine to a ProFTPD server on my Ubuntu box. When the program runs, transfer rates are huge at first, but as the run continues they drop off significantly and eventually slow to a halt, resulting in a:
Quote:
Read Timed Out
error.
The rest of the uploads fail due to Socket Read Errors.
Why is the transfer failing like this?
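The first things I intend to look at are the server-side timeouts in proftpd.conf (directive names per the ProFTPD documentation; the values here are only guesses):
Code:
# /etc/proftpd/proftpd.conf
TimeoutIdle        1200
TimeoutNoTransfer  1200
TimeoutStalled     3600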
View 2 Replies
Oct 15, 2010
My MySQL replication slave failed while reading a large table from the master. The information from error.log:
Error 'Unknown table engine 'InnoDB'' on query. Default database: 'test'. Query:
I am sure that it is a big table.
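What I plan to check on the slave is whether InnoDB is actually enabled there at all (read-only checks; the config path may differ on your system):
Code:
mysql -e "SHOW ENGINES"            # is InnoDB listed as YES/DEFAULT?
grep -i skip-innodb /etc/my.cnf    # a leftover skip-innodb would disable the engine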
View 11 Replies
Feb 3, 2011
We have two servers: one is the webserver and the other is the MySQL server.
When transferring a 2GB file from the webserver to the MySQL server, the webserver's connection to the MySQL DB server dies completely, and we need to restart the MySQL process for it to come back online.
During this connection downtime, phpMyAdmin on the MySQL server itself shows no problem running queries, etc.
View 2 Replies
Jan 21, 2011
I'm new to setting up Linux servers. I've set up an Ubuntu 10.10 server with CUPS, and I'm using Webmin to talk to it. I have an HP PSC 1315 multifunction printer connected via USB to the server. Using the CUPS web interface I got the server to detect the connected printer, and it identified the HP PSC 1310 Series drivers.
When I print a test page from the server itself, the job goes through OK and its size is about 5KB.
I then set up a Samba share to let my Windows 7 machine use the printer. Windows 7 picks up the shared printer correctly, and I used the default HP 1310 Series drivers. When I sent a test page to the printer, that single page ended up being 3,887KB, and a single-page Word document I printed ended up over 7MB.
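One thing I intend to try is passing the Windows driver's output through CUPS unfiltered, so the job isn't rendered twice (smb.conf parameters as I understand them from the Samba docs):
Code:
# smb.conf, in [global] or the printer share
printing = cups
printcap name = cups
cups options = raw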
View 4 Replies
Mar 29, 2010
I have a 14TB RAID whose filesystem has gone read-only, and I am trying to run e2fsck -B -p -C -v -y /dev/sdb1. It gets going, but then fails and says something like "bad block/inode" or "fails to transfer". Is there a way I can get this to run successfully? This is a production storage server, and it's critical.
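Two things I was going to try, sketched below: first double-check the flags (if I read the e2fsck man page correctly, -B and -C each expect an argument, and -p conflicts with -y), then retry against a backup superblock:
Code:
dumpe2fs /dev/sdb1 | grep -i superblock   # list primary and backup superblock locations
e2fsck -b 32768 -y /dev/sdb1              # 32768 is a typical backup location, not a given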
View 13 Replies
Nov 28, 2010
I'm trying to design an inexpensive large-scale DNS server, but I can't find any metrics or methods on which to base scalability. Can anyone offer information on building a stable, dedicated DNS server that can scale well?
View 8 Replies
Dec 29, 2010
I have Ubuntu 10.10 (kernel 2.6.35-22-generic) installed. struct stat StatBuff;
[Code]...
I have mounted a Windows share on /mnt. When I pass any directory within /mnt/ to the stat function, it fails with errno 75; perror shows "Value too large for defined data type". Example 1 fails but Example 2 works fine.
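The two fixes I've seen suggested for errno 75 (EOVERFLOW) on network mounts, sketched here with hypothetical file and share names:
Code:
# build with 64-bit file offsets so stat() can represent the server's values
gcc -D_FILE_OFFSET_BITS=64 example.c -o example
# or, for CIFS mounts, stop using the server-supplied 64-bit inode numbers
mount -t cifs //winbox/share /mnt -o noserverino,username=guest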
View 7 Replies
May 25, 2010
I've noticed my ldap.log file has grown to 30GB and has filled / to 86%.
1. I want to know: if I zero out the logfile, will this cause any issues while ldap is up and running?
# > ldap.log
2. I will be changing the log file settings in slapd.conf and will then need to restart slapd. Will restarting impact existing connections at all?
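For the longer term I'm leaning towards logrotate with copytruncate, which truncates in place so the daemon's open file handle stays valid (a sketch; the path and schedule are just examples):
Code:
# /etc/logrotate.d/ldap
/var/log/ldap.log {
    weekly
    rotate 4
    compress
    copytruncate    # copy then truncate in place; no rename, so slapd keeps logging
}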
View 2 Replies
Feb 3, 2011
I've got a server running CentOS 5.5. I used the automated iptables config tool included in the operating system to allow traffic for vsftpd, Apache and UnrealIRCd. When I send large files to FTP, even from the local network, it works fine for a while and then completely times out... on everything. IRC disconnects, FTP can't find it, and when I try to ping it I get "Reply from 10.1.10.134: Destination host unreachable", where .134 is the host address for the Win7 box I'm pinging from. This is especially frustrating as it's a headless server, and as I can't SSH into it to reboot, I'm forced to resort to the reset switch on the front, which I really don't like doing.
Edit: the timeouts are global, across all machines both on the local network and users connecting in from outside.
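One theory I want to rule out is the connection-tracking table filling up and dropping everything; the checks I have in mind (the /proc path is from EL5's ip_conntrack module, and the new limit is a guess):
Code:
dmesg | grep -i conntrack                              # look for "table full, dropping packet"
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max      # current limit
sysctl -w net.ipv4.netfilter.ip_conntrack_max=131072   # try a higher ceiling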
View 4 Replies
Jun 4, 2011
I use Red Hat Linux ES 5 and startx to start the X desktop. When I use Virtual Machine Manager, the application window is too large, so its bottom part does not show, and I cannot scroll down to see it. As a result I do not know what buttons are at the bottom of the window. What can I do? I have already set the display to 800x640.
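As a workaround I'm going to try dragging the window above the screen edge (most X window managers move a window if you hold Alt and drag anywhere inside it), or forcing a larger mode for the session (the resolution is just an example):
Code:
xrandr -s 1024x768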
View 1 Replies
Feb 25, 2010
We're running
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.4 (Tikanga)
$ uname -r
2.6.18-164.11.1.el5
It hosts an Apache/2.2.3 web server. We also run apache-tomcat-5.5.23. Most of our programs are mod_perl. Sometimes our users input over-sized data sets, or queries that generate too much output. (I realize that we should try to prevent them from doing that, but right now I'm looking for a more general solution.)
When a large job runs it can 'freeze' our system. The system becomes unresponsive to everything, including command line commands. Sometimes it unfreezes after a while. Once, in this situation I was able to create a high-priority shell. ps reported:
[Code]...
Observing the machine, I see at least one very busy disk. I suspect that some high-priority system process (perhaps kswapd) is using all the CPUs, preventing anything else from running. Unfortunately, I cannot find much info on kswapd or on debugging this problem.
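Next time it happens I'll try to capture the paging activity along these lines (read-only diagnostics; sar requires the sysstat package):
Code:
vmstat 5          # sustained non-zero si/so columns would confirm swap thrashing
top -b -n 1       # one batch-mode snapshot, usable even over a sluggish session
sar -B 5 5        # paging statistics at 5-second intervals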
View 3 Replies
May 21, 2011
I'm looking for a script I can use to install a large set of packages on Ubuntu. Typically, when I'm setting up a new server, I have to run "apt-get install" for a bunch of packages. I'd like a script that I can add/remove packages from and that says yes to all the dependency questions.
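A minimal sketch of what I have in mind (the package list is only an example):
Code:
#!/bin/sh
set -e
PACKAGES="apache2 php5 mysql-server vim"    # edit this list per server
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y $PACKAGES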
View 1 Replies
Feb 18, 2011
Notice how big and wide-spaced the fonts in the Clementine playlist are, and how they look on the appmenu (where my mouse pointer is). This is not because Clementine is Qt4; I've got the same problem with Chrome, Opera, etc. I had been messing with system-settings (the KDE settings tool) a day before the fonts became this wide-spaced, in order to make my KDE apps look more native on my GNOME desktop, but I haven't touched the font settings there.
View 9 Replies
Feb 6, 2011
Every time I attempt to transfer a large file (4GB) via any protocol, my server restarts. On the rare occasions that it doesn't restart, it spits out a few error messages saying "local_softirq_pending 08" and then promptly freezes. Small files transfer fine.
Relevant information:
Ubuntu server 10.10
Four hard drives in RAID 5 configuration
CPU/HD temperatures are within normal range
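One experiment I plan to run is ruling out NIC offload bugs, since the errors only appear under sustained transfer load (interface name hypothetical):
Code:
ethtool -k eth0                  # list current offload settings
ethtool -K eth0 tso off gso off  # retry the transfer with segmentation offloads disabled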
View 7 Replies
Oct 6, 2009
I think I am having a problem due to an NFS server file size limit. Is it possible I am missing a parameter in the RHEL NFS setup for handling large files? I am running an NFS server on a RHEL 5.1 machine, and an HP-UX 11.0 machine NFS-mounts that file system. The HP-UX machine executes a locally resident program to process a large 35GB data file that lives on the NFS server. The program on HP-UX can only read/process the first portion of the file; then "RPC: Authentication error" is returned repeatedly until the program prematurely decides that it has reached end of file.
I tried recompiling the same program to run on the RHEL 5.1 NFS server and access the 35GB file locally (on the NFS server instead of HP-UX), and it completed successfully, processing the whole file (about 7 hours of processing) with no "RPC: Authentication error". In addition, I have been running the NFS mount between the same machines for quite some time, but not with such large file sizes.
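On the HP-UX side I'm considering pinning down the mount options rather than trusting the negotiated defaults; something like this (option names from the HP-UX mount_nfs man page as I remember it, so treat this as a sketch with hypothetical names):
Code:
mount -F nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 rhelserver:/export /mnt/data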
View 3 Replies
Apr 22, 2010
I'm planning to copy a production MySQL InnoDB file from one server to another, and the file size is around 300GB. As the file keeps changing all the time, I have to shut down the MySQL instance and copy the large data file to the other server as quickly as possible. I have to find a way to speed up the copying... I'm wondering whether there's a way to copy the file block by block, and skip any destination-side block that already has the same content.
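From what I've read, rsync's delta-transfer algorithm does exactly this: it checksums blocks on both sides and only sends the ones that differ. A sketch (paths and hostname hypothetical; mysqld must be stopped first):
Code:
rsync -av --inplace --progress /var/lib/mysql/ibdata1 otherserver:/var/lib/mysql/
# --inplace rewrites the existing destination file block by block
# instead of rebuilding it in a temporary copy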
View 4 Replies
Mar 16, 2010
I have a cPanel server (not sure if that matters) and I want to transfer to another cPanel server. The size is about 3TB; I don't have enough room on the hard drive to zip everything up and transfer it that way, and cPanel can't handle such a large migration. I've been searching the net for an answer, but everyone just says to back up and then migrate. What's best: tar, rsync, or something else?
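The approach I keep coming back to is streaming the data over ssh so nothing has to be staged on the already-full disk (hostnames and paths hypothetical):
Code:
tar cf - /home | ssh newserver 'tar xf - -C /'
# or, restartable and incremental, preserving permissions and hardlinks:
rsync -aH --progress /home/ newserver:/home/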
View 3 Replies
Mar 8, 2010
I have been an RPM-based distribution guy for a long time (Red Hat, CentOS, SUSE). We have a large shared and dedicated web environment that is starting to require more and more Linux. I am in a position to switch gears and move to Ubuntu if it makes sense. Things that are important to me are:
1. ease of deployment (both servers and websites themselves)
2. patch management
3. documentation
View 2 Replies
Jun 18, 2010
I have Cygwin on Windows XP running rsync to a remote Ubuntu server over SSH on ADSL. My data set is about 20GB! But rsync backs up incrementally, so after the first backup the process should be relatively quick. Over ADSL, though, the first backup would take far too long, so I was thinking of doing it by copying the files to an external hard drive, attaching the drive to the remote server, and copying the files across there. The idea is that rsync will pick up the files as if it had created them in the first instance, and the incremental backups will then carry on from there.
Does anyone have any experience with this, and/or can anyone provide advice? The external HD is FAT-32, which is okay with Windows and should be okay with Ubuntu? In XP, right-click copy and then paste keeps the file dates intact on the external HD; is this enough to get rsync going incrementally?
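The one wrinkle I know of is FAT-32's coarse timestamps (2-second resolution, plus a 4GB per-file cap); the rsync documentation suggests --modify-window for exactly this. A sketch of the follow-up run (paths hypothetical):
Code:
rsync -rtv --modify-window=1 /cygdrive/e/data/ user@server:/backup/data/
# --modify-window=1 treats mtimes within 1 second as equal, absorbing
# the precision lost while the files sat on the FAT-32 drive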
View 1 Replies
Feb 28, 2011
I am learning about Linux memory and hugepages, and my understanding is that page tables are basically memory that manages memory. I thought I'd experiment with the subject, and wrote a very small C program [URL] that basically just eats 20 GB of memory. The idea was to use this small C program to see how big the page table gets when handling large areas of memory without hugepages. After running the program on my RHEL 5 server I was expecting the page table to be huge, but found that it was only about 43 MB. The page size on my RHEL box is 4 kB. Why am I not getting the major page-table size I was expecting?
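Working through the numbers, though, ~43 MB may be just what it should be: on x86_64 each 4 kB page costs an 8-byte page-table entry, so mapping 20 GB takes roughly 40 MB of last-level page tables (the small remainder would be upper-level tables and overhead):
Code:
# 20 GB / 4 kB per page = 5242880 PTEs; at 8 bytes each that is ~40 MB
echo $(( 20 * 1024 * 1024 / 4 * 8 / 1024 ))   # prints 40960 (kB), i.e. 40 MB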
View 2 Replies
Feb 15, 2011
We've been trying to become a bit more serious about backup. It seems the better way to do MySQL backup is to use the binlog. However, that binlog is huge! We seem to produce something like 10GB per month. I'd like to copy the backup somewhere off the server, as I don't feel there is much to be gained by just copying it to another location on the same server. I recently made a full backup which, after compression, amounted to 2.5GB and took me 6.5 hours to copy to my own computer... so that solution doesn't seem practical for the binlog backup. Should we rent another server somewhere? Is it possible to find a server like that really cheap? Or is there some other solution? What are other people's MySQL backup practices?
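For reference, the rotation scheme I'm considering: a periodic full dump that also rotates the binlog, then purging binlogs older than the last full (statement syntax per the MySQL manual; the retention window is a guess):
Code:
mysqldump --all-databases --single-transaction --flush-logs --master-data=2 \
  | gzip > /backup/full-$(date +%F).sql.gz
mysql -e "PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 7 DAY)"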
View 8 Replies
Jul 6, 2010
I have a test RHEL5 box sitting in a brand new Dell blade rack on a PowerEdge M610, with a lovely Emulex OCm10102-F-M FCoE card connected to a Cisco Nexus 5000 switch. The whole setup is extremely new (the cards only recently became available for purchase). We've finally worked with Emulex to get the cards functional, and we are ready to do some stress testing of the SAN. My question now is: is there a good tool I can use to generate a large amount of traffic to a LUN? The Wintel team used a Windows-only tool that showed an average of 6 gigs/second throughput, so I need something that can generate very large files and simulate a consistent throughput to the LUN. I found iozone, but I'm having a devil of a time with it.
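For the record, the kind of invocation I've been attempting, plus a cruder dd sanity check (sizes and paths are just examples):
Code:
# sequential write (test 0) and read (test 1), 1 MB records, 64 GB file
iozone -i 0 -i 1 -r 1m -s 64g -f /mnt/lun/iozone.tmp
# crude sustained-write check that bypasses the page cache
dd if=/dev/zero of=/mnt/lun/bigfile bs=1M count=65536 oflag=direct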
View 2 Replies
Apr 6, 2009
I am attempting to upgrade a system from 4.7 to 5.2 using a DVD drive (now) attached to the onboard IDE. Originally I had tried using a remote NFS image and a USB stick, but I thought maybe there was a problem with the image. I can get up to the point in the installation of selecting the keyboard for the system, and then it freezes and never goes any further. It doesn't appear to be a kernel panic, since I can still switch between consoles.
I've got an MSI K9NGM2-FID with 14 drives in it. It serves as a file server for our backup server. It's got a secondary 4-port Silicon Image SII 3114 SATA card using the sata_sil module, and an old IDE Promise FastTrak TX2000. Technically I could have 16 drives, but the 750W PSU is walking a fine line on tripping its self-breaker with the 14 drives and 7 fans. I would like to NOT have to disconnect all of this to do the upgrade.
I thought maybe running the install with the "noprobe" option would help, so that it didn't detect and load the modules for the Silicon Image or Promise cards and detect all of the drives, but it still gets stuck on the step after selecting the keyboard. The installation info console and the dmesg console don't really provide any useful information. The installation console says:
INFO : moving (1) to step welcome
INFO : moving (1) to step language
INFO : moving (1) to step keyboard
INFO : moving (1) to step findrootparts
And the last lines of the dmesg console says:
<6>device-mapper: multipath: version 1.0.5 loaded
<6>device-mapper: multipath round-robin: version 1.0.0 loaded
<6>device-mapper: multipath emc: version 0.0.3 loaded
Is there a hidden "debug" option that will turn on a lot of extra logging?
View 7 Replies
Jun 24, 2011
I am running CentOS 5.5 with a 14TB ext4 volume. We are sharing out a few sub-directories via NFS. Our customer was doing some penetration testing with our web app, which writes to one of those NFS shares. We are not sure if they did something to cause the directory metadata to grow this large, or if it is corrupt. Here is the listing:
Code:
drwxrwxr-x 1 owner owner 470M Jun 24 18:15 temp.bad
I guess the metadata could actually be that large; however, we have been unable to perform any operations on that directory to determine whether it is just loaded with files or corrupted. We have not run an fsck on the volume because we would need to schedule downtime for our customers to do so. Has anyone come across this before?
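One relatively gentle probe I have in mind: count the directory entries without sorting or stat()ing them, which should stream even through a pathologically large directory (path hypothetical):
Code:
ls -1f /exports/data/temp.bad | wc -l   # -f disables sorting (and implies -a)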
View 2 Replies
Feb 3, 2010
I'm setting up an HTPC system (Zotac IONITX-F based) on a minimal install of Ubuntu 9.10, with no GUI other than XBMC. It's connected to my router (D-Link DIR-615) over a wifi connection configured for a static IP (ath9k driver), with the following /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback
# The primary network interface
#auto eth0
[code]....
The network is fine and the Samba share to the media directory works, until I try to upload a large file to it from my desktop system. The transfer runs for a couple of percent at a really nice speed, but then it stalls and the box becomes unpingable (Destination Host Unreachable), even after canceling the transfer, requiring a restart of the network.
The same thing happens when I scp the file from my desktop system to the HTPC, and when I ssh into the HTPC and scp the file from there. Occasionally (rarely) the file does go through, but most of the time the problem repeats itself. Transfers of small text files cause no problems, and the same goes for the fanart downloads done by XBMC. I tried the solution proposed in this thread and set the mtu to 800 in the interfaces file, but the problem persists.
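Next on my list is ruling out wifi power management, which I've seen blamed for stalls like this (interface name hypothetical):
Code:
iwconfig wlan0 power off   # disable power saving for the session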
View 1 Replies