Server :: Large File Size Causes RPC Authentication Error?
Oct 6, 2009
I think I am having a problem due to an NFS server file size limit. Is it possible I am missing a parameter in the RHEL NFS setup to handle large files? I am running an NFS server on a RHEL 5.1 machine, and an HP-UX 11.0 machine NFS-mounts that file system. The HP-UX machine executes a program that resides locally on it to process a large 35 GB data file that resides on the NFS server machine. The program on HP-UX can only read/process the first portion of the file before "RPC: Authentication error" is returned multiple times, until the program prematurely decides that it has reached end of file.
I tried recompiling the same program to run on the RHEL 5.1 NFS server and access the 35 GB file locally (on the NFS server instead of on HP-UX), and the program completed successfully, processing the whole file (about 7 hours of processing) with no "RPC: Authentication error." In addition, I have been running the NFS mount between the same machines for quite some time, but not with such large file sizes.
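For what it's worth, NFSv2 cannot address files past the 2 GB mark, so it may be worth confirming that the HP-UX side is actually mounting with NFSv3. A minimal check/remount sketch, with placeholder export and mount-point names:

Code:
# on HP-UX: show the options the mount is actually using
mount -v | grep nfs
# force NFSv3 with larger transfer sizes (paths are examples)
umount /mnt/data
mount -F nfs -o vers=3,rsize=32768,wsize=32768 rhelserver:/export/data /mnt/data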
I recently upgraded my file/media server to Fedora 11. After doing so, I can no longer copy large files to the server. The files begin to transfer, but typically after about 1 GB of the file has transferred, the transfer stalls and ultimately fails with the message:
"Error writing to file: Input/output error"
I've run out of ideas as to what could cause this problem. I have tried the following:
1. Different NFS versions: NFSv3 and NFSv4.
2. Copying the files to different physical drives on the server.
3. Copying the files from different physical drives on the client.
4. Different rsize and wsize block sizes when mounting the NFS share.
5. Copying the files via a different protocol (SSH in this case). The file transfers are always successful when I use SSH.
Regardless of what I do, the result is the same: the file transfers always fail after approximately 1 GB.
Some other notes.
1. Both the client and the server are running Fedora 11 kernel 2.6.29.5-191.fc11.x86_64
I am out of ideas. Has anyone else experienced something similar?
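One way to narrow it down might be to generate a throwaway file of known size on the client and watch the kernel logs on both ends while it copies; the paths below are placeholders:

Code:
# create a 2 GB test file and copy it across the NFS mount
dd if=/dev/zero of=/tmp/testfile bs=1M count=2048
cp /tmp/testfile /mnt/nfs/testfile &
# meanwhile, watch both machines for NFS/RPC errors
tail -f /var/log/messages | grep -i -e nfs -e rpc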
CanoScan LiDE 210 running under 10.10 on a Tosh Tecra M11-130 laptop. Currently trying out xsane to archive some paperwork in monochrome, as the bundled Simple Scan utility can only save in either colour or greyscale. The problem is that the same A4 page saved as monochrome has a file size about three times larger in Ubuntu than in Windoze.
The scan mode options are either 'Colour', 'Greyscale' or 'Lineart'. There is no 'halftone' setting available, as shown in some of the xsane manuals; I don't know whether this is significant to the issue. Xsane's main option window shows 3508 x 2480 x 1 bit for a 300 dpi A4 monochrome scan when 'lineart' is selected, but the intermediate file size is 8.3 MB instead of just over 1 MB before packing for the PDF. This is consistent with each pixel not being recorded as a 1 or a 0, but as a greyscale 11111111 or 00000000, i.e. monochrome/halftone stored in an eight-bit field. How can I tweak xsane to produce true monochrome intermediate .pnm files and saved PDFs?
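The arithmetic supports that reading: 3508 x 2480 pixels at 1 bit each is just over 1 MB, while the same pixels stored one byte apiece come to about 8.3 MB. As a possible workaround, assuming ImageMagick is installed (filenames are examples), the 8-bit intermediate can be thresholded down to a true 1-bit image before building the PDF:

Code:
# squash an 8-bit grey PNM down to 1-bit bilevel, then wrap it in a PDF
convert scan.pnm -threshold 50% -monochrome scan.pbm
convert scan.pbm -density 300 scan.pdf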
I'm new to setting up Linux servers. I've set up an Ubuntu 10.10 server along with CUPS, and I'm using Webmin to talk to the server. I have an HP PSC 1315 multifunction printer connected via USB to the server. Using the CUPS web interface I am able to get the server to detect the connected printer, and it identified the HP PSC 1310 Series drivers.
When I print a test page from the server's screen, the print job goes through OK and the size was about 5 KB.
I then set up a Samba share to allow my Windows 7 machine to use the printer. Windows 7 is able to pick up the shared printer correctly, and I used the default HP 1310 Series drivers. When I tried to send a test page to the printer, that single page ended up being 3887 KB, and a single-page Word document I printed ended up being over 7 MB.
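The usual suspect for this is double rendering: the Windows driver rasterises the page, and the CUPS filter chain then processes it again. A common approach, though only a sketch and not a verified fix for this model, is to pass Windows-submitted jobs through raw:

Code:
# /etc/samba/smb.conf -- let the Windows-side driver do all rendering
[global]
   printing = cups
   printcap name = cups
   use client driver = yes

# /etc/cups/mime.types  -- uncomment this line:
#   application/octet-stream
# /etc/cups/mime.convs  -- uncomment this line:
#   application/octet-stream   application/vnd.cups-raw   0   -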
I am running CentOS 5.5 with a 14 TB ext4 volume. We are sharing out a few sub-directories via NFS. Our customer was doing some penetration testing with our web app that writes to one of those NFS shares. We are not sure if they did something to cause the metadata to grow so large, or if it is corrupt. Here is the listing:

Code:
drwxrwxr-x 1 owner owner 470M Jun 24 18:15 temp.bad

I guess the metadata could actually be that large; however, we have been unable to perform any operations on that directory to determine whether it is just loaded with files or corrupted. We have not run an fsck on the volume because we would need to schedule downtime for our customers to do so. Has anyone come across this before?
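A directory inode really can get that big if a huge number of entries were created at some point (ext3/ext4 never shrink a directory, even after deletions). Plain ls stalls because it stats and sorts every entry; a sketch that avoids both, just to see whether entries are present:

Code:
# list raw directory entries without sorting or stat()ing them
ls -1f temp.bad | head -20
# count the entries without holding them all in memory
ls -1f temp.bad | wc -l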
I am trying to copy a 7.3 GB .iso file to an 8 GB USB stick, and I get the following error when it hits 4.0 GB:
Error while copying "xxxxxx.iso". There was an error copying the file into /media/6262-FDBB. Error splicing file: File too large

The file is to be used by a Windows user, and I'm just trying to do a simple copy, not a burn to USB or anything fancy. Using 10.04.1 LTS, AMD dual core, all latest patches.
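That error is the signature of the FAT32 filesystem most USB sticks ship with, which caps a single file at 4 GiB regardless of free space. Assuming the stick can be reformatted (the device name below is an example, and reformatting erases everything on it), NTFS keeps it Windows-readable without the limit:

Code:
# WARNING: destroys all data on the stick -- double-check the device name
sudo umount /media/6262-FDBB
sudo mkfs.ntfs -f -L USBSTICK /dev/sdX1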
I'm planning to copy a production MySQL InnoDB file from one server to another, and the file size is around 300 GB. As the file keeps changing all the time, I have to shut down the MySQL instance and copy the large data file to the other server as quickly as possible. I need to find a way to speed up the file copying... I'm wondering whether there's a way to copy the file block by block, skipping any block whose content is already the same on the destination side.
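rsync does more or less exactly this: its delta algorithm checksums blocks on both ends and only transfers the ones that differ. So a bulk copy taken while MySQL is still running can be refreshed cheaply during a short shutdown window; a sketch with placeholder host and paths:

Code:
# pass 1: bulk copy while mysql is still up (result will be inconsistent)
rsync -av --inplace --partial /var/lib/mysql/ibdata1 otherhost:/var/lib/mysql/
# pass 2: stop mysql, then run the same command again --
# only blocks that changed since pass 1 are re-sent
rsync -av --inplace --partial /var/lib/mysql/ibdata1 otherhost:/var/lib/mysql/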
When I try to run the X server using the command startx, I get the below-mentioned error:
xauth: creating new authority file /oracle/oracle10g/.serverauth.22555

Fatal server error: PAM authentication failed, cannot start X server. Perhaps you do not have console ownership?
I'm setting up an HTPC system (Zotac IONITX-F based) on a minimal install of Ubuntu 9.10, with no GUI other than XBMC. It's connected to my router (D-Link DIR-615) over a wifi connection configured for a static IP (ath9k driver), with the following /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback
# The primary network interface
#auto eth0
[code]....
Network is fine, and the Samba share to the media directory works, until I try to upload a large file to it from my desktop system. Then it transfers a couple of percent at a really nice speed, but then it stalls and the box becomes unpingable (Destination Host Unreachable), even after cancelling the transfer, requiring a restart of the network.
Same thing when I scp the file from my desktop system to the HTPC; same thing when I ssh into the HTPC and scp the file from there. Occasionally (rarely) the file does pass through, but most of the time the problem repeats itself. Transfers of small text files cause no problems, and the same goes for the fanart downloads done by XBMC. I tried the solution proposed in this thread and set the mtu to 800 in the interfaces file, but the problem persists.
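Since small transfers survive and sustained load kills the link, the wireless driver itself is a suspect. Two things that may be worth ruling out, as a sketch (the interface name is an example):

Code:
# disable wifi power management for this session
sudo iwconfig wlan0 power off
# watch for ath9k errors at the moment the box drops off the network
tail -f /var/log/kern.log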
I'm new to UNIX & trying to access the server using SSH, but I encounter this error: PAM Authentication Error. I edited /etc/ssh/sshd_config & set PermitRootLogin to yes, but that didn't work. I ran ps -ef | grep sshd & it says "Process environment requires procfs(5)". I don't know what to do now. What I want is to access it by SSH, but I get Access Denied.
I am trying to burn a Mac OS X 10.5 install disc from a 6.7 GB .dmg disk image. I thought I would be using two DVD-R 4.7 GB discs for this burn, hoping that when the first was full it would ask for another to finish the burn. Instead I get the message that the DVD will not hold the chosen .dmg file.
Can I do anything besides buy a dual layer DVD that would hold the whole file?
On Ubuntu Server 10.10, relaying SMTP with authentication via Postfix, I keep getting "535: Incorrect authentication data". I'm sure my username and password are correct. Here's how I set up Postfix: I created a file called smarthosts.conf in my /etc/postfix/ directory that contains the following:
[Code].....
My server uses plain-text authentication on port 25. I would like to use security like SSL, but this particular server is unsecured.
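For comparison, a typical Postfix smarthost setup keeps the credentials in a postmap'd table; this is a sketch of the usual shape (hostname and credentials are placeholders), in case main.cf isn't pointing at the password file:

Code:
# /etc/postfix/main.cf
relayhost = [smtp.example.com]:25
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous

# /etc/postfix/sasl_passwd -- then run: postmap /etc/postfix/sasl_passwd
[smtp.example.com]:25    username:password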
I have set up an Ubuntu server to handle DansGuardian for protection of the children. Next I need to set up a centralized file server and some kind of authentication method.
We are dual-booting the computers just now, since we need to use "Rosetta Stone" language software and they will not release a certain plugin for Linux, according to our assigned help person. We also use pure Windows XP in some classrooms for now, and will do so until the school's children get used to Ubuntu.
So, what is the best authentication method for a mixed environment? Where might I find an Ubuntu "howto" on the method?
What is the best way to set up a file server? Howto? Can the box running DansGuardian also be the authentication box and file server? (It is our newest box, only two years old, and has a large hard drive.)
When I installed openSUSE 11.2 64-bit (KDE), the installer set the root partition to 20 GB by default. That seemed unnecessarily large, so I reduced it to 16 GB. I then completed the install (basically a default KDE install minus games & educational stuff) and still had more than 8 GB free. I'm aware that these days hard drive storage space is quite cheap, but it's not so cheap for me, as I have an SSD. Would it not be reasonable to reduce the default root partition size to 12 GB, or perhaps vary it according to the software package load selected?
I have 24" dual monitors with 1920x1080 resolution on both of them. Consequently the text appears so small. I use the following text-intensive applications frequently:
Web browser (Google Chrome)
IDE (Komodo)
Terminal (Gnome Terminal)
Email (Thunderbird)
I can configure the text size in the IDE, Terminal and Email. But for Chrome, it is not a good idea to set a proportional font size, because often one wants the entire site (not just the proportional fonts) to be zoomed. So I am asking: Is it possible to increase the DPI in Ubuntu (much like on Windows) so as to increase the text size across all apps? Or is it possible to set a permanent 'zoom' in Google Chrome, using a third-party extension maybe?
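One route that may work, as a sketch (120 is an example value, not a recommendation), is forcing the Xft DPI that GTK/Qt applications, and Chrome, consult for font rendering:

Code:
# ~/.Xresources -- picked up at login, or apply immediately with:
#   xrdb -merge ~/.Xresources
Xft.dpi: 120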
I'm trying to copy a 7.8 GB tar.gz file to an external hard drive via the command line. It gets to an even 4 GB and stops, giving an error that says "file size limit exceeded." I edited /etc/security/limits.conf to look like "root hard fsize 10024000", but that didn't do anything at all. Yes, I am copying this as root.
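Stopping at exactly 4 GB points at the drive's filesystem rather than at any ulimit: FAT32, the usual factory format for external drives, cannot hold a file of 4 GiB or more, and limits.conf has no say in that. A quick check, with a placeholder mount point:

Code:
# show the filesystem type of the external drive
df -T /media/external
# "vfat" in the Type column would explain the 4 GB ceiling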
I made a system on CentOS 5.5 using Tomcat 6 and PostgreSQL, but I couldn't get into my system. There is an error, and I don't understand what kind of error this is:

Code:
JDBCExceptionReporter.logExceptions(100) | SQL Error: 0, SQLState: null
JDBCExceptionReporter.logExceptions(101) | Cannot create PoolableConnectionFactory (FATAL: Ident authentication failed for user "postgres")

"postgres" is the username. Does anybody know anything about this error message?
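"Ident authentication failed" usually means pg_hba.conf requires the OS username to match the database user for TCP connections; switching that line to password (md5) authentication is the common fix. A sketch, using the default PostgreSQL data directory on CentOS 5 (adjust to yours):

Code:
# /var/lib/pgsql/data/pg_hba.conf
# TYPE  DATABASE  USER  CIDR-ADDRESS   METHOD
host    all       all   127.0.0.1/32   md5
# then reload: service postgresql reload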
Our system setup: a Windows Server 2008 domain controller. We have installed Samba on Ubuntu 11.04, with ADS authentication using winbind. From Linux I am able to set access restrictions on a Samba share folder for Windows ADS users, and all of that works fine from the Linux side. Now I want to grant access to a domain user from the MS Windows side: what are the file permissions, owner, etc.? Has anyone tried this concept? Please point me to any document or example.
I am using OS 11.0. Every time I boot my laptop (Dell Inspiron 9300, ATI M300 video), I get the desktop display at 1920x1200. This is too large for my default. I use KRandRTray to resize back to 1024x768. How can I set 1024x768 as the default but still have the option to go to 1920x1200?
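One way to get a persistent default without losing the higher mode, as a sketch (the output name varies per machine; xrandr -q lists yours, and LVDS below is just an example), is to request the mode at session start, e.g. from a KDE autostart script:

Code:
# list outputs and available modes
xrandr -q
# make 1024x768 the session default
xrandr --output LVDS --mode 1024x768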
I downloaded pdftk 1.41 and installed it on Red Hat Enterprise Linux 4, 32-bit. I am primarily using this utility to uncompress PDF files, to remove the 'Flate' compression. It works well with small PDFs. However, when I use it to uncompress PDF files of 35 MB or more, the uncompressed output file grows up to 2 GB and then the uncompression fails with the error "File size limit exceeded". I can concatenate two files with an output file up to 3 GB in size, so 2 GB is not a limitation at the Linux level.
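A 2 GB ceiling on one tool, while the same system writes 3 GB elsewhere, is the classic signature of a 32-bit binary built without large-file support (_FILE_OFFSET_BITS=64), so the limit likely lives in the pdftk build rather than in any system setting. Two quick checks, as a sketch:

Code:
# confirm no per-process file-size limit is in the way
ulimit -f
# check whether the pdftk binary is 32-bit
file $(which pdftk)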
I am using RHEL 5 and have installed & configured Nagios. After logging in to the Nagios server, when I go to Hosts & Services I get this error: "It appears as though you do not have permission to view information for any of the hosts you requested. If you believe this is an error, check the HTTP server authentication requirements for accessing this CGI and check the authorization options in your CGI configuration file."
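That message usually means the username Apache authenticated is missing from Nagios's CGI authorization lists. Assuming the web login is nagiosadmin (substitute your own), the relevant cgi.cfg entries look like this:

Code:
# /etc/nagios/cgi.cfg
use_authentication=1
authorized_for_all_hosts=nagiosadmin
authorized_for_all_services=nagiosadmin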
I installed Fedora 15/Gnome 3 because I liked the Universal Access Settings widget for controlling the appearance of my living room computer attached to my TV. It should (when it becomes more stable) make it easy to zoom in on the screen when I'm on the couch. There is also a Large Text setting that allows me to toggle between normal text size and perhaps 125% text size.
I'd like to set that value to about 200%, but don't see how to do it. dconf-editor didn't seem to have a way. gnome-tweak-tool has a way to make all fonts bigger or smaller, but I want to easily switch between normal text size when I'm sitting close and large text from the UAS; screwing around with gnome-tweak-tool would require me to be up close. It looks like UAS is controlled by /usr/share/gnome-control-center/ui/uap.ui, but it is a wickedly complex XML file & I don't know what to edit. Is there a per-user way to change the behavior?
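The Large Text toggle maps to a per-user GSettings key, so it may be possible to bypass the .ui file entirely and flip the factor from a script; 2.0 below stands in for the roughly 200% wanted, and 1.0 restores normal:

Code:
# large text at about 200%
gsettings set org.gnome.desktop.interface text-scaling-factor 2.0
# back to normal
gsettings set org.gnome.desktop.interface text-scaling-factor 1.0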
I am doing an analysis of Postfix, qmail and Sendmail, comparing their performance. I need to send mails of size 10 MB, 50 MB and 75 MB and analyze the time taken to send each mail to different users. I first used telnet, but attaching a file is very hard there. Then I went for Thunderbird, but the file attachment size is just 5 MB. So is there a way to send such huge file sizes?
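Scripting the send keeps the timing measurable. One option, assuming mutt is installed (the address and filename are placeholders), is to drive it from the shell and wrap it in time:

Code:
# generate a 50 MB payload and time the send
dd if=/dev/urandom of=/tmp/payload.bin bs=1M count=50
time mutt -s "perf test 50MB" -a /tmp/payload.bin -- user@example.com < /dev/null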