Where 'xxx' is a date/time string down to the second.
Each subdir entry contains a number of files, depending on load at the time.
What we see happening is that a subdir will appear empty from a client (ls -la shows only . and ..), yet an rmdir will fail with a 'directory not empty' error. This happens from all 7 clients (a mix of CentOS 4 and 5). On the server, however, the files are visible, and if we 'touch' the files there, the clients will then pick them up and process them.
It doesn't appear that waiting any amount of time will make the files visible (we've waited 8 hours while testing).
We've tried different mount options, NFSv4, etc. Nothing got rid of the issue. Switching one server to CIFS, however, solved it, so it appears to be some bug in NFS.
The problem appears to be intermittent and random; we can go hours without seeing it, or only minutes. I'd say it affects far less than 1% of the files written.
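One avenue worth testing (a sketch, assuming the stale listings come from client-side attribute and directory caching rather than a server fault) is to mount with caching dialed down and see whether the phantom-empty directories still occur. Here server:/export is a placeholder for the real export:

    # disable attribute caching entirely (slower, but a useful diagnostic)
    mount -t nfs -o noac server:/export /mnt/export
    # or just shorten the attribute cache lifetime to 1 second
    mount -t nfs -o actimeo=1 server:/export /mnt/export

If noac makes the problem vanish, that points at cache coherence between clients and server rather than data loss.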
I have tftp-server running on CentOS 5. Clients on the same subnet as the server are able to get and put without problems, but I have a client across the internet that is having trouble getting files from my tftp server. A tcpdump reveals that the client requests the same file over and over again, and in /var/log/messages I see the following error repeated until the client finally gives up.
localhost in.tftpd[12727]: tftpd: read: No route to host
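For what it's worth, tftpd answers the initial request from a brand-new ephemeral UDP port, so any stateful firewall between the hosts has to understand TFTP to let the reply back through; 'No route to host' on the server side is typically the resulting ICMP unreachable. A sketch for a CentOS 5 era firewall (the rule is generic, not taken from the post):

    # load the TFTP connection-tracking helper so the data port counts as RELATED
    modprobe ip_conntrack_tftp
    # then a standard stateful accept rule covers the reply traffic
    iptables -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

On newer kernels the module is named nf_conntrack_tftp. The same helper needs to exist on whatever firewall sits in front of the remote client.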
Yes, I know this is not good practice, and this is only a short-term solution. I have a server with a web file server daemon running internally as root, so all files it transfers/creates are owned by uid/gid 0:0. This is fine for the daemon, but I would like to manage those files from another workstation - actually a few workstations on a very limited LAN subnet - through NFS. How would it be possible to have users from a certain subnet mount NFS with root read/write abilities? I have seen the anonuid/anongid options (for the /etc/exports file), but I'm not sure that's the right way to go.
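For the record, the usual knob for this is no_root_squash in /etc/exports, restricted to the trusted subnet; a sketch (the path and subnet are made up):

    # /etc/exports - let root on 192.168.10.0/24 act as root on this export
    /srv/webfiles 192.168.10.0/24(rw,sync,no_root_squash)

followed by exportfs -ra to apply. anonuid/anongid do the opposite job: they choose which local account squashed remote users map to.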
At my work, we have several outside clients that have an FTP login to our FTP server, which leads them to their home FTP folder. The FTP server is currently a Win2003 box. Because we have so many clients, we would like to implement some form of web GUI that would let each client manage their own FTP home folder and user info, such as resetting their password if they lose it.
Is there anything like this available on Linux that would give us that kind of control/usability?
I have a server (called NAS) that shares out /public, and this same server also runs KVM with some VMs. I am setting up the first VM now, and one of the things it does is download torrents onto the share. I am connecting from the VM with "mount -t smbfs //nas/public /mnt/nas", and it seems to work fine. However, whenever I add a torrent to the queue on the VM, all downloads stop and appear disconnected. I can restart them after a few seconds, but they stop again after a few minutes with another disconnect error. The network interface is bridged, so I thought the VM was talking directly to the host.
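One thing that may be worth ruling out first: smbfs is long deprecated and known to drop sessions under load; the maintained client is cifs. A sketch of the equivalent mount (the share name is taken from the post, the options are assumptions):

    # mount the share with the cifs client instead of smbfs
    mount -t cifs //nas/public /mnt/nas -o guest
    # or, with credentials:
    mount -t cifs //nas/public /mnt/nas -o username=myuser,password=mypass

If the disconnects persist under cifs too, the bridge or the Samba server side becomes the more likely suspect.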
I can't download files from my server. code...
It worked before, when the server used port 21, but now that they've changed it to 4700 it doesn't work. I have emailed them, and they got it to work with FlashFXP, but that is freeware and Windows-only.
In FileZilla I get "Error: Failed to retrieve directory listing"
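A failed directory listing with a working login usually means the control connection (port 4700 here) is fine but the data connection is blocked, so toggling between passive and active mode is the first experiment. A sketch with lftp (the hostname and user are placeholders):

    # connect on the non-standard control port
    lftp -p 4700 -u myuser ftp.example.com
    # inside lftp: force passive mode, then retry the listing
    set ftp:passive-mode true
    ls
    # if that still hangs, try active mode instead
    set ftp:passive-mode false
    ls

FileZilla has the same passive/active toggle in its connection settings.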
In my office we are using a Red Hat server and 20 Windows client machines. Sometimes we are unable to view the files stored on the server. Yesterday I saw a problem where all the files were listed but I could not open a single one; after restarting the computer I was able to open them again.
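If those files are being served by Samba, this symptom (files visible but unopenable until a restart) is a classic stale oplock pattern. A hedged experiment, assuming Samba, is to disable oplocks on the affected share in smb.conf:

    [share]
        oplocks = no
        level2 oplocks = no
        kernel oplocks = no

then run service smb reload and see whether the lock-ups stop recurring.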
I can't seem to get the X server to allow access from clients on other hosts. (I know, not exactly a network problem, but still.) I made the change in /usr/share/gdm/defaults.conf to read: DisallowTCP=false
This worked on another CentOS system, but it hasn't fixed it on this one. What else could prevent other clients from connecting to the X server? From the local host I get:
Warning: Tried to connect to session manager, Authentication Rejected, reason: None of the authentication protocols specified are supported and host-based authentication failed
although the client DOES actually create the window and work, so maybe this message is a clue.
From the remote host I get: Error: Can't open display: 10.10.1.20:0.0 - which is not terribly informative. Is there a log somewhere that details why a connect request was denied? The files in /var/log/gdm are not very informative.
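Two quick checks that usually narrow this down (the client hostname below is a placeholder): confirm the X server is actually listening on TCP 6000, and separate firewall problems from authorization problems:

    # on 10.10.1.20: is X listening? (empty output means -nolisten tcp is still in effect)
    netstat -ltn | grep :6000
    # temporarily authorize the remote client host (coarse, for testing only)
    xhost +clientbox
    # make sure iptables isn't eating port 6000
    iptables -I INPUT -p tcp --dport 6000 -j ACCEPT

If netstat shows nothing, something else is still passing -nolisten tcp on the X server command line despite DisallowTCP=false.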
nslookup doesn't work from the client guest OSes. I have Linux Mint 7 with VirtualBox installed on it, and I created three guest OSes: two CentOS and one XP.
The first CentOS: linux1.starline.ca
The second CentOS: centos.starline.ca
The third (XP): xp2.starline.ca
[code].....
On the client guest OSes nslookup doesn't work; it reports: timed out; no servers could be reached. What is going on? Why doesn't nslookup work from the guest OSes? On the client machines, /etc/resolv.conf has the record: nameserver 168.135.88.2
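A useful split-the-problem test is to query the configured server directly, bypassing resolv.conf handling entirely (the IP is taken from the post above):

    # from a guest: can we reach the DNS server at all?
    ping -c 3 168.135.88.2
    # query it explicitly
    dig @168.135.88.2 linux1.starline.ca

If the ping fails too, suspect the VirtualBox network mode (NAT vs bridged) rather than DNS itself.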
I have an OpenLDAP server with phpLDAPadmin as a GUI; I'm going to use the LDAP server just as an address book. You can see in the picture how I built my LDAP DB.
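For an address-book-only directory, contacts are usually plain inetOrgPerson entries. A minimal sketch of adding one via ldapadd (the base DN, admin DN, and all attribute values are assumptions, since I can't see the picture):

    cat <<'EOF' | ldapadd -x -D "cn=admin,dc=example,dc=com" -W
    dn: cn=Jane Doe,ou=addressbook,dc=example,dc=com
    objectClass: inetOrgPerson
    cn: Jane Doe
    sn: Doe
    mail: jane.doe@example.com
    telephoneNumber: +1 555 0100
    EOF

Most mail clients (Thunderbird, Evolution) can then search the ou=addressbook subtree over LDAP.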
I have set up OpenLDAP and Samba for authenticating Windows and Linux clients on my server, and they are working fine: Windows users are authenticated with the server as Primary Domain Controller, and Linux clients directly against the OpenLDAP directory. But I have a small problem: I want the home folders created on the server to be mounted on the clients, so that both Linux and Windows clients get centralized storage with some quota. Can you help me with how to do that?
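A common shape for this (a sketch, not a full recipe; the subnet is an assumption) is to export /home over NFS for the Linux clients and let Samba's built-in [homes] share cover Windows, so both platforms land in the same server-side directory where quotas are enforced:

    # /etc/exports - NFS for the Linux clients
    /home 192.168.1.0/24(rw,sync)

    # smb.conf - [homes] maps each authenticated user to their own home dir
    [homes]
        browseable = no
        writable = yes

Linux clients can then mount their home at login (autofs or pam_mount), and disk quotas set on the server's /home filesystem apply to both client types.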
I'm new to CentOS 5 (and Linux) and, after installing CentOS, I configured Samba, Apache, etc. without problems (through the interactive interface). My problem is that I need to use DHCP (all our clients use dynamic IP addresses for ease), but I can't find dhcpd, nor the sample config file(s).
Note: the new server I intend to use is actually connected to a LAN with an 'old' DHCP server (still under W2K Server); is this the reason why I can't find/activate DHCP on my new machine?
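Most likely it is simply not installed: the DHCP daemon ships in a separate package that the default install leaves out. A sketch for CentOS 5 (the versioned doc directory varies by release):

    # install the server and copy the sample config into place
    yum install dhcp
    cp /usr/share/doc/dhcp-*/dhcpd.conf.sample /etc/dhcpd.conf
    # edit /etc/dhcpd.conf for your subnet, then:
    service dhcpd start
    chkconfig dhcpd on

Do note that running two DHCP servers on the same LAN (the W2K one and this one) will have clients take whichever answer arrives first.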
I have openvpn tunnel setup between two CentOS servers. One of the CentOS servers also acts as a DHCP server for some client computers.
Server A = OpenVPN server
Server B = OpenVPN client (connects to Server A over OpenVPN)
The two CentOS servers can ping each other (172.16.0.0/24) via the tun0.
However, client computers connected to Server B (the DHCP server) can't reach 172.16.0.1 (the OpenVPN server).
I think I am missing some routing in my "ip route show". Following is the full picture:
What command can I issue to fix this? Something along the lines of ip route add?
There is no firewall running on either end (service iptables stop). I can't bridge eth1 and tun0, because the DHCP server might mess up the other side, and I can't push "redirect-gateway def1", because then the clients lose their IP as they would send their DHCP requests to Server A.
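For completeness, here is a sketch of the three pieces that are usually missing in this topology. Server B's LAN subnet and LAN IP are assumptions (192.168.1.0/24 and 192.168.1.1); substitute the real values:

    # 1. Server B must forward between its LAN and the tunnel
    echo 1 > /proc/sys/net/ipv4/ip_forward

    # 2. LAN clients need a route to the VPN subnet via Server B
    #    (set per client, or hand it out via DHCP)
    ip route add 172.16.0.0/24 via 192.168.1.1

    # 3. Server A needs a return route to Server B's LAN through the tunnel.
    #    In Server A's OpenVPN server config:
    #        route 192.168.1.0 255.255.255.0
    #    and in Server B's client-config-dir file on Server A:
    #        iroute 192.168.1.0 255.255.255.0

Without step 3, pings from the LAN reach 172.16.0.1 but the replies have no route back, which looks exactly like "can't reach".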
I need to write some kernel modules that require sys_call_table to be exported, but I found information saying that, for security reasons, kernels nowadays ship without the sys_call_table export. Is there any way to export this table?
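The export is indeed gone on modern kernels, so most modules that still want the table look its address up at runtime instead of linking against it. A sketch of the usual first step (whether sys_call_table appears in kallsyms depends on the kernel's config):

    # look the address up in the running kernel's symbol table
    grep ' sys_call_table' /proc/kallsyms
    # or in the System.map matching the running kernel
    grep ' sys_call_table' /boot/System.map-$(uname -r)

The module can then use that address directly, with the usual caveat that writing to the table is deliberately unsupported and may be blocked by page write protection.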
I have been trying to complete the following project: 1) configure an FTP server where we can upload and download files; 2) the server must start at 9 pm and stop at 9 am automatically. Although the first task was easy, I have no idea how to accomplish the second (not to mention I'm a new user).
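Task 2 is a job for cron rather than the FTP daemon itself. A sketch assuming vsftpd (swap in the actual service name), added to root's crontab via crontab -e:

    # start the FTP service at 9 pm every day
    0 21 * * * /sbin/service vsftpd start
    # stop it at 9 am every day
    0 9 * * * /sbin/service vsftpd stop

The five fields are minute, hour, day-of-month, month, and day-of-week, so 0 21 * * * fires daily at 21:00.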
I just installed Darwin Streaming Server. Everything looks normal in the admin panel, but every time I try to play a file in the QuickTime player, the play button flips back to "Play" as soon as I push it. I then tried to open the stream from my Nokia phone and got the message "Disconnected" as soon as it connected to the server.
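Instant disconnects with a healthy admin panel are very often a firewall problem: Darwin Streaming Server needs its RTSP and RTP ports reachable. A sketch of the iptables openings (the ports are DSS's documented defaults, not taken from the post):

    iptables -I INPUT -p tcp --dport 554 -j ACCEPT        # RTSP
    iptables -I INPUT -p tcp --dport 7070 -j ACCEPT       # RTSP alternate
    iptables -I INPUT -p udp --dport 6970:6999 -j ACCEPT  # RTP/RTCP media

If opening these fixes the phone but not QuickTime, try switching the player to stream over HTTP instead of UDP.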
I have a weird performance issue with a CentOS 5 box running an NFS server and an RH8 client. I think the fact that it is an RH8 client should be downplayed; it is just that with the RH8 client the performance degradation is clearer. See the test details below. The server OS is CentOS 5 x86_64, kernel 2.6.18-92.1.22.el5.
There is a 1Gb connection between the machines, and the file to test over NFS is a 1GB file. First of all I wanted to measure how the network alone performs under NFS. On the server side I ran "cat" on the 1GB file to /dev/null; note that the local disk read speed is about 98MB/s. At that point the file system had the 1GB file cached in memory. On the client side a "cat" on the same file gave me a speed of about 113MB/s. The bottleneck in this instance is the network, and it is very close to nominal speed, so network performance is really good. (BTW, I know the server served that file from cache because vmstat and iostat showed no disk activity.)
The second test is reading from disk with no caching involved. On the server I flushed the 1GB file from memory, for instance by reading a different 5GB file, and repeated the same thing on the client (a cat on the 1GB file). Now the server has to go to disk (vmstat and iostat show the disk activity). However, the performance is now about 20MB/s; I was expecting something closer to 90MB/s, since the read speed on the server in the first test was 98MB/s.
This second test was repeated for ext2, ext3, and xfs with no significant differences. A similar test using an RH8 NFS server and client gets me close to 60MB/s for a 1GB file not cached by the server's file system. Since neither network speed nor disk read speed is the bottleneck, what or where is the limiting factor?
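Given those numbers, the usual suspects are the NFS transfer size and the server's read-ahead, since an uncached sequential read over NFS arrives as many small staggered requests rather than one streaming read. A sketch of knobs to test (mount point, export, and device are placeholders; the values are starting points, not gospel):

    # client: remount with larger transfer sizes
    umount /mnt/nfs
    mount -t nfs -o rsize=32768,wsize=32768 server:/export /mnt/nfs

    # server: raise the block device read-ahead (units of 512-byte sectors)
    blockdev --setra 4096 /dev/sda

    # server: more nfsd threads can help too - RPCNFSDCOUNT in /etc/sysconfig/nfs

Re-running the uncached cat after each change should show which knob is behind the 20MB/s ceiling.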
I've set up a LAMP server for testing; it is up and running on CentOS 5.5.
I am now trying to set up a VSFTP server where local users can upload files to their home directories, so that Apache can serve web pages straight from the system users' home directories/accounts, giving users the ability to run their own web sites hosted off the main server [tutorial here: [url]
So far I have been able to serve/display index.html files from the users' home directories [url], but I can't upload files to any user's home directory. Every time I try to upload a file with FileZilla I get this error message: 553 Could not create file. Critical file transfer error
I have searched online for similar problems and have tried a lot of the solutions, but none seem to work. I'm confused and don't know where I went wrong. I put the users in a group called ftpusers; here are the permissions on the home directories of the users (test, ftpuser & testftp). Have a look and tell me where I went wrong :(
Also, the root directory the web pages are served from is called public_html; here are its permissions.
Here is my vsftpd.conf file; can someone check whether I made any errors in it:
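Pending the conf review, a 553 on CentOS almost always comes down to one of three things; a sketch of the checks (the user and path are placeholders):

    # 1. uploads must be enabled in /etc/vsftpd/vsftpd.conf:
    #        write_enable=YES
    #        local_enable=YES

    # 2. SELinux blocks FTP writes into home directories unless allowed:
    setsebool -P ftp_home_dir 1

    # 3. the user must own (or be able to write) the target directory:
    chown testftp:ftpusers /home/testftp/public_html
    chmod u+w /home/testftp/public_html

Item 2 is the one people miss most often on a stock CentOS 5.5 install.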
The startup fails, but a process is left running. These machines are running the latest stable version of CentOS 5, and we have been "messing" with the firewall. When I'm logged in on a console, I get error messages that lockd cannot contact the server, even though the NFS-mounted home directories work fine.
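Since the firewall has been in play: lockd and statd pick random ports at boot by default, so a tightened iptables ruleset breaks them intermittently. The usual fix on CentOS 5 is to pin the ports and open them; the port numbers below are conventional choices, not requirements:

    # /etc/sysconfig/nfs
    #     LOCKD_TCPPORT=32803
    #     LOCKD_UDPPORT=32769
    #     STATD_PORT=662
    service nfslock restart
    # then allow those ports, plus 111 (portmap) and 2049 (nfs), in iptables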
I have a DHCP/PXE server behind a firewall. It mounts partitions from the file server on the corporate network on the other side of the firewall, and every box that PXE-boots also mounts partitions from that main file server.
I was hoping I could change them to mount from the DHCP/PXE server instead, so that server could cache and cut down on the requests through the firewall, as well as the sessions the firewall must track. But it seems a little strange to try to export directories that are themselves just NFS mounts from another server.
Anyway, I have a very old Mandrake server on which a previous owner hosted mailboxes. This server is getting very slow and handles a lot of e-mail related tasks: pop, smtp, and mx. It runs sendmail (also very outdated...) and it doesn't seem to respond to its config files, and the whole smtp-and-mx-in-one-box arrangement leaves us with some really weird mail problems... So I want to fold it into our current mail setup, in which everything is on separate servers:
2 smtp servers (DNS round-robin) (postfix)
4 mx servers (1 etrn) (postfix)
1 webmail server (v-webmail) (just Apache; connects to the pop/imap server)
1 pop/imap server (postfix, dovecot)
I also want to implement SMTP authentication because of all the mobile clients I have to host... This is where it gets tricky.
I want to export the unix user table of the old Mandrake server and import it into a MySQL database, which will be used to authenticate the smtp users. I also want to import those exported unix users into the pop/imap server, so users can log on there instead of on the crappy Mandrake server. I expect the export from unix users to MySQL (including passwords) to be the hardest part. I googled it, but some of the stuff I found didn't seem very reliable, so that's where you guys kick in :-). Is this possible, and if so, how can I do it? I know I should go with some kind of LDAP setup, but that seems a far bigger hassle than this approach.
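The export itself is less scary than it sounds: /etc/shadow already holds crypt(3) hashes, and both Postfix (via SASL) and Dovecot can verify crypt hashes fetched from MySQL, so the hashes can move over unchanged. A rough sketch (the table name and schema are made up; the awk condition keeps only accounts that actually have a usable password hash):

    # assumed schema:
    #     CREATE TABLE mailusers (username VARCHAR(64), crypt_pw VARCHAR(128));
    awk -F: '$2 != "" && $2 !~ /^[*!]/ {
        printf "INSERT INTO mailusers VALUES (\"%s\", \"%s\");\n", $1, $2
    }' /etc/shadow > users.sql
    mysql -u root -p maildb < users.sql

The same users.sql (or a matching useradd loop) covers seeding the pop/imap box; Dovecot's passdb would then be pointed at the table with a password scheme of CRYPT.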
I wanted to know if it's possible to import filesystem quotas, and if so, how to do it. I recently migrated a server, and at present the users don't have any quota limitations.
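There is no single import command, but quotas can be dumped on the old server and replayed on the new one. A sketch (the device, user names, and limits are placeholders; double-check repquota's column layout before scripting over it):

    # old server: dump current usage and limits
    repquota -u /dev/sda1 > oldquotas.txt

    # new server: set limits per user
    #     setquota -u USER block-soft block-hard inode-soft inode-hard FS
    setquota -u alice 500000 550000 0 0 /home

    # or clone one already-configured user's limits onto others
    edquota -p alice bob carol

Run quotacheck and quotaon first if the new filesystem has never had quotas enabled.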
I'm looking for a tutorial on setting up a domain server, DNS server, LDAP, mail server, firewall, and proxy with CentOS - and how can I join Ubuntu clients to the domain?