I use NFS to serve up my media files to all the other machines on my network. What I am seeing is that if a client has the NFS directories mounted for a couple of hours, files start "vanishing" from what is being served up. The files are still there on the server, but the client can no longer see them. The only way to make them visible to the client again is to unmount the directory on the client and then remount it.
We are having an odd Apache issue. Certain files are not being served out. The files actually live on a remote DFS volume served off a Windows system. They're mounted under /data/vizwx. There's a subfolder there called wx_intranet which has all the data. The document root for this virtual host is /data/www/htdocs. Under the document root there is a data/vizwx subdirectory with a symlink named wx_intranet which points to /data/vizwx/wx_intranet. SELinux is completely disabled (not even permissive).
When a user requests a file from http://<host>/data/vizwx/wx_intranet/VizWx_14c.jpg, no error is given to the browser. Firefox just shows the URL in question (I can't even view source). The <vhost>_access.log file shows:
wx_access.log:172.16.1.16 - - [19/Feb/2009:15:30:16 +0000] "GET /data/vizwx/wx_intranet/VizWx_14c.jpg HTTP/1.1" 206 167791
And there's nothing in the <vhost>_error.log, even after cranking the LogLevel up to info.
The permissions on the files in the share are slightly weird for reasons I don't fully understand, but they should work... the files are an effective 'chmod 3777 <file>' for all files in the mounted directory, that is, with the sticky and setgid bits set. What am I missing here? Oh, I also modified the vhost's options from 'Options All' to 'Options All -SymLinksIfOwnerMatch' to see if that was the problem... no dice. CentOS 5.2, Apache 2.2.3.
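For reference, a minimal sketch of the kind of vhost being described; the ServerName, log names, and the exact directory block are assumptions and may not match the actual config:

<VirtualHost *:80>
    ServerName wx.example.com              # hypothetical
    DocumentRoot /data/www/htdocs
    <Directory /data/www/htdocs>
        # FollowSymLinks is needed for the wx_intranet symlink to be followed
        # when the link owner and the target owner differ (as on a DFS mount)
        Options Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
    CustomLog logs/wx_access.log combined
    ErrorLog logs/wx_error.log
</VirtualHost>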
If I use PHP or SFTP to delete or replace a file while it is being served, will this cause the download to fail? If so, is there anything I can do about this? The concern is for large files.
My chief file server (Dell PE R300) died last week with a disk error, and because it serves the /usr/local and /home partitions via NFS to my ~60 desktops, nobody could do any work until I managed to rig up another server and pull data off the backups. I'm using RHEL 4. To avoid this in future, my plan is to knock up a dual-server solution with DRBD and Heartbeat. In the meantime, is there a better way to allow desktop users to carry on working as normal without relying on the file server too much, i.e. something better than NFS but not LDAP (I don't want to implement that just yet, as the organisation as a whole may do this in the future)? My users need to be able to access the same home area on any Linux desktop managed by me. Also, to implement DRBD/Heartbeat, it might be best to have the home areas on an external array, is that right? Can anybody recommend some hardware?
Is there some way that I can use apache/iptables to serve both of my servers through the same basic domain? I'm not talking about VirtualHosts either, I don't believe (despite the fact that's all I can find to read about on Google when searching for this).
Anyway, the problem is that I have a TorrentFlux PHP torrent client that I run so I can add stuff to download when I'm not around. I don't want to put this on my regular webpage server because there's just not enough disk space in that machine. On the other hand, I don't want to replace that server altogether, because the other machine is my desktop and would not make a reliable host.
So my idea was that there could be some way to have the apache2 server on my dedicated server map a subdirectory in its webroot to the webroot of my desktop's TorrentFlux server, but over the standard HTTP port, so that the dedicated server is effectively serving up the content of my desktop server through its own HTTP service instead of simply redirecting.
The reason I want to do this is because so far I have to use the TorrentFlux server on a nonstandard port so my dedicated server can still host its own things on port 80, but I've been running into several situations where the browsers I want to use do not support the nonstandard port such as with some instances of IE or the browser on my BlackBerry.
Anyway, I'm not looking for a step-by-step or anything (though it couldn't hurt), but I just need some ideas on what I could search for to get some better ideas. There's probably some term for what I want to do that I'm not even aware of, which would help me greatly.
Running Ubuntu 7.04 and Apache 2.2.4, along with Shorewall 4.0 as my routing/firewall software.
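What's being described sounds like a reverse proxy. A minimal sketch with mod_proxy, assuming the desktop's TorrentFlux answers on 192.168.1.10:8080 and that /torrents/ is an acceptable path (both assumptions):

# in the dedicated server's port-80 vhost; requires mod_proxy and mod_proxy_http
ProxyPass        /torrents/ http://192.168.1.10:8080/
ProxyPassReverse /torrents/ http://192.168.1.10:8080/

ProxyPassReverse rewrites the Location headers in redirects coming back from the desktop, so the browser stays on the dedicated server's hostname and port 80.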
I want to put Google Analytics code on all the legacy pages on my server; most of them don't have a template, so I was wondering whether Apache can automatically append the code.
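One possible approach is to rewrite the closing </body> tag on the way out. A sketch assuming Apache 2.2.7+ with mod_substitute enabled and a hypothetical /legacy path:

<Location /legacy>
    # only touch HTML responses
    AddOutputFilterByType SUBSTITUTE text/html
    Substitute "s|</body>|<script>/* Google Analytics snippet goes here */</script></body>|i"
</Location>

For pages that already run through PHP, php_value auto_append_file is another common route.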
I have a clean-slate CentOS 5.5 installation on a virtual box at Media Temple (one of their new VE servers). I am trying to create a development environment where I can have BIND9 serve up one set of zone files to me and other developers on the internal network, and another set of zone files to external requests (i.e. using the views feature). I would like to be able to develop sites for which the DNS is not yet pointed at my server. The network is created by having the VE server act as an OpenVPN server and connecting my client box to it (my Mac: 10.8.0.6 / my VE server: 10.8.0.1).
I have the connections working fine, and I have also been able to route all network traffic from my Mac through the VPN to the server. For some reason, I cannot get the DNS server on the VE server to serve me an internal-view zone file. When my VPN is on, I cannot ping or navigate to any web pages from my Mac. I think this is because my VE server is not set up as a DHCP server and iptables is not set up to allow all internal requests to use the server to go get web pages.
While I cannot ping anything else from my Mac/client when on the VPN, I can successfully ping any website my VE server is authoritative for. This tells me that my ping is obviously going over the VPN, and is thus an internal request, but the external zone file is still served up. The following is my named config.
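Purely as an illustration of the views mechanism (this is not the named config referenced above; the ACL, zone name, and file paths are assumptions):

acl "vpnclients" { 10.8.0.0/24; 127.0.0.0/8; };

view "internal" {
    match-clients { vpnclients; };
    recursion yes;                 // lets VPN clients use this server for general lookups too
    zone "devsite.example" {
        type master;
        file "/var/named/internal/devsite.example.zone";   // answers handed to the dev network
    };
};

view "external" {
    match-clients { any; };
    recursion no;
    zone "devsite.example" {
        type master;
        file "/var/named/external/devsite.example.zone";   // answers handed to everyone else
    };
};

Note that once any view is defined, every zone (including the root hints and localhost zones) has to live inside a view.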
An issue that has been hassling me for years, ever since I started using Linux (Debian!), concerns the boot messages that quickly scroll by on screen during the boot process. The main hassle is that I cannot get a log of those messages. The second is that on my brand-new netbook (Toshiba NB200) I cannot even stop the scrolling and go back through the message stream with Shift+PageUp/PageDown to understand what's going on. Of course I know that I can get a log of the boot process with 'dmesg', but I get the feeling that the very first lines show some problem I cannot catch at all.
I have a few problems with my openSUSE 11.3 (I was using openSUSE 9 ages ago, then I moved to Debian, yet after a few years I thought why not give openSUSE another chance).
Basically all is great except for one little annoying problem: the system tray. Applications like Amarok, Network Manager, SUSE updater, etc. just do not show up in the system tray! Amarok keeps playing music after I minimize it to the tray, but it's nowhere to be found XD.
I'm using the following repositories: download.opensuse.org/repositories/KDE:/Release:/46/openSUSE_11.3/ download.opensuse.org/repositories/KDE:/Extra/KDE_Release_46_openSUSE_11.3/ packman.unixheads.com/suse/11.3/
Do you know a command for zypper to install the newest packages while ignoring branding (regardless of which vendor a package comes from)? That is one of the things that is truly a pain, and which I never experienced while using Debian.
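If the installed zypper supports per-repository switching (worth checking with zypper dup --help, since support varies by version), one commonly suggested sketch is:

zypper lr                  # list repositories and note the alias of the KDE/Packman repo
zypper dup --from KDE_46   # move packages to that repository's versions, vendor change included (alias is an assumption)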
When I insert an SD card in the reader, Slackware creates a mount point and mounts my card's volumes. On unmounting the volumes, the mount point vanishes. How do I achieve this manually? When I attempt to mount a volume using the mount command, the mount point folder must already exist, and the folder does not vanish on umount. Is there a way to create a mount point if it does not exist, and to ensure that the folder vanishes on unmounting?
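Done by hand, the equivalent is just creating and removing the directory around the mount; the device node and mount point below are assumptions:

mkdir -p /mnt/sdcard && mount /dev/mmcblk0p1 /mnt/sdcard   # create the mount point, then mount
umount /mnt/sdcard && rmdir /mnt/sdcard                    # unmount, then remove the now-empty mount point

(The automatic behaviour when the card is inserted is normally done by the HAL/udev automount layer rather than by mount itself.)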
Yet another problem, but I need some input here. I just updated and restarted my computer (lots of updates), and when I logged in all of the window decorations were gone, and Alt-Space and Alt-Tab do nothing. Thinking that the update had messed with the WM settings, I opened the settings manager, but both the Window Manager and Window Manager Tweaks sections are blank. The other sections are fine, just those two. (And no, I'm not running Unity, KDE, or GNOME.) The panels appear to be functioning correctly, though my workspaces got smashed as well.
Edit: Upon closer inspection, I can no longer add workspaces: I set the number of workspaces to 4, but it only shows one in the list below.
Edit 2: Upon creating a new user to mess with themes, I found the WM works fine for the new user; the same problems remain for this user.
Just found my PC with screen fully lit a good half-hour after leaving it. Went into "system settings" and checked the power-management settings and, as I expected, there were no management profiles. "Here we go again" I thought and logged off and then back on again. Checked the settings and there was "performance", the only one available. As usual in this situation, I'd tried "restore default profiles" and there were none.
Why does this profile keep getting lost in this manner? Where is it (and defaults if any?) supposed to be stored?
Graham Davis, Bracknell, Berks. openSUSE 11.4 (32-bit); KDE 4.7.0; AMD Phenom II X2 550 Processor; Video: nVidia GeForce 210; Sound: ATI SBx00 Azalia (Intel HDA); Wireless: BCM4306
When kubuntu is installed, there is a plasmoid desktop (a kind of transparent window that enables one to create and store links to applications). Nice and convenient.
Alas, the special desktop is easily hidden - permanently... Soon after the installation I inadvertently clicked the X on the plasmoid desktop pop up menu and that hid this special desktop.
In the file manager (Dolphin) I can still see the Desktop with all the links to applications, which are still active (I can run the applications by clicking their icons).
Surely there is a simple way to restore this plasmoid to view again, but I just cannot find how...
If you have Kubuntu, you too can have "the fun" of hiding the plasmoid desktop and then finding how to show it again. Just place your mouse pointer on the plasmoid desktop. That will pop up the menu with the X, shift-spanner, turning and resize buttons. If you are brave, try to click the X on the pop-up menu. Bingo, the plasmoid disappears. Surely fingers more nimble than mine would find out how to get it back... I cannot.
I've installed Ubuntu 10.04 on an old PC with an Intel 845 series motherboard, using an onboard graphics solution. By default, it boots with an 800x600 screen resolution. When I change the resolution to 1024x768, the resolution switches perfectly, but the mouse pointer disappears. The mouse can still be USED, but I have to 'guess' where the pointer is. The only way I have found to rectify the situation is to reboot, upon which the resolution returns to 800x600 again as well. Kubuntu 10.04 suffers from the same problem, only in Kubuntu the mouse pointer reappears when the resolution switches back to 800x600, so I don't need to reboot.
What I have is Xubuntu running as a VM with VirtualBox on my Windows 7 Media Center that is always on. I am trying to be able to remotely access an X display on that box to do network auditing/ various linux stuff from the various places I go. I would like it to be as simple as possible and leave no trace on the remote computer, so what I would like is to use a java-enabled browser to connect to an xvfb on xubuntu with SSL encryption. I almost got it working using cherokee/x11vnc/desktop.cgi but it only works once or twice and I get network errors even on localhost. I would rather just not have X running all the time on the VM and just have an xvfb display waiting/created when I log in remotely from a browser.
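A minimal sketch of the headless piece, assuming x11vnc and Xvfb are installed; the display number, geometry, and session command are assumptions:

Xvfb :1 -screen 0 1280x800x24 &                       # virtual framebuffer display, no real X server needed
DISPLAY=:1 startxfce4 &                               # start a desktop session (or just the apps you need) on it
x11vnc -display :1 -ssl SAVE -http -forever -usepw    # SSL-encrypted VNC plus the built-in Java viewer page

x11vnc's -create option can also spawn an Xvfb session on demand at connect time, which is closer to the "created when I log in" behaviour described.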
Over the last 3 or 4 days, I have been unable to load sites that serve their images, scripts and what have you from Amazon's CloudFront domain.
[URL]
I have made no changes to any of my networking files, hosts{allow,deny}, or dns settings.
Connections don't provide any errors, just continually fail to load. Stopping the page load after a while reveals the raw HTML in some cases (quora and blekko).
Tested in Firefox, Chromium, Midori and Vimprobable.
I have booted into another distro and pages resolve immediately.
I have disabled IPv6 and my firewall - to no effect.
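A couple of quick checks that separate DNS from connectivity; the CloudFront hostname below is a placeholder, so pull a real one out of the failing page's HTML:

dig dxxxxexample.cloudfront.net                               # does this distro resolve the CDN hostname at all?
curl -v -4 http://dxxxxexample.cloudfront.net/ -o /dev/null   # does an IPv4 TCP connection ever complete?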
I have a server that hosts several sites; recently I had to create a new server because the old one isn't good enough for me. I've installed apache2 on the new server and moved all the files from one server to the other. I'm testing in my local LAN, so I've edited my computer's hosts file to point the name of each site to the local IP of the new server:
I have been trying to complete the following project:
1) Configure an FTP server where we can upload and download files.
2) The server must run at 9 pm and stop at 9 am automatically.
Although the first task was easy, I have no idea how to accomplish the second task (not to mention that I'm a new user).
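A minimal sketch of the scheduling half, assuming the FTP daemon is vsftpd managed as a system service (both assumptions; adjust the service name and path to your distro), using root's crontab:

# crontab -e as root; fields are minute hour day month weekday command
0 21 * * * /sbin/service vsftpd start   # start at 9 pm
0 9  * * * /sbin/service vsftpd stop    # stop at 9 am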
I am setting up a demo website that I am hosting in a Debian Lenny VM. I have installed Apache, MySQL, and PHP5; I know PHP is working because if I place a phpinfo() test file there I can see it in the browser. However, I downloaded PHP-Fusion, as well as phpMyAdmin, and they are on my desktop. I opened a superuser file browser (I also added write access to /var/www for everybody and copied the files over as me) and copied over the files. Once in the www folder they are no longer seen as "valid" PHP files: if I try to access them with Iceweasel, IE, or Konqueror, it asks me if I want to download the file. If I look at the files on my desktop they have different icons. I have attached a screenshot. I can't tell what the difference is, but obviously there is one, otherwise the icons wouldn't be different. Does anyone have any idea what's going on here or a way to rectify it?
I have a LAN of about 70 computers that I would like to share media files between. I have gotten to the point with Samba that I can view the files without a username/password from client PCs. I would like to make all the folders read-only except for one, which will be writable for everyone. The thing that I am having a hard time with is allowing a couple of administrators (on Windows 7 machines) read/write access to all files/folders. I am completely new to Ubuntu and Samba, so please make explanations thorough. Here is my /etc/samba/smb.conf file:
I'm trying to rsync files and directories from a Red Hat Linux host (v4.5 & 4.7) to a Windows Server 2003 R2 Standard Edition machine running Cygwin. I'm executing the rsync command from the Cygwin shell. The transfer involves rsyncing approximately 1 TB of data from the Linux server to the Windows server. After about 280+ GB of data has transferred, the transfer just dies.
There seems to be no particular file or directory that the transfer stops at. I'm able to rsync GBs of data from other Linux hosts to this Cygwin server with no problem; files and directories rsync fine. The network infrastructure is essentially the same regardless of the server being rsynced, in that it is gigabit Ethernet running through Cisco gigabit switches. There appear to be no glitches or hiccups across the network path.
I've asked the folks at rsync.samba.org if they know of any problems or issues. Their response has been neutral: if the version of rsync that Cygwin has ported is within standards, then there is no rsync reason this problem should happen. I've asked the Cygwin support site if they know of any issues and they have yet to reply. So, my question is whether the version of rsync that is ported to Cygwin is standard. If so, is there any reason Cygwin & rsync keep failing like this?
I've asked the local rsync-on-Linux gurus and they can't see any reason this should fail from a Linux perspective. Apparently I am our company's Cygwin knowledge base by default.
I want to configure my Apache server to list all my files (C/C++, PHP, Java) the same way it shows a .txt file, e.g. under /var/www/mydomain/pub. I want to dump all my C/C++, PHP, and Java files under the pub directory and access them from my domain name. If I dump a .txt file there, I have no problem viewing it, but when I dump C/C++ or PHP files under the pub directory, I can't view them like a regular .txt file. Q: is there any way I can configure my Apache server to display C/C++, PHP, and Java files just like .txt files?
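A minimal sketch of one way to do this, assuming the directory above and that PHP is handled by mod_php (both assumptions):

<Directory /var/www/mydomain/pub>
    # stop PHP from being executed in this directory (mod_php only)
    php_admin_flag engine off
    # drop the handler/type mappings and serve the sources as plain text
    RemoveHandler .php
    RemoveType .php
    AddType text/plain .php .c .cc .cpp .h .java
</Directory>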
I have tftp-server running on CentOS 5. Clients which are on the same subnet as the server are able to get and put without problems. I have a client across the internet that is having trouble getting files from my tftp server. A tcpdump reveals that the client is requesting the same file over and over again. In /var/log/messages, I see the following error repeated over and over until the client finally gives up.
localhost in.tftpd[12727]: tftpd: read: No route to host
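If a firewall sits in front of either end, note that tftpd replies from a fresh ephemeral UDP port, so a plain port-69 rule is not enough. A sketch for an iptables host on the CentOS 5 side (module name and rule placement are assumptions about the setup):

modprobe ip_conntrack_tftp                                         # lets conntrack relate the data packets to the request
iptables -A INPUT -p udp --dport 69 -j ACCEPT                      # the initial read/write request
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # the transfer itself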
I want to run rsync on server A to copy all files from Server B that are newer than 7 days (find . -mtime -7). I don't want to delete the files on Server B.
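One way to express that as a single pull on server A, assuming SSH access to Server B and hypothetical paths; the file list is built remotely with the same find test, and nothing on Server B is modified:

rsync -av --files-from=<(ssh serverB 'cd /srv/data && find . -type f -mtime -7') \
      serverB:/srv/data/ /local/copy/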
I have a weird performance issue with a CentOS 5 box running an NFS server and a RH8 client. I think the fact that it is a RH8 client should be downplayed; it is just that with the RH8 client the performance degradation is clearer. See the test details below. The server OS is CentOS 5 x86_64, kernel 2.6.18-92.1.22.el5.
There is a 1 Gb connection between the machines, and the file used to test over NFS is a 1 GB file. First of all I wanted to measure how the network alone performs while using NFS. So on the server side I ran a "cat" of the 1 GB file to /dev/null. Please note that the disk read speed is about 98 MB/s. At this point the file system has the 1 GB file cached in memory. On the client side a "cat" of the same file gives me a speed of about 113 MB/s. It seems then that the bottleneck in this instance is the network, and it is very close to nominal speed. So the network performance is really good. (BTW, I know that the server got that file from cache because vmstat and iostat show no disk activity.)
The second test is reading from disk with no caching involved. On the server I flushed the 1 GB file from memory, for instance by reading another 5 GB file, and I repeated the same thing as above on the client (a cat of the 1 GB file). Now the server has to go to disk (vmstat or iostat shows the disk activity). However, the performance now is about 20 MB/s; I was expecting something closer to 90 MB/s (since the read speed on the server in the first test was 98 MB/s).
This second test was repeated for ext2, ext3 and xfs with no significant differences. A similar test using a RH8 NFS server and client gets me close to 60 MB/s for a 1 GB file not cached by the file system on the server. Since network speed and disk read speed are not the bottlenecks, what or where is the limiting factor?
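For reference, the measurements described above amount to something like the following; the paths are assumptions, and drop_caches is shown as an alternative to flushing the cache by reading a different large file:

# on the server: local read speed (first run warms the cache, a second run measures cached throughput)
dd if=/export/media/test1G of=/dev/null bs=1M
# flush the page cache before the uncached run (works on CentOS 5's 2.6.18 kernel)
sync; echo 3 > /proc/sys/vm/drop_caches

# on the client, against the NFS mount
time cat /mnt/nfs/test1G > /dev/null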
I've set up a LAMP server for testing; it is up and running on CentOS 5.5.
I am now trying to set up a vsftpd server where local users can upload files to their home directory, so that Apache can serve web pages straight from the system users' home directories, giving users the ability to run their own web sites hosted off the main server [tutorial here: [url]]
So far I have been able to serve/display index.html files from the users' home directories [url], but I can't upload files to any user's home directory. Every time I try to upload a file with FileZilla I get this error message: 553 Could not create file. Critical file transfer error
I have searched online for problems similar to mine and so far I've tried a lot of the solutions, but none seem to work. I'm confused and don't know where I went wrong. I put the users in a group called ftpusers, and here are the permissions on the users' (test, ftpuser & testftp) home directories. Have a look and tell me where I went wrong :(
Also, the root directory the web pages are served from is called public_html; here are the permissions:
Here is my vsftpd.conf file; can someone check it to see if I made any errors in there:
I have 2 computers on the same network that I need to link together to transfer files: one is a web server, the other is a Minecraft server. The problem is that the file transfer will be constant, as the Minecraft server constantly updates files on the web server, and I don't want the traffic to go to the router and then come back to the web server. I want to add a second network card to each computer, link them together, and use this second connection to transfer the files. Is that possible?
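Yes, this is doable with a crossover or ordinary patch cable (gigabit NICs auto-negotiate the pairs) and a private subnet on the second interfaces. A sketch assuming both boxes run Linux and the new cards come up as eth1 (interface names and addresses are assumptions):

# on the web server
ip addr add 192.168.100.1/24 dev eth1 && ip link set eth1 up
# on the Minecraft server
ip addr add 192.168.100.2/24 dev eth1 && ip link set eth1 up
# then point the sync job at the 192.168.100.x address so the traffic bypasses the router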