Server :: Could Not Open Requested SVN Filesystem?
Nov 17, 2009
Ah, the most dreaded SVN error of them all. It's not even showing up in my Apache logs, either, just on trying to commit.
When I try to add a file to the repository via the following command, I get the error:
Code:
add -N F:\beetlemed-1.7\workspace\rootClient\src\com\beetlemed\webapp\data\ContactInfo.as
A F:/beetlemed-1.7/workspace/rootClient/src/com/beetlemed/webapp/data/ContactInfo.as
commit -m "no more broken commits?" (24 paths specified)
code....
It seems that this error only shows up when I try to add a file to the repo, not when I try to update. I've already run chmod +rwx -R repository, so it should have all of the permissions it could possibly need. The only thing I don't know about is adding it to a group. I don't think my server has an apache group, so I wouldn't know where to start and what to do with that.
This problem happens from time to time, but I can't seem to shake it this time. I'm running SVN over HTTPS on Apache 2.x.
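For what it's worth, when the repository is served through Apache, "could not open the requested SVN filesystem" on commit (but not on update) often comes down to the Apache worker account not being able to write inside the repository's db directory. A rough sketch of checking and fixing the ownership; /path/to/repository is a placeholder, and the account name depends on the distro (www-data on Debian/Ubuntu, apache on Red Hat/CentOS):
Code:
# Which user/group does Apache actually run as? (config paths are the usual defaults)
grep -Ei '^(User|Group)' /etc/apache2/apache2.conf /etc/httpd/conf/httpd.conf 2>/dev/null
# Hand the whole repository to that account, e.g. www-data:
chown -R www-data:www-data /path/to/repository
chmod -R u+rwX /path/to/repository
A blanket chmod +rwx gives everyone the bits, but if the files are still owned by the user who created the repository, Apache can still be blocked by quota or SELinux style restrictions, so matching the owner to the worker account is usually the cleaner fix.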
Ubuntu has officially jumped the shark. I did an update...restarted and got this:
Quote:
mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /sys on /root/sys failed: No such file or directory
mount: mounting /proc on /root/proc failed: No such file or directory
Target filesystem doesn't have requested /sbin/init.
No init found. Try passing init= bootarg
I am going to be involved with a massive filesystem copy from a local to a remote server in the next couple of weeks. There are ten filesystems involved in this process. All except one are one hundred gigabytes in size, with the remaining one at twenty gigabytes. The cp command with the -pr options will be used to copy the directories to their new location. A speed test involving ten directories was done to determine the average amount of time it would take to complete the process. The ten directories used in this test ranged in size from 2.3 gigabytes to 4.3 gigabytes. The results indicated the average time to complete a copy was around one minute and thirty seconds.
The question I have is the following: Is it better to interactively go to each filesystem and run the cp -pr command there, or should I write a script that will automatically go to each filesystem and run the copy?
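If you do script it, a minimal sketch might look like the following. The mount points and destination are hypothetical placeholders; the main advantage over typing the commands interactively is that each copy's result gets checked and logged:
Code:
#!/bin/sh
# Hypothetical source mount points and destination -- substitute your real paths.
SRC_LIST="/fs01 /fs02 /fs03"
DEST=/mnt/remote          # e.g. an NFS mount of the remote server

for fs in $SRC_LIST; do
    echo "Copying $fs ..."
    cp -pr "$fs" "$DEST/" || echo "WARNING: copy of $fs failed" >&2
done
Many people would reach for rsync -a here instead, since a second run only picks up what is missing, but the loop above sticks with the cp -pr you have already timed.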
I used my Windows box to connect to FTP (a Fedora box) and I keep getting an error saying "The requested name is valid, but no data of the requested type was found." Does anyone know why I can't connect?
I installed Squid on my CentOS box and I tried to follow some guides, but it still gives the same error:
Quote:
ERROR: The requested URL could not be retrieved
While trying to retrieve the [URL]...
The following error was encountered: Access Denied.
Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect.
[Code]...
ZON-BFC0 is the name shown for my wireless connection, and the actual IP I'm using there is the IPv4 address I found with a quick "ipconfig" at the Windows command prompt.
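"Access Denied" from Squid usually means no http_access rule matched your client before the final deny. A sketch of what allowing the LAN might look like, assuming your subnet is 192.168.1.0/24 (substitute the network you actually saw in ipconfig):
Code:
# These two lines belong in /etc/squid/squid.conf ABOVE the final
# "http_access deny all" line (order matters):
#
#   acl localnet src 192.168.1.0/24
#   http_access allow localnet
#
# Then check the syntax and apply it:
squid -k parse && service squid reload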
Today there was a problem with our ISP, but we fixed it. Other machines reach the internet fine now, but when we go through the Squid proxy machine it gives this message to all users. Which config fields or values should I check? I have also reset the cache (emptied the folder), restarted the machine and the service, and cleared the logs. It is on CentOS 5.4.
See the message below:
ERROR: The requested URL could not be retrieved
While trying to retrieve the URL: [URL]
The following error was encountered: Read Error
The system returned:
(104) Connection reset by peer
An error condition occurred while reading data from the network. Please retry your request.
Your cache administrator is root.
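Since "(104) Connection reset by peer" is Squid reporting that its own outbound connection was dropped, a reasonable first step is to test the proxy box's connectivity and DNS independently of the clients. A rough sketch; example.com and port 3128 are placeholders for whatever you actually use:
Code:
# Run these on the Squid machine itself, not on a client:
cat /etc/resolv.conf                        # does the box still point at working DNS servers?
host www.example.com                        # name resolution from the proxy host
wget -O /dev/null http://www.example.com/   # direct fetch, bypassing Squid
# Then the same URL through Squid, to compare:
squidclient -h localhost -p 3128 http://www.example.com/
tail -n 50 /var/log/squid/cache.log         # any upstream or DNS errors logged?
If the direct fetch fails too, the problem is on the box or its route to the ISP, not in squid.conf.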
I am in the process of writing a web application using Apache, mod_python and Cheetah. I installed Apache2, mod_python and Cheetah, and also enabled the userdir module in Apache2 so that I can host the web pages inside my public_html folder.
The public_html folder has a folder named 'site' which gets displayed when I type "url" in the browser. There are subfolders inside this 'site' folder and the site folder also has an index.html page inside it.
But when I click on the site folder in the browser, I get a 'Requested url /~myusername/site/ not found'. There are files inside the folder with 777 permissions and still I get this error.
I use Ubuntu Jaunty with the following configuration - 'Apache/2.2.11 (Ubuntu) PHP/5.2.6-3ubuntu4.1 with Suhosin-Patch mod_python/3.3.1 Python/2.6.2 Server at localhost Port 80'.
These are the lines I have added to the /etc/apache2/sites-available/default file
Code:
<Directory "/home/myusername/public_html/site/"> SetHandler mod_python PythonHandler mod_python.publisher PythonDebug On PythonAutoReload On
I have a crippled file system after a disk failure and an attempt at repairing it. Currently the web server is running, but I cannot open the filesystem to explore and view files. I am going to do a fresh install this weekend and upgrade the system (installing a new disk and rebuilding the entire server). I have two questions I hope this forum can help with.
1). Can someone tell me how I could copy the httpd.conf file so that I can view my settings from the terminal? (This took me forever to get right for my environment, to get shtml and cgi files to run correctly, and I don't wish to lose it.) I am running Fedora 8, and I'm also not sure of the location. (I'm fairly new to Linux and not great with the terminal, however that is the only access working.) It would be great if someone added the terminal commands. If I can copy it, I could email it to my email address.
2). In the past when I have copied web files (web sites) over to the www directory, the permissions would be incorrect. Is it possible to write some of the web sites to CD (the disc burner is working) and then copy them to the new system with the permissions correct? Could you give a little how-to? All the web sites are in the www directory; I think that is etc/www/"webdomain name". One thing I might add is that all drives will fail, it is just a matter of when...
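For question 1, on Fedora the main Apache config is normally /etc/httpd/conf/httpd.conf, with extra snippets under /etc/httpd/conf.d/. A sketch of viewing and copying it from the terminal; the backup directory name is just an example:
Code:
less /etc/httpd/conf/httpd.conf                       # view it (q quits, / searches)
mkdir -p ~/apache-backup
cp -p /etc/httpd/conf/httpd.conf ~/apache-backup/
cp -p /etc/httpd/conf.d/*.conf ~/apache-backup/ 2>/dev/null
# To get the files off the box: scp ~/apache-backup/* user@otherhost:~/  (or burn them)
#
# For question 2: a plain data CD won't keep Unix owners/permissions, so tar the
# sites first (adjust /var/www to wherever they actually live) and burn the archive:
tar czpf ~/websites.tar.gz -C /var/www .
# ...and on the new system, unpack with permissions preserved:
# tar xzpf websites.tar.gz -C /var/www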
Whenever I want to (for example) save a file from the internet with Firefox, I can't, because Firefox does not open the Dolphin "save as" window. Also, when I want to save my bookmarks in Firefox, I can't, because Firefox cannot open the "save as..." window.
There is no failure message, no crash, just nothing, as if the request were simply ignored. Any ideas what might be wrong with my system? The system is openSUSE 11.3 64-bit with KDE 4.6 and all current updates.
I have installed Ubuntu on my PC. I made three partitions: one for the system, one for data and one for swap; two of them were ext4. After some time I reinstalled Ubuntu, but this time I chose not to format the second partition, just to mount it as ext4. After that I cannot open my files. Checking with GParted shows 2GB used, but df shows 188MB, and in Properties it says ext3/ext4 filesystem. I used chown and chgrp but that didn't help. Please help, this data is very important; I cannot lose it.
I am very new to Linux, and I have a question regarding the filesystem check (fsck). The power recently went out, and when I tried to restart Linux the following error appeared:
/dev/sda1 contains a file system with errors, check forced
It then goes on to say:
An error occurred during the file system check. Dropping you to a shell; the system will reboot when you leave the shell. Give root password for maintenance (or type Control-D to continue)
I wasn't sure what to do, but I checked some other online forums and they suggested running fsck manually, so I typed in the root password and used the command "fsck -A -V ; echo == $? ==". It then gave the following message:
WARNING!!! Running e2fsck on a mounted filesystem may cause SEVERE filesystem damage
Would you like to continue (y/n)?
Again, I wasn't sure what to do, so I just answered no. I then manually turned off the computer and was prompted at the beginning to press Alt-3. I was brought to another screen which informed me that one of the drives was degraded and suggested rebuilding the array. I tried doing this, but it still brings me back to the original error, "/dev/sda1 contains a file system with errors, check forced," and the process continues.
Also, when I tried to rebuild the array, I didn't back up any of the data in our home directory before doing so (which was probably a big mistake). After being prompted to type the root password, I was able to give the ls command and look at all the directories; the home directory where our data was stored was empty and I am afraid I may have lost some information. Is there a possibility that data was lost when I was trying to rebuild using the old drives?
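For what it's worth, the usual way out of that loop is to run the check with the filesystem not mounted, for example from the distribution's rescue or live media, rather than saying yes to e2fsck on a mounted root. A rough sketch, assuming /dev/sda1 really is a plain ext3/ext4 partition; if it is a member of the RAID set the installer mentioned, check the assembled md device instead of the raw member:
Code:
# From a live CD / rescue shell, with the filesystem NOT mounted:
umount /dev/sda1 2>/dev/null     # no-op if it was not mounted
e2fsck -f -y /dev/sda1           # -f force a check, -y accept the suggested fixes
# If the disk belongs to a software RAID set, assemble it first and check the md device,
# e.g.:
# mdadm --assemble --scan && e2fsck -f -y /dev/md0
As for the empty home directory, it is hard to say from here; mounting the filesystem read-only from the rescue environment and looking around before any further repairs is the safest next step.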
Since upgrading to Lucid, I am getting the following dialog warning on login: 'Could not apply the stored configuration for monitors X Server does not support size requested'. I'm using the current proprietary NVIDIA graphics driver with dual heads. My display is fine, but the warning every time I log in is annoying. After googling around I found this thread: [URL]. I tried going to Monitor Preferences as suggested. My resolution as displayed in the default tool is set to 3840 x 1200, which I suspect is what forces the dialog, but I can't change the resolution, refresh rate or rotation from the Monitor Preferences dialog box. dino99's response (in the referenced post) about xorg.conf not being needed anymore seems relevant. How can I resolve this issue and get rid of this annoying warning? Is there a configuration I can update with a supported resolution to placate Lucid?
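One thing that often clears that particular dialog is moving aside the stored monitor layout GNOME saved earlier; on Lucid it normally lives in ~/.config/monitors.xml, and moving it rather than deleting it keeps a copy if this turns out not to be the cause:
Code:
mv ~/.config/monitors.xml ~/.config/monitors.xml.old   # GNOME rewrites it on next login
# then log out and back in; nvidia-settings can be used to lay the heads out again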
I'm doing an upgrade of a web-based application for a charity organization. To test the changes we're making to the code, I wanted to set up a server on my home PC to host a copy of the site. I've configured Apache, PHP and MySQL like the server that's currently hosting the real app, but I have a problem: the first page is a login screen. This, of course, has a login form like so:
I am working on an NFS server embedded on a PowerPC platform (4650EX, 512MB RAM, 1Gb Ethernet), but I can't mount my exported folders from my client. Here are the messages:
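Without knowing the exact client-side error it is hard to be specific, but a few standard checks on both ends usually narrow an NFS mount failure down; the server name and paths below are placeholders:
Code:
# On the PowerPC server:
exportfs -v                          # is the directory actually exported, and to which hosts?
rpcinfo -p                           # are nfs, mountd and the portmapper registered?
# On the client (replace "nfsserver" and the export path):
showmount -e nfsserver               # can the client see the export list at all?
mount -t nfs -o nolock nfsserver:/export/path /mnt   # nolock often helps on small embedded setups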
I have sendmail running on my CentOS 4.6 box; my LAMP server also runs on it. I want to send mail through the PHP mail() function. When I execute the PHP page that fires the mail function, it takes a very long time, say even a minute, and at last displays that the message was sent successfully. Suppose the destination address is [URL].... I did not get any mail there. My server is running on a LAN. I checked the status of sendmail and it shows that it is running. When I issue "nmap localhost" it shows that SMTP port 25 is open, but when I issue "nmap myserver" (192.168.1.20 myserver, written in the hosts file), it does not show that the SMTP port is open.
I checked /var/log/maillog; one person in my previous post advised me to look there. It shows that the message was accepted for delivery, but I do not get any mail at the destination, not even in the spam folder. One more point of confusion: my server is on a LAN, and if I am able to open the SMTP port on it at all, do I also need to open the SMTP port on my router (which connects my LAN to the internet)? I think not, because SMTP is an application-layer protocol; it wraps my mail in IP packets, which the router just needs to forward. Am I right?
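Two hedged guesses that may be worth checking: the stock CentOS sendmail only listens on 127.0.0.1 unless the DAEMON_OPTIONS line in sendmail.mc is changed (which would explain nmap seeing port 25 on localhost but not on 192.168.1.20), and mail to an outside address still has to be relayed out somewhere, which is where it often silently stalls on a LAN-only box. Some quick checks:
Code:
netstat -tlnp | grep :25                    # listening on 127.0.0.1:25 only, or on 0.0.0.0:25?
grep DAEMON_OPTIONS /etc/mail/sendmail.mc   # Addr=127.0.0.1 restricts it to localhost
mailq                                       # anything stuck in the outbound queue?
tail -n 50 /var/log/maillog                 # look at the stat= field after "accepted for delivery"
As for the router: outbound SMTP from your server to the internet has to be allowed (and some ISPs block outgoing port 25 entirely), but nothing needs to be opened inbound just to send.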
With so many filesystems available which one should I use to make backups? All I care about is reliability and stability. I don't care at all about portability.
When I try to boot to OpenSUSE I get the following error during boot-up:
unknown filesystem type 'reiserfs'
could not mount root filesystem - exiting to /bin/sh
$
This only started happening quite recently - before this I could boot to Linux quite happily.
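An "unknown filesystem type 'reiserfs'" error at that stage usually means the initrd was rebuilt (for example by a kernel update) without the reiserfs module in it. On openSUSE of that vintage the module list comes from /etc/sysconfig/kernel, so a sketch of the fix from the install DVD's rescue system might look like this; the device name is a placeholder for your real root partition:
Code:
# From the rescue system: mount the root filesystem and chroot into it
mount -t reiserfs /dev/sda2 /mnt          # replace /dev/sda2 with your root partition
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
# Make sure reiserfs is listed in INITRD_MODULES, then rebuild the initrd:
grep INITRD_MODULES /etc/sysconfig/kernel
mkinitrd
exit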
I have a following problem: Recently my drive with Ubuntu 9.4 has mysteriously stopped working, i.e. when I switch the computer on it informs me that GRUB didn't find the filesystem. Well, I suppose it happens.
First, I thought it was the drive dying, but I popped it into an external enclosure and HDTune told me the drive was fine. Wanting to recover the files on the drive before reinstalling, I first tried to mount it in said external enclosure under Windows (I have the Windows Ext2 driver installed, which used to work just fine). This time, however, the drive gets assigned a letter, but upon opening it Windows popped up an error saying that the drive was not formatted and asking whether I would like to format it.
Unfazed by this streak of failures I tried to mount it under Linux but, alas, to no avail. I must have tried every single -t option of the mount command, but it still won't budge and let me mount it.
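Before giving up on the data, it may be worth checking whether the partition table and superblock are still intact. A sketch, with /dev/sdb and /dev/sdb1 standing in for whatever device the enclosure actually shows up as:
Code:
dmesg | tail -n 20              # what device name did the enclosure get?
fdisk -l /dev/sdb               # is the partition table still there?
blkid /dev/sdb1                 # does anything still identify it as ext3/ext4?
# If the primary superblock is damaged, list the backup locations and try one read-only:
mke2fs -n /dev/sdb1             # -n only PRINTS what it would do (incl. backup superblocks), writes nothing
e2fsck -n -b 32768 /dev/sdb1    # -n = read-only check; 32768 is a common backup location,
                                #      use one of the numbers mke2fs -n actually printed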
I have an HP ProLiant server and I am new to Linux. The problem I am facing is that when I start the system it powers on normally, but after the Linux start-up options, where you select whether to start normally or in failsafe mode, whichever of the two options I select it comes up with the following messages on the screen and gets stuck there.
[messages]
I would have attached the snapshot I took with a camera, but there is no option for attaching files over here.
I've had a look at some similar threads, but as I'm very new to Linux they're already a bit technical for me. Sorry, this calls for someone with patience. I gather from other threads that disconnecting an external drive without unmounting is a no-no, and this seems to be the likely cause. Now the disk is read-only and I'm unable to change any settings through the usual control panel on Ubuntu. I'm just not familiar with the terminal instructions. I tried to cut and paste a few command lines from other threads, but I got some warnings that proceeding could damage data. Like this one: WARNING! Running e2fsck on a mounted filesystem may cause SEVERE filesystem damage.
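That warning appears because the command was being run while the drive was still mounted; the safe order is to unmount first and only then check it. A minimal sketch, with /dev/sdb1 and the mount point as placeholders for your drive, and assuming it is an ext-formatted disk (an NTFS drive would want ntfsfix or a pass through Windows chkdsk instead):
Code:
mount | grep /media              # find the device and mount point of the external drive
sudo umount /media/yourdisk      # unmount it first (close any file managers using it)
sudo fsck -f /dev/sdb1           # now the check is safe; answer y to the suggested fixes
# unplug/replug (or remount) afterwards and it should come back read-write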
Are there any tools out there to scan, say, our file server (20TB) to check for things that shouldn't be there, like MP3s, videos, software crackers, etc.?
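There are dedicated audit tools, but plain find already covers a lot of it; a sketch that lists common media and suspect extensions under a placeholder path, biggest files first:
Code:
# Adjust /srv/fileserver and the extension list to taste; writes a report to a file.
find /srv/fileserver -type f \
     \( -iname '*.mp3' -o -iname '*.avi' -o -iname '*.mkv' -o -iname '*.mp4' \
        -o -iname '*.iso' -o -iname '*.exe' -o -iname '*.torrent' \) \
     -printf '%s\t%p\n' > /root/suspect-files.txt
sort -rn /root/suspect-files.txt | head -n 50     # biggest offenders first
This only matches by extension; anything suspicious can be checked with file(1) to see its actual type.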
We have a data transfer network drive, shared via NFS and Samba. But now I have the special requirement to make all of the files readable and writable, regardless of the permissions they had before. With ACLs I get the right permissions (via default values), but the standard Unix permissions override this (e.g. when a file has 644, it does not matter that the group has write permission in the ACL). Does anyone have an idea (other than chmod via a cron job)?
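One approach that avoids the cron job, at least for files arriving over Samba, is to have Samba force the mode bits at creation time, so the group bits (which double as the ACL mask on files with ACLs) never get stripped down to read-only; NFS-created files still follow the client's umask, so this is only half a solution. A sketch of the share section, with the share name and path as placeholders:
Code:
# In /etc/samba/smb.conf, inside the share definition:
#
#   [transfer]
#       path = /srv/transfer
#       read only = no
#       create mask = 0664
#       force create mode = 0664
#       directory mask = 2775
#       force directory mode = 2775
#
# Then reload Samba:
smbcontrol smbd reload-config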