I have a Dell PowerEdge 2850 running Ubuntu 10.04 Server with SSH and Samba. My problem is that I am unable to execute my adduser.sh script, which reads from a text file and adds users to the box and to Samba. I have run chmod a+x to make it executable and placed it in /usr/local/bin.
When I run sudo adduser.sh I get "sudo: unable to execute /usr/local/bin/adduser.sh: No such file or directory".
When I run adduser.sh I get "-bash: /usr/local/bin/adduser.sh: /bin/bash^M: bad interpreter: No such file or directory". I have been using Kubuntu as my home workstation for some time now, and professionally I have managed Windows servers, but since I was given the freedom to set up this server I chose Linux.
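From what I have read, the ^M in that second error usually means the script was saved with DOS (CRLF) line endings, which would also explain the sudo failure. This is what I plan to try, assuming dos2unix or sed is available (the script path is just mine from above):
Code:
dos2unix /usr/local/bin/adduser.sh
# or, without dos2unix, strip the carriage returns with sed:
sed -i 's/\r$//' /usr/local/bin/adduser.sh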
I am installing VMware-server-2.0.2-203138.i386 on FC 12
I get an error when executing vmware-config.pl initially. When I try again, it shows the following:
Start of message:
The following VMware kernel modules have been found on your system that were not installed by the VMware Installer. Please remove them then run this installer again.
vmmemctl vmci vmxnet vmhgfs vmblock
I.e. - 'rm /lib/modules/2.6.31.12-174.2.3.fc12.i686/misc/<ModuleName>.{o,ko}'
Execution aborted.
End of message
I tried removing them using modprobe, e.g.
modprobe -r -v vmmemctl; the response was "WARNING: All config files need .conf: /etc/modprobe.d/anaconda, it will be ignored in a future release."
Subsequently I tried vmware-config.pl again, but got the same error.
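Since modprobe only unloads modules rather than deleting the files on disk, I gather the installer really wants the rm it suggests, applied to each module it listed. This is the sketch I intend to try next, using the kernel version from the message (the depmod at the end is my own addition):
Code:
cd /lib/modules/2.6.31.12-174.2.3.fc12.i686/misc/
rm -f vmmemctl.{o,ko} vmci.{o,ko} vmxnet.{o,ko} vmhgfs.{o,ko} vmblock.{o,ko}
depmod -a   # rebuild the module dependency lists after removing the files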
I have a triple boot system with Ubuntu 11.04, Windows 7 and Windows XP on it. My disk configuration is something like this (because I think it will help you understand my problem): I have a 250GB hard disk which was originally partitioned under Windows XP into six partitions C, D, E, F, G and H (all NTFS), with the 'C' drive having Windows XP on it and the 'D' drive having Windows 7 on it.
I installed Ubuntu on the 'H' drive by partitioning it into two halves of approximately 20GB each. One partition is named 'New Volume' as per Windows naming scheme. On the other partition I installed my Ubuntu-11.04 OS. As per my plan I would be using this 'New Volume' for all my Ubuntu related data and software only. I want to install 'Ant' build tool for Java to be usable on my Ubuntu. For this, as described on the Apache Ant user manual I downloaded the 'apache-ant-1.8.2-bin.tar.gz' and extracted it. All this I did in the 'New Volume' drive.
Now, as per the Ant manual, I needed to change a file's ('/media/New Volume/ubuntu files/software files/apache-ant-1.8.2/bin/ant') permissions to executable; it is currently set to '-rw-------' and I want it to be '-rwx------'. I've tried various things such as chmod with sudo, and also tried changing the permission as the root user, but so far I've not been able to change the permissions for this file. However, if I copy the 'apache-ant-1.8.2' folder to the '/home' directory, then I am able to change the permission for the file in question.
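One explanation I have come across is that because the 'New Volume' partition is NTFS, chmod on individual files has no effect: with ntfs-3g the permissions come from the mount options, not from the files themselves. What I am considering is an /etc/fstab entry along these lines (the device name, mount point and uid/gid are guesses for my setup):
Code:
# example fstab line only; replace /dev/sda8 and the mount point with the real ones
/dev/sda8  /media/NewVolume  ntfs-3g  defaults,uid=1000,gid=1000,fmask=0022,dmask=0022  0  0
With fmask=0022 every file on the partition shows up as rwxr-xr-x, which would also cover the bin/ant script.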
I am dual booting XP and Kubuntu 10.10 from separate hard drives and I can't execute .exe files from my XP drive. I also have an internal data drive that I can execute .exe files from just fine. When I try to go into the properties for a file to mark it executable, I am unable to do so. I have run "sudo nautilus" to mark the files executable and that doesn't work either. I even changed /dev/sdb1 to be owned by myself instead of root, and that doesn't help. When I used "sudo nautilus" I also got error messages in my terminal whenever I tried to do anything (which I copied and attached to this post).
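One thing I still want to rule out, assuming the XP drive is NTFS, is the mount options: ntfs-3g ignores per-file chmod, so execute permission has to come from the mount itself. This is what I intend to test, assuming /dev/sdb1 is the XP drive and /mnt/xp exists (both are placeholders for my setup):
Code:
sudo umount /dev/sdb1
sudo mount -t ntfs-3g -o uid=$(id -u),gid=$(id -g),fmask=0022 /dev/sdb1 /mnt/xp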
How do I bind a script to an F key (F12) that will run as root even when no one is logged in? I have a headless server on a client's premises where it would be easier for them to press F12 to run this rarely needed script than to give them SSH instructions etc. I know this must be doable, but I can't get my Google-fu on for this question. The only way I can think of doing it is to touch a file whenever that key is pressed and have a script idly check for that file every few seconds in a loop, as in the sketch below.
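A rough sketch of that polling idea (the flag file and script path are placeholders I made up; how F12 actually touches the flag file, e.g. via a console keymap or a key daemon, is exactly the part I have not solved):
Code:
#!/bin/bash
# watcher started at boot as root; runs the real script whenever the flag file appears
FLAG=/var/run/run-my-script.flag          # hypothetical flag file touched when F12 is pressed
SCRIPT=/usr/local/sbin/rarely-needed.sh   # hypothetical script that needs root
while true; do
    if [ -e "$FLAG" ]; then
        rm -f "$FLAG"
        "$SCRIPT"
    fi
    sleep 5
done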
Some time ago I installed LAMP on my server, but now I need to execute .php files from the command line (in order to run some maintenance scripts for MediaWiki). It seems that the PHP files running on the server are run through some kind of "module" in apache2. Can I tell apache2 to run a .php file in command-line mode using that PHP module? Or should I install a fresh copy of php5? Won't that interfere with Apache or mangle the system?
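From what I have read so far, the command-line interpreter is packaged separately from the Apache module and the two coexist without touching each other, so this is the route I am leaning towards (the package name assumes a php5-era Debian/Ubuntu system):
Code:
sudo apt-get install php5-cli
# then, from the MediaWiki directory, something like:
php maintenance/update.php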
I started setting up my 3rd Ubuntu server, running Ubuntu Linux 9.04 64-bit. I have configured the server to allow root access and am using root to execute this file. As you can see from the PuTTY screenshot, the file exists but refuses to load. I am also able to nano the file. I have tried moving the file to /root/ and still had no luck.
I used to execute the command setrcs /data/dev/projects on a Solaris machine. All the files in the given path then became visible from the current directory (whichever directory I was in), so that I did not need to copy the files required for compilation from that path into the current directory.
For example, suppose I am writing C code in x.c in the directory /root/test/ and I do #include <bdm.c> in x.c. When compiling, I do not need bdm.c in the current directory, because I have executed setrcs /data/dev/projects and the compiler automatically looks for bdm.c in /data/dev/projects. But I am not able to execute the command setrcs on Linux. Is there a command with similar functionality, or a different command on Linux that serves my purpose?
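What I am hoping exists is something along the lines of the compiler's include search path, which as far as I understand can be set per invocation or once per session (a sketch, assuming gcc and the paths from my example):
Code:
# per compile:
gcc -I/data/dev/projects -o x /root/test/x.c
# or once per shell session, so every compile picks it up:
export C_INCLUDE_PATH=/data/dev/projects
gcc -o x /root/test/x.c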
I've just installed Subversion. I need to create a script /etc/init.d/svnserve that will start at boot time. I want to use start-stop-daemon --start so I can track my process and eventually kill it using start-stop-daemon --stop. My problem is that I can't get it to work, and the documentation shows no example.
I've replaced $DAEMON with the whole line svnserve -d -R -r $REPO_ROOT and got "-d is not an option". I'm not quite sure what to do at that point. If someone has some experience with start-stop-daemon it would be great.
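For reference, this is the minimal sketch I have been experimenting with; the repository root and paths are mine, and the detail I believe matters is that svnserve's own options go after the "--" separator instead of being glued onto --exec:
Code:
#!/bin/sh
# /etc/init.d/svnserve - minimal sketch, not a full LSB init script
DAEMON=/usr/bin/svnserve
REPO_ROOT=/var/svn/repos            # my repository root
PIDFILE=/var/run/svnserve.pid
case "$1" in
  start)
    start-stop-daemon --start --pidfile "$PIDFILE" --make-pidfile --background \
        --exec "$DAEMON" -- -d --foreground -R -r "$REPO_ROOT"
    ;;
  stop)
    start-stop-daemon --stop --pidfile "$PIDFILE"
    ;;
esac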
My question is: I am trying to install php5 and add mcrypt, but it is unable to run the "aclocal" command. It shows this message.
I have checked the path in /etc/profile and I'm sure that I have added the path as below. Besides that, before the "aclocal" command I was able to run another command, "phpize", which ran successfully after I added the path as below...
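One thing I have since learned is that aclocal is shipped by automake rather than by PHP, so if the command is simply missing, installing the autotools would be my first check (the apt line is an assumption; on other distros the package manager differs):
Code:
which aclocal || echo "aclocal is not on the PATH"
sudo apt-get install automake autoconf libtool   # Debian/Ubuntu; use yum on RHEL/CentOS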
I am trying to move our current system running on Mason, Apache 1.34 and mod_perl 1.0 to Apache 2 and mod_perl 2.0. I am currently unable to execute the Mason files; a test.pl file I tried does execute. I have referred to multiple forums, http://www.masonhq.com/?FAQ:Components and http://perl.apache.org, but so far the Mason files are displayed as plain text instead of being executed. Kindly let me know if anyone has had a similar issue and how you went about resolving it.
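In case it helps pin down the issue: from what I understand, Mason pages showing up as plain text usually means Apache never hands them to the Mason handler. Under mod_perl 2 the stanza I believe is needed looks roughly like this (the paths and the .html match pattern are assumptions about our layout):
Code:
PerlSetVar MasonCompRoot /var/www/mason      # assumed component root
PerlSetVar MasonDataDir  /var/cache/mason    # assumed data/cache directory
<LocationMatch "\.html$">
    SetHandler perl-script
    PerlResponseHandler HTML::Mason::ApacheHandler
</LocationMatch>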
I installed Wine, and I installed one of my files, ccnaexploration.exe, using Wine. After that I ran it from Applications > Wine > Programs > exploration4.html. It opened in the Google Chrome browser and I clicked "Launch course", but it didn't work after that. I tried with Firefox as well, and it didn't work there either. I installed another .exe file using Wine and that one works; I only have a problem with exploration4.html. I have posted the .html image as well, so you can check it out.
I am trying to execute a 4GE file using a command something like "/usr/bin/ksh <path of the file> <some arguments>", e.g. /usr/bin/ksh /home/abc.4ge S "./xyz". I am able to execute the 4GE without specifying "/usr/bin/ksh" in the command, in which case it runs in the ksh shell itself. But when I try to run it explicitly via the path of the shell, it gives me an error like "/usr/bin/ksh: /home/abc.4ge: cannot execute". I did check the permissions, and the file has execute permission.
I have a problem starting clamd: it is unable to execute setgroups(). The /etc/group and /etc/passwd files are world-readable. Here is the output after starting clamd:
sudo clamd
ERROR: setgroups() failed.
I'm not much of an expert with Debian (3.2.68), especially when it comes to installing stuff... I tried to execute a 451 MB *.sh file for the first time, but it displayed this:
Code:
root@Poulpe:/home/ambroise# sh gog_hatoful_boyfriend_2.0.0.2.sh
Verifying archive integrity... All good.
Uncompressing Hatoful Boyfriend (GOG.com) 100%
Collecting info for this system...
Operating system: linux
CPU Arch: x86_64
trying mojosetup in bin/linux/x86_64
USING
PANIC Initial setup failed. Cannot continue.
Error: Couldn't run mojosetup
It seems that mojosetup is open-source software included with all recent GOG games, but maybe it can't run on Debian? Or could it be more of a directory issue?
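One thing I am tempted to try, assuming the .sh is a standard makeself archive like other GOG installers, is unpacking it without running mojosetup at all and poking at what is inside (--noexec and --target are makeself options, not anything GOG-specific):
Code:
sh gog_hatoful_boyfriend_2.0.0.2.sh --noexec --target hatoful_extracted
ls hatoful_extracted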
I've checked out a Subversion project with C++ source files in NetBeans 6.8 on Red Hat 5.5. My machine dual boots Windows XP and RHEL 5.5, so I checked out the project into a folder called winshare, which is a shared drive/partition (E: under XP) allowing both operating systems to access the contents. I also have Fedora as a virtual machine on XP and wanted to be able to work on the source seamlessly whether using Fedora or RHEL.
The problem is that NetBeans is able to build the source just fine, but I can't seem to run the generated executable. It has -rw-rw---- permissions and the owner is the logged-in user (let's say user1), but no matter what I do, whether I change permissions as user1 or as root, issuing chmod 777 -R /dir/where/file/is has no effect whatsoever on the executable, or on any .cpp or .h files (not that I need execute permissions on .cpp files, just to make the point).
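My current suspicion, and it is an assumption since winshare is the partition shared with XP and so presumably NTFS, is that chmod simply cannot change anything there: with ntfs-3g the permissions are fixed by the mount options. A quick check, plus the kind of fstab line I think would give executables the x bit (device and mount point are guesses):
Code:
mount | grep winshare     # see what options the partition is currently mounted with
# example fstab entry giving files 755 (i.e. executable):
# /dev/sda5  /winshare  ntfs-3g  defaults,uid=1000,fmask=0022,dmask=0022  0  0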
So I am trying to install Plastic 2.8.148.0 on a virtual server. I have the install package, which I copied over to the machine using WinSCP. I then gave the file executable permissions using the command "chmod +x PlasticSCM-professional-2.8.148.0-linux-installer.bin". But when I try to install the package using the command "sudo ./PlasticSCM-professional-2.8.148.0-linux-installer.bin" it gives me the following message.
sudo: unable to execute ./PlasticSCM-professional-2.8.148.0-linux-installer.bin - No such file or directory
Why isn't it recognizing it? I know the file is there because I was able to change its permissions and I can see it with the ls command.
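From what I have been able to dig up, "No such file or directory" on a file that clearly exists usually means the loader or interpreter the binary needs is missing, for example a 32-bit installer on a 64-bit system without the 32-bit runtime libraries. This is the check I am planning to run (the ia32-libs package name is an assumption for a Debian/Ubuntu system of that era):
Code:
file ./PlasticSCM-professional-2.8.148.0-linux-installer.bin   # shows whether it is a script or a 32/64-bit ELF binary
# if it reports "ELF 32-bit" on a 64-bit system, the 32-bit libs may be missing, e.g.:
# sudo apt-get install ia32-libs    # assumption: Debian/Ubuntu naming of that era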
I am using Ubuntu 10.04 LTS. Two days ago I updated using Update Manager. After that I cannot boot Ubuntu. When I try to boot, the system shows the message "Ubuntu is running in low-graphics mode. Your screen, graphics card and input device settings could not be detected correctly. You will need to configure these yourself", but I cannot configure it. I cannot boot into 'recovery mode' either.
/var/log/boot.log
Code:
fsck from util-linux-ng 2.17.2
/dev/sda6: clean, 304282/1680960 files, 2964945/6723194 blocks
init: Failed to spawn ufw pre-start process: unable to execute: No such file or directory
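My working theory, and it is only a guess, is that the update was interrupted and left packages half-configured, which would explain both the graphics failure and the ufw pre-start job failing to execute. If I can reach a text console (Ctrl+Alt+F1 at the failsafe screen) or a live-CD chroot, this is the sequence I intend to try:
Code:
sudo dpkg --configure -a               # finish any interrupted package configuration
sudo apt-get -f install                # repair broken dependencies
sudo apt-get install --reinstall ufw   # since its pre-start process could not be executed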
I had a problem on Ubuntu when running "sudo apt-get dist-upgrade" and wanted to report how I solved it. Hopefully this helps anybody with similar problems.
I always got the error message:
Code:
The problem is in libxml-sax-perl.postinst which does not seem to be executable.
I did not install any perl packages manually by cpan and found the solution in http://ubuntuforums.org/archive/inde...t-1342009.html where it was only one part of a bigger problem.
I created a backup file (always a good idea) of my libxml-sax-perl.postinst:
Code:
I deleted the old file:
Code:
Created a new file:
Code:
With content copied from the link mentioned above (see that thread for the exact text).
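Putting the whole sequence together as commands, as a hedged reconstruction (the directory is where dpkg keeps maintainer scripts; the editor step is where the content from the linked thread goes):
Code:
cd /var/lib/dpkg/info
sudo cp libxml-sax-perl.postinst libxml-sax-perl.postinst.bak   # backup copy
sudo rm libxml-sax-perl.postinst                                # remove the old file
sudo nano libxml-sax-perl.postinst      # recreate it, pasting in the content from the linked post
sudo chmod +x libxml-sax-perl.postinst  # the original complaint: the script was not executable
sudo apt-get dist-upgrade               # retry the upgrade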
I have a web server in my kitchen with Apache running on it. Since the upload speed is quite low due to my ISP, I would like to execute a bash script that uploads a file to another server, triggered through a website (which is htaccess-protected). The idea in general: someone with access to my website browses through a folder, copies a file path into an input form and presses "upload". Rather than executing a bash script directly, I could have a cron job running in the background that finds the path and then uploads the file to the other server, on which I have user space and which is accessible via sftp/ssh. The file would then be erased after a couple of days or so. That person would be able to access the file at higher speed some time later, without logging in via SSH and doing all of that manually.
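A rough sketch of the cron-job variant (the queue file, paths and remote host are all placeholders I made up; the web form would just append one requested path per line to the queue file):
Code:
#!/bin/bash
# upload-queue.sh - run from cron every minute; copies queued files to the remote host
QUEUE=/var/www/upload-queue.txt              # the web form appends file paths here
REMOTE=user@other-server:/home/user/public   # hypothetical sftp/ssh destination
[ -s "$QUEUE" ] || exit 0                    # nothing queued, nothing to do
while IFS= read -r path; do
    [ -f "$path" ] && scp -q "$path" "$REMOTE"/
done < "$QUEUE"
: > "$QUEUE"                                 # clear the queue once processed
The matching crontab entry would be something like: * * * * * /usr/local/bin/upload-queue.sh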