General :: Mount.nfs Fails In Ubuntu / Works On Red Hat?
Aug 13, 2010
I want to log in locally to my Lucid (10.04) workstation and have my code saved over the network in my samba account. At work, all developers have samba user ids, and when we were running Red Hat we went through the following procedure to get set up.
* Open a shell session to the NFS server and execute the "id" command to get my samba user information.
Code:
$> ssh -l alberto our_nfs_server
$> id
uid=7090(alberto) gid=100(users) groups=100(users)
* Create a local login on my Linux workstation with the same username and uid.
Code:
$> sudo useradd -u 7090 -g 100 -d /home/alberto -s /bin/bash alberto
[code].....
Code:
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
mount.nfs: access denied by server while mounting our_nfs_server:/d0/homedirs/alberto

It does not work at all! However, the same procedure on a Red Hat workstation works fine, with no errors.
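A couple of diagnostics worth trying on the Ubuntu box, assuming the export path shown in the error; mount.nfs defaults can differ between distributions, so forcing NFSv3 is a reasonable first test:

Code:
# check that the server actually exports the directory to this client
showmount -e our_nfs_server
# try an explicit NFSv3 mount (version choice is an assumption)
sudo mount -t nfs -o vers=3 our_nfs_server:/d0/homedirs/alberto /home/alberto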
I've added my public key to the remote machine's authorized_keys file, and I can ssh over without a password. But when I try to mount the remote share using sshfs it -always- asks for my user's password. I have set PasswordAuthentication no in sshd_config, and when I mount the share as root it says "read: Connection reset by peer". My mount is being done as a user, so it shouldn't be a root authentication problem:

Code:
sshfs#bill@droog:/ /media/droog fuse user,noauto,gid=6,umask=007,cache=no,ServerAliveInterval=15,reconnect,allow_other,comment=sshfs 0 0

I can't mount as a user because /dev/fuse is not suid, and I'd rather not set it such.
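One hedged way to see why key auth is being skipped, assuming the paths below, is to run sshfs verbosely as the user; sshfs passes unrecognized -o options straight to ssh, so the key file can be forced explicitly:

Code:
# verbose mount attempt; key path is an assumption
sshfs -o sshfs_debug,IdentityFile=/home/bill/.ssh/id_rsa bill@droog:/ /media/droog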
I've successfully mounted a network share with mount.cifs for the past two years using fstab with a credfile.
[Code]....
Yesterday I moved this system to a new datacenter, but did not alter fstab or the credfile. The //server/share directory has IP rules in place, but these were updated with the new system IP while we moved the system. Now I am mysteriously unable to automount //server/share. The local error is 13 (permission denied). The Windows server we are mounting returned a code that is defined as "username is valid but password is incorrect". Again, no changes (content or permissions) were made to my credfile or fstab entry. I've restarted netfs a few times, including rebooting the system twice. What is baffling is that I can successfully mount //server/share via the command line:

Code:
mount -t cifs //server/share /mnt/mycooldir -o username=foobar,password=1234
The username and passwords are identical in credfile and the mount options - I copied & pasted username / password from the credfile itself.
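For reference, a minimal sketch of the fstab entry and credentials file layout that mount.cifs expects; the paths and the domain= line are assumptions, and a stray carriage return or trailing space in the credfile is a classic cause of this exact "valid user, wrong password" error:

Code:
# /etc/fstab entry (mount point and credfile path are assumptions)
//server/share  /mnt/mycooldir  cifs  credentials=/root/credfile,rw  0 0

# /root/credfile -- one key=value per line, no quotes, no trailing spaces
username=foobar
password=1234
domain=MYDOMAIN

# show hidden characters; a DOS line ending appears as ^M$
cat -A /root/credfile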
I'm having some problems bringing up my wireless card at boot time on my Red Hat Linux 9.0 box. After some investigation of the boot scripts, I noticed that it was having problems with the DHCP negotiation when the ifup wlan0 command is issued. I then tested the command outside of the boot process, and every time the command is issued the first time, dhclient fails to get any DHCP offers and exits with the message "Unable to obtain a lease on first try. Exiting."
If I then issue the same ifup command a second time, it successfully receives a DHCP offer from the router and gets the needed IP address, and after this the Internet connection on the box works fine. It does this consistently every time. Why is it failing to obtain a lease the first time?
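As a stopgap, assuming the card just needs more time to associate before DHCP starts, two things worth trying are a longer dhclient timeout and a simple retry; both are sketches, not a root-cause fix, and the config path is an assumption for this Red Hat vintage:

Code:
# give dhclient more time to collect an offer (value is an assumption)
# in /etc/dhclient.conf:
#   timeout 60;

# or simply retry the interface once if the first attempt fails
ifup wlan0 || ifup wlan0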
I've got a problem getting sudo to work for mounting things (e.g. a USB pen drive or optical discs). Details: the OS is Slackware 13.0, and this is the response to the sudo -l command:
Code:
User user1 may run the following commands on this host:
    (root) /sbin/shutdown -h now, /sbin/shutdown -r now
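That listing shows user1 is only allowed the two shutdown commands, so sudo refusing mount is expected. A minimal sudoers sketch granting mount rights; edit with visudo, and note the binary may live at /bin/mount or /sbin/mount depending on the system:

Code:
# added via visudo; paths are assumptions for Slackware
user1 ALL=(root) /sbin/mount, /sbin/umount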
I have an mdadm/lvm2 volume with 4 HDs that I created in Ubuntu 10.04. I just upgraded the computer to Ubuntu 10.10.
I redid the mdadm commands to get the volume up and running, then did mdadm --detail --scan > /etc/mdadm/mdadm.conf to regenerate the configuration file.
But now, every time I reboot, it tells me that the volume is not ready. /proc/mdstat shows that one disk of the volume is always "inactive" as md_d127. I need to stop this volume and reassemble the whole thing to get it working.
This is what I get out of mdadm --detail --scan and put inside /etc/mdadm/mdadm.conf:
And this is my /proc/mdstat on boot:
I need to run mdadm -S /dev/md_d127, mdadm -S /dev/md127, and mdadm -A --scan to get this volume working again.
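A commonly suggested fix for the stray md_d127 device on Ubuntu 10.10, assuming the array is now correctly described in /etc/mdadm/mdadm.conf, is to rebuild the initramfs so boot-time assembly uses the new config instead of a stale embedded copy:

Code:
# reassemble now (the commands described above)
mdadm -S /dev/md_d127
mdadm -S /dev/md127
mdadm -A --scan
# embed the updated mdadm.conf into the initramfs used at boot
update-initramfs -u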
This did not happen with Ubuntu 10.04. I'm really fearing the loss of my raid5 data now.
I'm attempting to mount a Windows dir to a mount point on my Linux VMware instance running on my Windows 7 machine. I am using a shell file to automatically mount the directory I want at boot. However, I'm finding that Linux always mounts a directory at the top of my C: file structure for some reason, and I can't figure out why.
Here's the dir structure:
C:/target (don't want to mount this, but this is what gets mounted)
C:/Users/me/target (this is what I want to mount to)
Here's my shell script:

Code:
mount.cifs //192.168.56.1/Users/me/target /mnt/target -o credentials=/root/credentials.auth,domain=mycomputer,uid=1001,gid=1001,rw
And here's what I get when I enter mount at the prompt:

Code:
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
If this means anything, my virtual machine has a single SCSI hard drive, which I believe is /dev/sda1. I don't see anything from mount that would indicate that the C:/target dir is getting hard-mounted somehow from /etc/fstab, but maybe I just don't understand how mounting works...
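One hedged workaround, assuming the Windows side exposes Users as the share and target is just a folder inside it: mount the share root and bind-mount the subdirectory, which sidesteps any server-side handling of the /Users/me/target suffix:

Code:
# mount the share root, then bind the wanted subfolder (paths are assumptions)
mount.cifs //192.168.56.1/Users /mnt/users -o credentials=/root/credentials.auth,domain=mycomputer,uid=1001,gid=1001,rw
mount --bind /mnt/users/me/target /mnt/target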
I'm trying to use a remote procedure call. When I call my server, it should invoke gammu and send an SMS. I've used the code in the following tutorial: [URL]. The commands for the uptime and for the greeting work perfectly, but when I write my own method on my server, it fails...
Code:
function uptime_func($method_name, $params, $app_data) {
    return `uptime`;
}

function greeting_func($method_name, $params, $app_data) {
    $name = $params[0];
    return "Hello, $name. How are you today?";
}

function gammu_func($method_name, $params, $app_data) {
    $text = $params[0];
    $number = $params[1];
    $result = "echo '$text' | gammu sendsms TEXT $number";
    exec("$result");
    return $result;
}

On my client (the one that calls the server) I see the output of $result, so my gammu_func is definitely being called. It just doesn't execute exec("$result").
I know the syntax is right because I tried it in a different PHP file. I think it has something to do with user rights; I don't think the server process has the privileges to run that command...
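A quick way to test that theory, assuming the RPC server runs under the web server's account (often apache or www-data): run the same pipeline as that user and see what it prints. The user name and phone number below are assumptions:

Code:
# run the gammu pipeline as the web-server user
su -s /bin/bash -c "echo 'test' | gammu sendsms TEXT +15551234567" apache

If this fails, gammu is probably missing a readable ~/.gammurc for that user, or that user lacks permission on the modem device (e.g. /dev/ttyUSB0).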
I am able to boot from the Ubuntu 10.04 LiveCD fine, but the Kubuntu CD fails to boot. Trying to run Kubuntu sends me to a command line, and after the first three steps of the install option I am again sent to the command line. I have burnt and verified several Kubuntu CDs and each time I get the same result. I am running Ubuntu, but I would like a fresh install of Kubuntu (I would rather not just install kubuntu-desktop).
I have a USB modem which I plug into my Ubuntu 10.10 system for a dial-up service. The modem is recognised by wvdial and dials properly, but the connection is not established. I get "Unable to run /usr/sbin/pppd" (although pppd is certainly in /usr/sbin), followed by "Check permissions, or specify a 'PPPD Path' option in wvdial.conf", and finally "Connected, but carrier lost", then it gives up.
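That first message usually means pppd isn't executable by the calling user. Two hedged checks, assuming wvdial is being run as a normal user rather than root:

Code:
# see whether pppd is setuid root / executable by your user
ls -l /usr/sbin/pppd
# classic fix (a security trade-off): make pppd setuid root, or run wvdial via sudo
sudo chmod u+s /usr/sbin/pppd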
I'm having an issue with a BIND server. After a restart (or at random, I assume whenever a cache expires), when I try to resolve any domain I get "Host yahoo.com not found: 2(SERVFAIL)". Eventually it starts working, and works fine till the cache expires again.
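Some hedged diagnostics for intermittent SERVFAILs; the log location is an assumption and depends on named's logging configuration:

Code:
# query the local BIND directly, then trace where resolution stalls
dig @127.0.0.1 yahoo.com
dig yahoo.com +trace
# watch for recursion/forwarder errors from named (log path is an assumption)
tail -f /var/log/messages | grep named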
System: tri-head, dual-card: GeForce 9500 GT, GeForce 8400 GS
Dual-boot: openSUSE 11.4, Ubuntu Natty
Driver: nvidia proprietary (260.19.44 in openSUSE, 270.30 in Ubuntu due to kernel version)
xorg.conf: same for both
Results: all three heads work just fine in Natty; the secondary screen fails in openSUSE:
Code:
[25.164] (EE) NVIDIA(1): Failed to initialize the NVIDIA GPU at PCI:2:0:0. Please
[25.164] (EE) NVIDIA(1):     check your system's kernel log for additional error
[25.164] (EE) NVIDIA(1):     messages and refer to Chapter 8: Common Problems in the
[25.164] (EE) NVIDIA(1):     README for additional information.
[25.164] (EE) NVIDIA(1): Failed to initialize the NVIDIA graphics device!
[25.164] (II) UnloadModule: "nvidia"
[25.164] (II) UnloadModule: "wfb"
[25.164] (II) UnloadModule: "fb"
But nothing is jumping out at me in the output of dmesg, and I don't see any additional system or kernel logs in /var/log; I'll google some more on that front. One other fun fact: nvidia-settings fails to run in openSUSE unless I launch it under gdb. Then it starts up and runs as expected (and the second screen ain't there, as expected). Here are (what I think are) the relevant items: Xorg.0.log - Pastebin.com, dmesg - Pastebin.com, xorg.conf - Pastebin.com. Additional output available upon request.
After a massive meltdown with the upgrade, I finally got the system put back together; I had to reinstall grub. It now defaults to a version that says "Generic - pae". When I choose this from the grub menu, it always hangs on the Ubuntu splash screen; however, when I choose "Previous Linux versions" from the grub menu and then choose the Ubuntu-generic entry, everything works fine - other than my initial bad impression of Unity - but it works, at least.
Why doesn't my default choice in grub work, and what do I need to do to fix it? Below is a cut and paste from /boot/grub/grub.cfg. The first entry is the one that does NOT work, and the first entry after "submenu" is the entry that DOES work.
menuentry 'Ubuntu, with Linux 2.6.38-8-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
    recordfail
    set gfxpayload=$linux_gfx_mode
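Until the pae kernel itself is sorted out, one hedged workaround is to make the working entry the default; GRUB 1.99 accepts a "submenu title>entry title" value for GRUB_DEFAULT, though the exact titles below are assumptions and must match grub.cfg:

Code:
# in /etc/default/grub (titles are assumptions):
#   GRUB_DEFAULT="Previous Linux versions>Ubuntu, with Linux 2.6.38-8-generic"
# then regenerate grub.cfg
sudo update-grub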
I'm trying to install Mandriva Spring 2010 for a friend on his laptop after MS Windows crashed. The installation appeared to work, but I've got an odd networking problem: Firefox is unable to load URLs. Every URL I try returns a "server not found" error.
When I drop into bash I'm able to do the following:

Code:
ping 66.102.9.103
ping google.com
However, when I try:

Code:
wget http://google.com
I just get a message telling me that wget is "unable to resolve host address 'google.com'". This is odd: ping is able to resolve google.com, but wget isn't. I assume that Firefox and Konqueror both have the same problem. Could it be because I've specified the http protocol?
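The protocol shouldn't matter; name resolution happens before any HTTP traffic. A couple of hedged checks on the resolver path, which ping and wget may be exercising differently:

Code:
# what the libc resolver sees
cat /etc/resolv.conf
getent hosts google.com
# check the lookup order (a broken 'hosts:' line can split ping/wget behaviour)
grep hosts /etc/nsswitch.conf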
I can't get a program (wbar) to run directly from my user account; it fails saying "Image not found -> maybe using a relative path?". But if I run su -c "wbar", it shows up and manages to load the image. I think it has something to do with Imlib2 or whatever loads the image. I checked permissions on libImlib2.so.1 and it's world-readable and executable. Could libImlib2.a, set to 644, be causing this problem? What else should I be checking?
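A static .a archive at 644 is harmless; it isn't used at run time. A hedged way to see what actually differs between the two users is to trace the file opens (the icon-pack path below is just an assumption):

Code:
# show which image file wbar tries to open and how the open fails as the plain user
strace -f -e trace=open wbar 2>&1 | grep -E 'png|jpg|denied|ENOENT'
# compare permissions along the whole path to the default icon pack
ls -l /usr/share/wbar/iconpack/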
Having an odd problem running a mysqldump via crontab. I have the script running on other servers where it works fine, so I'm not sure how to troubleshoot, but the script looks like the following:
If I run it as a cron job as root, it finishes in a second and a 20k file is there. If I run it from the command line as root, it takes a few minutes but does complete the backup, which can be unzipped and read successfully.
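Cron runs with a minimal environment (PATH, HOME), which is the usual cause of a dump that "succeeds" instantly but is tiny. A hedged sketch of a crontab entry using absolute paths and logging stderr; all paths and the database name are placeholders:

Code:
# m h dom mon dow  command -- absolute paths, stderr captured for inspection
0 2 * * * /usr/bin/mysqldump --defaults-extra-file=/root/.my.cnf mydb 2>>/var/log/mydump.err | /bin/gzip > /backup/mydb.sql.gz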
I installed freenx-0.7.3-i486-1alien.tgz on the server (running Slackware 12.1), all according to instructions, and installed the NoMachine nxclient 3.4.0-7 on the client (running Slackware 12.2). I ran the setup and configured it, using the default NoMachine keys rather than custom ones. Since authentication failed at the beginning, I enabled DB authentication and added my user and password, which seemed to allow authentication to occur.
Here is my node.conf:

Code:
# node.conf
#
# This file is provided by FreeNX. It should be placed either into
# /etc/nxserver/node.conf (FreeNX style) or /usr/NX/etc/node.conf
# (NoMachine NX style).....
I hope I haven't missed this in another forum, but it was tough to search for. While administering my new CentOS 5.2 x86 server through SSH I am successfully able to issue
I recently picked up an external HD which I partitioned, formatted and can mount just fine under Debian. When I plug in the device, I can see an appropriate sda1 entry for my partition in /dev. However, when I attempt to use the device in Gentoo (the system I bought the drive to back up), it seems not to be recognized. I still get some new entries under /dev when I plug it in, but no specific partition number is recognized. On Debian (where it works), here is the output of dmesg after plugging in the device:
Code:
[ 9179.847274] usb-storage: device found at 8
[ 9179.847277] usb-storage: waiting for device to settle before scanning
[ 9179.848514] usb 5-5: New USB device found, idVendor=059b, idProduct=0070
[ 9179.848520] usb 5-5: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 9179.848523] usb 5-5: Product: eGo USB
[ 9179.848526] usb 5-5: Manufacturer: Iomega
[ 9179.848528] usb 5-5: SerialNumber: 090000000000D517
[ 9184.844890] usb-storage: device scan complete
.....
I have to admit I'm kind of baffled by what is going on here. It would seem that in Debian the drive is initially treated as a cdrom device and then my partition is seen, but the same is not occurring in Gentoo. How can I make the sr0 device work in Gentoo? Am I missing a module?
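A hedged first check on the Gentoo box: the drive apparently presents both a virtual CD (sr0) and the real disk, so the kernel needs usb-storage, SCSI disk, and SCSI cdrom support loaded or built in:

Code:
# see which relevant drivers are present (built-in ones won't show in lsmod)
lsmod | grep -E 'usb_storage|sd_mod|sr_mod'
# if the kernel exposes its config (CONFIG_IKCONFIG), check the built-in options
zgrep -E 'CONFIG_BLK_DEV_SD|CONFIG_BLK_DEV_SR|CONFIG_USB_STORAGE' /proc/config.gz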
I am very frustrated with my Ubuntu system, which is meant to be a workstation/server. It seems to lose track of its drives every time I add disk space. Now, every other boot (it seems), it says (among other things) "VFS: can't find ext3 filesystem on dev sdd5", drops me to a maintenance shell, and asks for the root password (what is that in Ubuntu? the user password does not work), so I can't log in. Ctrl-D sometimes works and it boots through, or it boots after a warm start with Ctrl-Alt-Del.
[edit: found out that with Alt-F3 I can access a normal shell - good - but how do I start up the GUI in Ubuntu? startx does not work] I have manually fsck'd all ext2/3 partitions from an Ubuntu booted from CD; still it says something about not being able to mount /dev/sdd5. Since every now and then I have so much trouble bringing up the machine, I am quite unhappy. I suspect the constant updates are prone to break things - is that so? And how do I resolve this issue? Unfortunately there is nothing about the errors in /var/log/messages or dmesg, so it is no use posting them here. Where are such errors recorded? [edit: I changed the mappings in fstab to resolve the issue - still it is awkward that drives get messed up so easily. Are UUIDs in fstab less error-prone when moving drives around, or do they have other issues?]
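UUIDs do address exactly this failure mode: they identify the filesystem itself, so the fstab entry keeps working when sdd becomes sde after a disk is added. A minimal sketch, with the UUID value as a placeholder:

Code:
# find the filesystem's UUID
blkid /dev/sdd5
# /etc/fstab line using it (UUID below is a placeholder, mount point assumed)
UUID=<uuid-from-blkid>  /data  ext3  defaults  0  2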
So a few hours ago I opened up GParted from a live CD, and one of my operations moved the /home partition up a little. In the process, it was changed from /dev/sda8 to /dev/sda7. Upon the next boot, /home failed to mount. I edited /etc/fstab (and apparently DropBox had just commented out the line that mounts /home, so I have NO IDEA how /home was being mounted before) and still no dice. So every time I boot, I have to go to manual recovery and type mount /home before I can boot.
Is there some way I can mount /home without making a "git 'er done" script?
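The usual fix, assuming the partition is otherwise healthy, is to point the fstab entry at the filesystem's UUID rather than the device name that GParted shifted; the filesystem type below is an assumption:

Code:
# get the UUID of the moved partition
sudo blkid /dev/sda7
# /etc/fstab entry (UUID is a placeholder, fs type assumed ext4)
UUID=<uuid-from-blkid>  /home  ext4  defaults  0  2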
On a side note, Google Chrome has refused to open since the partition editing.
Oh, and, in case it is helpful, here's the contents of /etc/fstab:
We are testing Ubuntu as the base for our products: we create custom Karmic installations (debootstrap + some extra packages) and then deploy our software. These systems can start in two different modes, "normal" and "read-only". We must say the system works quite well, but after some time several systems are showing a common error: they don't start, instead reporting an unexpected inconsistency on one of the partitions (every system has one hard disk with four partitions). When this error appears, a recovery console is started, and I can run fsck and answer "yes" to its recommendations; after this the system runs again without errors. The problem is that this error can appear again at a random time, and we need to avoid this manual fixing process. I've searched the web and found some references to a possible bug in an early Karmic version: [URL]
Besides ALSA 1.0.23, we use only standard Karmic tools (all from the official repositories) and are running the latest kernel update available for Karmic. We don't know whether this inconsistency is caused by our software or by the system itself, or maybe by incorrect shutdowns.
At the moment I'm setting up a new test system using Lucid. Does anybody know if this is a "common" error in Karmic?
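If unclean shutdowns turn out to be the trigger, a hedged mitigation for unattended systems is to let the boot-time fsck repair automatically instead of dropping to the recovery console; on Karmic this is controlled by /etc/default/rcS:

Code:
# /etc/default/rcS -- let the boot-time fsck answer 'yes' itself
FSCKFIX=yes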
So when I booted my system up today, my second internal hard drive, which is formatted ext4, failed to auto-mount (I have an fstab entry for it). When I tried to manually mount it from a terminal, it failed and suggested I run dmesg | tail; here is the output from said command:
I have no problems accessing files on the Windows shares. The only problem is getting my printer to mount. Here is the troubleshooting log; I didn't find the problem till I tried to print something.
Client is running Oracle VM Server 2.2.1 (kernel 2.6.18-128.2.1.4.37.el5xen). Storage is a NetApp 3210 (NFS configured to use TCP).
Iptables on the client has udp and tcp ports 111, 2049 and the NFS server ports open. Info retrieved using: rpcinfo -p NetApp
When trying a manual mount ...
But when using the proto=tcp option, it works ...
Stopping iptables also works (I can manually mount the share without using proto=tcp).
Is the mounting process somehow trying to negotiate first over udp, which the NetApp doesn't respond to, so the mount fails by timing out?
Can I configure iptables such that I don't have to use the proto=tcp option? Or is there another configuration file I can tweak so that I don't have to use the proto=tcp option?
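One hedged option is to bake tcp into the mount itself rather than typing the option each time; if the client's mount.nfs supports it, mountproto=tcp also forces the initial MOUNT-protocol exchange (which otherwise tends to try udp first) over tcp. The export path and mount point below are assumptions:

Code:
# /etc/fstab entry forcing tcp for both NFS traffic and the MOUNT protocol
netapp:/vol/vol1  /mnt/netapp  nfs  proto=tcp,mountproto=tcp,hard,intr  0 0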
I have a script which runs a few other scripts (in subfolders of the first script) in order to mount some unix/linux shares. When I run the script from rc.local and try to pipe the output into a file, the file is empty and the shares are not mounted; however, when I run the script by hand it mounts everything. Also, the script doesn't work on my wireless clients...
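A hedged way to see what rc.local is actually doing: capture stderr as well as stdout (a plain pipe or > drops stderr), and keep in mind that rc.local runs with a minimal PATH and often before the network, especially wireless, is up. The paths below are assumptions:

Code:
# in rc.local: absolute path, both output streams logged
/usr/local/bin/mount-shares.sh >/var/log/mount-shares.log 2>&1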
I have a Linux server running Slackware 13.37. I am trying to mount a samba share from my other Slackware machine, but I get "mount error(13): Permission denied" when I run
Code:
sudo mount -t cifs //server/share /mnt

But if I run:

Code:
sudo mount -t cifs //192.168.1.100/share /mnt
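Since the IP form behaves differently from the name form, a hedged check is to verify how "server" resolves on the client; mount.cifs relies on ordinary name resolution, so a stale /etc/hosts entry or a missing DNS/WINS record would explain the difference:

Code:
# does the name resolve, and to the right address?
getent hosts server
# quick workaround (address is an assumption): pin it in /etc/hosts
echo '192.168.1.100  server' >> /etc/hosts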
I've just upgraded to 11.3 (64-bit) and the nfs client does not quite work. I have 3 mounts I try to make, and 2 out of 3 work. The third seems to mount, but shows an empty directory. There are no errors in /var/log/messages on the 11.3 client or on the server. The only difference between the 3 mounts that I can see is that the failing mount is of an xfs filesystem; the other two happen to be ext3. Is that visible to the nfs client, or is that a red herring? I can still mount all 3 just fine from my other openSUSE 11.2 systems.
For info, the server is running openSUSE 10.3...

Code:
nas:~ # uname -a
Linux nas 2.6.22.19-0.2-default #1 SMP 2008-12-18 10:17:03 +0100 i686 athlon i386 GNU/Linux
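A hedged comparison worth making: the 11.3 client may be negotiating a newer NFS version by default than the 11.2 clients do, which such an old server can mishandle (an empty directory is a typical symptom). Checking the negotiated options and pinning the failing mount to NFSv3 would confirm it; the export path below is an assumption:

Code:
# show the options actually negotiated for each current NFS mount
nfsstat -m
# retry the failing mount pinned to NFSv3
mount -t nfs -o vers=3 nas:/export/xfs /mnt/test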