Fedora :: Mounting Other File Systems Takes A While?
Jan 31, 2011
During the boot process, mounting other file systems takes a while. Although it ends up reporting [OK], it was not like this before; it used to be very fast. I took a look at the /etc/fstab file, which is posted below, and suspected that devpts was the problem. So I commented it out and rebooted, but it wasn't helpful.
Maybe this is the wrong place to ask, but I'm trying to install openSUSE 11.4 (64-bit) in VirtualBox. The installation went smoothly (no visible error messages), but the boot seems to get stuck. If anyone has an idea what it could be, it would mean a lot. These are the last lines in the log before it gets stuck:
INIT: version 2.88 booting
System Boot Control: Running /etc/init.d/boot
mounting mandatory file systems done
I already asked about my DVD drive in another thread ("My DVD drive start working bad recently in KDE 4.4.3" on the openSUSE forums), but this is another very annoying thing happening on my openSUSE box. I have several flash drives of several sizes, and all have the same problem: when I plug them in, the drive simply takes a LOT of time mounting and showing the data; really, a LOT. Meanwhile the flash light blinks and the desktop environment FREEZES until it shows the mounted drive. Some output from a recent plug-in that again took a lot of time:
Code:
[ 4685.082027] usb 1-6: new high speed USB device using ehci_hcd and address 5
[ 4685.579461] usb 1-6: New USB device found, idVendor=0325, idProduct=ac02
[ 4685.579480] usb 1-6: New USB device strings: Mfr=1, Product=2, SerialNumber=3
I have Fedora 12 on two PCs. I want to share some files between them, which I was not able to do with NFS. Let me know the whole procedure to do that, and also let me know where my shared files will be visible.
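For what it's worth, a minimal NFS share between two Fedora boxes usually looks like the following. This is a sketch: the path, network range, and hostname are placeholders, nfs-utils has to be installed, the nfs service started, and the firewall opened for NFS.

```
# On the serving PC, /etc/exports:
/srv/shared  192.168.1.0/24(rw,sync)

# then re-export:   exportfs -ra

# On the other PC, mount it somewhere:
#   mount -t nfs server-pc:/srv/shared /mnt/shared
```

The shared files are then visible on the second PC at whatever mount point you chose — /mnt/shared in this sketch.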
After reading a lot I was finally able to run the upgrade interface (no more root errors), but I get a new error: "You need more space on the following file systems: 346M on /". I have 760MB free. I don't know what else to do to install; I thought I just needed 250MB to upgrade.

Post added at 08:37 PM CDT. Previous post was at 07:20 PM CDT.

Well, after uninstalling a lot of apps I managed to free 900MB in total. I don't know why my / is so full. I allotted 9GB for it, and apparently I need more space. I'm not using it for more than sharing and downloading files, and some hosting (nothing right now). How can I find out what is taking so much space? Or maybe it's the whole update process, from 8 to 11 and now from 11 to 13...
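To see what is actually taking the space on /, one common approach is du; this is just a sketch, and the depth and line count are arbitrary:

```shell
# Summarize each top-level directory on the root filesystem, largest first.
# -x stays on one filesystem, so /home etc. on other partitions are skipped.
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -20
```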
I restored a tape backup on my Linux server. At boot time the server cannot read the filesystems.
I am getting the following message,
Code:
Your system appears to have shutdown uncleanly
Forcing file system integrity check due to default setting
Checking root filesystem
fsck.ext3: file system has unsupported feature(s) (/)
e2fsck: Get a newer version of e2fsck! (FAILED)
*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
Give root password for maintenance (or type Control-D to continue):
I installed Fedora 12 in a virtual environment using VMware Workstation; I am learning from a Linux book. 1. The book asks me to change my directory to the Fedora 12 DVD's RPM directory in the terminal. It assumes the mount point for the disc image is e.g. '/media/dvd/packages/', but if I type 'cd /media/dvd/packages/' it obviously won't find the directory. So how do I navigate to the directory using the cd command? Put more accurately, I need to find out the mount point of the Fedora 12 DVD image in my VM.
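For reference, to find where the VM's DVD actually got mounted, listing the current mounts is usually enough; the grep patterns below are just guesses at the usual names:

```shell
# Show mounted filesystems and look for the DVD:
mount | grep -i -e media -e sr0 -e iso9660
# The common automount location:
ls /media 2>/dev/null
```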
2. I have another question about my root password. I cannot log in as root when the VM first boots up, at the login screen where you are asked for your account name and password, so I have to use my normal user account (made up of my first and last name) that I created when I installed Fedora. But I know exactly what my root password is. The weird thing is that I can still access the root account in the desktop environment no problem; e.g. if I go to the top bar 'System > Administration > Authentication', the program lets me in after I type my root password. In other words, I have access to all the admin tools in the desktop environment.
After preupgrade downloads the install media and reboots to start the install, I get a dirty file system error on /dev/sda2, my / partition. I fsck'd sda1 through sda6 (all clean) and rebooted, ran preupgrade again, and got the same error. No disks other than the internal SSD are mounted. What is going on here? ;-) More importantly, how does one get around this error? The only half-solution I have found on the net is to set allowDirty=1 in upgrade.py and recreate install.img. I have not preupgraded before, so I don't want to take any more risks than necessary. Thanks for any workarounds...

Post added at 03:12 PM. Previous post was at 08:31 AM.

Anyone have ideas here? I'd like to avoid a yum upgrade, as that looks to entail more pain. Why on earth does Anaconda see my /dev/sda2 on "/" as dirty when fsck reports it as clean?
I cannot use NFS from an F10 client to an F12 server. An nfs mount from F10 to F12 times out, and an nfs4 mount gives "mount.nfs4: mounting localhost:/home failed, reason given by server: No such file or directory". I have tried closing the firewall and setting SELinux to permissive mode on both client and server, with the same result. Samba works fine. On the server:
[root@flokipal ~]# mount -t nfs4 localhost:/home /media/tonlist
mount.nfs4: mounting localhost:/home failed, reason given by server: No such file or directory
but
[root@flokipal ~]# mount -t nfs localhost:/home /media/tonlist
[root@flokipal ~]#
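Not a sure diagnosis, but that particular "No such file or directory" from mount.nfs4 often means the server's NFSv4 pseudo-root doesn't line up: nfs4 clients resolve paths relative to the export marked fsid=0, not the server's real filesystem root. A sketch of /etc/exports, assuming /home is bind-mounted under /export/home (the /export layout is an assumption):

```
/export       *(ro,fsid=0)
/export/home  *(rw,nohide)
```

With that in place the client command stays `mount -t nfs4 localhost:/home /media/tonlist`, since /home is now resolved relative to the pseudo-root.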
I have 2 different mounts. One points to a local Windows share (NTFS via Samba) and the other points to a share over a PPTP VPN connection (I believe that is NTFS too). I use the "cifs" type in my fstab to mount these, and I use my Debian box to copy between the two mounts. I have started using rsync for that purpose, and it works fine so far. My main problem is that rsync seemingly cannot figure out whether the files in the source and target folders are the same when I use these mounts. Most of the time rsync copies the same files and folders over and over again, even though they are already on the target.
I am wondering if there is a way to make this scheme work. Being on a slow VPN connection to a Windows box, rsync could save a lot of my time if it recognized the files and folders that are the same on both ends.
Last year I was looking into fault-tolerant distributed file systems, and I recall one kernel-based system that required a physical partition on each machine in the cluster but would treat them as a single volume, i.e. a write on one server would appear on the disk on all the servers. Unfortunately I didn't bookmark the specific system I was looking at, and now, a year later, I can't remember the details. What I don't want is NFS: a single file server with a file system mounted on various machines. What I do want is mirroring: one disk shared among multiple servers, so that if one server dies, it doesn't make any difference to the rest of them.
A bit of investigation turned up Red Hat's GFS, which kind of looks like what I want, but it looks more and more like an NFS model to me. I was wondering what everyone's opinions of the various options are.
How to get the permissions of any file system
---------------------------------------------------
What does "permission denied while opening filesystem" mean? Through which commands can we give or get permissions on file systems?
I work for a company that does remote computer support; we use the VNC protocol to help our clients. I installed a VNC repeater that allows my clients to connect to me through any firewalls and port forwarding. The Linux VNC repeater writes all connection information to /var/log/vnc.log, which looks something like this:
Code:
UltraVnc Linux Repeater version 0.14
UltraVnc Tue Mar 22 03:37:02 2011 > routeConnections(): starting select() loop, terminate with ctrl+c
UltraVnc Tue Mar 22 03:37:12 2011 > acceptConnection(): connection accepted ok from ip: 55.555.555.55
I need a script that reads this log file every so often (every 30 seconds to 5 minutes) and sends an email when a connection has been accepted. I looked into reading log files, and this is what I have so far:
Code:
LOG=/var/adm/sqllog
while true
do
    tail -100 $LOG | grep "ORA" > /dev/null
    sleep 30    # avoid a busy loop
done
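The tail/grep fragment above can be grown into a one-pass check, run from cron every minute or two. A minimal sketch, assuming a working `mail` command; the state-file path, subject line, and address are made-up names:

```shell
# notify_new_connections LOG STATE
# Mails each "connection accepted" line added to LOG since the last run.
# STATE is a plain counter file remembering how many lines were processed.
notify_new_connections() {
    log=$1; state=$2
    offset=$(cat "$state" 2>/dev/null || echo 0)
    total=$(wc -l < "$log")
    if [ "$total" -gt "$offset" ]; then
        # Read only the lines added since the previous pass.
        tail -n +"$((offset + 1))" "$log" | grep 'connection accepted' |
        while IFS= read -r line; do
            printf '%s\n' "$line" | mail -s 'New VNC connection' admin@example.com
        done
        printf '%s\n' "$total" > "$state"
    fi
}
```

Running it from cron (e.g. `* * * * * /usr/local/bin/vnc-notify.sh`) avoids keeping a shell loop alive.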
It takes an awfully long time to delete a file when deleting it through one of the KDE programs, like KDevelop or the Konqueror file manager.
Deleting files with rm works fine. I suspect it has to do with the KDE recycle-bin mechanics, which I know nothing about. I am running the Fluxbox WM, if that matters.
After an update recently I noticed that my process count jumped up quite a bit. Somehow it doesn't seem related (it was an apt update, I believe), but I'll just throw it out there. All of the extra processes seem to be XFS and JFS file system kernel processes, but none of my file systems use XFS or JFS, just ext3 and ext4. Is there any safe/easy way to kill off these processes and prevent them from re-spawning? I don't find having irrelevant idle processes to be beneficial or efficient. This is Ubuntu 10.04 64-bit; the only active file systems are ext4 and ext3.
I also need to find answers to the following two questions: How do I find mountable devices and their device files in Linux? How can I allow a regular user to mount a device in Linux?
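For reference: block devices and their device files show up in /proc/partitions (or via `fdisk -l` as root), and the classic way to let a regular user mount a device is the `user` option in /etc/fstab. A sketch, where the device, mount point, and fs type are placeholders:

```
# /etc/fstab — "user" lets any ordinary user run "mount /mnt/usb":
/dev/sdb1  /mnt/usb  vfat  noauto,user  0  0
```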
I'm familiar with the software and hierarchy of the mount command but I can't find any info on why it is needed or preferred. What are the physical aspects of it? What is the burden of having files accessible all the time?
I was wondering if it is possible to hide the file systems from a user, so that when they browse through folders or choose to save something, the default folder is their home folder. I am using the SAM Linux distribution and don't want my users to be able to screw anything up! I use Thunar as my file manager and was just wondering if this is possible.
I have to move files between two file systems, /inst and /inst2. When I perform 'cp -a /inst /inst2' it copies everything, even hidden files, and preserves access permissions. But when I perform 'mv /inst /inst2' it also preserves access perms yet moves everything besides the hidden files. Questions: why is that so? What tool should I use when moving files from one fs to another (rsync)?
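A likely explanation: if the move was actually done with a glob, as in `mv /inst/* /inst2/`, the shell's `*` does not match dotfiles by default, so hidden files stay behind. rsync, or cp with the `/.` form, side-steps the glob entirely (paths below are the ones from the question):

```shell
# rsync archive mode copies dotfiles, permissions, and links:
rsync -aH /inst/ /inst2/

# cp: copying the directory's *contents* via "/." also picks up dotfiles:
cp -a /inst/. /inst2/
```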
I have 3 Linux systems, named system1, system2, and system3, each configured for running applications. I have around 100GB of space on system3 under /usr, with not much of it used. System1 has very little space, but most hits come to it and it needs a proper backup, as system1 is quite old and its partitions were not planned properly. So I want to use a disk with more space for the backup requirements.
We have upgraded to CentOS release 5.6 (Final) with the 2.6.18-238.9.1.el5 kernel. After the reboot, all configuration files under /etc became READONLY, yet my file system is still in rw mode.
I have a newly installed Kubuntu 10.04 running here, works fine except for one thing.
I have a kind of "fileserver" and it has a samba share that I have mounted in the home folder of my desktop computer ("/home/xxx/fileserver", the server is running an older version of Ubuntu, can't exactly remember what it is but the filesystem is ext2, if that's of any importance).
I have large files on the server, mostly video. When I use Dolphin (or Konqueror, doesn't make any difference) and right click one of these large files and choose Properties, it takes a LONG time to load the properties window. As if it copies the file to local hd before opening properties, or something.
The reason I posted here and not in the networking section is that I had the exact same setup with my previous installation, which was Kubuntu 8.04, and with at least three different Ubuntus before that. I never had this problem before, so I think my server and networking thingies are okay.
This post is not to ask for help, but rather to document my recent effort to downgrade my ext4 file systems to ext3. I don't know if it'll help anyone, but here it is anyway, FWIW. I am running Ubuntu 9.10 on an older Dell GX-270 and had formatted my partitions with ext4 file systems. I noticed partimage wasn't backing up my ext4 file systems, so I decided to downgrade to ext3. My system has one 160GB drive and one 500GB drive; I also have an external USB2 500GB drive. /home is on the internal 500GB drive. To convert it, I mounted an NTFS file system on the external drive, created a container file, put a file system on it, and mounted the container as a Linux file system.
The backup was done via rsync. rsync makes things really easy: it understands UIDs, GIDs, file permissions, and all kinds of links. That's one reason I created the container file on my external drive; NTFS doesn't understand UIDs, GIDs, Linux file permissions, or Linux-style links.
I have more than 60 Ubuntu systems on my network. I want to copy files from one system to the other Ubuntu systems. All IP addresses are listed in a text file. What command can I use to complete the task?
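A simple loop over the list works, assuming SSH key authentication is already set up on the targets; the file names, user, and destination below are placeholders:

```shell
# hosts.txt: one IP address per line.
while IFS= read -r ip; do
    scp -r /path/to/files "user@$ip:/destination/"
done < hosts.txt
```

For 60 hosts this runs serially; tools like pssh can parallelize the same idea.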
I've been looking around at the different Linux systems, particularly the smaller ones such as DSL, Slax, and Puppy Linux. However, I need a Linux distribution that doesn't have a GUI desktop environment, just the plain old terminal to work in. The system would also have to be able to boot from a USB drive. If anyone knows a system that fits those requirements, or something else related, please post. Also, what file system is best for USB drives used for booting systems?
We recently had an issue with "cat /proc/mount" telling us that a CIFS file system was mounted even though the mount was not working correctly. So we're not sure we can trust Linux to report malfunctioning mounts, and we're planning to add a specific file on the mounted file system and verify the mount by reading this file from the client side (Linux). If Linux fails to read it, we know that the mount has failed. But before we go ahead with this, I thought I'd hear how others are doing this sort of thing: how do you make sure that mount points are up and working?
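For what it's worth, a canary check along those lines could look like this. The timeout matters because a dead CIFS mount tends to hang reads rather than fail them; the marker file name is our own convention, not anything standard:

```shell
# check_mount MOUNTPOINT
# Succeeds only if the canary file on the share can actually be read
# within 10 seconds; a hung mount trips the timeout instead of blocking.
check_mount() {
    if timeout 10 cat "$1/.mount-ok" >/dev/null 2>&1; then
        echo "mount OK"
    else
        echo "mount FAILED"
        return 1
    fi
}
```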
- kenneho
EDIT: I just saw that I've posted in the security area, not in the server area. How do I move it?
The app on the Linux server (CentOS 5.3) uses files from a mounted directory (a shared Windows directory, mounted read-only). At the same time, the same file might be edited by a user in the Windows environment. We were assuming that, since the Windows folders are mounted read-only in Linux, any change made by a user in the Windows environment would be fail-safe, i.e. could be safely committed to the file. But when the file is used concurrently by both Linux and Windows, at some point Linux does not release the file handles and the files get corrupted (deleted, too). Earlier we were using a Win2k server and this was hardly reproducible; Win2k released file handles quickly. But with CentOS we have had a really tough time managing files.
What limits a file to some maximum size, depending on the operating system? I do not exactly understand this. If you have the storage space, what else can be the limitation? You should be able to store as much data as you want, the way you want (even in a single file), unless you run out of storage space.
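The limit usually comes from the width of the size and block-address fields in the filesystem's on-disk format (and sometimes the kernel/libc file API), not from free space. FAT32, for example, records a file's size in a 32-bit field, which caps any single file regardless of how much room is left:

```shell
# Largest value a 32-bit size field can hold, i.e. FAT32's ~4 GiB file cap:
echo $(( (1 << 32) - 1 ))   # 4294967295 bytes
```

ext3 with 4KiB blocks tops out around 2TiB per file for the same structural kind of reason: the block-addressing scheme runs out of reachable blocks.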
One of the good points of Linux is that it is easy to customize the partitioning scheme of the disk and put each directory (/home, /var, etc.) on different partitions and/or different disks. Then we can use different file systems/configurations for each of them to make them better. Examples:
noatime is a mount option that stops writing access times on files.
data=writeback is an option to lazily write metadata for new files.
ext3/4 have journaling, which makes the partition more secure in case of a crash.
Bigger blocks waste more space but make the partition faster to read; it may also become more fragmented (not sure).
Then: what are the best filesystem/configurations for each directory? Note: given Patches' answer, we will only discuss /, /home, and /var.
/var -> modified constantly: logs, cache, temporary files, etc.
/home -> stores important files.
/ -> stores everything else (/etc and /usr should be here).
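Putting those options together, an /etc/fstab along these lines is one possible starting point; the devices and the exact option mix are placeholders, not a recommendation:

```
/dev/sda1  /      ext4  defaults,noatime                 0  1
/dev/sda2  /home  ext4  defaults,noatime                 0  2
/dev/sda3  /var   ext3  defaults,noatime,data=writeback  0  2
```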