Slackware :: Managed To Screw Up Remote Server Directory Ownerships?
May 26, 2011
I managed to screw up the ownership of many directories on a remote server whilst installing a piece of new software. Basically, I set all ownership from / downwards to apache:apache.
Spotted the error quite quickly and managed to abort it, but am now unable to change to root to put anything right. Is there any way to restore the ownerships of the underlying Slackware to 'factory default', as it were? Had a quick google and found some links to a script that is supposed to work, but it appears broken.
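A recovery sketch, in case it helps: because a chown across / strips the setuid bit from su and sudo, the practical route is booting the install media and reinstalling the package set, since each package tarball records the correct ownerships of its files. Paths below are examples and depend on your mount points and release:

Code:
# booted from the Slackware install DVD, damaged root mounted on /mnt
for pkg in /cdrom/slackware/*/*.t?z; do
    ROOT=/mnt upgradepkg --reinstall "$pkg"
done
# files not owned by any package (/home, local configs) still need
# fixing by hand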
I use my T61 both wireless and in the dock, where I switch to the wired connection. After the latest round of updates, I seem to have lost the ability to switch to the wired network. When I turn the laptop on, I see it connect to the wireless network, but when I left-click on the tray icon, under Wired Network it says Device not managed. I have not made any changes to anything in the last couple of weeks, besides some updates. How do I go about getting the wired network back in there? I tried to add eth0 back in under System - Preferences - Network Connections, but whenever I add eth0 it still says Device not managed.
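In case it helps, the usual cause of "Device not managed" on Debian/Ubuntu-style systems is that eth0 is listed in /etc/network/interfaces, which NetworkManager then refuses to touch. A hedged sketch of the common fix (the config file name varies by release; on older Ubuntu it is nm-system-settings.conf rather than NetworkManager.conf):

Code:
# either comment out the eth0 stanza in /etc/network/interfaces, or
# tell NetworkManager to manage ifupdown-configured interfaces by
# setting, in the [ifupdown] section:
#   managed=true
sudo nano /etc/NetworkManager/nm-system-settings.conf
sudo service network-manager restart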
I'm trying to install mongodb (a so-called schema-less database). It requires libraries like libjs, libmozjs, SpiderMonkey and the like. I had an idea that installing xulrunner would provide most of that, but it doesn't do so in any nice way. I'm a bit scared of mucking up my firefox/thunderbird/seamonkey setup, which I suspect might easily happen if I tread wrong.
I am new to scripting and would like a script that tests whether a directory exists on a remote host and displays a message accordingly. The remote hostnames can be provided by means of a file containing a list of hostnames. I can use rsh for connecting to the remote hosts. I tried a couple of scripts found by searching google but didn't get the desired result. Please help me; below are my efforts, where $file contains the list of hostnames.
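Something along these lines should work as a starting point. Note the classic gotcha: rsh does not pass back the remote command's exit status, so the result is echoed as text instead. The directory and filename are examples:

Code:
#!/bin/sh
dir=/var/log        # directory to test for (example)
file=hosts.txt      # $file: one hostname per line
while read -r host; do
    result=$(rsh "$host" "[ -d $dir ] && echo EXISTS || echo MISSING")
    echo "$host: $dir $result"
done < "$file"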
The new version of Slack is on the doorstep and soon it's time to clean up my box.
What is your remote desktop server of choice?
I require a few things:
- Linux-side server that starts up on boot
- Windows- and Linux-side clients
- Preferably attachable to a running desktop session, although I can live with a server that starts up X
- Semi-effortless setup on Slackware (with slackbuilds, maybe)
So far I've used x11vnc, but we don't get along all the time. I was interested in trying TeamViewer, but so far I haven't found information on how to use it as a standalone server.
I have a directory on a remote machine running Red Hat ES 2.1 which I want to copy to a local machine running Fedora 10. I am using scp and want to retain permissions, links, and also ownerships of the copied directory structure. I am using scp -pr, but the ownerships are all changing to the user running scp, in my case root. I have checked the users/groups; they are the same on both sides (lotus, bin, root). And no, I cannot use rsync, as the remote machine does not have it.
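For what it's worth, scp has no option to preserve ownership at all; the usual workaround is tar piped over ssh, run as root on both ends (tar extracts with --same-owner by default when root). Hostname and paths below are examples:

Code:
# -p preserves permissions; ownership is kept because the extracting
# tar runs as root
ssh root@remotehost 'tar -C /remote/parent -cpf - mydir' | tar -C /local/parent -xpf -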
I have spent the past few weeks trying to set up FTP caching with frox, but never managed to run frox correctly. I need a guide to installing frox (or another FTP proxy) that I can understand, because I'm still just learning. I already have a well-managed Squid web proxy running on port 3128.
Is there a way to limit the time an instance of a service can run? For example, I want to limit all telnet sessions to 30 mins. Users will be automatically logged out after 30 mins.
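If an idle timeout is enough, bash and ksh honour a read-only TMOUT variable; for a hard 30-minute cap regardless of activity, a crude but workable sketch is a timed kill from the login shell's profile. This assumes login shells source /etc/profile:

Code:
# /etc/profile snippet
# idle timeout: log out after 30 minutes with no input
readonly TMOUT=1800
export TMOUT
# absolute cap: hang up the session 30 minutes after login,
# whether or not the user is active
( sleep 1800 && kill -HUP $$ ) &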
I'm completely new to Linux/Ubuntu, but I managed to create an FTP server using apt-get to install vsftpd or something like that. I followed a tutorial and modified a file called vsftpd.conf. I tried to disable all kinds of blocking/permissions. From a Windows client, I can connect to it without any login (I enabled anonymous) and I can download from it, but I am unable to modify anything or upload files. Unless there is a better way to transfer files between the computers, how can I enable writing on the FTP server?
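For reference, these are the vsftpd.conf directives that control anonymous writes; vsftpd also requires that the anonymous root itself not be world-writable, so uploads should go into a writable subdirectory. Enabling all of this on an open network is obviously risky:

Code:
write_enable=YES
anon_upload_enable=YES         # allow anonymous file uploads
anon_mkdir_write_enable=YES    # allow anonymous mkdir
anon_other_write_enable=YES    # allow delete/rename (use with care)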
I'm attempting to set up a VPN server on my box using the nifty HowTo posted here: [URL]
My setup is as follows: wifi0 --> Internet, managed entirely via nm-applet (NetworkManager). Where I'm running into trouble is in the creation of a bridge interface (br0) to bridge future VPN clients to my local network.
The guides say that I need to mess around in /etc/network/interfaces to set up br0 and eth0/wifi0 accordingly. The problem is that when I specify a configuration of any sort for wifi0 (my only choice for a network uplink), it disables NetworkManager and I am unable to configure my wifi in any sane way after a reboot... Further info: this "server" doesn't move, and always connects to the same wifi hotspot, which is also nailed in place.
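For what it's worth, a minimal /etc/network/interfaces stanza for the bridge looks like the sketch below, but it assumes eth0 as the bridge port: a standard wifi client interface (wifi0 in managed mode) cannot be enslaved to a bridge without 4addr/WDS support, which is likely the root of the fight with NetworkManager.

Code:
# requires the bridge-utils package; eth0 as the uplink is an assumption
auto br0
iface br0 inet dhcp
    bridge_ports eth0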
I installed Ubuntu on a partition on my Mac Mini and everything was working fine until a couple of hours ago, when I logged out to go back to the OS X side. For some reason, OS X now recognizes the existence of the mouse (there's a pointer I can move around), but it doesn't interact with anything and I can't click on anything. Is this something to do with rEFIt or my Ubuntu install? Is this something I can fix?
My friend partitioned some of my hard drive (Vista) and installed Ubuntu 9.04 on it. For a while, I didn't really use it too much. But recently I've started to see that it's a better coding environment for me (CS major). I was having a few problems here and there, so I decided to update to 9.10 through the update manager. I was no longer able to use my mouse pad after doing this, but I eventually found a solution online that worked. Then I decided I might as well go up to 10.04. When it was installing, it asked me if I wanted to change a "menu" or keep the current one. I said to change it (stupid, I know). After the installation was complete, I have a few problems that I have NO idea how to fix. I've searched all over the internet, but nothing has worked so far.
I came from the Debian world, so I did not do much building of software from source. I successfully built wine from source, and now the wine binary is in the same directory as the Makefile and all of the other source files. I can run wine from that directory fine, but I'd like to have it somewhere more permanent. I tried moving the wine binary elsewhere, but when I try to run it I get
[code]...
What all do I have to move into the new directory to get wine working in the new directory? By convention, where should I move wine, I want it available for all users, should I move it to /opt/wine, or /usr/local/wine, or somewhere else?
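With autoconf-style source trees like Wine's, the binary isn't meant to be copied around by hand: you pick the destination at configure time and let make install lay out the binary, libraries, and data files together (which is why the lone binary fails when moved). A typical sequence, with /usr/local as the conventional prefix for self-built software available to all users:

Code:
./configure --prefix=/usr/local   # /opt/wine works too, but needs PATH tweaks
make
sudo make install                  # puts the wine binary in /usr/local/bin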
Okay, so a newb question, but I typed the simple command sudo apt-get install lottanzb to install LottaNZB after adding the testing repositories (currently using Lenny stable) and got a huge shedload of things updated as a result, including AWN from the testing source (pages of them flashed by), and it removed some things like the Epiphany browser.
Got a load of errors at one point during this, including:

Code:
/var/lib/dpkg/info/libc6.postinst: line 165: [: missing `]'
Usage: /etc/init.d/cron {start|stop|restart|reload|force-reload}.
/var/lib/dpkg/info/libc6.postinst: line 165: [: missing `]'
Status of Common Unix Printing System: cupsd is running.
/var/lib/dpkg/info/libc6.postinst: line 165: [: missing `]'
checking separate queue runner daemon...done (not running).
[Code]...
Am I being dumb here, or does that mean I'm now basically using Debian testing as a result? And if so, how can I undo it yet keep LottaNZB? With the errors, I'm concerned in case I reboot and the system goes from stable to unstable. I chose Debian for the stability, so I don't want the testing versions, and I have no idea why a simple install command updated so much stuff. EDIT: It messed things up big time. When I logged out and logged back in, the system was unusable and I had to reinstall, as I tried apt remove but it didn't undo it. Quite surprised that Linux breaks more easily than Windows, though.
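One way to get a single package from testing without dragging the whole system forward is apt pinning; a sketch of /etc/apt/preferences that keeps stable as the default (this won't undo an upgrade that has already happened, though):

Code:
Package: *
Pin: release a=stable
Pin-Priority: 900

Package: *
Pin: release a=testing
Pin-Priority: 100

With that in place, sudo apt-get -t testing install lottanzb pulls just LottaNZB and its minimal dependencies from testing.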
Quote: The precompiled Slackware kernels are available in the /kernels directory on the Slackware CD-ROM or on the FTP site in the main Slackware directory. I am unable to reach it; what's the proper login?
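Slackware's FTP mirrors take anonymous logins; for example (mirror hostname and release path are examples):

Code:
ftp ftp.slackware.com
# Name: anonymous
# Password: your e-mail address (anything is generally accepted)
cd /pub/slackware/slackware-13.37/kernels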
I was recently surprised by this SCIM stuff that seems to screw around with my normal text input. I try to be curious and approach my issues with an open mind, so before I try to remove SCIM, I want to know what the hell it's supposed to do, but it's not obvious to me.
I tend to buy music on my iPod touch directly, then synchronize it with iTunes later. After goofing around with Rhythmbox by trying (unsuccessfully) to import some mp3s onto my iPod, I noticed that my recently purchased 'Fame Monster' Lady Gaga album would not synchronize with iTunes, although the music was still playable and indexed correctly on the iPod itself. I also noticed that Rhythmbox can't seem to find that album either. It is quite strange.
So I browsed around the file structure on the iPod to try to hunt down these m4a files and found them in a folder called 'Purchases'; they were all there along with a few other recent purchases. These m4a files are different, though: they don't seem to have any of the tracks' metadata embedded in the files themselves; that info is instead in associated 'plist' files, which are just XML files with all the info about the tracks you'd expect.
When you sync an iPod touch with iTunes, it copies your 'on-the-go' generated playlists, so I tried to trick iTunes into finding these files by adding my Gaga album to a playlist, but sure enough, after syncing, that playlist simply disappeared. I tried it again, this time also adding other songs that are not having any problems, and this time the playlist was successfully copied to iTunes... sans any of the Gaga tracks! This is so weird.
So basically I have these tracks, paid for and not cheap might I add, that are playable on my iPod touch but simply refuse to synchronize with iTunes. Why not just grab the m4a files off the iPod directly? Well, they do play on my computer just fine that way, but the files don't have any of the album and track information embedded...
I have a Debian server which I use to hold the home directory for my user account. I used to use Windows 7 and connect to my /home/username directory via Samba, which worked great. I could access all of my files as if they were sitting on my local PC, but they were actually sitting on my Debian server.
Now I have decided to give Ubuntu 10.10 a try (looks promising so far!). One thing I'm not sure how to do is mount my home directory from my server. I am able to open an sftp connection to my server, but not able to access the files natively as if they were /home/username on my local machine. I'm assuming I need to mount my home directory somewhere in my fstab before it starts up, but which protocol should I use? I'm used to Windows networking, but am trying to get more into Linux. Should I use NFS?
If you use Nautilus, then you can just use "Connect to server" from the File menu. However, if your file manager does not support connecting to servers (like Thunar), then you can use sshfs.
Code: sudo apt-get install sshfs

You should create a directory as your mount point, say:

Code: mkdir /media/Server
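Then the mount itself is one command; username, hostname, and paths below are examples. To have it come up at boot, a matching fstab line works too (on older releases the user needs to be in the fuse group):

Code:
sshfs username@server:/home/username /media/Server
# optional /etc/fstab entry:
# username@server:/home/username  /media/Server  fuse.sshfs  defaults,_netdev  0  0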
I have a computer (C1) to which I can connect through the Internet (ssh, for instance) (it has a static ip and though it is sitting behind a router, the appropriate ports are all forwarded). I have another computer (C2) that doesn't have a fixed ip address and sits behind a router that I cannot fiddle with (so no port forwarding here). I would like to know if there is any way to connect from C2 to C1 such that a directory on C2 would be mounted on C1.
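One approach that fits those constraints is a reverse ssh tunnel: C2 only ever makes an outbound connection to C1, and C1 then mounts C2's directory back through that tunnel with sshfs. A sketch with example names and ports:

Code:
# on C2: forward C1's local port 2222 back to C2's own sshd
ssh -N -R 2222:localhost:22 user@C1
# on C1: mount C2's directory through the tunnel
sshfs -p 2222 user@localhost:/path/on/C2 /mnt/c2dir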
I have a folder on my workstation, and I currently have an identical copy on my NAS mounted via CIFS (I'll be using this as my backup). The folder contains many virtual machines that are usually powered off. I like to think of the copy on my NAS as a backup. The benefit of having two copies of my VMs is that if I boot up too many on my workstation and the drive starts to become a bottleneck, I can simply boot the VMs from the NAS instead (gigabit ethernet).
I would like changes I make to my local copy to be reflected on the nas.
I want this to happen in the background, i.e. if I turn off my machine it shouldn't cause a problem; the next time I boot up, it just rechecks the files and continues syncing the two directories.
What is the best tool for the job? Rsync?
I've never really used it before, so a few pointers to get me going would be great, or of course other recommendations if there are any.
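rsync is well suited to this: it only transfers what has changed, so re-running it after an interruption simply picks up where things left off. A starting point, assuming the NAS share is mounted at /mnt/nas (paths are examples), plus a cron line to cover the background part:

Code:
rsync -av --delete ~/vms/ /mnt/nas/vms-backup/
# crontab entry to re-sync hourly:
# 0 * * * * rsync -a --delete ~/vms/ /mnt/nas/vms-backup/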
I've set up ssh passwordless logins using ssh-keygen etc. before, so I know the routine.
The problem I'm currently having is setting passwordless logins when I don't have write permission to my "root" of the remote machine. More specifically the slice provided by a commercial web hosting provider. I can ssh and sftp just fine keying in the password manually but since I'm unable to create a .ssh directory in my "root" I'm unsuccessful in scripting logins. What I'm wondering is if the .ssh directory and associated security files can be placed in an alternate location such as the httpdocs directory and pass that location to ssh in a command line parameter.
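On the client side, at least, nothing forces ~/.ssh: the identity file and known_hosts can live anywhere and be passed on the command line, as sketched below (paths are examples). The catch is the remote end: where the server looks for the account's authorized_keys is fixed by the host's sshd_config (AuthorizedKeysFile), so if you cannot write to the remote home directory, it depends entirely on what the hosting provider has configured.

Code:
ssh -i /path/to/alt/id_rsa \
    -o UserKnownHostsFile=/path/to/alt/known_hosts \
    user@host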
I am running an OpenLDAP server on Fedora Core 10 and have now run into a need to get all user data from Active Directory. I have a PHP-based application which will be using that data from OpenLDAP, and it will need to be updated on a weekly basis. How can I do it, and is there any script for this?
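As a very rough sketch of the weekly pull (all hostnames, bind DNs, and filters below are placeholders): export the user entries from AD with ldapsearch, load the result into OpenLDAP, and drive the whole thing from cron. AD and OpenLDAP schemas differ, so in practice the LDIF usually needs remapping in between.

Code:
#!/bin/sh
# dump user entries from AD (simple bind; names are examples)
ldapsearch -x -H ldap://ad.example.com \
    -D 'sync-user@example.com' -w "$BINDPW" \
    -b 'dc=example,dc=com' '(objectClass=user)' > /tmp/ad-users.ldif
# load into OpenLDAP; -c continues past entries that already exist
ldapadd -x -H ldap://localhost -D 'cn=admin,dc=example,dc=com' \
    -w "$LDAPPW" -c -f /tmp/ad-users.ldif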
I would like to find and back up all *.mp4 files from /Pictures and its sub-directories and move them to a single directory on a remote host. I can find and move the files, but I don't want the directory structure; I just want the files placed in the remote directory.
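A bash sketch that flattens the tree as it goes (remote name and destination are examples); each file is copied into the single remote directory and removed locally only if the transfer succeeded:

Code:
find ~/Pictures -name '*.mp4' -print0 |
while IFS= read -r -d '' f; do
    scp "$f" user@remotehost:/backup/mp4/ && rm "$f"
done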
I know it is possible to do... but I am not sure how to go about the whole thing. Here's the scenario: I run a lab with lots of PCs. As time goes by, the older ones don't have the memory or disk space to run more modern apps. But I want to put them to use...
What I am trying to do, and have started, is the following: 1. Install Linux on a bunch of them and make a share on each. I've already installed FreeNAS on four machines (let's call these machines ClientA, ClientB, ClientC, and ClientD) and have made all the available disk space on each into a share.
2. Install Linux on a fifth machine (call this Machine1), and on this machine combine, over the network, all the shares from ClientA, ClientB, ClientC, and ClientD into one large "virtual" directory on Machine1. I know this is do-able, but what I hope to have is the total disk space from all the machines in step 1 combined for the purposes of saving files. Not sure which file system to use. For example, if all four machines have 2GB of space each, I want to be able to save a 7GB file.
3. And then allow sharing of this one large directory using Samba.
4. Then allow lab users (not on any of the above-mentioned machines) to access the Samba-enabled large shared directory on Machine1 to read and write files. The users will have no idea that the files are not actually on Machine1, or that they may be segmented in some way, nor should they care.
I understand the risks (if any one machine of ClientA, ClientB, ClientC, and ClientD goes down, I probably lose everything). I am considering throwing mirroring into the mix (mirroring Machine1's large directory), but that can wait.
So in the above scenario, what file system can I use on Machine1 to combine all the shares from ClientA, ClientB, ClientC, and ClientD to make one large "virtual" directory?
I've looked at UnionFS, but from my understanding, while it combines directories, the maximum file size is limited to the size of the largest share. Is this true?
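That is indeed the catch with union-style filesystems (and pooling tools like mhddfs): every file lives whole on one branch, so no single file can exceed the free space of one share. To stripe a single 7GB file across four 2GB machines you'd need block-level aggregation instead, for example exporting a disk from each client as a network block device and striping them with RAID0 on Machine1. A conceptual sketch only, with hypothetical names and all of the single-machine-failure risk noted above:

Code:
# on each client: nbd-server 2000 /dev/sdb1
# on Machine1: attach the exports and stripe them
nbd-client clienta 2000 /dev/nbd0
nbd-client clientb 2000 /dev/nbd1
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nbd0 /dev/nbd1
mkfs.ext3 /dev/md0
mount /dev/md0 /srv/bigshare   # share this one directory via Samba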
I have a low powered headless box (DLINK DNS323) running linux that I keep in a back room at home to handle large file downloads. It does not have X installed. Everything is handled at the command prompt through an SSH link. In the most common case, I log in from a remote location (perhaps a coffee shop), start the download, then disconnect and go about my business.
If I try to download from a free account on RapidShare, FileSonic, or some other file service, there is a manual handshake process (decoding a captcha, waiting 60 seconds, etc.) that requires a graphical client to complete.
I would like to somehow navigate through the handshake using my laptop (running Firefox), perhaps through a proxy on the DNS323, but have the download go directly to the DNS323 (i.e., not routed through the laptop) so that I can disconnect and expect the download to complete without further involvement from me or my laptop.
Is there any way to direct a download to a remote path? Any other suggestions for solving the problem?
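The simplest pattern that usually works: complete the captcha in Firefox on the laptop, copy the resulting direct download link, then hand just that URL to the box so the transfer never touches the laptop. Hostname and URL below are placeholders; note that some services tie the link to the requesting IP, in which case tunnelling the browser through a SOCKS proxy on the DNS323 (ssh -D) first makes the link valid from the box's address.

Code:
# start the download on the box and detach; wget -c resumes if interrupted
ssh user@dns323 'nohup wget -c "http://example.com/path/file.bin" > dl.log 2>&1 &'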