General :: Scp + Copy The Same Links?
Jan 14, 2011
Why does the scp command not copy links from the local computer to the other machine (how do I copy the links)? As in: scp -rp dir linux:/dir_target. Remark: in dir I have files and links.
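For what it's worth, scp follows symbolic links and copies the files they point to rather than the links themselves. Two common workarounds, as an untested sketch (assuming rsync and tar are available on both machines):
Code:
# rsync can copy symlinks as symlinks (-l), recursively (-r), keeping perms (-p):
rsync -rpl dir/ linux:/dir_target/
# or stream a tar archive over ssh; tar keeps symlinks as links by default:
tar cf - dir | ssh linux 'tar xf - -C /dir_target'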
I have a directory that contains some symbolic links:
user@host:include$ find .. -type l -ls
4737414 0 lrwxrwxrwx 1 user group 13 Dec 9 13:47 ../k0607-lsi6/camac -> ../../include
4737415 0 lrwxrwxrwx 1 user group 14 Dec 9 13:49 ../k0607-lsi6/linux -> ../../../linux
[code]....
copy all symlinks
I want to copy all symlinks (both active and dead) from one partition's OS's user's home directory to another partition's OS's user's home directory, and I'm wondering if this can be accomplished with some clever bash options and pipes after "cp".
This would be very useful for users with multiboot systems who keep all their data on a separate drive: it would allow the symlinked shortcut-tree directory hierarchy to be instantly copyable to any of their OSes.
Distro-hoppers/surfers will love it. ^_^
So here goes my first attempt at trying to work out what it'd be...
cp -P
(copy, don't follow symbolic links)
(or is it -d I ought to use?)
find /% -type l
(finds symbolic links only)
[Code]....
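One way those two pieces could be wired together, as a rough sketch assuming GNU find and cp; /mnt/oldos and /mnt/newos are made-up mount points for the two partitions:
Code:
# copy only the symlinks (active and dead), keeping them as links;
# --parents recreates the subdirectory structure under the destination
cd /mnt/oldos/home/user
find . -type l -exec cp -a --parents {} /mnt/newos/home/user/ \;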
I have some playlists and I want to keep the song order, but I have the files in another file structure. (I have the songs ordered in folders by artist and album, because not all songs are tagged correctly or in the same way.) I want to use the playlists with the songs copied to another place, for instance a USB stick. Is there an application that can handle a mass change of all the links to the songs?
For instance, I have in the m3u files:
#EXTM3U
#EXTINF:268,Salt Fare North Sea
[code]....
Or is there a way in Amarok or another music player to make a copy of the playlist's song order and use it with new links on a USB memory stick? I don't know how syncing to media players and such things works.
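Since m3u files are just lists of paths, a plain text rewrite may be all the mass change needs. A sketch, where both path prefixes are made up and must be replaced with the real old and new locations:
Code:
# rewrite the path prefix of every entry; the #EXTM3U/#EXTINF comment
# lines are untouched because they don't start with the old prefix
sed 's|^/home/user/Music/|/media/usbstick/Music/|' old.m3u > /media/usbstick/new.m3u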
At my Uni, we use a web-based login for our internet connections. It's based on Cisco, and every Wednesday night every computer on campus must re-enter its credentials to use the network.
Normally, on my several computers, I simply pull up the terminal and point links at google.com using
Code:
links google.com
and enter my credentials when Cisco redirects to the login page.
Literally, the process is
Code:
links google.com
then ENTER to accept the redirect, down arrow to skip over the logo image, USERNAME, ENTER, PASSWORD, ENTER, ENTER.
Naturally, this is EXTREMELY time consuming, as I have about 5 computers located around campus and must physically walk to the machines and login every single week.
My question is: how would I formulate a program that does the following (a rough sketch appears after the list)?
1) checks for connectivity (i.e. is able to reach/resolve the greater part of the internet), and
2) automatically fills in the credentials on the links login page?
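An untested sketch of the shape such a script could take. The portal URL and the form field names (username/password) below are placeholders; the real ones have to be lifted from the HTML source of the Cisco login page:
Code:
#!/bin/bash
# connectivity check: behind the captive portal, fetching a known page
# returns the portal's redirect instead of the real content
PORTAL='https://auth.example.edu/login.html'   # placeholder URL
if curl -s --max-time 5 http://example.com/ | grep -qi 'Example Domain'; then
    echo "already online"
else
    # POST the login form fields straight to the portal
    curl -s -d 'username=MYUSER' -d 'password=MYPASS' "$PORTAL" >/dev/null
    echo "submitted credentials"
fi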
I have a personal wiki of notes, with now thousands of links in markdown format:
[link text](http://example.com)
but now that FCKeditor is available for MediaWiki (very beta), it has become much better to just stick with wikitext format. There are only a few conversions to do: tables, links, and bulleted lists. The lists are a fairly simple regex, and FCKeditor magically reformats the tables, so all I'm left with is the links. But I'm not a regex master. How do I reformat the links?
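For the links specifically, a GNU sed one-liner along these lines might do, assuming no nested brackets or parentheses inside the link text or URL (notes.txt is a placeholder filename):
Code:
# [link text](http://example.com)  ->  [http://example.com link text]
sed -i 's/\[\([^]]*\)\](\([^)]*\))/[\2 \1]/g' notes.txt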
How do I copy a read-only file in Linux and make the copy writable with a single cp command (Ubuntu 10.04)? The --no-preserve and --preserve options seemed like good candidates, except that they "and" the mode flags, while what I am looking for is something that will "or" them (add the +w mode).
More details: I have to import a repository from Git to Perforce. I want all Perforce depot files to be read-only (that is how Perforce was designed), while all other files derived/copied from depot files should be writable. Currently, if a Makefile copies a read-only file, the derived file is also read-only. This leads to build errors when cp tries to overwrite the read-only file a second time. Of course --force is a workaround here, but then the derived file is still read-only. Also, I do not want to mess with "chmod" after each "cp" command; I will do that only as a last resort.
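I don't know of a single cp flag that "or"s in the write bit, but a tiny wrapper gets the effect without sprinkling chmod through the Makefile; cpw is a made-up name, just a sketch:
Code:
# copy preserving mode, then OR the owner write bit onto the result
cpw () { cp -p "$1" "$2" && chmod u+w "$2"; }
cpw depot/readonly.txt build/derived.txt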
I can see some soft links in the /etc directory which point to the contents of the /etc/rc.d directory.
Code:
lrwxrwxrwx. 1 root root 7 Jan 31 08:19 rc -> rc.d/rc
lrwxrwxrwx. 1 root root 10 Jan 31 08:19 rc0.d -> rc.d/rc0.d
lrwxrwxrwx. 1 root root 10 Jan 31 08:19 rc1.d -> rc.d/rc1.d
code....
Can anybody tell me the purpose of these soft links in the /etc directory? I am using RHEL 5.4.
I'm using Links on an Ubuntu server, and to view images I'm using Asciiview, which works well, but the association is not retained whenever I close Links. How can I retain this association?
I work in a lab where all the guys use PCs with Windows and access the lab Linux servers via ssh.
I prefer Linux, so I have a local installation of Ubuntu 10.04 on my PC. I mount the home directories of our lab server using mount server:/home /mnt/home/. I can then access the files on the server (I had to change my local UID to match the one assigned to me on our server in order to be able to write to my home dir).
The problem is that all the (symbolic) links I have on the server don't work when I access them through the mounted location. I guess the system simply tries to follow the link in my local /home instead of on server:/home. Is there a way to make the links work?
Just wondering what the command is to archive data plus symbolic links with tar. We need to be able to recreate the symbolic links when the files are laid back in their original location(s).
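For what it's worth, GNU tar already stores symbolic links as links unless told otherwise; a minimal sketch with placeholder paths:
Code:
# archive: links stay links (-h/--dereference would flatten them instead)
tar cf data.tar /path/to/data
# restore: -p keeps permissions, and the symlinks are recreated as they were
tar xpf data.tar -C /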
I am, as the forum title suggests, new to Linux and to programming, and I'm having trouble figuring out how to do this. I have a very large XML file with a lot of information in it. I'm trying to get a single tag out of the file; each of these tags contains a single web link, and I want to download the file at every one of those links. I really don't know how to do this. My thought, though it's probably not the most efficient or correct way, was to use Vim to search the document and somehow extract every occurrence of this one particular tag, and then use wget on the links.
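A sketch of one shell-only way to do it; <link> here is a placeholder, so substitute the real tag name from the XML file (big.xml is also made up):
Code:
# grep -o prints just the matching text, one match per line (GNU grep);
# the sed strips the opening and closing tags, leaving bare URLs
grep -o '<link>[^<]*</link>' big.xml | sed -e 's/<link>//' -e 's|</link>||' > urls.txt
wget -i urls.txt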
I read and saw a video that says that typing the 'links' command followed by a website will open the site in the terminal.
However, when I type the links command, the terminal returns the error 'bash: links: command not found'.
What package or library needs to be installed to get this command to work?
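On Debian/Ubuntu the browser ships in a package named after itself (on some distros the graphical build is packaged as links2 instead); a sketch:
Code:
sudo apt-get install links
links google.com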
I have to write a shell script that takes the name of a file as a command line argument; the script should use the PATH environment variable to look for the specified file and report where the different instances of the specified filename exist.
Firstly, I enter the filename (echo "enter the filename"; read filename). By using the ls command we can get to know how many links there are for that file, but I don't know how to get the pathname of the file and its links. After knowing the pathnames of the file I can set the path variable using the export command. Can anyone tell me how to get the pathname of the file I entered as a command line argument?
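A minimal bash sketch of the PATH walk itself, which is the core of such a script:
Code:
#!/bin/bash
# prompt for a name, then report every PATH directory that contains it
read -p "enter the filename: " filename
IFS=: read -ra dirs <<< "$PATH"
for d in "${dirs[@]}"; do
    [ -e "$d/$filename" ] && echo "found: $d/$filename"
done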
What exactly does the following symbolic link mean? "target -> /path/to/./usr/bin/example" I am a bit confused by the "." portion of it.
I'm trying to download two sites for inclusion on a CD: URL... The problem I'm having is that these are both wikis. So when downloading with e.g.: wget -r -k -np -nv -R jpg,jpeg,gif,png,tif URL... Does somebody know a way to get around this?
Is there an easy way to replace all symbolic links with the files they link to?
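No single flag that I know of, but a loop over find output does it; a sketch assuming GNU readlink and filenames without newlines (dangling links are skipped by the existence test):
Code:
# replace each symlink with a real copy of whatever it points to
find . -type l | while read -r l; do
    t=$(readlink -f "$l") && [ -e "$t" ] && rm "$l" && cp -a "$t" "$l"
done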
I just set up my first Linux box using [URL]; everything went along fine, except now I have a problem that I cannot seem to solve. I've set up a webpage on the box for my company's intranet for testing purposes, but I cannot get the links to work. On the server itself all the links work (though Firefox still asks me to authenticate with Adobe Flash Player (player10)), but when I access the page from another computer I have the following issues:
1. Even though hostname -f shows a fully qualified domain name, I have to use the IP address, e.g. 192.168.100.100.
2. I can access the page, but the links leading to the other pages do not work; I get the "Webpage cannot be found" or "HTTP 404 Not Found" error message.
3. None of the embedded pictures show up; I get the red X.
What's the difference between hard links and soft links?
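A quick shell demonstration may say it better than a definition:
Code:
echo data > file
ln file hard          # hard link: another directory entry for the same inode
ln -s file soft       # soft link: a tiny file that stores the path "file"
ls -li file hard soft # file and hard show the same inode number
rm file
cat hard              # still prints "data": the inode lives on under its other name
cat soft              # fails: the path the link stores no longer exists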
My son wanted to try Ubuntu; he was tired of the Win7 OS. He backed up his much-wanted links from the old OS onto a USB drive, but now he cannot find the files and cannot find the icon on the desktop.
I have learned without a doubt what runlevels are... the questions I have are related to the init scripts and how to create a link to an init script. I see that there was a post where someone was trying to get people to do their homework for them... I assure you I want to understand what I am trying to learn. That said, here are my hang-ups:
1. The file that contains the default runlevel is, to my understanding, /etc/rc.d/init.d, though I've also found /etc/inittab on the web as the default. My book isn't too clear on this, as it doesn't state it exactly, so which is it for sure?
2. My assignment asks what I'd name a link to an init script that would start a fictitious BIGD daemon early in the boot process. My answer: /etc/rc.5.bigd.d. I don't think that this is the right answer, though, because the book states that /etc/rc.d/rcN.d contains scripts whose names begin with K and S. My understanding is that these start and kill each script depending on how it's entered.
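On question 2, the usual SysV convention (hedging a little, since exact paths vary by distro) is S<number><name> for start links and K<number><name> for kill links inside the runlevel directory, with a low number meaning early in the boot order. So for a BIGD daemon started early in runlevel 5, something like:
Code:
# start the fictitious BIGD daemon early in runlevel 5
ln -s /etc/rc.d/init.d/bigd /etc/rc.d/rc5.d/S05bigd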
So, I get e-mails from both POP3 and IMAP servers in my Evolution client, but what bugs me is that I cannot open any links within the e-mails in a browser window when I click on them (as would normally happen when I go to my accounts individually on the internet). Is it because I use Firefox, or is it a bug in Evolution?
I have a web app that has a bunch of symbolic links in subdirectories throughout it. I need to move the app to another directory structure, and I need to update all the symlinks to point to the new path. The problem is that there are a lot of these scattered throughout various directories. How can I recursively search from the root and recreate all symlinks, replacing /dev/ with /qa/ in their targets?
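A sketch of the rewrite, assuming the old prefix is literally /dev/ as described and filenames contain no newlines:
Code:
# re-point every symlink whose target begins with /dev/ at /qa/ instead
find . -type l | while read -r l; do
    t=$(readlink "$l")
    case $t in
        /dev/*) ln -sfn "/qa/${t#/dev/}" "$l" ;;   # -sfn replaces the old link in place
    esac
done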
I have been using a cron job to duplicate a folder into another user's account every day, and someone suggested using symbolic links instead, although I cannot get them to work. In summary: user GAMER generates log files that they want to access via HTTP; however, I only have a web server in the user account SERVER. In the past I would copy the logs folder from GAMER's account into SERVER/public_html/ and then chmod the files so the server could access them. Trying to use symbolic links, I set up a link as root (as only root can access both accounts). I used: ln -s /home/GAMER/game/logs/ /home/SERVER/public_html/logs
However, it seems that only root can use this link. I tried chmodding the link, all the files in GAMER's /game/logs/*, and /game/logs itself to 777, as well as changing chown and chgrp to SERVER; the files still cannot be read. When viewed from SERVER's account, my shell shows the link and its target highlighted in black with red text. /home/GAMER/game/ (chmod & chgrp): drwxrwxrwx 3 SERVER SERVER 4096 2011-01-07 15:46 logs
/home/SERVER/public_html (chmod -h & chgrp -h)
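One common cause worth checking (a guess, since the full permission set isn't visible here): following a symlink requires execute (search) permission on every directory along the target path, not just read permission on the files at the end. A sketch:
Code:
# let other users (including the web server) traverse the path the link resolves through
chmod o+x /home/GAMER /home/GAMER/game /home/GAMER/game/logs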
I'm not sure how to explain my situation. I would like to download the file <https://www.vmware.com/tryvmware/p/activate.php?p=free-esxi&lp=1&ext=1&a=DOWNLOAD_FILE&baseurl=http://download2.vmware.com/software/vi/&filename=VMware-VMvisor-Installer-4.0.0.Update01-208167.x86_64.iso> via the command line. I've tried a few different methods with wget; the best I get is an index.php file. I'm not at all familiar with PHP, but a search for "wget php" yielded nothing helpful.
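One thing worth ruling out first (just a guess): left unquoted, each & in that URL tells the shell to background the command, so wget only ever sees the part before the first & and fetches activate.php alone. Quoting the whole URL and naming the output file may be enough:
Code:
wget -O VMware-VMvisor-Installer-4.0.0.Update01-208167.x86_64.iso \
  'https://www.vmware.com/tryvmware/p/activate.php?p=free-esxi&lp=1&ext=1&a=DOWNLOAD_FILE&baseurl=http://download2.vmware.com/software/vi/&filename=VMware-VMvisor-Installer-4.0.0.Update01-208167.x86_64.iso'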
I have a website that has a massive list of royalty-free stock photos and I want to download all of them. I have bought a membership for [URL], so I am able to download as much as I want from them for the next month.
Instead of going page by page and downloading each set of stock photos manually, I would like to automate this process. Here's my idea:
1. Download the website with the links to hotfile [URL]
2. Use grep to retrieve all the links to [URL]
3. Feed the links I receive from grep into wget and download all of them.
The problem I'm getting is that when I use grep, it retrieves the entire line of HTML code where "hotfile.com" appears. Here is an example of one link I receive in the output:
Quote:
./1776-santa-claus-vector-set.html:<div align="center"><a href="http://hotfile.com/dl/18418176/181a55b/Santa_Claus_Vector_Set.rar.html" target="_blank">HotFile</a></div>
Is there a way to just have the link shown in the output?
PS: I have everything else working; I just need an automated way of getting all the links.
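grep's -o flag prints only the part of the line that matches the pattern, which is exactly this case; a sketch assuming GNU grep and that the saved pages sit in the current directory:
Code:
# pull just the hotfile URLs out of the saved pages (-h drops filename prefixes),
# then hand the list to wget
grep -ho 'http://hotfile\.com/[^"]*' ./*.html > links.txt
wget -i links.txt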
What is the purpose of using hard links, as opposed to a shortcut to some file?
[Code]....
I am trying to remove <a href links using sed, but I am unable to do it.
The final result I am looking for is:
[Code]....
Is it possible with Linux, or should I try with PHP?
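Without the elided before/after samples this is a guess at the intent, but stripping anchor tags while keeping the link text is usually a two-step sed (page.html is a placeholder filename):
Code:
# delete the opening <a ...> tag and the closing </a>, keep the link text between them
sed -e 's/<a [^>]*>//g' -e 's|</a>||g' page.html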
I'm trying to download Linspire Live. There are two download links, and both are dead.