General :: Specifying A Destination Path While Creating A Zip?
Feb 5, 2010
Let us assume I have a zip file called patch.zip; when I run unzip -l patch.zip I get the following output:
bin/a
bin/b
lib/c
To this zip file I want to add a new file, "Readme.txt", located at /path/to/Readme.txt, in such a way that when I re-run unzip -l patch.zip I get something like this
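One possible approach (a sketch, assuming Info-ZIP's zip and a throwaway staging directory): zip stores whatever path you name on the command line, so controlling the stored path means controlling the path you pass it.
Code:
# add Readme.txt at the archive root, junking its /path/to prefix (-j = junk paths)
zip -j patch.zip /path/to/Readme.txt
# or store it under a chosen prefix such as doc/Readme.txt via a staging tree;
# this assumes patch.zip sits in the current directory
mkdir -p /tmp/stage/doc
cp /path/to/Readme.txt /tmp/stage/doc/
(cd /tmp/stage && zip "$OLDPWD/patch.zip" doc/Readme.txt)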
I had a situation in which the path of the file to be copied is written in another file, and I had to copy it using a shell script. I can use cp $(cat /home/robert/location.txt) /media/sda1 on a normal Linux shell, but I am using a buildroot script where $(cat /home/robert/location.txt) evaluates to nothing; it is just blank.
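A sketch of two workarounds, assuming /home/robert/location.txt holds a single path on one line; if the build shell doesn't support $(...), the older backtick form usually still works:
Code:
# capture the path first, then copy; quoting guards paths with spaces
SRC=`cat /home/robert/location.txt`
cp "$SRC" /media/sda1
# or let xargs do the substitution without any command substitution at all
xargs -a /home/robert/location.txt -I{} cp {} /media/sda1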
I was installing Linux and I got the warning below:
WARNING: pkgtools are unstable with tar > 1.13. You should provide a "tar-1.13" in your $PATH. Cannot install /media/SlackDVD/slackware*/a/*.txz: file not found
Originally Posted by Gekitsuu: you should be able to do NEWSTRING=$STRING1$STRING2
It works, but I can't use it for creating a path variable / file name. E.g. with $STRING1 = path and $STRING2 = filename, cat $STRING1$STRING2 will not work. So how do I get this working?
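A sketch of concatenation that does work; note an assignment must have no spaces around = and no $ on the left-hand side:
Code:
STRING1=/path/to/           # directory, trailing slash included
STRING2=filename.txt
cat "${STRING1}${STRING2}"  # braces delimit the two names; quotes guard spaces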
I'm talking here about tons of directories, thousands of files. I'm looking for a command that lets me move the results above to another path, and to create that path if it doesn't exist, like below:
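A sketch, with /backup/results standing in for the real target: mkdir -p creates the whole destination path if any part of it is missing.
Code:
DEST=/backup/results
mkdir -p "$DEST" && mv /path/to/results/* "$DEST"/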
I have a program that takes a relative path as input and appends it to some path string to get the actual path.
Now all I can input is the relative path. So if I want to go one level above, my input will be ../mypath.
If I know the depth of the path used internally, I can use .. as many times as needed to reach the root directory and then give the absolute path. But suppose I do not know the depth of the directory: can I construct a path string that the program still treats as relative, but that resolves to a known absolute location? One way could be to have enough .. in the path string to force an absolute path for some maximum depth of path.
Is there some path string syntax that I am not aware of that can achieve this?
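For what it's worth, on Unix the parent of the root directory is the root itself, so surplus ../ components are harmless once the path reaches /; a quick demonstration:
Code:
cd /tmp && cd ../../../../../.. && pwd   # still prints /
# so for any internal prefix up to, say, ten levels deep, a string like
# ../../../../../../../../../../etc/passwd resolves to /etc/passwd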
Experimenting with shell variables, I accidentally deleted the PATH variable. How can I return to the original PATH value? What kinds of problems will I have if I don't have a PATH variable?
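A sketch for recovery; the value below is a typical default rather than a guarantee for any particular distro, and starting a fresh login shell restores PATH as well:
Code:
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# without PATH every command needs its full path (e.g. /bin/ls), and programs
# that spawn helpers by bare name will fail to find them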
I have a path c:\windows\back\up and I need this string to be changed into /windows/back/up. I used the command -bash-3.00$ echo 'wind\backup' | sed 's/\\//g' but the output is windbackup
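A sketch of a substitution that replaces each backslash instead of deleting it; printf is used because some shells' echo interprets \b as a backspace escape:
Code:
printf '%s\n' 'c:\windows\back\up' | sed 's/\\/\//g'                  # c:/windows/back/up
printf '%s\n' 'c:\windows\back\up' | sed -e 's/^c://' -e 's/\\/\//g'  # /windows/back/up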
prefix=user@my-server:
find . -depth -type d -name .git -printf '%h\0' | while read -d "" path ; do ( cd "$path" || exit $?
[code]....
How shall I go about changing the absolute path to a relative path, so that /home/git/mirror/android/adb/ndk.git gets converted to /mirror/android/adb/ndk.git? //echo <command> "$prefix$PWD.git" ?? - anything for a relative path?
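A sketch using shell parameter expansion, assuming /home/git is the fixed mirror root to strip:
Code:
echo "$prefix${PWD#/home/git}.git"
# e.g. PWD=/home/git/mirror/android/adb/ndk
#  ->  user@my-server:/mirror/android/adb/ndk.git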
When I use the cp or mv command to copy/move files, is there a way for me to have the destination file assume the same name as the source file, but with an additional suffix added?
For example
Code:
Now what if I wanted this...
Code:
Do I have to type the destination file name out manually every time, or is there a quick way for the cp or mv command to assume the source file name and add the .bak?
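A sketch using bash brace expansion, which repeats the name once bare and once with the suffix:
Code:
cp /etc/hosts{,.bak}    # expands to: cp /etc/hosts /etc/hosts.bak
mv myfile.txt{,.bak}    # expands to: mv myfile.txt myfile.txt.bak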
We used to send files in the form of .jpg, .tif, and/or .pdf. Normally the file name will be in the form 08072011IE01CTYHUB.PDF (DDMMYYYY is the date, IE the publication, 01 the page number, CTY the edition name and HUB the destination in three characters). These files will be stored in a common folder (say SOURCE). I need a script to move these files to the destination, reading the destination from the file name, through FTP. At the destination these files should be moved to a folder meant for CTY. Please note that before a file is sent through FTP it should be compressed (zipped). In the SOURCE folder the files will be as:
etc., where the first 8 characters are the date in the form DDMMYYYY, the next 2 characters are the publication, the last 3 characters are the destination, the 3 characters before those are the edition, and what is left over in the middle is the page number in the form NN or a name. Presently I zip these files and send them through FTP to the destination. At the destination my counterpart takes the file and stores it in the appropriate location (like a folder named CTY) for use. To automate the above process, I want a script.
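A sketch of the filename parsing with bash substring expansion; the transfer step is only a placeholder, since the real FTP host and directory layout aren't given:
Code:
for f in /path/to/SOURCE/*; do
    name=${f##*/}           # strip the directory
    base=${name%.*}         # strip the extension -> 08072011IE01CTYHUB
    dest=${base: -3}        # last three characters        -> HUB
    edition=${base: -6:3}   # three characters before that -> CTY
    zip -j "/tmp/$base.zip" "$f"
    # hypothetical transfer; host and remote path are placeholders:
    # scp "/tmp/$base.zip" "ftpuser@$dest.example.com:/incoming/$edition/"
done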
I am performing a dry run using rsync on 2 different boxes. While I'm doing that, under the destination directory I want a specific directory x to be ignored for the sync. Please let me know the exact pattern to ignore the directory. The current command I'm using is: rsync -avnc --delete $LOCAL_DIR $USERNAME@$DESTINATION_IP:$REMOTE_DIR. Under DESTINATION_IP, I want to ignore a particular directory under REMOTE_DIR.
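A sketch: --exclude patterns are matched relative to the transfer root, and an excluded directory is also protected from --delete (unless --delete-excluded is added):
Code:
rsync -avnc --delete --exclude='x/' "$LOCAL_DIR" "$USERNAME@$DESTINATION_IP:$REMOTE_DIR"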
I'm trying to install Slackware packages into some specific locations; for example, I want to put the Linux base package at / and put applications in /usr/local. However, when I'm installing using the "setup" program, I cannot find a part that lets me choose the installation destination.
The "setup install" option gives six different installation methods: full, newbie, menu, expert, custom, and tag path. But none of them (that I can find) gives an option for where to put the installed packages.
Suppose I have a tree structure like this:
/home/mahmood/sim/a/b/file1.cpp
/home/mahmood/sim/a/b/file2.h
/home/mahmood/sim/a/c/file3.txt
/home/mahmood/sim/d/file4.txt
How can I copy all of them to /home/mahmood/sim, so that when I run "ls" in /home/mahmood/sim I see all the files: file1.cpp file2.h file3.txt file4.txt
Can 'cp' search for all the files and copy them into another folder?
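cp itself does not search, but find can hand it every regular file in the tree; a sketch (-mindepth 2 skips files already sitting directly under sim/):
Code:
find /home/mahmood/sim -mindepth 2 -type f -exec cp {} /home/mahmood/sim/ \;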
I have recently purchased an external hard drive in order to back up my home partition. In my PC I have a "1.5T" drive with several partitions on it, containing OSes and the home partition. The home partition is 1.3T according to df; the external drive contains one partition that spans the entire disk, and df reports it as 1.4T in size. Both partitions are ext3. When I use rsync to copy files from the home partition to the external partition, the external disk becomes full, despite the destination supposedly being larger than the source. I don't understand why copying files from one partition to a slightly bigger partition should need more space than on the source partition. Does anyone know what is happening?
Details: I created the partition on the external drive with gparted; gparted reported it as already having several gigabytes of used space immediately after the partition's creation. I thought at the time that this must be normal. The home partition contains many files of all sorts, including lots of big audio and video files. If you are wondering, this external disk is only a secondary backup for all my important files, as they are also backed up to the "internet".
These are the mount points:
/mnt/tmp/ : home partition, /dev/sdb6
/mnt/external/ : external partition, /dev/sdc1
I used rsync to copy the files; I know there are more efficient ways to do this, but I wanted to use the same command that I will subsequently run to sync the backup.
Next I tried adding the --sparse switch, as I was wondering if the problem might come from sparse files. I don't know, however, whether rsync would go back and shrink the sparse files just by adding the switch and re-executing the command. I also added --one-file-system, for good measure. Here is what I ran next:
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
rsync: write failed on "abcd.avi": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(302) [receiver=3.0.6]
[code]....
Looking at the destination after a partial copy seems to indicate that the problem is not symbolic links being "expanded". I have not checked the source filesystem for sparse files, nor the destination to see if these files could be larger there, as this does not seem trivial.
Here is some additional info:
$ df /mnt/tmp/
Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sdb6      1415342836 1414173740     369096 100% /mnt/tmp
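One way to check the sparse-file theory is to compare allocated size against apparent size on the source; a sketch using GNU du and find (%S is find's sparseness ratio, allocated blocks over logical size):
Code:
du -sh /mnt/tmp/some/dir                   # blocks actually allocated
du -sh --apparent-size /mnt/tmp/some/dir   # logical size
# list files whose allocation is well below their logical size:
find /mnt/tmp -type f -printf '%S %p\n' | awk '$1 < 0.9'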
We have an rsync cron job set up to mirror all the files in a "..\das\htdocs\docs" folder to the same folder on another server. It copies all the files over correctly and deletes any files in the "docs" directory that aren't in the sending directory, but it also deletes any files we put in the target directory's parent folder (..\das\htdocs) or other subfolders (like ..\das\htdocs\images), even though they've been excluded in the .rsync-filter file.
So for example server A has ..\das\htdocs\docs and ..\das\htdocs\images. Server B has ..\das\htdocs\docs, but if I manually copy the images folder over to ..\das\htdocs\images, the images folder gets deleted from the target directory every time rsync runs.
I'd like to keep just the docs directory synced and update the other folders manually, but they keep getting deleted. It looks to me like it's running with a delete-excluded option, but that option wasn't used.
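A sketch of receiver protection: a "protect" rule in a per-directory filter file shields paths from --delete even though they are not part of the transfer, provided rsync is run with -F so it reads the .rsync-filter files (the paths below are placeholders for the real tree):
Code:
# .rsync-filter
protect images/
# cron job, with -F so the filter file above is honored:
rsync -avF --delete /das/htdocs/ serverB:/das/htdocs/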
I am trying to run the same command(s) on many destination servers from my source server. The source server user "report" has its ssh keys added to all destination hosts.
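A sketch, assuming a hosts.txt with one destination hostname per line:
Code:
while read -r host; do
    ssh -n "report@$host" 'uname -a; uptime'
done < hosts.txt
# -n keeps ssh from swallowing the loop's stdin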
Why do we have to define both source/destination AND direction when building a firewall? Isn't direction = source -> destination? What would happen if source and destination were swapped?
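Direction here usually means the chain or interface a packet traverses, not the address pair; a reply packet carries the same two hosts with source and destination swapped, so it is matched by a different rule in the opposite chain. A sketch with iptables, using placeholder addresses:
Code:
iptables -A INPUT  -s 10.0.0.5 -d 10.0.0.1 -p tcp --dport 22 -j ACCEPT
iptables -A OUTPUT -s 10.0.0.1 -d 10.0.0.5 -p tcp --sport 22 -j ACCEPT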
I cannot access/ping my Debian server. I know the IP is right; ifconfig, route and ip addr all gave me 10.0.2.25 (route gave me 10.0.2.0). I cannot ping it from any computer in my network; even when I try to ping it from the Debian machine itself, it gives me Destination Host Unreachable! (Weirdly, I can ping 10.0.2.2 though.) I am using VirtualBox with the network option 'NAT' turned on. When I look at my /etc/network/interfaces the last line looks like: iface eth0 inet dhcp. Shouldn't there be some other stuff listed?
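For what it's worth, under VirtualBox NAT the guest sits on an isolated 10.0.2.0/24 network with 10.0.2.2 as the NAT gateway, so other machines cannot reach the guest without port forwarding. As for the config file, a typical DHCP stanza also has an auto line, which is what brings the interface up at boot:
Code:
auto eth0
iface eth0 inet dhcp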
Today I am trying to learn how to use sed. I set up a testing folder with the following files:
AAb.lol AAc.lol AAx.lol test.sh
My goal is to create a script (test.sh) which renames all the files to their original names without the AA prefix. I want to end up with this:
b.lol c.lol x.lol test.sh
sed seemed to be the perfect tool, so I went ahead and created a script which I think should do the job.
[Code].........
mv: missing destination file operand after `$i'. From that second line I can tell that $NewName is just empty. I also read something about sed needing the -e option for scripting purposes, but I just don't understand it.
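A sketch of a rename loop that does populate NewName; a missing or mistyped command substitution is the usual culprit when the variable comes out empty:
Code:
for i in AA*.lol; do
    NewName=$(echo "$i" | sed 's/^AA//')
    mv "$i" "$NewName"
done
# pure-bash alternative with no sed at all:  mv "$i" "${i#AA}"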
Many years ago, I converted a portion of my files to an arbitrary format with a specific extension. I no longer want them in this format, and I would like to begin the process of replacing them, because conversion is not an appropriate solution. Unfortunately, they are mixed, in separate folders of the same root folder, with files in my current format that have a different extension. I feel it would make this process easier if I moved every folder that contains a file in the undesired format to a separate root folder. The files are stored on a Linux server and shared via Samba. How can I do this with a couple of commands or a script? I am open to other suggestions as well; I want to avoid time spent editing text files. Ultimately, a command that produced a list of full paths for the folders, sorted by the number of levels, would be a nice touch. A list of all of the files is clearly not what I'm looking for.
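A sketch, with .old standing in for the unwanted extension and /srv/share and /srv/quarantine as placeholder roots; the first pipeline is the depth-sorted folder list, the second does the moves (if one matching folder nests inside another, move the deeper one first):
Code:
# list each folder holding at least one such file, prefixed by its depth
find /srv/share -type f -name '*.old' -printf '%h\n' | sort -u |
    awk -F/ '{print NF-1, $0}' | sort -n
# move each matching folder into the quarantine root (flattens the hierarchy)
mkdir -p /srv/quarantine
find /srv/share -type f -name '*.old' -printf '%h\n' | sort -u |
    while read -r dir; do mv "$dir" /srv/quarantine/; done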
To make a full backup I run a live Knoppix DVD and clone the computer's HDD to an external HDD using the dd command. Is there a possible problem with the source being copied onto bad sectors on the destination disk? If so is there a way to prevent this from happening? A typical dd command I use looks like: dd if=/dev/sda of=/dev/sdb bs=4096 conv=notrunc,noerror. Is this the recommended command for cloning to a disk of equal size?
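A sketch of a post-copy check; any write that landed on an unreadable destination sector will show up as a mismatch, and the destination drive's health can be checked beforehand with smartmontools:
Code:
cmp /dev/sda /dev/sdb && echo "clone verified"
smartctl -H /dev/sdb    # overall SMART health self-assessment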
A Java applet is not loading an image with a relative path (e.g. images/1.jpg) but loads the image with an absolute path (e.g. /root/user/images/1.jpg). This is a problem when I want to host the applet on a web server.
How do I add a path to the PATH variable permanently, so that it remains persistent even after closing the shell and rebooting the system? When I added a path to the variable, it remained there as long as I didn't close the shell, but when I reopened it, the changes were undone.
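A sketch: put the export in a shell startup file so every new shell picks it up; ~/.bashrc covers interactive bash, while ~/.profile (or /etc/profile, system-wide) covers login shells. /opt/mytool/bin is a placeholder:
Code:
echo 'export PATH="$PATH:/opt/mytool/bin"' >> ~/.bashrc
source ~/.bashrc    # apply to the current shell as well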