General :: Cwrsync - How To Handle %userprofile% Variables
Jun 7, 2010
cwrsync is a great tool for syncing "My Documents" to a network drive in Windows. However, it uses a portion of Cygwin to do this, which uses forward slashes instead of backslashes. So a manually typed command like the following works great: rsync -r --delete --exclude "My Pictures" "/cygdrive/c/Documents and Settings/demo/My Documents/" /cygdrive/j
However, you cannot put a variable like %userprofile% in this as it comes out like this (and is unrecognized): rsync: change_dir "/cygdrive/C:Documents and Settingsdemo/My Documents" failed: No such file or directory (2)
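One workaround is to convert %USERPROFILE% into a /cygdrive-style path inside the batch file before the variable ever reaches rsync. A minimal batch-file sketch, assuming the cwrsync rsync.exe is on the PATH (the variable handling is the point, not the exact rsync options):
Code:
setlocal
rem Convert %USERPROFILE% (e.g. C:\Documents and Settings\demo) into a Cygwin path.
set "PROFILE=%USERPROFILE%"
set "PROFILE=%PROFILE:\=/%"
rem Drop the colon and prefix /cygdrive/, giving /cygdrive/C/Documents and Settings/demo
set "PROFILE=/cygdrive/%PROFILE::=%"
rsync -r --delete --exclude "My Pictures" "%PROFILE%/My Documents/" /cygdrive/j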
Code:
mkvmerge -o <filename without extension>_TV.mkv -S <filename> &&
mkvextract tracks <filename> 3:<filename without extension>.*** &&
perl /home/brian/Desktop/ass2srt.pl <filename without extension>.*** &&
rm <filename without extension>.***
Doing these commands for multiple command-line file inputs is the goal, so I can just type ./script.sh *.mkv in my terminal. This is what I have so far, but it doesn't work at all.
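A minimal wrapper sketch for that goal: loop over every file name given on the command line and run the same pipeline on each. The subtitle track number (3) is taken from the original command; the extracted extension was censored there, so "ass" below is only a guess.
Code:
#!/bin/bash
# Run as: ./script.sh *.mkv
ext="ass"    # assumed subtitle extension (censored as *** in the original post)
for f in "$@"; do
    base="${f%.*}"                                  # filename without extension
    mkvmerge -o "${base}_TV.mkv" -S "$f" &&
    mkvextract tracks "$f" 3:"${base}.${ext}" &&
    perl /home/brian/Desktop/ass2srt.pl "${base}.${ext}" &&
    rm "${base}.${ext}"
done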
There is not a lot of info on that command; I didn't find a man page or an info page, and not much more than examples on the web. Is there any tutorial or other information on the internet?
I've been trying to understand issues that occur during a uClinux distribution build (so I can include such issues in a module I'm writing for students). My process has been to work through errors that occur due to missing packages, then remove the distribution and build it again to uncover what happens. One thing I notice is a different set of warnings within each iteration of a new build. The document here (URL...) states, "A typical warning involves a variable being used before its value has been set."
So my question: is there a way to verify that the issue throwing the warning has been resolved by the end of the make build? And is running make again an option, or could this cause problems within the build directories or image?
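One simple way to check whether a given warning has gone away between builds (an assumption on my part, not something from the uClinux documentation) is to capture each build log and compare the warning lines:
Code:
make 2>&1 | tee build-run1.log
grep -i 'warning' build-run1.log | sort > warnings-run1.txt
# ...fix the missing packages, rebuild...
make 2>&1 | tee build-run2.log
grep -i 'warning' build-run2.log | sort > warnings-run2.txt
diff warnings-run1.txt warnings-run2.txt     # warnings that appeared or disappeared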
We have set up a high-availability cluster on two RHEL 5.4 machines with Red Hat Cluster Suite (RHCS), with the following configuration:
1. Both machines have a MySQL server, an Apache web server, and a Zabbix server.
2. The MySQL database and web pages reside on the SAN.
3. The active machine holds the virtual IP and the mounted shared disk.
4. We have also included a script in RHCS which takes care of starting MySQL, Apache, and Zabbix on the machine that becomes active when the cluster fails over.
The above configuration holds good if the active machine goes down as a result of hardware failure or a reboot. But what if any one service, say Apache, MySQL, or Zabbix, running on the active node hangs or becomes unresponsive? How can we handle this scenario? Please advise.
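One direction (a sketch only, not an RHCS feature as such): run a small watchdog on the active node that probes the service and asks the cluster to relocate the service group when the probe fails. The service and member names below are made up.
Code:
#!/bin/bash
# Relocate the cluster service if Apache stops answering locally.
SERVICE="webdb_service"     # assumed RHCS service group name
PEER="node2"                # assumed peer member name
if ! curl -sf -o /dev/null http://localhost/; then
    logger "Apache unresponsive, relocating $SERVICE to $PEER"
    clusvcadm -r "$SERVICE" -m "$PEER"
fi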
I recently installed another hard drive in my Arch Linux computer. The first time I booted up, everything worked fine. The next time I restarted the computer, though, I was greeted with a /dev/sda2 not found error.
Basically, sometimes my boot hard drive is sda and sometimes it's sdb. It appears to be completely random, and I don't see any options for making it deterministic in the BIOS. How do I fix this?
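A common fix is to stop referring to the drive as /dev/sdaX at all and use the filesystem UUID instead, so the kernel's device-name shuffle no longer matters. A sketch (the UUID shown is illustrative, use the one blkid prints for your partition):
Code:
blkid /dev/sda2                      # prints something like UUID="3e6be9de-..."
# /etc/fstab entry using that UUID instead of the device name:
UUID=3e6be9de-8139-4fbb-a6d1-7a40a5a1ad4c  /  ext4  defaults  0  1
# The kernel line in the bootloader config can use the same form: root=UUID=3e6be9de-...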
I have a script where I need to pass an argument "1234:-)". If I run this as ./shell.sh 1234:-) it won't work because of the invalid character. I need to handle this with the expect utility: passing it as ./shell.sh "1234:-)" is no issue in bash, but expect does not recognize this.
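The exact expect usage isn't shown in the post, so this is only a sketch: pass the string as an argument to an expect script (the script name is made up) and pick it up from $argv; inside Tcl/expect, brace-quoting also keeps characters like ")" from being reinterpreted.
Code:
#!/usr/bin/expect -f
# Invoked as: ./handler.exp "1234:-)"
set arg [lindex $argv 0]
spawn ./shell.sh $arg
expect eof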
I am experiencing a weird problem with GNOME 2.30.2 on my Debian installation. I can't open "Computer" from Places; also, partitions which are not explicitly defined in fstab are not mounted automatically.
I did a backup of the SSD on my eeepc using the following command from a Linux Mint live system on a USB key: dd if=/dev/sda1 of=/media/disk/eeepc_save/SYSTEM/system.bck (/media/disk is an external USB disk)
I deleted the ext2 partition using GParted on the live USB key and created it again. I rebooted Linux Mint and restored the filesystem using the opposite command: dd if=/media/disk/eeepc_save/SYSTEM/system.bck of=/dev/sda1
I mounted /dev/sda1, and when I "ls" the root directory I get several "NFS stale file handle" messages concerning directories (/dev and others). I tried "e2fsck -y" and got a bundle of corrections that resulted in the deletion of those directories. I don't use NFS. I did the same for the user filesystem and had no problem (it's an ext3 partition). The two filesystems are the ones that came with the original Xandros installed on my eeepc and were mounted with unionfs.
I have attended many interviews for Linux admin posts. Most of them asked me a common question: "How do you handle system performance issues on a Linux server?" How should I answer this?
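A first-pass checklist often given as an answer (these are the standard diagnostic commands, not a script; adapt to what the symptom is):
Code:
top                      # overall load, runaway processes
vmstat 5                 # CPU, run queue, and swapping over time
iostat -x 5              # per-disk utilisation and wait times (sysstat)
free -m                  # memory and swap usage
sar -u 5 5               # interval/historical CPU usage (sysstat)
df -h; du -sh /var/*     # full filesystems
netstat -s               # network errors and retransmits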
I would very much appreciate information about a Linux distro that handles win32 applications through Wine and can easily manage files on FAT32 partitions.
1. What would be the simplest way to transfer and handle tarballs downloaded in Windows on a CentOS 5.4-based system (Elastix, in fact), but without graphical mode (that is, using only the CLI)? The system has a network connection, USB interfaces, and a working DVD-RW attached.
2. If the answer is Samba, how do I get Samba working on my system?
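A few CLI-only options that avoid setting up a Samba server at all (a sketch; host, share, user, and device names are made up):
Code:
# 1) From the Windows side, push the tarball with pscp (PuTTY's scp) to the CentOS box:
#      pscp file.tar.gz user@elastix-host:/tmp/
# 2) From the CentOS side, mount the Windows share with the cifs client:
mount -t cifs //windows-host/share /mnt/win -o username=user
cp /mnt/win/file.tar.gz /tmp/ && umount /mnt/win
# 3) Or use the USB port: plug in a FAT32 stick and mount it:
mount -t vfat /dev/sdb1 /mnt/usb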
* I am implementing mass storage device on a test board.
* It contains NAND flash.
* Using corresponding "udc driver" and "g_file_storage" I could make my test board enumerate as mass storage device on my Linux machine.
* My 16 MB pen drive (the test board) is now ready for read/write.
But there are some bad blocks on the NAND, so the copy is not complete, although on the Linux machine there is no error message. Now, what is there in a normal pen drive that manages the bad blocks, or what am I missing so that such bad blocks are managed?
Here is the error I get on my test board:
Code:
mtdblock: erase of region [0x2c0000, 0x4000] on "Bon 2" failed
end_request: I/O error, dev mtdblock2, sector 5664
Buffer I/O error on device mtdblock2, logical block 708
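For what it's worth: a real pen drive hides bad blocks behind its controller's flash translation layer, while raw mtdblock does no bad-block management at all. One commonly suggested route, assuming the kernel has UBI/UBIFS support, is to let UBI manage the NAND and give g_file_storage a backing file stored on a UBIFS volume instead of the raw mtdblock device. A rough sketch (mtd numbers and names are illustrative):
Code:
ubiformat /dev/mtd2                      # prepare the NAND partition for UBI
ubiattach /dev/ubi_ctrl -m 2             # attach mtd2 as ubi0
ubimkvol /dev/ubi0 -N storage -m         # one volume using all available space
mount -t ubifs ubi0:storage /mnt/flash
dd if=/dev/zero of=/mnt/flash/backing.img bs=1M count=12
modprobe g_file_storage file=/mnt/flash/backing.img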
I am proposing moving from the mainframe to Linux. The problem is that I am not aware of a scheduling product that is available to handle the production code. We are currently using CA7. Is there anything out there that accomplishes the same thing? As you can tell, I am new to Linux!
We have a Linux server in our environment for application development. On this server we mount many NFS shares from storage. For the past few days we have been receiving this error in syslog: kernel: nfs_statfs64: statfs error = 116. Some users have also faced the error "Stale NFS file handle".
Server info: OS = Red Hat, kernel = Linux hostname 2.4.21-47.ELsmp #1 SMP Wed Jul 5 20:38:41 EDT 2006 i686 i686 i386 GNU/Linux
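Error 116 is ESTALE, i.e. the same stale-handle condition. The usual recovery, assuming the export still exists on the storage side, is to unmount and remount the affected share (mount point below is illustrative):
Code:
umount -f /mnt/storage01      # force-unmount the stale mount
umount -l /mnt/storage01      # lazy unmount if the force fails because it is busy
mount /mnt/storage01          # remount from /etc/fstab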
I'm currently running Slackware 13.37 64-bit on a notebook and finally have suspend/hibernate working, after realizing that USB devices, especially USB HDDs, need to be disconnected before suspend/hibernate can work. The problem is that I have two USB HDDs connected to the notebook whenever it is stationary, for extra storage, so I'd like to create a script, invoked before suspend/hibernate, that stops the process if certain partitions are mounted. I know what I would like to accomplish, but I have only basic scripting knowledge, so I was hoping to get some assistance.
1. The script would store a user-specified string containing devices that are non-USB, e.g.: NONUSB="/dev/sda /dev/sdb"
2. Possibly use /etc/mtab to get a list of what is currently mounted, then remove lines containing whatever is specified in $NONUSB and store the remaining devices in $USB.
3. Run a for loop that executes 'umount' on each token in $USB.
3a. Stop the suspend/hibernate process if 'umount' fails at any point.
3b. If every 'umount' passes, then suspend/hibernate (see the sketch after this list).
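A minimal sketch implementing those steps, assuming pm-utils is used for suspending; the NONUSB list is an example and should be adjusted.
Code:
#!/bin/bash
NONUSB="/dev/sda /dev/sdb"

# Build the list of mounted block devices that are NOT in $NONUSB.
USB=""
while read -r dev mnt rest; do
    case "$dev" in
        /dev/*) ;;                  # only consider real block devices
        *) continue ;;
    esac
    skip=0
    for n in $NONUSB; do
        case "$dev" in "$n"*) skip=1 ;; esac
    done
    [ "$skip" -eq 0 ] && USB="$USB $dev"
done < /etc/mtab

# Try to unmount each USB device; abort the suspend on the first failure.
for dev in $USB; do
    if ! umount "$dev"; then
        echo "umount $dev failed, not suspending" >&2
        exit 1
    fi
done

# Everything unmounted cleanly; go ahead and suspend.
pm-suspend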
I get this error, which means I can't visit websites. I can't rm, cp, mv, or vi this file. How do I regain the ability to browse the internet? Is there a way I can create an /etc/resolv.conf2 and have my system use that instead?
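The exact error isn't quoted, but if it is "Operation not permitted" even as root, one common cause (an assumption on my part) is the immutable attribute being set on the file. A sketch, with an illustrative nameserver:
Code:
# Assumption: the immutable bit is what blocks rm/mv/edit.
lsattr /etc/resolv.conf        # look for an 'i' in the attribute flags
chattr -i /etc/resolv.conf     # clear the immutable bit
echo "nameserver 8.8.8.8" > /etc/resolv.conf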
I am running Red Hat Enterprise Linux 5; I always use the export command to set environment variables. Are there any other ways to set environment variables, and what are their advantages/disadvantages?
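A few common alternatives, sketched with a made-up variable name:
Code:
declare -x MYVAR=value            # bash builtin, equivalent to export
MYVAR=value some_command          # set only in that one command's environment
env MYVAR=value some_command      # same idea via env(1)
set -a; MYVAR=value; set +a       # auto-export everything assigned in between
echo 'export MYVAR=value' >> ~/.bash_profile   # persistent for login shells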
I have installed RDGEN, which comes with the VPFIT package. When I run the program it says: "Failed to find help file". But I ran the program from its main directory, where all the files, including the help files, exist. I think the problem may be what the documentation says: "Some environment variables should be set before starting RDGEN". But I do not know what that means or how to do it.
These are the variables: ATOMDIR, RD PRSETUP, RDSTART, VPFSETUP, VPFPLOTS. Would it be possible for you to tell me what setting a variable means in this case?
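"Setting a variable" here just means exporting it in your shell before launching the program. A sketch using the names from the post; the file locations are placeholders, not the real ones (the VPFIT/RDGEN documentation says which file each variable should point to):
Code:
export ATOMDIR=/path/to/atomic/data/file
export VPFSETUP=/path/to/setup/file
export VPFPLOTS=/path/to/plot/settings
rdgen
# bash users can put the export lines in ~/.bashrc; csh users would instead
# use "setenv ATOMDIR /path/to/atomic/data/file" in ~/.cshrc.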
The $g09root is picked up (in both csh and bash), but not $GV_DIR or $GAUSS_SCRDIR. I guess it's some stupid error, but it is highly frustrating. Here is the .profile file: Quote:
# To make use of this feature, simply uncomment one of the lines below or # add your own one (see /usr/share/locale/locale.alias for more codes) #
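The quoted .profile is cut off above, so for comparison, here is roughly what the relevant lines usually look like for Gaussian 09 and GaussView in a bash .profile; the directories are examples, not taken from the post (csh would use setenv and source g09.login instead):
Code:
export g09root=/opt
export GAUSS_SCRDIR=/scratch/$USER
export GV_DIR=$g09root/gv
. $g09root/g09/bsd/g09.profile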
Is there a command that can list the variables that I am using in a script? I mean the variables that I created in the script, not the environment or local variables. For example, if I have a script with vars like name=Alex, age=20, postal_code=12345, how can I list them all at once WITHOUT using echo $name, echo $age, and so on? Imagine I have a lot of variables and I can't echo them all.
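One trick (a sketch, not the only way): snapshot the shell's variable names at the top of the script with compgen, then diff against a second snapshot wherever you want the listing, so only the variables the script itself created are printed.
Code:
compgen -v | sort > /tmp/vars_before      # very first line of the script
name=Alex
age=20
postal_code=12345
# ...rest of the script...
comm -13 /tmp/vars_before <(compgen -v | sort) | while read -r v; do
    printf '%s=%s\n' "$v" "${!v}"         # indirect expansion prints each value
done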
In the following lines I am trying to replace "Puppy Linux 5.0 Released" with the content of the external variable Var. The following lines are in a file called news.
Code:
<A title="Puppy Linux 5.0 Released" href="http://lwn.net/Articles/388754/" rel=bookmark><FONT color=#ffffff size=2><STRONG>Puppy Linux 5.0 Released </STRONG></A><STRONG> | </STRONG>
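A minimal sketch of one way to do this with sed: double quotes let the shell expand $Var inside the expression, and using | as the delimiter avoids escaping the URL's slashes. The value assigned to Var below is only illustrative.
Code:
Var="Puppy Linux 5.1 Released"
sed -i "s|Puppy Linux 5.0 Released|$Var|g" news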
I am trying to rename a list of variables in my script using a second list of variables. I want each variable in the first list to be renamed after the corresponding variable in the second list: the first after the first, the second after the second, the third after the third, and so on.
For example:
I know how to rename each file individually, but I would like to run a do loop that can rename all my output files at once.
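A sketch of the pairwise rename in bash: two arrays hold the old and new names in matching order and a loop over the indices moves each file. The names below are examples, not taken from the post.
Code:
#!/bin/bash
old=(output1.txt output2.txt output3.txt)
new=(resultA.txt resultB.txt resultC.txt)

for i in "${!old[@]}"; do
    mv -- "${old[$i]}" "${new[$i]}"    # rename old[i] to new[i]
done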
I set a variable (vDate) before entering the FTP session. It then does not seem to resolve when I try to use it in the session as part of an mput command; $vDate resolves to an empty value. Can you point me in the right direction?
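A common cause, assuming the FTP commands are fed from a here-document: if the delimiter is quoted ('EOF'), the shell does not expand $vDate inside the block. Leaving the delimiter unquoted lets the variable expand before ftp ever sees it. A sketch with an illustrative host, login, and file pattern:
Code:
vDate=$(date +%Y%m%d)
ftp -n ftp.example.com <<EOF
user myuser mypass
cd /outgoing
mput report_${vDate}*.csv
bye
EOF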