I've got Debian 6 loaded on a Dell E6520 laptop, and my / partition is nearly full. I used Parted Magic to make room adjacent to / so I could expand it, but Parted Magic won't do it because it says that / is lvm2 and not mounted. Debian 6, however, says that it doesn't have any LVMs and that / is ext3.
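A quick way to settle the disagreement is to ask the running system directly. These read-only commands (the `lvs` line assumes the lvm2 tools are installed) show whether / really sits on LVM:

```shell
df -T /            # prints the filesystem type of the root mount (e.g. ext3)
sudo fdisk -l      # partition table; LVM members show partition type 8e
sudo lvs 2>/dev/null || echo "no LVM logical volumes configured"
```

If `lvs` reports nothing and `df -T` says ext3, Parted Magic is likely misreading the partition type byte rather than the actual contents.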
I'm trying to copy a list of files, excluding any file with ".log" in the filename, to another folder. I can run it correctly when I am located in the Source folder, but not when I am in any other location:

cd /home/me/Source
ls /home/me/Source -1 | grep -v "^.*log$" | xargs -n 1 -iHERE cp -r HERE /home/me/Destination

How can I indicate both the Source and the Destination folder?
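One way to make this location-independent (a sketch using the question's paths) is to let find supply full paths, so cp no longer depends on the current directory:

```shell
SRC=/home/me/Source          # paths taken from the question
DST=/home/me/Destination
# find prints full paths, so this works from any current directory;
# ! -name '*.log' drops anything whose name ends in .log
find "$SRC" -mindepth 1 -maxdepth 1 ! -name '*.log' -exec cp -r {} "$DST" \;
```

The original failed elsewhere because `ls` prints bare filenames, so `cp` looked for them relative to wherever you happened to be.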
I want to insert a picture into a group of pictures (I saved their names in a text file), producing a new group of pictures (whose names I also saved in another text file), but I'm having trouble doing that. I want to write something like this:
I'm trying to find all zip files timestamped from the past 7 days, then unzip them into a different directory. I tried the following, but it only unzipped one of the three files that meet the 7-day criterion. What am I missing? Code: find /home/user/public_html/zip_files/ -iname "*.zip" -mtime -7 -print0 | xargs -n10 unzip -LL -o -d /home/user/public_html/another_directory/
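Two likely culprits: `-print0` emits NUL-separated names, so xargs needs `-0` to split them correctly, and with `-n10` unzip receives several archives at once and treats the second and later arguments as member patterns inside the first archive. A sketch with the question's paths:

```shell
# -0 matches find's -print0; -n1 hands unzip exactly one archive per run
find /home/user/public_html/zip_files/ -iname '*.zip' -mtime -7 -print0 \
  | xargs -0 -n1 unzip -LL -o -d /home/user/public_html/another_directory/
```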
I would like to ask the following:
1) ls -l | grep test -> this greps every "ls -l" output line
2) ls -1 | xargs grep test -> this greps every single file for "test"
3) ls -1 | xargs echo -> this echoes the directory list
4) ls -1 | echo -> this does nothing!!!
My question is: how can a command receive input from "both sides"? (grep can grep the whole output, or every single file via xargs; the same goes for e.g. the wc command.) And why does 4) do nothing (it's a single echo command)?
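The difference is that echo never reads stdin; it only prints its command-line arguments, while xargs exists precisely to turn stdin into arguments. A minimal demonstration:

```shell
printf 'a\nb\n' | echo         # echo ignores the pipe: prints an empty line
printf 'a\nb\n' | xargs echo   # xargs builds "echo a b": prints "a b"
printf 'a\nb\n' | cat          # cat does read stdin: prints a and b
```

Commands like grep, wc, and cat read stdin themselves, which is why they work on "both sides"; echo (and cp, rm, etc.) only look at arguments, so they need xargs in front of them.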
On an openSUSE 11.4, 64-bit box. Has anyone figured out how to make the VM hard disk larger after the original one has filled up? I've searched the forum, but all the searches turn up methods that confuse me.
I often want to extract some info using awk from a variable/filename while running other things using xargs and sh. Below is an example: Code: ls -1 *.txt | xargs -i sh -c 'NEW=`echo $0 | awk -F'_' '{print $1}'`; echo $NEW' {}
In the above case I would like to grab just the first field from a filename (delimited by '_') from within an sh command. This is a simplified example, where normally I would be doing some further data processing with the sh command(s).
The error message that I get is: Code:
}`; echo $NEW: -c: line 0: unexpected EOF while looking for matching ``'
}`; echo $NEW: -c: line 1: syntax error: unexpected end of file
I haven't been able to figure out how to escape the awk command properly.
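The single quotes around `_` and `{print $1}` terminate the outer single-quoted sh -c string early, which is what produces the unmatched-backtick error. Two sketches that avoid the nesting (same filenames as the example):

```shell
# No awk needed: ${0%%_*} strips everything from the first "_" onward
ls -1 *.txt | xargs -i sh -c 'NEW=${0%%_*}; echo "$NEW"' {}

# If awk must be used, quote its program with escaped double quotes so the
# outer single-quoted string stays intact ("_" needs no quoting after -F)
ls -1 *.txt | xargs -i sh -c 'NEW=$(echo "$0" | awk -F_ "{print \$1}"); echo "$NEW"' {}
```

In both versions `{}` becomes `$0` inside the child shell, so further processing can happen there without any quoting gymnastics.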
I'm testing some multi-plat java code and I'm getting a bit frustrated with the Linux tests. I need to run the command: Code: $ java -jar /home/developer/TCO/TabletComicOptimizer.jar <file> <args[]> against all the files that match a specific criteria. I've tried various find syntax and I can't seem to get it right.
Normally I would just create a bash script and populate the results of find into an array and then just enumerate the collection but in this specific case I want to demonstrate this operation at the bash terminal.
I've tried things like: Code: ~/TCO $ find . -type f -iname "*.cb[rz]" | xargs java -jar TabletComicOptimizer.jar {} 1200x1800 ; Thinking that the {} is the substitution for each file returned by find but it's not working. How do I execute my java program against each result in the find operation?
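Plain xargs appends the file names at the end of the command line and does not substitute `{}` unless `-I` is given, so the jar was actually receiving `{} 1200x1800 file1 file2 ...`. Two sketches (jar path from the question):

```shell
# Let find itself run one java process per archive
find . -type f -iname '*.cb[rz]' \
  -exec java -jar /home/developer/TCO/TabletComicOptimizer.jar {} 1200x1800 \;

# Or tell xargs to substitute with -I, one file per invocation
find . -type f -iname '*.cb[rz]' -print0 \
  | xargs -0 -I{} java -jar /home/developer/TCO/TabletComicOptimizer.jar {} 1200x1800
```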
I used the command shown below to remove a list of software using yum. It worked, but is there a way of doing this without using the -y option? I would like to review the results before the transaction takes place. I would like to use the same method for installing additional software after a clean install. cat filename | xargs yum -y remove
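The pipe is the obstacle: without `-y`, yum tries to ask for confirmation on stdin, but stdin is connected to the pipe from cat, not the terminal. Two ways to keep the terminal attached (a sketch; `filename` is the package list from the question):

```shell
# Expand the file into arguments instead of piping, so yum can prompt normally
yum remove $(cat filename)

# GNU xargs can read items from a file itself with -a, leaving stdin free
xargs -a filename yum remove
```

The same pattern works after a clean install: `yum install $(cat filename)`.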
I will eventually script this, but wanted to know where I screwed up when trying to get this one-liner to work. In a nutshell, I want to create a backup directory, find any files that have changed in the last 48 hours, and then copy them to the newly made backup directory.
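A sketch of such a one-liner, broken onto lines for readability (the `data` path is a placeholder): `-mtime -2` matches files modified within the last 48 hours, and GNU cp's `--parents` keeps the directory structure inside the backup.

```shell
# Create a dated backup dir, then copy files changed in the last 48 hours
backup=backup-$(date +%Y%m%d)
mkdir -p "$backup"
find data -type f -mtime -2 -exec cp --parents {} "$backup" \;
```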
What am I doing wrong here? Shredding a directory of files is incredibly slow, while shredding a single file of similar size is around 1000 times faster. The filesystem is ext4, the OS is Slackware current. Looking at top, it shows shred's status as 'D' which, according to the man page, means uninterruptible sleep! Also, /usr/bin/time reports around 10 minutes, but that time was printed on stdout approximately 7 minutes before the command prompt reappeared!
I get this behavior on Slackware 13.37, which includes BASH 4.1.010. Yes, BASH is my shell. I have a file called a.flac and I'm in the directory that contains it.
The output of the ls command is expected: Code: ls *.flac gives: Code: a.flac
Removing the extension with basename works as expected: Code: basename a.flac .flac gives: Code: a
Putting the above command in a variable substitution works as expected: Code: echo `basename a.flac .flac` gives: Code: a
Using xargs with ls and a variable substitution works as expected: Code: ls *.flac | xargs -i echo `echo {}` gives: Code: a.flac
However, when I try to add the basename command to the above command, it stops working. Code: ls *.flac | xargs -i echo `basename {} .flac` gives: Code: a.flac
Whereas the result I expect is: Code: a Why is it not working, and how do I make it work?
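The backticks are expanded by the shell before xargs ever runs, so `basename {} .flac` executes once with a literal `{}`, outputs `{}`, and the command xargs actually runs is plain `echo {}` with each filename substituted in. The basename call has to be deferred until xargs supplies the file name:

```shell
# Run basename per item inside a child shell; {} arrives there as $0
ls *.flac | xargs -i sh -c 'basename "$0" .flac' {}

# Or drop xargs entirely and use parameter expansion in a loop
for f in *.flac; do echo "${f%.flac}"; done
```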
Although I've been dabbling for a while I'm somewhat of a newbie so bear with me: Rather than rebuild my Hardy Server due to root being full, I followed suggestions to create another logical volume on the volume group and put root there. I must have missed some fundamental step. Although the partition appears to be functional, the defined space isn't visible and I am stumped:
$ sudo df
Filesystem                        1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg--server1-lv--root    4582064 4533388         0 100% /
varrun                              1895524     248   1895276   1% /var/run
varlock                             1895524       0   1895524   0% /var/lock
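If the volume group still has free extents (df only shows the filesystem, not the VG), the root LV can be grown in place. A hedged sketch, assuming the names behind /dev/mapper/vg--server1-lv--root (double dashes encode literal dashes, so the VG is vg-server1 and the LV is lv-root) and an example size of 5 GB:

```shell
sudo vgs                                    # VFree column shows unallocated space
sudo lvextend -L +5G /dev/vg-server1/lv-root   # grow the logical volume
sudo resize2fs /dev/vg-server1/lv-root         # grow the ext filesystem to match
```

If `vgs` shows no free space, the new LV created for root is probably holding the extents, and removing it would free them for lvextend.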
I have a triple boot:
Win 7 32-bit on hard drive 1
Win 7 64-bit on hard drive 2
Data partition accessible by all 3 OSs
Ubuntu 10.10 on hard drive 2
Everything is working great. I'm using Windows Boot Loader (used easybcd to attach Ubuntu).
I want to expand /dev/sdb3 to have more space for Ubuntu. I am able to shrink the data partition /dev/sdb2, which leaves an unallocated space. I have backed up /dev/sdb3 using Paragon software.
My question is: what is the best way to expand the /dev/sdb3 partition into the unallocated space and restore the Ubuntu image backup so that it uses all the space (unallocated plus the current /dev/sdb3)? I don't want to screw up since everything is working properly; I just want some more space.
I have 7GB of unallocated space and I want to expand/increase the size of the preinstalled Fedora 13 ext4 partition into the unallocated space. Any ideas? Using GParted Live is not working; it just displays the whole sda drive.
I have been using Ctrl +/- to expand/shrink the displayed font on web pages in Firefox since I figured out how to do it back in version 0.something. Today I happened to be accessing this page https://personal.vanguard.com/us/fun...T#hist=tab%3A2 and I noticed that when I expanded the font, the text on the right side was pushed off screen - not unusual - but that I did NOT have a slider at the bottom of the browser window to allow me to move to the off-screen text. I continued to press Ctrl+ until the font would no longer expand. At the very largest font the slider reappeared; however, it will only move a little bit to the right.
I then accessed the page with Internet Explorer and Firefox 3.5.8 on an XP box - the slider appeared as expected and I can slide to the right to see all of the enlarged text.
I created a new user on my Ubuntu machine, signed on as that user and viewed the page. Again, no issues. It seems like something in my profile rather than the web page itself or Firefox as installed on my PC. I tried disabling the few add-ons which I use (NoScript, Adblock Plus, etc.) - no improvement. I then created a virtual machine (VMware) and installed Ubuntu 10.04 64-bit. Tried the page in Firefox and it works fine. Then I copied my profile to the virtual machine - the page exhibits the same problem. So obviously there is SOMETHING in my profile which is causing the issue.
And for my next trick I replaced the copy of prefs.js in my profile on the VM with prefs.js from the default profile. Other than forgetting everything I have ever configured in Firefox, this has fixed the problem. But not a pretty way to go.
I have an awk program that finds all files of a specific filename and deletes them from selected subdirectories. There is logic in the awk to avoid certain subdirectories, and this is initialized via a parameter in the beginning statement of the awk. The parameter should have all of the subdirectory names at the top level. This varies from time to time, so I cannot hard-code the value. I'm having a problem initializing the awk parameter using sed. I'm setting a variable (named subdir) using an "ls" command piped to "xargs". I'm then trying to substitute that value into the awk using the sed command.
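An alternative that skips sed entirely: capture the directory list in a shell variable and pass it into awk with `-v`, then unpack it with split() in the BEGIN block. A minimal sketch:

```shell
# Space-separated list of top-level directory names
subdirs=$(ls -d */ 2>/dev/null | tr '\n' ' ')

# awk receives the list through -v; split() unpacks it into an array.
# This BEGIN block just proves the list arrives intact.
awk -v skip="$subdirs" 'BEGIN { n = split(skip, dirs, " "); print n, "dirs:", skip }' </dev/null
```

Inside the real program, the `dirs` array can then be checked instead of a hard-coded value, with no text substitution into the script itself.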
Did anyone else notice in the 10.04 RC that it is very difficult to expand a window from the left or right side? The threshold is about one pixel wide before the arrow disappears.
This forum might not be the best place for this question, but some people here are pretty knowledgeable and may have more insight than I do about this. Anyways, I'm thinking about expanding an NTFS (Windows 7) partition on my desktop computer into unallocated space. I know that there is a risk when shrinking a NTFS partition due to fragmentation but are there any risks of data loss from expanding a NTFS partition? My common sense tells me there isn't a risk but I want to be 100% sure I won't lose any files.
I installed openSUSE 11.2 some 6 months ago as an alternative to Windows 7, on a 44GB partition. Having become my primary OS, I am looking to expand the ext4 partition from 44GB to the maximum possible. I have some 24GB of unpartitioned space, and free space on NTFS partitions (one of which could be deleted if necessary). What is the best and safest procedure to perform the partitioning?
So today I needed to switch from openSolaris to a viable OS on my workstation and decided to install openSUSE after having good experiences with it on my personal laptop. I ran into some problems partitioning one of the two hard disks installed on the system. I was limited on the amount of time I could spend at the office doing the install so I decided to use LVM on the one hard disk that seemed to work okay.
I picked LVM because although I don't know much at all about LVM, I at least know enough that it would allow me to expand the root and home partitions once I get the 2nd hard drive working correctly. So now that I've gotten the 2nd disk working okay, I've created two physical volumes on the 2nd drive, one to expand the root partition and one to expand the home partition. So, my question is, can I expand the root an home partitions while they are mounted or should I boot into a live CD environment before I expand the partitions? If I could expand them without booting into a different environment, that would be so great as I don't want to have to drive out to the office again before Monday. BTW, I am a new openSUSE user and an ex Ubuntu user. I loved the Ubuntu forums but had to switch because I do not agree with the direction that Ubuntu is taking.
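To the question itself: growing is the easy direction. An ext3/ext4 filesystem on LVM can be enlarged while mounted (only shrinking needs an unmount or live CD), so another drive to the office should not be necessary. A sketch with assumed names (VG "system", LV "home", new PV on /dev/sdb1); adjust to match `sudo vgdisplay` and `lvdisplay` output:

```shell
sudo vgextend system /dev/sdb1               # add the 2nd-disk PV to the VG
sudo lvextend -l +100%FREE /dev/system/home  # grow the LV into the free extents
sudo resize2fs /dev/system/home              # online resize of the mounted fs
```

The same three steps work for the root LV with its own PV.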
I want to back up an entire Linux system on a 3Tb external Western DIgital USB3 drive.
I do not want to reformat it from what it is, apparently NTFS.
Is there a utility that can act like a file manager, such as mc, that will permit me to create an ever-expanding (to 320Gb) TAR file that retains all the original file permissions? I have had nothing but disappointment with Linux backup utilities on a FAT32 external drive, and I am concerned that if I just try to tar the entire drive at once, with around 3 million files, I might run out of memory.
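tar itself should cover both worries: permissions are recorded inside the archive (so the NTFS target doesn't matter), and tar streams files one at a time rather than building a 3-million-entry list in memory. A sketch, assuming the drive is mounted at /mnt/wd3tb (a placeholder mount point):

```shell
# -p keeps permissions/ownership in the archive; --one-file-system stops
# tar from descending into /proc, /sys, or the backup drive itself
sudo tar -cpzf /mnt/wd3tb/system-backup.tar.gz --one-file-system /
# Restore later with: sudo tar -xpzf system-backup.tar.gz -C /target
```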
Right now I have a 320GB system drive and a 3TB data drive. I want to add two more 3TB drives and do a software RAID5 of 3x3TB. Is that possible without losing the data that is already on the data drive? I just want to make sure before I buy the two drives. I'm not looking for instructions on how to do it, but if you want to include some that would be great too - just making sure it will work.
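It can be done without wiping the data, at the cost of a temporary unprotected window. One common approach (a sketch only; back up first, and the device names /dev/sdb for the old data drive and /dev/sdc, /dev/sdd for the new ones are assumptions):

```shell
# 1. Build a degraded 3-disk RAID5 from the two new drives plus a "missing" slot
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc /dev/sdd missing
sudo mkfs.ext4 /dev/md0
# 2. Mount /dev/md0 and copy everything off the old data drive onto it
# 3. Add the old drive; the array rebuilds to full 3-disk redundancy
sudo mdadm --add /dev/md0 /dev/sdb
```

Until step 3 finishes resyncing, the array has no redundancy, so the backup in step 0 matters.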
My Ubuntu system is occasionally becoming very sluggish. I'm running many things simultaneously and it's very difficult to tell which program is the culprit.
I suspect that the sluggishness is due to disk activity since the CPU usage is consistently under 50% on each of the 4 cores of the CPU, and over 30% of the 6GB of RAM are free.
Is there a tool that can show me in real time the number of disk IO operations per second and the amount of data read/written per second? Can all this info be broken down and displayed per process?
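Yes: iotop shows exactly that, per process, and pidstat/vmstat (from the sysstat and procps packages) cover the rates. A sketch; iotop needs root:

```shell
sudo iotop -o      # live per-process disk reads/writes; -o hides idle processes
pidstat -d 2       # per-process read/write kB/s, refreshed every 2 seconds
vmstat 2           # system-wide view: "bi"/"bo" columns are blocks in/out
```

If iotop confirms heavy I/O, the PID column identifies the culprit directly.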
I want to write a shell script so that at 9 AM every morning a general mail is sent automatically to my network users' e-mail IDs. My users are as follows: akhtaruzzaman@a[URL], ariful.[URL] etc.
Below is my little effort:
# !/bin/bash
userlist=`cut -f 1 -d : /etc/passwd`
mail -s "mailbackup" << END
keep mailbackup in another drive daily for security purpose
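A completed sketch of the attempt above: mail needs a recipient, and the here-document needs its closing END delimiter. The @example.com domain is a placeholder for the real one.

```shell
#!/bin/bash
# Mail each local account; the domain is a placeholder for the real one
for user in $(cut -f1 -d: /etc/passwd); do
    mail -s "mailbackup" "${user}@example.com" << END
keep mailbackup in another drive daily for security purpose
END
done
```

The 9 AM schedule then belongs in cron rather than in the script: `crontab -e` and add `0 9 * * * /path/to/mailbackup.sh`.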