Ubuntu :: Creates A Huge, Unwieldy List That Can't Get Through?
Jul 8, 2010
I'm looking for a file on my computer. It's in .txt format, and was created/last modified on Wed 7 Jul or Thu 8 Jul. It's either in my /home folder, or possibly /media/disk. But I can't find it! The command Code: sudo find / -name *.txt creates a huge, unwieldy list that I can't get through. What do I need to do?
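A sketch of how the search could be narrowed, for what it's worth. Two assumptions: the glob should be quoted (unquoted, the shell may expand *.txt before find sees it), and GNU find's -newermt test is available to bound the modification date. The temp directory below is only for demonstration; in practice you'd point find at /home and /media/disk:

```shell
# demo sandbox; in practice search /home and /media/disk instead of "$d"
d=$(mktemp -d)
touch -d '2010-07-07 12:00' "$d/notes.txt"   # modified Wed 7 Jul -- should match
touch -d '2010-07-01 12:00' "$d/old.txt"     # too old -- should not match
# quote the glob so the shell doesn't expand it, and bound the mtime window
find "$d" -name '*.txt' -newermt '2010-07-07' ! -newermt '2010-07-09'
```

Restricting the starting directories instead of searching all of / also avoids needing sudo and skips the noise from /proc and /sys.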
I am trying to compare a list of patterns from one file, grep them against another file, and print out only the unique patterns. Unfortunately these files are so large that the command has yet to run to completion. Here's the command that I used:
Code: grep -L -f file_one.txt file_two.txt > output.output Here's some example data:
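One likely issue, hedged since the example data isn't shown: grep -L lists *files* that contain no match, not unmatched patterns, and a huge -f pattern file is very slow unless the patterns are fixed strings (-F). If both files are plain line lists and whole-line comparison is acceptable (comm matches whole lines, unlike grep's substring match), sorted set subtraction with comm is far faster. Sample data below is invented:

```shell
d=$(mktemp -d)
printf 'alpha\nbeta\ngamma\n' > "$d/file_one.txt"   # the pattern list
printf 'beta\ndelta\n'        > "$d/file_two.txt"   # the file to check against
sort "$d/file_one.txt" > "$d/one.sorted"
sort "$d/file_two.txt" > "$d/two.sorted"
# comm -23 prints lines unique to the first file:
# patterns from file_one.txt that never occur in file_two.txt
comm -23 "$d/one.sorted" "$d/two.sorted"
```

If substring matching really is required, `grep -F -v -f file_two.txt file_one.txt` with fixed-string mode is the usual speed fix.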
Is anyone else having problems with the new 2.6.32-26 kernel and Lucid LTS? I first installed it several days ago on a test system and it immediately killed VirtualBox, since the header files necessary to rebuild the vbox modules were not included in the update. I installed the header files via Synaptic and everything then worked properly, so this morning I allowed my production system to install the updates.
That was a mistake. Immediately upon the required reboot, my GUI failed to appear. Eventually I got a CLI login on TTY1, and attempted to use "startx" to launch the GUI. It failed with a message that the nvidia driver could not be found.
Rebooting and choosing the older kernel version cured all the problems, but for now the security update provided by the new kernel is unavailable to me. This is NOT the reliability I have come to expect from Ubuntu's long term support and update notifications!
I have been running 64 bit Koala on an HP desktop with 8 GB for a few months. Now that Lucid is out and the kernel has stopped changing every three days, I thought I would do a clean install of 64 bit Lucid. Backed everything up, verified the .iso I downloaded a few days ago and burned a CD, and away we go. Everything seems clean, finds the old /home, asks for a restart. I say go.
Suddenly there are way more options on my grub menu. Every Linux option has an alternate with PAE, and the PAE option is the default boot. I didn't even think about it and let it take the default. The reboot hangs. I had to pull the plug to turn the machine off.
I reboot again with PAE. It hangs again. Pull the plug. Reboot without PAE and I'm golden. Google soon turns up all sorts of old advice to add noapic to the boot parameters on a PAE kernel. Instead I sat back and started thinking.
What am I doing with a kernel marked PAE in the first place? This is a 64-bit install; it doesn't need PAE. I looked again at the .iso and it's 64-bit.
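A quick sanity check, for whatever it's worth: whether the *running* kernel is actually 64-bit (where PAE is meaningless) can be read off directly:

```shell
uname -m   # x86_64 means a 64-bit kernel; PAE only applies to 32-bit (i686) kernels
uname -r   # the version string shows whether a -pae image is actually booted
```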
I've tried using a script to run incremental backups. The idea was to use the update switch to just update the files in the .tar instead of creating new ones. However it seems to have created duplicate files in the tar instead of just updating them (refer to screenshot). Is this normal?
Here is the command in the script:
Code: tar -uvpf /home/jonny/.BackUps/Updating/Documents.tar /home/jonny/Documents
Is there a way to stop this but still have the files update to the latest version?
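This is tar's documented behaviour rather than a bug: -u *appends* a fresh copy of any file that is newer than its archived copy, and extraction simply keeps the last copy, so duplicates inside the archive are expected. A small demonstration (the temp dir and file names are invented; the backdating just makes the rewrite reliably look newer):

```shell
d=$(mktemp -d)
mkdir "$d/Documents"
echo v1 > "$d/Documents/a.txt"
touch -d '2010-01-01' "$d/Documents/a.txt"      # backdate so the rewrite looks newer
tar -cpf "$d/Documents.tar" -C "$d" Documents
echo v2 > "$d/Documents/a.txt"                  # modify the file
tar -upf "$d/Documents.tar" -C "$d" Documents   # appends a SECOND copy of a.txt
tar -tf "$d/Documents.tar" | grep -c 'a.txt'    # two entries; extraction keeps the last
```

If a single up-to-date copy is what's wanted, mirroring into a directory with `rsync -a`, or tar's --listed-incremental snapshot mode, fits better than -u.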
I have created an automated backup using the following:
[Code]...
When cron runs it creates a .tar like it is supposed to, but it creates archive-.tar. However, when I manually type in that command, I get archive-"whateverthedateis". Is there a step that I missed, since the date keeps being omitted by this cron job? I also tried another method
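A common culprit, assuming the command uses date: in a crontab line the % character is special (it ends the command and starts stdin), so everything from the first unescaped % onward is cut off and date produces an empty string, giving exactly archive-.tar. Escaping each % with a backslash fixes it. The demo below shows the shell side working, plus the escaped crontab form (paths and schedule invented):

```shell
d=$(mktemp -d)
mkdir "$d/docs"; echo hi > "$d/docs/f"
# from an interactive shell this works as-is:
tar -cf "$d/archive-$(date +%Y%m%d).tar" -C "$d" docs
ls "$d"
# the equivalent crontab entry must escape every %:
# 0 2 * * * tar -cf /backups/archive-$(date +\%Y\%m\%d).tar -C /home/user docs
```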
I have installed Ubuntu 11.04 for a friend on his new laptop. First I shrunk his Windows partition and then installed Ubuntu 11.04 directly from the Live CD. Everything worked perfectly and he is very pleased. However, I noticed one strange thing. When I ran GParted on the installed Ubuntu desktop I noted that Ubuntu was installed on an extended partition, /dev/sda4, with the ext4 root file system on /dev/sda7 and three Linux swap partitions each of 7.85 GiB. Only one of these swap partitions is "on", i.e. with a key next to it. The other two seem to be just wasted space. Why did the install create three swap partitions?
On a similar theme, on my own machine I installed 11.04 next to my 10.04. This time it installed two additional swap spaces (see problem above). I removed them both and altered the fstab to the UUID of the existing 10.04 swap space and it works perfectly. So my second question is why doesn't the install use existing swap spaces rather than creating new one(s)?
Here's what is going on: I'll open any file in vim to edit, and then when I'm finished and I enter ":wq", instead of just saving the file I was editing, vim creates another file by the same name with a tilde.
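That name~ file is vim's backup copy, not a failed save; the original file is still written normally. If the leftover backups are unwanted, this can be switched off in ~/.vimrc:

```vim
" ~/.vimrc -- stop leaving name~ backup files behind
set nobackup        " no permanent backup file kept after a successful write
set writebackup     " still protect the file during the write itself
```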
I had no problems doing my video editing with my home videos whether they are dv or the mpeg4.avc hd format. I was able to use KDEdvdauthorwizard easily to make a burnable playable iso from my mpegs that I made from avidemux2. I upgraded many packages and since then I cannot make an iso that I can play at all.
Also, dvdstyler and qdvdauthor crash. KDEdvdauthorwizard says there is a problem with my transcode so will not make an iso anymore. I have tried many versions of transcode and none are accepted. Just a few weeks ago KDEdvdauthorwizard worked great for making my iso.
I decided to experiment with tovid and it makes a dvd that will not play in either of my dvdplayers. It plays poorly on the computer. I tried to make an iso out of my mpeg with mkisofs and that .iso will not open or play. I used mediainfo on the iso and this is what I got:
mediainfo output.iso
General
Complete name : output.iso
Format : MPEG-PS
[code]....
My system is: dual core opteron 185 2.6ghz, 1000 gigs of hard drive, 3 gigs ram memory, onboard video nvidia hdtv capable. I can play hd movies just fine on my system. Movie editing is fast, too.
I have two classes, for argument's sake A and B. A implements the core functionality, B is an encapsulated data structure. If you imagine this situation [code]...
From within B's member functions, I would like to access the public function() in class A. This is not an inheritance issue, they are two discrete classes with radically different functionality. Class A makes an object of B.
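One common pattern for this, sketched below with invented names: since A makes the B object anyway, A can hand B a reference to itself at construction time, and B calls back through that reference. No inheritance is involved, and a forward declaration breaks the circular dependency between the two classes:

```cpp
class A;                        // forward declaration so B can hold a reference

class B {                       // the encapsulated data structure
public:
    explicit B(A& owner) : owner_(owner) {}
    int poke();                 // defined below, once A is complete
private:
    A& owner_;                  // non-owning back-reference to the A that made us
};

class A {                       // the core functionality
public:
    int function() { return 42; }       // the public function B wants to reach
    B make_b() { return B(*this); }     // A makes an object of B
};

int B::poke() { return owner_.function(); }   // B calling back into A
```

From main one would write `A a; B b = a.make_b(); b.poke();`. If a B could ever outlive its A, a pointer (or std::shared_ptr/weak_ptr) is safer than a plain reference.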
I've been searching the web without finding any solution to my problem. vsFTPd is acting really weird; I've never seen this problem before, and I've been using vsftpd for some years now. Well, the thing is, I've made a user that chroots to the folder /var/www on my server. And when I then try to chmod the file /var/www/htdocs/testsite/index.html through my FTP client, I only get the error "550 SITE CHMOD command failed.", and when I then check in my /var/log/vsftpd.log it says
Code: FAIL CHMOD: Client "192.168.50.58", "/htdocs/testsite/index.html 777" Which I think would mean that it tries to chmod the file "/htdocs/testsite/index.html" instead of chmod the
I'm using an if statement in the /bin/ash shell and it isn't working.
Code: if (( 5000 > $available_blocks )) then echo "WARNING Disk space low, "$pct_used" is used" fi
I don't think it's a problem with the if. When I use that code it creates a file whose name is the value of $available_blocks, instead of using the '>' sign to compare the two numbers.
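That symptom matches what ash would do here: (( ... )) is bash/ksh arithmetic syntax, so ash parses it as a nested subshell in which 5000 is run as a command and > redirects its output to a file named by $available_blocks. The POSIX-portable form uses [ with -lt/-gt, which works in ash, dash and bash alike (the sample values below are made up):

```shell
available_blocks=4096
pct_used="87%"
# POSIX test(1) integer comparison instead of bash-only (( ... ))
if [ "$available_blocks" -lt 5000 ]; then
    echo "WARNING: disk space low, $pct_used is used"
fi
```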
I'm looking for an app for Linux that creates bootable images. Back when I used windows, I used Imgburn. Now, I need an app like that for Linux. Wherever I looked online, I saw either one (or both) of these ideas.
1. Run Imgburn under wine 2. Get k3b
I don't like using wine because the programs run very slow. I'm not sure exactly how to get k3b to produce a bootable image. So that's where I'm stuck.
I set up incremental backups with crontab. I just discovered that tar is not actually incrementing the tar files. I first created the tar files, then in crontab I have:
Code: cd /; tar -cpf --incremental --exclude-from=/root/ExcludeFromTar.txt mnt/PATRIOT/bkp/home.tar home
I only just discovered that this creates a file whose filename is "/--incremental". I also tried using tar's -G switch instead of --incremental:
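The cause is argument order: -f takes the *next* word as the archive name, so --incremental itself became the file name. Also worth noting: GNU tar's incremental mode that keeps state between runs is --listed-incremental with a snapshot file (plain --incremental/-G is the older interface and records no state). A working sketch in a temp dir (the real paths would be mnt/PATRIOT/bkp/... as in the crontab):

```shell
d=$(mktemp -d)
mkdir "$d/home"; echo one > "$d/home/f1"
# level-0 (full) backup; the .snar snapshot records what was dumped
tar -cpf "$d/home0.tar" --listed-incremental="$d/home.snar" -C "$d" home
echo two > "$d/home/f2"
# a later run against the same snapshot picks up only the changes
tar -cpf "$d/home1.tar" --listed-incremental="$d/home.snar" -C "$d" home
tar -tf "$d/home1.tar"    # contains the new f2 but not the unchanged f1
```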
I run a mailx command like this in a script: cat logfile | mailx -s 'the logfile' to-me@.. This works most of the time, but in some cases mailx automagically turns logfile into an attachment called 'attachment.bin'. I think this may be because 'logfile' contains a few control characters or escape codes? How can I tell mailx to be less intelligent and treat it as an ASCII text file?
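That guess is plausible: mailx implementations commonly switch to an attachment when the body doesn't look like plain text, so stripping the control characters first usually keeps it inline. A sketch (the log content is invented, and since the real address is elided above, the mailx call is shown commented with a placeholder address):

```shell
d=$(mktemp -d)
printf 'line one\033[31m colour escape\nline two\007 bell\n' > "$d/logfile"
# keep only tab, LF, CR and printable ASCII; delete everything else
tr -cd '\11\12\15\40-\176' < "$d/logfile" > "$d/logfile.clean"
cat "$d/logfile.clean"
# then mail the cleaned copy, e.g.:
# mailx -s 'the logfile' to-me@example.invalid < "$d/logfile.clean"
```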
I have just got my OpenLDAP server up and running; however, I admit I'm a little confused about authenticating a client machine to the server. When I create an account on the LDAP server, does this mean that the server creates a user account in /etc/passwd, or somewhere else on the server?
This process creates a custom bootable thumbdrive with the latest available Fedora 14 software. In addition, the rpmfusion repositories are preconfigured, and the list of software packages on the currently running system is used as the basis for the thumbdrive image.
So, this is a simple download scheduler program, which creates multiple threads of the downloading process, wget (I could also have used 'curl' instead of 'wget'). Can you debug this code?
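Since the code itself isn't shown here, this is only the minimal shape such a scheduler often takes: xargs -P caps the number of parallel wget processes. In this demo, echo merely prints the commands so the sketch runs without a network; drop the echo for real downloads (the URLs are made up):

```shell
d=$(mktemp -d)
printf 'http://example.com/a\nhttp://example.com/b\nhttp://example.com/c\n' > "$d/urls.txt"
# at most 4 concurrent jobs; 'echo' is a dry-run stand-in for wget
xargs -n1 -P4 echo wget -q < "$d/urls.txt"
```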
My cron job is executing the below mysqldump command but it produces an empty sql file. However, when I run from the command line, it works as expected.
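Two usual suspects, hedged since the actual command isn't shown: cron runs with a minimal environment (no PATH entry for mysqldump, no login credentials), and if the command embeds date +% formats, every % in a crontab line must be escaped or the command is truncated. A sketch with absolute paths, an explicit credentials file, and stderr logged so the next failure is visible (all paths, the database name and the schedule are examples):

```
# crontab -e
0 3 * * * /usr/bin/mysqldump --defaults-extra-file=/root/.my.cnf mydb > /backups/mydb.sql 2>> /var/log/mydb-dump.err
```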
I'm creating a script that creates files from an svn checkout and compresses them using tar.gz. The script gets the repository name from a command line argument. I need to capture a number from the last line of the output and create a file name from it.
The svn checkout outputs all the file names from the repository and at the end it says: revision number xxxxx. I need to get this number and then rename the tar.gz to it. How do I save the output to a variable and get this number?
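Command substitution captures the whole output, and the revision can then be peeled off the last line. svn checkout normally ends with a line like "Checked out revision 12345."; the printf below simulates that so the sketch runs offline (the repository and tarball names are invented), and in the real script it would be replaced by the actual svn checkout call:

```shell
# simulated 'svn checkout' output; replace the printf with the real command
out=$(printf 'A    wc/file1\nA    wc/file2\nChecked out revision 12345.\n')
rev=$(printf '%s\n' "$out" | tail -n1 | tr -cd '0-9')   # digits of the last line
tarball="myrepo-$rev.tar.gz"                            # e.g. mv out.tar.gz "$tarball"
echo "$rev $tarball"
```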
I am having some trouble setting up a cron job that creates a tunnel to my remote machine to work correctly on Ubuntu 9.10. The setup looks like the following:
(1) myscript.sh (executable) Code: #!/bin/bash ssh -2 -x -i /home/user/.ssh/id_rsa.prv -L 3128:myremotemachine:3128 myaccount@myremotemachine (2) crontab -e, added the following lines:
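Two things commonly bite with this setup: without -N, ssh tries to run a remote login shell from cron and may exit immediately, and cron will happily start a brand-new tunnel on every run. A hedged crontab sketch that only respawns when no matching tunnel is alive (the pgrep pattern and the 5-minute schedule are examples; the key must have no passphrase for unattended use):

```
# crontab -e: check every 5 minutes, respawn only if the tunnel died
*/5 * * * * pgrep -f 'ssh.*3128:myremotemachine:3128' >/dev/null || /home/user/myscript.sh >/dev/null 2>&1
```

Inside myscript.sh, adding -N (no remote command) to the ssh line keeps ssh acting purely as a port forwarder.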
I print something with cups-pdf. The pdf appears and it LOOKS as it should, BUT if I actually select text in the pdf viewer and try to paste it somewhere, I get messed-up characters (sometimes no characters). So this means searching is out of the question too, I guess.
Why is this happening? I don't know if it's a ghostscript issue, because on Windows there are several gs-based pdf printers and those create perfectly searchable pdfs. Is this a cups/cups-pdf specific issue?
PS Printing through the "Print to file" dialog creates good pdfs, but I don't know how to replicate that via the command line.
I have some USB-serial converters, and when I connect any of them for the first time a /dev/ttyUSB0 node is created. If I disconnect the device and then reconnect it, a /dev/ttyUSB1 node is created, and if I do the same again (reconnect), a /dev/ttyUSB2 node is created...
This is annoying because I mostly use only one converter at a time, and because of the device node changing I have to reconfigure my software.
To reset the numbering of the device node I can force a reload of the usbserial module, but previously (I don't know exactly since when) this was not necessary.
I have also checked if the device is being in use before disconnecting it (with lsof) and the device is not being used.
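A persistent device name via a udev rule sidesteps the numbering entirely: match the adapter by its USB attributes and add a stable symlink. The vendor/product/serial values below are placeholders; the real ones can be read with `udevadm info -a -n /dev/ttyUSB0`:

```
# /etc/udev/rules.d/99-usb-serial.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A1B2C3", SYMLINK+="ttyUSB-converter"
```

The software can then point at /dev/ttyUSB-converter regardless of which ttyUSBn number the kernel hands out.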
I have a desktop and a laptop both running opensuse 11.2 with kde4. I have a samba share on my desktop. I tried opening a video on that share from my laptop (wirelessly) with Dolphin/SMPlayer. Here's what happened:
The video started downloading and the system tray notified me it would take 25 minutes. I thought that was too long (video is 350MB) so I checked the download speed and it was about 2 MB/s. It didn't make sense but I let it keep going.
25 minutes and 3.4 GB later, the download finally "finished"--according to the system tray. However, I checked my system monitor and something was still downloading at 2 MB/s. I confirmed with "df -h" that I was losing 2MB of space a second. At this point I only had about 700MB of disk space left so I rebooted (I wasn't sure how else to stop the download).
After digging around on / I found my video at /var/tmp/kdecache-londy/krun and it was 350MB. Then I found multiple copies of the same video, of varying sizes, on /tmp/kde-londy totalling 3GB.
I deleted the tmp files and tried it again. This time instead of clicking on the video to play it, I tried copying it to my laptop. Same thing started to happen but I didn't let it continue.
When I converted to openSUSE 11.2 and went through the YaST HTTP Server Configuration, creating my virtual hosts under the Hosts tab, YaST combined them all into one file, "/etc/apache2/vhosts.d/ip-based_vhosts.conf". I did google and read [URL] for further assistance. I'd like each virtual host to have its own file under vhosts.d, and I'm wondering why YaST did not do that. The file /etc/apache2/httpd.conf lays out the file structure, and all vhosts.d/*.conf files are included. Is there a way to tell YaST to create separate files for each vhost, or does the user have to do it manually?
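Splitting by hand does work, precisely because httpd.conf includes every vhosts.d/*.conf: each vhost can live in its own file shaped roughly like this (the address, server name and paths are examples, not taken from the post):

```
# /etc/apache2/vhosts.d/example.conf
<VirtualHost 192.168.1.10:80>
    ServerName example.local
    DocumentRoot /srv/www/example
</VirtualHost>
```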
I could not find an answer to this. I can connect to the internet no problem after I edit the connection name to NETGEAR instead of our last name. After a while it looks like it renews the connection and creates a new one with the wrong name, and the connection drops. It looks to me like it is not using the same wireless connection each time but creating new ones that do not work, for the above-mentioned reason. What can I do to stop the loop and have it always use the same connection?
I am trying to use ln to create a hard link to file a, and whenever I do it, it seems to create a copy of the file instead. After editing file a, opening the link shows the old information while opening file a shows the new information. The command I am using is
Code:
ln /home/user/file
within the new directory I am trying to link from. I am using CentOS 5.4.
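With one argument, ln TARGET creates the link in the current directory under the same name, and a hard link can be verified by inode: ls -li shows the same inode number for both names. The "old information" symptom usually comes from the editor, not ln: many editors save by writing a new file and renaming it over the old name, which gives the edited name a fresh inode and silently detaches the other hard link. A demo (paths invented):

```shell
d=$(mktemp -d)
echo original > "$d/file"
ln "$d/file" "$d/link"        # one inode, two directory entries
ls -li "$d"                   # the inode numbers match
echo edited > "$d/file"       # an in-place write is seen through BOTH names
cat "$d/link"                 # prints: edited
```

If the editor is vim, `:set backupcopy=yes` forces in-place writes so hard links survive saving.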