I have a file, cpq_cciss-2.6.20-34.rhel4.i686.dd, which is designed to build a floppy disk; the floppy holds a disk driver that is not on the RedHat CD-ROM. But this .dd is not complete: some files, like /drivers/pci.ids, are missing. My idea is to extract all the files from the .dd file, add the missing files, and then re-create a new .dd file. But how can I extract all the files from the initial .dd file and then recreate a new one?
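A possible approach, assuming the .dd is a raw image of a plain floppy filesystem (the mkfs type and file locations are assumptions; run as root):
Code:
# mount the existing image on a loop device
mkdir -p /mnt/floppy-img
mount -o loop cpq_cciss-2.6.20-34.rhel4.i686.dd /mnt/floppy-img

# copy everything out and add the missing files
mkdir -p ~/floppy-work
cp -a /mnt/floppy-img/. ~/floppy-work/
cp pci.ids ~/floppy-work/drivers/        # hypothetical location of the missing file
umount /mnt/floppy-img

# build a fresh 1.44 MB image, create a filesystem on it, and copy everything back
dd if=/dev/zero of=new.dd bs=1024 count=1440
mkfs.ext2 -F new.dd                      # or mkdosfs, depending on what the original image uses
mount -o loop new.dd /mnt/floppy-img
cp -a ~/floppy-work/. /mnt/floppy-img/
umount /mnt/floppy-img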
I'm trying to figure this error message out. This little script is supposed to tweet my laptop's IP address as a cron job; I'm hopeful that it would keep doing so even if the laptop were stolen. It is a variant of one that works, but this one doesn't, and I can't see a difference in the curl line of either one.
Code:
#!/bin/bash
user="xxxxxx@xxxxxxxxx"
pass="xxxxxxxxxxx"
wget [URL]
TWEET=`sed -n 1p index.html`
curl --basic --user "$user:$pass" --data-ascii "status=$TWEET" "[URL]"
rm -f index.html
exit
This is the error message.
Code:
curl: (6) Could not resolve host: status=66.183.103.67; Cannot allocate memory
{"request":"/statuses/update.json","error":"Client must provide a 'status' parameter with a value."}
Why does curl think the status is the URL?
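One guess, since the working variant isn't shown: the line pulled out of index.html may carry a carriage return or other characters that confuse the command line. A sketch that strips them and lets curl URL-encode the value (the [URL] placeholders are kept from the original):
Code:
#!/bin/bash
user="xxxxxx@xxxxxxxxx"
pass="xxxxxxxxxxx"

wget -O index.html "[URL]"

# strip a possible carriage return / trailing whitespace from the first line
TWEET=$(sed -n '1p' index.html | tr -d '\r')

# --data-urlencode makes curl escape the value safely
curl --basic --user "$user:$pass" --data-urlencode "status=$TWEET" "[URL]"

rm -f index.html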
I have a multi-sector nrg file that I would like extracted. I can't seem to find a way to extract the contents of it! Please tell me if there is a tool I can use to do this!
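A couple of options, sketched with placeholder file names (the dd trick only works for simple single-session Nero images):
Code:
# nrg2iso (packaged for most distros) converts a Nero image to a plain ISO
nrg2iso image.nrg image.iso

# for simple single-session images, skipping Nero's 300 KiB header also works
dd if=image.nrg of=image.iso bs=1k skip=300

# then mount the ISO and copy the files out
sudo mkdir -p /mnt/iso
sudo mount -o loop image.iso /mnt/iso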
Why do I always need to run "su" and then enter my password to extract or copy any file in Fedora 11? How can I configure things so that I am always in my root account?
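One common Fedora setup, sketched below, is to put your account in the wheel group and enable it in sudoers, so single commands can be run with sudo instead of a full su session ("yourusername" is a placeholder):
Code:
# as root
usermod -aG wheel yourusername

# then run visudo and make sure this line is uncommented:
#   %wheel  ALL=(ALL)       ALL

# afterwards, prefix privileged commands with sudo
sudo cp archive.tar.gz /opt/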
I want to generate core dump files from my program when it crashes. It's a pretty big process with about 10-11 threads in it. I have followed the documentation to enable core dumps by setting ulimit to unlimited, etc. I quickly tried "A demo program creating a core dump" from the following webpage, which succeeds in segfaulting and dumping a core file in the directory that I configured. However, I then ran my original program and caused it to crash. I did this by making calls to kill(), raise(), or the same null pointer access shown on the webpage above. In each case, my program crashed but did not generate a core dump file. Am I missing something? My program is in C++ and my environment is Redhat 9.0 (kernel 2.4.20).
Going through the "Why do I NOT get a core dump?" section on the same webpage as above, I can see two potential problems. One, there are issues with suid/sgid (bullet #6); I am not able to change any settings for suid because my system contains neither /proc/sys/fs/suid_dumpable nor /proc/sys/kernel/suid_dumpable. Two, my program has threads in it, and bullet #8 may be the problem.
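A few things worth double-checking, as a sketch (the binary name is a placeholder):
Code:
# the limit must be raised in the same shell that starts the program
ulimit -c unlimited
./myprogram                      # hypothetical binary name

# cores are written to the crashing process's current working directory,
# so that directory must be writable by the process

# as root: append the PID to core file names so one dump doesn't overwrite another
echo 1 > /proc/sys/kernel/core_uses_pid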
Some of my files and directories were mysteriously disappearing and some of my shell scripts were failing after the upgrade to Fedora Core 12. After some debugging I found out that file name globbing is no longer case sensitive in Fedora Core 12, that is
rm -rf [a-z]*
now also trashes all files and directories starting with [A-Z], which explains the removed files and directories
and
ls [a-z]*
now also includes files and directories starting with [A-Z], which caused my shell scripts to fail.
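For reference, this behaviour comes from locale-aware range collation in glob patterns: in many UTF-8 locales, [a-z] collates letters as aAbBcC..., so it matches most uppercase names too. Two common workarounds:
Code:
# force traditional ASCII collation for one command
LC_COLLATE=C ls [a-z]*

# or export it (e.g. in ~/.bashrc) so scripts behave the old way
export LC_COLLATE=C

# locale-independent alternative inside scripts
ls [[:lower:]]*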
I have created a virtual machine of a system running Fedora Core 4 and I need to upgrade it to Fedora Core 10. Based on what I have read, it is possible, so I started the upgrade process. I get an error message saying that /dev/hda6 (my root partition) does not exist, even though it does.
Does the installer need to read a label from /etc/fstab? I executed tune2fs -L / /dev/hda6 and added LABEL=/ to the corresponding entry in fstab, but the Fedora Core 10 installer still gives the same problem during installation. Should I upgrade to an intermediate version like Fedora Core 7 first?
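For completeness, a minimal sketch of labelling the root filesystem and matching it in fstab (the filesystem type is an assumption); if the installer still cannot find the partition, it may also be worth checking how the root device is referenced in the boot loader configuration:
Code:
# label the root filesystem (tune2fs -L and e2label are equivalent)
e2label /dev/hda6 /

# matching /etc/fstab entry:
# LABEL=/    /    ext3    defaults    1 1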
Fedora 12, gcc 4.4.1. I am doing some programming, and my program gave me a stack dump. However, there is no core file for me to examine.
So I did:
Code:
ulimit -c unlimited
and got this error message:
Code:
bash: ulimit: core file size: cannot modify limit: Operation not permitted
I also tried setting ulimit to 50000 and still got the same error. The results of ulimit -a:
Code:
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
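A likely explanation (an assumption, since the hard limits aren't shown): an unprivileged shell cannot raise the soft limit above the hard limit, and the hard core limit may be 0. A sketch of the usual fix ("yourusername" is a placeholder):
Code:
# check the hard limit
ulimit -Hc

# as root, allow real core limits in /etc/security/limits.conf:
#   yourusername  soft  core  unlimited
#   yourusername  hard  core  unlimited
# then log out, log back in, and raise the soft limit in the new shell:
ulimit -c unlimited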
I am in school for my CIS degree and the book I am using this session covers Windows XP and Fedora Core 4. I am having trouble finding & downloading Fedora Core 4. My question is: Is there a big enough difference between Fedora Core 4 and Fedora Core 14 that I would not be able to use 14 instead of 4?
Whenever I extract a file from a .tar or whatever, it isn't detected. I notice this mainly when I'm using XAMPP. I zip up all my files on one computer, load Ubuntu on another, extract the files to the web folder (htdocs), and then I get nothing. However, when I manually create the files directly on my computer, as opposed to extracting them, they appear.
Is there something I need to do in order to have these files appear? Is there some sort of file system refresh? Or am I being a complete idiot?
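One guess: the files may be there but unreadable to the web server, or they may have been extracted somewhere else entirely. A quick check, assuming the stock XAMPP path /opt/lampp/htdocs (adjust to your setup; "yourusername" is a placeholder):
Code:
ls -la /opt/lampp/htdocs              # confirm where the archive really unpacked, including dot files

# if the files are present but not readable, fix ownership and permissions
sudo chown -R yourusername:yourusername /opt/lampp/htdocs
sudo chmod -R u+rwX,go+rX /opt/lampp/htdocs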
This is what I tried to do:
Code:
cd ~/Desktop
sudo sh ati-driver-installer-10-1-x86.x86_64.run --extract
After entering my password, this appeared:
Code:
sh: Can't open ati-driver-installer-10-1-x86.x86_64.run
What am I doing wrong? It apparently worked in this thread.
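"Can't open" usually means sh could not find or read the file, so it is worth confirming the exact name and size first. A sketch ("ati-extracted" is an arbitrary target directory):
Code:
cd ~/Desktop
ls -l ati-driver-installer*                     # check the exact file name and that the download completed
chmod +x ati-driver-installer-10-1-x86.x86_64.run
sudo sh ati-driver-installer-10-1-x86.x86_64.run --extract ati-extracted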
How do I extract a pgp file on Ubuntu? I have the file and a passphrase. On Windows I have Kleopatra, so I can right-click the file and click 'Decrypt'. Is there an easy way to do this on Ubuntu?
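GnuPG is installed by default on Ubuntu and handles this from the command line; a minimal sketch (file names are placeholders, gpg prompts for the passphrase):
Code:
gpg --output secret.txt --decrypt secret.txt.pgp
# or simply, which writes the decrypted file next to the original:
gpg secret.txt.pgp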
I know a .bin file is an executable file type in Linux. We get an error after installing it, and it refers to a file name and a line number within that file. I'm trying to find out if that file is part of the .bin file, but I need a way to see what's inside of it or extract it.
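A few non-destructive ways to peek inside, as a sketch ("installer.bin" is a placeholder; whether the installer offers an extract or help option varies from vendor to vendor):
Code:
file installer.bin                 # what the file actually is (shell archive, ELF binary, etc.)
strings installer.bin | less       # scan for readable file names and paths embedded in it
sh installer.bin --help            # many self-extracting installers list extraction options here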
and I want to extract VAR15 from each line (which, unfortunately, can be in any column; columns are separated with commas, as it is a CSV file), or VAR15 together with LATn,LONn from each line. Is it possible to do this with awk, grep, or something else in Linux?
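A sketch with awk or GNU grep, assuming the token literally appears as VAR15 in the field ("data.csv" is a placeholder):
Code:
# print every comma-separated field containing VAR15, whatever column it is in
awk -F',' '{ for (i = 1; i <= NF; i++) if ($i ~ /VAR15/) print $i }' data.csv

# or pull the token straight out of each line
grep -o 'VAR15[^,]*' data.csv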
I am trying to untar a file taken from a Tru64 UNIX server on a RHEL 5 server, and I am getting the following error: "Archive contains obsolescent base-64 headers". What can be done to extract the contents of the tar file?
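That message is GNU tar's warning about pre-POSIX headers; often the archive still extracts despite it. If it does not, pax or bsdtar are worth a try (the archive name is a placeholder):
Code:
tar -xvf archive.tar          # GNU tar usually extracts anyway; the message is a warning
pax -rvf archive.tar          # alternative readers that cope with old header formats
bsdtar -xvf archive.tar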
I have downloaded the squid2.6.STABLE23.tar.gz file and I want to untar it. The file is located in /home. How can I untar the file so I can install the software?
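A minimal sketch (the extracted directory name is an assumption and may differ slightly):
Code:
cd /home
tar -xzvf squid2.6.STABLE23.tar.gz
cd squid-2.6.STABLE23
./configure && make          # typical source build; see the bundled INSTALL/QUICKSTART notes
make install                 # run this step as root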
File in question is [URL]..
Code:
meder@pc:~$ tar -xvjf wkhtmltopdf-0.10.0_rc2-static-i386.tar.bz2
bzip2: (stdin) is not a bzip2 file.
tar: Child returned status 2
tar: Error exit delayed from previous errors
I tried to unzip it as well, and I tried a slew of commands to no avail on my Debian box, which has no GUI. I downloaded this on my local desktop (Ubuntu) and was able to easily extract it with my mouse, so I'm not exactly sure what that extractor did differently...
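One guess: the file may not actually be bzip2-compressed despite its name, or the download may even be an HTML error page. Checking the real type first usually settles it:
Code:
file wkhtmltopdf-0.10.0_rc2-static-i386.tar.bz2
# if it reports gzip data:
tar -xvzf wkhtmltopdf-0.10.0_rc2-static-i386.tar.bz2
# if it reports LZMA/XZ data:
tar --lzma -xvf wkhtmltopdf-0.10.0_rc2-static-i386.tar.bz2
# if it reports HTML text, the download itself went wrong and needs to be fetched again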
I'm trying to extract the contents of a zip file, but I want to extract it to my own directory. I've tried -d with unzip, but that just puts the contents of the zip into that directory.
But I want to extract the contents of the first (root) directory in the zip if there is only one directory in the root of the zip; otherwise, just extract the files/folders in the root of the zip file (if there is more than one entry).
e.g. test.zip contains the following dir structure:
Code:
test.zip
  /app_v1/        <- The contents of this directory I want extracted to a dir of my choice
    - folder-1
    - folder-2
    - folder-3
    - folder-4
    - file1
    - file2
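A sketch of one way to do this: a hypothetical helper script that extracts into a temporary directory and then strips a single root directory if there is one (top-level dot files are not handled here):
Code:
#!/bin/bash
# usage: ./unzip-strip.sh test.zip /path/to/destination   (hypothetical helper script)
zip=$1
dest=$2

mkdir -p "$dest"
tmp=$(mktemp -d)
unzip -q "$zip" -d "$tmp"

# look at what sits at the top level of the extracted tree
entries=("$tmp"/*)
if [ ${#entries[@]} -eq 1 ] && [ -d "${entries[0]}" ]; then
    # exactly one top-level directory: move its contents, not the directory itself
    mv "${entries[0]}"/* "$dest"/
else
    # several top-level files/directories: move them all as they are
    mv "$tmp"/* "$dest"/
fi

rm -rf "$tmp"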
I just installed Ubuntu for the second time, first time was like 2 years ago and my pc was an oddball and some stuff was just not supported properly. Anyways.. I've got the latest build of Ubuntu installed, everything is working fine, just one problem.
I have many, many, many files that are archived in multiple .rar's, like .r01, .r02, etc. I have them all copied over to the hard drive, but cannot extract what is in them. Also, I have some of these .rar's archived in a single .rar, because Windows was being stupid and not letting me copy them to my external drive, as their names were too long for Windows to handle. Those .rar's will extract a couple of the .r01, etc. files, but not all of them.
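A sketch, assuming these are ordinary multi-volume RAR sets: the non-free unrar package tends to handle them better than unrar-free, and it only needs to be pointed at the first volume (the archive name is a placeholder):
Code:
sudo apt-get install unrar
unrar x files.rar            # first volume of the set; the .r00, .r01, ... parts are picked up automatically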
I can't seem to extract it with any archiver, and when I used unrar from the console, I get this:
Code:
shai@shai-desktop:~/Desktop$ unrar c1700-adventerprisek9-mz.124-15.T8.rar
unrar 0.0.1  Copyright (C) 2004  Ben Asselstine, Jeroen Dekkers
Extracting from /home/shai/Desktop/c1700-adventerprisek9-mz.124-15.T8.rar
Extracting c1700-adventerprisek9-mz.124-15.T8.image     Failed
1 Failed
You can see that it can open the archive but it can't get the file inside.
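unrar-free 0.0.1 cannot decode many RAR formats even when it can list the archive; the non-free unrar (or 7z with the p7zip-rar module) may succeed where it fails:
Code:
sudo apt-get install unrar p7zip-rar
unrar x c1700-adventerprisek9-mz.124-15.T8.rar
# or
7z x c1700-adventerprisek9-mz.124-15.T8.rar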
I have installed unrar-free, and I am trying to extract a file from a .rar archive (actually the file is split into several archives).
When I choose one of the .rar files and open it with Archive Manager, there are no errors and the file to be extracted shows in the Archive Manager window. I then choose to extract it, and after that I click on "show files". But when the folder supposedly containing the extracted file opens, the file isn't there.
Maybe relevant info: I am running latest stable version of Ubuntu, as the user that was created during the Ubuntu installation. The files are in my "Downloads" folder, and the Archive Manager window says that the .rar file is read-only.
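Running the extraction in a terminal instead of Archive Manager usually reveals the real error; a sketch with the non-free unrar and an explicit, writable target directory (the archive name is a placeholder):
Code:
cd ~/Downloads
mkdir -p ~/extracted
unrar x the-archive.part1.rar ~/extracted/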
I formatted a partition on a USB stick in ext2 format for Linux with GParted. I tried to extract a file onto the partition and received an error saying I did not have permission to do it.
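ext2 carries Unix ownership, and a freshly formatted partition belongs to root, so writing to it as a normal user fails until the ownership is changed once; a sketch (the mount point and user name are placeholders):
Code:
sudo chown -R yourusername:yourusername /media/usb-stick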
I've got a large .tar.gz file that I am trying to extract. I have had a look around at the problem, and it seems other people have had it, but I've tried their solutions and they haven't worked. The command I am using is: