I have tried for some time to write an .img file to a USB drive and an SD card to boot my netbook from. The .img file is from Moblin, and they suggest using Win32 Disk Imager to write the file to the USB drive. However, it hasn't worked for me, even when I use the SD card or other computers.
The easiest option for me would be to write the .img file to my SD card. I have searched for tutorials but haven't found a way to write the file.
I've read something about "dd" but don't understand the command-line side of it. If anyone knows an easy tutorial for it, that would be great! Even better would be a tip about another program/utility like Win32 Disk Imager that actually works.
I use Windows 7 on an HP Pavilion Elite and Windows XP on a Compaq Mini 110.
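For what it's worth, the dd route looks like this from a Linux live session (dd is a Unix tool, so this won't run as-is on Windows 7/XP). The image file name is a placeholder, and the device name /dev/sdb is an assumption: check lsblk first, because writing to the wrong device destroys its contents.

Code:
lsblk                                          # find the SD card, e.g. /dev/sdb
sudo dd if=moblin.img of=/dev/sdb bs=4M status=progress   # moblin.img is a placeholder name
sync                                           # flush buffers before removing the card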
When I run ls -l /etc/passwd, I get: -rw-r--r-- 1 root root /etc/passwd. When I log in as myself and run rm /etc/passwd, it asks: rm: remove write-protected file '/etc/passwd'? If I say yes, will it actually delete the passwd file?
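An aside that bears directly on the question: whether rm succeeds depends on write permission on the directory containing the file, not on the file's own mode bits; the write-protection prompt is only a courtesy. A quick way to see this on a typical system:

Code:
ls -ld /etc        # drwxr-xr-x root root: only root may add or remove entries here
# as a regular user, answering 'y' to the prompt then fails with:
#   rm: cannot remove '/etc/passwd': Permission denied
# as root, the file really would be deleted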
What could be the problem when Windows accesses a file from Ubuntu and gets Read Only, even though it has full permission to read, write, and execute the file? Accessing the file from Ubuntu to Ubuntu, there is no problem; only Windows has the problem.
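If the file is reaching Windows through a Samba share (an assumption; the post doesn't say how it's shared), the share definition itself can force read-only no matter what the file permissions are. A minimal writable share in /etc/samba/smb.conf might look like this, with the path as a placeholder:

Code:
[shared]
    # the path below is a placeholder
    path = /srv/shared
    read only = no
    create mask = 0664
    directory mask = 0775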
I want to backup my Ubuntu box and I found a couple very helpful articles that describe how to use tar for backup and restore. So far so good. I've dabbled in Lin/Unix in the past, mainly on work systems, but Linux on my personal PC is new to me.
When I ran the tar command as root (su -), I noticed several errors scrolling by in the window. It scrolled too fast for me to note the exact errors, but I did see "permission denied" a few times.
Is there some way that I can capture the output of the tar command to a file so I can review it for errors and/or permission denied statements?
Can I just add some arguments to tar?
tar cvpzf backup.tgz --exclude=/proc --exclude=/lost+found --exclude=/backup.tgz --exclude=/mnt --exclude=/sys /
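tar writes the verbose file list to stdout and error messages to stderr, so the two can be captured separately and reviewed afterwards; for example:

Code:
tar cvpzf backup.tgz --exclude=/proc --exclude=/lost+found \
    --exclude=/backup.tgz --exclude=/mnt --exclude=/sys / \
    > tar-files.log 2> tar-errors.log
grep -i 'permission denied' tar-errors.log   # review just the failures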
I have created a file newfile.txt using: touch newfile.txt. Now I want to write to that file from the terminal, i.e. whatever I type after the $ prompt should be written to the file. How can I do that?
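One minimal way is cat with output redirection; everything typed afterwards goes into the file until Ctrl-D is pressed at the start of a line:

Code:
cat > newfile.txt     # use >> instead of > to append rather than overwrite
anything typed here ends up in the file
(press Ctrl-D to finish)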
How can I install 10.04 over 7.04 (on a disk partitioned with XP, which I want to retain) WITHOUT overwriting XP and WITHOUT losing my data files stored in 7.04?
I am trying to change the write permissions on a file. On the screenshot you will see where I have underlined: it states I don't have owner rights to modify this file. How do I get owner permissions when this is my installation?
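From a terminal, ownership of a file can be taken back with chown; this assumes the file sits on a regular Linux filesystem and that sudo access is available. The path below is a placeholder:

Code:
sudo chown "$USER" /path/to/the/file   # hypothetical path
chmod u+w /path/to/the/file            # make sure the owner has write permission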
I'm writing a script/plugin for Nagios to test a WebLogic server. I redirect some output to a file and then read that file back to get some data, but I can't seem to write to that file from my script. This is the most important code:
[Code]...
* EDIT * When I execute this script through a local terminal (PuTTY), it works, but when I execute it from Nagios, it doesn't.
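A guess at the cause: Nagios executes plugins as its own user and from its own working directory, so relative paths and file permissions that work in an interactive shell often fail there. A sketch with hypothetical names:

Code:
# inside the plugin: write to an absolute path the nagios user can use
OUT=/tmp/weblogic_check.$$
your_command > "$OUT" 2>&1      # placeholder for the redirected command

# outside Nagios: reproduce the failure as the nagios user
sudo -u nagios /usr/local/nagios/libexec/check_weblogic.sh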
I'm trying to write a script to generate an HTML file (complete with formatting: echo "[random formatting]" >> index.html) for all the files in the given directory. So far, it works pretty well. HOWEVER, I want the listed files to be treated as links. I'm using awk to grab the part of the filename I want, but I don't know how to write this out, as it fails if there is more than one file. The HTML side would look something like this:
Code:
<li><a href="2010.05.29.html">May 29</a></li>

It all works fine up to the actual number of the day: fine with one file, fails with more than one. My code is this:

Code:
# Grabs all the files and puts them in a list with anchor text "Listed"
ls | find 2*.html | sed -e 's/^/<li><a href="/'
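A sketch of one way to make this work for any number of files, assuming GNU date and filenames of the form YYYY.MM.DD.html; each label like "May 29" is derived from the file name itself:

Code:
for f in 2*.html; do
    base=${f%.html}                             # e.g. 2010.05.29
    label=$(date -d "${base//./-}" '+%B %-d')   # e.g. "May 29" (GNU date)
    echo "<li><a href=\"$f\">$label</a></li>" >> index.html
done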
After the syslog facility rotates the logs weekly, Postfix cannot seem to write properly to the mail.log file. What I don't quite understand is that Postfix is still able to write the following error to the log file: ..."status=deferred (temporary failure. Command output: Can't open log file /var/log/mail.log: Permission denied)". It is my understanding that Postfix uses several different processes to write to log files, but I'm confused as to why it is able to write errors to the log but not able to write when sending/receiving mail. After I chmod 777 the mail.log file, Postfix slowly clears the queue and the mail is then received. Everything functions fine for another week, until the logs rotate again.
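A guess at the mechanism: logrotate recreates mail.log with an owner or mode that the delivery command (which apparently opens the file directly, unlike the syslog-routed messages) cannot write to. If so, telling logrotate to recreate the file with suitable permissions should stop the weekly breakage; a sketch, with the owner, group, and daemon names as assumptions:

Code:
/var/log/mail.log {
    weekly
    rotate 4
    # recreate the file writable by the group the delivery command runs under
    create 0664 syslog adm
    postrotate
        /etc/init.d/rsyslog reload > /dev/null 2>&1 || true
    endscript
}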
I want to keep a trace of the URLs I visit, so I use a command line like this:
tcpdump -ien1 -v -X 'tcp port 80' | sed -nl 's/^.0x[0-9a-f]{4}:.{43}(.)$/1/p' |perl break.pl |perl -pe 's/(GET|POST).(.*?).HTTP/1....Host:.([a-zA-Z._0-9-]*)../" BEGURL
[Code]....
I also tried redirecting stdout and stderr to /tmp/out, but it's still empty. The file has write access. I have no idea what it can be. Is there anything else besides stdout and stderr?
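One likely culprit, offered as a guess: tcpdump block-buffers stdout when it isn't writing to a terminal, so nothing reaches the end of a long pipeline (or a file) until the buffer fills. Its -l flag makes stdout line-buffered:

Code:
tcpdump -l -i en1 -v -X 'tcp port 80' 2>&1 | tee /tmp/out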
I want to write a shell script for the situation below.

Subject/situation: I have many users, say user1, user2, user3, user4, and so on, within my /home dir.

Within a user dir, say /home/user1, I have many unwanted files. These unwanted files start with the name core, e.g. core2324, core9789, core9079, etc. I need to delete them.

I want to write an automated script that can do this. How do I write a script that deletes these unwanted core files in all the user dirs?
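A minimal sketch using find; the pattern core[0-9]* matches names like core2324 while sparing other files that merely begin with "core" (loosen it to core* if that is really what's wanted):

Code:
#!/bin/bash
# list and delete core dump files under every home directory
find /home -type f -name 'core[0-9]*' -print -delete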
I have the following problem. I call a C++ program from a Java servlet using Runtime exec. The OS is Ubuntu and I use NetBeans 7.0 with the GlassFish 3.1 web server. The program executes, but it does not open and write into a specified file in a specified folder. The same C++ program compiled under Windows opens and writes this file. How can I solve this problem in Linux?
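Two usual suspects, as guesses: GlassFish runs the program as a different user than your interactive session, and its working directory is not what the C++ code assumes, so relative paths point elsewhere. A quick check, with the user name and path as assumptions:

Code:
# can the user running GlassFish create files in the target folder?
sudo -u glassfish touch /path/to/output/dir/testfile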
On Monday I had CentOS 5.3 with / and /home in ext3, with no problems (at least I don't remember any). But since I decided to use XFS (I have an SSD, and I use a 3D modeling application called Maya that could quite likely take advantage of XFS), I decided to create on my recently bought solid state disk a swap, a / partition formatted in XFS, and a /home partition in XFS too. Then I used a live CD to copy the contents from the old HDD to the new SSD, using find and rsync --> (cd /mnt/oldroot/ ; find . -xdev -print0 | rsync -avHx --exclude-from /mnt/rsync-filter . /mnt/newroot/)

Once done, I edited fstab and menu.lst, installed GRUB in the MBR, and had to use mkinitrd to load xfs.ko so the new system could boot. All worked fine, although I could only log in as root; my normal user could not log in, and an error appeared: "GDM could not write your authorization file. This could mean that you are out of disk space or that your home directory could not be opened for writing. In any case it is not possible to login. Contact system administrator." But I remember that before copying everything to the new SSD and using XFS, all was fine and I was able to log in with my user.

So, thinking that perhaps the copy done by rsync was not OK or something, I decided to reinstall CentOS, now on a partition on my SSD, and try another time, but without a /home partition or a normal user, only root. The SSD had a swap, a 32 GB XFS partition, and the / of the new install in ext3 (20 GB). Once CentOS 5.3 was installed, I updated only the glibc, yum, and Python packages (as the CentOS 5.4 release notes recommend, but without doing the final yum update). Then I updated only the kernel, to have an XFS-capable kernel, and then I rebooted (I would update everything once migrated to XFS).
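On the original GDM error, one thing worth checking, offered as a guess: if the copy left the home directory or the user's ~/.Xauthority owned by root (for example because UIDs differed between the two installs), GDM prints exactly that "could not write your authorization file" message. With the user name as a placeholder, run as root:

Code:
ls -ld /home/youruser /home/youruser/.Xauthority   # hypothetical user name
chown -R youruser:youruser /home/youruser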
Hardware: Sun T2000 with Solaris 10 5/09 U7, ZFS root and RAID (which is what Subversion writes to). Software: Subversion 1.6.12, Apache 2.2.11, db-4.2.52 (and all related dependencies, of course). Everything was fine until today: someone came over and they are getting this error when doing an import: svn: Can't write to file /DATA/* : File too large. After some testing it seems to do this on files larger than 2 GB in size, but after googling until I could not google anymore, I could only find people having this issue with Apache 2.0 or with an APR lower than 1.2 (mine is 1.3.3). Is there a file size limit inside Subversion?
I just found that I could perform write operations using a normal user account on a file system I mounted with the following command:
sudo mount -t ntfs /dev/sda1 /mnt/disk/
This is the corresponding entry in the output of "mount" command: /dev/sda1 on /mnt/disk type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)
As far as I remember, when using a normal user account I had to use "sudo" to perform any write operations (mkdir, rm, etc.) on a device mounted with "sudo". But now that seems to have changed.

Am I remembering wrong, or did a Karmic update change this setting? (I never manually changed user settings, except that I added a root user, but I never used it.)
OS: Karmic(up2dated) Kernel: Linux stephen-laptop 2.6.31-17-generic #54-Ubuntu SMP Thu Dec 10 16:20:31 UTC 2009 i686 GNU/Linux
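For context: that fuseblk entry means the volume is mounted through ntfs-3g, and with allow_other and no uid/gid/umask options ntfs-3g grants everyone full access, since NTFS carries no native Unix permissions. Write access can be restricted again with explicit options (the values below are examples):

Code:
sudo mount -t ntfs-3g -o uid=1000,gid=1000,umask=022 /dev/sda1 /mnt/disk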
I have apache2 running on my computer. I want to change the permissions for /var/www/ so that I can edit the files without a problem. Right now I can use the gksudo command, but I'd like to have all the files available when using an IDE like Eclipse. I've read in several places that Code: chmod 755 /var/www will do, but if I'm not mistaken that would give read access to anyone. I'm not in a production environment, so I'm not too worried about security, but I'd like to give everyone else as few permissions as possible. Would this be possible?
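One common arrangement, offered as a sketch rather than the one right answer: group-own the tree by www-data, join that group yourself, and grant the group (not the world) write access:

Code:
sudo adduser "$USER" www-data                      # join the web server's group
sudo chgrp -R www-data /var/www
sudo chmod -R g+w /var/www                         # group may write; others may not
sudo find /var/www -type d -exec chmod g+s {} \;   # new files inherit the group

A fresh login is needed before the new group membership takes effect.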
In the last few days I've started getting mail with this content:
Code:
/etc/cron.daily/logrotate:
error: bad top line in state file /var/lib/logrotate/status
error: could not read state file, will not attempt to write into it
run-parts: /etc/cron.daily/logrotate exited with return code 1
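If the state file really is corrupt, a commonly suggested fix (assuming nothing else is wrong) is to move it aside and let logrotate write a fresh one on its next run:

Code:
sudo mv /var/lib/logrotate/status /var/lib/logrotate/status.bad
sudo logrotate /etc/logrotate.conf    # recreates the state file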
I recently installed MATLAB and it keeps giving me this permission error, even after running the following command.
Code:
sudo chown -R ${USER}:${USER} ~/.matlab

Cannot write to preference file "matlab.prf" in "/home/"username"/.matlab/R2010a".
Check file permissions.
Cannot write to preference file "matlab.prf" in "/home/"username"/.matlab/R2010a".
Check file permissions.
The desktop configuration was not saved successfully
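Since the chown alone didn't help, the permission bits themselves may be the problem rather than ownership; a quick check and a possible fix:

Code:
ls -ld ~/.matlab ~/.matlab/R2010a
chmod -R u+rwX ~/.matlab    # owner read/write; execute added only on directories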
1. What can I use to read/write to my ext4 file system in Win7 x64?
2. I use Macbuntu. Is there any way to get a translucent top bar?
3. My computer seems to run hot while on Ubuntu. The fan speed seems increased. It goes back to normal on Windows, though.
I have a Western Digital "My Book" on my network which I have mounted with cifs.
If I go into it and vi a file, all is fine: I can write, save, and close. But when I open the file again, add to it, and then try to write it, I get the message:
"thefilename" E212: Can't open file for writing
The file is still owned by me and the permissions are -rw-rw-r--.
I don't understand why it works the first time and not the second. The same effect is observable when I save from another program to that location: the first save is fine, the second cannot be saved.
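A guess at the cause: when saving an existing file, vi/vim renames it to a backup and creates a replacement, and some CIFS servers reject that sequence even though creating a brand-new file works; many other programs save the same way. Two things worth trying, with the share names as placeholders:

Code:
echo 'set backupcopy=yes' >> ~/.vimrc   # have vim overwrite in place instead of rename+create

sudo mount -t cifs //mybook/share /mnt/mybook -o nobrl   # remount without byte-range locks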
I have a number of text files throughout my /home/pjs/Documents directory tree that have execute permissions set. Almost all of my file names have spaces in them. I am trying to write a shell script that will look at each file in my Documents directory, find the ones that have execute permissions set, and run the command chmod 644. Of course, I don't want the command run on the directories.
The following script *doesn't work*, but might serve to illustrate what I am trying to do:
#!/bin/bash
for x in "$(ls -R)"
do
    if [ -f "$x" ] && [ -x $x ]; then
        chmod 644 "$x"
    fi
done
I want each file and directory name to be placed, one by one, in the variable $x, and then tested with the "if" conditionals.
The first problem seems to be that, although the command "ls -R" does produce a complete list of the files and directories I need, they are not placed, one by one, in the variable x like I want them to be.
Also, I think I should use the shift command so that the option -R doesn't get included as one of the values of the variable $x, but I can't figure out where to put it.
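A sketch of a way that sidesteps parsing ls altogether and is safe with spaces in names: let find pick out regular files with any execute bit set and hand them straight to chmod (GNU find assumed for the -perm / syntax):

Code:
find /home/pjs/Documents -type f -perm /111 -exec chmod 644 {} +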
I'm running a simple backup-and-log script that is cronjobbed to run twice a day. Currently, when new data is added to the log, it's appended to the very bottom of the log file. However, I would like it printed at the very top of the log. The code is attached; I can't quote it here because I am a new user and the system thinks I have URLs in it, when they are just paths.
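For the prepending itself, a minimal sketch with placeholder names: write the new entry followed by the old contents to a temporary file, then move it into place:

Code:
printf '%s\n' "$NEW_ENTRY" | cat - /path/to/backup.log > /tmp/log.$$ \
    && mv /tmp/log.$$ /path/to/backup.log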