So this makes a CSV file with one column. I want to run it again, but rather than outputting to a new CSV I want to add the result to this one as the next column. For this example there will be 100 rows per column.
The first run will create the file:
[grassGIS code] > /home/gary/AVE_monte_carlo/rstats_AVE.csv
then add ',' after each value; the next run:
[grassGIS code]
should open the file /home/gary/AVE_monte_carlo/rstats_AVE.csv and append its output as the next column.
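One way that might work (a sketch; the temporary file names are placeholders, and I'm assuming each run writes one value per line) is to send the second run to a temporary file and join it onto the existing CSV with paste, which inserts the comma itself:
Code:
# write the second run's column to a temporary file
# (replace the first line with the actual GRASS command)
some_grass_command > /tmp/new_column.csv
# join it to the existing file as an extra comma-separated column
paste -d',' /home/gary/AVE_monte_carlo/rstats_AVE.csv /tmp/new_column.csv > /tmp/merged.csv
mv /tmp/merged.csv /home/gary/AVE_monte_carlo/rstats_AVE.csv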
Is it possible to write a ksh script in the spec file? The goal is that after I perform rpm -i my_rpm.rpm, a ksh script defined in the spec file does some installation and configuration, for example running another script and editing some files.
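RPM runs the %post scriptlet right after installation, and the -p option names an interpreter other than the default /bin/sh. A minimal sketch (the script path and the sed edit are placeholders):
Code:
%post -p /bin/ksh
# this block runs after 'rpm -i my_rpm.rpm'
/usr/local/bin/other_script              # placeholder for the script to run
sed -i 's/old/new/' /etc/myapp.conf      # placeholder config edit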
I have custom software that writes to a large, sensitive file when the user does something. I would like to make backup copies of the file that gets written to, but if I gzip the file at the same time someone is changing something, the backup will be corrupt because some of the data will be missing, as it is backed up while being written to.
a) Is there a way to detect whether a file is currently being accessed/written to? That way, if it is currently being accessed, I can just make the script wait until it is done and then finally back it up.
b) Instead of backing up the large file while it could still be written to, would it be better to make a copy of the file first, then gzip the copy? This idea comes from the fact that gzipping the original takes 5-10 seconds, whereas making a copy takes only 1-2 seconds. The less time, the less chance of corruption.
c) Is there any way to freeze a program or a file to stop it from being written to for a period of time?
Putting a, b, and c together, the best solution to my problem would be a script that first detects whether the file is being accessed. If not, it would freeze the file/program and make a quick copy of it. Once the copy is created, it would unfreeze the original file/program and then go about gzipping the copy.
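A minimal sketch of (a) plus (b), assuming the path below stands in for the real file: fuser exits 0 while any process has the file open, so this waits until the file is idle, takes a fast copy, and compresses the copy. Note there is still a small race, since the writer could reopen the file between the check and the cp.
Code:
#!/bin/sh
FILE=/path/to/large_file          # placeholder path
# (a) wait until no process has the file open
while fuser "$FILE" >/dev/null 2>&1; do
    sleep 1
done
# (b) fast copy first, then gzip the copy at leisure
cp "$FILE" "$FILE.snapshot"
gzip -9 "$FILE.snapshot"          # leaves large_file.snapshot.gz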
I just noticed on my Ubuntu machine (ext3 filesystem) that removing write permissions from a file does not keep root from writing to it. Is this a general rule of UNIX file permissions? Or specific to Ubuntu? Or a misconfiguration on my machine? Writing to the file fails (as expected) if I do this from my normal user account. Is this normal behavior? Is there a way to prevent root from accidentally writing to a file (preferably using normal filesystem mechanisms, not AppArmor, etc.)?
I understand that root has total control over the system and can, e.g., change the permissions on any file. My question is whether currently set permissions are enforced on code running as root; the idea is the root user preventing her/himself from accidentally writing to a file. I also understand that one should not be logged in as root for normal operations.
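Standard permission bits are indeed not enforced against root, but ext3 supports the immutable attribute, which even root must explicitly clear before writing. A short sketch (the file name is a placeholder):
Code:
chattr +i precious.conf        # set the immutable flag (root only)
echo test >> precious.conf     # now fails, even for root
chattr -i precious.conf        # root can lift the flag again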
I am running Intrepid Ibex 64-bit, and I have recently ripped a CD with EAC on one of my Windows boxes; it produced a cue sheet and one single WAV file for the entire CD. My question is this: is it possible to take the WAV file and burn an ISO image that acts exactly like the original CD? In other words, take the single WAV file, split it back into its individual tracks using the cue sheet, and then read the resulting image from a program like Sound Juicer, so it can write all the metadata and convert it with my own preferences. I know this is a very strange predicament, but it's one that would save me a world of hassle.
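Two hedged options, assuming the pair is named album.wav/album.cue and that the cuetools and shntool packages are installed: split the WAV back into per-track files at the cue-sheet breakpoints, or burn the cue/wav pair straight back to an audio disc that a ripper should see like the original:
Code:
# split the single WAV into tracks at the cue-sheet breakpoints
cuebreakpoints album.cue | shnsplit -o wav album.wav
# or burn the pair back to a disc in one session
wodim dev=/dev/sr0 -v -dao cuefile=album.cue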
I am trying to write into a single log file from 3 different applications in C. In one of the applications I am forking out 5 instances. I would like to know the best way to open and close the log file these applications write to. Should I open and close it at the start and end of each application, or is it OK performance-wise if I open and close it inside the log function, which is called for every write?
I recently upgraded my file/media server to Fedora 11. After doing so, I can no longer copy large files to the server. The files begin to transfer, but typically after about 1 GB has transferred, the transfer stalls and ultimately fails with the message:
"Error writing to file: Input/output error"
I've run out of ideas as to what could cause this problem. I have tried the following:
1. Different NFS versions: NFSv3 and NFSv4.
2. Copying the files to different physical drives on the server.
3. Copying the files from different physical drives on the client.
4. Different rsize and wsize block sizes when mounting the NFS share.
5. Copying the files via a different protocol, SSH in this case. The file transfers are always successful when I use SSH.
Regardless of what I do, the result is the same: the file transfers always fail after approximately 1 GB.
Some other notes.
1. Both the client and the server are running Fedora 11 kernel 2.6.29.5-191.fc11.x86_64
I am out of ideas. Has anyone else experienced something similar?
Is it possible to forbid more than one user from opening the same file in read-write mode? In Windows, when you open a file that another user is using, there's a warning and you have to open it in read-only mode.
I installed Ubuntu 10.04 desktop edition on 3 PCs (there is no server-client architecture) and installed Samba (and smbfs).
In smb.conf I put these lines:
[name]
comment = ...
path = /...
guest ok = yes
read only = no
create mask = 0777
directory mask = 0777
The computers that access that directory run (on boot, with root privileges):
mount -t smbfs -o username="user",password="pass" //192.168.0.12/name /mnt/cartelladimontaggio
But if two users access the same file, both are allowed to write to it, so changes made by one are lost when the other saves.
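Unix file locking is advisory, so the server cannot force read-only opens the way Windows does unless the applications themselves request locks. Samba does implement Windows-style share modes and locking for its clients, though; a hedged sketch of share options that may help (they only take effect when the client applications actually request locks or share modes):
Code:
[name]
   locking = yes
   strict locking = yes
   share modes = yes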
I have 3 C files (one of them forks out 5 instances), all writing to one log file. To avoid the confusion of opening and closing in each application or instance, and the risk of never closing the file, I decided to open and close the log file inside the log function on each write.
So what I currently do is fopen, flock, fwrite, unlock, fclose for each write. All the log messages from all the files get written fine and there are no errors, but I see a performance hit. The applications talk to each other using shared memory (SHM). When I set a timer and count the messages, I get, say, X messages, and each time I remove or add a log call the count changes. With a 1-second or 5-second timer the difference is not very big, a few hundred, but over a longer period every added log call decreases the count by about 1000 messages. So I want to know an efficient way of implementing the custom log across the applications.
I am trying to create a floppy boot disk, as my computer doesn't support booting from CD. I have downloaded ntrawrite and placed it in a folder with the SBM file, and followed these instructions [URL], which I found here.
When I type in the command I get the response "ntawrite is not recognised as an internal or external command".
Is the problem that I'm not opening cmd in the right directory? I don't know how to change this.
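If that is the issue, cd inside the Command Prompt fixes it; a sketch, assuming the path below stands in for wherever ntrawrite.exe actually lives. Also note that the quoted error says "ntawrite", so it is worth double-checking the spelling of the command itself:
Code:
rem move cmd's working directory to the folder holding ntrawrite.exe
cd /d C:\downloads\sbm
ntrawrite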
I am trying to read a file character by character and write each character to another file. In the process, I am unable to read and write whitespace successfully to the new file: the script reads the whitespace, but on writing, the whitespace is lost. The relevant section of the code is given below. Please advise how I can read and retain the whitespace while writing to the new file.
Code:
if [ -s f_test.txt ] && [ -f f_test.txt ]; then
    echo "File Exists !!"
    # IFS= keeps read from stripping spaces and tabs; -r preserves backslashes
    while IFS= read -r -n1 char; do
        # read returns an empty $char when it consumes a newline
        [ -z "$char" ] && char=$'\n'
        printf '%s' "$char" >> f_out.txt   # f_out.txt is a placeholder output file
    done < f_test.txt
fi
It is about the program sha1sum, which creates SHA-1 hashes. As you probably know, SHA-1 hashes are 20 bytes long (which sha1sum prints as 40 hex characters). So when I just type:
Code: sha1sum myfile
it produces an output of
Code: (some20byte) myfile
just as it should. Now I want to store the 20-byte hash in another file, so I use this command:
Code: sha1sum myfile | awk "{print $1}" >> myhash
Unfortunately I'm not familiar with awk, but this should cut off the end of the sha1sum output, which is the name of the file again. The problem here is: the newly created file myhash has a size of 41 bytes, and printing it out I can see that it is not the original hash (I wrote a little program to print it bytewise).
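Two separate things seem to be going on here, as far as I can tell. First, inside double quotes the shell expands $1 itself before awk ever sees it, so the awk program should be in single quotes. Second, sha1sum prints the digest as 40 hexadecimal characters rather than 20 raw bytes, so 41 bytes (40 hex digits plus a trailing newline) is exactly what a correct run produces; to get the raw bytes you have to decode the hex. A sketch:
Code:
# single quotes keep the shell from touching $1
sha1sum myfile | awk '{print $1}' >> myhash
# to store the digest as raw 20-byte binary instead of hex:
sha1sum myfile | awk '{print $1}' | xxd -r -p >> myhash.bin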
My program needs to monitor a folder to know which file under the folder is being opened/created for writing. I add the folder to the watch list using inotify_add_watch; when a file, say 'AA', is created, I get the event through the read API call. But the inotify_event only carries the file name 'AA' and an event mask; these parameters can't tell me how (by which process) 'AA' was created/opened. So I have to scan the /proc folder to find out how 'AA' was created/opened. I don't think this is an efficient way, especially if lots of files are opened/created in a short time span.
There is Archive::Zip, which I think I can use with Perl 5.10, but I don't know how. I don't want to read or write any files, just zip something in memory, with best compression, like:
$text = "this is a test"; $zippedtext = &Zip($text); sub Zip {
I am working on this thread: [URL]. Is it better to open the file every time I need to write to it, or should I keep the file open the whole time and, when I am done with the script, close it and sendmail it out?
Or, I just thought of this: I could keep concatenating to a string and just sendmail it when done.
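The string approach is simple as long as the output fits comfortably in memory; a sketch, with the work loop, subject, and recipient as placeholders:
Code:
#!/bin/bash
body=""
for f in /some/dir/*; do              # placeholder work loop
    body+="processed $f"$'\n'         # append to the string instead of a file
done
printf '%s' "$body" | mail -s "report" admin@example.com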
I have installed bluez libs, utils and all the requirements for Bluetooth communication, and I am able to scan other Bluetooth devices around. Can somebody suggest how to start writing code for file transfer over Bluetooth? Or can I get any packages online that do that for me? I am using RHEL5 and x86_32.
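Before writing code against the bluez API, it may be worth trying the existing OBEX tooling; a hedged sketch, if I remember the obexftp flags correctly (the device address and channel are placeholders taken from the browse output):
Code:
# list the services (and RFCOMM channels) the remote device offers
sdptool browse 00:11:22:33:44:55
# push a file over OBEX Object Push on the channel found above
obexftp -b 00:11:22:33:44:55 -B 9 -p myfile.txt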
I have a relatively common problem, but I can't seem to identify its source. I have a Samba server on my LAN with a few shares mapped as network drives in Windows XP (as Y:) and mounted as CIFS in Linux (as /y). The problem is that every time I save a file (from either Windows XP or Linux) on the mapped drive / mounted folder, our IDEs alert us that the file changed right after the save. I am running Samba 3.3.2.
I'm using Ubuntu 8.10, which is already installed. Recently I downloaded the ISO file of Ubuntu 10.04. Is there any way to install from the ISO file I've downloaded without writing it to a CD?
When trying to configure via nvidia-settings as root (sudo) and then saving to the config file, I get 'Unable to open X config file '/etc/X11/xorg.conf' for writing.' in a message box; below is what I get on the terminal:
Code:
Traceback (most recent call last):
  File "/usr/share/screen-resolution-extra/nvidia-polkit.py", line 75, in <module>
    operation_status = main(options)
  File "/usr/share/screen-resolution-extra/nvidia-polkit.py", line 51, in main
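One common cause is simply that /etc/X11/xorg.conf does not exist yet on recent releases; a hedged workaround is to generate one first and then rerun the tool as root:
Code:
sudo nvidia-xconfig      # writes an initial /etc/X11/xorg.conf
sudo nvidia-settings     # then save the configuration again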
I'm new to UNIX scripting and I'm stuck with the following. I have an Oracle SQL script that takes three parameters:
1. File name
2. File path
3. File creation date
Under UNIX I have a folder where files will be placed frequently, and I need to upload those files to Oracle. What I need is a UNIX script that does the following:
Loop through the directory "/home/applmgr/snktmp"
Pick only the files
Pass each file name to parameter &1
[code]....
Is the above possible? I already know how to call the Oracle script from UNIX; I'm only stuck on writing the UNIX part that lists the file attributes (name, path, date), stores them in parameters, and loops until the last file in the directory. If the above is not possible, then how can I create the below from the command line?
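A sketch of the loop, assuming the SQL script is invoked through sqlplus and that the login string and script name below are placeholders. One caveat: standard Unix filesystems don't record a creation date, so the file's modification time stands in for &3:
Code:
#!/bin/sh
for f in /home/applmgr/snktmp/*; do
    [ -f "$f" ] || continue                  # pick only regular files
    name=$(basename "$f")
    path=$(dirname "$f")
    fdate=$(date -r "$f" '+%Y-%m-%d')        # modification time, not creation
    sqlplus -s scott/tiger@orcl @upload.sql "$name" "$path" "$fdate"
done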
On a Linux CD/DVD there are compressed filesystem images for the live version, for KDE or GNOME for example. They have no extension, but they are clearly image files (compressed filesystem images for the live version, used before installation).
I was wondering: how do I mount these compressed filesystem images after I copy the ISO content of the CD/DVD onto my system? I want to edit some files or packages and make some changes, like customizing a live version of GNOME, for example. (I know you might be tempted to tell me to use KIWI etc. to customize.) But I want to be able to mount the compressed filesystem image, then edit it for reading and writing while it is in a subdirectory of its own. I want to open it! Is there a way to do this? These types of files have no extension.
If I can open this compressed filesystem image and edit it for read & write before I roll it back again, and if and when I succeed, what should I watch out for? Will the same compressed file image, slightly modified, work again?
PS: the same question could be translated or extended as: how do I use the unionfs/squashfs programs on the command line to mount these extensionless image files in read & write mode?
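These live images are usually squashfs, which is read-only once built, so the usual workflow is: identify the file, mount it read-only or unpack it, edit the unpacked tree, and rebuild. A sketch to run as root, with the image name as a placeholder:
Code:
file filesystem.img                                    # should report "Squashfs filesystem"
mount -t squashfs -o loop filesystem.img /mnt/squash   # read-only view
unsquashfs -d /tmp/edit filesystem.img                 # unpack for editing
# ... edit files under /tmp/edit ...
mksquashfs /tmp/edit filesystem.new.img                # rebuild the image
A rebuilt image generally still works as long as it keeps the file name, location, and compression options the live system expects.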
My VPN behaves oddly sometimes, and I have to restart it often. I wanted to write a script that does that for me. It doesn't have to be anything fancy, just a shortcut for the commands I have to type into the terminal. More specifically: it will look at the running processes; if it finds a running vpnc process, it will kill it, and then it will start vpnc. I've written bash scripts of similar complexity, but now I don't have a bash, only an ash. Until now, the only difference I have noticed is that far fewer commands are available, but then, I don't use it very often. So I have some questions.
Is writing ash scripts different than writing bash scripts?
Is there something specific to consider when doing it?
When the script is ready, how can I deploy it? For bash, I just put the executable file under /usr/lib and run it by typing the file name into the command line, will this work with ash?
Are there any special pitfalls to watch out for in the script I want to write? I think that the killing process part may get hairy, if I write something that kills the wrong process, but even then running the script shouldn't break anything permanently, right?
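For what it's worth, the restart logic itself can stay POSIX, so the same file should run under both ash and bash; a sketch, assuming the process really is named vpnc and that your ash/busybox build provides pgrep and pkill:
Code:
#!/bin/sh
if pgrep vpnc >/dev/null 2>&1; then
    pkill vpnc          # matches by process name, so be sure it's unique
    sleep 1             # give the old process a moment to exit
fi
vpnc
Marking it executable (chmod +x) and putting it in a directory on $PATH, such as /usr/local/bin, makes it callable by name from either shell.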