These files are encoded in codepage WIN1250 and I want to convert them to UTF-8 using this script (invoked as 'conv *'):
Code:
#!/bin/bash
for file in $1; do
    if [ -f $file ]; then
        [code]....
However, it only renames the first file, and only when its name contains no spaces. How should I modify it to accept wildcards and to process files with spaces in their names?
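A minimal sketch of the usual fix: loop over "$@" instead of $1 and quote every expansion so names with spaces survive. The iconv body is an assumption standing in for the elided part of the script:

Code:
#!/bin/bash
# invoked as: conv *
for file in "$@"; do                     # "$@" keeps each argument intact
    if [ -f "$file" ]; then
        # convert in place via a temporary file
        iconv -f WINDOWS-1250 -t UTF-8 "$file" > "$file.utf8" &&
            mv -- "$file.utf8" "$file"
    fi
done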
I am running Ubuntu 10.10 64 bit to do testing for a web site. I have successfully added a .htaccess file to the production site to process .html files as PHP files, but cannot get my localhost to process the files the same way.
Addition to apache2.conf:
Code:
<Directory "/home/*/public_html/*">
    AllowOverride All
    Order Deny,Allow
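A minimal sketch for the localhost side, assuming Ubuntu's stock Apache with mod_php and a site under ~/public_html (both the path and the handler name are assumptions; adjust them to your install):

Code:
# map .html onto the PHP handler in the per-directory .htaccess,
# then reload Apache so the Directory/AllowOverride change is seen
cat >> ~/public_html/.htaccess <<'EOF'
AddHandler application/x-httpd-php .html
EOF
sudo /etc/init.d/apache2 reload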
I'm completely new to scripting and I'm trying to figure out how to write a script that will get a list of all the files in a directory, down through any subdirectories. When I have the list, I want to open each file in vi and change the fileformat. So far all I have figured out is that vi can do the batch processing and that "ls -R" gets me the recursive file list. I'm still pretty clueless about how to do the batch processing with the vi editor. I think I'm supposed to use Ex mode, but I don't know how to get the arguments from the file list into the editor so they can be processed. If it matters, the files were all written in a Windows editor and have the MS carriage returns, so I want to run a :set ff=unix command on all the files without having to go into each file manually; there are over 300 files that need updating.
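A minimal sketch of that Ex-mode batch, assuming vim is installed; it swaps the ls -R idea for find so names with spaces survive:

Code:
#!/bin/bash
# Convert every regular file below the current directory to unix line
# endings with vim's silent Ex mode. Narrow the filter (e.g. with
# -name '*.txt') if the tree holds files that should be left alone.
find . -type f -exec vim -e -s -c 'set fileformat=unix' -c 'wq' {} \;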
I scanned hundreds of pages in grayscale and would like to batch-process them to B&W. I can do this in the GIMP GUI: it gets rid of all the gray shading from reflections off the paper when scanned, so you get crisp white backgrounds with black text and diagrams. I would like to do this to the entire directory at one time, as it would be quite a lot of effort to open the files and do them one by one.
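One way to batch this outside the GIMP GUI is ImageMagick's threshold filter, named here as a stand-in technique; a minimal sketch, assuming the scans are PNGs in the current directory (tune the 75% until the shading disappears without eating thin lines):

Code:
mkdir -p bw
for f in *.png; do
    # force each grayscale scan to pure black and white
    convert "$f" -threshold 75% "bw/$f"
done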
We currently have access to a VPS where we are running a small game server on Ubuntu. The problem is that it is a multi-user environment, so when one person restarts the server process, all files it creates are owned by that user's name and group. I have created a group called 'game' and added both users to it, but I need to know how to make all files in the game server's directory r/w/x for the group 'game'. Currently I have a script that chowns and chmods all files recursively on startup, but I'd prefer not having to do this.
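A sketch of the usual cure, assuming the server lives in /srv/gameserver (substitute your real path): the setgid bit makes new files inherit the 'game' group, and a default ACL keeps them group-writable, so the startup chown/chmod loop becomes unnecessary.

Code:
chgrp -R game /srv/gameserver
chmod -R g+rwX /srv/gameserver                      # rwx on dirs, rw on files
find /srv/gameserver -type d -exec chmod g+s {} \;  # new files inherit the group
setfacl -R -d -m g:game:rwX /srv/gameserver         # default ACL for new files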
Are there any solutions by which Linux-based web server frameworks (like Zend or any other) can process data inside Microsoft Excel spreadsheets (*.xls)?
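One server-side route, sketched here with Gnumeric's command-line converter (assumes the gnumeric package is installed and report.xls is a placeholder name): convert the spreadsheet to CSV and let the framework parse that. PHP-side libraries (PHPExcel was the common choice at the time) are another route.

Code:
ssconvert report.xls report.csv   # .xls in, plain CSV out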
I have some random files in a folder and I want to rename all of them in one batch. I have a text file that contains the Currentname of every file in the folder, as well as a text file with the Newname of each file. I want to replace Currentnames with Newnames.
For example, here are the names of the files in the folder: 1.mp4 2.mp4 3.mp4
I have a text file with the Currentname of all the files in the folder: 1.mp4 2.mp4 3.mp4
I have a text file with the proper Newname of the file: a.mp4 b.mp4 c.mp4
I want to rename each Currentname to its Newname, so that when I go to the folder the names of the files are: a.mp4 b.mp4 c.mp4
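A minimal sketch, assuming the two lists are current.txt and new.txt (placeholder names), one filename per line, in matching order:

Code:
#!/bin/bash
# pair line N of current.txt with line N of new.txt and rename
paste current.txt new.txt | while IFS=$'\t' read -r old new; do
    mv -- "$old" "$new"
done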
I was just wondering if there is a software package that will allow my Linux server to process video files (mainly AVI and MKV) so that they can be streamed over the network or the Internet. I obviously wouldn't be able to stream files between 600MB and 8GB over the Internet and have them play smoothly. My plan is to set something up so that I can stream my videos over the Internet when I am not home. Can anyone point me in the right direction?
Or maybe there's software that can break the video file into "parts" and only send out blocks at specific intervals?
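That "parts" idea is roughly what segmented HTTP streaming does. A sketch using ffmpeg's segment muxer (movie.avi is a placeholder, and a reasonably recent ffmpeg is assumed):

Code:
# copy the streams unchanged into ~10-second .ts chunks that a web
# player can fetch one at a time
ffmpeg -i movie.avi -c copy -f segment -segment_time 10 part%03d.ts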
I have all of these "archival" email messages and attachments. Most are in Mozilla Thunderbird format, others in Evolution format; some are Windows, most are Linux. Where can I find software that will help me format, organize, store, and retrieve email messages for long-term archives? I'd like to collect each thread together with any of the relevant attachments. I'd like something that is searchable, and something that is "pretty print-able" -- PDF or similar -- should I want to share a historical thread with others. I suspect there are commercial-grade, "compliance oriented" application suites for message and document storage.
I need to write a script (possibly awk) that is able to process 2 files, but I am very new to awk and I have trouble working out how to process 2 files at the same time. The first input file is samples.txt, with the format: time_instant measure
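The standard awk idiom for two files is an FNR == NR guard: while reading the first file, build a lookup table, then switch behavior for the second. A minimal sketch, assuming a hypothetical second file events.txt keyed by the same time_instant column (the join logic is a guess, since the rest of the post is cut off):

Code:
awk 'FNR == NR { measure[$1] = $2; next }        # 1st file: remember each measure
     $1 in measure { print $1, measure[$1], $2 } # 2nd file: join on time_instant
' samples.txt events.txt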
Have used 10.04 for almost a year now, and the 'upgrade' now beckons. How can I ensure that my programs and Download files remain intact following the process? I already have 11.04 as an .iso on CD, but am unsure how to proceed.
I just upgraded from F12 to F13 and the result is a mess. From F11 to F12, I used preupgrade and it worked flawlessly. This time, the process ended on a blank screen. So, I downloaded the DVD, and I suppose the files downloaded by preupgrade were used, as there were almost no connections to the net during the upgrade process. The problem now is that apparently almost all the FC12 files are still on my system:
Of course, as I said, the previous upgrade went flawlessly and I didn't check what happened on the system with the rpm and locate commands, but I doubt it looked that bad. Seeing what kind of mess my system was turned into, I now worry about the next upgrade.
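For what it's worth, the leftovers are easy to enumerate: every package still carrying the old disttag shows up with a query like this.

Code:
rpm -qa | grep '\.fc12' | sort    # packages that never got upgraded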
I'm rather new to Linux, and I have a dedicated server. I know how to browse, install, remove, etc., all the basics needed to use it. I've installed flvtool2, mencoder and ffmpeg, and at the moment I'm converting AVI files into FLV, then passing metadata using yamdi.
However, this process is very time-consuming, as I'm converting loads of AVI files at a time. I'm looking for a script, or a way to execute one command, which will convert all files in the directory I specify and then run those converted files through yamdi. I'm guessing it would be some sort of loop that changes for each file?
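A minimal sketch of that loop, assuming the directory comes in as the first argument and that your existing ffmpeg flags replace the bare conversion shown here:

Code:
#!/bin/bash
# usage: ./convert_all.sh /path/to/avis
for avi in "$1"/*.avi; do
    [ -f "$avi" ] || continue
    flv="${avi%.avi}.flv"
    ffmpeg -i "$avi" "$flv"                      # convert AVI -> FLV
    yamdi -i "$flv" -o "${flv%.flv}-meta.flv"    # inject FLV metadata
done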
I would like to back up important files (totaling about 400GB) on my ext4 RAID 5 array to an ext4 external hard drive over USB (the external drive is mounted at /mnt). In the future I'd like to automate the process using rsync and cron, so for now I'm using rsync to transfer the files. My problem is that using the rsync command like this: # rsync -Pr "/dir1" "/dir2" "/dir3" "/dir4" /mnt
rsync shows me the checks and transfers for a while and then throws an I/O error (wish I had a screenshot to show, but I don't). When I ls /mnt I get a similar I/O error. I then check /dev for the drive and find that it no longer shows up. Originally the partition was /dev/sdc1. I tried unplugging the USB at this point, plugging it back in and mounting the drive back to /mnt; however, it has now been assigned to (you guessed it) /dev/sdd1. I get the drive mounted and try the original rsync command again, hoping the first error was a fluke or some kind of one-time drive fart. This time it makes it quite a bit further and then throws the exact same error. Am I doing something terribly wrong here? As I said, I'm very new to bash, so maybe I'm making some absolutely moronic newbie mistake.
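One diagnostic worth running the moment the drive vanishes: the kernel log usually records why the device dropped (USB disconnects or resets, cable/power trouble, or filesystem errors).

Code:
dmesg | tail -n 50    # look for usb disconnect or I/O error lines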
I'm writing a C++ application and need to work with process substitution in the Bash shell. I'm trying to find a way to validate the paths passed as arguments to my program, some of which point to FIFO files created by process substitution.
Is there a shell (or C++) way that I can check if the system creates these files in /dev/fd or if they are created somewhere else?
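A quick shell check: ask bash to expand a process substitution and print the path it hands out; on Linux these are typically /dev/fd entries, so a C++ validator can test whether the path sits under /dev/fd before treating it as a regular file.

Code:
echo <(true)    # on Linux this usually prints something like /dev/fd/63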
I have some files with .sh extensions that run some software. Now, how do I stop those files once they are running? I know we run the command ./start_tomcat.sh to start Apache. Is there any command to stop that file/process, or do I just kill the process to stop it?
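There is no generic "stop" counterpart to running a script; either the package ships its own stop script or you kill the process the script started. A sketch, assuming Tomcat's standard layout (shutdown.sh sits next to the startup script; the pkill pattern matches Tomcat's usual java command line):

Code:
./shutdown.sh                    # Tomcat's own stop script, if your bundle has one
pkill -f org.apache.catalina     # otherwise match and kill the Tomcat java process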
I have an Ubuntu server on which a file is dumped every hour, then a new file for the next hour, and the process continues. If there is any problem that stops the creation of files, empty files are created every minute until the process is killed and started again. I need help writing a shell script to check whether empty files are being created, and if so, kill the process and start it again.
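A minimal sketch, meant to be run from cron every few minutes; the dump directory, the process name, and the restart command are all placeholders for your real ones:

Code:
#!/bin/bash
# if any file modified in the last few minutes is empty, the writer is
# presumed wedged: kill it and start it again
if find /var/dumps -type f -mmin -5 -empty | grep -q .; then
    pkill -x dumper
    /usr/local/bin/start_dumper.sh
fi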
I need to process billions of small files using bash shell commands within a limited memory size (256MB). If any of those files contains certain "keywords", the file should be removed. I tried this command:
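Since the original command is elided here, a sketch of the usual memory-safe pattern: stream the names with find instead of a shell glob (a wildcard over billions of files overflows the argument list) and let grep -l pick the files to delete. "keyword" and /data are placeholders.

Code:
find /data -type f -print0 \
    | xargs -0 grep -l --null -- "keyword" \
    | xargs -0 rm -f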
I am going through a multi-step process to produce output files, which involves 25,000 greps at one stage. While I do achieve the desired result, I am wondering whether the process could be improved (sped up and/or decluttered). Input 1 is a set of dated files called ids<yyyy><mm>, containing numeric IDs, one per line, 280,000 lines in total:
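Without the elided details it is hard to be specific, but the usual cure for thousands of greps is a single pass with a pattern file; a sketch with placeholder names:

Code:
# one grep with 280,000 fixed-string patterns, instead of 25,000 invocations
grep -F -f ids201104 input.txt > matched.txt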
I am installing Ubuntu on my old (2003) Dell D800 laptop with a Celeron processor, using a CD (burned at home). The installation process has been hanging for about 24 hours now; it says "copying files..." with progress at 63%. The progress has not changed for about 10 hours.
I have a single hard drive on the machine (... noticed issues with dual drives on this forum, so thought I'd mention).
I have a high-priority service that I start with sudo nice -n -10 process. The process does not need superuser rights, though, except for the priority elevation; but nice requires superuser privileges to raise priority.
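Two common workarounds, with gameuser and the service path as placeholders: start elevated and immediately drop back to the unprivileged user, or grant that user a nice floor via pam_limits so sudo is not needed at all.

Code:
# option 1: elevate, then shed root before the service runs
sudo nice -n -10 sudo -u gameuser /usr/local/bin/myservice

# option 2: in /etc/security/limits.conf, allow the user to raise
# priority itself (then a plain "nice -n -10 myservice" works):
#   gameuser  -  nice  -10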
Description of what the code does, or what I intended it to do:
1. Created a child process from the parent process using 'fork()'.
2. Sent the signal 'SIGALRM' from the child process to the parent process using the 'sigqueue' function. (The third parameter of 'sigqueue' carries the message the child process wants to send to the parent; 'msg' is a structure instance containing a) the pid of the child and b) a string.)
5. Printed the 'msg' sent by the child process inside the signal handler function 'sig_action_function' of the parent process. I am getting some junk value when this line is executed:
Code:
/* a pointer queued with sigqueue() is only valid inside the sending
   process; to share the pid, send it by value in si_value.sival_int */
printf("%d ", msg->cpid);
I expected to get the pid of the child process, which the child sent to the parent through the signal.
As we all know, the process scheduler does process scheduling, and it is a process as well. I was just wondering: if this is so, shouldn't the "process scheduler" process be part of the process queue as well?
So if there are 5 processes in the process queue and the process scheduler is administering them, then, since it is also a process, once it puts a process into the RUN state it should itself go into the queue, because at any instant only one process can execute on a processor. This is quite confusing for me. Please help me out. I tried to search on this but could not find any relevant topics.
I have a shell script to identify whether a process is running or not. If the process is not running, then I execute another script file to run my application. Below is my script, saved as monitorprocess.sh:
Code:
#!/bin/bash
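Since the rest of the script is cut off above, here is a minimal sketch of such a monitor; "myapp" and the launch script are placeholders for the real names:

Code:
#!/bin/bash
# relaunch the application whenever its process is not found
if ! pgrep -x myapp > /dev/null; then
    /opt/myapp/startapp.sh &
fi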