I've written a program for a class that my professor will be testing in various low-memory environments to see how it behaves when it runs out of memory. Is there a way I can simulate execution in a low-memory environment without creating a virtual machine?
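One lightweight way to simulate this, assuming a bash shell and a program called ./myprog (a hypothetical name), is the ulimit builtin, which caps the memory available to the shell and everything it launches:

Code:
# Cap virtual memory at 10 MB (the value is in kilobytes), then run the
# program; allocations will start failing once the cap is reached.
# The subshell keeps the limit from sticking to your login shell.
( ulimit -v 10240; ./myprog )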
I am attempting to learn how to write bash scripts. I want one that asks for a specific key: for example, it asks a question, and then if the answer is y for yes or n for no, it does a specific thing. I'm thinking it would be something like:
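Something along these lines should work; a minimal sketch using bash's read builtin (the prompt text and the actions are placeholders):

Code:
#!/bin/bash
# -n 1 returns after a single keypress; -p prints the prompt.
read -n 1 -p "Proceed? [y/n] " answer
echo    # move to a fresh line after the keypress
case "$answer" in
    [Yy]) echo "doing the yes thing" ;;
    [Nn]) echo "doing the no thing" ;;
    *)    echo "unrecognised key: $answer" ;;
esac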
I have a log file that contains information like this:
----------------------------
r11141 | prasath-palani | 2010-12-23 16:21:24 +0530 (Thu, 23 Dec 2010) | 1 line
Changed paths:
   M /projects/
   M /projects/
What I need is to copy the data between the "---" separators into separate files; e.g., the first set of data between the "---" lines should go into one file and the next set into another.
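Assuming the separator really is a line consisting only of dashes and the log is called svn.log (a hypothetical name), awk can split it into part.0, part.1, and so on:

Code:
# Each run of dashes bumps the output file number; everything else is
# written to the current part file.
awk '/^-+$/ { n++; next } { print > ("part." n) }' n=0 svn.log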
I am using Sphinx search on my web server, and it quits after a certain amount of time, leaving my search page broken. Here is a bash script that I want to run every 10 minutes via cron:
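For a watchdog of that shape, something like the following is typical; the searchd binary path and config location are assumptions to adapt to your install:

Code:
#!/bin/bash
# Restart Sphinx's searchd if it is no longer running.
if ! pgrep -x searchd > /dev/null; then
    /usr/local/sphinx/bin/searchd --config /usr/local/sphinx/etc/sphinx.conf
fi

# crontab entry to run it every 10 minutes:
# */10 * * * * /usr/local/bin/check_searchd.sh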
This is my sample code in the /etc/httpd/conf.d/applications.conf file. Currently we create every new subdomain manually. I want to automate this process through a bash script; how is that possible?
In this code, only the content I marked in bold changes for each subdomain. When doing this manually, errors creep in most of the time, which is why I need to automate the process.
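A sketch of how the generation could look, with the template reduced to the fields that change per subdomain; the server name, document root, and log paths here are all assumptions, so substitute the bold parts of your real config:

Code:
#!/bin/bash
# Usage: ./add_subdomain.sh name1 [name2 ...]
CONF=/etc/httpd/conf.d/applications.conf

for sub in "$@"; do
    cat >> "$CONF" <<EOF
<VirtualHost *:80>
    ServerName   ${sub}.example.com
    DocumentRoot /var/www/${sub}
    ErrorLog     logs/${sub}-error_log
    CustomLog    logs/${sub}-access_log common
</VirtualHost>
EOF
done

# Reload Apache so the new subdomains take effect.
service httpd reload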
First post from a very new Linux user... I am trying to create a BASH script that will allow the user to provide multiple directory names, check whether each directory exists, and create it if it does not.
I am using the following code:
This works fine as long as the user enters a single directory name. How can I modify it so it will process all the directory names the user enters at the read prompt?
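One way, assuming the names are separated by spaces at the prompt, is to read them into an array with read -a and loop over it:

Code:
#!/bin/bash
read -r -p "Enter directory names: " -a dirs
for d in "${dirs[@]}"; do
    if [ -d "$d" ]; then
        echo "$d already exists"
    else
        mkdir -- "$d" && echo "created $d"
    fi
done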
I have an Ubuntu server on which a file is dumped every hour, then a new file for the next hour, and so on. If a problem stops the files from being created, empty files are created every minute until the process is killed and started again. I need help writing a shell script that checks whether empty files are being created and, if so, kills the process and starts it again. It would be a great help if anyone could assist me with this.
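A sketch of the check, meant to be run from cron every few minutes; the dump directory, process name, and restart command are all placeholders for your setup:

Code:
#!/bin/bash
DUMP_DIR=/var/data/dumps
PROC_NAME=dumper
RESTART_CMD=/usr/local/bin/dumper

# -s is true for a file with size > 0, so ! -s catches empty files.
newest=$(ls -t "$DUMP_DIR" | head -n 1)
if [ -n "$newest" ] && [ ! -s "$DUMP_DIR/$newest" ]; then
    pkill -x "$PROC_NAME"
    sleep 2
    "$RESTART_CMD" &
fi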
I need to process billions of small files using bash shell commands with a limited memory size (256MB). If any of those files contains certain "keywords", the file should be removed. I tried with this command:
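Since the command itself got truncated above, here is the shape that keeps memory flat: stream names from find, let grep pick out the matching files, and delete only those (the path and keyword are placeholders):

Code:
# grep -l prints only the names of files containing the pattern;
# -Z plus xargs -0 keeps unusual file names safe. Nothing is held in
# shell variables, so memory use stays small regardless of file count.
find /data -type f -print0 |
    xargs -0 grep -lZ "keyword" |
    xargs -0 rm -f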
I want to be able to tell processes/programs to use a specific NIC. I have a laptop (used as a desktop) with a gigabit wired connection and a wireless connection, and I want to force pidgin to use the wireless connection. More specifically, I just want to block pidgin from using eth0; how do I go about that? I can't find anything that will block applications. I could block outside servers from communicating over eth0, but I don't want to do that.
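iptables can't match a program by name, but its owner module can match by user or group, so one workaround is to run pidgin under a dedicated group and reject that group's traffic on eth0; the group name pidgin-net is hypothetical:

Code:
sudo groupadd pidgin-net
sudo iptables -A OUTPUT -o eth0 -m owner --gid-owner pidgin-net -j REJECT

# Launch pidgin with that group so the rule applies to it:
sg pidgin-net -c pidgin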
On my RHEL5 system, one of my key files in a specific directory gets deleted when I start my application suite (which has multiple processes). Is there some way to narrow down which specific process is deleting this file?
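The kernel audit subsystem (auditd ships with RHEL5) can answer this: watch the file, reproduce the deletion, then search the log by the rule's key. The path and key name below are placeholders:

Code:
auditctl -w /path/to/keyfile -p wa -k keyfile-watch
# ...start the application suite and let the file disappear, then:
ausearch -k keyfile-watch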
I want to change the resource limits of a specific process, or to create a new process and give it the limits I want. There is a function, setrlimit(), that can change limits, but I want to apply it to another process programmatically, and the problem is that this function does not take a process ID. I have read most books on the subject of Linux system programming without finding an answer.
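On kernels 2.6.36 and later there is a prlimit() syscall, and util-linux ships a prlimit command built on it; both take a PID. A sketch (the PID and sizes are placeholders):

Code:
# Cap the address space of an already-running process at 100 MB:
prlimit --pid 1234 --as=104857600

# For a brand-new process, ulimit in a subshell does the same job:
( ulimit -v 102400; ./myprog )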
Is there a way to bind specific programs to specific network devices (not IPs, since I have dynamic IPs)?
For example, I wish for irssi to route through eth0 and w3m to route through eth1. Keep in mind these devices have dynamic IPs, so I cannot attach the programs to an IP.
The solution cannot be accomplished through route, since route pivots on IPs, not devices.
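On kernels and iproute2 versions recent enough to support them, network namespaces sidestep the IP problem entirely, because each program sees only the device you hand it; a sketch assuming eth1 and a DHCP setup:

Code:
sudo ip netns add wireless
sudo ip link set eth1 netns wireless       # eth1 now exists only in "wireless"
sudo ip netns exec wireless dhclient eth1  # pick up its dynamic IP inside it
sudo ip netns exec wireless w3m http://example.com/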
I use rhythmbox to download my podcasts, and I'd really like to automatically sync the new files to my phone when I attach it. The phone mounts as an external drive, so I was hoping to write a script that runs automatically when the drive mounts.
I would also want it to delete files on the phone that are not present on the computer. Can anyone help me with the syntax for both the bash part and the correct rsync command?
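For the rsync half, something like this mirrors the local podcast directory onto the phone and deletes anything no longer present locally; the mount point and paths are assumptions:

Code:
#!/bin/bash
SRC="$HOME/Music/Podcasts/"
DEST="/media/PHONE/Podcasts/"

# Only sync when the phone is really mounted; --delete removes files on
# the phone that no longer exist locally. -rt (rather than -a) suits FAT
# media, which cannot store Unix permissions.
if mountpoint -q /media/PHONE; then
    rsync -rtv --delete "$SRC" "$DEST"
fi

To trigger it on mount, a udev rule or your desktop's automount hook can call the script; the details vary by distribution.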
I'm new to bash scripting and I've searched the forums and the Internet for this, but haven't had any luck; I've found similar things, but not what I need. What I need is a simple script that uses the user's input to locate and display where a file is. I would prefer locate over find, since I know the person I am writing this for has locate on her machine (my mom, who is just beginning with Linux). I'm writing the script to make things easier for her while she learns. In this particular part of the script, I would like it to prompt for the file she is searching for, read her input, and then display where the file is. I realize it would in most cases be much simpler to just teach her how to use locate, but she is very impatient, and this is only one part of the script I will be writing; I can't figure out how to do this.
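A sketch of that part, assuming locate's database is reasonably fresh (updatedb runs daily on most systems):

Code:
#!/bin/bash
read -r -p "What file are you looking for? " name
if [ -z "$name" ]; then
    echo "You didn't enter anything."
elif ! locate -- "$name"; then
    echo "No file matching '$name' was found."
fi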
I have a text file that stores a list of files and directories, and I want to extract only the file extensions from it and store them in another file. E.g., below are the file's contents; from them I want to get the extensions sh, pl and h and store those in another file. I also don't want the directory entries.
A scripts/services_restarter.sh
A scripts/svn post_commit scripts
A scripts/tmp/
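Assuming the list lives in filelist.txt and directories are the entries ending in a slash (both assumptions taken from the sample), this keeps only the extensions:

Code:
# Skip lines ending in "/", pull off a trailing ".ext", drop the dot,
# and de-duplicate into the output file. Lines with no extension simply
# produce no output.
grep -v '/$' filelist.txt |
    grep -o '\.[A-Za-z0-9]\+$' |
    sed 's/^\.//' |
    sort -u > extensions.txt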
I want to delete all files within a specific folder without deleting the folder itself; what is a good bash command for this? I found this one, but encountered errors even though I am executing it within the specific folder:
useratdebian:/home/user/folder# find . -type f -exec rm -rf {} ;
[1] 5052
useratdebian:/home/user/folder# find: missing argument to `-exec'
[1]+  Exit 1    find . -type f -exec rm -rf
The command as it appears is:
find . -type f -exec rm -rf {} ;
How can I delete only the files contained within the folder called "folder", for example?
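For what it's worth, that error comes from the shell swallowing the unescaped semicolon before find ever sees it; two common fixes (the second is GNU-find-specific):

Code:
find . -type f -exec rm -f {} \;   # escape the ; so find receives it
find . -type f -delete             # simpler, and spawns no rm processes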
I am having a problem identifying a BASH script process that I run at startup. When I use "ps -e", I see a few BASH and SH processes running, but I don't know whether any of them is my script. Is there a way to give a BASH script a specific name when run, so that I could see it as the process name? That would make it easier to identify and kill when needed.
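pgrep and pkill with -f match against the full command line rather than just the process name, so the script's file name is usually enough to single it out (myscript.sh is a placeholder):

Code:
pgrep -f myscript.sh      # list the PIDs running the script
pkill -f myscript.sh      # kill them

# Alternatively, have the script record its own PID when it starts:
echo $$ > /var/run/myscript.pid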
What I want to do is pretty simple: I want to uncomment every line that begins with "deb" (except for deb cdrom) in /etc/apt/sources.list. I know how to do this through System > Administration > Software Sources, and I know I can gksu gedit /etc/apt/sources.list, but I'd rather have a script do it. It's less work, less typing, less clicking, and it would work the same on every Ubuntu version.
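A sed one-liner can do it; this sketch assumes commented entries look like "# deb ..." (the exact comment style can vary between releases, which is why it keeps a .bak backup):

Code:
# Strip the leading "# " from deb lines, skipping any mentioning cdrom.
sudo sed -i.bak '/cdrom/! s/^# *\(deb .*\)/\1/' /etc/apt/sources.list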
I'm trying to call a specific variable based on a user selection. For example:
Code:
Select a file:
[1] foo.tar
[2] bar.tar
Enter a selection:

I have already coded each possible selection to have its own variable. If the user selects 2 I need to select $SELECTED_TAR2, or if they select 1 I need to select $SELECTED_TAR1, and then do something like this behind the scenes:
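Bash's indirect expansion, ${!name}, turns the chosen number into the matching variable; an array is often the cleaner version of the same idea:

Code:
SELECTED_TAR1=foo.tar
SELECTED_TAR2=bar.tar

read -r -p "Enter a selection: " choice
var="SELECTED_TAR$choice"
echo "You picked: ${!var}"

# Array alternative: indices do the selection directly.
tars=(foo.tar bar.tar)
echo "You picked: ${tars[choice-1]}"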
I would like to parse an input file in which there are two columns per row. We want to see how many lines are duplicated, where we define a duplicate as having the same second field but a different first field. For instance, if the input file looks like the following:
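Since the sample input got cut off here, a sketch assuming whitespace-separated columns in a file called data.txt (a hypothetical name):

Code:
# Count, per second-field value, how many distinct first-field values
# occur with it; a count above one means duplicated lines by the
# definition above.
awk '!seen[$1, $2]++ { count[$2]++ }
     END { for (k in count) if (count[k] > 1) print k, count[k] }' data.txt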
I look after a server which accepts automatic overnight PASV FTP uploads from remote clients. When the uploads are complete, my Bash script copies the files to another location. The problem is, my script needs to be a bit smarter when it comes to detecting active FTP sessions.
I was using:
Code:
netstat -n | grep ":21 " | grep ESTABLISHED
to test whether there were active sessions, but came unstuck when a local user left an unrelated FTP session active. The result: my script hung around all night thinking there was an active upload from a remote client. My server is behind a firewall, so remote clients all show an internal (NAT) address, which means I can't differentiate by source IP address. I can't install lsof or fuser for security reasons. Is there a way I can test for active FTP sessions from specific users? I am running Red Hat Enterprise Linux Server release 5.2 (Tikanga).
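Many FTP daemons fork one process per session and show the logged-in user in the process table, so plain ps can stand in for lsof here; the daemon name (vsftpd) and the remote-account prefix are assumptions to adapt:

Code:
# Succeeds only if an FTP session owned by a remote upload account exists.
if ps -eo user,comm | awk '$2 == "vsftpd" && $1 ~ /^client/ { found = 1 }
                           END { exit !found }'; then
    echo "remote upload still active"
fi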
Urgent: on reboot, the Fedora 11 lower bars reach about 70-80%, then I get the message:
/dev/mapper/VolGroup-lv_root:
(There are 22 inodes containing multiply-claimed blocks.)
File /home/burnie/.thumbnails/normal/[bunchofhexits].png (inode #15826, mod time Mon Nov 2 04:24:26 2009)
has 13 multiply-claimed blocks, shared with 1 file:
Just in case this is relevant: yesterday I spent several hours attempting (and failing) to build IcedTea in order to run a Java web service that required it. After the failure I left Linux for Windows Vista to run the web service, found that Vista could not support 64-bit Firefox, and rebooted into Linux, where I ran make clean on the IcedTea installation. That balked because a stamps directory could not be deleted while it was not empty; make distclean made the same complaint. So I manually deleted the files in the stamps subdirectory, ran make distclean "cleanly", and then rebooted into my current, very unsatisfactory state.
File allids consists of 300,000 rows, each containing a 5-7 digit numeric ID; file newids consists of 20,000 rows of IDs. How do you explain the following timings? time: 0.07s:
I am new to scripting and have been working on this bash script for a while now. I have researched this problem but can't seem to find a solution, so I was wondering if someone could please help me out. Here is my script:
I cannot get this script to run the "ps -ef" command on the client. It gets its value from the host machine that I am running the script from, but I need the command to execute on the client. When I run the command (ps -ef | grep NO | grep -v grep) on the client itself, I get something back. Here is what I get when I try to debug the script.
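The usual culprit: unless the whole pipeline is passed to ssh as one quoted string, the pipe runs on the local host. A sketch (CLIENT is a placeholder hostname):

Code:
CLIENT=client1
# The [N]O trick stops grep matching its own command line, which also
# removes the need for "grep -v grep".
result=$(ssh "$CLIENT" 'ps -ef | grep "[N]O"')
echo "$result"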
I have a computation running that takes a long, but uncertain, amount of time. I have another computation that needs to run, but only _after_ the first one is done, and I won't be at the computer at that time to start it manually. I've done some Googling, and I found how to delay execution by a specific amount of time (e.g. "start process x in exactly 8 minutes from now"), but that isn't quite what I want. Essentially, I'd like to tell the shell, "When process #nnnn finishes running, start process x". Is there a way to do this?
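The shell builtin wait only works on children of the same shell, but polling the PID with kill -0 works for any process you own; 12345 and ./next_job are placeholders:

Code:
# kill -0 sends no signal; it just tests whether the PID still exists.
while kill -0 12345 2>/dev/null; do
    sleep 60
done
./next_job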