I'm a truly ordinary shell scripter (and unfortunately I know nothing about any higher-level language), but I have an important script to run to count the number of files in our client directories.
Problem is that while we haven't got a lot of clients (circa 100), each of them might have a LOT of files (thousands).
And my script is REALLY slow code...
PS How slow? Well, I started a cron job at 0005 yesterday, and as of 0930 today it's only a little over half-way through all our clients.
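For what it's worth, a per-directory find piped to wc -l is usually dramatically faster than looping over files one by one; here is a rough sketch, assuming the client directories all sit under /home/clients (that path is just an example).

Code:
#!/bin/sh
# Count the files under each client directory (/home/clients is an example path).
for dir in /home/clients/*/ ; do
    count=$(find "$dir" -type f | wc -l)
    printf '%s: %s files\n' "$dir" "$count"
done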
I want to give as much detail as possible. My working directory is ~/a1/shell. In the shell directory I have a Makefile, and I also have subdirectories (obj, src, include).
My current Makefile
Quote:
#What needs to be built to make all files and dependencies
clean:
# End of Makefile
I wanted it so: all .o files are created in the obj subdirectory, and my application, sshell, is created in the shell directory.
I am getting this error when I run make: No rule to make target 'shell.h', needed by 'shutil.o'. Stop.
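For reference, here is a minimal sketch of the kind of Makefile that puts the objects in obj/ and links sshell in the shell directory; the file names (src/*.c, include/shell.h) are assumptions based on the layout you describe, and recipe lines must be indented with a real tab. The "No rule to make target 'shell.h'" error usually just means a rule lists shell.h as a prerequisite under a path where make cannot find it, so spelling the prerequisite as include/shell.h (and passing -Iinclude to the compiler) normally clears it.

Code:
# Sketch only - file and directory names are assumed
CC     = gcc
CFLAGS = -Wall -Iinclude
SRCS   = $(wildcard src/*.c)
OBJS   = $(patsubst src/%.c,obj/%.o,$(SRCS))

all: sshell

sshell: $(OBJS)
	$(CC) $(OBJS) -o $@

obj/%.o: src/%.c include/shell.h
	mkdir -p obj
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f obj/*.o sshell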
How can I make a shell script that uses sqlplus to update a database table? This is what I'd like it to do:
- log in to the db server (I have created ssh keys to bypass the login prompt)
- log in to sqlplus / as sysdba
- update status set status='END' where status='BEGIN';
- commit;
- quit;
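A minimal sketch of one way to do that, assuming the passwordless ssh login is already working; oracle@dbhost is a placeholder for the real account and host.

Code:
#!/bin/sh
# Run sqlplus on the remote DB server over ssh and feed it the SQL
# through a here-document (oracle@dbhost is a placeholder).
ssh oracle@dbhost 'sqlplus -s / as sysdba' <<'EOF'
update status set status='END' where status='BEGIN';
commit;
quit
EOF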
I just installed Gnome Shell in Ubuntu 11.04 through UGR Linux and everything works fine! The only problem is that I cannot run make and make install. I get the following errors:
Code:
alexandros@Autobot:~/gnome-shell/source/gnome-shell-extensions$ make && make install
Making all in extensions
make[1]: Entering directory `/home/alexandros/gnome-shell/source/gnome-shell-extensions/extensions'
[code]........
I am trying to create a shell script similar to ls, but one which only lists directories. I have the first half working (the no-argument version), but when I try to make it accept an argument, it fails. My logic is sound, I think, but I'm missing something in the syntax.
Code:
if [ $# -eq 0 ] ; then
  d=`pwd`
  for i in * ; do
    if test -d $d/$i ; then
      echo "$i:"
code....
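For comparison, here is a minimal sketch of the argument-handling version; with no argument it falls back to the current directory, otherwise it lists the directories inside the directory given as $1.

Code:
#!/bin/sh
# List only the directories inside the given directory (default: current directory).
d=${1:-$(pwd)}
for i in "$d"/* ; do
    if [ -d "$i" ] ; then
        echo "${i##*/}:"    # print just the name, like the no-argument version
    fi
done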
Is there some functional way to read things in the Python shell interpreter, similar to less or more in bash (and other) command-line shells?
Example:
Code:
>>> import subprocess
>>> help(subprocess)
... [pages of stuff to read] ...
I'm hoping so as I hate scrolling and love how less works with simple keystrokes for page-up/page-down/searching etc.
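One thing that often helps (just a sketch, nothing beyond pydoc here): help() goes through pydoc's pager, which honours the PAGER environment variable, and the same documentation can be read from an ordinary shell with pydoc itself.

Code:
# Read module docs through less from a normal shell
PAGER=less python -m pydoc subprocess

# Exporting PAGER before starting the interpreter usually makes
# help(subprocess) page at the >>> prompt as well
export PAGER=less
python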
gnome-shell seems to go into loco mode sometimes on my computer... among other problems I'm having this one is the worst. So far, it *seems* to happen after a flash movie is loaded in firefox, although I can't tell for sure if that's the case... Flash performance for some reason is crap anyways on here, I don't know if maybe I need a different version of flash? Anyways, if you guys have any clues as to wth is going on, that would be awesome! I'm a bit of a newbie so treat me nicely
Just a quick update - reloading with r seems to fix the CPU usage issue... until it happens again...
I followed instructions on https://help.ubuntu.com/community/MacBookAir3-2/Meerkat , including the post install ones. Ubuntu with GNOME 2 runs fine, but some animations in GNOME Shell are very slow, like switching to and from Overview. I tried drivers from xorg-edgers and x-swat PPAs, but things are still slow. The most relevant solution I found is http://live.gnome.org/GnomeShell/SwatList. I applied the patch
Code:
$ cd ~/gnome-shell/source/gnome-shell
$ curl http://bugzilla-attachments.gnome.org/attachment.cgi?id=157326 > shell-animations-nvidia.patch
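For anyone following along, applying a patch fetched that way typically looks something like the following; whether -p1 or -p0 is right depends on how the patch was generated, so treat this as a sketch.

Code:
# Apply the downloaded patch from the top of the gnome-shell source tree
cd ~/gnome-shell/source/gnome-shell
patch -p1 < shell-animations-nvidia.patch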
I've created a simple script-based menu. This menu will be accessed only by certain users via ssh. When a user logs in, the menu runs automatically (configured in the user's .bash_profile). How do I force the session to close when the user hits Ctrl-C or Ctrl-Break? In a nutshell, I don't want the user to have access to the shell.
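A minimal sketch of one common approach, assuming the menu lives at /usr/local/bin/menu.sh (the path is an example): exec the menu from .bash_profile so there is no shell left to fall back into, and trap the interrupt keys inside the menu so they simply end it.

Code:
# In the restricted user's ~/.bash_profile: replace the login shell with the menu,
# so when the menu exits (for any reason, including Ctrl-C) the ssh session closes.
exec /usr/local/bin/menu.sh

# And near the top of menu.sh: make Ctrl-C / Ctrl-\ / Ctrl-Z exit cleanly.
trap 'exit 0' INT QUIT TSTP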
However, when I logged into it, I noticed that the cursor movements were extremely slow and delayed, but this only happens when I have a window open; it's fine when it's just the desktop. The same thing happened with GNOME Shell, so I think it must be Mutter, because it's fine with normal Ubuntu. Is there any solution to this at all?
I upgraded my computer to an AMD 1055T six-core CPU and did a new install with 8 GB of RAM. I notice that a lot of things are lagging. I have the new updates installed and it is really slow, even right after a restart. Any ideas?
I recently started shell programming and my task now is to build a menu display. Currently I am stuck on the part where the user inputs both a title and an author and the matching entry gets deleted.
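As a rough sketch of the delete step, assuming the records are kept one per line in a text file with the title and author separated by a colon (books.txt and that layout are made up for illustration):

Code:
#!/bin/sh
# Delete the record whose title and author both match the user's input.
# books.txt and the title:author layout are assumptions for illustration.
printf "Title: "  ; read title
printf "Author: " ; read author
grep -vxF "$title:$author" books.txt > books.tmp && mv books.tmp books.txt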
I've stopped the download several times already due to it being so slow. I resumed the downloading at file 1991 (out of 2046 files....) but it's not budging from there. Are there really so many downloads going on today that it is making it hard to download?
My disk is very slow after I installed ubuntu 10.04 over my old 9.04. Doing some tinkering helped a little code...
But it is still far too slow. On the other version, I had a custom partition setup, with the home partition with 100GB, and ext3 (and other partitions for swap, boot, root folder and space for a windows partition I never cared to install ).
This time I am using a standard Lynx setup (2 partitions, the swap and the main one with almost 250Gb, using ext4).
Some applications I develop, which use the disk for some unit tests, are now very slow to work with. Is there a way to make it faster? Going back to 9.04? Waiting for 10.10? Gparting and making the partitions smaller on ext3? I don't know if any of these will work.
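Before repartitioning, it may be worth measuring the raw disk speed so you can tell whether ext4 or the partition layout is really the culprit; hdparm's read benchmark is a quick first check (the device name /dev/sda is an assumption).

Code:
# Rough read-speed benchmark: cached reads (-T) and buffered device reads (-t)
sudo hdparm -tT /dev/sda
# If the buffered reads are far below what the drive is rated for, the problem
# is below the filesystem; if they look normal, suspect ext4/mount options instead.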
I am worried about properly mounting an external USB hard disk. It is a 1 TB SATA disk, NTFS formatted. It works perfectly with XP SP3. With Xubuntu 10.10, with ntfs-3g installed, it is VERY slow to transfer data: it took almost 3 days for 360 GB. When I connect it to the machine, it automatically mounts by itself on /media/disk without any options; I suspect bad options are being used and ntfs-3g is not being used at all. How can I force it to mount with good options?
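One way to see what the automounter did and then mount it by hand with better options, as a sketch (the device name /dev/sdb1 and the mount point are assumptions; big_writes is an ntfs-3g option that usually helps bulk-transfer speed):

Code:
# See which driver and options the automounter actually used
mount | grep media

# Remount it explicitly with ntfs-3g and big_writes (/dev/sdb1 is an example)
sudo umount /media/disk
sudo mkdir -p /mnt/usbntfs
sudo mount -t ntfs-3g -o big_writes,noatime /dev/sdb1 /mnt/usbntfs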
All of a sudden, the startup (time from GRUB to Login Screen) has been quite slow. I recently installed a LAMP server. I've tried disabling httpd and mysql but it didn't seem to have any effect. I've attached my dmesg output below.
I posted this yesterday, but my post completely disappeared (I looked high and low -- nothing.) I am using Ubuntu Server 10.04, all the latests updates. For an FTP Server, I use ProFTP.
One specific directory, and its subdirectories, on my server will not download at a reasonable rate. They move at about 17-50 KB/s. All other folders work fine, at around 1.5-2.5 MB/s.
What is going on? I have no idea how to troubleshoot this. The files being transferred are in a directory under home. They should have no permissions issues (I have already reapplied the permissions I want), I tried restarting ProFTP, and the files vary in size (from a few kilobytes to about 120 megabytes). I use Webmin for most web management.
I am not having overload issues with my network card or CPU utilization while downloading these files. They are being accessed from the local network.
This issue is taxing because the files in question are backup files.
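One hedged way to narrow it down is to check whether that directory is also slow to read straight from disk; the file path below is only an example of one of the large backup files.

Code:
# Time a raw read of one of the affected files; dd prints the throughput at the end.
dd if=/home/user/backups/large-backup.tar of=/dev/null bs=1M
# If the local read is fast but the FTP transfer still crawls at 17-50 KB/s,
# the bottleneck is in ProFTPD/network handling rather than the disk.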
I recently installed Ubuntu 10.04 on my computer and enabled the top effects option when choosing desktop background, screensavers etc. It worked fine for a couple of days, then started to take about 3-5 seconds to maximize any program I ran - it minimizes them instantly but takes ages the opposite way around.
My laptop has a Core Duo processor, 3 GB of RAM and an ATI 3400 512 MB graphics card; when I ran Vista on it, it could easily handle Vista's built-in effects and run Crysis well on low settings - I very much doubt the problem is that my computer isn't powerful enough. Is there any way I can get Ubuntu's special effects to work properly again? At the moment normal effects won't work either - I am having to use the No Effects option.
I have read many threads but cannot find a solution on this code...
Now, no matter how I transfer, whether it's from HDD to HDD or to SSD, I never get more than 18-20 MB/sec; I just tried booting to my live CD and was able to transfer at 24-25 MB/sec.
This is really slow, since even the worst drive (the 2.0 TB one) can do a minimum of 30 MB/sec on reads.
Anybody got any ideas? This seems to be something that has plagued Ubuntu for years, and no, I don't want to try other distros; I tried almost all of them a year ago and finally went with Ubuntu.
Is it possible to get some help with a shell scripting job: sending an email containing all the commands typed in the terminal or over ssh, for example ls, pwd, cat and others?
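Here is a minimal sketch of one way to do it, assuming a working local mail command and using admin@example.com as a placeholder address: record the session with script(1) and mail the log when the user logs out.

Code:
# Appended to the user's ~/.bash_profile (sketch)
LOG=$(mktemp /tmp/session.XXXXXX)            # temporary log for this login session
trap 'mail -s "commands from $USER on $(hostname)" admin@example.com < "$LOG"; rm -f "$LOG"' EXIT
script -q "$LOG"                             # records everything typed (and its output)
exit                                         # end the login shell once the recorded session closes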
I am running Debian Squeeze 6.0.2. I have been using it for, I'd say, the last 3 weeks and really am enjoying it.
I generally use transmission-gtk to share files over the internet. Normally I seed torrents at 110-160 kB/s for hours at a time. However, after messing around with Firestarter, my upload speed for seeding torrents rarely peaks over 70 kB/s. I have purged Firestarter without getting my regular upload speed back, and am very confused as to what happened. I also notice that sometimes when it gets to about 70 kB/s it will immediately drop down to the 20-30 kB/s range.
For incoming bittorrent connections I use port 37294. I have set port 37294 to be allowed in my firewall, and forwarded in my router (since purging firestarter did not help I just reinstalled it).
I have also read allowing ports 6881-6889 is important, but I have never done that in my history of using torrents, and I have never experienced a decrease in UL speed like this.
Have I done something incorrect? I have never had this issue on other machines?
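One thing worth checking: purging Firestarter removes the package, but the iptables rules it loaded can stay active until they are flushed or the machine reboots, so look at what is actually in the kernel.

Code:
# List the rules currently loaded (leftover Firestarter rules would show up here)
sudo iptables -L -n -v

# If stale rules are present, reset to an open ruleset and re-add anything
# you really want afterwards (e.g. allowing port 37294).
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F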
My system has been rather flaky as of late. I was trying, yet again, to get the internal bluetooth (rtl8723be) to work, so I could free up one of the only two USB ports on the Lenovo IdeaPad 100. In my blinding brilliance I typed make uninstall after make did not work (why I did this, I still don't know), and I noticed that it said btusb.ko was removed. As expected, at reboot no bluetooth adaptors were found. I tried reinstalling bluetooth via apt-get with no success, so finally I just ran apt-get update and apt-get upgrade. Once that finished, my system did not want to reboot with a sudo init 6, so I hard-powered it off and restarted. After its obligatory fsck, everything was okie dokie: it booted up and voila, bluetooth was working again, or should I say the USB adaptor bluetooth, not the rtl8723be.
So I figured, ah, the heck with it, I'd just enjoy my speaker for watching a movie via Kodi, and that's when I saw that it was buffering continuously. I checked a couple of different speed tests, and my wifi is only pulling 2.0ish Mb down. I booted into Windows 10 just to check, and sure enough, same result..... so I don't know if the adapter is dying or what. I figured I'd try to update the driver, and this is where I run into not being able to run any make commands; they all error with:
Code:
make[1]: Entering directory '/lib/modules/3.16.0-4-amd64/build'
make[1]: *** No rule to make target 'modules'. Stop.
make[1]: Leaving directory '/lib/modules/3.16.0-4-amd64/build'
Makefile:393: recipe for target 'LINUX' failed
make: *** [LINUX] Error 2
I have installed the linux headers, and followed this post here, [URL] .... (substituting my uname -r of course)
I am still having the problem of sudo init 6 not working, and the slow wifi, as well as the internal bluetooth not working.
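For that particular make error, "No rule to make target 'modules'" inside /lib/modules/<version>/build almost always means the kernel build directory is missing or the build symlink is dangling, so checking the headers for the running kernel is a good first step (sketch; Debian package names assumed).

Code:
# Confirm the build symlink for the running kernel points somewhere real
ls -l /lib/modules/$(uname -r)/build

# (Re)install the matching headers if it is missing or dangling
sudo apt-get install --reinstall linux-headers-$(uname -r)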
I've just noticed that unrar is suddenly taking minutes to extract instead of seconds.
I can't remember if it has recently been updated, but I've uninstalled and reinstalled it, and the version is: unrar.x86_64 0:3.7.8-3.fc10
I've found a few Ubuntu posts about it on Google, but in true Ubuntu fashion nobody has any answers!
What's odd is that when I unrar a file (from Nautilus or with "unrar x *.rar") it takes, say, 4 mins; then if I do it again it takes 15 secs, as if it's caching somewhere.
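To confirm the caching theory, one test is to drop the kernel's page cache between runs and see whether the extraction goes back to minutes; it is harmless, but it temporarily slows everything that was cached (archive.rar is a placeholder).

Code:
# Time the extraction, flush the page cache, then time it again
time unrar x -o+ archive.rar
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
time unrar x -o+ archive.rar   # if this is slow again, the earlier speed-up really was caching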
I am having a very weird problem, the likes of which I have never heard of before. The problem, in short (I will give details below), is that the copy speed between the partition on which I am running my Fedora and another partition where I store backup data is 50 KB/s, which is extremely slow (it has usually been around 10 MB/s, almost 200 times faster).
My laptop is an HP Pavillion Dv7 2080 ep, in which I have two disks of 250 GB. One of the disks is partitioned between two operating systems, Fedora 14 64-bit and Windows 7 64-bit, and the other disk contains the above-mentioned partition where I store my backups (let's call it the backup disk). The problem happens ONLY when I try to copy anything from my home in Fedora to the backup disk. I first thought it could be a problem with the backup disk, so I logged into Windows and made a copy of some AVI files (approx. 3 GB) to the backup disk, which went normally, within a few seconds, thus eliminating that possibility. I then tried the inverse: a copy from the backup disk into my home folder in Fedora. It also went normally, copying 4 GB within a few seconds, actually at an excellent speed of 50 MB/s.

After searching on Google, I saw some people also complaining of slow copy speeds (although they had 7 MB/s, more than 100 times faster than what I have now), and one possible cause was that the source filesystem (in my case the Fedora home folder) was near full capacity. I therefore deleted everything I could, around 15 GB, and now I have about 20 GB of free disk space, which I think is more than enough for everything. However, the problem persists. Some people also said it could be the type of files and/or the amount of data. I therefore tried several kinds of files (from raw data files to AVI and RMVB video files) and several amounts of data (from single files of a few megabytes to folders of 20 GB), and in all cases I got the same unacceptably low copy speed of 50 KB/s.
It seems to be a problem in the software, caused by the updates, since until 10 days ago everything was OK. However, I usually apply updates whenever they appear but make my backups much less frequently, so I don't know for sure what has changed in Fedora through the updates since the last backup I made, when everything was working fine.
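A sustained 50 KB/s is in the range you see when a drive has dropped back to PIO mode or is logging ATA errors, so a reasonable first check is the kernel log and the drive's SMART health (the device name /dev/sda is an example; smartctl comes from the smartmontools package).

Code:
# Look for ATA errors or DMA downgrades around the time of a slow copy
dmesg | grep -iE 'ata|dma|error' | tail -n 40

# Quick SMART health check of the source drive
sudo smartctl -H /dev/sda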
I have Ubuntu 9.10 64-bit on an AMD dual-core processor with 4 GB RAM. I have installed XBMC following the instructions here... http://wiki.xbmc.org/?title=HOW-TO_i...n_step-by-step I got as far as "XBMC is now installed and ready for use.", rebooted and loaded the program. It loads OK but I cannot access/select the menus because the mouse pointer bears no relation to the mouse, and the keyboard is no help either. Both are incredibly slow!
When I'm trying to log in to the FTP server with the appropriate username and password, it's taking almost 10-15 seconds to authenticate, making the login process slow. Even when I'm uploading files it again hangs for 10-15 seconds before completing the job successfully. It's not like it happens every time, but 7 times out of 10. Any idea how I can make the authentication faster?
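A 10-15 second pause before authentication is very often the server waiting on a reverse DNS or ident lookup for the client. The exact directive depends on which FTP daemon you run; as an example only, for ProFTPD the usual fix looks like this:

Code:
# In proftpd.conf (ProFTPD is an assumption; other servers have equivalent settings)
UseReverseDNS off
IdentLookups off

# Restart the daemon so the change takes effect
sudo /etc/init.d/proftpd restart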
I am using Red Hat Linux 9.0 with a Squid proxy server. My problem is that my Squid server is responding very slowly. Whenever I try to open sites, the site only starts to open after 3 or 4 seconds, and often Squid does not load the complete site; it stops in the middle. My Squid configuration is below. Is there any need to tune system parameters, like in sysctl.conf, for better diskd performance, or is there another problem? At this time I am using the default system parameters. Please help me in detail: what is the reason for Squid's slow performance, and if any system tuning is needed, please tell me in detail. I am very thankful to you; I am really worried about the slow performance of Squid. I also tried turning offline_mode on, but I get the same problem. code...
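Pages that only start loading after a few seconds usually point at DNS rather than disk, so before tuning sysctl it may be worth pointing Squid at a known-fast resolver and re-testing; the name-server address below is only an example.

Code:
# In squid.conf: use an explicit resolver instead of whatever the box inherits
dns_nameservers 192.168.1.1

# Reload squid and watch how long requests take
squid -k reconfigure
tail -f /var/log/squid/access.log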