Programming :: Dealing With Code Efficiently / Can't Use Textwrangler?
Jul 10, 2011
On my Mac, I use TextWrangler to store all of the HTML and CSS code I use to create my web site. So far so good, or should I say so great, because I love using TextWrangler. My problem is that when I am away from my computer, or on another computer with a different OS, I can't use TextWrangler. I am looking for a way (and I hope this is possible) to store and edit my code online, no matter where I am or what computer I am using.
I really enjoy using VIM to write Python, but I recently tried to use it to write some essays for school. Are there any configuration changes I could make to ease writing longer essays in VIM? For instance, a way to move just one line down on the screen when I press the j or k keys, instead of jumping over a whole wrapped paragraph?
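One relevant detail: `j` and `k` move by file lines, so a soft-wrapped paragraph counts as a single line, while `gj` and `gk` move by screen lines. A minimal prose-oriented `~/.vimrc` sketch that swaps the two behaviours (treat it as a starting point, not a complete essay setup):

```vim
" Soft-wrap long lines at word boundaries instead of mid-word.
set wrap
set linebreak
" Make j/k (and the arrow keys) move by screen line, not file line.
nnoremap j gj
nnoremap k gk
nnoremap <Down> gj
nnoremap <Up> gk
```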
I find that (failed) dependencies are such a pain.
I was just wondering: instead of using yum or searching online to find dependencies one by one, is there any faster method for installing dependencies, or any advice that makes Linux installation swift?
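For what it's worth, yum already resolves dependencies itself when the package comes from a configured repository; the one-by-one hunt usually only happens with `rpm -i` on a bare .rpm file. A sketch of the relevant commands (the package name is a placeholder):

```shell
# Installing from a repository pulls in all dependencies automatically:
yum install somepackage

# For a locally downloaded .rpm, let yum fetch its dependencies from
# the repositories instead of chasing them one by one:
yum localinstall somepackage.rpm

# Just list what a package needs:
yum deplist somepackage
```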
We are in the process of pruning our directories to recuperate some disk space.
The 'algorithm' for the pruning/backup process consists of a list of directories and, for each one of them, a set of rules, e.g. 'compress *.bin', 'move *.blah', 'delete *.crap', 'leave *.important'; these rules change from directory to directory but are well known. The compressed and moved files are stored in a temporary file system, burned onto a Blu-ray, verified on the Blu-ray, and finally deleted from their original locations.
I am doing this in Python (basically an os.walk loop with a dictionary holding the rules for each extension in each folder).
Do you recommend a better methodology for pruning file systems? How do you do it?
What can you do when your Linux system "can't find" dynamically linked libraries that are indeed installed in their correct locations? Case in point: I'm trying to run a program called 'ucanvcam':
oliver@human ~/installed/ucanvcam-0.1.6/bin $ ./ucanvcam
./ucanvcam: error while loading shared libraries: libgd.so.2: cannot open shared object file: No such file or directory
oliver@human ~/installed/ucanvcam-0.1.6/bin $ locate libgd.so.2
/usr/lib64/libgd.so.2.0.0
/usr/lib64/libgd.so.2
oliver@human ~/installed/ucanvcam-0.1.6/bin $ ldd ./ucanvcam
linux-gate.so.1 => (0xf7706000)
[...]
libgd.so.2 => not found
[...]
librt.so.1 => /lib32/librt.so.1 (0xf6b1e000)
How can I tell it to look for libgd.so.2 in /usr/lib64? And more importantly, why isn't it looking there, and where is it looking?
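The transcript above actually hints at the answer: librt resolved from /lib32, so the binary is 32-bit, and the 64-bit /usr/lib64/libgd.so.2 that locate found can never satisfy it; a 32-bit build of libgd is what's missing. The loader searches any rpath baked into the binary, then LD_LIBRARY_PATH, then the ldconfig cache built from /etc/ld.so.conf, never "wrong-architecture" directories. The diagnosis commands, demonstrated on /bin/ls so the snippet runs anywhere (substitute ./ucanvcam for the real case):

```shell
# Which shared objects resolve, and from where:
ldd_out=$(ldd /bin/ls)
echo "$ldd_out"

# Also useful (not run here): `file ./ucanvcam` reports the ELF class,
# i.e. whether the binary is 32-bit or 64-bit.

# Once a matching-architecture libgd is installed, either add its directory
# for one run:
#   LD_LIBRARY_PATH=/usr/lib32 ./ucanvcam
# or permanently, by listing the directory in /etc/ld.so.conf and running:
#   sudo ldconfig
```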
I am having trouble with a script that is supposed to take all the .jpg pictures in a directory given as a parameter and create thumbnails of them in a directory on the desktop,
e.g. from /here/are/the/files.jpg to ~/Desktop/parser-the/files.png. This is my script: all the individual parts work, but it falls apart when I put them together.
Code:
for picturesource in "${1}"/*.[jJ][pP][gG]; do
    echo "this is the picturesource $picturesource"
    destination=~/Desktop/parser-"${picturesource}"
    echo "this is the destination $destination"
done
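A sketch of how the pieces might fit together: the loop below uses a glob instead of parsing `ls` (which breaks on spaces), derives the destination from the file's base name, and only prints what it would do, leaving the actual conversion (e.g. ImageMagick's `convert`) commented out. The directories are stand-ins so the sketch is self-contained; in the real script they would be "$1" and a folder under ~/Desktop:

```shell
srcdir=$(mktemp -d)            # stand-in for "$1"
outdir=$(mktemp -d)            # stand-in for ~/Desktop/parser-...
touch "$srcdir/a.jpg" "$srcdir/B.JPG"

count=0
for picturesource in "$srcdir"/*.[jJ][pP][gG]; do
    [ -e "$picturesource" ] || continue             # glob matched nothing
    base=$(basename "$picturesource")
    destination="$outdir/${base%.*}.png"
    echo "would convert $picturesource -> $destination"
    # convert "$picturesource" -thumbnail 128x128 "$destination"  # ImageMagick
    count=$((count + 1))
done
echo "$count files"
```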
My notebook's LCD runs at 1920x1080 while my external LCD runs at 1920x1200. For the most part I use the external LCD, but there are times when I need to disconnect and use just the notebook's LCD. Because of the different resolutions, I need to exit my X session and log back in; otherwise the higher resolution of the external LCD gets cut off when I switch to the notebook's LCD. Other than forcing a non-native 1920x1080 on the external LCD, is there any other way to get both screens to show the full desktop without having to restart X? The GPU is an NVIDIA card with the NVIDIA drivers.
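If the driver exposes RandR, mode switching without restarting X is usually done with xrandr; with NVIDIA's proprietary driver of that era you may instead need TwinView/MetaModes configured through nvidia-settings. A hedged sketch of the xrandr route (the output names are placeholders; list the real ones with `xrandr -q`):

```shell
xrandr -q                                  # list outputs and supported modes
xrandr --output DP-0 --mode 1920x1200      # external panel at native resolution
xrandr --output LVDS-0 --mode 1920x1080    # notebook panel at native resolution
xrandr --output DP-0 --off                 # when undocking, drop the external head
```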
I am using Ubuntu 10.04 and trying to get a Novatel USB760 to work. Initially, it is detected as a CD rather than a modem. So, I wrote a udev rule to eject it, and that worked, but sometimes the CD comes back and remounts itself, which kicks the modem off and the connection dies. I tried various ways to fix this, but no luck. Finally, I just deleted all the CD rule files in /etc/udev/rules.d and /lib/udev/rules.d (if you don't delete the latter, udev recreates the files in /etc/udev/rules.d). That does work: now the CD reports hardware errors and does not mount, and the modem works as it should.
Now, this works on the device that does not and never will have a CD or DVD attached to it, but on my laptop I don't want to do that, as I would lose access to my DVD drive. Does anyone know of a way to make the udev rules ignore specific hardware and not try to create mounts or symbolic links for those devices? Ideally, I would like to blacklist the Novatel CD part and not the modem, but I am unsure how, or if, this is possible.
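One way to target just the CD half of the device is to match its USB vendor/product IDs in a rule of your own and mark it ignored. The IDs below are placeholders (read the real ones from `lsusb`), and option names vary between udev versions, so treat this as a sketch:

```
# Hypothetical /etc/udev/rules.d/99-novatel-ignore-cd.rules
# 1410:xxxx are placeholder IDs; the modem and the fake CD usually expose
# different product IDs, which is what lets you blacklist one and not the other.
SUBSYSTEM=="block", ATTRS{idVendor}=="1410", ATTRS{idProduct}=="xxxx", OPTIONS+="ignore_device"
```

The usual tool for these "ZeroCD" modems, though, is usb_modeswitch, which flips the device into modem-only mode at plug-in so the CD never appears in the first place.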
I have a laptop. At home and at the office I hook it up to an extra monitor for extra screen space. When I do this I add some panels on the second monitor.
When I occasionally use my laptop on the train, all my extra panels show up on my first screen, which gets really cluttered.
If I delete the panels I have to recreate them when I connect my extra screen again.
Is there any way of configuring GNOME so that you can easily recreate 'deleted' panels, or configure the extra panels to not show up unless the second screen is attached ('lock panel to screen')?
I currently work within an RTOS environment without an MMU, and thus have access to the entire memory map of whatever application I'm working on. As is common in the embedded world, different parts of the memory map relate to different peripherals or different types of memory. For our next-generation hardware, my company is looking at moving to an MMU-enabled processor and using Linux in some shape or form. Most of us in the department are familiar with Linux, but we are not Linux gurus by any means. So how to explicitly indicate to Linux that certain portions of an application must be stored in NVRAM, and other portions must NOT be, has us confused. None of us has a clear understanding of how user memory is doled out by Linux, or how we can influence Linux to use specific portions of the memory map at specific times.
For example, in this new application we expect to have two memory chips, both with DDR3 interfaces. One is a standard DDR3 chip. The other is a non-volatile MRAM with a DDR3 interface, so it can be accessed by a DDR3 controller and coexist with conventional DDR3 memory. But because the portion of the memory map that the MRAM represents will be the only non-volatile portion, we are unclear how to explicitly access MRAM addresses in an MMU-controlled environment. My hail-mary guess was that we would somehow tell Linux to mount the MRAM's memory space as a RAM drive and then access that memory as though it were a file on a hard disk, except much faster, since it runs at DDR3/MRAM speeds. Is there a better, more straightforward way to do this? Coming from an RTOS world, Linux is going to pose some serious challenges for us, but I think it will be the right move once we are all up to speed and thinking Linux-centric.
I am trying to extend my LVM and didn't feel like dealing with the terminal commands, so I downloaded system-config-lvm. But when I run it, I get the following readout:
Code:
sudo system-config-lvm
Traceback (most recent call last):
  File "/usr/share/system-config-lvm/system-config-lvm.py", line 50, in <module>
    from Volume_Tab_View import Volume_Tab_View
  File "/usr/share/system-config-lvm/Volume_Tab_View.py", line 11, in <module>
    from Properties_Renderer import Properties_Renderer
I have two NASes. I work off of one, and the other is used as a backup. As I have it set up now, it's slow: running a backup takes a week. Even for 7 TB, with 1,979,407 files, this seems a bit outlandish, particularly as both systems are RAID-5 and the network is all gigabit. I've been digging about in the rsync man pages, and I really don't understand what differentiates the various topologies. Right now, all the processing is being done on the backup NAS, which has the main volume from the main NAS mounted locally over SMB. I suspect that the SMB overhead is killing me, particularly when dealing with lots of files.
I think what I need is to set up rsync on the main NAS as a daemon, and then run a local rsync client to connect to it, which would hopefully let me completely avoid the whole SMB-in-the-middle affair; but aside from mentioning that it exists, I can find very little information on why one would want to use daemon mode for rsync.
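For reference, daemon mode takes a small rsyncd.conf on the serving side and a double-colon (or rsync://) path on the client; its appeal is exactly what you suspect: rsync's delta protocol runs end-to-end with no SMB in the middle, so only changed data crosses the wire. A sketch with made-up module name and paths:

```
# Hypothetical /etc/rsyncd.conf on the main NAS
uid = sys
read only = yes

[storage]
    path = /raid/data/Storage
    comment = main share
```

Start it on the NAS with `rsync --daemon`, then pull from the backup box with something like `rsync -r --progress --delete 10.1.1.10::storage/ /mnt/Storage/`.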
Here's my current rsync command line:
rsync -r --progress --delete /cifs/Thecus/ /mnt/Storage/input
Is there a better way/tool to do this?
Edit: OK, to address the additional questions: The "main" NAS is a Thecus N7700. I have additional modules installed that give me SSH, and it has rsync, but it's not in the $PATH, and I haven't figured out how to edit the local $PATH in a way that persists between reboots. The "backup" NAS is a DIY affair, built around a 1.6 GHz VIA motherboard with an Adaptec hardware RAID card. It's running CentOS 5 with a full desktop environment. That's the hardware I'm running rsync from (gigabit is through an additional PCI card).
Further Edit: OK, got rsync over SSH working (thanks, lajuette!). I had to do a bit of tweaking on my command line; I'm running rsync with the args:
rsync -rum --inplace --progress --delete --rsync-path=/opt/bin/rsync sys@10.1.1.10:/raid/data/Storage /mnt/Storage
(Note: I'm specifically not using -a, because I want to change the ownership to the local account, so as not to freak out SELinux.)
It returns an error dealing just with the h264 codec, saying that I need to use a vpre parameter. I can't find any documentation on using the vpre parameter.
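For what it's worth, in ffmpeg builds of that era -vpre selects an x264 preset file (shipped as e.g. libx264-medium.ffpreset) and must come after the codec option. A hedged example with made-up file names:

```shell
# -vpre names a preset file; "medium" resolves to libx264-medium.ffpreset
# in ffmpeg's preset directory (or ~/.ffmpeg).
ffmpeg -i input.avi -vcodec libx264 -vpre medium -crf 22 output.mp4
```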
I am working on a project with a lot of vector math and I'd like to find a way to speed it up. I've been reading about SSE, but I've found no explanation of how to actually use it in code (I was looking for some kind of hello-world example, complete with compilation instructions). Does the gcc compiler automatically make use of SSE if you add the -msse/-msse2/-msse3 options on the command line? Or are there specific functions/libraries you need to call?
Is there, by chance, a fancy name to describe code that must be in a program but will never be executed? In one of my (Haskell) programs, I have some error-handling code that must be in the program to keep the compiler happy (due to the type checking). However, I know that, due to the logical structure of the program, it is impossible for the code to be evaluated. I am curious whether there is a technical name for code that must exist but cannot be executed.
I went to compile some oldish GLX code. The code compiles fine, but when I go to run it I get a crash, with "X Error of failed request: BadMatch (in .....". Running ddd causes my whole system to lock up when I call XOpenDisplay. After a few attempts I thought I'd download a demo from the net; I chose NeHe OpenGL tutorial 2, compiled it and ran it, but even with a net tutorial I get the same error.
So essentially, it finds .dx files, sorts them by the numbers at the beginning, then performs the dx function I made (looping over all of the #-protein.dx and #-water.dx files).
It works fine when I'm running it on Ubuntu 11.04. However, when I try to run it on OSX, I get the following error:
Code:
mh320m01:DA_R02 janickij$ ./MOD_Loop_Tuber_Script.sh
find: illegal option -- t
find: illegal option -- y
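"illegal option" from find on OS X usually means the script leans on GNU findutils extensions, or puts options before the path, which BSD find (what OS X ships) rejects. A portable way to find and numerically sort the numbered files; the file names below are invented to match the pattern in the question:

```shell
# Demo setup with made-up files matching the #-protein.dx pattern.
dir=$(mktemp -d)
touch "$dir/2-protein.dx" "$dir/10-protein.dx" "$dir/2-water.dx"

# BSD find wants the path first and has no -printf; strip the leading ./
# with sed, then sort numerically on the field before the first '-'.
sorted=$(cd "$dir" && find . -name '*-protein.dx' | sed 's|^\./||' | sort -t- -k1,1n)
echo "$sorted"
```

This prints 2-protein.dx before 10-protein.dx, whereas a plain lexical sort would reverse them.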
I work as a Linux sysadmin, and now and then I develop scripts that might be of use to others. I'd like to be able to share these, and for less trivial projects maybe create a central repository or something that others may upload updates/patches to, etc.
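A version-control host is the standard answer: a public repository, self-hosted or on a site like GitHub, gives others a place to clone from and send patches to. A sketch of the self-hosted variant using a bare Git repository (the paths are placeholders; a real setup would put the bare repo somewhere like /srv/git and serve it over SSH):

```shell
# A bare repository has no working tree; it exists only to be cloned and
# pushed to.
repo=$(mktemp -d)/scripts.git
git init --bare "$repo"

# Anyone with access can then clone it and push changes back:
work=$(mktemp -d)
git clone "$repo" "$work/scripts"
```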
I want to write a C program together with some shell scripts. For a simple C program: I am setting a variable called val2 in bash, and now I want to use that bash variable val2 in the C code. How do I do that? What I tried doesn't work (because, as far as my research goes, the script spawns a separate process, and when the shell script ends the variable dies with it; so how do I share the value between them?). Also, is there any good reference that teaches how to integrate C and bash together?
I've been converting some C code to assembly for my homework. It was going well, but I've been stuck on a for loop for hours; I could not figure out where the problem is and decided to ask. I'm posting the troublesome part of my C code and assembly code; every other part of the code behaves the same, and the variable values are the same. I expect these two pieces of code to behave the same, but they don't.
I'm trying to call some Fortran 95 code from C, but I'm having problems with integers not having the same value in C as in Fortran, and with the values changing on each run of the program. I think it has to do with the integer type, but I don't know how to fix it. I'm running Gentoo x86. Here are the files I've got:
I am unable to compile a C++ program in the terminal. Whenever I try to add "#include <iostream.h>" it shows an error, and that's why I cannot use the "cout" and "cin" functions. I installed g++ for this, but the problem persists.