Programming :: Database In Python - Compile .py Files Into A .exe File
Jun 20, 2011
I am new to Python programming and I have a couple of questions:
1 - What would I do if I needed to find the closest bigger number from a list of numbers when the user types a number in?
2 - (Windows) I need a program that can compile .py files into a .exe file so that it works on a machine without Python. It also needs to work with Python 3.2.
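For question 1, a minimal sketch of one way to do it (the function name closest_bigger and the sample list are my own illustration, not from the original post):

Code:
def closest_bigger(numbers, target):
    # Keep only the values strictly greater than the target,
    # then take the smallest of those; None means nothing qualifies.
    candidates = [n for n in numbers if n > target]
    return min(candidates) if candidates else None

print(closest_bigger([3, 7, 12, 25], 10))  # prints 12

For question 2, cx_Freeze is one commonly suggested tool with Python 3 support; py2exe did not support Python 3.2 at the time.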
I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a "database is locked" error. I have two scripts, one to load data into the database and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds:

cx = sqlite.connect("database.sql", timeout=30.0)

I think I can see some evidence of the timeouts, in that I occasionally get what appears to be a timing stamp (e.g. 0.12343827e10 1) dumped in the middle of my curses-formatted output screen, but the delay never gets remotely near the 30-second timeout before one or the other script crashes again. I'm running RHEL 5.4 on a 64-bit HS21 IBM blade, and have heard some mention of multi-threading issues, but I am not sure if that is relevant here.

Packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of Red Hat's official provisions is not a great option for me. It is possible, but not desirable, due to the environment in general. I previously had autocommit=1 on in both scripts, but have since disabled it in both, and am now calling cx.commit() in the inserting script and not committing in the select script. Ultimately, as only one script ever makes any modifications, I don't see why this locking should ever happen.
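For reference, a minimal sketch of the writer-side pattern described above, shown with the modern sqlite3 module for illustration (the table and column names are placeholders, not from the original post):

Code:
import sqlite3

# timeout=30.0 asks SQLite to retry for up to 30 s when the database is locked
cx = sqlite3.connect("database.sql", timeout=30.0)
cx.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, value REAL)")
cx.execute("INSERT INTO readings VALUES (?, ?)", (1234.5, 42.0))
cx.commit()  # commit promptly so the write lock is held as briefly as possible
cx.close()

Keeping write transactions short narrows the window in which readers can hit the lock.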
I recently moved into a new place, and when I hooked up my webserver I wasn't able to bring up my page, even from localhost. With some digging, it seems that I can't access the database that housed my posts (a WordPress installation). I looked for the datadir in MySQL, and that directory shows the WordPress directory that should be holding the database, and all the files are still there. My questions: 1) Why does the database no longer show up? 2) How do I restore the database from the files?
I have managed to create an HTML file from Python code. Can I convert this to a PNG file through a Python script?

EDIT: Details added. I have a Python script which generates map legends in the form of an HTML file. The legends generated have to be pasted onto a map which is in PNG format. A PNG file can be pasted onto another PNG file easily, but because the legends are generated in HTML format, I cannot paste them onto the map file!

EDIT: Details added. I did some Googling first, but it only turned up various standalone software packages for this purpose, which I don't want!
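One possible approach, sketched below, is to shell out to the wkhtmltoimage tool to render the HTML, then composite the result with PIL (wkhtmltoimage must be installed separately; all file names and the paste position are placeholders):

Code:
import subprocess
from PIL import Image

# Render the HTML legend to a PNG; wkhtmltoimage does the layout work.
subprocess.check_call(["wkhtmltoimage", "legend.html", "legend.png"])

# Composite the rendered legend onto the map.
map_img = Image.open("map.png")
legend = Image.open("legend.png")
map_img.paste(legend, (10, 10))
map_img.save("map_with_legend.png")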
I was hoping to get some pointers on how to rename files based on database entries. I have hundreds of thousands of files with GUIDs assigned as names; the only way to find out a file's real name is to look it up in a database table. Obviously this is not efficient. I couldn't find any tutorials on how to do this, so please point me in the right direction. A starting point would be very helpful.
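As a starting point, here is a minimal sketch under assumed names (a SQLite database with a table files mapping guid to original_name, and files living in a data directory; the real schema and database engine will differ):

Code:
import os
import sqlite3

cx = sqlite3.connect("files.db")
for guid, name in cx.execute("SELECT guid, original_name FROM files"):
    src = os.path.join("data", guid)
    if os.path.exists(src):
        # One pass over the table means one lookup per file,
        # instead of querying the database once per file name.
        os.rename(src, os.path.join("data", name))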
How can I compile, for example, a tutorial file from the Cantera open-source distribution? When I do:

g++ example.cpp -o example

I get a huge number of errors, such as the header files not being found. Please tell me how I can compile it. Of course I have configured the full open-source package, and I can see in /usr/local/cantera/bin that the files are there, but I still cannot compile and execute any .cpp file.
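The usual fix for "header not found" errors is to pass the installation's include and library paths to the compiler explicitly; the exact paths and library name below are assumptions based on the /usr/local/cantera prefix mentioned above:

Code:
g++ example.cpp -o example -I/usr/local/cantera/include -L/usr/local/cantera/lib -lcantera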
According to Wikipedia's PHP page, PHP "is a general-purpose scripting language". Does that include being suitable for duplicate-file detection? More specifically, the task is collating files from workstation backups into a single place, preserving directory paths and replacing duplicates with hard links. This will be a regular task on a lot of files, so performance is important; our current proof-of-concept solution uses a PostgreSQL database of file "fingerprints" to speed up duplicate detection. Does PHP have PostgreSQL integration?

I am asking these questions as a follow-up to an earlier thread asking for programming language recommendations for this task. Since then I have learned that PHP skills are available locally.
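To the last question: yes, PHP ships PostgreSQL bindings (the pgsql extension and PDO's pgsql driver). For comparison, here is a language-agnostic sketch of the fingerprint-and-hardlink idea, written in Python (the hash choice, the backups directory, and the in-memory index are my own illustration; the real system would keep fingerprints in PostgreSQL as described):

Code:
import hashlib
import os

def fingerprint(path, chunk=1 << 20):
    # SHA-256 of the file contents, read in 1 MiB chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

seen = {}  # fingerprint -> first path seen with that content
for root, dirs, files in os.walk("backups"):
    for name in files:
        path = os.path.join(root, name)
        fp = fingerprint(path)
        if fp in seen:
            # Replace the duplicate with a hard link to the first copy.
            os.remove(path)
            os.link(seen[fp], path)
        else:
            seen[fp] = path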
I have a program which manages my PDFs and references. I wish to put some of the information on my website, but that program (Mendeley) only exports XML (or BibTeX). I'd like to simply convert the XML output files to SQL in order to create or update an SQL database. I'm not an expert in either XML or SQL (I use only phpMyAdmin). Can someone help me figure this out?
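A minimal sketch of the XML-to-SQL step in Python, under assumed names (the element tags and the SQLite target are placeholders for illustration; Mendeley's real XML schema, and a MySQL database managed through phpMyAdmin, will differ):

Code:
import sqlite3
import xml.etree.ElementTree as ET

tree = ET.parse("library.xml")
cx = sqlite3.connect("references.db")
cx.execute("CREATE TABLE IF NOT EXISTS refs (title TEXT, year TEXT)")
for entry in tree.getroot().iter("reference"):
    # findtext returns the element's text, or None if the tag is absent.
    cx.execute("INSERT INTO refs VALUES (?, ?)",
               (entry.findtext("title"), entry.findtext("year")))
cx.commit()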
I've run into a little problem for which I can't seem to find an answer. The concept is the following: one machine runs a Python script which advertises itself as 'OBEX File Transfer' and receives incoming data, using the Lightblue Python module. The script itself is slightly different, but here's an example which effectively works in the same way:
[Code]....
print "Saved received file to MyFile.txt!" This works fine, though I would like to retain the original filename that is being sent to the machine, instead of overwriting a fixed file. Generating a new name wouldn't be such a problem, if I could get the MIME type or filename which is (presumably) being sent in the header of the request. Does anyone know of a way in python to receive incoming files and retain their filenames via Bluetooth?
Also, the CSV file is updated every few hours, and I need to load any new data from the file into the database without creating duplicates of data that has already been loaded.
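A minimal sketch of one way to make the load idempotent, under assumed names (a SQLite table keyed on a unique id column; the real schema will differ):

Code:
import csv
import sqlite3

cx = sqlite3.connect("data.db")
cx.execute("CREATE TABLE IF NOT EXISTS rows (id TEXT PRIMARY KEY, value TEXT)")
with open("data.csv", newline="") as f:
    for row in csv.DictReader(f):
        # INSERT OR IGNORE skips rows whose id is already present,
        # so re-running the load after each update creates no duplicates.
        cx.execute("INSERT OR IGNORE INTO rows VALUES (?, ?)",
                   (row["id"], row["value"]))
cx.commit()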
Is there any way to write a program on a Linux machine that reports when files have been copied to another directory or machine, and identifies the users who copied them? I am planning to write this program in C. Honestly, at first I wanted to write it in Python when I learned about pyinotify and how easy it makes monitoring files on a machine, but the problem is that I cannot get that Python script to identify the user who performed the action, except for the one who created the file.
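For the monitoring half, a minimal pyinotify sketch (the watched path is a placeholder). Note that inotify events carry no information about which user triggered them, which is the limitation described above; the kernel audit subsystem is the usual suggestion for attributing actions to users:

Code:
import pyinotify

class Handler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # Fires when a file that was open for writing is closed.
        print("file written:", event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch("/path/to/watch", pyinotify.IN_CLOSE_WRITE, rec=True)
pyinotify.Notifier(wm, Handler()).loop()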
Browsing some websites, I've found code for an online form where a user provides a name, number, etc. Everything is created in HTML/JavaScript. I'm just wondering whether it's possible to collect this input and present it in database form, so that I'd be able to see who has provided data and all the details they entered. Actually, it doesn't have to be a proper database (that would probably require PHP/MySQL); it could be a weekly/monthly report (a text file) of the people who provided details. The website is hosted by a third-party company. This is the HTML bit:
I'm working on a personal research project and would like to know if there are LilyPond parsers for Python available, or whether I'll have to create my own. Just in case you are wondering: I don't need to typeset the content of the LilyPond file, just understand what's written in it (what notes, what duration, when in time to play each one, etc.). [url]
I have a Python script that copies a couple of DLLs and an EXE to a directory before running the EXE. It can be a fresh copy, or the files can already be in the target directory and are then overwritten. The script uses shutil.copy() to copy the files, and that works, but processing continues while the files are copying, and the script tries to run the files mid-copy, causing an error.

I need a way to wait for the files to finish copying before the script continues. Putting the thread to sleep isn't good enough, calling os.system("copy ...") also doesn't work, and os.path.exists() won't work because the file already exists during the copy.
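One sketch of a workaround (the function name is my own): since os.path.exists() is true from the moment the copy starts, compare sizes instead, and only continue once the destination has reached the source's full size.

Code:
import os
import shutil
import time

def copy_and_wait(src, dst, poll=0.1):
    shutil.copy(src, dst)
    expected = os.path.getsize(src)
    # Poll until the destination reaches the full source size.
    while os.path.getsize(dst) < expected:
        time.sleep(poll)

Note that shutil.copy() itself blocks until its writes return, so if the files are still incomplete afterwards, something external (e.g. a virus scanner) may be holding them open.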
I was trying to compile my C++ program and came across this problem, though my C programs compile without any problem. I am using Fedora 14. Below is how the terminal reacted to my program:

gcc prog1.cpp
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
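That error usually means the C++ front end is missing: plain gcc installs only the C compiler, while cc1plus ships with the C++ package. On Fedora the usual fix is to install it and compile with g++ rather than gcc:

Code:
su -c 'yum install gcc-c++'
g++ prog1.cpp -o prog1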
I have just modified the tcp.c file in /usr/src/linux/net/ipv4. Now, should I compile the complete kernel? If not, how do I compile just that net/ipv4 part?
I am developing a program on a system where Linux does not take care of the sync command automatically, so I have to run it from my application whenever I save some data to the disk, which in my case is a 2 GB SD card. It is true that I could make the operating system take care of the synchronisation, using a proper mount option, but in that case the program's performance drops drastically. In particular, I use the shelve module from Python to save data that comes from a socket/TCP connection, and I have to deal with the potential risk of the system being turned off suddenly. Initially I wrote something like the following to save data using shelve:

But that takes too much time to save the data. Note that I use the sync from the OS every time I close a file, to prevent data corruption in case the "computer" is turned off with data still in the buffer. To improve the performance I did something like this:
Code:
import os
import shelve

def saveListData(items):  # 'items' avoids shadowing the built-in list type
    # Open the shelf once, write the whole batch, then close and sync.
    fd = shelve.open('file_name', 'c')
    for itemVo in items:
        fd[itemVo.key] = itemVo
    fd.close()
    os.system("sync")
Thus, first I save a number of objects into a list, then I open the file and save the objects all at once. This way I only have to open the file once to save a lot of objects. However, I would like to know whether adding a lot of objects before closing the file increases the risk of data corruption. I know that turning off the system after fd.close() and before os.system("sync") may cause problems. But what about turning off the system after
I have C++ source code (*.cpp) files that expect their header files in the system's include folder, which is /usr/include. The .cpp files have include lines like this:
I have made two source files, named sum.c and average.c. I have included sum.c in average.c, and both files are in my Documents directory. When I compile average.c, I get the following error:

average.c:4:22: fatal error: sum.c: No such file or directory
compilation terminated.

How do I solve this issue? I have tried to copy sum.c to the /usr/include folder but was unable to copy it.
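Assuming the include line in average.c uses angle brackets, the usual fix is to use quotes instead, since quoted includes search the including file's own directory before the system paths (the line below is an assumption about what average.c contains):

Code:
#include "sum.c"   /* quotes search the local directory; <...> searches /usr/include etc. */

Including a .c file directly does work, but the more conventional layout is a sum.h header plus compiling the two .c files together (gcc average.c sum.c -o average), which avoids touching /usr/include entirely.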