Server :: Set Up Apache2 To Drop A Core File When It Crashes?
Jun 1, 2010
I'm trying to set up apache2 to drop a core file when it crashes. I know that you need to set the CoreDumpDirectory directive in /etc/apache2/apache2.conf and run "ulimit -c unlimited" from the command line (and restart apache after the ulimit command). But on a reboot, even though the output of "ulimit -a" shows unlimited, apache2 will not create a core dump file unless you run ulimit -c unlimited again and restart apache2. There must be a way to configure apache2.conf or something so that ulimit -c unlimited is set before apache2 starts, no?
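One approach that should survive reboots, assuming a Debian-style layout where apache2ctl sources /etc/apache2/envvars before starting the server: put the ulimit call in that file so it runs in the shell that launches apache2. A sketch, not a tested recipe for every release:

Code:
# /etc/apache2/envvars is sourced by apache2ctl, so a limit set here
# applies to the apache2 processes it starts
echo 'ulimit -c unlimited' >> /etc/apache2/envvars
/etc/init.d/apache2 restart
# also make sure the CoreDumpDirectory is writable by the server user (www-data)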
I want to generate core dump files from my program when it crashes. It's a pretty big process and has about 10-11 threads in it. I have followed the documentation to enable core dumps by setting ulimit to unlimited, etc. I quickly tried "A demo program creating a core dump" from the following webpage, which succeeds in segfaulting and dumping a core file in the directory that I configured. However, when I ran my original program and caused it to crash (by calling kill() or raise(), or by the same null pointer access as shown in the webpage above), the program crashed but did not generate a core dump file in any of those cases. Am I missing something? My program is in C++ and my environment is Red Hat 9.0 (kernel 2.4.20).
Going through the "Why do I NOT get a core dump?" section on the same webpage as above, I can see two potential problems. One: there are issues with suid/sgid (bullet #6). I am not able to change any settings for suid because my system contains neither /proc/sys/fs/suid_dumpable nor /proc/sys/kernel/suid_dumpable. Two: my program has threads in it, and bullet #8 is the problem.
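A few quick checks that often explain a missing core on older kernels; a sketch, assuming the binary is called ./myprog (hypothetical name):

Code:
ulimit -c                            # must be nonzero in the shell that launches the program
cat /proc/sys/kernel/core_uses_pid   # 1 appends the PID to the core file name
ls -l ./myprog                       # a setuid/setgid bit here suppresses core dumps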
I am using RHEL 4.7 (32-bit) on an HP ProLiant 380G6 series server. We are running Electric Cloud Agents on these servers. Lately we have been facing some memory issues that cause a kernel panic and then restart the server. When I reported the issue to my application team, they asked me to come back with the core dump. I googled it enough, then I set the ulimit value to unlimited (previously it was 0; I made an entry in the /etc/profile file as follows: ulimit -c unlimited). But still, whenever my server restarts due to that kernel panic, it doesn't generate the core dump. My application is installed in /opt.
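Worth separating two things here: ulimit -c only governs core files written when a process crashes; a kernel panic bypasses that entirely and needs a crash-dump facility, which on RHEL 4 was netdump or diskdump rather than kdump. A sketch of what to check, assuming stock RHEL 4 packages:

Code:
# see whether a crash-dump service is installed and enabled
chkconfig --list 2>/dev/null | grep -E 'netdump|diskdump'
# and persist the per-process limit the supported way, via /etc/security/limits.conf:
#   *  soft  core  unlimited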
I have a command line OCR program called OCR Shop XTR (Vividata corp) that I am using on a system with a 6-core AMD chip. I changed the BIOS so that all 6 cores were activated, but htop shows me that while the program is running, I am only getting activity on one core (the program maxes out that one core, with usage consistently between 97% and 100%).
I have read that many programs are not written to take advantage of multi-core CPUs. However, I am hoping that there is some way to get this program to use the extra cores. Does anyone know of a way to invoke programs from the command line that would spread the workload out among additional cores?
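If the OCR binary itself is single-threaded, the usual workaround is to run several instances side by side, one per input file. A sketch with xargs, assuming the command is called ocrshop and takes one image per invocation (both assumptions; substitute the real binary name and arguments):

Code:
# -P 6 keeps six processes running at once, roughly one per core
ls pages/*.tif | xargs -n 1 -P 6 ocrshop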
Here is the output of uname -a:

Code:
Linux linux 22.214.171.124-1.2-desktop #1 SMP PREEMPT 2011-02-21 10:34:10 +0100 i686 athlon i386 GNU/Linux

And here is the output for one of the cores from cat /proc/cpuinfo:

Code:
processor : 5
vendor_id : AuthenticAMD
cpu family : 16
model : 10
model name : AMD Phenom(tm) II X6 1100T Processor
stepping : 0
I just set up a Squeeze server for a friend of mine using my exact apache2 settings. He has been playing around, having a bit of fun learning HTML, but he ran into a problem where one of his images would not display. At first I figured his HTML was incorrect, but after a very long time of trying to figure out the problem, I still do not know what is going on, so I think it must be an apache2 problem? [URL]..
I am trying to access log files located in /var/log/apache2. I could get into the directory using `su`. I was able to run the ls command in the directory and everything was fine. I could run this command:
ls -d /var/log/apache2/*
However, after I switched back to my account, I got an error:

Code:
sudo ls -d /var/log/apache2/*
ls: cannot access /var/log/apache2/*: No such file or directory
I want to use this command in a bash script to get a list of log files. Should I write the script as root and run it as root?
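The sudo failure is expected, by the way: the * is expanded by the calling shell, which has no permission to read /var/log/apache2, so the glob is passed through literally and ls reports it as missing. Letting a root shell do the expansion works; a sketch:

Code:
# the glob expands inside the root shell, not in your own
sudo sh -c 'ls -d /var/log/apache2/*'
# for the script, the same logic applies: run the whole script with sudo
# rather than prefixing individual commands inside it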
I have Debian 5.0.7 installed on my server. I am trying to install Apache and SVN on this server, using this tutorial: http://www.howtoforge.com/subversion...-ubuntu-server But unfortunately it is not working.
My apache virtual host configuration file is:
The passwd file contains 1 user:
The rights for the passwd file:
And apache2 is running like this:
And when I try to log in to my page, I get an "Internal Server Error" page.
And the error in the apache log is this:
So I'm a little bit confused about it. Apache2 should have rights to open this file. I checked: the file exists and apache2 has rights to it. I don't understand it.
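Without the config excerpts it's hard to say more, but here is a sketch of the usual checks when Apache reports an Internal Server Error around a passwd file (the path below is the tutorial's default and may differ on this setup):

Code:
ls -l /etc/apache2/dav_svn.passwd          # must be readable by the apache2 user
chown root:www-data /etc/apache2/dav_svn.passwd
chmod 640 /etc/apache2/dav_svn.passwd
ls -ld /etc /etc/apache2                   # every directory on the path needs the x bit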
I have SUSE 10 64-bit installed and am setting up an svn server on it. After installing and adding the modules, reloading apache2 throws this error:

Code:
httpd2-prefork: Syntax error on line 113 of /etc/apache2/httpd.conf: Syntax error on line 31 of /etc/apache2/sysconfig.d/loadmodule.conf: Cannot load /usr/lib64/apache2/mod_dav_svn.so into server: /usr/lib64/libsvn_subr-1.so.0: undefined symbol: apr_memcache_add_server
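That undefined symbol lives in apr-util: subversion's libraries were apparently built against an apr-util compiled with memcache support, while the one Apache loads lacks it. A sketch to confirm (library paths assumed):

Code:
ldd /usr/lib64/libsvn_subr-1.so.0 | grep apr
nm -D /usr/lib64/libaprutil-1.so.0 | grep apr_memcache_add_server
# no output from nm means the installed apr-util predates the symbol;
# matching the subversion and apr-util package versions should resolve it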
Code:
Starting web server: apache2
[Wed Dec 09 15:36:40 2009] [warn] NameVirtualHost XX.XX.XX.XXX:80 has no VirtualHosts
(99)Cannot assign requested address: make_sock: could not bind to address 126.96.36.199:80
no listening sockets available, shutting down
Unable to open logs
failed!
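(99)Cannot assign requested address almost always means the address in a Listen or NameVirtualHost directive is not configured on any local interface. A sketch of the comparison to make:

Code:
ip addr show | grep 'inet '                       # addresses the machine actually has
grep -Rn 'Listen\|NameVirtualHost' /etc/apache2/  # addresses Apache is told to bind
# every IP on the second list must appear in the first (or be *)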
We have an application that allows the user to perform drag-and-drop operations. The problem we are facing now is an application crash. The system works fine for a while and lets the user perform many drag-and-drops successfully, but after running for some time, a drag-and-drop makes the system crash. Below is the stack trace we get when the system crashes.
I have a server running both apache2 (default port) and squid (port 3128). I set up a squid ACL so my LAN (192.168.1.0) gets filtered. All works fine except for external web requests: when I try to access my web server from the outside using my public IP, I get a SQUID DENIED page. I guess that is because the squid ACLs end with something like "http_access deny all" at the end of the file. How can I allow external requests to reach my web server?
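If the port-80 traffic from outside is actually landing on squid (e.g. via a router port-forward), the fix is usually ACL ordering in squid.conf: allow traffic destined for the web server before the final deny. A sketch (IP hypothetical):

Code:
# in squid.conf, order matters: the first matching rule wins
acl mysite dst 192.168.1.10        # the web server's address
http_access allow mysite
http_access deny all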
I am having a problem with my Core i5 system running CentOS 5.5. The system will lock up with a black screen on occasion when launching X11. This can happen at any time: from the install CD the first time it's being installed, when trying a live CD, the 3rd-10th time I launch startx from tty1, or sometimes just when switching from tty1 to display0 or display1. When the crash occurs, sometimes the system's fans all spin up loud and the reset button becomes non-responsive; other times the screen is black and it just sits there and I have to hard power off. There's nothing in /var/log/messages.
I tried a fix from an old post for 5.4, adding the option "DDC" "false" in the xorg.conf file, and this did not help. I also ran all the system updates; no help.
The system is totally stable so long as I don't switch between ttys and X11 or launch new X11s (it's also stable under other OSs and has passed Memtest86+ 4).
Is this an instability with the Core i5 driver? The "Video Card" entry in the display options is set to "intel - Experimental modesetting driver for Intel Integrated Graphics Chipsets" - the "experimental" part is a little scary. Should I try setting the driver to "i810" as the 5.5 release notes recommend (again, there's no onboard "chipset"; this is an embedded graphics processor on the i5 CPU)?
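For reference, switching drivers is a one-line change in the Device section of /etc/X11/xorg.conf; a sketch, with the Identifier value assumed (use whatever the existing config calls the card):

Code:
Section "Device"
    Identifier "Videocard0"
    Driver     "i810"        # the driver the 5.5 release notes suggest trying
EndSection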
Here's the getinfo.sh dump:

Code:
--glen201
== BEGIN uname -rmi ==
2.6.18-194.11.1.el5 x86_64 x86_64
== END uname -rmi ==
After I updated several packages, including the Xorg server, using "slackpkg update", I ignorantly deleted configuration files without backing them up, making the Xorg server crash. I tried to build xorg.conf using the xorgsetup command, but it crashes and spews:
Code:
Fatal server error:
Caught signal 11. Server aborting

/usr/bin/xorgsetup: line 170:  3315 Aborted    /usr/X11R6/bin/X -configure
I have a Radeon 4850 graphics card with ATI Catalyst as its driver, so I tried to reinstall the driver hoping that it would fix the problem (which is bad logic, I admit), and it fails both to uninstall (for some reason the uninstall script is gone) and to reinstall (a problem with file extraction).
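Before reinstalling anything, it may be worth seeing what actually kills the server; the fatal module usually shows up in the log. A sketch:

Code:
grep '(EE)' /var/log/Xorg.0.log   # error lines from the last crash
# if a leftover fglrx module is named there, moving it aside and re-running
# xorgsetup (or X -configure) with the stock radeon driver is worth a try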
I'm completely new to this whole hosting-on-Linux thing. I'm using apache2 and have everything set up as if I were doing it on a Windows machine, but when I navigate to the site via URL it displays the source code as plain text, and that's all I see. I'm running openSUSE 10.3.
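PHP served as plain text usually means the PHP module isn't loaded. On openSUSE the module list lives in /etc/sysconfig/apache2; a sketch, assuming the apache2-mod_php5 package is installed:

Code:
a2enmod php5                                # adds php5 to APACHE_MODULES
grep APACHE_MODULES /etc/sysconfig/apache2  # confirm php5 is in the list
rcapache2 restart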
I followed the tutorial found here [URL] but when I try to access [URL] I get the following:

Code:
Secure Connection Failed
An error occurred during a connection to www.mydomain.com.
SSL received a record that exceeded the maximum permissible length.
(Error code: ssl_error_rx_record_too_long)

Not sure what I might have done wrong... I have retraced all of my steps and I don't believe I missed anything.
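ssl_error_rx_record_too_long usually means the browser reached a port that is answering in plain HTTP rather than SSL. A sketch of the two things to confirm in the Apache config (Debian-style paths assumed, matching the tutorial):

Code:
grep -R 'Listen' /etc/apache2/ports.conf        # expect a Listen 443 for SSL
grep -R 'SSLEngine' /etc/apache2/sites-enabled/ # the :443 vhost needs SSLEngine on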
I'm setting up an HTPC system (Zotac IONITX-F based) on a minimal install of Ubuntu 9.10, with no GUI other than xbmc. It's connected to my router (D-Link DIR-615) over a wifi connection configured for a static IP (ath9k driver), with the following /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

# The primary network interface
#auto eth0
The network is fine and the samba share to the media directory works, until I try to upload a large file to it from my desktop system. The transfer starts at a really nice speed for a couple of percent, but then it stalls and the box becomes unpingable (Destination Host Unreachable), even after canceling the transfer, requiring a restart of the network.
The same thing happens when I scp the file from my desktop system to the htpc, and when I ssh into the htpc and scp the file from there. Occasionally (rarely) the file does pass through, but most of the time the problem repeats itself. Transfers of small text files cause no problems, and the same goes for the fanart downloads done by xbmc. I tried the solution proposed in this thread and set the mtu to 800 in the interfaces file, but the problem persists.
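One thing worth trying on ath9k before more MTU experiments: wifi power management, which has been known to stall sustained transfers. A sketch, assuming the interface is wlan0:

Code:
iwconfig wlan0 power off        # disable 802.11 power saving for this session
# to make it persistent, add under the wlan0 stanza in /etc/network/interfaces:
#   wireless-power off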
I want to set up a PDC on my computer using Samba, without LDAP, etc. The only thing I need is to share folders between the two ridiculous computers here: an 11.3 laptop and this 11.4 desktop. This is the /var/log/samba/log.smbd extract:
Code:
[2011/06/11 08:29:35, 0] lib/fault.c:250(dump_core_setup)
  Unable to setup corepath for smbd: Permission denied
[2011/06/11 08:29:35, 0] smbd/server.c:1134(main)
  smbd version 3.5.7-1.17.1-2505-SUSE-SL11.4-x86_64 started.
  Copyright Andrew Tridgell and the Samba Team 1992-2010
[2011/06/11 08:29:35.951937, 0] passdb/secrets.c:73(secrets_init)
  Failed to open /etc/samba/secrets.tdb
[2011/06/11 08:29:35.954910, 0] passdb/secrets.c:73(secrets_init)
  Failed to open /etc/samba/secrets.tdb
[2011/06/11 08:29:35.955027, 0] smbd/server.c:1234(main)
  ERROR: smbd can not open secrets.tdb
This is the /var/log/samba/log.nmbd extract:

Code:
[2011/06/11 08:27:48.682275, 0] nmbd/nmbd_become_lmb.c:395(become_local_master_stage2)
  Samba name server ANTARES is now a local master browser for workgroup XXXXXXXX.WORLD on subnet
[2011/06/11 08:28:08.700572, 0] nmbd/nmbd_serverlistdb.c:343(write_browse_list)
  write_browse_list: Can't open file /var/lib/samba/browse.dat.. Error was Permission denied

I have modified the User Authentication Source in YaST to smbpasswd and specified the correct path to the file...
This is the /etc/samba/smb.conf extract:

Code:
passdb backend = smbpasswd:/XXXXXXXX/smbpasswdfile

I erased all the samba-related configuration files, uninstalled the samba client/server and samba-yast client/server packages, then reinstalled and reconfigured, and I still have the same issue. It worked very well with 11.1... (I clean-installed 11.4 yesterday). I'm thinking of taking the samba sources, compiling them, and seeing if that works...
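Since both daemons fail with permission errors even though they start as root, the directory permissions themselves are the first suspect. A sketch of the checks, plus the openSUSE way to reset packaged permissions:

Code:
ls -ld /etc/samba /var/lib/samba   # smbd/nmbd must be able to write here
ls -l /etc/samba/secrets.tdb 2>/dev/null
chkstat --system                   # resets permissions per /etc/permissions*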
I wanted to set up a local Apache2 server for some programming and testing. I installed it and got Apache2 working with PHP and MySQL; all works fine. Then I wanted to add an additional directory somewhere in my /home, and that's where things went wrong. I went to edit /etc/apache2/sites-available/default. This is it:
When I open the page I get:

Code:
You don't have permission to access /po/ on this server.

So I go to the logfile, which says this:

Code:
[Sat May 08 16:43:51 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /po/ denied
I tried a lot of stuff using chmod and chown, all to no avail. I tried changing the ownership of /home/name/web to root and to www-data, and I changed file permissions to allow executing the files.
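A (13)Permission denied on a directory under /home is usually the search (x) bit on a parent directory rather than the files themselves, plus a matching Directory block. A sketch, using the /home/name/web path from above and Apache 2.2 syntax:

Code:
chmod o+x /home/name /home/name/web   # every path component needs +x for www-data
# and in the vhost:
#   <Directory /home/name/web>
#       Options Indexes FollowSymLinks
#       Order allow,deny
#       Allow from all
#   </Directory>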
I set up an Ubuntu Server 10.04 LTS machine as a file server in a network filled with XP, Windows 2003 and Windows 2008 machines.
I noticed that if I copy a file from any of the Windows machines to the samba server (for example a 1 GB vob file, or a lot of doc and pdf files), and while the copy is in progress I browse other folders shared by Samba in another window, the copy process stops with an error like "The network name is no longer available". It seems that once a user starts browsing the share, Samba forgets that a copy was in progress on one of its shares, which is very annoying. A file server cannot behave like that; this way it is useless.
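"The network name is no longer available" on the client usually means the smbd process serving that session died or dropped the connection. Watching the server side while reproducing the problem narrows it down; a sketch:

Code:
tail -f /var/log/samba/log.smbd   # watch for crashes/errors during the copy
smbstatus                         # in another shell: current sessions and locked files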
Gentlemen/ladies: I checked the existing information on this site and found it pretty scattered and confusing to me. I am a Linux newbie, so please be patient. I use Mandriva Linux 10.0 and want to set up a simple file server. I also want to connect a Windows XP computer to access files on the Linux server. I have a spare router I can use. My ultimate goal is to learn MySQL and PHP programming; I am pursuing a Web Development curriculum at a local university but am just starting out.
Am I making sense, and can I do it with the equipment I have? Can you point me to some resources, documents, etc. I can use to accomplish this?
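For the XP-to-Linux file sharing part, Samba is the standard answer, and the minimal setup is small. A sketch of a single share in /etc/samba/smb.conf (path and user hypothetical; the service name may differ by distro):

Code:
[shared]
    path = /srv/shared
    read only = no
    valid users = yourlogin
# then give the user a Samba password and restart the service:
#   smbpasswd -a yourlogin
#   service smb restart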
I am using Python as a CGI for a simple game that I'm planning to run on a website. It requires the user to enter his name and age, which are saved in a file newly created in his/her name. However, I'm getting this error:

Code:
The above is a description of an error in a Python program, formatted
for a Web browser because the 'cgitb' module was enabled. In case you
are not reading this in a Web browser, here is the original traceback:

Traceback (most recent call last):
  File "/var/www/webprog.cgi", line 51, in <module>
    main()
  File "/var/www/webprog.cgi", line 44, in main
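The traceback is cut off before the actual exception, but with CGI the classic culprit is that the script runs as the web server user and cannot create the new file. Two quick checks; a sketch assuming a Debian-style www-data user:

Code:
sudo -u www-data python /var/www/webprog.cgi   # run it as Apache does; the full traceback prints
ls -ld /var/www                                # the directory being written to needs
                                               # write permission for that user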
I would like to back up my entire Ubuntu installation (/boot, swap and /) from /dev/sda (4 GB) to /dev/sdb (10 GB). I recreated the exact partitions on /dev/sdb, formatted them, and cp -rp'd all the files over. For /boot, I used
dd if=/dev/sda1 of=/dev/sdb1
It didn't boot successfully; the system kept rebooting. I then tried to install grub onto the MBR and the boot partition (mounted at /mnt/tempboot).
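If the install uses GRUB legacy, pointing it at the second disk from the grub shell looks roughly like the sketch below (device mapping assumed; with GRUB 2 the tool would be grub-install instead). Also worth checking: a freshly formatted / gets a new UUID, so menu.lst and fstab entries copied from the old disk may point at the wrong filesystem.

Code:
blkid /dev/sdb1 /dev/sdb2 /dev/sdb3   # compare against the UUIDs in fstab/menu.lst
grub --batch <<EOF
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF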
I'm building a file server using Ubuntu. I want to set up an HTML-based file upload/download system where I can create accounts for a few users and allow each user access to certain folders, so that a user opens it like a webpage in a web browser and uploads and downloads files, with no special setup needed on the client computer. Is there a ready-made solution for that purpose, or will I have to code it?
I am going to set up a file server on Ubuntu. I have searched a while, but can't seem to find a guide to what I want. The requirements are the following:
1. File server: possible to upload, change and download files.
2. Linux (Ubuntu) clients; Windows clients if possible.
3. Access restriction to deny access to anyone other than registered users.
4. Only the user should be able to read the contents of his files. Ideally root should not be able to see the individual files, but in the worst case it is OK for root to see the files. Root should not be able to open the files.
Points 1-3 are easy to find out how to set up, but I can't seem to find a way to deny root the ability to view the files. The only solution I can think of is to encrypt files or a whole folder, but I don't know how to set that up.
The setup is for a home network, but the server used as a file server will have a web server as well. If someone manages to get access to the server I don't want them to be able to read the files.
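For the root-cannot-read requirement, per-user encryption is indeed the standard answer: root can always bypass file permissions, so only encryption protects the data at rest (though while a directory is mounted and decrypted, root on the same box can still read it). A sketch with encfs, the package name on Ubuntu:

Code:
sudo apt-get install encfs
encfs ~/.private ~/private   # creates the encrypted store and mounts the clear view
fusermount -u ~/private      # unmount; only ciphertext remains in ~/.private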
I will be relocating to a permanent residence sometime in the next year or two. I've recently begun thinking about the best way to implement a home-based network. It occurred to me that the most elegant solution might be the use of VM technology to eliminate as much hardware and wiring as possible. My thinking is this: install a multi-core system and configure it to run several VMs, one each for a firewall, a caching proxy server, a mail server, and a web server. Additionally, I would like to run 2-4 VMs as remote (RDP) workstations, using diskless workstations to boot the VMs over powerline ethernet. The latest powerline technology (available later this year) will allow multiple devices on a residential circuit to operate at near-gigabit speed, just like legacy wired networks.
In theory, the above would allow me to consolidate everything but the diskless workstations on a single server and eliminate all wired (and wireless) connections except the broadband connection to the Internet and the cabling to the nearest power outlets. It appears technically possible, but I'm not sure about the various virtual connections among VMs. In theory, each VM should be able to communicate with the others as if they were on the same network via the server's data bus, but what about setting up firewall zones? Any internal I/O bandwidth bottlenecks? Any other potential "gotchas", caveats, or issues (other than the obvious requirement of having enough CPU and RAM)? Any thoughts or observations are welcome, especially from real-world experience in a VM environment. BTW, in case you're wondering why I'm posting here: it's because I run Debian on all my workstations/servers (running VirtualBox as a VM for Windows XP on one workstation).
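On the VM-to-VM wiring question: since Debian with VirtualBox is already in the mix, note that VirtualBox can place VMs on host-internal networks that never touch a physical NIC, which is one way to carve out firewall zones. A sketch (VM and network names hypothetical):

Code:
VBoxManage modifyvm "firewall"  --nic1 bridged --bridgeadapter1 eth0
VBoxManage modifyvm "firewall"  --nic2 intnet  --intnet2 dmz
VBoxManage modifyvm "webserver" --nic1 intnet  --intnet1 dmz
# only the firewall VM sees the outside; "dmz" exists purely in software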
I am trying to set up a file server for a small office but have hit a couple of hurdles. Is there a step-by-step guide to setting up a network share for Windows and Mac computers to use? I had set up a share, but once I restarted the server, all the files I had in the home folder disappeared. Also, when I set up users, how can I use passwords that I select? Every time I set one, it encrypts it and uses that instead of mine.
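On the password point: Samba keeps its own password database, separate from the Linux account password, and what looks like "encrypting it and using that instead" is just the stored hash. You pick the password yourself with smbpasswd; a sketch:

Code:
sudo smbpasswd -a username   # prompts for the password you want to use
sudo smbpasswd username      # change it later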
I set up a file server using Ubuntu Server edition following the guide by Xam @ [URL]. With only a few minor tweaks, the guide worked perfectly. The difference is that I use a USB terabyte drive for data storage, but that's not an issue. It works FANTASTIC right now. I can manage files from any computer in the house (Vista, XP, Linux Mint, Ubuntu, etc.).
The USB drive is mounted via the fstab entry /dev/sdb1 /media/store ntfs 0 0. It's sharing the root directory of the disk, and users have access to the entire thing. Now that it's all working, I want to know if anyone knows a way to share not the entire disk but only selected folders off the root, for example /media/store/Music or /media/store/Videos.
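Sharing only selected folders is just a matter of defining one share per directory in /etc/samba/smb.conf instead of sharing the mount root; a sketch:

Code:
[music]
    path = /media/store/Music
    read only = no
[videos]
    path = /media/store/Videos
    read only = no
# remove the old root share, then restart samba (e.g. sudo /etc/init.d/samba restart)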