Recently I tried to get BGP table dumps from public route servers. I telnetted into one of those public route servers and ran the "show ip bgp" command. My question is: how do I save the command output to my local machine? I cannot run "show ip bgp > tmp.txt" on the remote route server.
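Since the redirection has to happen on your side of the telnet session, one hedged option is to capture the whole session locally. This sketch uses route-views.routeviews.org as an example public route server (some servers need a login step first); "terminal length 0" disables the router's pager so the table scrolls out in one go, and the sleep values are guesses you may need to enlarge:

Code:
# Option 1: record the whole terminal session with script(1)
script bgp-table.txt
telnet route-views.routeviews.org
# at the router prompt run: terminal length 0
# then: show ip bgp
# then exit the router, and type "exit" once more to stop script(1)

# Option 2: drive telnet non-interactively and capture stdout
{ sleep 2; echo "terminal length 0"; echo "show ip bgp"; sleep 120; } | \
    telnet route-views.routeviews.org > bgp-table.txt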
I have been using the dump command to back up my application files. A full backup at level 0 works fine, but when I try level 1 or 2 for an incremental backup I get the following error:
DUMP: Only level 0 dumps are allowed on a subdirectory
DUMP: The ENTIRE dump is aborted.
The code I used:
Code:
#!/bin/bash
# Full Day Backup Script
# application folders backup
# test is the username
now=$(date +"%d-%m-%Y")
[Code]...
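The error itself points at the cause: dump can only take level 0 dumps of a subdirectory, and incremental levels (1 and up) must target a whole filesystem. A hedged illustration, assuming the application folder lives under a /home filesystem (paths are placeholders):

Code:
# Level 0 on a subdirectory is allowed:
dump -0uf /backup/app-full.dump /home/test/app
# Level 1+ must point at the filesystem's mount point (or device), not a subdirectory:
dump -1uf /backup/app-incr.dump /home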
I want to generate core dump files from my program when it crashes. It's a fairly big process with about 10-11 threads. I have followed the documentation to enable core dumps, setting ulimit to unlimited and so on. I quickly tried "A demo program creating a core dump" from the following webpage, which succeeds in segfaulting and dumping a core file in the directory that I configured. However, I then ran my original program and caused it to crash, by making calls to kill(), raise(), or the same null-pointer access shown in the webpage above. In each case my program crashed but did not generate a core dump file. Am I missing something? My program is in C++ and my environment is Red Hat 9.0 (kernel 2.4.20).
Going through the "Why do I NOT get a core dump?" section on the same webpage, I can see two potential problems. One: there are issues with suid/sgid (bullet #6). I am not able to change any suid settings because my system contains neither /proc/sys/fs/suid_dumpable nor /proc/sys/kernel/suid_dumpable. Two: my program has threads in it, and bullet #8 is the problem.
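Note that on 2.4-era LinuxThreads, core dumps from multithreaded programs were notoriously unreliable (which is what bullet #8 is about), so a hedged sanity check can only rule out the setup issues: confirm the limit in the exact shell that launches the program (ulimit settings don't propagate between shells) and make sure the crashing process's working directory is writable. The binary name below is a placeholder:

Code:
ulimit -c unlimited      # must be run in the same shell that starts the program
ulimit -c                # verify it prints "unlimited"
cd /some/writable/dir    # the core is written to the process's current directory
./your_program           # placeholder for your actual binary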
In my program, I fork() to get a child process. Because of some problem, the child process terminates with a segmentation fault while the parent process is still running. I have compiled my code with the -g option and have done "ulimit -c unlimited", but I am not getting a core dump of the child process. How can I get a core dump of the child?
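Children inherit the core-size limit from the parent, so a hedged checklist is to set the limit in the shell that starts the parent and then look in the child's working directory, which is not necessarily where you launched from. The program name below is a placeholder:

Code:
ulimit -c unlimited                 # in the shell that launches the PARENT
./parent_prog &                     # children inherit the limit from here
grep -i core /proc/$!/limits        # needs a 2.6.24+ kernel; confirms the inherited limit
# The child's core lands in the CHILD's current working directory --
# check there, and make sure that directory is writable by the child.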
I have a Red Hat Enterprise (AS) 4.8 system and I need to know how to totally rebuild the system from dump tape. I have been making full level 0 dumps of the system to the attached DAT72 tape drive. In case the boot disk goes south, I need to reload from tape onto a new disk drive. I know how to do this in Solaris, and I assume the idea is the same: boot from CD into something like a mini-root, configure and mount the drive on temporary mount points, restore the system data, then load the "boot blocks" (like installboot on Solaris).
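The Linux procedure is indeed close to the Solaris one. A hedged sketch, assuming the tape is on /dev/nst0 (non-rewinding device), the new boot disk is /dev/sda, the original layout had /boot on sda1 and / on sda3, and restore is available in the rescue environment; adjust everything to your actual layout:

Code:
# 1. Boot the RHEL 4 install CD with: linux rescue   (skip mounting any existing install)
# 2. Recreate the partitions and filesystems on the new disk
fdisk /dev/sda
mkfs.ext3 /dev/sda1         # /boot
mkfs.ext3 /dev/sda3         # /
# 3. Mount and restore each dump; restore -r unpacks into the current directory
mount /dev/sda3 /mnt/sysimage
cd /mnt/sysimage && restore -rf /dev/nst0
mt -f /dev/nst0 fsf 1       # advance past the filemark to the next dump on the tape
mkdir -p boot && mount /dev/sda1 boot
cd boot && restore -rf /dev/nst0
# 4. Reinstall the boot loader (the installboot analog)
grub-install --root-directory=/mnt/sysimage /dev/sda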
I would like to have dump back up just my home directory, but I am having problems: the command I am using wants to back up everything and takes hours upon hours. It has been running for about 10 hours and only 21% is done. This is the command:
dump -0u -f dp_hd /media/CENTON USB/ /
How can I get this to back up only my home directory?
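Two things stand out on a hedged reading of that command: the unquoted space in /media/CENTON USB splits the arguments, and the final / tells dump to back up the entire root filesystem. Quoting the destination path and naming /home as the dump target should fix both:

Code:
# Dump only /home, writing the dump file onto the USB drive (note the quotes)
dump -0u -f "/media/CENTON USB/dp_hd" /home
# If /home is a subdirectory of / rather than its own filesystem,
# remember that only level 0 dumps are allowed on subdirectories.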
I need to prepare a presentation, and for that I have to copy a table from [URL] into my PowerPoint slide. But when I copy it, I just get a table with a single column. Is there a method to import the contents of a web-page table into my presentation table?
I have a set of DVDs. I'd like to rip them to .mkv files, but with all the information, i.e. all subtitles and all audio tracks. Is there any tool on Linux that I could use to do this? I found a Gentoo howto about ripping, but it requires writing shell scripts, and I'd rather use something with a clickable interface.
Kmail 1.13.2. Problem on startup; the error is from Nepomuk data storage: "cannot find Redland backend, Nepomuk is disabled until fixed". Also see the following error from the Akonadi console:
100503 10:00:15 [Note] Plugin 'ndbcluster' is disabled.
100503 10:00:15 InnoDB: Started; log sequence number 0 31413862
100503 10:00:15 [Warning] Can't open and lock time zone table: Table 'mysql.time_zone_leap_second' doesn't exist trying to live without
Does the dump command back up entire filesystems, or is it capable of backing up subsets of a filesystem? And is tar capable of taking device names (for filesystems) as input to be archived?
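A hedged illustration of the difference: dump operates on filesystems, named either by device or mount point, and can only take level 0 dumps of subdirectories, while tar archives paths rather than filesystems, so it takes a mount point, not a device name (paths below are placeholders):

Code:
dump -0uf /backup/root.dump /dev/sda1    # a whole filesystem, by device name
dump -0uf /backup/etc.dump /etc          # a subdirectory -- level 0 only
tar -cf /backup/etc.tar /etc             # tar takes paths, not device names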
I have a dump script that performs either full or incremental backups depending on the day of the week. From what I have read, when using dump you should drop to single-user mode before issuing the dump command, to help keep the backups consistent. What I want is for the script to drop to single-user mode, perform the backups using dump, and then go back to runlevel 3 after the backups complete.
I know you enter init 1 to drop to single-user mode, but doing so from within a script kills the script along with everything else, so the rest of it never completes. What would be a good method to accomplish what I want? Do I need to run other scripts that would call mine? I am running CentOS 5.4.
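One hedged SysV-style approach is to let init itself do the work instead of your script surviving the runlevel change: switch to runlevel 1 with telinit, and install a small rc script in /etc/rc1.d that performs the dumps after the switch and then returns to runlevel 3. The names and paths below are illustrative:

Code:
#!/bin/bash
# /etc/rc1.d/S99backup -- illustrative name; init runs this after the switch to runlevel 1
/usr/local/sbin/do_dumps.sh    # your existing dump logic (hypothetical path)
telinit 3                      # return to multi-user once the backups finish
# Kick the whole thing off with a plain "telinit 1" from cron or the command line.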
I've only just started with Linux. How can I convert a core dump file into a readable text file that includes all the information in the core dump, such as all variables, thread information, call traces for each task, and so on? I know GDB can view this, but it won't dump all the information into one text file, and sometimes people want to examine the cause of the core dump without a Linux environment.
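GDB's batch mode can get you most of the way: it runs a list of commands non-interactively, so the whole report can be redirected into one text file and read anywhere. A hedged sketch ("bt full" includes local variables; the binary and core names are placeholders):

Code:
gdb -batch \
    -ex "set pagination off" \
    -ex "info threads" \
    -ex "thread apply all bt full" \
    -ex "info registers" \
    ./myprog core > core_report.txt 2>&1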
I need confirmation: does the following scenario work for making my client and server identical?
My local (source) Linux server is 192.168.0.2; my remote Linux client is 192.168.0.70. On the local system:
#df -m
Filesystem   Mounted on
/dev/hda3    /
/dev/hda1    /boot
tmpfs        /dev/shm
On the local system, issue the following to make the client and server identical. Note that restore -r unpacks into the current directory, so the remote command has to cd there first (ssh's -c flag selects a cipher, not a command), and /dev/shm is a tmpfs held in RAM, so there is nothing persistent there for dump to copy:
#dump -0uvf - /dev/hda3 | ssh root@192.168.0.70 "cd / && restore -rf -"
#dump -0uvf - /dev/hda1 | ssh root@192.168.0.70 "cd /boot && restore -rf -"
We purchased a virtual server from GoDaddy (1-month trial) to set up as a proxy for our networks (24 of them). I am having two separate issues. The first is that I can't configure/install NAT, and support is telling me the only way I can is to purchase a dedicated server. Here's the error:
Code:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128
iptables v1.3.5: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

Here's the fix: [URL]

So what I am hoping to do is configure this by just opening port 3128 directly and only allowing access from our networks. As a test I did this, allowing access only from our office, but I can't connect, so I am wondering what I am doing wrong. Here's my squid configuration:
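For reference (independent of the squid config, which isn't shown above), a hedged sketch of the no-NAT approach: restrict squid's port with the filter table, which virtual containers usually do provide, and have squid itself permit the same sources. Without the nat table there is no transparent interception, so clients must point their browsers at the proxy explicitly. 203.0.113.0/24 stands in for one of your 24 networks:

Code:
# iptables: accept 3128 only from your networks, drop everyone else
iptables -A INPUT -p tcp --dport 3128 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 3128 -j DROP
# squid.conf: listen on 3128 and allow the same networks
#   http_port 3128
#   acl ournets src 203.0.113.0/24
#   http_access allow ournets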
Is there a difference between using a GPT partition table and an MS-DOS partition table when formatting hard drives? What are the advantages/disadvantages of each?
I created a db_dump from PostgreSQL 8.2 and am trying to load it into a new PostgreSQL 8.4 install. Two of the three databases are fine, apart from minor issues, but the third gives me the following error message when I start a service that calls on it: "database disk image is malformed". Please assist.
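For what it's worth, a hedged sketch of the usual cross-version procedure (database and host names are placeholders); taking the dump with the newer release's pg_dump, pointed at the old server, is generally the recommended direction:

Code:
pg_dump -h oldhost -U postgres mydb > mydb.sql   # taken with the 8.4 pg_dump
createdb -U postgres mydb                        # on the 8.4 server
psql -U postgres -d mydb -f mydb.sql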
I am trying to increase performance on our system, and I wanted to tell iptables not to retain entries in ip_conntrack. I tried to put this in the iptables config file, but I receive the error "seems to have a -t table option".
The entry I'm adding is: -t raw -A PREROUTING -i lo -j NOTRACK
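That error is what iptables-restore prints when a rules file (e.g. /etc/sysconfig/iptables) contains a -t flag on a rule; in that file format the table is selected with a *raw section header instead. A hedged sketch of the relevant fragment:

Code:
*raw
-A PREROUTING -i lo -j NOTRACK
COMMIT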
I have a problem with one hard disk: it now shows as unallocated. I tried to create a new partition on it, but it says that first I need to create a partition table, and when I create one, choosing the msdos label, nothing happens. I used GParted in Fedora. How can I create a partition table so I can use my hard disk again?
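If the GParted GUI keeps failing silently, a hedged fallback is to do the same thing from a terminal with parted. This destroys anything left on the disk, and /dev/sdb is a placeholder, so double-check the device name first:

Code:
parted /dev/sdb mklabel msdos                    # write a fresh MS-DOS partition table
parted /dev/sdb mkpart primary ext3 1MiB 100%    # one partition spanning the disk
mkfs.ext3 /dev/sdb1                              # then put a filesystem on it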
I have several drives in an LVM VG/LV, and for some reason on reboot a drive will get a corrupt GPT table. I have killed the entire VG and re-created it without the drive that was showing the problem, but then it just happens to another drive. It does not appear to be the same drive each time, either; I've confirmed this by using smartctl to check the serial number of the drive reporting a corrupted table. It's not always the same drive.
I have swapped cables around between the two controllers to see if I could pinpoint which cable or port showed the problem, and, long story short, there was little consistency. This simply does not appear to be caused by any single cable, port, controller, or drive.
Code:
parted /dev/sdb print
Error: The primary GPT table is corrupt, but the backup appears OK, so that will be used.
OK/Cancel?
When I see that and select OK, it just shows the message again. I can do an mklabel and mkpart, and then the LVM LV shows up under /dev as it should, without another vgscan. If I then mount that LV, I can see the data is there and it seems OK, despite mklabel's warning that it will destroy the data. The logs show no cause during boot. So what is causing this? Will doing the mklabel kill the data on the drive? I just don't understand why Ubuntu is randomly corrupting GPT tables.
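A hedged diagnostic before rewriting anything further: check whether the LVM metadata itself is intact and inspect the raw first sectors, since a stale or partially overwritten GPT left over from a disk's previous life can produce exactly this message without the LVM data being damaged. /dev/sdb stands in for whichever disk (or the relevant partition) is complaining:

Code:
pvck /dev/sdb                                     # verify the LVM physical-volume metadata
dd if=/dev/sdb bs=512 count=34 2>/dev/null | hexdump -C | less   # inspect the MBR/GPT region by hand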
Code:
Ubuntu 10.10 x64
Mobo: ASUS A8N-SLI - onboard NVIDIA nForce4-SLI controller has 4 ports, connected to 3 drives in this LVM LV.
HighPoint Technologies, Inc. RocketRAID 230x 4-port SATA-II controller - has 4 ports, 3 of which are used in the LVM LV (had 4; one is out with an RMA).
PAM time restrictions - changing time.conf so it gets its times from a SQL table. I was wondering whether, with the PAM authentication module pam_time, I can pull the allowed login times from a server using SQL/Postgres (via a TIMEDATE function) into pam_time? So basically I want to insert SQL statements into time.conf that would get the times from a table.
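pam_time only reads the static file /etc/security/time.conf and has no SQL support, so one hedged workaround is a cron job that regenerates the file from the database. The table and column names below are made up; the query just has to emit lines in time.conf's services;ttys;users;times format:

Code:
#!/bin/bash
# Regenerate time.conf from a Postgres table (run from cron; names are hypothetical)
psql -At -d authdb -c \
  "SELECT service || ';' || ttys || ';' || users || ';' || times FROM login_windows" \
  > /etc/security/time.conf.new \
  && mv /etc/security/time.conf.new /etc/security/time.conf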
I had trouble viewing the partition table using fdisk; I've now realised I just couldn't see the whole table from the rescue terminal. Please remove this thread, I can't find how ))