General :: Memory Handling Capacity Of 64 Bit Ubuntu?
Aug 3, 2010
What is the maximum amount of DDR RAM that can be installed in a 64-bit Ubuntu Linux system?
I have a vector that I do not want to call new on during operation, so I use the capacity feature to allocate all the memory at the beginning of the program.
Code:
struct chartype
{
public:
unsigned char value[MAXCHARTYPE_SIZE];
};
[Code].....
My specs: Debian Lenny i686, 7 GB RAM, 1 TB HDD, Pentium 4 2.2 GHz; the rest is not important.
Now, I did a clean install of Debian onto my 1 TB HDD, partitioned like this:
- 14 GB swap (as far as I understood, it has to be twice the RAM)
- / (root) 25 GB
- /home gets the rest
I don't know if it's important, but I am running Gnome + Compiz + Emerald.
So the first question: there is a program under Applications > System Tools > Disk Usage Analyzer, and it shows me the attached Screenshot-Disk Usage Analyzer.png.
Correct me if I am wrong, but am I running out of disk space on /home?
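A quick way to confirm this from a terminal (a sketch; the paths come from the partition layout above):
Code:
df -h / /home                                # free space on / and /home
du -x --max-depth=1 /home | sort -n | tail   # largest directories inside /home (sizes in KB)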
I've got this log file and I need to get all sorts of information from it...
24 - [02/Sep/2010:00:01:16 +0200] - 10.1.53.62 - 200
23 - [02/Sep/2010:00:01:26 +0200] - 10.1.53.62 - 200
19 - [02/Sep/2010:00:01:56 +0200] - 10.1.53.62 - 200
[code]....
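The post does not say exactly which information is needed, so here is an assumed example: counting requests per IP address and per status code, treating " - " as the field separator seen in the sample lines (access.log is a placeholder file name):
Code:
awk -F' - ' '{ip[$3]++; st[$4]++}
    END {for (i in ip) print ip[i], "requests from", i;
         for (s in st) print st[s], "responses with status", s}' access.log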
I am writing a bash completion function for a PHP framework called symfony. It has a command-line interface with the syntax:
Code:
symfony [options] [namespace:]action
I want to make the action part auto-completable. The function is the simplest possible so far:
Code:
function _symfony_commands()
{
    [ -r "cache/completion/.sf" ] && cat cache/completion/.sf
}
[code]....
But if there is a ':' symbol separating the namespace from the action, problems start:
symfony doct[TAB]
will be completed to
symfony doctrine:
But nothing happens if you want to complete after the ':' symbol. I've found out that readline sees three words there, because it splits the line on $COMP_WORDBREAKS:
Code:
$ echo $COMP_WORDBREAKS
"'><=;|&(:
I played with the $COMP_WORDS array and tried everything I could think of to make it work, but failed.
What should I do to escape the colon and make readline treat it as one word? Or is there perhaps a way to work around it?
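A sketch of the workaround commonly used for this, assuming the bash-completion package's helper functions (_get_comp_words_by_ref and __ltrim_colon_completions) are available; the cache file path is taken from the function above:
Code:
_symfony()
{
    local cur commands
    # re-read the current word, but do NOT split it on ':'
    _get_comp_words_by_ref -n : cur
    commands=$([ -r "cache/completion/.sf" ] && cat cache/completion/.sf)
    COMPREPLY=( $(compgen -W "$commands" -- "$cur") )
    # strip the 'namespace:' prefix that readline already shows on screen
    __ltrim_colon_completions "$cur"
}
complete -F _symfony symfony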
I wonder if it is possible to extend or regrow a Linux hard disk partition from 8 GB to 20 GB without losing the existing data on the partition. At the moment this Ubuntu Linux is deployed on top of VMware, and I have just regrown the virtual hard drive from 8 GB to 20 GB, but I can't see the effect immediately. Can anyone suggest how to do this without losing the data?
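A rough sketch of the usual sequence, assuming the root filesystem is ext3/ext4 on /dev/sda1 (device names are assumptions) and the VMware virtual disk has already been grown to 20 GB:
Code:
# make the kernel notice the larger virtual disk
echo 1 > /sys/class/block/sda/device/rescan
# grow the partition to the end of the disk first (easiest from a GParted
# live CD, since the root partition cannot be resized while mounted),
# then grow the filesystem into the new space:
resize2fs /dev/sda1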
My requirement: we have a Linux file server connected to a SAN (IBM DS4700). Now I need to increase my capacity by 50 GB. I have added the 50 GB to the file server through the IBM Storage Manager, but it does not show up on my Linux file server.
Below are the details of my drives:
/dev/mapper/mpath0p6 1.6G 287M 1.2G 20% /
/dev/mapper/mpath0p7 837M 240M 554M 31% /var
/dev/mapper/mpath0p3 4.1G 2.5G 1.4G 65% /usr
/dev/mapper/mpath0p2 5.1G 1.7G 3.2G 36% /home
/dev/mapper/mpath0p1 200M 24M 166M 13% /boot
tmpfs 2.1G 0 2.1G 0% /dev/shm
/dev/mapper/mpath0p8 356G 303G 36G 90% /filesrv
I need to add the 50 GB to "mpath0p8".
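A hedged sketch of the steps that usually make a grown SAN LUN visible; host and map names are assumptions, and the exact procedure depends on whether the 50 GB was added as a new LUN or by growing the existing one:
Code:
# rescan every SCSI host so the kernel sees the new or resized LUN
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
# refresh the multipath map so mpath0 picks up the new size
multipathd -k"resize map mpath0"
# note: the extra space still has to reach /filesrv; a plain partition
# such as mpath0p8 cannot simply absorb it, which is why LVM on top of
# multipath makes this kind of growth much easier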
I am developing a device that will run Linux as its operating system. The device is a small-form-factor x86 device with a flash drive exposed as a SATA device, so it is not very different from any other PC running Linux. For several good reasons I am building my own "distribution" instead of using an existing one. What confuses me is how mount/umount of the root file system is handled. I boot my kernel with the command line "root=/dev/sda1 rw", which works fine, but every time I do poweroff or reboot, BusyBox complains about there being no /etc/fstab, so I decided to build one. Should I have an entry for my root file system? It seems like it is shadowed by the rootfs anyway; i.e. even with the fstab entry "/dev/sda1 / ext2 1 1", mount still reports:
rootfs on / type rootfs (rw)
/dev/root on / type ext2 (rw,relatime,errors=continue)
My questions are: Do I need to worry? Will the drive be correctly unmounted by the kernel on poweroff/reboot? If I want to perform file system checking on boot, can I do that without resorting to an initrd?
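A minimal /etc/fstab sketch for this layout; the device name and ext2 type come from the post, the options are conventional assumptions (note the standard format has six fields, including an options field missing from the entry quoted above):
Code:
# <device>   <mount point>  <type>  <options>                    <dump>  <pass>
/dev/sda1    /              ext2    defaults,errors=remount-ro   0       1
proc         /proc          proc    defaults                     0       0
A non-zero <pass> field is what marks the root filesystem for fsck. For boot-time checking without an initrd, the usual trick is to boot with "ro" instead of "rw" on the kernel command line, run fsck from the init script while the root is still read-only, and only then remount it read-write.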
I want to run a script when the switch goes down and another when it comes back up. Is there an easy way to pull this off in Debian (preferably with nothing but system tools)? I suppose there is no difference (from the OS point of view) between unplugging the Ethernet cable and the switch losing power.
On such an event I get lines like these in the syslog:
Jun 15 17:49:41 debian kernel: [ 5506.956130] igb: eth1 NIC Link is Down
...
Jun 15 17:49:45 debian kernel: [ 5511.168788] igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
In FreeBSD you can pipe log messages (pre-filtered by regex patterns) to a program. What is the easiest way to replicate this on Debian (with as little additional software as possible)?
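A hedged sketch: Debian's default syslog daemon (rsyslog) can hand matching messages to a program with its legacy "^program" action, which is close to the FreeBSD behaviour. The filter string comes from the log lines above; the script paths are assumptions. In /etc/rsyslog.d/link-watch.conf:
Code:
:msg, contains, "eth1 NIC Link is" ^/usr/local/sbin/link-change.sh
The script itself (rsyslog passes the formatted message as its argument), made executable and followed by /etc/init.d/rsyslog restart:
Code:
#!/bin/sh
# /usr/local/sbin/link-change.sh: branch on the link state in the message
case "$1" in
    *"Link is Up"*)   /usr/local/sbin/on-link-up.sh ;;
    *"Link is Down"*) /usr/local/sbin/on-link-down.sh ;;
esac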
This is after installing Debian 6 on a server. When I try to install a piece of software called ActiveCampaign, I get the following error: "Your server does not appear to be handling sessions properly." I have installed Apache, PHP, MySQL and Perl already. Also, after the server restarts, Webmin will not start automatically, even though it is set up in the Webmin configuration to start with the server; I have to run /etc/init.d/webmin start from the command line after I su.
My last question is about FTP permissions. I have installed proftpd and it seems to be working fine, but when I try to edit or upload any file, I cannot. In order to upload and manipulate files I am using WinSCP as root, which is a big no-no. Sorry for the three questions, but I figured I would ask them all in one post instead of creating multiple threads, since I already have your attention.
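A hedged sketch of the three checks that usually apply here; paths are Debian defaults, and the FTP user and directory names are assumptions:
Code:
# 1. PHP sessions: the save path must exist and be writable by Apache
php -i | grep -i session.save_path
ls -ld /var/lib/php5              # Debian default; must allow www-data to write

# 2. Webmin at boot: (re)register its init script with the default runlevels
update-rc.d webmin defaults

# 3. ProFTPD uploads: give the FTP login user ownership of its own tree
chown -R ftpuser:ftpuser /home/ftpuser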
I have a computer with 16 GB of RAM. At the moment, top shows all the RAM is taken (NOT by cache), but the RAM used by the various processes is very far from 16 GB. I have seen this problem several times, but I don't understand what is happening. My only remedy so far has been to reboot the machine.
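A sketch of where to look when RAM is "used" but no process owns it: memory consumed inside the kernel (slab caches, for example) never shows up in the per-process columns of top:
Code:
grep -E 'MemTotal|MemFree|Buffers|^Cached|Slab|SReclaimable|SUnreclaim' /proc/meminfo
slabtop -o | head -20     # largest kernel slab caches, if slabtop (procps) is installed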
I mostly do .NET development in a Windows 7 VirtualBox VM. I use the host for simple things such as web browsing, Skype, chat, etc., all things that are fantastically available on Ubuntu, which I in many ways prefer. So this has been begging the question for a while: why even use Windows on the host? It seems a Linux host would use fewer resources (untested) and allow my Windows VMs to run better, while letting me do my non-development work in an interface I prefer.
So, the easiest way to do this: I downloaded Wubi and installed Ubuntu. I installed VirtualBox in it, then tried to add and start my VM, and got this message: Failed to open a session for the virtual machine VS2010, Could not open the medium '/host/Users/George Mauer/Virtualbox VMs/VS2010/C:/Users/George Mauer/.VirtualBox/HardDisks/VS2010.vdi; VD: error VERR_FILE_NOT_FOUND opening image file '/host/Users/George Mauer/Virtualbox VMs/VS2010/C:/Users/George Mauer/.VirtualBox/HardDisks/VS2010.vdi; (VERR_FILE_NOT_FOUND).
You see what's going on? With Wubi, the Windows drive gets mounted at /host/, but VirtualBox is for some reason appending an absolute path! I would very much like to use the exact same VM file, since it would retain snapshots and I would be able to use it in either Windows or Ubuntu. However, even if I try to simply mount the drive into a new VM I get an error: Failed to open the hard disk /host/Users/George Mauer/.VirtualBox/HardDisks/VS2010.vdi. Cannot register the hard disk '/host/Users/George Mauer/.VirtualBox/HardDisks/VS2010.vdi' {guid...} because a hard disk '/host/Users/George Mauer/VirtualBox VMs/VS2010/C:/Users/George Mauer.VirtualBox/HardDisks/VS2010.vdi with UUID {guid...} already exists.
This is especially odd since this worked fine with my recently created Android VM, though this might have something to do with the fact that VirtualBox recently changed their default VM storage locations. Any idea how to fix this? My Linux-fu is weak but I seem to remember from CS class something about symbolic links that might be relevant here?
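A sketch of the symbolic-link idea, assuming the .vdi really lives under the Wubi /host mount at the path from the first error message; the goal is to give VirtualBox a sane Linux-side path to attach instead of the mangled one (quoting matters because of the spaces):
Code:
mkdir -p ~/"VirtualBox VMs/VS2010"
ln -s "/host/Users/George Mauer/.VirtualBox/HardDisks/VS2010.vdi" \
      ~/"VirtualBox VMs/VS2010/VS2010.vdi"
# then attach that symlinked .vdi as the existing disk of a new VM; if
# VirtualBox refuses because of the already-registered UUID, removing the
# stale entry from the Virtual Media Manager first usually helps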
I am working on tracing the signal handling mechanism inside the Linux kernel. For that, I built the kernel. Now I want to trace the signal handling mechanism in the old kernel. I read about syslog and printk for this, but how exactly do I use these tools to trace the handling of signals internally? Is there any tool similar to backtrace for this? How is the call flow done internally?
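Besides hand-placed printk() calls, ftrace can show the call flow; a sketch, assuming a kernel built with the function-graph tracer and the signal trace events (present in mainline from roughly 2.6.32 onwards) and debugfs mounted:
Code:
cd /sys/kernel/debug/tracing
echo 1 > events/signal/enable              # log when signals are generated and delivered
echo 'do_signal* *send_sig*' > set_ftrace_filter
echo function_graph > current_tracer       # show the call graph of those functions
cat trace_pipe                             # watch while sending a signal, e.g. kill -USR1 <pid>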
View 1 Replies View RelatedPretty soon, I hope, I'll get my brand new PC and wish to install a Linux disto. on it. openSuse may be it But I read recently that people prefer to do a fresh install of a newer version of openSuse, instead of upgrading it, apparently because of problems that may occur by the upgrade. As I understand, this preference apply to all Linux distributions and not only openSuse. Thus I wonder if there's a Linux distro. that's best in handling upgrades?I don't want to make a fresh new install each and every time that my disro. has a new version. I'm afraid to lose the data in that installation, and backing-up the data would be a headache. Also I plan to install a Windows OS alongside the Linux one via the Dual Boot configuration.
View 14 Replies View RelatedI am using malloc and frees a lot in my program. It shows its allocated but when i remove it doesnt show as the memory is removed(I am using the top command to view VIRT memory usage). If this continously grows what would happen to my program (Will it go out of memory?)
View 4 Replies View RelatedThis is my first post in these forums. I'm still quite new to Linux (using Mint 9) so please bear with my not-very-articulate question(s)When I boot up and open up a tty terminal I get a message saying "Memory corruption detected in low memory." I've done an extensive google search about the issue and it seems not uncommon. I ran a memtest with no errors returned, so I'm sure that there's nothing really wrong with the memory; apparently it's a bug in the kernel that's causing this.
View 2 Replies View RelatedI found from command 'top' that 8GB memory are used. However, using command 'ps' with some options to grep the running processes and then summing up the memory used by the running processes are less than 2 GB. Where has the used memory gone ?
View 7 Replies View RelatedI have a 650 GB ext3 LVM partition with RAID 1 on. The partition is 85% full, but the system says "no space left on device" - where did the 15% go?I ran "tune2fs -m 0 /dev/mda1", so it is not the space reserved for the root - so Nautilus reports the same free capacity as GParted now.Some more info:- Ubuntu 9.10 x64- GParted says 650 GiB, 104.83 GiB free- Nautilus says 104.8 GiB free- The system thinks the disk is completely full - I cannot even create (touch) a new empty file
Just rebuilt my file/print server using an ECS 945GCD-M Atom motherboard. I'm running it under 9.10 (2.6.31-20), using the same cable and port on my Netgear switch that my old server connected to at 1 Gb/s without issue (the old server's NIC was Intel-based).
I found the NIC is an AR813x, and downloaded and installed the latest driver from here (1.0.1.9). sudo lshw -C network now shows that it's using the new driver, but the link is still sitting at 100 Mb/s:
Code:
*-network
description: Ethernet interface
product: Attansic Technology Corp.
[Code].....
I can't seem to get it to come up at the full 1 Gb/s. I also tried ethtool -s eth0 speed 1000 without success. Also, why does a networking restart ignore eth0?
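A sketch of how to narrow this down (eth0 is taken from the post, the driver name is an assumption):
Code:
ethtool eth0                       # supported/advertised speeds and what the link partner offers
ethtool -s eth0 speed 1000 duplex full autoneg on   # gigabit requires autonegotiation
dmesg | grep -i atl1c              # messages from the AR813x driver
# a link stuck at 100 Mb/s on a gigabit switch is very often the cable:
# 1000BASE-T needs all eight wires, so a four-wire or damaged cable
# silently negotiates down to 100 Mb/s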
My laptop battery capacity has gone down, due to constant charging or whatever other reason. Previously it was at 62%, and within 2 months it has gone down to 32%. I use the laptop for at least 18 hours every day. Is there any solution to prevent my battery from losing capacity?
I use a Dell Studio 15 laptop with 3 GB of RAM, a lithium-ion battery (56 Wh), an ATI card, Compiz, and a P8600 processor. I use it for app development and for listening to music and watching videos.
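A sketch for keeping an eye on the wear level; the BAT0 name and the charge_* versus energy_* attribute names vary by machine, so these are assumptions:
Code:
B=/sys/class/power_supply/BAT0
cat $B/charge_full_design 2>/dev/null || cat $B/energy_full_design   # capacity when new
cat $B/charge_full        2>/dev/null || cat $B/energy_full          # capacity now
# the ratio of the two is the real remaining capacity; on a machine that
# stays plugged in ~18 h a day, avoiding heat and (where the firmware
# allows it) capping the charge level below 100% slows the decline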
I've recently completed a fresh install of 10.04 on a home file server and upgraded the hard drives in my storage array. My PREVIOUS hardware was:
Old version of Ubuntu (I forget which one exactly, but I know I had missed a few upgrade cycles)
3x 500 GB Seagate Barracudas (for the array)
Areca 1220 Hardware RAID controller
Intel Core 2 Duo 6600
320 GB Seagate for the boot drive
I was running that hardware for about five years or so and it was rock solid. After the upgrade the hardware specs are:
Ubuntu 10.04
Areca 1220 hardware RAID controller
4x 1000GB Samsung
Intel Core 2 Duo 6600
320 GB Seagate for the boot drive.
The fresh install of Ubuntu 10.04 went remarkably well. The drivers for that raid controller are in the kernel, which is great. I was able to access the old array after upgrading Ubuntu. Now I am trying to create a new array with the four 1000 GB drives in a RAID 5. Obviously that gives a maximum storage capacity of 3 TB, greater than the 2 TB threshold that seems to be so important. I've been doing some digging and here is where my questions start:
It appears as though gparted doesn't support partitions greater than 2 TB, correct?
It also doesn't seem as though parted supports ext3 or ext4, is that correct?
If this is the case, how do I create an ext4 partition that is greater than 2 TB?
I can see the array volume in gparted (which is a relief), but it lists the size as 2.73 TiB, which I find curious because that is over 2 TB but not the full capacity of the volume. I can also get to the volume in parted. But I see in the parted documentation that using the mkpartfs command is discouraged and that instead one should use mkpart to create an empty partition and then external tools like mke2fs to create the filesystem.
I'm not sure how to proceed from here. What does the community think is the best course of action to create a 3 TB ext4 partition? Then I need to edit fstab so the array is mounted automatically at every boot, right?
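A sketch of the GPT route; /dev/sdb is a placeholder for the Areca array volume. A GPT label is what lifts the 2 TiB limit of the old MS-DOS partition table, and mkfs.ext4 (not parted or gparted) is what creates the filesystem:
Code:
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0% 100%
mkfs.ext4 -L storage /dev/sdb1
mkdir -p /srv/storage
echo 'LABEL=storage  /srv/storage  ext4  defaults  0  2' >> /etc/fstab   # mount at boot
mount -a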
I have Windows 7 and Ubuntu 10.04 on my laptop. As I understand it, if I extend the capacity of my Ubuntu partition from within Windows, I will lose Ubuntu and have to reinstall it. I want to know whether there is a way to extend my Ubuntu partition from 20 GB to 30 GB without losing either Ubuntu or Windows.
I have an 8 GB SanDisk Cruzer flash drive that I wanted to remove the U3 software from. I used a removal tool and ran it on my Windows machine, because Wine didn't work for it, but after it removed U3 my drive only had 3.8 GB. Is there any way to restore it back to 8 GB? I've already tried reformatting with the Disk Utility, and it just doesn't give me the option to format anything higher than 3.8 GB.
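A sketch of reclaiming the full capacity from Linux; /dev/sdX is a placeholder for the stick, and everything on it will be erased, so double-check the device name first:
Code:
dmesg | tail                                  # confirm which device the stick came up as
dd if=/dev/zero of=/dev/sdX bs=1M count=4     # wipe the old partition table / U3 leftovers
parted /dev/sdX mklabel msdos
parted /dev/sdX mkpart primary fat32 1MiB 100%
mkfs.vfat -F 32 -n CRUZER /dev/sdX1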
My setup: one Areca 1222 controller, divided into 2x 250 GB in RAID 1 for my Ubuntu installation and 6x 2 TB in RAID 5 for my data (mounted as a logical drive), resulting in 8 TB of available space (also visible in Ubuntu's Disk Utility). Ubuntu 10.04 LTS.
Now the problem: when I check the capacity of my mounted data drive in the file browser, I only see 5.1 TB of total space.
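A sketch of how to find where the "missing" space went; device and mount-point names are placeholders. The usual suspects are a partition that does not span the whole RAID volume, an old MS-DOS partition table capping it, and the ext reserved blocks:
Code:
parted /dev/sdb unit TB print      # partition table type; does the partition cover the full volume?
df -h /data                        # filesystem size as the kernel sees it
tune2fs -l /dev/sdb1 | grep -i -E 'block count|reserved'   # reserved-block percentage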
Why, when I create ext partitions, does it reserve 10% of the disk capacity?
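A sketch of the background and the knob that controls it: ext2/3/4 reserve a slice of blocks (5% by default with mke2fs) for the root user so the system keeps working when ordinary users fill the disk; on data-only filesystems it can be reduced or dropped (/dev/sdb1 is a placeholder):
Code:
tune2fs -l /dev/sdb1 | grep -i reserved   # current reservation
tune2fs -m 1 /dev/sdb1                    # keep only 1% reserved
tune2fs -m 0 /dev/sdb1                    # or reserve nothing (fine for pure data disks)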
I have a Lucid Lynx server with 6 GB for /home and a 400 GB data partition mounted inside home (/home/user/data). The data partition is shared through Samba for access by other Windows machines on the network. However, the capacity of the Samba share shows up as only 6 GB (or less), while the actual capacity is 400 GB. I am assuming this is because it is mounted within the home partition. I understand that the 400 GB is still available even though the Samba share shows only 6 GB (or less) of capacity.
The problem is that I am trying to set up Windows 7 Backup and Restore on one of my laptops. The backup stops with an error saying the network drive does not have enough disk space. I think this is because the Samba drive shows up with only 6 GB or less of capacity even though it can store more.
How do I fix this? How can I see the actual size of the Samba drive (in my case 400 GB) instead of the remaining space in the /home partition? I know I can reformat my drive and make /home 400 GB, but I am not sure that will fix the problem, and I would prefer to do this without reformatting.
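A hedged sketch: Samba reports the free space of the filesystem that a share's path lives on, so pointing the share directly at the mount point of the 400 GB partition is usually enough; the share name is an assumption. In /etc/samba/smb.conf:
Code:
[data]
   # the 400 GB partition's mount point, not the surrounding /home
   path = /home/user/data
   read only = no
   browseable = yes
Then check the config with testparm -s and restart Samba (service smbd restart, or /etc/init.d/samba restart on older releases).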
I really have tried Googling this over many, many hours, and I still can't make sense of it all, especially since I keep reading conflicting info.
In 9.10, the Usplash boot splash was replaced by XSplash, which is based on X11, and when we all realised something was up with GDM (when we couldn't find a way to customise the login anymore), I read it was not GDM controlling that anymore, but X11. So, the way I read it (on many different sites), XSplash in 9.10 controlled both the boot splash and the login, even though the hacks for customising the login involve commands with "gdm" in them.
Now, in 10.04, the short-lived XSplash has been replaced by "Plymouth", and I get all that... as far as the boot splash goes. What I don't understand is the login process.
There is talk of GDM2 coming out, and I've seen people asking when this will be implemented in Ubuntu, but if the move is away from GDM altogether towards a boot/login process totally based on X11, will this happen?
More importantly, what is controlling the login now? Is it still GDM, or is it X11 that gdm commands work on or something? (The hacks people used in 9.10 still work in 10.04)
Also, because I have Xubuntu and Kubuntu in the same system, and have had things like KDM taking over login happen after upgrades, I have to ask what is controlling the boot process and login for Ubuntu's siblings? (To me they now look almost identical, so wonder if they're all X11 based)
I remember having the same problem with KDE/Fedora a few years back, and I can't remember the solution. Trying to open any attachment in Evolution gives the error "No application is registered as handling this file", and you have to save the attachment and then open it as normal. I have checked all the file associations and they are fine, and the same attachment in Evolution under GNOME opens fine.
So far I have been using only 32-bit Ubuntu, but now for various reasons I have to change to 64-bit. I need to compile a library for GPG encryption to handle old keys, namely IDEA. I have the idea.c file and some instructions with which I was able to create the library for 32-bit Linux. The instruction was:
gcc -Wall -02 -shared -fPIC -o idea idea.c
This does not seem to work under 64-bit, and the old library obviously does not work under 64-bit either. In particular, the compiler tells me the -02 option is unknown. Does it matter? And what is -02?
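The unknown "-02" is almost certainly a typo for "-O2" (a capital letter O followed by the digit 2, i.e. optimisation level 2); a sketch of the corrected invocation, otherwise kept exactly as in the original instruction:
Code:
gcc -Wall -O2 -shared -fPIC -o idea idea.c
# the module has to be rebuilt on the 64-bit system; the old 32-bit
# binary cannot be loaded by a 64-bit GnuPG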
I just noticed, when I went to burn a CD (I just got a new car, and at the moment the stereo doesn't have an auxiliary port and I'm not about to use a shoddy FM transmitter), that K3b spikes the CPU through the roof and freezes when I write/burn/convert an m4a media file. I'm using K3b version 1.91.0.