I've downloaded a big file from another server via "wget" to my /backup/ directory, and after the download completed I ran "ls" and got the error: ls: full_plesk_backup: Value too large for defined data type.
I am trying to convert an .mdf file to .iso with mdf2iso, and this message pops up: "Value too large for defined data type". The file is about 4.6 GB. I can provide any other information if needed.
I have Ubuntu 10.10 (kernel 2.6.35-22-generic) installed. The code declares:

struct stat StatBuff;
[Code]...
I have mounted a Windows share folder on /mnt. When I pass any directory within /mnt/ to the stat function, it fails with errno 75; perror shows "Value too large for defined data type". Example 1 fails but Example 2 works fine.
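For what it's worth, errno 75 is EOVERFLOW, the same error that ls and mdf2iso report above: a 32-bit stat() call fails when the file size or inode number doesn't fit its 32-bit fields. A minimal sketch, assuming a Linux/glibc build where compiling with -D_FILE_OFFSET_BITS=64 switches to the 64-bit interface:

Code:
// Sketch: stat() on a 32-bit build fails with EOVERFLOW (errno 75) for files
// whose size or inode number does not fit in a 32-bit off_t / ino_t.
// Assumed build command: g++ -D_FILE_OFFSET_BITS=64 statcheck.cpp -o statcheck
#include <sys/stat.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main(int argc, char* argv[]) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 1;
    }
    struct stat StatBuff;  // 64-bit fields when built with -D_FILE_OFFSET_BITS=64
    if (stat(argv[1], &StatBuff) != 0) {
        // EOVERFLOW is what perror prints as "Value too large for defined data type"
        std::fprintf(stderr, "stat(%s) failed: %s (errno %d)\n",
                     argv[1], std::strerror(errno), errno);
        return 1;
    }
    std::printf("size: %lld bytes\n", (long long)StatBuff.st_size);
    return 0;
}

On CIFS mounts the server can also hand out inode numbers too large for a 32-bit ino_t, which is why even small directories on the share can trigger this; the cifs mount option noserverino is often mentioned as a workaround.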
I am trying to find a generic way to convert a string to other primitive data types. To achieve this, I used a template, but I am getting an error I can't resolve, and the error message is also clueless.
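For comparison, a minimal sketch of the usual istringstream-based function template; the names here are mine, not from the original code:

Code:
// One template covers int, double, etc. by reading through a string stream.
#include <sstream>
#include <stdexcept>
#include <string>
#include <iostream>

template <typename T>
T fromString(const std::string& s) {
    std::istringstream in(s);
    T value;
    // Fail loudly instead of returning an uninitialized value.
    if (!(in >> value)) {
        throw std::invalid_argument("cannot convert \"" + s + "\"");
    }
    return value;
}

int main() {
    int i = fromString<int>("42");
    double d = fromString<double>("3.14");
    std::cout << i << " " << d << "\n";
    return 0;
}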
My system has around 73 GB (desktop PC): one 25 GB primary partition (Windows) and one extended partition with the remaining space, which holds three logical partitions. One of those logical partitions is the D: drive; on my hard disk the D: drive is /dev/sda5.
Previously the D: drive (/dev/sda5) had a FAT file system. I remember changing its file system to ext4 with a command in the terminal.
After doing this (changing the file system), I couldn't see the D: drive data anymore.
By doing that:
1q) Did I reformat the partition? I think the new filesystem (ext4) has no knowledge of the data that was on it when it had a FAT filesystem.
2q) How do I undo the operation? I tried to change the filesystem type back to FAT/NTFS in the terminal using the command sudo mkfs -t FAT /dev/sda5.
Result: it shows the message 'mkfs.FAT: No such file or directory' (not in single quotes). That error appears because mkfs dispatches to a helper named mkfs.<type>, and the FAT type is spelled vfat (i.e. sudo mkfs -t vfat /dev/sda5); note, though, that running mkfs again reformats the partition once more rather than bringing the old data back.
Every once in a while on a computer I'm ssh'd into, I will accidentally type "cat largefile.txt" and my screen will start rushing with text for the next 10 minutes. I'm always working in a screen session, so my current solution is to log out and then log back in; since the output can go 100x faster while I'm logged out, it finishes in the short time it takes me to type my password again. Is there a better way, either using the fact that I'm in a screen session, or within SSH itself?
What doesn't work: detaching from the screen session (doesn't respond until the file is done outputting); switching to a different window in the screen session (also doesn't respond); typing Ctrl+C to kill the cat command (also doesn't respond, probably because the command has already finished and the buffers just have to catch up).
I am trying to burn a Mac OS X 10.5 install disc from a 6.7 GB .dmg disk image. I thought I would be using two 4.7 GB DVD-R discs for this burn, hoping that when the first was full it would ask for another to finish. Instead I get the message that the DVD will not hold the chosen .dmg file.
Can I do anything besides buy a dual-layer DVD that would hold the whole file?
I want to copy about 40 GB to a partition. There are two hard drives in my box; one won't boot, but I can access it and mount its partitions, and I aim to move data from it to a new bootable hard drive. A simple cp command may not be the best way to copy and paste such a large chunk. I also want to back up the data I plan to copy using a USB hard drive; I could then restore the data from the backup to the new drive instead of copying from the old internal HD to the new one. That's another option.
This problem is not exclusive to Ubuntu; I've experienced it in Windows and OS X as well. Almost every time I transfer a large number of files (e.g. my music collection) between my desktop computer and laptop via my external hard drive, I end up losing files for no apparent reason. I usually don't notice the files are missing until later on, because I am never informed of any data loss. Now, every time I make a large transfer of files, I just do it two or three times to ensure that I don't lose any files.
Are there any tools out there that let me select a bunch of data and burn it to multiple CDs or DVDs? I'm using K3b but have to manually select CD- and DVD-sized amounts.
We've been trying to become a bit more serious about backup. It seems the better way to do MySQL backup is to use the binlog. However, that binlog is huge! We seem to produce something like 10 GB per month. I'd like to copy the backup to somewhere off the server, as I don't feel there is much to be gained by just copying it to somewhere else on the same server. I recently made a full backup which, after compression, amounted to 2.5 GB and took me 6.5 hours to copy to my own computer, so that solution doesn't seem practical for the binlog backup. Should we rent another server somewhere? Is it possible to find a server like that really cheap? Or is there some other solution? What are other people's MySQL backup practices?
I declared a variable as the int data type, which was a placeholder for a result status (meaning, based on the result status, that variable varies over 0/1/2/3; it can contain only these four values). The int data type is a 4-byte integer in C#, but I could declare this variable as byte, which represents a 1-byte integer. Since that variable contains only 0, 1, 2, or 3, declaring it as int is a waste of memory space. Before declaring any variable we need to think about the memory space needed for that variable and our requirement.
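For illustration, a hedged C++ parallel to the C# point (C#'s byte corresponds roughly to C++'s std::uint8_t):

Code:
// A status that only ever holds 0..3 fits in a one-byte type,
// while a plain int is typically four bytes.
#include <cstdint>
#include <iostream>

int main() {
    std::uint8_t statusSmall = 3;  // one byte
    int statusWide = 3;            // typically four bytes
    std::cout << "uint8_t: " << sizeof(statusSmall) << " byte(s), "
              << "int: " << sizeof(statusWide) << " byte(s)\n";
    std::cout << "status = " << static_cast<int>(statusSmall) << "\n";
    return 0;
}

In practice the saving mostly shows up in arrays or structs containing many such fields; a single local variable is usually padded out to alignment anyway.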
In C/C++, double is usually 8 bytes: it has a 52-bit mantissa (or significand), an 11-bit exponent, and a 1-bit sign. My question is: is the mantissa a 52-bit integer, or is the binary point just after the first bit? Meaning: if the mantissa was 1000110011100011 (in binary), would that make the value of the mantissa (assuming the exponent was 0) 1000110011100011, or 1.000110011100011 (in binary)?
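For what it's worth, under IEEE 754 binary64 the 52 stored bits are the fraction of a significand with an implicit leading 1, so a normal number's significand reads 1.fraction. One way to check is to unpack the fields and look; a small sketch:

Code:
// Unpack the IEEE 754 fields of a double via memcpy.
#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
    double d = 1.5;                       // binary 1.1 x 2^0
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);  // type-pun without UB

    std::uint64_t sign     = bits >> 63;
    std::uint64_t exponent = (bits >> 52) & 0x7FF;      // biased by 1023
    std::uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL; // 52 stored bits

    std::printf("sign=%llu exponent=%llu (unbiased %lld) fraction=0x%013llx\n",
                (unsigned long long)sign,
                (unsigned long long)exponent,
                (long long)exponent - 1023,
                (unsigned long long)fraction);
    // For 1.5: exponent field = 1023 (i.e. 2^0) and fraction = 0x8000000000000
    // (just the top fraction bit set), so the value is 1.1 in binary = 1.5,
    // confirming the implicit leading 1.
    return 0;
}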
I have a problem. I have files with rows of data, and I need to check whether the next row (of the same type) has the NEXT date in it. So I need to extract a date in YYYYMMDD format from a row (easy enough), then add one day to it, and compare it to the next date I encounter on a subsequent row.
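A minimal sketch of the add-one-day step using std::tm and mktime, which normalizes day/month/year rollover; the function name is my own:

Code:
// Add one day to a YYYYMMDD date string, e.g. 20101231 -> 20110101.
#include <ctime>
#include <cstdio>
#include <string>

std::string nextDay(const std::string& yyyymmdd) {
    std::tm t = {};
    t.tm_year = std::stoi(yyyymmdd.substr(0, 4)) - 1900;
    t.tm_mon  = std::stoi(yyyymmdd.substr(4, 2)) - 1;
    t.tm_mday = std::stoi(yyyymmdd.substr(6, 2)) + 1;  // add one day
    std::mktime(&t);                                   // normalizes the fields
    char buf[9];
    std::strftime(buf, sizeof buf, "%Y%m%d", &t);
    return buf;
}

int main() {
    std::printf("%s\n", nextDay("20101231").c_str());  // prints 20110101
    return 0;
}

Comparing "is the next row's date the next day" then reduces to a plain string comparison against nextDay() of the previous date.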
Is there a way to recover data from a FAT32 HD partition? Right now it shows up as unallocated space. Earlier, I tried installing Windows in an unused partition located just above this partition. I need to recover the data real soon.
I'm working with Radiotap headers right now and want to get the RSSI data, but I've come across a problem that I can't figure out. The value that I need to get is:

Code: s8 IEEE80211_RADIOTAP_DBM_ANTSIGNAL

Now, when I printf it:
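The usual printf pitfall with s8 values is signedness: the antenna-signal field is a signed 8-bit dBm value, so the raw byte has to be interpreted as signed before printing, or -30 dBm comes out as 226. A sketch with an assumed raw byte value:

Code:
// Reinterpret the radiotap byte as signed before printing.
#include <cstdint>
#include <cstdio>

int main() {
    unsigned char raw = 0xE2;                  // byte as read from the header
    std::int8_t dbm = (std::int8_t)raw;        // reinterpret as signed 8-bit
    std::printf("wrong: %u\n", (unsigned)raw); // prints 226
    std::printf("right: %d dBm\n", (int)dbm);  // prints -30 dBm
    return 0;
}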
I currently have both Xubuntu 10.10 32-bit and Windows 7 64-bit installed on my laptop, which I use mostly for college work and basic programming (I'm still learning to program). I have a 150 GB partition with Win7 installed, a 20 GB partition with Xubuntu installed, and a 15 GB partition with my Linux home folder on it; the rest of the space is unused because I thought I might need it for something else in the future. The home folder partition is formatted as ext4. I now need to access this data from Win7 for college work, and I know that Windows doesn't support mounting ext file systems, so I have hit a problem.
I thought about changing the type of the partition from ext4 to FAT32, but will this delete my data? Or are there any third-party software packages for Windows that will allow me to mount ext4?
I've got a server running CentOS 5.5. I used the automated iptables config tool included in the operating system to allow traffic for vsftpd, Apache, and UnrealIRCd. When I send large files to FTP, even from the local network, it works fine for a while and then completely times out... on everything. IRC disconnects, FTP can't find it, and when I try to ping it I get "Reply from 10.1.10.134: Destination host unreachable", where .134 is the host address of the Win7 box I'm pinging from. This is especially frustrating as it's a headless server, and since I can't SSH into it to reboot, I'm forced to resort to the reset switch on the front, which I really don't like doing.
Edit: the timeouts are global, affecting all machines on the local network as well as users connecting in from outside.
I have been an RPM-based distribution guy for a long time (Red Hat, CentOS, SUSE). We have a large shared and dedicated web environment that is starting to require more and more Linux. I am in a position to switch gears and move to Ubuntu if it makes sense. The things that are important to me are:
1. ease of deployment (both servers and websites themselves)
2. patch management
3. documentation
I've got a select-based application that wants to support a large number of mostly idle connections. The code is Java and works on Windows, SUSE Enterprise Linux, and Mac OS X. It does not work on CentOS 5.5 (32-bit, 2.6.18 kernel, 1 GB of memory).
I've read and followed the directions in various articles about tuning Linux for large numbers of connections (including the C10K problem), and gotten the number of sockets up to 3200.
These didn't make any apparent difference:
[URL]
On Windows, I can get up to around 78,000.
On SUSE Enterprise Linux (a few years ago), I got up to 90,000; that's where I got bored and stopped.
On my Mac laptop with OS X (Snow Leopard), I got up to 10,500.
I have used ulimit -n 10240.
My current goal is 10k sockets.
The test is that I'm opening one socket at a time until it fails. When it fails, many of the sockets that had already been opened also fail, in one giant cascade. It sounds like a buffer / memory problem.
Each group of 64 sockets gets a thread to manage select calls for them, so I'm only using around 61 threads total when it fails.
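For reference, a minimal sketch of that open-one-socket-at-a-time test in C++; the loopback address and port are assumptions for illustration:

Code:
// Open loopback TCP connections until one fails, then report errno
// (commonly EMFILE once the ulimit -n cap is reached).
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cerrno>
#include <cstring>
#include <cstdio>
#include <vector>

int main() {
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);                     // assumed listening port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); // assumed test host

    std::vector<int> fds;
    for (;;) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (sockaddr*)&addr, sizeof addr) != 0) {
            std::printf("failed after %zu sockets: %s\n",
                        fds.size(), std::strerror(errno));
            break;
        }
        fds.push_back(fd);
    }
    for (int fd : fds) close(fd);                    // clean up
    return 0;
}

One thing that may be worth checking for a select-based design: glibc compiles select() with FD_SETSIZE = 1024, so descriptors above that can misbehave even when the per-process limit is raised much higher.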
I am attempting to upgrade a system from 4.7 to 5.2 using a DVD drive (now) attached to the onboard IDE. Originally I had tried using a remote NFS image and a USB stick, but I thought maybe there was a problem with the image. I can get as far as the installation step for selecting the keyboard; then it freezes and never goes any further. It doesn't appear to be a kernel panic, since I can still switch between consoles.
I've got an MSI K9NGM2-FID with 14 drives in it. It serves as a file server for our backup server. It's got a secondary 4-port Silicon Image SII 3114 SATA card using the sata_sil module, and an old IDE Promise FastTrak TX2000. Technically I could have 16 drives, but the 750 W PSU is walking a fine line on tripping its self-breaker with the 14 drives and 7 fans. I would like NOT to have to disconnect all of this to do the upgrade.
I thought that running the install with the "noprobe" option might help, so it wouldn't detect and load the modules for the Silicon Image or Promise cards and detect all of the drives, but it still gets stuck on the step after selecting the keyboard. The installation info console and the dmesg console don't really provide any useful information. The installation console says:
INFO : moving (1) to step welcome
INFO : moving (1) to step language
INFO : moving (1) to step keyboard
INFO : moving (1) to step findrootparts
And the last lines of the dmesg console say:
<6>device-mapper: multipath: version 1.0.5 loaded
<6>device-mapper: multipath round-robin: version 1.0.0 loaded
<6>device-mapper: multipath emc: version 0.0.3 loaded
Is there a hidden "debug" option that will turn on a lot of extra logging?