In another thread I shared how my system occasionally freezes. I found a workaround: force the drive connected to SATA port 1 (ata1) to run at 1.5 Gbps, while the second drive on SATA port 2 runs at 3.0 Gbps. I have multiple systems installed on this machine. When I boot into Slackware 12.2 with the 2.6.27.58 kernel, I see this in dmesg:
ata1: FORCE: PHY spd limit set to 1.5Gbps
ata1: SATA max UDMA/133 cmd 0x9f0 ctl 0xbf0 bmdma 0xe000 irq 23
ata1: SATA link up 1.5 Gbps (SStatus 123 SControl 300)
Looks okay. When I boot into Slackware 13.1 I see this in dmesg:
ata1: FORCE: PHY spd limit set to 1.5Gbps
ata1: SATA max UDMA/133 cmd 0x9f0 ctl 0xbf0 bmdma 0xe000 irq 23
ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Further, when I boot into 13.1 the freezing problems return, so I suspect the drive really is running at 3.0 Gbps despite the force option.
1. Why isn't libata.force working?
2. How can I verify what speed the drive is actually running at?
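On question 2, the negotiated speed is visible in the "SATA link up" line of dmesg, and newer kernels (an assumption: ones that expose ata_link objects in sysfs) can report it directly. A sketch:

```shell
# On newer kernels, read the negotiated link speed from sysfs
# (the exact link name is an assumption for your system):
#   cat /sys/class/ata_link/link1/sata_spd
# Portable fallback: parse dmesg. Demonstrated here on a sample line.
sample='ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)'
printf '%s\n' "$sample" | sed -n 's/.*SATA link up \([0-9.]* Gbps\).*/\1/p'
```

Note also that in the SControl value the second hex digit is the SPD (speed limit) field: "SControl 300" means no speed cap was programmed into the port, whereas a successfully applied 1.5 Gbps limit typically shows as "SControl 310". So the 13.1 dmesg already suggests the force option never took effect, which bears on question 1.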
How can I set a SATA port to 1.5, 3.0, or 6.0 Gbps from the command line, without rebooting the PC or editing grub.conf? I need to switch a SATA port between 1.5 Gbps and 6.0 Gbps without a reboot.
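As far as I know there is no standard sysfs knob for changing the link speed limit of a live port on these kernels. If libata and your controller driver are built as modules (an assumption; many distro kernels build them in), you can avoid a full reboot by reloading them with a new libata force parameter, at the cost of dropping and re-probing the disks. A heavily hedged sketch:

```shell
# Assumption: ahci and libata are loadable modules, and no mounted
# filesystem (especially not /) lives on the affected disks.
umount /mnt/disk1          # unmount everything on the controller first
modprobe -r ahci
# Re-load with a 1.5 Gbps limit on ata1 (use 3.0Gbps to switch back):
modprobe libata force=1:1.5Gbps
modprobe ahci
dmesg | grep 'SATA link up'   # confirm the renegotiated speed
```

This is a sketch, not a hot link-speed switch; the drives disappear and reappear during the reload.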
I get some errors in dmesg and a delay as libata probes and tries to negotiate with my eSATA enclosure.
Some backstory to this adventure: my mainboard is an MSI K9AGM2-L, whose AMD SB600 chipset provides the AHCI SATA controller for four ports, two of which (numbers 3 and 4) are available.
A Rosewill RCR-AK-IM5002 internal card reader (manufacturer page missing) provides a SATA-to-eSATA adaptor port.
I wish to use a SAMSUNG Spinpoint M7E HM641JI 640GB 2.5" SATA hard disk drive externally. First, I had a Vantec NST-260SU-BK 2.5" SATA to USB 2.0 and eSATA enclosure. As explained in this forum post, all seemed well at first. But even though I got absolutely no visible errors, I could see (through hash checking and byte-by-byte comparisons) that writing (copying) to the external disk was not happening correctly. After exchanging the enclosure several times, I finally noticed that the retailer had a different (generic) brand, so I got one of those instead. (The particular unit I got was missing its screws; had a protrusion in its casing that obstructed the circuit board's seating; and caused even my mainboard's firmware to lock up when it was probed, twice, on each boot. A replacement with the same model remedied those three issues.)
Now we come to the present. The Comkia MM-G3BK-SUE MobiMe G3 2.5" SATA to USB 2.0 and eSATA enclosure (manufacturer page missing) doesn't seem to exhibit the writing issue (yet) -- hurray! However, it causes libata to struggle for fifteen seconds or so trying to connect when it is plugged in. (It also causes openSUSE to stall while booting for that reason.) Here is the output from dmesg:
I'm using a RHEL5/CentOS5 variant. After struggling with various 2 TB hard drive failures, I started using the Hitachi Deskstar and Ultrastar models, and had none of the green-feature or QA problems I had with all the others. However, I began seeing the SATA controller freeze under heavy loads, such as large rsync mirror operations. My controller is an on-board ICH7R running in AHCI mode. The symptoms are that the drive goes offline and begins to log thousands of SMART errors for Spin-Up Retries. Analysis confirms that the drive is 100% OK afterwards, apart from the huge SMART log. This happens on two servers and with multiple drives. Changing the drive to the 1.5 Gbps interface speed reduces the problem greatly but does not eliminate it.
I'm working with Hitachi engineering on this, and their only theory is that it has to do with "non-zero buffer offset". Apparently this is a feature of NCQ that allows data to be received out of sequence, and Hitachi says the ICH7R supports NCQ but not this feature. I am told that the BIOS is supposed to filter the offending commands, but BIOS vendors sometimes err. An alternative is to filter them in the driver, which I believe is libata. Does anyone know anything about this issue? Where might be a good place to ask about this with regard to libata? I've contacted the motherboard vendor about the BIOS issue, but I'd like to look into libata as well.
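One mitigation worth trying in the meantime (my suggestion, not Hitachi's) is to disable NCQ entirely on the affected drives, which sidesteps any buffer-offset handling by the controller. libata disables NCQ when the queue depth is reduced to 1:

```shell
# Disable NCQ on sda by dropping the queue depth to 1; takes effect
# immediately, does not survive a reboot (add to rc.local to persist):
echo 1 > /sys/block/sda/device/queue_depth
cat /sys/block/sda/device/queue_depth
# Alternatively, disable NCQ for all ports at boot with the kernel
# parameter: libata.force=noncq
```

If the freezes stop with NCQ off, that would support the non-zero-buffer-offset theory.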
I'm using a GeForce 9400GT with the 190.53 NVIDIA driver, configured for multiple X screens on both a CRT and a TV. This works fine when both the CRT and the TV are connected to the card, but I want to force the TV-out signal, so that the S-Video port outputs a signal even when no TV is connected. The cable run is too long for the card to detect the TV, which is why I want to force the output. I used Option "ConnectedMonitor" "CRT, TV"; this forces the driver to find two display devices and gives me two screens, but nothing is displayed on the TV unless it was already connected at startup. I don't understand why the forced TV output shows no image when the TV is connected later.
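When the TV cannot be detected, the driver may still guess the output format and standard wrong even though ConnectedMonitor forces the device to exist. A hedged xorg.conf sketch (the option names come from the NVIDIA driver README; the identifier and the chosen values are assumptions for your setup):

```
Section "Screen"
    Identifier "TVScreen"
    Device     "NvidiaCard"
    Option     "ConnectedMonitor" "CRT, TV"
    Option     "TVOutFormat"      "SVIDEO"   # force S-Video instead of autodetect
    Option     "TVStandard"       "NTSC-M"   # use your local standard, e.g. PAL-B
EndSection
```

With no TV attached the driver cannot query a standard over DDC, so forcing both the format and the standard is usually necessary for a usable signal.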
I use CentOS 5.3 with kernel 2.6.18-128.1.6.el5 for Clonezilla. It would be simpler for me if every drive, IDE or SATA, were detected as a /dev/sdX device, because when I clone computers with Clonezilla it matters which type of drive is connected to the target computer. As it stands, I must first clone the computers with IDE drives and then the ones with SATA drives. If all drives were detected as /dev/sda, I could clone all the computers at once! Is there a simple way to enable the new libata library without recompiling the kernel? Are there already-compiled kernels with this feature?
I'm using Ubuntu in my office. I have two network-connectivity issues to report.
1. Wireless doesn't work after resuming from suspend/hibernate. The workaround I follow is:
a. Turn off the Wi-Fi and restart the computer (it won't shut down; I have to force it with the power button).
b. Reinstall network-manager_0.8-0ubuntu3_amd64.deb and network-manager-gnome_0.8-0ubuntu3_amd64.deb.
c. Restart the computer (again it won't shut down without forcing).
d. Now Wi-Fi detects the networks, and shutdown works fine from then on.
2. Sometimes the wired connection drops frequently (three times in two minutes). The physical connection is good and works well in Windows.
I installed Ubuntu 10.04 while I happened to have a Silicon Image PC ExpressCard plugged in, and the installer kindly added the sata_sil module, which is fine for that chipset. I think my earlier Ubuntu 9.04 installation used libata, which supports NCQ and has good legacy support for the Intel ICH7 chipset inside my Sony Vaio SZ notebook. As far as I can tell from URL, libata also supports Silicon Image chipsets. I wish to remove the Silicon Image SATA modules, replace them with libata, and then compare the performance of the two device drivers.
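One clarification that may simplify the plan: sata_sil is itself a libata low-level driver, so the comparison is really between low-level drivers (e.g. sata_sil for the ExpressCard versus ahci/ata_piix for the ICH7), not "sata_sil versus libata". If you still want to keep a module from loading, the usual mechanism is a blacklist entry; a sketch, assuming the module names apply to your hardware:

```shell
# Prevent the module from loading at boot (name is an assumption):
echo 'blacklist sata_sil' >> /etc/modprobe.d/blacklist.conf
update-initramfs -u    # rebuild the initramfs so the change takes effect
# After rebooting, check which kernel driver claimed each controller:
lspci -k | grep -A 3 -i sata
```

The `lspci -k` output names the driver in use per device, which is a quick way to confirm which driver you are actually benchmarking.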
I used sg_sata_ident to get the IDENTIFY DEVICE output from my SATA drive. The output shows the drive as supporting READ BUFFER/WRITE BUFFER. However, when I issue sg_read_buffer /dev/sda, I get an error: DRIVER_SENSE - Invalid command operation code.
I am trying to set up a Samba share that uses a common group, so that all users who connect to it have write permission on all files and folders. But I cannot get the "force group" option in smb.conf to work: when I create files on this share, they get the user's default group, not the group specified with "force group". In smb.conf, I have:
The strange thing is that creating folders works fine: the folders are created with the proper permissions and the group is assigned to them. It is only when creating files that it does not work. I have read through some documentation and other posts, but have been unsuccessful in setting this up.
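For reference, a minimal share definition of the kind described (the share name, path, and group are placeholders). The point worth checking is that "force group" sets the group owner but not the permission bits; "create mask" and "force create mode" control the mode of newly created files, which is often why files look wrong while directories look fine:

```
[shared]
    path = /srv/share
    writable = yes
    force group = sharegrp
    create mask = 0664
    force create mode = 0664
    directory mask = 2775
```

An alternative that works outside Samba entirely is to set the setgid bit on the directory (chmod 2775 /srv/share), so the kernel itself makes new files inherit the directory's group.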
I ran "yum update" on 03-06-2009 and it updated my samba version from
3.2.8-0.26.fc10.x86_64 to 3.2.11-0.30.fc10.x86_64
My network shares then became read-only! After some digging, it turns out my system is broken because of Samba bug 6291: the "force user" option is no longer working.
Ideally I would run another "yum update" and that would fix the problem. Apparently the bug has been fixed in Samba 3.4.0pre2; when can we expect that to be released and picked up by "yum update"?
Alternatively how do I get back to version 3.2.8-0.26.fc10.x86_64 ?
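A sketch of the rollback, assuming the old RPMs are still reachable (in the yum cache, a repo, or downloaded from the Fedora build system) and that your yum version supports the downgrade command:

```shell
# If the old packages are still available to yum:
yum downgrade samba-3.2.8-0.26.fc10.x86_64 samba-common-3.2.8-0.26.fc10.x86_64
# Otherwise install the downloaded old RPMs directly:
rpm -Uvh --oldpackage samba-3.2.8-0.26.fc10.x86_64.rpm
# Then stop "yum update" from reintroducing the broken version:
echo 'exclude=samba*' >> /etc/yum.conf
```

Remember to remove the exclude line once a fixed build lands in the repos.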
So I used Force Quit and the little window is stuck. It reads "click on a window to force the application to quit. To cancel press <ESC>". How the heck do I get rid of it? I tried xkill, but it does not work.
Why is it that Firefox in Fedora 14 will only display the [URL] website and no others ?!? It used to work. Is this the way to force me to upgrade to Fedora 15?
I'm looking for a simple way to force-log-off other users who may be logged into the OS. Windows has a simple way to go into Users and force other users to log off. All I have found for Ubuntu are terminal methods. Are there no GUI tools to accomplish this?
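For completeness, the terminal version is short: logging a user out amounts to killing their session processes. A sketch ("username" is a placeholder):

```shell
# List login sessions and who owns each terminal:
who -u
# Kill every process owned by that user; their session ends immediately:
pkill -KILL -u username
```

Note this is abrupt: the user loses any unsaved work, so a GUI tool would do essentially the same thing under the hood.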
I am trying to force a resolution of 800x600 through xorg.conf. How do I do that? The story is the following: I am trying to play a game (Theocracy) on my Toshiba NB100 (maximum resolution 1024x600), but the game supports only 800x600. To play I am using xgame [URL], which has the option to use a separate xorg.conf file to run the game. Even though I created a new xorg.conf1 file whose "Screen" section contains only Modes "800x600", the game still runs at 1024x600. How do I force 800x600 through xorg.conf?
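If the driver drops 800x600 because it is not in the panel's probed mode list, you can usually add and force the mode at runtime with xrandr instead of (or in addition to) xorg.conf. A sketch; the output name LVDS is an assumption (check yours with `xrandr -q`), and the modeline values come from `gtf`:

```shell
# Generate a modeline for 800x600 at 60 Hz:
gtf 800 600 60
# Register that modeline, attach it to the panel output, and switch to it:
xrandr --newmode "800x600_60.00" 38.25 800 832 912 1024 600 603 607 624 -hsync +vsync
xrandr --addmode LVDS 800x600_60.00
xrandr --output LVDS --mode 800x600_60.00
```

You can run the last command before launching the game and switch back to 1024x600 the same way afterwards.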
I have a Dell Inspiron 9200 with an Intel Pentium M 1.8 GHz processor that supports SpeedStep. For some reason the speed stepping doesn't seem to be working. Is there a way for me to force the CPU to run at a higher speed than the current 600 MHz? I installed cpufreq, but the commands I give it from the cpufreq wiki don't seem to be doing much.
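A sketch using the cpufrequtils userspace tools; this assumes a cpufreq driver such as speedstep-centrino or acpi-cpufreq is already loaded for the Pentium M (if `cpufreq-info` reports no driver, that is the thing to fix first):

```shell
# Show the detected driver, frequency limits, and current governor:
cpufreq-info
# Force the CPU to stay at its highest frequency:
cpufreq-set -g performance
# Or pin an explicit frequency (requires the userspace governor):
cpufreq-set -g userspace
cpufreq-set -f 1.8GHz
```

If the machine still sits at 600 MHz under the performance governor, the throttling may be coming from elsewhere (e.g. thermal or BIOS limits) rather than from cpufreq.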
Is there a way to force a VPN connection so that non-encrypted traffic is blocked? The aim is to set up a box that will use only VPN connections and block any traffic that does not go through the VPN.
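This is usually done with a firewall "kill switch": allow loopback, allow the traffic that establishes the tunnel, allow everything going through the tunnel interface, and drop the rest. A minimal iptables sketch; the server address, port 1194 (OpenVPN's default), and the tun0 interface name are assumptions for your setup:

```shell
VPN_SERVER=203.0.113.10   # placeholder address of your VPN server
iptables -F OUTPUT
iptables -A OUTPUT -o lo -j ACCEPT                                 # local traffic
iptables -A OUTPUT -d "$VPN_SERVER" -p udp --dport 1194 -j ACCEPT  # tunnel setup
iptables -A OUTPUT -o tun0 -j ACCEPT                               # traffic inside the VPN
iptables -P OUTPUT DROP                                            # block everything else
```

With these rules, if the VPN drops, all traffic stops instead of silently falling back to the plain connection.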
There has to be a way to force your ISP to give you a new IP without physically unplugging the ADSL/cable line or physically powering off the router.
Under what circumstances does the ISP assign a new IP? Are there any software tricks to do this? Note that there is an ADSL router between my Linux workstation and the ISP, with NAT going on, i.e. the workstation has a local IP internally.
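Since the router holds the public lease, the renewal has to happen on the router itself (most have a release/renew button in their web interface), not on the workstation behind NAT. Where a Linux box is directly connected, the usual tricks are a DHCP release/renew and, because many ISPs key the lease to the MAC address, changing the MAC before renewing. A sketch with placeholder interface and MAC:

```shell
# Only applies where this box itself holds the public lease:
dhclient -r eth0   # release the current lease
dhclient eth0      # renew; the same MAC often gets the same IP back
# Many ISPs assign by MAC, so spoofing a new one first is the usual trick:
ifconfig eth0 down
ifconfig eth0 hw ether 00:16:3e:11:22:33   # placeholder MAC
ifconfig eth0 up
dhclient eth0
```

Some home routers offer the same MAC-clone option in their configuration pages, which achieves the effect without touching the workstation.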
I am using OpenDNS on my current Linux box, and I was wondering if there is a way to force the DNS settings to stay the same even if root tries to change them (my dad wants password-protected content filtering, and I still want root access on my computer...).
I'm programming a software system that consists of multiple processes. It is written in C++ under Linux, and the processes communicate among themselves using Linux shared memory.
Usually in software development, performance optimization is left to the final stage, and here I ran into a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one physical CPU) it was only able to use 3 cores, wasting 25% of the CPU power on the former and more than 60% on the latter. After much research, and having ruled out mutex and lock contention, I found that the time was being wasted in shmdt/shmat calls (detaching from and attaching to shared memory segments). After some more research, I found that these CPUs, usually AMD Opterons and Intel Xeons, use a memory system called NUMA, which basically means that each processor has its own fast "local" memory, while accessing memory belonging to another CPU is expensive.
After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread within them. This seems to kill performance, as processes are constantly accessing each other's memory.
Now the question is: is there any way to force groups of processes to execute on the same CPU? I don't mean forcing them always onto one specific processor (I don't care which one they run on), although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule its "brother" process (the one it communicates with through shared memory) on that same processor, so that performance is not penalized.
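Linux exposes exactly this through CPU affinity: `taskset` from the command line (or `sched_setaffinity()` inside the C++ code) restricts a process to a set of cores, and on NUMA machines `numactl` can additionally bind its memory allocations to the matching node. Pinning all communicating processes to the cores of one physical CPU keeps their shared memory local. A sketch; the core numbers, node number, and program names are placeholders:

```shell
# Pin two communicating processes to cores 0-3 (one physical CPU):
taskset -c 0-3 ./producer &
taskset -c 0-3 ./consumer &
# On a NUMA box, also bind memory to the same node:
#   numactl --cpunodebind=0 --membind=0 ./producer
# Change the affinity of an already-running process by PID:
taskset -cp 0-3 12345
```

This pins the group to a core set rather than to one core, so the scheduler can still balance within the group; there is no stock mechanism that says "schedule these two together wherever you like", so a shared fixed core set is the standard approximation.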
I have a Dell Inspiron 530 with onboard ICH9R RAID support, which I used successfully with Fedora 9 in a striped configuration. Upon moving to Fedora 13 (a fresh, clean install from scratch), I've noticed that it no longer uses dmraid; it now appears to use mdadm. Additionally, during the install I had to select "load special drivers" (or something close to that) for it to find my array, which I never had to do with F9. While the install appears to work and the system subsequently runs, my array reports differences, presumably because the firmware is trying to manage what mdadm is also managing. More importantly, I can no longer take a full image and successfully restore it as I could with the dmraid F9 install. Is there any way to force F13 to use the dmraid it used successfully before?
I have some files in my user trash bin which don't get emptied. Can you please tell me where I can find my trash bin when using Dolphin as superuser? Or how can I force-empty it? I tried /home/.Trash-0/files, but it's all empty.
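A sketch, assuming a KDE 4 setup: Dolphin keeps the per-user trash under ~/.local/share/Trash (the /home/.Trash-0 layout belongs to the older spec used for non-home volumes), split into files/ for the contents and info/ for the metadata, and both must be cleared:

```shell
# Inspect the actual trash contents and metadata:
ls ~/.local/share/Trash/files ~/.local/share/Trash/info
# Force-empty it from a shell (irreversible!):
rm -rf ~/.local/share/Trash/files/* ~/.local/share/Trash/info/*
# Anything trashed while running as root lands in root's own trash:
#   sudo rm -rf /root/.local/share/Trash/files/*
```

Stale entries in info/ without a matching file are a common reason the trash icon refuses to empty, so clearing both directories usually fixes the stuck state.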
I bought this USB wireless NIC without realizing that v3 has been on the market for about the past month. I cannot use NDISWRAPPER for my intended purpose. A Google search reveals at least one out-of-luck user with no answers in the Ubuntu forums. I can't even find out what chipset is in this thing; it doesn't seem to be published anywhere (is there a utility that interrogates USB NICs?). So I'm looking for suggestions before I give up on this device. Below are the typical reports; people might notice that the VendorID/ProductID doesn't return much info on the Internet. Is there a way to force-try the driver for v2 of this NIC? I'm not familiar with a procedure that doesn't require some level of device recognition.
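If you know (or can guess) which driver handles v2 of the NIC, the USB core lets you offer your device's VendorID/ProductID to that driver at runtime through its new_id file, with no recompiling. A sketch; the driver name and IDs below are placeholders for whatever your lsusb output shows:

```shell
# Identify the device's VendorID:ProductID pair:
lsusb
# Load the suspected v2 driver, then hand it your device's IDs
# (rt2870sta and 148f:3070 are placeholders):
modprobe rt2870sta
echo '148f 3070' > /sys/bus/usb/drivers/rt2870/new_id
dmesg | tail   # watch whether the driver claims the device
```

If the v3 hardware really uses a different chipset the driver will simply fail to initialize it, so this is a safe way to test the guess.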
My headphones broke, so audio only works in the right speaker. As a temporary fix I want to force all sounds to play in mono through the right speaker. I run Ubuntu 9.10. Is there a way to accomplish this?
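With the PulseAudio shipped in Ubuntu 9.10, module-remap-sink can create a virtual output whose two input channels both feed the right speaker. A sketch; the master sink name below is an assumption (list yours with `pactl list short sinks`):

```shell
# Create a sink that sends both stereo channels to the right speaker:
pactl load-module module-remap-sink sink_name=right_only \
    master=alsa_output.pci-0000_00_1b.0.analog-stereo \
    channels=2 master_channel_map=front-right,front-right \
    channel_map=front-left,front-right
# Then select "right_only" as the output device in Sound Preferences.
```

Because it is loaded at runtime, the remap disappears on reboot; to make it permanent, the same load-module line can go into /etc/pulse/default.pa.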
I sshed into a Linux machine (bash shell) from a public Windows machine (in our lab) and forgot to log out. I'm now back at my seat in another room and I am too lazy to walk back and log out that session; I can ssh into the Linux machine from my current PC though. Can I force-logout the other session from a new SSH session?
When I ssh to the Linux box from my current PC and run the users command, I can see that I'm still logged in there; my name is listed twice, once for the current session and once for the session from the lab PC.
I don't have root privileges on the said machine, but I guess that shouldn't matter as I'm just trying to log out myself.
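Right, no root is needed: each login session runs on its own pseudo-terminal, and you can kill the shell attached to the other one because it is your own process. A sketch; the pts number is a placeholder for whatever `who -u` shows for the lab session:

```shell
# Find the terminal of each of your sessions:
who -u                             # or: ps -u "$USER" -o pid,tty,cmd
tty                                # the pts of the session you are typing in NOW
# Kill the shell on the other terminal (double-check it is not your own!):
pkill -KILL -t pts/0
```

Once its shell dies, the abandoned SSH session is closed and disappears from the users listing.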
I'm trying to upgrade to Firefox 5 on RHEL5, and am getting the following error:
./firefox-bin: /usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by /home/isdtc/tdiakiw/bin/firefox5/firefox/libxul.so)
As the computer is a work machine, I don't have access to update the libraries directly. To try to get around this, I downloaded the libstdc++.so.6.0.10 library. Running strings libstdc++.so.6.0.10 | grep GLIBCXX shows:
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
...
Is there some way of forcing Firefox to use this library instead? I have tried adding the directory containing the new library (with a proper .so name) to LD_LIBRARY_PATH and then running Firefox, but I'm still getting the same error message.
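The usual recipe, sketched below with paths partly taken from the post and partly assumed: put the library in a private directory, create the soname symlink the dynamic loader actually looks for (libstdc++.so.6, not the full versioned name), export LD_LIBRARY_PATH, and launch. One common pitfall is that the `firefox` wrapper script may reset the environment, in which case exporting the variable inside the wrapper itself is the fallback:

```shell
mkdir -p ~/lib
cp libstdc++.so.6.0.10 ~/lib/
ln -sf libstdc++.so.6.0.10 ~/lib/libstdc++.so.6   # the soname the loader resolves
LD_LIBRARY_PATH=~/lib:$LD_LIBRARY_PATH ~/bin/firefox5/firefox/firefox
# Verify which libstdc++ actually gets picked up by the real binary:
LD_LIBRARY_PATH=~/lib ldd ~/bin/firefox5/firefox/firefox-bin | grep libstdc++
```

If `ldd` still resolves to /usr/lib/libstdc++.so.6, the symlink name or the exported path is wrong; fixing that is almost always the missing step behind a persistent GLIBCXX error.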