After bifurcating the x8 PCIe slot on my board, I am having trouble getting a stable 8 GT/s link. I was able to split the x8 link into two x4 links by altering a bit in the IntelRcSetup BIOS module pertaining to the first IIO controller (IOU1). I wanted to do this because on my MSI X99A Godlike motherboard with a 40-lane 6850K CPU, running two video cards at x16 leaves only one slot, and it runs at x8. That last slot shares lanes with the onboard M.2 connector via a switch, so the M.2 can run either at x4 from the CPU's PCIe lanes or at x2 through the DMI/PCH. And if the onboard M.2 connector is routed through the CPU's PCIe lanes, it shuts off the x8 PCI-E slot entirely!
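For anyone wanting to try the same thing: the actual change comes down to the IOU bifurcation option that the IntelRcSetup IFR exposes. Below is a rough sketch of the grub setup_var route (you can also edit the module defaults in the BIOS image directly, which is closer to what I did); the variable offset and value here are placeholders only, so extract the IFR from your own BIOS image to find the real ones before writing anything:

# In a modded grub shell with the setup_var command.
# Offset 0x0C9 and value 0x0 are placeholders, NOT my real values; pull the
# actual offset/value for the IOU1 bifurcation option from an IFR dump of
# your own IntelRcSetup module.
setup_var 0x0C9          # read the current bifurcation value first
setup_var 0x0C9 0x0      # write the x4x4 split value from your IFR dump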
Just like any other x16 slot that gets bumped down to x8 when another card is placed in the slot next to it, I made the assumption that the latter half of that last PCI-E #5 slot is still electrically wired to carry bandwidth. So I hacked the BIOS to create two bifurcated buses, bought an x8-to-x4/x4 bifurcated riser card, and plugged it into that last slot. This allows me to run one M.2 NVMe SSD through the onboard connector at x4 in CPU mode and, at the same time, another NVMe PCIe SSD card through the latter half of what should have been an inoperable slot, also at x4!
The only issue I am having is that sometimes after a reboot the PCIe SSD throttles down to 2.5 GT/s according to HWiNFO64 and the SIV utility, and benchmarks confirm the drop. It is random and unpredictable, although after a cold start the link usually comes up at 8 GT/s. I can usually get it back to 8 GT/s by reinstalling the NVMe driver and rebooting. I have tried setting the Link Control registers to retrain the link at 8 GT/s, but on the fly it only comes back up to 5 GT/s; it seems to take a reboot after that to restore 8 GT/s.
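For reference, this is roughly what I have been trying in order to force the retrain (the device address 03:00.0 below is just an example standing in for the root port above the bifurcated slot, and the offsets are the standard PCIe capability ones; check your own addresses with lspci before writing anything):

# Read negotiated link speed/width from Link Status (PCIe cap offset 0x12)
setpci -s 03:00.0 CAP_EXP+12.w
# Set Target Link Speed to 8 GT/s (value 3 in bits 3:0 of Link Control 2, offset 0x30)
setpci -s 03:00.0 CAP_EXP+30.w=0003:000f
# Pulse the Retrain Link bit (bit 5 of Link Control, offset 0x10)
setpci -s 03:00.0 CAP_EXP+10.w=0020:0020

Note that the Retrain Link bit only does anything on the downstream (root port) side of the link, not on the SSD itself.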
I have also ruled out the riser card as the cause by plugging the PCIe SSD directly into the slot, and the problem goes away entirely once I remove the BIOS mod. I think there must be a signal-quality issue as a result of splitting the lanes, but I have no idea what needs to be done to make it stable. I have even been able to issue setpci commands against the Link Control register to move the link speed up and down, with mixed results. My feeling is that the issue is related to L0s/L1 exit latencies or to some difficulty with PCIe synchronization, much like a bad overclock that is short on voltage perhaps. I don't know.
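If it really is a Gen3 signal/equalization problem, one thing worth looking at (just a guess on my part, I have not confirmed it explains anything) is the equalization status bits in Link Status 2 on the root port; again, the address is only an example:

# Link Status 2 sits at offset 0x32 in the PCIe capability.
# Bit 1 = Equalization (8 GT/s) Complete, bits 2-4 = phases 1-3 successful.
setpci -s 03:00.0 CAP_EXP+32.w
# lspci decodes the same bits in readable form on the LnkSta2 line
lspci -s 03:00.0 -vv | grep -i lnksta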
If anyone with more experience with the peculiarities of the PCI-E bus could help, I would appreciate it.
Thank you.