PCI Express 4.0 due next year, could kill off GPU power cables

The next generation of the PCI Express connector is aiming to launch on motherboards next year and it looks like it could also clean up your build a bit. The new socket will be capable of delivering 300W of power to the GPU, which could remove the need for additional power cables for many graphics cards.

The Peripheral Component Interconnect Special Interest Group (PCI-SIG) made an appearance at Intel's Developer Forum last week to go over the future of PCI Express. The usual bandwidth improvements were touted: a full x16 PCIe 4.0 slot should deliver over 31 GB/s of bandwidth. On top of that, power delivery will also be boosted.
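For a rough sense of where that 31 GB/s figure comes from: PCIe 4.0 doubles the per-lane signalling rate to 16 GT/s while keeping PCIe 3.0's 128b/130b encoding, so a back-of-the-envelope calculation for a full x16 link gives

\[
16~\text{GT/s} \times \tfrac{128}{130} \div 8 \times 16~\text{lanes} \approx 31.5~\text{GB/s}
\]

in each direction, which is double what a PCIe 3.0 x16 slot manages today.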

Image Source: JohnsonJohnson via Tom's Hardware

The PCI Express 4.0 socket is said to deliver at least 300W of power, but according to reports, higher-power options could also be made available, meaning we could see up to 500W deliverable via the socket. Essentially, PCIe 4.0 could quite comfortably power something like a GTX 1080 without a power cable.
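To put those wattages in perspective, a back-of-the-envelope sketch (assuming the slot keeps delivering power at 12V, as current slots and auxiliary connectors do) works out to

\[
300~\text{W} \div 12~\text{V} = 25~\text{A}, \qquad 500~\text{W} \div 12~\text{V} \approx 42~\text{A}
\]

For comparison, a current PCIe slot is specified for 75W, with 6-pin and 8-pin cables adding 75W and 150W respectively, so a 180W card like the GTX 1080 needs at least one auxiliary cable today.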

While these new sockets might mean fewer cables running to the GPU itself, it likely means we will need some additional power connections on the motherboard, so you will still get some use out of your modular PCIe cable set.

KitGuru Says: It sounds like PCI Express 4.0 could bring some big changes. Though I must admit, I do quite like seeing a nice set of tidy cables trailing out of a GPU. Either way, we will learn more as we get closer to next year's launch for the tech. What do you guys think about the promises of PCIe 4.0? Would you like to rid yourself of GPU power cables? 

21 comments

  2. This idea I like. I’ve always disliked the GPU cables because of their positioning. Never understood why they couldn’t be on the end of the GPU, nearer the MB. Builds are going to get a lot cleaner with this!

  3. Cool, they could just as easily put the PCIe power connectors on the mobo to power the cards.

  4. R.I.P., 6- and 8-pins. Some motherboards already have extra Molex and SATA power headers onboard.

  5. There ARE GPUs with the auxiliary power headers at the head of the card o_O

  6. Christopher Lennon

    Good, I hope it destroys all those little “instagram” businesses that overcharge way, way too much for cable extensions….I was hoping they’d be destroyed by the DIY movement, but this will do…

  7. No… I don't feel relaxed knowing there is almost 42A running through my motherboard… do you realize how thick/wide those power traces would have to be? No thanks… increase the bandwidth, keep those external power connections.

  8. You guys aren't getting it: this will simply create an incentive for more powerful graphics cards. It is unlikely the external power cabling business would be made to vanish; both AMD and nVidia will make enthusiast-level products that exceed 300W post-PCIe 4.0.

  9. Maybe not your specific board, but boards made to handle multi-GPU setups already have all that power running through the PCIe slots in use alone. There are even Intel server boards with four sockets capable of running 145W TDP SKUs at once; that's a lot of amps, wouldn't you say? To say nothing of their mission-critical reliability.

  10. I don’t think so, GPU makers are always trying to bring the power consumption down, not increase it.

  11. Well, VRMs deliver more than 100A at around 1V to the GPU and CPU, so I don't see a problem.

  12. That is because they have to make it work within the current power limitation!
    If they could, they certainly would take advantage of that extra power (pun intended).

  13. Imagine the cable management potential, Clean AF!

  15. The power limitation stays exactly the same: your PSU.
    So no, I don't think this will happen.

  16. In an era when companies are seeking to improve power efficiency, more powerful graphics cards don't necessarily mean heavy electricity consumption anymore, and as ChrisZ already pointed out, you'll still be limited by your PSU's output.

  17. No, I doubt they will.
    300W GPUs already run too hot and are less efficient because of this heat.

  18. They could already easily make a graphics card with several 8-pin connectors for the same effect.

  19. Problem is, PCI-e 4.0 graphics cards that draw all their power through the slot won't be compatible with older mobos.

  20. It will not draw this much power through the slot. The increase will mainly come from the connectors.

  21. I was thinking along the same lines: wouldn't the 6- and 8-pin connectors just be added to the motherboard in this case, since you're not getting more power through the motherboard's existing connector? It would be like the 4-pin connectors that power the CPU, of which boards already have at least one and often a set of two. So we'd just have another 8 to 16 pins of PCIe power connectors going into the board so the board can power the card. It might help cable management and airflow, but the power has to come from somewhere.