  • In many materials about, for example, the PCIe 5.0 interface, or about systems equipped with this interconnect, the unit GT/s - gigatransfers per second - appears. What does it really mean? The term came in with the PCI Express (PCIe) standard developed by PCI-SIG (the PCI Special Interest Group), and with each subsequent iteration phrases such as "doubling the transmission speed from 16 GT/s to 32 GT/s" have been repeated. But what exactly is a gigatransfer?

    We are used to quoting transfer rates in gigabits per second (Gbps, Gb/s), which makes it easy to imagine how many "0s and 1s" can be transferred over a given interface per unit of time. In the case of PCIe, however, the situation is different because of the data encoding used. In PCI Express, transmission is serial and the clock is embedded in the data stream itself. For the receiver to be able to "recover" the clock from the data stream, the coding scheme must ensure that there are enough edges in a given packet (transitions from 1 to 0 and from 0 to 1). To guarantee enough transitions, PCIe uses 8b/10b encoding - every eight bits are encoded as a 10-bit symbol, which the receiver then decodes back to 8 bits. This means that 10 bits must be sent for every eight bits of unencoded data. Let's look at PCIe 1.1 - a single lane of this interface has a raw bandwidth of 2.5 Gbps in each direction, so 5 Gbps in total. Since the bus must transmit 10 bits for every data byte sent, the effective data rate is:

    5 Gbps * 8/10 = 4 Gbps

    Thus, a 16-lane PCIe 1.1 link can carry 80 Gbps of encoded data, or 64 Gbps of unencoded data. PCIe 2.0 doubled these figures, so a single lane could carry 8 Gbps of unencoded data, and a 16-lane PCIe 2.0 link up to 128 Gbps of unencoded data, i.e. 16 gigabytes per second. Gigatransfers therefore describe the amount of raw data moved across the interface; to work out the real effective bandwidth from gigatransfers, the data encoding must also be taken into account.
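
    To make the arithmetic concrete, here is a minimal sketch (not part of the original article; the function name effective_gbps is just an illustration) that converts a raw line rate into effective throughput, assuming the 8b/10b encoding used by PCIe 1.x and 2.0:

```python
def effective_gbps(raw_gbps, payload_bits=8, encoded_bits=10):
    """Effective throughput for a raw line rate, given the encoding ratio.

    With 8b/10b encoding (PCIe 1.x/2.0) every 8 payload bits are sent
    as a 10-bit symbol, so only 8/10 of the raw rate carries data.
    """
    return raw_gbps * payload_bits / encoded_bits

# PCIe 1.1, one lane, both directions: 5 Gbps raw -> 4 Gbps of data
print(effective_gbps(5.0))            # 4.0
# PCIe 2.0, 16 lanes, both directions: 160 Gbps raw -> 128 Gbps = 16 GB/s
print(effective_gbps(5.0 * 2 * 16))   # 128.0
```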

    Taking the encoding into account for the first two generations of PCI Express is quite simple - 8b/10b encoding means the speed in GT/s is multiplied by 0.8 to get the effective throughput in Gbps. From PCIe 3.0 onwards, however, 128b/130b encoding is used: every 128 bits (16 bytes) is encoded as a 130-bit symbol. So for standards from version 3 onwards, the number of gigatransfers is multiplied by 128/130 (≈0.985) to get the effective interface bandwidth. This means that PCIe 5.0, which reaches 32 GT/s per lane in each direction - 64 GT/s in total - offers per lane 64 * 128/130, or about 63 Gbps, i.e. almost 7.88 gigabytes per second. With 16 lanes, roughly 126 gigabytes of data per second can be moved over such a link.
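
    Extending the same sketch to later generations (again only an illustration; the PCIE_GENERATIONS table and link_throughput_gbytes are assumed helper names, with per-lane rates taken from the publicly known PCIe figures):

```python
# Raw per-lane, per-direction rate in GT/s and encoding efficiency per generation
PCIE_GENERATIONS = {
    "1.1": (2.5,  8 / 10),     # 8b/10b
    "2.0": (5.0,  8 / 10),     # 8b/10b
    "3.0": (8.0,  128 / 130),  # 128b/130b
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def link_throughput_gbytes(generation, lanes=16, both_directions=True):
    """Approximate effective throughput of a PCIe link in gigabytes per second."""
    rate, efficiency = PCIE_GENERATIONS[generation]
    directions = 2 if both_directions else 1
    effective_gbps = rate * efficiency * lanes * directions
    return effective_gbps / 8  # bits -> bytes

print(round(link_throughput_gbytes("5.0"), 1))  # ~126.0 GB/s for an x16 link
```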

    Source: https://www.edn.com/electronics-news/4380071/What-does-GT-s-mean-anyway-

    About Author
    ghost666
    Translator, editor
  • #2 18011560
    ArturAVS
    Moderator
    I wonder why parallel transmission is being abandoned? After all, it's much faster by definition.
  • #3 18011567
    freebsd
    Level 42  
    Parallel transmission: trouble with routing many traces, trouble with keeping traces of equal length, noise, crosstalk.
  • #4 18011578
    ArturAVS
    Moderator
    freebsd wrote:
    trouble with routing many traces, trouble with keeping traces of equal length, noise, crosstalk.

    And do these problems not occur on a serial line?
  • #5 18011592
    DamianG
    Level 21  
    Yes, they do, but they are easier to get under control. In the case of e.g. disks, it is easier to run 7 wires/traces for SATA than 40 for ATA. It was because of the above-mentioned problems that the 80-conductor ribbon cable was introduced for ATA - every other wire was a ground. It helped... for a little while.
  • #6 18011616
    freebsd
    Level 42  
    ArturAVS wrote:
    And do these problems not occur on a serial line?
    They do, but it is easier to route four traces and run them at a high frequency than to route many traces and try to keep the frequency high (plus shield them). It is easier and cheaper this way. At the same time, a strictly parallel connection would have to carry a multiple of eight lines (I know: plus control lines and the possibility of using coding). I'm putting it briefly :-)
    There is one more benefit of the adopted solution: scaling. A single lane can be used, and more can be added as the device needs them.
  • #7 18011697
    Bojleros
    Level 16  
    With PCIe you can simply put a slower device into a faster slot. I have also occasionally missed boards with slots cut open at the factory - say the slot is PCIe x2, but you can put an x4 device into it, accepting the performance drop. Scalability + flexibility.

    I am not an HF specialist, so let me ask: is it not the case that with a parallel bus, wave phenomena force the trace lengths to be integer multiples or sub-multiples of the wavelength? Reflections are one thing, but is it not also about propagation times / group delays? It seems to me that with several serial buses you consider each line separately rather than all the lines at once, but I am not sure.
  • #8 18011762
    tronics
    Level 38  
    DamianG wrote:
    Yes, they do, but they are easier to get under control. In the case of e.g. disks, it is easier to run 7 wires/traces for SATA than 40 for ATA. It was because of the above-mentioned problems that the 80-conductor ribbon cable was introduced for ATA - every other wire was a ground. It helped... for a little while.

    There are fewer than 40 signals there, including 16 lines for data alone (the rest are control and address). In the case of PCI-E x16 the lines are:
    2 SMBus
    5 JTAG
    2 ref clock (1 differential pair)
    16 pairs for Rx and as many again for Tx
    plus 3 control signals. Does that seem like less to deal with? Not really. But thanks to LVDS you get high speed on the one hand and good noise immunity on the other. Could you make a parallel bus with LVDS? Sure. Only it would be inefficient in terms of flexibility (scaling, "changeability"). Today the processor talks with the memory directly, and with the chipset and peripherals through dedicated point-to-point links - be it HyperTransport, QPI, DMI or PCIe. In fact, processors now talk directly even with PCIe peripherals. It was not always so: the norm was to map all peripherals into the processor's address space through bridges (the north bridge mainly handled communication with memory, AGP or possibly integrated graphics, and the south bridge handled peripherals such as PCI, IDE or USB). That arrangement premiered well before PCIe was developed, and many years passed before the newer standards completely replaced the older solutions in the computer world.
    Now, where are parallel solutions still doing great, with no indication that this will change? RAM.
    And before PCI-E, SPI could be an example of scalable serial transmission.
  • #9 18011840
    ghost666
    Translator, editor
    tronics wrote:
    In the case of PCI-E x16 the lines are:
    2 SMBus
    5 JTAG
    2 ref clock (1 differential pair)
    16 pairs for Rx and as many again for Tx
    plus 3 control signals. (...)


    Only that, of the lines mentioned, neither SMBus nor JTAG has to be routed with controlled impedance. So for PCIe x1 you have 3 pairs of signals that you have to electrically "take care of" on the PCB. And thanks to that, you can even bring them out of the computer over a cable.
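
    (As a small aside, not from the original post: a sketch of how the number of controlled-impedance pairs grows with link width, assuming one Tx pair and one Rx pair per lane plus a single reference clock pair; controlled_impedance_pairs is just an illustrative name.)

```python
def controlled_impedance_pairs(lanes):
    """Differential pairs that need controlled-impedance routing on a PCIe link:
    one Tx pair and one Rx pair per lane, plus one reference clock pair."""
    return 2 * lanes + 1

print(controlled_impedance_pairs(1))   # 3  -> the "3 pairs" for PCIe x1
print(controlled_impedance_pairs(16))  # 33
```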

    As for LVDS, I believe SCSI had an iteration that ran over LVDS.

Topic summary

The discussion revolves around the concept of GT/s (gigatransfers per second) as used in PCIe (PCI Express) interfaces, particularly PCIe 5.0. Participants explore the differences between serial and parallel transmission, highlighting the advantages of serial connections, such as reduced complexity and improved scalability. They note that while parallel transmission can theoretically offer higher speeds, it faces challenges like crosstalk and the need for equal track lengths. The conversation also touches on the implications of data encoding in PCIe, where the clock signal is integrated into the data stream, making GT/s a more relevant measure than traditional gigabits per second (Gbps). The use of LVDS (Low-Voltage Differential Signaling) is mentioned as a method to enhance signal integrity in high-speed data transfers.
Summary generated by the language model.