This work reports a low-power implementation of a 60-Gb/s NRZ optical receiver (RX) in 14-nm bulk FinFET CMOS featuring a first-order digital CDR with high jitter tolerance (JTOL). The design includes a single phase rotator (PR) with low-complexity control logic suitable for high-speed applications. Multi-phase clock signals that drive the data/edge slicers are created by an open-loop quadrature clock generator...
We built a 4-channel photonic carrier with input/output SiN waveguides and a flip-chip-attached SOA array, incorporating end-to-end reflection management and mode matching. All channels demonstrate fiber-to-fiber gain of >10 dB and support error-free 4-λ × 25-Gb/s WDM links.
We have fabricated InP SOAs with lithographically defined etched facets. Their more precisely controlled length compared to cleaved SOAs promises improved coupling tolerances for PICs with flip-chip-attached gain blocks. Measured gain is around 20 dB and noise figures are 5–6 dB.
In the last 10 years, interconnects in many high-performance servers and supercomputers transitioned from copper to optics. In this presentation, a technological roadmap will be reviewed, focusing on the evolution of interconnect power and density efficiencies.
We present eye-width analysis for short multimode fiber links (<100 m) including modal noise and laser relative intensity noise. In contrast to the power budgeting used for traditional links, eye-width budgeting is better suited to short links.
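The abstract does not give its budgeting equations, but a common way to express a horizontal eye-width budget is the dual-Dirac jitter model, in which total jitter at a target BER is deterministic jitter plus a Q-scaled random-jitter term. The sketch below is illustrative only; the jitter values and the dual-Dirac form are assumptions, not taken from the paper.

```python
# Hedged sketch of an eye-width budget using the dual-Dirac jitter model.
# TJ(BER) = DJ + 2*Q(BER)*RJ_rms, remaining eye width = 1 UI - TJ.
# All numeric inputs below are hypothetical examples.

def eye_width_ui(dj_ui, rj_rms_ui, q=7.03):
    """Remaining horizontal eye opening in unit intervals (UI).

    dj_ui     -- total deterministic jitter (peak-to-peak, in UI)
    rj_rms_ui -- random jitter (one-sigma, in UI)
    q         -- Q-factor for the target BER (7.03 for BER = 1e-12)
    """
    total_jitter = dj_ui + 2 * q * rj_rms_ui
    return 1.0 - total_jitter

# Hypothetical link: 0.25 UI deterministic jitter, 0.01 UI rms random jitter
print(round(eye_width_ui(0.25, 0.01), 3))  # 0.609
```

A budget in UI, unlike a power budget in dB, makes the timing margin of a short link explicit, which is why eye-width budgeting suits links where jitter rather than loss dominates.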
An optical circuit switched network using a 3D-MEMS crossbar is demonstrated in a datacenter-scalable stream processing system. We developed a complete software control, routing and scheduling framework to interconnect clusters of blade servers.
A single-chip CMOS parallel optical transceiver, or Optochip, is presented that addresses the key metrics of power consumption, density, bandwidth, and cost, to enable large-scale parallel optical links through fiber or waveguide-arrays.
We demonstrate a low power optical interconnect transmitter which employs a 990 nm VCSEL with high efficiency and low threshold current, and a 130 nm CMOS driver. The power dissipated by the transmitter is 15.1 mW at 10 Gbps.
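The 15.1 mW at 10 Gbps figure can be restated as energy per bit, the usual efficiency metric for optical interconnects; mW divided by Gb/s is numerically pJ/bit. A minimal sketch of that conversion (the function name is my own, not from the paper):

```python
# Energy efficiency of a link: power in mW divided by data rate in Gb/s
# is numerically pJ/bit, since 1 mW = 1e-3 J/s and 1 Gb/s = 1e9 bit/s.

def energy_per_bit_pj(power_mw, rate_gbps):
    """Energy per bit in pJ/bit from power (mW) and data rate (Gb/s)."""
    return power_mw / rate_gbps

# Figures from the abstract: 15.1 mW at 10 Gb/s
print(energy_per_bit_pj(15.1, 10))  # 1.51
```

So the reported transmitter dissipates about 1.51 pJ per transmitted bit.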
We report a compact, low-profile transceiver with 24 transmitter and 24 receiver channels, each operating at 12.5 Gb/s. The achieved 300 Gb/s aggregate bi-directional data rate is the highest ever reported for parallel optical modules.
A chip-to-chip optical interconnect on a printed circuit board achieves a 160-Gb/s aggregate bidirectional data rate through 32 parallel polymer waveguides at 13.5 mW/Gb/s. This is the fastest, widest, and most integrated optical bus ever demonstrated.
We report here on the design, fabrication and high-speed performance of a novel parallel optical module with sixteen 10-Gb/s transmitter and receiver channels for a 160-Gb/s bidirectional aggregate data rate. The module utilizes a single-chip CMOS optical transceiver containing both transmitter and receiver circuits. 16-channel high-speed photodiode (PD) and VCSEL arrays are flip-chip attached to...
We review architectures enabling >100 Gb/s interconnects in data centers. Parallel optical interconnects are cost-effective for rack-to-rack interconnects. On-board optical waveguides offer data-rate scalability, density, and performance advantages over electrical interconnects.
High-end computing servers configured as symmetric multi-processor (SMP) systems rely on parallel high-speed links for interconnection between the processors. With each new generation of processor, the bandwidth of the SMP link is increased. Copper cables are still the technology of choice for this application, but with each increment in bandwidth, fiber-optic interconnects become more competitive...
Increased demand for performance continues to drive higher chip-internal clock frequencies and parallelism, as well as raise the demand for higher bandwidth and lower latencies. Today's copper digital communication links are limited by their loss characteristics, which are dominated at high data rates by the skin effect and dielectric loss (Broomall, 1997). Electrical copper links are typically used to...
Two 20 Gb/s optical transmitters are presented. They are part of a 4×12 array intended for backplane data links. The drivers are fabricated in 0.13 µm CMOS and include pre-emphasis and regulated output impedance. When coupled to 990 nm VCSELs, they provide optical modulation amplitude of 0 dBm and consume 70 mW and 120 mW.
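The 0 dBm optical modulation amplitude quoted above is a logarithmic power unit referenced to 1 mW. A small sketch of the standard dBm/mW conversion (the helper names are my own):

```python
import math

# Convert optical power between dBm and mW.
# By definition, P[dBm] = 10 * log10(P[mW]), so 0 dBm = 1 mW.

def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10)

def mw_to_dbm(p_mw):
    return 10 * math.log10(p_mw)

print(dbm_to_mw(0))    # 1.0  -> the 0 dBm OMA equals 1 mW
print(mw_to_dbm(2.0))  # about 3.01 dBm
```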
Terabus is based on a chip-like optoelectronic packaging structure (Optochip) assembled directly onto an organic card with integrated waveguides (Optocard). To date, Terabus has demonstrated 4×12-array optical transmitters and receivers operating at up to 20 Gb/s and 14 Gb/s per channel.
The paper reviews recent results from 100 Gb/s-class parallel interconnects for high-productivity computing systems (HPCS) and examines critical areas as they apply to parallel 850-nm VCSEL-based interconnects.