After “disassembling” the historic 8086 processor, Ken Shirriff – known for his meticulous reverse engineering of the chips that made computing history – has turned his attention to the Intel 386 CPU, which the Santa Clara company introduced back in 1985.
The Intel 386 processor represents a crucial turning point in the evolution of modern computers. At first glance, it may seem like just an “ancestor” in the x86 family, but the 386 played an instrumental role in several key aspects of 20th century computing.
Intel 386 torn to pieces
The Intel 386 (full name: Intel 80386) marked the transition to 32-bit architecture, establishing a de facto standard that would dominate the computing landscape for decades. Its introduction also cemented the importance of the x86 architecture, not only for Intel but for the entire computer industry. It was a significant change, as it marked the end of IBM’s monopoly on the PC market, paving the way for new leaders such as Compaq.
Shirriff highlights the salient features of the Intel 386 processor, focusing on its evolution and on the key findings that emerge from an analysis of the silicon die. A closer look at the die photos reveals substantial changes in the transition from the 1.5 µm process to the 1 µm process. This roughly 60% size reduction brought not only greater transistor density but also significant changes to the processor layout. The move from 1.5 µm to 1 µm might seem like a negligible technical detail, but it had a significant impact, lowering production costs. This strategy of shrinking the processor before the next transition to a new microarchitecture anticipated what would later become Intel’s so-called tick-tock strategy.
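As a rough sanity check on the scale of that shrink (an illustrative back-of-the-envelope calculation, not taken from Shirriff’s analysis): if every linear feature scales from 1.5 µm down to 1 µm, the die area scales with the square of the linear factor.

```python
# Idealized scaling of the 1.5 µm -> 1 µm process shrink.
old_feature = 1.5  # µm, CHMOS-III gate channel length
new_feature = 1.0  # µm, CHMOS-IV gate channel length

linear_scale = new_feature / old_feature      # ~0.667: each dimension shrinks to 2/3
area_scale = linear_scale ** 2                # ~0.444: area shrinks with the square
area_reduction_pct = (1 - area_scale) * 100   # ~56% smaller die area

print(f"linear scale: {linear_scale:.3f}")        # 0.667
print(f"area scale:   {area_scale:.3f}")          # 0.444
print(f"area reduction: {area_reduction_pct:.0f}%")  # 56%
```

Under this idealized uniform scaling, the die area drops to about 44% of the original, broadly consistent with the roughly 60% size reduction cited above (real shrinks do not scale every structure uniformly).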
In the tick phase, Intel introduces a new, more advanced process technology. Engineers focus on upgrading chip manufacturing, shrinking transistors and improving power efficiency, while the processor architecture remains largely unchanged from the previous generation.
In the tock phase, Intel introduces a completely new or substantially revised architecture while keeping the manufacturing process of the tick phase. Here the focus is on significant performance improvements, new instructions and features, and a more advanced processor design.
The signatures of the Intel 386 designers
Shirriff particularly emphasizes the discovery of the signatures of the designers who oversaw the development of the Intel 386. It is in fact customary for silicon dies to bear the initials of the engineers who worked on the project. The expert points out, however, that the 386 DX die carries an unusually large number of initials. “I think the designers put their initials next to the unit they worked on, but I couldn’t identify most of the names,” Shirriff adds.
The Intel 386 processor die with the designers’ initials enlarged. Source: Examining the silicon dies of the Intel 386 processor.
The original 386 was built on a 1.5 µm process called CHMOS-III (the measurement corresponds to the gate channel length of each transistor). Around 1987, Intel switched to an improved process called CHMOS-IV with 1 µm features, allowing a considerably smaller die for the 386 processor.
Interestingly, in the move to 1 µm many of the designers’ initials were removed, adding an intriguing element to the story of Intel’s chip design.
The design of the Intel 386 chip
The Intel 386 design process proved to be an interesting journey, marking the transition to automated design systems and to heavier use of simulation. Around that time, company officials realized that more automation would be needed to complete a chip as complex as the 386 on schedule.
Making a large investment in automated tools, the 386 team completed the design ahead of schedule. In addition to proprietary CAD tools, the team made extensive use of standard Unix software such as make to manage the various design databases.
The 386 posed new design challenges compared with its predecessor, the 286. It was much more complex, with double the number of transistors, and it used fundamentally different circuits: while previous processors were built with NMOS transistors, the 386 moved to CMOS (a technology still in use today). The CHMOS process mentioned earlier featured two layers of metal instead of one, changing how signals were routed on the chip and requiring new design techniques.
The forbidden gap problem
In his detailed analysis, Shirriff discusses the forbidden gap problem that Intel faced when designing the 386. The forbidden gap is a critical range of spacings in which the arrangement of the metal layers on the silicon can cause problems. In CHMOS-III, specifically, the top metal layer could cross over or come close to the lower layer; Intel then had to deal with cases in which metal filaments touched each other, causing the chip to malfunction and making the product unreliable.
Source: A double layer metal CHMOS III technology.
Managing this issue required a careful approach to the arrangement of the metal layers, and to the distances between them, during chip design and manufacturing. This additional complication affected the overall yield of the 386 and required innovative solutions to ensure the chip worked properly. The diagram shows a cross-section of a CHMOS-III circuit, with an NMOS transistor on the left and a PMOS transistor on the right.
Mixed top-down and bottom-up design
The design of the 386 proceeded with both a top-down approach, starting from the definition of the architecture, and a bottom-up approach, designing standard cells and other basic circuits at the transistor level. The processor’s microcode, the low-level software that controlled the chip, was a critical component. It was developed with two CAD tools: an assembler and a microcode rule checker.
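Intel’s actual microcode tools were proprietary and are not public, but the idea of a microcode rule checker can be sketched: each microinstruction is a set of control fields, and the checker rejects illegal field combinations before the microcode is committed to the chip. The field names and rules below are invented purely for illustration.

```python
# Hypothetical sketch of a microcode rule checker. A microinstruction is a
# dict of control fields; each rule flags an illegal combination of fields.
# All field names and rules here are invented for illustration only.

RULES = [
    # (description, predicate that returns True when the rule is violated)
    ("ALU op needs two source registers",
     lambda mi: mi["alu_op"] != "NOP" and (mi["src_a"] is None or mi["src_b"] is None)),
    ("memory read and write cannot both be asserted",
     lambda mi: mi["mem_read"] and mi["mem_write"]),
]

def check_microinstruction(mi):
    """Return the descriptions of all violated rules (empty list if legal)."""
    return [desc for desc, violated in RULES if violated(mi)]

good = {"alu_op": "ADD", "src_a": "EAX", "src_b": "EBX",
        "mem_read": False, "mem_write": False}
bad = {"alu_op": "ADD", "src_a": "EAX", "src_b": None,
       "mem_read": True, "mem_write": True}

print(check_microinstruction(good))  # [] -- passes all rules
print(check_microinstruction(bad))   # both rules flagged
```

Running every microinstruction through such checks catches inconsistencies mechanically, which is exactly the kind of tedious verification that motivated Intel’s investment in CAD tooling.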
The high-level design of the chip (the RTL) was created and refined until it represented clock timing and time steps in detail.
RTL (Register Transfer Level) refers to a register-transfer-level description used when designing a processor or integrated circuit. It focuses on how data is transferred between registers during the execution of instructions. It is an intermediate step in processor design, sitting between the high-level design, which concerns the overall architecture of the processor, and the gate-level design, which involves the detailed transistor-level description of the processor’s individual components.
The RTL was written in MAINSAIL, a portable Algol-like language based on SAIL (Stanford Artificial Intelligence Language). Intel used a proprietary simulator called Microsim to simulate the RTL.
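MAINSAIL and Microsim are long gone, but the core idea of simulating an RTL description, namely that all registers update together once per clock step, can be sketched in modern Python. This is a toy model for illustration, not Intel’s actual RTL.

```python
# Toy register-transfer-level simulation: on each clock step, all next-state
# values are computed from the current register contents and then committed
# at once, mimicking how RTL describes data moving between registers per cycle.

def clock_step(regs):
    """One clock cycle: compute every next value, then update registers together."""
    nxt = {
        "acc": regs["acc"] + regs["b"],  # ALU adds B into the accumulator
        "b":   regs["acc"],              # B captures the old accumulator value
    }
    return nxt

regs = {"acc": 1, "b": 1}
for _ in range(5):
    regs = clock_step(regs)
print(regs["acc"])  # -> 13 (the transfers generate the Fibonacci sequence)
```

The key RTL property modeled here is the simultaneous update: both transfers read the old register values, so `b` receives the accumulator’s value from before the addition, just as two hardware registers clocked by the same edge would.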
The 5-meter-tall diagram of the Intel 386 processor
Using the 1985 Intel annual report as a source, Shirriff also reconstructed the historic image reproduced in the figure below: a diagram of the 386 processor that, in reality, stood 5 meters tall, a highly detailed and enlarged visual representation of the chip. The diagram shows the layout of the processor’s main components: the datapath, which includes the registers, the ALU (Arithmetic Logic Unit), the barrel shifter and the multiplication/division units, is positioned on the left; the microcode, the low-level instructions that control the processor’s operation, sits in the lower right corner.
Shirriff confesses that, given the complexity and artistic appearance of the diagram, it is hard to say whether it was created for engineering purposes or is a work of art worthy of being exhibited at, say, the Museum of Modern Art (MoMA) in New York.
In the opening image (source: Intel), the second person from the right is a young Pat Gelsinger, now CEO of the company, who played a key role in the development of the 80386 processor.