How does a computer's microprocessor work? Let's look at some important points about how it operates.


Almost everyone knows that the central processor is the main hardware component of a computer. But the circle of people who understand how a processor actually works is very limited. Most users have no idea about it, and when the system suddenly starts to slow down, many assume the processor is at fault and pay no attention to other factors. To fully understand the situation, let's look at some aspects of how the CPU operates.

What is a central processing unit?

What does the processor consist of?

If we talk about how an Intel processor or its competitor from AMD works, you need to look at how these chips are designed. The first microprocessor (by the way, it was from Intel: the 4004) appeared back in 1971. It could perform only the simplest addition and subtraction operations and processed only 4 bits of information at a time, i.e. it had a 4-bit architecture.

Modern processors, like those first ones, are based on transistors and are much faster. They are manufactured by photolithography on silicon wafers that are later cut into individual dies onto which the transistors are imprinted; doping of the circuit is performed by implanting accelerated boron ions in a special accelerator. In the internal structure of a processor, the main components are the cores, the buses and the other functional blocks, whose exact makeup differs between chip revisions.

Main Features

Like any other device, the processor is characterized by certain parameters which cannot be ignored when answering the question of how it works. The main ones are:

  • number of cores;
  • number of threads;
  • cache size (internal memory);
  • clock frequency;
  • bus speed.

For now, let's focus on the clock frequency. It’s not for nothing that the processor is called the heart of the computer. Like the heart, it operates in pulsation mode with a certain number of beats per second. Clock frequency is measured in MHz or GHz. The higher it is, the more operations the device can perform.

You can find out what frequency the processor runs at from its declared specifications or from the system information. But while processing commands the frequency can change, and during overclocking it can be pushed to extreme limits. Thus, the declared value is just an average indicator.

The number of cores is an indicator that determines the number of processing centers of the processor (not to be confused with threads - the number of cores and threads may not be the same). Due to this distribution, it is possible to redirect operations to other cores, thereby increasing overall performance.

How a processor works: command processing

Now a little about the structure of executable commands. If you look at how a processor works, you need to clearly understand that any command has two components - an operational one and an operand one.

The operation part specifies what the computer system must do at the given moment; the operand determines what data the processor should work on. In addition, a processor core can contain two computing units (pipelines), which divide the execution of a command into several stages (a minimal sketch of such an operation-plus-operand instruction format follows the list below):

  • instruction fetch;
  • decoding;
  • command execution;
  • accessing the processor's own memory;
  • saving the result.
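
To make the idea of an "operation part plus operand part" concrete, here is a minimal sketch in C of how such a toy instruction could be encoded and taken apart. The field widths, opcode names and the split into 4 + 12 bits are purely illustrative and do not correspond to any real processor.

    #include <stdint.h>
    #include <stdio.h>

    /* A toy 16-bit instruction: the top 4 bits say WHAT to do (the operation
       part), the bottom 12 bits say what to work ON (the operand part). */
    enum { OP_ADD = 0x1, OP_SUB = 0x2, OP_LOAD = 0x3 };

    static uint16_t encode(uint8_t opcode, uint16_t operand) {
        return (uint16_t)(((opcode & 0xF) << 12) | (operand & 0x0FFF));
    }

    int main(void) {
        uint16_t instr = encode(OP_ADD, 42);        /* "add the value 42"  */
        unsigned op    = instr >> 12;               /* operation part      */
        unsigned arg   = instr & 0x0FFF;            /* operand part        */
        printf("opcode=%u operand=%u\n", op, arg);  /* opcode=1 operand=42 */
        return 0;
    }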

Today separate caching is used, along with several levels of cache memory, which prevents two or more commands from contending for access to the same memory block.

Based on the type of command processing, computational processes are divided into linear (commands executed in the order in which they are written), cyclic and branching (instructions executed after the branch conditions have been evaluated).

Operations Performed

Among the main functions assigned to the processor, in terms of the commands or instructions executed, three main tasks are distinguished:

  • mathematical operations performed by the arithmetic logic unit (ALU);
  • moving data (information) from one type of memory to another;
  • making a decision on the execution of a command, and on its basis, choosing to switch to the execution of other sets of commands.

Interaction with memory (ROM and RAM)

In this process, the components to note are the buses and the read/write lines that connect the processor to the storage devices. ROM contains a fixed set of bytes. To read one, the processor first places the address of the required byte on the address bus, then the read line changes its state, and the ROM places the requested byte on the data bus.
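
Here is a minimal sketch in C of the read cycle just described. The variables rom, addr_bus, read_line and data_bus are invented for illustration; a real bus protocol involves many more signals and strict timing.

    #include <stdint.h>
    #include <stdio.h>

    static const uint8_t rom[16] = { 0xEA, 0x00, 0xF0, 0x4C };  /* fixed bytes */

    static uint8_t addr_bus;   /* the CPU places the address here        */
    static int     read_line;  /* the CPU asserts this to request a read */
    static uint8_t data_bus;   /* the ROM answers on this bus            */

    static void rom_respond(void) {
        if (read_line)                 /* the read line changed state...    */
            data_bus = rom[addr_bus];  /* ...so the ROM drives the data bus */
    }

    int main(void) {
        addr_bus  = 2;   /* 1. the required address goes onto the address bus */
        read_line = 1;   /* 2. the read line is asserted                      */
        rom_respond();   /* 3. the ROM places the requested byte on the bus   */
        printf("byte at address %u = 0x%02X\n", addr_bus, data_bus);
        return 0;
    }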

But processors can not only read data from RAM, they can also write it; in that case the write line is used. Strictly speaking, a modern computer could in purely theoretical terms do without RAM at all, since modern microcontrollers are able to keep the necessary bytes of data directly in the memory of the processor chip itself. But there is no way to do without ROM.

Among other things, the system starts in hardware-testing mode (the BIOS routines), and only then is control transferred to the operating system loader.

How to check if the processor is working?

Now let's look at some aspects of checking the processor's performance. It must be clearly understood that if the processor were not working, the computer would not be able to start loading at all.

It is another matter when you need to see how heavily the processor is being used at a given moment. This can be done from the standard Task Manager (next to each process it shows what percentage of CPU load that process creates). To see this parameter visually, you can use the Performance tab, where the changes are tracked in real time. Advanced parameters can be viewed with special programs, e.g. CPU-Z.

In addition, you can enable the use of multiple processor cores through the system configuration utility (msconfig) and its additional boot options.

Possible problems

Finally, a few words about problems. Many users often ask: why does the processor work but the monitor does not turn on? This situation has nothing to do with the central processor. The fact is that when any computer is switched on, the graphics adapter is tested first and only then everything else. Perhaps the problem lies in the processor of the graphics chip itself (all modern video accelerators have their own graphics processors).

To continue the analogy with the human body: if the heart stops, the whole body dies. It is the same with computers: if the processor does not work, the entire computer system "dies".

A modern processor takes the form of a small rectangle containing a silicon die. The die itself is protected by a special housing made of plastic or ceramic, which shields all the main circuitry that makes full CPU operation possible. If the outside is extremely simple, what about the circuitry itself and how the processor is designed internally? Let's look at this in more detail.

The CPU does not consist of a large number of different elements. Each of them performs its own function, passing along data and control signals. Ordinary users are used to distinguishing processors by clock speed, the amount of cache memory and the number of cores, but that is not all that ensures reliable and fast operation. Each component deserves special attention.

Architecture

The internal design of CPUs often differs from each other; each family has its own set of properties and functions - this is called its architecture. An example of the processor design can be seen in the image below.

But many people are used to understanding processor architecture in a slightly different sense. From a programming point of view, it is defined by the set of instructions the processor is able to execute. If you buy a modern CPU, it is most likely of the x86 architecture.

Cores

The main part of the CPU is called the core; it contains all the necessary blocks and carries out the logical and arithmetic tasks. The figure below shows what each functional block of the core does:

  1. Instruction fetch unit. Instructions are read from the address indicated in the program counter. The number of commands read simultaneously depends on the number of decoding units installed, so that every clock cycle is loaded with as many instructions as possible.
  2. Branch predictor. It is responsible for the optimal operation of the instruction fetch unit: it predicts the sequence of instructions to be executed so that the core pipeline stays loaded.
  3. Decoding unit. This part of the core is responsible for translating instructions into the specific operations the core must perform. Decoding is difficult because instructions vary in length; in the newest processors there are several such units in one core.
  4. Data fetch units. They take information from RAM or cache memory, fetching exactly the data needed at the moment to execute the instruction.
  5. Control unit. The name speaks for itself: it is the most important element of the core, since it coordinates the work of all the other blocks and makes sure every action is performed on time.
  6. Result saving unit. It writes the result to RAM once instruction processing is complete; the storage address is specified by the executing task.
  7. Interrupt handling unit. Thanks to interrupts the CPU is capable of multitasking: it can suspend the progress of one program and switch to another instruction stream.
  8. Registers. Temporary results of instructions are stored here; this component can be thought of as a small, very fast RAM. Its capacity usually does not exceed a few hundred bytes.
  9. Program counter. It stores the address of the instruction that will be executed on the next processor cycle.

System bus

The system bus connects the CPU with the devices that make up the PC. Only the processor is connected to it directly; the remaining elements are connected through various controllers. The bus itself contains many signal lines through which information is transmitted. Each line has its own protocol, which provides communication via the controllers with the other connected computer components. The bus has its own frequency; the higher it is, the faster information is exchanged between the components connected to the system.

Cache memory

The performance of a CPU depends on its ability to fetch instructions and data from memory as quickly as possible. The cache memory reduces the execution time of operations because it acts as a temporary buffer that provides fast transfer of data between the CPU and RAM.

The main characteristic of cache memory is its division into levels: the higher the level, the slower and larger the memory. The fastest and smallest is the first level (L1). The principle of operation is very simple: the CPU reads data from RAM and places it in the cache, evicting the information that has not been accessed for the longest time. If the processor needs that data again, it will get it faster thanks to this temporary buffer.
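
The "keep what was used recently, evict what was not" behavior described above can be sketched in a few lines of C. This is a deliberately tiny, fully associative cache with an invented interface (slow_ram_read stands in for real RAM access); real caches are organized very differently, but the principle is the same.

    #include <stdint.h>
    #include <stdio.h>

    #define LINES 4                      /* a deliberately tiny cache */

    /* One cache line: which address it mirrors and when it was last used. */
    struct line { uint32_t addr; uint32_t value; unsigned last_used; int valid; };

    static struct line cache[LINES];
    static unsigned now;                                   /* logical clock */

    static uint32_t slow_ram_read(uint32_t addr) { return addr * 10; } /* stub */

    static uint32_t cached_read(uint32_t addr) {
        ++now;
        int oldest = 0;
        for (int i = 0; i < LINES; i++) {
            if (cache[i].valid && cache[i].addr == addr) {  /* hit: fast path */
                cache[i].last_used = now;
                return cache[i].value;
            }
            if (cache[i].last_used < cache[oldest].last_used) oldest = i;
        }
        /* miss: fetch from slow RAM, evict the line untouched the longest */
        cache[oldest] = (struct line){ addr, slow_ram_read(addr), now, 1 };
        return cache[oldest].value;
    }

    int main(void) {
        printf("%u\n", cached_read(7));   /* miss: goes to "RAM"  */
        printf("%u\n", cached_read(7));   /* hit: served by cache */
        return 0;
    }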

Socket (connector)

Because the processor has its own connector (a socket or slot), you can easily replace it if it fails or upgrade the computer. Without a socket, the CPU would simply be soldered to the motherboard, making later repair or replacement more difficult. It is worth noting that each socket is intended exclusively for certain processors.

Users often inadvertently buy an incompatible processor and motherboard, which causes additional problems.

The CPU is the main working component of a computer: it performs arithmetic and logical operations, controls the computing process and coordinates the operation of all the computer's devices.

The central processor generally contains:

    an arithmetic logic unit;

    data buses and address buses;

    registers;

    a program counter;

    cache: very fast memory of small volume;

    a mathematical coprocessor for floating-point numbers.

Modern processors are implemented as microprocessors. Physically, a microprocessor is an integrated circuit: a thin rectangular wafer of crystalline silicon with an area of only a few square millimeters, on which the circuits that implement all of the processor's functions are placed. The die is usually housed in a flat plastic or ceramic package and connected by gold wires to metal pins so that it can be attached to the computer's motherboard.

Main characteristics of the processor:

    Performance is the main characteristic, showing the speed at which a computer performs information-processing operations. It, in turn, depends on the following characteristics:

    Clock frequency - determines the number of processor cycles per second.

    Bit capacity - determines the size of the minimum piece of information, called a machine word.

    Address space - the width of the address bus, that is, the maximum amount of RAM that can be installed in the computer (a worked example follows this list).
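
For example, a processor with a 32-bit address bus can address 2^32 = 4,294,967,296 bytes, i.e. 4 GB of RAM; widening the address bus to 36 bits raises that limit to 64 GB.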

The operating principle of the processor

The processor is the main element of a computer. It directly or indirectly controls all devices and processes occurring in the computer.

In the design of modern processors, there is a clear trend towards a constant increase in clock frequency. This is natural: the more operations a processor performs, the higher its performance. The maximum clock frequency is largely determined by the existing microcircuit production technology (the smallest achievable element sizes, which determine the minimum signal transmission time).

In addition to increasing the clock frequency, processor performance is raised by developers using less obvious techniques associated with new architectures and information-processing algorithms. Let's look at some of them using the example of the Pentium processor (P5) and subsequent models.

Let's list the main features of the Pentium processor:

    pipeline information processing;

    superscalar architecture;

    the presence of separate cache memories for commands and data;

    presence of a transition address prediction block;

    dynamic program execution;

    presence of a floating point calculation unit;

    support for multiprocessor operation;

    availability of error detection tools.

The term "superscalar architecture" means that the processor contains more than one computing unit. These computational units are more often called pipelines. Note that the first superscalar architecture was implemented in the domestic computer “Elbrus-1” (1978).

The presence of two pipelines in the processor allows it to simultaneously execute (complete) two commands (instructions).

Each pipeline divides the command execution process into several stages (for example, five):

    fetching (reading) a command from RAM or cache memory;

    decoding the command, i.e. determining the code of the operation being performed;

    command execution;

    access to memory;

    storing the obtained results in memory.

To implement each of the listed stages (each operation), a separate device-stage is used. Thus, there are five stages in each Pentium processor pipeline.

In pipeline processing, one cycle of the synchronizing (clock) frequency is allocated to each stage. In each new cycle, the execution of one command finishes and the execution of a new command begins. This type of command execution is called stream processing.

Figuratively, it can be compared to a production conveyor (flow line), where at each station the same operation is always performed, though on a different item. When a finished product leaves the line, a new one enters it, and the rest of the products are at different stages of readiness. The movement of items from station to station must occur synchronously, on special signals (in the processor these are the cycles generated by the clock generator).

The total execution time for one instruction in a five-stage pipeline would be five clock cycles. In each clock cycle, the pipeline will simultaneously process (execute) five different instructions. As a result, five commands will be executed in five clock cycles. Thus, pipelining increases processor performance, but it does not reduce the execution time of a single instruction. The gain is obtained due to the fact that several commands are processed at once.

In fact, pipelining even increases the execution time of each individual command due to the additional costs associated with the organization of the pipeline. In this case, the clock frequency is limited by the speed of operation of the slowest stage of the conveyor.

As an example, consider the process of executing a command whose stage execution times are 60, 30, 40, 50 and 20 ns. Let's take the additional costs of organizing pipeline processing to be 5 ns.

If there were no pipelining, then it would take

60 + 30 + 40 + 50 + 20 = 200 ns.

If a pipelined organization is used, the clock period must equal the duration of the slowest processing stage plus the "overhead" costs, i.e. 60 + 5 = 65 ns. Thus, the reduction in command execution time obtained as a result of pipelining will be 200/65 ≈ 3.1 times.

Note that the pipeline execution time for one instruction is 5 × 65 = 325 ns. This value is significantly more than 200 ns - the command execution time without pipelining. But simultaneous execution of five commands at once gives an average completion time of one command of 65 ns.
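
The arithmetic above is easy to verify with a few lines of C; the stage durations and the 5 ns overhead are simply the numbers from the example.

    #include <stdio.h>

    int main(void) {
        int stage_ns[5] = { 60, 30, 40, 50, 20 };   /* stage durations, ns */
        int overhead_ns = 5;                        /* pipeline "overhead" */

        int sequential = 0, slowest = 0;
        for (int i = 0; i < 5; i++) {
            sequential += stage_ns[i];
            if (stage_ns[i] > slowest) slowest = stage_ns[i];
        }
        int cycle = slowest + overhead_ns;          /* 65 ns clock period  */

        printf("one instruction, no pipeline: %d ns\n", sequential);    /* 200  */
        printf("one instruction, pipelined  : %d ns\n", 5 * cycle);     /* 325  */
        printf("throughput gain             : %.1fx\n",
               (double)sequential / cycle);                             /* ~3.1 */
        return 0;
    }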

The Pentium processor has two L1 caches (they are located inside the processor). As you know, caching increases processor performance by reducing the number of times it waits for information to arrive from slow RAM. The necessary data and commands are taken by the processor from the fast cache memory (buffer), where they are entered in advance.

Having a single cache memory in previous processor designs resulted in structural conflicts. Two instructions executed by the pipeline sometimes simultaneously tried to read information from a single cache memory. Performing separate caching (buffering) for commands and data eliminates such conflicts, allowing both commands to execute simultaneously.

The development of computer technology is continuous. Designers are constantly looking for new ways to improve their products. The most valuable resource of processors is their performance. For this reason, various techniques are being invented to increase processor performance.

One such technique is to save time by predicting the likely execution path of a branching algorithm. This is done by a branch address prediction unit. The idea behind it is similar to the idea behind cache memory.

As is known, there are linear, cyclic and branching computational processes. In linear algorithms, commands are executed in the order they are written in RAM: sequentially one after another. For such algorithms, the branch address prediction block introduced into the processor cannot yield any gains.

In branching algorithms, the choice of instruction is determined by the results of checking the branch conditions. If at the branch point you wait for the computation to finish and only then fetch the right command from RAM, time is inevitably lost to unproductive processor idling (reading a command from RAM is slow).

The branch address prediction block works proactively and tries to predict the branch address in advance in order to move the desired instruction from slow RAM to a special fast branch target buffer BTB (Branch Target Buffer) in advance.

When the BTB contains a correct prediction, the jump occurs without delay. This is reminiscent of cache memory, which also has misses: because of a miss the operands have to be read not from the cache but from slow RAM, and time is lost.

The idea of predicting the jump address is implemented in the processor by two independent prefetch buffers. They work together with the branch prediction buffer: one of the buffers fetches instructions sequentially, and the second according to the predictions of the BTB.
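
The basic idea of a branch target buffer can be sketched in C: a small table that remembers, for the address of a branch instruction, where that branch went last time. The table size, the hashing by address and the function names are all invented for illustration; real predictors are far more sophisticated.

    #include <stdint.h>
    #include <stdio.h>

    #define BTB_SIZE 16   /* a toy branch target buffer */

    /* One entry: the address of a branch and where it jumped last time. */
    struct btb_entry { uint32_t branch_addr; uint32_t target; int valid; };
    static struct btb_entry btb[BTB_SIZE];

    /* Predict: if this branch has been seen before, reuse its last target. */
    static int predict(uint32_t branch_addr, uint32_t *target) {
        struct btb_entry *e = &btb[branch_addr % BTB_SIZE];
        if (e->valid && e->branch_addr == branch_addr) { *target = e->target; return 1; }
        return 0;                     /* miss: fall back to sequential fetch */
    }

    /* After the branch actually executes, remember where it really went. */
    static void update(uint32_t branch_addr, uint32_t actual_target) {
        btb[branch_addr % BTB_SIZE] =
            (struct btb_entry){ branch_addr, actual_target, 1 };
    }

    int main(void) {
        uint32_t t = 0;
        printf("first time : %s\n", predict(0x400, &t) ? "hit" : "miss");
        update(0x400, 0x480);         /* the branch at 0x400 went to 0x480 */
        printf("second time: %s, predicted target 0x%X\n",
               predict(0x400, &t) ? "hit" : "miss", (unsigned)t);
        return 0;
    }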

The Pentium processor has two five-stage pipelines for fixed-point operations. In addition, it has an eight-stage floating-point pipeline. Floating-point calculations are required for mathematical computations as well as for fast processing of dynamic 3D color images.

The development of processor architecture follows the path of constant increase in the volume of cache memory of the first and second levels. The exception was the Pentium 4 processor, whose cache size unexpectedly decreased compared to the Pentium III.

To improve performance, new processor designs use two system buses operating at different clock frequencies. The fast bus is used to work with the second-level cache, and the slow bus for traditional information exchange with other devices, such as RAM. The presence of two buses eliminates conflicts when the processor exchanges information with main memory and with a second-level cache located outside the processor chip.

Processors following the Pentium contain a large number of stages in the pipeline. This reduces the execution time of each operation in a separate stage, which means it allows you to increase the processor clock frequency.

The Pentium Pro (P6) processor uses a new approach to the order in which instructions stored sequentially in RAM are executed.

The new approach is to execute commands in an arbitrary order, as they become ready (regardless of their order in RAM). However, the final result is always produced in accordance with the original order of the commands in the program. This order of command execution is called dynamic, or anticipatory (out-of-order), execution.

Consider as an example the following fragment of a program written in a (fictional) machine-oriented language.

r1 ← mem(r4)    Command 1

r3 ← r1 + r2    Command 2

r5 ← r5 + 1     Command 3

r6 ← r6 - r7    Command 4

The symbols r1…r7 denote general-purpose registers, which are part of the processor's register block.

The mem symbol denotes a RAM memory cell.

Let's comment on the recorded program.

Command 1: write to register r1 the contents of the RAM memory cell whose address is specified in register r4.

Command 2: write to register r3 the result of adding the contents of registers r1 and r2.

Command 3: add one to the contents of register r5.

Command 4: reduce the contents of register r6 by the contents of register r7.

Suppose that when executing instruction 1 (loading an operand from memory into the general-purpose register r1), it turned out that the contents of the memory cell mem are not in the processor cache (a miss occurred; the required operand was not previously delivered to the buffer from RAM).

With the traditional approach, the processor will proceed to execute instructions 2, 3, 4 only after the data from the main memory cell mem enters the processor (more precisely, into register r1). Since reading will occur from slow-running RAM, this process will take quite a long time (by processor standards). While waiting for this event, the processor will be idle, not performing useful work.

In the example above, the processor cannot execute instruction 2 before instruction 1 completes, because instruction 2 uses the results of instruction 1. At the same time, the processor could execute instructions 3 and 4 in advance, which do not depend on the result of instructions 1 and 2.

In such cases, the P6 processor works differently.

The P6 processor does not wait for the completion of execution of instructions 1 and 2, but immediately proceeds to out-of-order execution of instructions 3 and 4. The results of the advance execution of instructions 3 and 4 are stored and retrieved later, after the execution of instructions 1 and 2. Thus, the P6 processor executes instructions in accordance with their readiness for execution, regardless of their initial location in the program.
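
The dependencies that drive this behavior are easy to see if the same four commands are written out as ordinary C statements. This is only an illustration of which commands depend on which; it says nothing about how the processor actually schedules them.

    #include <stdio.h>

    int main(void) {
        int mem = 100;                   /* the RAM cell from the example */
        int r1, r2 = 5, r3, r5 = 0, r6 = 9, r7 = 4;

        r1 = mem;       /* Command 1: may stall for a long time on a cache miss */
        r3 = r1 + r2;   /* Command 2: reads r1, so it must wait for Command 1   */
        r5 = r5 + 1;    /* Command 3: uses neither r1 nor r3 - can run early    */
        r6 = r6 - r7;   /* Command 4: also independent - can run early          */

        printf("r3=%d r5=%d r6=%d\n", r3, r5, r6);
        return 0;
    }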

Performance is, of course, an important characteristic of a computer. However, it is equally important that fast calculations are carried out with a small number of errors.

The processor has a self-test device that automatically checks the functionality of most elements of the processor.

In addition, failures occurring inside the processor are detected using a special data format. A parity bit is added to each operand so that every word circulating inside the processor carries an even number of one bits. A word with an odd number of ones indicates that a failure has occurred, much like a counterfeit banknote betrays itself by the absence of a watermark.
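
A minimal sketch of even parity in C: the extra bit is chosen so that the total number of one bits is even, and a flipped bit makes the check fail. The 8-bit operand width and the function names are chosen only for this example.

    #include <stdint.h>
    #include <stdio.h>

    /* Count the one bits in a word. */
    static int ones(uint16_t x) { int n = 0; while (x) { n += x & 1; x >>= 1; } return n; }

    /* Attach an even-parity bit (bit 8) to an 8-bit operand. */
    static uint16_t add_parity(uint8_t data) { return (uint16_t)(data | ((ones(data) & 1) << 8)); }

    /* A word is healthy if its total number of one bits is even. */
    static int parity_ok(uint16_t word) { return (ones(word) & 1) == 0; }

    int main(void) {
        uint16_t w = add_parity(0xB2);   /* 0xB2 has four one bits              */
        printf("stored word ok: %s\n", parity_ok(w) ? "yes" : "no");
        w ^= 0x04;                       /* flip a single bit: simulate a fault */
        printf("after a fault : %s\n", parity_ok(w) ? "yes" : "no");
        return 0;
    }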

Units for measuring the speed of processors (and computers) can be:

    MIPS (Million Instructions Per Second) - a million commands (instructions) on fixed-point numbers per second;

    MFLOPS (Million Floating-point Operations Per Second) - a million operations on floating-point numbers per second;

    GFLOPS (Giga Floating-point Operations Per Second) - a billion operations on floating-point numbers per second.

There are reports of the world's fastest computer, ASCI White (IBM Corporation), reaching 12.3 TFLOPS (trillions of floating-point operations per second).

The processor is, without a doubt, the main component of any computer. It is this small piece of silicon, a few tens of millimeters in size, that performs all the complex tasks you set for your computer. The operating system runs here, as well as all your programs. But how does it all work? We will try to examine this question in today's article.

The processor manages the data on your computer and executes millions of instructions per second. And by the word processor, I mean exactly what it really means - a small chip made of silicon that actually performs all the operations on the computer. Before we move on to how a processor works, we must first consider in detail what it is and what it consists of.

First let's look at what a processor is. The CPU, or central processing unit, is a microcircuit with a huge number of transistors made on a silicon die. The world's first processor was developed by Intel in 1971. It all started with the Intel 4004: it could only perform computational operations and processed only 4 bits of data at a time. The next model, the Intel 8080, came out in 1974 and could already process 8 bits of information. Then came the 80286, 80386 and 80486; it is from these processors that the name of the x86 architecture comes.

The clock speed of the 8088 processor was 5 MHz, and it performed only about 330,000 operations per second, far fewer than modern processors. Modern devices run at frequencies of several gigahertz and perform billions of operations per second.

We will not consider transistors; we will move to a higher level. Each processor consists of the following components:

  • Core - all information processing and mathematical operations are performed here; there can be several cores;
  • Command decoder - this component belongs to the core; it converts program commands into a set of signals that will be executed by the core's transistors;
  • Cache - an area of ultra-fast memory of small volume in which data read from RAM is stored;
  • Registers - very fast memory cells in which the data currently being processed is stored. There are only a few of them and they have a limited size of 8, 16 or 32 bits; the processor's bit capacity depends on this;
  • Coprocessor - a separate core optimized for performing only certain operations, for example video processing or data encryption;
  • Address bus - used for communication with all devices connected to the motherboard; it can be 8, 16 or 32 bits wide;
  • Data bus - used for communication with RAM. With it the processor can write data to memory or read it from there. The data bus can be 8, 16 or 32 bits wide; this is the amount of data that can be transferred at one time;
  • Synchronization bus - carries the clock signal that sets the processor frequency and its operating cycles;
  • Reset bus - used to reset the processor's state.

The main component can be considered the core or arithmetic computing device, as well as processor registers. Everything else helps these two components work. Let's look at what registers are and what their purpose is.

  • Registers A, B, C - designed to store data during processing; yes, there are only three of them, but that is quite enough;
  • EIP - contains the address of the next program instruction in RAM;
  • ESP - the stack pointer, i.e. the address of the stack data in RAM;
  • Z (zero flag) - contains the result of the last comparison operation;

Of course, these are not all of the registers, but they are the most important ones and the ones used most by the processor while a program runs. Well, now that you know what the processor consists of, you can look at how it works.

How does a computer processor work?

The CPU's compute core can only perform math, comparisons, and moving data between cells and RAM, but it's enough to let you play games, watch movies, browse the web, and more.

In fact, any program consists of the following instructions: move, add, multiply, divide, subtract, and jump to another instruction if a comparison condition is met. Of course, these are not all of the commands; there are others that combine the ones already listed or simplify their use.

All data movement is performed with the move instruction (mov); it moves data between registers and between registers and RAM. There are separate instructions for the arithmetic operations. Jump instructions are used to act on conditions: for example, check the value of register A and, if it is not zero, go to the instruction at the required address. Jump instructions also let you build loops (a small illustrative sketch follows below).
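
Here is a small sketch in C of how a handful of such commands can form a loop. The "machine" below, its registers A and B, and its five instructions are all invented for illustration; it only mimics the move / add / subtract / conditional-jump pattern described above.

    #include <stdio.h>

    enum op { MOV_A, MOV_B, ADD_B_A, SUB_A_1, JNZ_A, HALT };
    struct instr { enum op op; int arg; };

    int main(void) {
        struct instr program[] = {
            { MOV_A, 5 },     /* A = 5 : loop counter                  */
            { MOV_B, 0 },     /* B = 0 : accumulator                   */
            { ADD_B_A, 0 },   /* B = B + A                             */
            { SUB_A_1, 0 },   /* A = A - 1                             */
            { JNZ_A, 2 },     /* if A != 0, jump back to instruction 2 */
            { HALT, 0 },
        };
        int A = 0, B = 0, pc = 0;            /* pc plays the role of EIP */

        for (;;) {
            struct instr i = program[pc++];  /* fetch, then advance pc   */
            switch (i.op) {                  /* "decode" and execute     */
                case MOV_A:   A = i.arg; break;
                case MOV_B:   B = i.arg; break;
                case ADD_B_A: B += A;    break;
                case SUB_A_1: A -= 1;    break;
                case JNZ_A:   if (A != 0) pc = i.arg; break;
                case HALT:    printf("B = %d\n", B);  /* 5+4+3+2+1 = 15  */
                              return 0;
            }
        }
    }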

This is all very well, but how do all these components interact with each other? And how do transistors understand instructions? The operation of the entire processor is controlled by an instruction decoder. It makes each component do what it's supposed to do. Let's look at what happens when we need to execute a program.

In the first stage, the decoder loads the address of the program's first instruction in memory into the next-instruction register EIP; to do this it activates the read channel and opens the latch transistor so the data can enter the EIP register.

In the second clock cycle, the instruction decoder converts the command into a set of signals for the transistors of the computing core, which execute it and write the result to one of the registers, for example, C.

On the third cycle, the decoder increments the address in the next-instruction register by one so that it points to the next instruction in memory. The decoder then proceeds to load the next command, and so on until the end of the program.

Each instruction is encoded for a particular set of transistors; converted into signals, it causes physical changes in the processor, for example changing the position of a latch that allows data to be written to a memory cell, and so on. Different commands require different numbers of clock cycles to execute: one command may need 5 clock cycles, while another, more complex one may need up to 20. And all of this still depends on the number of transistors in the processor itself.

Well, this is all clear, but it only works while a single program is running; what happens when there are several of them, all at the same time? You might assume that the processor has several cores and each core runs a separate program, but no, in fact there is no such restriction.

Only one program can be executing at any one moment. All CPU time is shared between the running programs: each program executes for a few clock cycles, then the processor switches to another program, and the entire contents of the registers are saved to RAM. When control returns to the program, the previously saved values are loaded back into the registers.
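
What "saving the registers and restoring them later" amounts to can be sketched in a few lines of C. The context structure and the functions here are invented for the example; a real operating system saves far more state and does so with hardware support.

    #include <stdio.h>

    /* The state that must survive while another program uses the CPU. */
    struct context { int a, b, c, eip; };

    static struct context saved[2];        /* one saved context per "program" */

    static void save(int id, const struct context *regs) { saved[id] = *regs; }
    static void restore(int id, struct context *regs)    { *regs = saved[id]; }

    int main(void) {
        struct context regs = { 1, 2, 3, 0x100 };    /* program 0 is running    */

        save(0, &regs);                              /* its time slice is over  */
        regs = (struct context){ 7, 8, 9, 0x200 };   /* program 1 uses the CPU  */

        save(1, &regs);                              /* switch back again       */
        restore(0, &regs);                           /* program 0 resumes where */
        printf("a=%d eip=0x%x\n", regs.a, regs.eip); /* it left off             */
        return 0;
    }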

Conclusions

That's all, in this article we looked at how a computer processor works, what a processor is and what it consists of. It might be a little complicated, but we've kept it simple. I hope you now have a better understanding of how this very complex device works.


Nowadays there is a lot of information on the Internet about processors; you can find plenty of articles on how they work, where registers, clock cycles, interrupts and so on are mostly mentioned. But for a person who is not familiar with all these terms and concepts, it is quite difficult to understand the process "on the fly"; you need to start small, namely with a basic understanding of how the processor works and what main parts it consists of.

So, here is what is inside the microprocessor if you take it apart:

The number 1 denotes the metal surface (cover) of the microprocessor, which serves to remove heat and to protect what is behind the cover (that is, the inside of the processor itself) from mechanical damage.

Number 2 is the crystal (die) itself, in fact the most important and most expensive part of the microprocessor to manufacture. It is thanks to this crystal that all calculations take place (and that is the processor's most important function); the more complex and advanced it is, the more powerful the processor and, accordingly, the more expensive. The crystal is made of silicon. The manufacturing process itself is very complex and involves dozens of steps.

Number 3 is a special textolite substrate to which all the other parts of the processor are attached. In addition, it plays the role of a contact pad: on its back side there are a large number of golden "dots", which are the contacts (you can just see them in the picture). The contact pad (substrate) provides the link to the crystal, since it is not possible to act on the crystal directly in any way.

The cover (1) is attached to the substrate (3) with an adhesive sealant resistant to high temperatures. There is no air gap between the crystal (2) and the cover; its place is taken by thermal paste which, when it hardens, forms a "bridge" between the processor crystal and the cover, ensuring very good heat transfer.

The crystal is attached to the substrate with solder and sealant, and the contacts of the substrate are connected to the contacts of the crystal. The photo below (at 170x magnification) clearly shows how the crystal's contacts are connected to the substrate's contacts with very thin wires:

In general, the design of processors from different manufacturers, and even of different models from the same manufacturer, can vary greatly. However, the general scheme of operation remains the same: they all have a contact substrate, a crystal (or several housed in one package) and a metal cover for heat dissipation.

For example, this is what the contact substrate of an Intel Pentium 4 processor looks like (the processor is upside down):

The shape of the contacts and the pattern of their arrangement depend on the processor and the computer's motherboard (the sockets must match). For example, in the picture just above the processor's contacts have no "pins", since the pins are located directly in the motherboard socket.

There is also the opposite situation, where the contact "pins" stick out directly from the contact substrate. This arrangement is typical mainly of AMD processors:

As mentioned above, the design of different processor models from the same manufacturer may differ; a clear example of this is the quad-core Intel Core 2 Quad, which is essentially two dual-core crystals from the Core 2 Duo line combined in one package:

Important! The number of crystals inside a processor and the number of processor cores are not the same thing.

Modern Intel processor models fit two crystals (chips) at once. The second chip is the processor's graphics core; it essentially plays the role of a video card built into the processor. Even if there is no graphics card in the system, the graphics core takes on that role, and quite a powerful one at that (in some processor models, the computing power of the graphics core is enough to play modern games at medium graphics settings).

That, in short, is how the central microprocessor is constructed.
