Building Blocks Of Computer System
Input & Output (I/O)- In computing, input/output or
I/O is the communication between an information
processing system (such as a computer) and the outside
world, possibly a human or another information processing system. Inputs are the signals or
data received by the system, and outputs
are the signals or data sent from it. The term can also be used as part of an
action; to "perform I/O" is to perform an input or output operation.
I/O devices are used by a person (or other system) to communicate with a
computer. For instance, a keyboard or a mouse may be an input
device for a computer, while monitors and printers are considered
output devices for a computer. Devices for communication between computers,
such as modems and network cards, typically
serve for both input and output.
Note that the designation of a device as either
input or output depends on the perspective. Mouse and keyboards take as input
physical movement that the human user outputs and convert it into signals that
a computer can understand. The output from these devices is input for the
computer. Similarly, printers and monitors take as input signals that a
computer outputs. They then convert these signals into representations that
human users can see or read. For a human user the process of reading or seeing
these representations is receiving input. These interactions between computers
and humans are studied in a field called human-computer interaction.
In computer architecture, the combination of the CPU and
main memory (i.e. memory
that the CPU can read and write to directly, with individual instructions)
is considered the brain of a computer, and from that point of view any transfer
of information from or to that combination, for example to or from a disk drive, is considered
I/O. The CPU and its supporting circuitry provide memory-mapped I/O that is
used in low-level computer
programming, such as the implementation of device drivers. An I/O algorithm
is one designed to exploit locality and perform efficiently when data reside on
secondary storage, such as a disk drive.
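The idea of an I/O algorithm that exploits locality can be sketched in a few lines: instead of touching secondary storage once per byte, read large sequential blocks and process each block in memory. The function name and the 64 KiB default block size below are illustrative choices, not part of any standard.

```python
# Sketch of an I/O-efficient computation: a simple byte checksum that reads
# a file in large sequential blocks rather than one byte at a time.
# The 64 KiB block size is an illustrative choice.

def checksum_blocked(path, block_size=64 * 1024):
    total = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)  # one large sequential read per iteration
            if not block:
                break
            total = (total + sum(block)) % 256  # work on the block in memory
    return total
```

Each iteration performs one large read, so the number of trips to the disk is proportional to the file size divided by the block size, rather than to the file size itself.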
Memory-
Overview of the forms and functions of memory in
the sciences
In psychology,
memory is the process by which information is encoded, stored, and retrieved.
Encoding allows information from the outside world to reach our senses in the form of chemical and physical stimuli. In this first stage the information must be changed so that it can enter the encoding process.
Storage is the second memory stage or process: maintaining the encoded
information over periods of time. Finally, the third process is the retrieval of
information that we have stored. We must locate it and return it to our
consciousness. Some retrieval attempts may be effortless due to the type of
information.
From an information
processing perspective there are three main stages in the formation
and retrieval of memory:
· Encoding or registration: receiving, processing and combining of received information
· Storage: creation of a permanent record of the encoded information
· Retrieval, recall or recollection: calling back the stored information in response to some cue for use in a process or activity
The loss of memory is described as forgetfulness or, as a medical disorder, amnesia.
ALU and its components-
[Figures: Arithmetic and logic unit schematic symbol; cascadable 8-bit ALU, Texas Instruments SN74AS888]
In computing,
an arithmetic and logic unit (ALU) is a digital circuit that
performs arithmetic
and logical operations. The
ALU is a fundamental building block of the central processing
unit of a computer, and even the simplest microprocessors contain
one for purposes such as maintaining timers. The processors found inside modern
CPUs and graphics processing units (GPUs)
accommodate very powerful and very complex ALUs; a single component may contain
a number of ALUs.
Mathematician John von Neumann proposed
the ALU concept in 1945, when he wrote a report on the foundations for a new
computer called the EDVAC.
Research into ALUs remains an important part of computer science, falling
under Arithmetic and logic structures in the ACM Computing
Classification System.
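As a rough sketch of what an ALU does, the function below selects an arithmetic or logical operation by opcode and wraps the result to a fixed register width. The opcode names and the zero flag are illustrative, not those of any particular chip.

```python
# Minimal ALU sketch: one function selecting an arithmetic or logical
# operation by opcode, operating on values wrapped to an 8-bit width.
# Opcode names are invented for illustration.

def alu(op, a, b, width=8):
    mask = (1 << width) - 1
    ops = {
        "ADD": a + b,
        "SUB": a - b,
        "AND": a & b,
        "OR":  a | b,
        "XOR": a ^ b,
        "NOT": ~a,          # b is ignored for this operation
    }
    result = ops[op] & mask      # wrap the result to the register width
    zero_flag = (result == 0)    # a typical ALU status output
    return result, zero_flag
```

Note how the mask models a fixed-width register: 250 + 10 overflows 8 bits and wraps to 4, exactly as a hardware adder with a carry-out would behave.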
Control
Unit and its functions-
The control unit coordinates the components of a
computer system. It fetches the code of all of the instructions in the program.
It directs the operation of the other units by providing timing and control
signals. All computer resources are managed by the CU. It directs the flow of
data between the Central Processing Unit (CPU) and the other devices.
The control unit was historically defined as one
distinct part of the 1946 reference model of Von Neumann
architecture. In modern computer designs, the control unit is
typically an internal part of the CPU
with its overall role and operation unchanged.
The control unit is the circuitry that controls the
flow of data through the processor, and coordinates the activities of the other
units within it. In a way, it is the "brain within the brain", as it
controls what happens inside the processor, which in turn controls the rest of
the computer. Examples of devices that require a control unit are CPUs and
graphics processing units (GPUs). The modern information age would not be
possible without complex control unit designs. The control unit receives
external instructions or commands which it converts into a sequence of control
signals that the control unit applies to the data path to implement a sequence
of register-transfer
level operations.
Functions of the control unit
The control unit implements the instruction set of the
CPU. It performs the tasks of fetching, decoding, managing execution and then
storing results. It may manage the translation of instructions (not data) to
micro-instructions and manage scheduling the micro-instructions between the
various execution units. On some processors the control unit may be further
broken down into other units, such as a scheduling unit to handle scheduling
and a retirement unit to deal with results coming from the pipeline. Managing
this instruction cycle is the main function of the CPU.
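The decode step described above, translating instructions into sequences of control signals applied to the data path, can be pictured as a lookup table. All opcode and signal names below (REG_READ_A, ALU_ADD, and so on) are invented for illustration.

```python
# Hedged sketch of a control unit's decode step: each opcode maps to the
# sequence of control signals asserted on the data path, in order.
# Every name here is invented for illustration.

CONTROL_ROM = {
    "ADD":   ["REG_READ_A", "REG_READ_B", "ALU_ADD", "REG_WRITE"],
    "LOAD":  ["REG_READ_A", "MEM_READ", "REG_WRITE"],
    "STORE": ["REG_READ_A", "REG_READ_B", "MEM_WRITE"],
}

def decode(opcode):
    """Translate one instruction opcode into its control-signal sequence."""
    return CONTROL_ROM[opcode]
```

A hardwired control unit realises this table as combinational logic, while a microprogrammed one stores it in a control memory, but the mapping itself is the same idea.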
Instruction word- Very long instruction word (VLIW) refers to a processor
architecture designed to take advantage of instruction-level
parallelism (ILP). Whereas conventional processors mostly only allow
programs that specify instructions to be executed one after another, a VLIW
processor allows programs that can explicitly specify instructions to be
executed at the same time (i.e. in parallel). This type of processor
architecture is intended to allow higher performance without the inherent
complexity of some other approaches.
Traditional approaches to improving performance in
processor architectures include breaking up instructions into sub-steps so that
instructions can be executed partially at the same time (pipelining),
dispatching individual instructions to be executed completely independently in
different parts of the processor (superscalar
architectures), and even executing instructions in an order different from the
program (out-of-order
execution). These approaches all involve increased hardware
complexity (higher cost, larger circuits, higher power consumption) because the
processor must intrinsically make all of the decisions internally for these
approaches to work. The VLIW approach, by contrast, depends on the programs
themselves providing all the decisions regarding which instructions are to be executed
simultaneously and how conflicts are to be resolved. As a practical matter this
means that the compiler software (the software used to create the final
programs) becomes much more complex, but the hardware is simpler than many
other approaches to parallelism.
As is the case with any novel architectural
approach, the concept is only as useful as code generation makes it. An
architecture designed for use in signal processing may have a number of
special-purpose instructions to facilitate certain complicated operations such
as fast Fourier
transform (FFT) computation or certain calculations that recur in tomographic contexts.
However, these optimized capabilities are useless unless compilers are able to
spot relevant source code constructs and generate target code that duly
utilizes the CPU's advanced offerings. Therefore, programmers must be able to
express their algorithms in a manner that makes the compiler's task easier.
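A toy model of the VLIW idea: the program is a list of bundles, and every operation in a bundle issues in the same cycle, with the compiler (here, the programmer) having already guaranteed that the operations are independent. This sketch supports only register-to-register addition, purely for illustration.

```python
# Illustrative VLIW sketch. Each bundle is a list of (dst, src1, src2)
# add operations that the "hardware" issues in the same cycle.

def run_vliw(bundles, regs):
    for bundle in bundles:
        # Read all sources first, then write all results: the operations
        # in a bundle see the register state from before the cycle began.
        results = [(dst, regs[a] + regs[b]) for dst, a, b in bundle]
        for dst, value in results:
            regs[dst] = value
    return regs
```

The read-then-write split is the essential point: because results are committed only after every operation in the bundle has read its inputs, the hardware never has to check for dependences inside a bundle.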
Instruction
And Execution Cycle-
Once a program is in memory it has to be executed.
To do this, each instruction must be looked at, decoded and acted upon in turn
until the program is completed. This is achieved by the use of what is termed
the 'instruction execution cycle', which is the cycle by which each instruction
in turn is processed. However, to ensure that the execution proceeds smoothly,
it is also necessary to synchronise the activities of the processor.
To keep the events synchronised, the clock located
within the CPU control unit
is used. This produces regular pulses on the system bus at a specific
frequency, so that each pulse is an equal time following the last. This clock
pulse frequency is linked to the clock speed of the processor - the higher the
clock speed, the shorter the time between pulses. Actions only occur when a
pulse is detected, so that commands can be kept in time with each other across
the whole computer unit.
The instruction execution cycle can be clearly
divided into three different parts, which will now be looked at in more detail.
Fetch Cycle
The fetch cycle takes the address required from memory, stores it in the instruction register, and moves the program counter on one so that it points to the next instruction.
Decode Cycle
Here, the control unit checks the instruction that is now stored within the instruction register. It determines which opcode and addressing mode have been used, and as such what actions need to be carried out in order to execute the instruction in question.
Execute Cycle
The actual actions which occur during the execute cycle of an instruction depend on both the instruction itself, and the addressing mode specified to be used to access the data that may be required. However, four main groups of actions do exist, which are discussed in full later on.
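The three stages above can be put together as a toy fetch-decode-execute loop. The instruction format and the opcodes (LOADI, ADD, HALT) are invented for this sketch, not taken from any real instruction set.

```python
# Toy fetch-decode-execute loop for a three-instruction machine.
# Each instruction is a tuple whose first element is the opcode.

def run(program):
    regs = {}
    pc = 0
    while True:
        instr = program[pc]          # fetch: read the instruction at PC
        pc += 1                      # advance PC to the next instruction
        op = instr[0]                # decode: extract the opcode
        if op == "LOADI":            # execute: act on the decoded opcode
            _, reg, value = instr
            regs[reg] = value
        elif op == "ADD":
            _, dst, a, b = instr
            regs[dst] = regs[a] + regs[b]
        elif op == "HALT":
            return regs
```

Note that the PC is incremented immediately after the fetch, matching the description of the fetch cycle above: by the time an instruction executes, the PC already points at its successor.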
Branch- A branch is an instruction that can cause the computer to begin
executing a different instruction sequence, departing from its default
behavior of executing instructions in order. A branch may be unconditional
(always taken) or conditional (taken only if some tested condition holds,
such as two registers being equal or a status flag being set). Conditional
branches are how constructs such as loops and if-statements are realised at
the machine level.
Skip-
+------+-----+-----+
|skipEQ| reg1| reg2|    skip the following instruction if reg1=reg2
+------+-----+-----+
(Effective PC address = next instruction address + 1)
Skip addressing may be considered a special kind of
PC-relative addressing mode with a fixed "+1" offset. Like
PC-relative addressing, some CPUs have versions of this addressing mode that
only refer to one register ("skip if reg1=0") or no registers,
implicitly referring to some previously-set bit in the status register. Other
CPUs have a version that selects a specific bit in a specific byte to test
("skip if bit 7 of reg12 is 0").
Unlike all other conditional branches, a
"skip" instruction never needs to flush the instruction
pipeline, though it may need to cause the next instruction to be
ignored.
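In PC terms, skipEQ reduces to one comparison: the next PC is either the following instruction or the one after it. A minimal sketch, with hypothetical register names:

```python
# skipEQ semantics as a PC-update rule: if reg1 == reg2, the PC advances
# past the following instruction (PC + 2 instead of PC + 1).

def next_pc(pc, regs, reg1, reg2):
    return pc + 2 if regs[reg1] == regs[reg2] else pc + 1
```

Because the only two possible outcomes are adjacent addresses, the processor can fetch the next instruction unconditionally and merely suppress it when the skip is taken, which is why no pipeline flush is needed.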
Jump and shift instructions-
Shift instruction
The purpose of each shift instruction is to shift
an operand, bit by bit, to the right or to the left. Direction of shift (left
or right) is dependent upon the specific instruction. The operand to be shifted
must first be loaded into the accumulator (or accumulator and accumulator
extension, depending upon which shift instruction is to be executed). All shift
instructions are in the short format only; there are no long-format shift
instructions.
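A minimal sketch of a single-bit logical shift on an 8-bit accumulator follows; it is a generic illustration, not the encoding of any specific machine's shift instructions.

```python
# Single-bit logical shift of an 8-bit accumulator, left or right.
# The mask models the fixed register width: a bit shifted out is lost.

def shift(acc, direction, width=8):
    mask = (1 << width) - 1
    if direction == "left":
        return (acc << 1) & mask   # top bit is shifted out and discarded
    return (acc >> 1) & mask       # logical right shift: zero fills the top
```

For example, shifting 0b10000001 left discards the top bit and yields 0b00000010, while shifting it right yields 0b01000000.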
Jump Instructions- Jump instructions modify the program counter so that
execution continues at a specified memory address, (almost) regardless of
the value of the current program counter. Branch
instructions, by contrast, are always relative to the current
program counter.
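The jump/branch distinction is easiest to see as two PC-update rules, absolute versus relative:

```python
# Jump vs. branch as PC-update rules.

def jump(pc, target):
    return target        # absolute: the new PC ignores the old one

def branch(pc, offset):
    return pc + offset   # relative: the new PC depends on the old one
```

Because a branch target is an offset from the current PC, branch-based code keeps working when loaded at a different base address, which is one reason relative branches are preferred for short-range control flow.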
Operation
of control registers- A
control register is a processor register
which changes or controls the general behavior of a CPU or
other digital device. Common tasks performed by control registers include interrupt control,
switching the addressing mode,
paging control, and coprocessor control.
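A control register can be modeled as a word of bit flags. The bit assignments below (interrupt enable, paging enable) are invented for illustration and do not match any real CPU's register layout.

```python
# A control register modeled as bit flags. Bit positions are invented
# for illustration, not those of any real processor.

INT_ENABLE = 1 << 0      # hypothetical: interrupt control bit
PAGING_ENABLE = 1 << 1   # hypothetical: paging control bit

def set_flag(cr, flag):
    return cr | flag          # turn the bit on

def clear_flag(cr, flag):
    return cr & ~flag         # turn the bit off

def flag_set(cr, flag):
    return bool(cr & flag)    # test the bit
```

Setting and clearing individual bits with OR and AND-NOT mirrors how system software typically manipulates such registers: read the current value, modify one bit, and write the word back.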