Addressing techniques- Addressing modes are the methods used to determine which part of memory a machine instruction refers to. There are various types of addressing modes, and which one is used depends on the computer architecture.
Random access memory (RAM) is the primary area of memory for a computer. This is where any application must be loaded if it is to run. The central processing unit (CPU) reads machine instructions from RAM and acts on them. This is what happens whenever any application is run on a computer.
The
machine instructions given to the CPU often must refer to specific portions of
the RAM. In order to do this, the CPU must have a way of knowing which portion
of RAM the machine instruction is referring to. This is where addressing modes
come into play.
Addressing modes provide a way of referring to individual portions of RAM, much as each house has an address. Such an address can then be used by a machine instruction to refer to a specific portion of memory; the CPU accesses that portion of memory and performs the action specified by the instruction.
There are many different types of addressing modes, and different computer architectures feature different ones. This results in software incompatibility: an application designed for one type of addressing mode will not run on a system that uses a different type. It is much like speaking to someone in a language they do not understand.
The specifics of each addressing mode matter most to programmers using assembly language. This type of computer language is a direct representation of the machine instructions sent to the CPU. This closeness to the hardware is why well-written assembly programs can run faster than equivalent programs in higher-level languages.
Assembly language is used in the development of operating systems. A programmer must know the addressing modes used on a specific computer architecture before writing a functioning operating system or application in assembly. The differences between addressing modes are part of the reason that applications are unable to run across different computer architectures.
Direct addressing- In direct addressing mode, the effective address of the operand is given in the address field of the instruction. It requires one memory reference to read the operand from the given location and provides only a limited address space, since the length of the address field is usually less than the word length.
Ex: MOVE P, R0; ADD Q, R0, where P and Q are the addresses of the operands.
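Direct addressing can be sketched with a toy model, assuming memory is a small array and the instruction's address field is simply an index into it (the names here are illustrative, not a real instruction set):

```python
# Toy model of direct addressing: RAM as a list, with the instruction's
# address field used directly as the index of the operand.
memory = [0] * 16
memory[5] = 42          # operand stored at address 5

def load_direct(address):
    # One memory reference: the address field names the operand's cell.
    return memory[address]

print(load_direct(5))   # 42
```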
Indirect addressing - In indirect addressing mode, the address field of the instruction refers to the address of a word in memory, which in turn contains the full-length address of the operand. The advantage of this mode is that for a word length of N, an address space of 2^N can be addressed. The disadvantage is that instruction execution requires two memory references to fetch the operand. Multilevel or cascaded indirect addressing can also be used.
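A matching toy sketch of indirect addressing, under the same illustrative memory-as-list assumption: the address field selects a word that itself holds the operand's address, costing a second memory reference.

```python
# Toy model of indirect addressing: the instruction's address field points
# to a memory word that in turn contains the operand's full-length address.
memory = [0] * 16
memory[9] = 42          # the operand itself
memory[3] = 9           # address 3 holds a pointer to the operand

def load_indirect(address):
    pointer = memory[address]   # first memory reference
    return memory[pointer]      # second memory reference

print(load_indirect(3))         # 42

# With a word length of N bits, the stored pointer can select any of
# 2**N cells, e.g. a 16-bit word can address 65536 locations.
N = 16
print(2 ** N)                   # 65536
```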
Immediate addressing - This is the simplest form of addressing. Here, the operand is given in the instruction itself. This mode is used to define a constant or to set initial values of variables. The advantage of this mode is that no memory reference other than the instruction fetch is required to obtain the operand. The disadvantage is that the size of the number is limited to the size of the address field, which in most instruction sets is small compared to the word length.
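Immediate addressing in the same toy style: the operand rides inside the instruction, so only the instruction fetch touches memory, and the operand must fit the field width (an 8-bit field is an assumption for illustration).

```python
# Toy model of immediate addressing: the operand is part of the
# instruction itself, so no extra data-memory reference is needed.
FIELD_BITS = 8                      # assumed width of the immediate field

def load_immediate(value):
    # The operand is limited by the field size, not the word length.
    assert 0 <= value < 2 ** FIELD_BITS, "operand must fit the field"
    return value

print(load_immediate(100))          # 100
```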
Relative addressing - This is a combination of direct addressing and register indirect addressing. The value contained in one address field, A, is used directly, and the other address field refers to a register whose contents are added to A to produce the effective address. When that register is the program counter, this is the familiar PC-relative form of the mode.
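The effective-address computation just described can be sketched as follows; the register file and the displacement value are illustrative assumptions.

```python
# Toy model of displacement (relative) addressing: effective address =
# address field A + contents of the named register.
memory = [0] * 32
registers = {"R1": 10}
memory[10 + 4] = 42                  # operand at R1 + displacement 4

def load_displaced(a, reg):
    effective = a + registers[reg]   # add A to the register's contents
    return memory[effective]

print(load_displaced(4, "R1"))       # 42
```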
Indexed addressing-
It is used when you know the offset of the address you need relative to a base. For example, if you have an array and need the N-th element, you just add N element sizes to the starting address of the array.
The opposite is sequential (indirect) traversal, where you cannot calculate the exact address from an offset from the beginning. If the size of each element is not known (for example, if you have N null-terminated strings whose sizes are not known), you need to traverse all N-1 preceding elements until you find the one you need, the N-th one.
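The array calculation above can be written out directly; a 4-byte element size is an assumption for illustration.

```python
# Indexed addressing sketch: the N-th element of an array of fixed-size
# elements sits at base + N * element_size.
ELEMENT_SIZE = 4                       # assume 4-byte elements
memory = bytearray(64)
base = 8                               # array begins at byte address 8
memory[base + 3 * ELEMENT_SIZE] = 42   # first byte of element 3

def element_address(base, n, size=ELEMENT_SIZE):
    return base + n * size

print(memory[element_address(base, 3)])  # 42
```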
Registers –
Index register – An index register is a processor register in a computer's CPU used for modifying operand addresses during the run of a program, typically for doing vector/array operations. Index registers were first used in the British Manchester Mark 1 computer, in 1949.
Index registers are used for a special kind of addressing where an immediate constant (i.e., one which is part of the instruction itself) is added to the contents of a register to form the address of the actual operand or data; architectures which allow more than one register to be used this way naturally have an opcode field for specifying which register to use.
General purpose register –
Stores data (arguments and results of instructions) and can be assigned to a variety of user programs.
Special purpose register –
Responsible for storing information that is essential for running DLX programs. Types are
· The instruction register: holds the instruction that is to be executed.
· The program counter: holds the address of the next instruction so that a copy of the instruction can be placed in the current instruction register.
Overflow register – Indicates that the signed result of an operation is too large to fit in the register width using two's complement representation.
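A minimal sketch of what the overflow flag detects, assuming an 8-bit register for illustration:

```python
# Signed 8-bit add: overflow occurs when the true sum falls outside the
# two's-complement range; the register keeps only the wrapped value.
BITS = 8
LO, HI = -(2 ** (BITS - 1)), 2 ** (BITS - 1) - 1   # -128 .. 127

def add_signed(a, b):
    total = a + b
    overflow = not (LO <= total <= HI)             # the overflow flag
    wrapped = (total - LO) % (2 ** BITS) + LO      # value left in register
    return wrapped, overflow

print(add_signed(100, 50))    # (-106, True): 150 does not fit in 8 bits
print(add_signed(100, -50))   # (50, False)
```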
Carry register – Allows numbers larger than a single word to be added/subtracted by carrying a binary digit from a less significant word into the least significant bit of a more significant word as needed. It is also used to extend bit shifts and rotates in a similar manner on many processors.
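Multi-word addition with a carry can be sketched like this, assuming 8-bit words for illustration (words listed least significant first):

```python
# Adding two multi-word numbers one 8-bit word at a time: the carry bit
# from each word is fed into the next, more significant word.
WORD = 8
MASK = (1 << WORD) - 1

def add_multiword(a_words, b_words):
    result, carry = [], 0
    for a, b in zip(a_words, b_words):
        total = a + b + carry
        result.append(total & MASK)   # the word kept in the register
        carry = total >> WORD         # 0 or 1, the carry flag
    return result, carry

# 0x01FF + 0x0001 = 0x0200; the carry crosses the word boundary:
print(add_multiword([0xFF, 0x01], [0x01, 0x00]))   # ([0, 2], 0)
```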
Shift register – The shift register is another type of sequential logic circuit that is used for the storage or transfer of data in the form of binary numbers; it "shifts" the data out once every clock cycle, hence the name shift register. It basically consists of several single-bit "D-type data latches", one for each bit, connected together in a serial or daisy-chain arrangement so that the output from one data latch becomes the input of the next latch, and so on. The data bits may be fed into or out of the register serially, i.e. one after the other from either the left or the right direction, or in parallel, i.e. all together. The number of individual data latches required to make up a single shift register is determined by the number of bits to be stored, with the most common being 8 bits wide.
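The daisy-chain behaviour just described can be simulated in software; an 8-latch register with serial input is an assumption for illustration.

```python
# Simulation of an 8-bit serial-in shift register: on each clock the
# input bit enters at one end and every stored bit moves one latch along.
class ShiftRegister:
    def __init__(self, width=8):
        self.bits = [0] * width       # one "D-type latch" per bit

    def clock(self, bit_in):
        bit_out = self.bits[-1]                  # bit shifted out the far end
        self.bits = [bit_in] + self.bits[:-1]    # daisy-chain shift
        return bit_out

sr = ShiftRegister()
for b in [1, 0, 1, 1]:                # feed four bits in serially
    sr.clock(b)
print(sr.bits)                        # [1, 1, 0, 1, 0, 0, 0, 0]
```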
Stack register – A stack can be organized as a collection of a finite number of registers that are used to store temporary information during the execution of a program. The stack pointer (SP) is a register that holds the address of the top element of the stack.
Memory Buffer register- A Memory Buffer
Register (MBR) is the register in a computer's processor, or central processing
unit, CPU, that stores the data being transferred to and from the
immediate access store. It acts as a buffer
allowing the processor and memory units
to act independently without being affected by minor differences in operation.
A data item will be copied to the MBR ready for use at the next clock cycle, when it can
be either used by the processor or stored in main memory.
This register holds the contents of the memory which are to be transferred from memory to other components or vice versa. A word to be stored must be transferred to the MBR, from where it goes to the specific memory location, and arithmetic data to be processed in the ALU first goes to the MBR, then to the accumulator register, and is then processed in the ALU.
Accumulators – (Image caption: accumulators on a tabulating machine circa 1936; each of the four registers can store a 10-digit decimal number.)
In a computer's
central processing unit (CPU),
an accumulator is a register
in which intermediate arithmetic and
logic results are stored. Without a register like an accumulator, it
would be necessary to write the result of each calculation (addition,
multiplication, shift, etc.) to main memory, perhaps only
to be read right back again for use in the next operation. Access to main
memory is slower than access to a register like the accumulator because the
technology used for the large main memory is slower (but cheaper) than that
used for a register.
The canonical example for accumulator use is
summing a list of numbers. The accumulator is initially set to zero, then each
number in turn is read and added to the value in the accumulator. Only when all
numbers have been added is the result held in the accumulator written to main
memory or to another, non-accumulator, CPU register.
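The canonical summing example above can be sketched as a loop:

```python
# Accumulator pattern: clear the accumulator, add each number in turn,
# and only write the final result out at the end.
def sum_with_accumulator(numbers):
    acc = 0                 # accumulator initially set to zero
    for n in numbers:
        acc = acc + n       # intermediate results stay in the "register"
    return acc              # written back only once, when done

print(sum_with_accumulator([3, 1, 4, 1, 5]))   # 14
```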
An accumulator machine,
also
called a 1-operand machine,
or a CPU with accumulator-based architecture, is a kind of CPU where, although
it may have several registers, the CPU mostly stores the results of
calculations in one special register, typically called "the
accumulator". Historically almost all early computers were accumulator
machines; and many microcontrollers
still popular as of 2010 (such as the 68HC12, the PICmicro, the 8051 and several others) are basically accumulator machines.
Stack pointers- A stack pointer is a small register that stores the address of the last program request in a stack.
A stack is a specialized buffer
which stores data from the top down. As new requests come in, they "push
down" the older ones. The most recently entered request always resides at
the top of the stack, and the program always takes requests from the top.
A stack (also called a pushdown stack) operates in a last-in/first-out sense. When a new data item is entered or "pushed" onto the top of a stack, the stack pointer increments to the next physical memory address, and the new item is copied to that address. When a data item is "pulled" or "popped" from the top of a stack, the item is copied from the address held in the stack pointer, and the stack pointer decrements to the next available item at the top of the stack. (On many real processors the stack instead grows toward lower addresses, so a push decrements the pointer, but the last-in/first-out behaviour is the same.)
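The push/pop behaviour can be sketched with an explicit pointer into a fixed array, following the upward-growing convention used in the text:

```python
# Stack with an explicit stack pointer: push increments SP and stores,
# pop loads from SP and decrements (upward-growing stack, as above).
memory = [0] * 16
sp = -1                      # SP: index of the current top item (empty)

def push(value):
    global sp
    sp += 1                  # pointer moves to the next address...
    memory[sp] = value       # ...and the new item is copied there

def pop():
    global sp
    value = memory[sp]       # item copied from the SP's address
    sp -= 1                  # pointer drops to the item below
    return value

push(10)
push(20)
print(pop(), pop())          # 20 10  (last in, first out)
```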
Floating point- In computing, floating point describes a method of representing an approximation of a real number in a way that can support a wide range of values. Numbers are, in general, represented approximately to a fixed number of significant digits (the significand, or mantissa) and scaled using an exponent. The base for the scaling is normally 2, 10 or 16. The typical number that can be represented exactly is of the form:
significand × base^exponent
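For base 2 this form can be checked directly in code: math.frexp decomposes a Python float into its binary significand and exponent.

```python
import math

# Every finite float equals significand * 2**exponent; math.frexp
# returns the significand in [0.5, 1) together with the exponent.
value = 6.75
significand, exponent = math.frexp(value)
print(significand, exponent)                 # 0.84375 3
assert value == significand * 2 ** exponent  # 0.84375 * 8 == 6.75
```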
The idea of floating-point representation, as opposed to intrinsically integer fixed-point numbers, which consist purely of a significand, is that expanding it with an exponent component achieves greater range. For instance, to represent large values such as distances between galaxies, there is no need to keep all 39 decimal places down to the femtometre resolution employed in particle physics. Assuming that the best resolution needed is in light years, only the 9 most significant decimal digits matter, whereas the 30 others carry pure noise and can thus be safely dropped. This is roughly a 100-bit saving in storage. Instead of those 100 bits, far fewer are used to represent the scale (the exponent), e.g. 8 bits or 2 decimal digits. Now one number format can encode both astronomical and subatomic distances with the same 9 digits of accuracy. But because 9 digits are 100 times less accurate than the 9+2 digits that storage could otherwise hold, this is considered a precision-for-range trade-off. The example also shows that using scaling to extend the dynamic range results in another contrast with the usual fixed-point numbers: floating-point values are not uniformly spaced. Small values, the ones close to zero, can be represented with much higher resolution (1 femtometre) than distant ones, because a greater scale (light years) must be selected for encoding significantly larger values.[1] That is, floating-point cannot represent point coordinates with atomic accuracy in another galaxy, only close to the origin.
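The non-uniform spacing can be observed directly: math.ulp (available since Python 3.9) gives the gap between a value and the next representable double.

```python
import math

# The gap between adjacent doubles grows with magnitude: values near 1
# are about 2**-52 apart, while values near 10**18 are 128 apart.
print(math.ulp(1.0))     # 2.220446049250313e-16
print(math.ulp(1e18))    # 128.0
```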
The term floating point refers to the fact that
their radix point
(decimal point, or, more commonly in computers, binary point) can
"float"; that is, it can be placed anywhere relative to the
significant digits of the number. This position is indicated as the exponent
component in the internal representation, and floating-point can thus be
thought of as a computer realization of scientific notation.
Over the years, a variety of floating-point representations have been used in
computers. However, since the 1990s, the most commonly encountered
representation is that defined by the IEEE 754 Standard.
The speed of floating-point operations, commonly
referred to in performance measurements as FLOPS, is an important machine
characteristic, especially in software
that performs large-scale mathematical calculations.