The first computing devices and machines. Calculating machine

19.12.2021

Generations:

I. Computers built on vacuum tubes; speed about 20,000 operations per second; each machine had its own programming language ("BESM", "Strela").
II. In 1960, computers began using transistors (invented in 1948); they were more reliable and durable and had larger RAM. One transistor could replace about 40 vacuum tubes and switched faster. Magnetic tapes served as information carriers ("Minsk-2", "Ural-14").
III. In 1964, the first integrated circuits (ICs) appeared and came into wide use. An IC is a crystal about 10 mm2 in area; one IC could replace 1,000 transistors, so a single crystal did the work of the 30-ton ENIAC. Several programs could now be processed in parallel.
IV. Large-scale integrated circuits (LSI) came into use, each roughly equivalent in power to 1,000 ICs, which cut the cost of manufacturing computers. By 1980 the central processor of a small computer fit on a chip a quarter of an inch across ("Illiac", "Elbrus").
V. Speech synthesizers, the ability to carry on a dialogue, and the execution of commands given by voice or touch.

Early counting aids and devices

Computer technology is an essential component of computing and data processing. The first computing aids were counting sticks. As they developed, such devices grew more complex: Phoenician clay figurines, for example, were likewise intended to represent visually the number of objects counted. Devices of this kind were used by the merchants and accountants of the time. Gradually, ever more complex instruments grew out of the simplest counting aids: the abacus (counting frame), the slide rule, the mechanical adding machine, the electronic computer. The principle of equivalence (one-to-one correspondence) was widely used in the simplest counting device, the abacus: the number of objects counted corresponded to the number of beads moved on the instrument. A comparatively complex counting device was the rosary used in the practice of many religions: the believer, as on an abacus, counted off the number of prayers said on its beads.

"The Counting Clock" by Wilhelm Schickard

In 1623, Wilhelm Schickard invented the "Counting Clock" - the first mechanical calculator that could perform four arithmetic operations. This was followed by the machines of Blaise Pascal (Pascaline, 1642) and Gottfried Wilhelm Leibniz.

Around 1820, Charles Xavier Thomas created the first successful, mass-produced mechanical calculator, the Thomas Arithmometer, which could add, subtract, multiply, and divide. It was largely based on Leibniz's work. Mechanical calculators operating on decimal numbers were used until the 1970s. Leibniz also described the binary number system, a central ingredient of all modern computers. However, until the 1940s many designs (including Charles Babbage's machines and even the 1945 ENIAC) were based on the decimal system, which is more difficult to implement.
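The binary positional idea Leibniz described can be illustrated in a few lines of Python (a generic modern sketch, not tied to any historical machine): repeated division by 2 yields the binary digits of a number.

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to its binary representation
    by repeated division by 2: each remainder is one binary digit."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder is the next (low-order) bit
        n //= 2
    return "".join(reversed(bits))

print(to_binary(1945))  # → 11110011001
```

Eleven binary digits suffice where four decimal digits were needed, which is why binary hardware needs only two-state elements rather than ten-state ones.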

The Jacquard punched-card system

In 1801, Joseph Marie Jacquard developed a loom in which the pattern to be woven was determined by punched cards. A series of cards could be changed, and changing the pattern did not require changes to the mechanics of the machine. This was an important milestone in the history of programming. In 1838, Charles Babbage moved from developing the Difference Engine to designing the more complex Analytical Engine, whose programming principles trace directly to Jacquard's punched cards. In 1890, the U.S. Census Bureau used punched cards and sorting mechanisms developed by Herman Hollerith to process the stream of data from the constitutionally mandated decennial census. Hollerith's company eventually became the core of IBM. That corporation developed punched-card technology into a powerful tool for business data processing and released an extensive line of specialized recording equipment. By 1950, IBM technology had become ubiquitous in industry and government, and many computer installations used punched cards until (and after) the late 1970s.

1835-1900s: First programmable machines

In 1835, Charles Babbage described his Analytical Engine. It was a design for a general-purpose computer, using punched cards for input and for the program, and a steam engine as the power source. One of its key ideas was the use of gears to perform mathematical functions. Following in Babbage's footsteps, though unaware of his earlier work, was Percy Ludgate, an accountant from Dublin, Ireland. He independently designed a programmable mechanical computer, which he described in a paper published in 1909.

1930s - 1960s: desktop calculators

The "Felix" arithmometer, the most common in the USSR, was produced from 1929 to 1978.

In 1948, the Curta appeared: a small mechanical calculator that could be held in one hand. In the 1950s and 1960s, several brands of such devices appeared on the Western market. The first fully electronic desktop calculator was the British ANITA Mk. VII, which used a Nixie-tube display and 177 miniature thyratron tubes. In June 1963, Friden introduced the four-function EC-130. It was fully transistorized, had 13-digit resolution on a 5-inch cathode-ray tube, and sold for $2,200. The EC-132 model added square root and reciprocal functions. In 1965, Wang Laboratories produced the LOCI-2, a 10-digit transistorized desktop calculator that used a Nixie-tube display and could compute logarithms.

The advent of analog computers in the prewar years

Differential Analyzer, Cambridge, 1938.

Before the Second World War, mechanical and electrical analog computers were considered the most advanced machines, and many believed they were the future of computing. Analog computers exploited the fact that the mathematical properties of small-scale phenomena (wheel positions, or electrical voltage and current) are similar to those of other physical phenomena, such as ballistic trajectories, inertia, resonance, energy transfer, and moment of inertia. They modeled these and other physical phenomena with values of electrical voltage and current.

The first electromechanical digital computers

Konrad Zuse's Z series

In 1936, working in isolation in Nazi Germany, Konrad Zuse began work on his first Z-series computer, a machine with memory and (still limited) programmability. The Z1 model, completed in 1938, was built mainly on a mechanical basis but already used binary logic; it did not work reliably because its parts could not be manufactured precisely enough. Zuse's next machine, the Z3, was completed in 1941. It was built on telephone relays and worked quite satisfactorily, making the Z3 the first working program-controlled computer. In many ways the Z3 resembled today's machines and introduced a number of innovations, such as floating-point arithmetic. Replacing the hard-to-implement decimal system with a binary one made Zuse's machines simpler and therefore more reliable; this is thought to be one of the reasons Zuse succeeded where Babbage failed. Programs for the Z3 were stored on perforated film. There were no conditional jumps, but in the 1990s the Z3 was shown to be, in theory, a general-purpose computer (ignoring limits on physical memory size). In two 1936 patents, Konrad Zuse noted that machine instructions could be stored in the same memory as data, foreshadowing what later became known as the von Neumann architecture, first implemented in 1949 in the British EDSAC.

British "Colossus"

The British Colossus was used to break German ciphers during World War II. It was the first fully electronic computing device. It used a large number of vacuum tubes, and information was entered from punched tape. Colossus could be configured to perform various Boolean logic operations, but it was not a Turing-complete machine. In addition to the Colossus Mk I, nine more Mk II machines were built. Information about the machine's existence was kept secret until the 1970s; Winston Churchill personally signed an order to break the machines into pieces no larger than a human hand. Because of this secrecy, Colossus is not mentioned in many works on the history of computers.

The first generation of von Neumann architecture computers

Memory on ferrite cores: each core stores one bit.

The first working machine with a von Neumann architecture was the Manchester "Baby", the Small-Scale Experimental Machine, created at the University of Manchester in 1948. It was followed in 1949 by the Manchester Mark I, already a complete system with Williams tubes, a magnetic drum for memory, and index registers. Another contender for the title of "first digital stored-program computer" was EDSAC, designed and built at the University of Cambridge. Launched less than a year after "Baby", it could already be used to solve real problems. EDSAC was in fact based on the architecture of EDVAC, the successor to ENIAC. Unlike ENIAC, which used parallel processing, EDVAC had a single processing unit; this design was simpler and more reliable, and so it was the first to be implemented after each successive wave of miniaturization. Many consider the Manchester Mark I, EDSAC, and EDVAC the "Eves" from which almost all modern computers derive their architecture.

The first universal programmable computer in continental Europe was created by a team of scientists led by Sergey Alekseevich Lebedev at the Kyiv Institute of Electrical Engineering (USSR). The MESM (Small Electronic Calculating Machine) was launched in 1950. It contained about 6,000 vacuum tubes, consumed 15 kW, and could perform about 3,000 operations per second. Another machine of the period was the Australian CSIRAC, which ran its first test program in 1949.

In October 1947, the directors of Lyons & Company, a British firm that owned a chain of shops and restaurants, decided to take an active part in the commercial development of computers. Their LEO I computer began operating in 1951 and was the first in the world to be used regularly for routine office work.

The University of Manchester machine became the prototype for the Ferranti Mark I. The first such machine was delivered to the university in February 1951, and at least nine others were sold between 1951 and 1957.

In June 1951, the UNIVAC 1 was installed at the US Census Bureau. The machine was developed by Remington Rand, which eventually sold 46 of them at over $1 million each. UNIVAC was the first mass-produced computer; all of its predecessors had been one-off machines. It consisted of 5,200 vacuum tubes and consumed 125 kW of power. Mercury delay lines stored 1,000 words of memory, each of 11 decimal digits plus a sign (72-bit words). Unlike IBM machines, which were equipped with punched-card input, UNIVAC used 1930s-style metalized magnetic tape input, which ensured compatibility with some existing commercial data-storage systems. Other computers of the time used high-speed punched-tape input and input/output on more modern magnetic tapes.

The first Soviet serial computer was the Strela, produced from 1953 at the Moscow plant of calculating and analytical machines. The Strela belongs to the class of large universal computers (mainframes) with a three-address instruction system. It performed 2,000-3,000 operations per second. Two magnetic tape drives with a capacity of 200,000 words served as external memory; RAM held 2,048 cells of 43 bits each. The machine contained 6,200 vacuum tubes and 60,000 semiconductor diodes and consumed 150 kW of power.

In 1955, Maurice Wilkes invented microprogramming, a principle later widely used in the microprocessors of a great variety of computers. Microprogramming allows a basic instruction set to be defined or extended by built-in programs (called microcode or firmware).

In 1956, IBM first sold a device for storing information on magnetic disks: RAMAC (Random Access Method of Accounting and Control). It used 50 metal disks 24 inches in diameter, with 100 tracks on each side. The device stored up to 5 MB of data and cost $10,000 per megabyte. (By 2006, similar storage devices, hard drives, cost about $0.001 per MB.)

1950s - early 1960s: second generation

The next major step in the history of computing technology was the invention of the transistor in 1947. Transistors replaced fragile, power-hungry vacuum tubes. Transistorized computers are commonly referred to as the "second generation", which dominated the 1950s and early 1960s. Thanks to transistors and printed circuit boards, significant reductions in size and power consumption were achieved, along with greater reliability. For example, the transistorized IBM 1620, which replaced the tube-based IBM 650, was about the size of an office desk. Second-generation computers were still quite expensive, however, and were therefore used only by universities, governments, and large corporations.

Second-generation computers usually consisted of a large number of printed circuit boards, each containing one to four logic gates or flip-flops. In particular, the IBM Standard Modular System defined a standard for such boards and their connectors. In 1959, IBM released the transistor-based IBM 7090 mainframe and the mid-range IBM 1401; more than 100,000 units of the latter were produced. It used 4,000 characters of memory (later increased to 16,000 characters). Many aspects of this design were driven by the desire to replace punched-card machines, which were in wide use from the 1920s to the early 1970s. In 1960, IBM released the transistorized IBM 1620, initially with punched-tape input only but soon upgraded to punched cards. The model became popular as a scientific computer; about 2,000 units were produced. The machine used magnetic-core memory of up to 60,000 decimal digits.

Also in 1960, DEC released its first model, the PDP-1, intended for use by technical personnel in laboratories and for research.

In 1961, Burroughs Corporation released the B5000, the first two-processor computer with virtual memory. Other unique features were the stack architecture, descriptor-based addressing, and the lack of programming directly in assembly language.

The first Soviet serial semiconductor computers were Vesna and Sneg, produced from 1964 to 1972. The peak performance of the Sneg computer was 300,000 operations per second. The machines were made on the basis of transistors with a clock frequency of 5 MHz. A total of 39 computers were produced.

The best domestic second-generation computer is considered to be the BESM-6, created in 1966. Its architecture made wide use, for the first time, of overlapped instruction execution: up to 14 single-address machine instructions could be at different stages of execution at once. Interrupt mechanisms, memory protection, and other innovations allowed the BESM-6 to be used in multiprogramming and time-sharing modes. The computer had 128 KB of ferrite-core RAM and external memory on magnetic drums and tape. The BESM-6 ran at a clock frequency of 10 MHz with a record performance for its time: about 1 million operations per second. A total of 355 machines were produced.

1960s onwards: third and subsequent generations

The rapid growth in the use of computers began with the so-called "third-generation" computers. It started with the invention of the integrated circuit, made independently by Nobel laureate Jack Kilby and by Robert Noyce, which later led to the invention of the microprocessor by Ted Hoff (Intel). During the 1960s there was some overlap between second- and third-generation technologies; as late as the end of 1975, Sperry Univac was still producing second-generation machines such as the UNIVAC 494.

The advent of microprocessors led to the development of microcomputers, small, inexpensive computers that could be owned by small companies or individuals. Microcomputers, the fourth generation of which first appeared in the 1970s, became ubiquitous in the 1980s and beyond. Steve Wozniak, one of the founders of Apple Computer, became known as the developer of the first mass-produced home computer, and later the first personal computer. Computers based on microcomputer architecture, with features added from their larger counterparts, now dominate most market segments.

1970-1990 - the fourth generation of computers

The period from 1970 to 1990 is generally assigned to fourth-generation computers. There is, however, another view: many believe that the achievements of this period are not great enough to count as a generation in their own right. Supporters of this view assign the period to a "generation three and a half" and date the fourth generation proper, which is still with us today, only from 1985.

One way or another, it is clear that since the mid-1970s there have been fewer and fewer fundamental innovations in computer science. Progress has come mainly from developing what had already been invented, above all by increasing the power and miniaturization of the element base and of the computers themselves.

And, most importantly, since the early 1980s, thanks to the advent of personal computers, computing technology has become truly mass-market and generally accessible. A paradoxical situation arose: even though personal computers and minicomputers still lagged behind large machines in every respect, the lion's share of the innovations of the last decade (the graphical user interface, new peripheral devices, global networks) owe their appearance and development to this "frivolous" technology. Large computers and supercomputers have by no means died out and continue to develop, but they no longer dominate the computer arena as they once did.

The element base of these computers is the large-scale integrated circuit (LSI). The machines were intended to dramatically raise labor productivity in science, production, management, healthcare, services, and everyday life. A high degree of integration increases the packing density of electronic equipment and its reliability, which raises computer speed and lowers cost. All of this strongly affects the logical structure (architecture) of the computer and its software. The connection between the structure of the machine and its software becomes ever closer, especially with the operating system (or monitor): the set of programs that organize the machine's continuous operation without human intervention. This generation includes the ES computers ES-1015, -1025, -1035, -1045, -1055, -1065 ("Row 2"), -1036, -1046, -1066; the SM-1420, -1600, -1700; all personal computers ("Elektronika MS 0501", "Elektronika-85", "Iskra-226", ES-1840, -1841, -1842, etc.); as well as other types and modifications. The fourth generation also includes the Elbrus multiprocessor computing complex. The Elbrus-1KB achieved up to 5.5 million floating-point operations per second with up to 64 MB of RAM; the Elbrus-2 reached up to 120 million operations per second, a RAM capacity of up to 144 MB (16 Mwords of 72 bits), and a maximum I/O channel throughput of 120 MB/s.

Example: IBM 370-168

Introduced in 1972, this model was one of the most widely used. RAM capacity: 8.2 MB. Performance: 7.7 million operations per second.


1990 - present day: the 5th generation of computers

The transition to fifth-generation computers meant a transition to new architectures focused on the creation of artificial intelligence.

It was believed that the architecture of fifth-generation computers would contain two main blocks. One is the computer itself; the other, a block called the "intelligent interface", handles communication with the user. The interface's task is to understand text written in natural language, or speech, and to translate the problem statement expressed this way into a working program.

Basic requirements for fifth-generation computers: creation of a developed human-machine interface (speech and image recognition); development of logic programming for building knowledge bases and artificial intelligence systems; creation of new technologies for manufacturing computing equipment; creation of new architectures for computers and computing systems.

The new technical capabilities of computing technology were expected to expand the range of solvable tasks and make it possible to move on to creating artificial intelligence. One of the components needed for artificial intelligence is knowledge bases (databases) in various areas of science and technology. Creating and using databases requires high-speed computing systems and large amounts of memory. Mainframe computers are capable of high-speed calculation but are poorly suited to high-speed comparison and sorting of the large volumes of records usually stored on magnetic disks. To build programs that fill and update databases and work with them, special object-oriented and logic programming languages were created, offering greater capabilities than conventional procedural languages. The structure of these languages demands a transition from the traditional von Neumann architecture to architectures that take into account the requirements of artificial intelligence tasks.

Example: IBM eServer z990

Manufactured in 2003. Physical parameters: weight 2,000 kg, power consumption 21 kW, footprint 2.5 sq. m, height 1.94 m, RAM capacity 256 GB, performance 9 billion instructions per second.

The beginning

The calculator and the computer are far from the only devices with which one can carry out calculations. Humanity thought quite early about how to ease the processes of division, multiplication, subtraction, and addition. One of the first such devices was the balance scale, which appeared in the fifth millennium BC. However, we will not plunge that far into the depths of history.

Andy Grove, Robert Noyce and Gordon Moore. (wikipedia.org)

The abacus, known to us as the counting frame, was born around 500 BC. Ancient Greece, India, China, and the Inca state can all claim to be its homeland. Archaeologists suspect that computational mechanisms existed even in ancient cities, though their existence has not yet been proven. However, the Antikythera mechanism, already mentioned in the previous article, may well be considered a computational mechanism.

With the onset of the Middle Ages, the skills needed to create such devices were lost; those dark times were generally a period of sharp decline in science. But in the 17th century, mankind again turned its thoughts to computing devices. And they were not slow to appear.

The first computers

The creation of a device that could perform calculations was the dream of the German astronomer and mathematician Wilhelm Schickard. He had many projects, and most of them failed, but Schickard was not discouraged, and he eventually succeeded. In 1623, the mathematician designed the "Counting Clock": a complex and cumbersome mechanism that could nevertheless perform simple calculations.

Schickard's Counting Clock. Drawing. (wikipedia.org)

The "Counting Clock" was of considerable size and weight, and it was difficult to put it to practical use. Schickard's friend, the famous astronomer Johannes Kepler, jokingly remarked that it was much easier to do the calculations in one's head than to use the clock. Nevertheless, it was Kepler who became the first user of Schickard's machine; it is known that he performed many of his calculations with it.

Johannes Kepler. (wikipedia.org)

The device got its name because it was based on the same mechanism that drove wall clocks, and Schickard himself can be considered the "father" of the calculator. Twenty years later, the family of computing devices was joined by an invention of the French mathematician, physicist, and philosopher Blaise Pascal, who introduced the Pascaline in 1643.

Pascal summing machine. (wikipedia.org)

Pascal was 20 at the time, and he made the device for his father, a tax collector who had to deal with very complex calculations. The adding machine was driven by gears; to enter a number, the wheels had to be turned a certain number of times.

Thirty years later, in 1673, the German mathematician Gottfried Leibniz created his own design. His device was the first in history to be called a calculator. Its principle of operation was the same as that of Pascal's machine.

Gottfried Leibniz. (wikipedia.org)

One very curious story is connected with the Leibniz calculator. At the beginning of the 18th century, the machine was seen by Peter I, who was visiting Europe as part of the Grand Embassy. The future emperor became very interested in the device and even bought it. Legend has it that Peter later sent the calculator as a gift to the Chinese emperor Kangxi.

From calculator to computer

The work of Pascal and Leibniz was carried forward. In the 18th century, many scientists tried to improve calculating machines, the main goal being a commercially successful device. In the end, success came to the Frenchman Charles Xavier Thomas de Colmar.

Charles Xavier Thomas de Colmar. (wikipedia.org)

In 1820, he launched the mass production of calculating instruments. Strictly speaking, Colmar was more a skilled industrialist than an inventor: his "Thomas machine" differed little from Leibniz's calculator, and Colmar was even accused of stealing another man's invention and trying to make a fortune from another man's labor.

In Russia, serial production of calculators began in 1890. The calculator acquired its current form in the 20th century, and in the 1960s and 1970s the industry experienced a real boom. The instruments improved year by year: in 1965, for example, a calculator appeared that could compute logarithms, and in 1970 the first calculator that fit in the hand was released. But by then the computer age was already beginning, though humanity had not yet had time to feel it.

Computers

Many consider the man who laid the foundations of computer technology to be the French weaver Joseph Marie Jacquard. It is hard to tell whether this is a joke or not. Nevertheless, it was Jacquard who invented the punched card. At that time, people did not yet know what a memory card was, but Jacquard's invention may well claim that title. The weaver devised it to control the loom: a punched card specified the pattern for the fabric, so that from the moment the card was loaded, the pattern was applied without human intervention, automatically.

Punched cards. (wikipedia.org)

Jacquard's punched cards, of course, were not electronic devices; such things were still far in the future, since Jacquard lived at the turn of the 18th and 19th centuries. However, punched cards later found wide use in other areas, going far beyond the confines of the famous loom.

In 1835, Charles Babbage described an Analytical Engine that was to be based on punched cards. The key principle of such a device was programming; thus, the English mathematician foresaw the advent of the computer. Alas, Babbage himself was never able to build the machine he invented. The world's first analog computer was born in 1927, created by Vannevar Bush, a professor at the Massachusetts Institute of Technology.

History of the development of computing technology




1. Stages of development of computer technology

Until the 17th century, the activity of society as a whole, and of each person individually, was aimed at mastering matter: learning the properties of substances and making first primitive, then ever more complex tools of labor, up to the mechanisms and machines that produce consumer goods.

Then, as industrial society took shape, the problem of mastering energy came to the fore: first thermal, then electrical, and finally atomic. Mastering energy made the mass production of consumer goods possible and, as a result, raised people's standard of living and changed the nature of their work.

At the same time, humanity is characterized by the need to express and remember information about the world around us - this is how writing, printing, painting, photography, radio, and television appeared. In the history of the development of civilization, several information revolutions can be distinguished - the transformation of social relations due to fundamental changes in the field of information processing, information technologies. The consequence of such transformations was the acquisition of a new quality by human society.

At the end of the 20th century, humanity entered a new stage of development: the building of an information society. Information became the most important factor of economic growth, and the level of a country's information activity, and the degree of its involvement in and influence on the global information infrastructure, became a key condition of its competitiveness in the world economy. An understanding that such a society was inevitable came much earlier. Back in the 1940s, the Australian economist C. Clark spoke of the approaching era of a society of information and services, a society of new technological and economic opportunities. The American economist F. Machlup suggested, in the late 1950s, the advent of an information economy and the transformation of information into an essential commodity. In the late 1960s, D. Bell described the transformation of industrial society into an information society. As for the countries that were once part of the USSR, informatization there developed at a slow pace.

Informatics is changing the entire system of social production and the interaction of cultures. With the onset of the information society, a new stage begins not only in the scientific and technological revolution but in the social one as well. The entire system of information communications is changing. The destruction of old information links between sectors of the economy, areas of scientific activity, regions, and countries intensified the economic crisis at the end of the century in countries that paid insufficient attention to informatization. The most important task of society is to restore communication channels under the new economic and technological conditions, so as to ensure clear interaction among all areas of economic, scientific, and social development, both within individual countries and on a global scale.

Computers in modern society have taken over a significant part of the work related to information. By historical standards, computer information processing technologies are still very young and are at the very beginning of their development. Computer technology today is transforming or replacing older information processing technologies.


2. "Time - events - people"

Let us consider the history of the development of computing devices and methods "in persons" and objects (Table 1).

Table 1. Main events in the history of the development of computational methods, instruments, automata and machines

John Napier

The Scot John Napier published his Description of the Wonderful Table of Logarithms in 1614. He established that the sum of the logarithms of two numbers a and b equals the logarithm of their product, so the operation of multiplication reduces to the simpler operation of addition. He also developed a tool for multiplying numbers, known as "Napier's bones". It consisted of a set of segmented rods that could be arranged so that, by adding the numbers in horizontally adjacent segments, one obtained the result of their multiplication. Napier's bones were soon superseded by other, mostly mechanical, computing devices. Napier's tables, whose calculation required a very long time, were later "built into" a convenient device that sped up the process of calculation: the slide rule (R. Bissaker, late 1620s).
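Napier's key identity, log(ab) = log(a) + log(b), is easy to demonstrate with a short sketch in Python (a modern illustration of the idea, not a reconstruction of the tables themselves):

```python
import math

def multiply_via_logs(a: float, b: float) -> float:
    """Multiply two positive numbers the way Napier's tables allowed:
    look up the two logarithms, add them, take the antilogarithm."""
    # log(a*b) = log(a) + log(b), so one addition replaces a multiplication
    return math.exp(math.log(a) + math.log(b))

print(round(multiply_via_logs(37.0, 54.0)))  # 1998, i.e. 37 * 54
```

This is precisely the trade-off the slide rule mechanized: two logarithmic scales slid against each other perform the addition of logarithms physically.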

Wilhelm Schickard

It was long believed that the first mechanical calculating machine was invented by the great French mathematician and physicist B. Pascal in 1642. However, in 1957 F. Hammer (director of the Kepler science center in Germany) discovered evidence that a mechanical calculating machine had been created by Wilhelm Schickard roughly two decades before Pascal's invention. Schickard called it a "counting clock". The machine was designed to perform all four arithmetic operations and consisted of a summing device, a multiplying device, and a mechanism for recording intermediate results. The summing device consisted of gears and represented the simplest form of adding machine. The mechanical counting scheme he proposed is considered classical. However, this simple and effective scheme had to be reinvented, since information about Schickard's machine never became public knowledge.

Blaise Pascal

In 1642, when Pascal was 19 years old, the first working model of his adding machine was built. A few years later Blaise Pascal completed his mechanical adding machine (the "Pascaline"), which could add numbers in the decimal number system. In this machine the digits of a six-digit number were set by rotating disks (wheels) with digital divisions, and the result of an operation could be read in six windows, one for each digit. The units disk was connected to the tens disk, the tens disk to the hundreds disk, and so on. Over about a decade he built more than 50 different versions of the machine. The principle of linked wheels that Pascal invented was the basis on which most computing devices were built for the next three centuries.
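The principle of linked wheels, where a wheel rolling past 9 nudges its left neighbour forward by one, can be sketched as follows (a minimal modern model of the carry mechanism, not a description of the actual hardware):

```python
class Pascaline:
    """A toy model of Pascal's linked-wheel adder: six decimal wheels,
    each carrying into its left neighbour when it passes 9."""
    def __init__(self, digits: int = 6):
        self.wheels = [0] * digits  # wheels[0] = units, wheels[1] = tens, ...

    def add_to_wheel(self, i: int, amount: int) -> None:
        total = self.wheels[i] + amount
        self.wheels[i] = total % 10
        if total >= 10 and i + 1 < len(self.wheels):
            self.add_to_wheel(i + 1, total // 10)  # propagate the carry

    def add(self, number: int) -> None:
        for i, digit in enumerate(reversed(str(number))):
            self.add_to_wheel(i, int(digit))

    def value(self) -> int:
        return int("".join(str(d) for d in reversed(self.wheels)))

m = Pascaline()
m.add(785)
m.add(436)
print(m.value())  # 1221
```

The carry propagation in `add_to_wheel` is the mechanical heart of the design: addition on one wheel can ripple all the way to the highest digit.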

Gottfried Wilhelm Leibniz

In 1672, while in Paris, Leibniz met the Dutch mathematician and astronomer Christiaan Huygens. Seeing how many calculations an astronomer had to perform, Leibniz decided to invent a mechanical device for computation, and in 1673 he completed a mechanical calculator. Developing Pascal's ideas, Leibniz used a shift operation for digit-by-digit multiplication of numbers. Addition was carried out essentially as on the Pascaline, but Leibniz added a movable element (a prototype of the movable carriage of later desktop calculators) and a handle with which one could turn a stepped wheel or, in later versions of the machine, cylinders located inside the apparatus.

Joseph Marie Jacquard

The development of computing devices is also associated with the appearance of punched cards, which originated in weaving. In 1804 the engineer Joseph-Marie Jacquard built a fully automated loom (the Jacquard loom) capable of reproducing the most complex patterns. The operation of the loom was programmed using a deck of punched cards, each of which controlled one movement of the shuttle. Switching to a new pattern was done by replacing the deck of punched cards.
Charles Babbage (1791-1871) discovered errors in Napier's logarithm tables, which were widely used in calculations by astronomers, mathematicians, and navigators. In 1821 he began to develop his own computing machine, which would help to perform more accurate calculations. In 1822 a difference engine (a trial model) was built, capable of calculating and printing large mathematical tables. It was a very complex, large device intended for the automatic calculation of logarithms. The model was based on a principle known in mathematics as the method of finite differences: in calculating polynomials only the operation of addition is used, while multiplication and division, which are much harder to automate, are not performed. Babbage subsequently conceived the idea of a more powerful analytical engine. It was to perform not just mathematical problems of one particular type but a variety of computational operations in accordance with instructions given by the operator. By design, it was nothing less than the first universal programmable computer. The Analytical Engine was to have components such as a "mill" (an arithmetic unit, in modern terminology) and a "store" (memory). Instructions (commands) were to be entered into the engine on punched cards, borrowing Jacquard's idea of program control by punched cards. The Swedish publisher, inventor, and translator Per Georg Scheutz, using Babbage's advice, built a modified version of this machine; in 1855 Scheutz's machine was awarded a gold medal at the World Exhibition in Paris. Later, one of the principles underlying the analytical engine, the use of punched cards, was embodied in the statistical tabulator built by the American Herman Hollerith (to speed up the processing of the results of the 1890 US census).
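The finite-difference method behind the difference engine is easy to demonstrate: once the starting value and the leading differences of a polynomial are known, every further table entry requires only additions. A small sketch of the idea (illustrative, not Babbage's mechanism):

```python
def tabulate(initial, steps):
    """Tabulate a polynomial by the finite-difference method.
    `initial` holds [p(0), first difference, second difference, ...];
    every new table entry is produced purely by additions."""
    diffs = list(initial)
    table = [diffs[0]]
    for _ in range(steps):
        # fold each difference into the one above it, left to right
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
        table.append(diffs[0])
    return table

# For p(x) = 2x^2 + 3x + 1: p(0) = 1, p(1) - p(0) = 5, second difference = 4
print(tabulate([1, 5, 4], 5))  # [1, 6, 15, 28, 45, 66]
```

For a polynomial of degree n the n-th difference is constant, so the whole table unrolls from n + 1 starting numbers using nothing but addition, which is exactly what made the method mechanizable.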

Augusta Ada Byron

(Countess of Lovelace)

Countess Augusta Ada Lovelace, daughter of the poet Byron, worked with C. Babbage on programs for his calculating machines. Her writings in this field were published in 1843. At that time, however, it was considered improper for a woman to publish under her full name, so Lovelace put only her initials on the title page. Babbage's materials and Lovelace's commentaries outline such concepts as the "subroutine" and "subroutine library", "instruction modification", and "index register", which came into use only in the 1950s. The term "library" itself was introduced by Babbage, while the terms "working cell" and "cycle" were proposed by A. Lovelace. "We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves," wrote the Countess of Lovelace. She was in effect the first programmer (the Ada programming language is named after her).

George Boole

G. Boole is rightfully considered the father of mathematical logic; the branch of it called Boolean algebra is named after him. In 1847 he wrote the article "The Mathematical Analysis of Logic", and in 1854 he developed his ideas in a work entitled "An Investigation of the Laws of Thought". These works brought revolutionary changes to logic as a science. Boole invented a kind of algebra: a system of notation and rules applicable to all sorts of objects, from numbers and letters to sentences. Using this system he could encode propositions and then manipulate them in much the same way that ordinary numbers are manipulated in mathematics. The three basic operations of the system are AND, OR, and NOT.
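These three operations can be demonstrated directly in Python, whose built-in logic is essentially an implementation of Boole's algebra; the check below verifies one of De Morgan's laws over all truth values (a modern illustration, of course, not Boole's notation):

```python
def AND(p, q): return p and q
def OR(p, q):  return p or q
def NOT(p):    return not p

# De Morgan's law: NOT(p AND q) = (NOT p) OR (NOT q),
# verified exhaustively over the two truth values.
assert all(NOT(AND(p, q)) == OR(NOT(p), NOT(q))
           for p in (True, False) for q in (True, False))
print("De Morgan's law holds")
```

Because a proposition can take only two values, any law of the algebra can be proved by such an exhaustive check, which is also the principle behind truth tables.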

Pafnuty Lvovich Chebyshev

He developed the theory of machines and mechanisms and wrote a number of works on the synthesis of hinged mechanisms. Among the numerous mechanisms he invented are several models of adding machines, the first of which was designed no later than 1876. For its time, Chebyshev's adding machine was one of the most original computing devices. In his designs Chebyshev proposed the principle of continuous carrying of tens and the automatic movement of the carriage from digit to digit during multiplication. Both inventions entered widespread practice in the 1930s, with the use of electric drives and the spread of semi-automatic and automatic keyboard computing machines. These and other inventions made it possible to increase the speed of mechanical counting devices significantly.
Alexey Nikolaevich Krylov (1863-1945) Russian shipbuilder, mechanic, mathematician, academician of the USSR Academy of Sciences. In 1904, he proposed the design of a machine for integrating ordinary differential equations. In 1912, such a machine was built. It was the first continuous integrating machine that allowed solving differential equations up to the fourth order.

Wilgodt Theophil Odner

Wilgodt Theophil Odner, a native of Sweden, arrived in St. Petersburg in 1869. For some time he worked at the Russian Diesel plant on the Vyborg side, where in 1874 the first sample of his adding machine was made. The first serial adding machines, built on Leibniz's stepped drums, were large, primarily because a separate drum had to be allocated to each digit. Instead of stepped drums, Odner used more advanced and compact gears with a variable number of teeth: Odner wheels. In 1890 Odner received a patent for the production of arithmometers, and in the same year 500 of them were sold, a very large number for that time. Arithmometers in Russia were sold under names such as "Odner adding machine", "Original-Odner", and "Odner-system adding machine". In Russia, about 23 thousand Odner arithmometers were produced before 1917. After the revolution, their production was set up at the Sushchevsky Mechanical Plant named after F. E. Dzerzhinsky in Moscow; from 1931 they were called "Felix" arithmometers. Later, models of Odner arithmometers with keyboard input and electric drive were created in our country.
Herman Hollerith (1860-1929). After graduating from Columbia University, he went to work at the census office in Washington. At that time the United States had embarked on the extremely laborious manual processing (it lasted seven and a half years) of the data collected in the 1880 census. By 1890 Hollerith had completed the development of a tabulation system based on punched cards. Each card had 12 rows, in each of which 20 holes could be punched; these corresponded to data such as age, sex, place of birth, number of children, marital status, and other information from the census questionnaire. The contents of the completed forms were transferred to the cards by punching. The punched cards were loaded into special devices connected to a tabulating machine, where they were pressed onto rows of thin needles, one needle for each of the 240 punch positions on the card. Where a needle passed through a hole, it closed a contact in the corresponding electrical circuit of the machine. The full statistical analysis of the results took two and a half years (three times faster than in the previous census). Hollerith subsequently organized the Computing-Tabulating-Recording Company (CTR). The company's young salesman Thomas Watson was the first to see the profit potential in selling punched-card calculating machines to American businessmen. He later took over the company and in 1924 renamed it International Business Machines Corporation (IBM).

Vannevar Bush

In 1930 he built a mechanical computing device, the differential analyzer: a machine that could solve complex differential equations. It had many serious shortcomings, above all its gigantic size. Bush's mechanical analyzer was a complex system of rollers, gears, and wires assembled into a series of large blocks that occupied an entire room; to set up a problem the operator had to select many gear settings by hand, which usually took 2-3 days. Later, Bush proposed a prototype of modern hypertext: the MEMEX (MEMory EXtension) project, an automated office in which a person would store books, records, and any other information in such a way that it could be used at any time with maximum speed and convenience. In effect it was to be a complex device equipped with a keyboard and transparent screens onto which texts and images stored on microfilm would be projected. MEMEX would establish logical and associative links between any two blocks of information. Ideally, it would amount to a huge library, a universal information base.

John Vincent Atanasoff

A professor of physics, Atanasoff was the author of the first design for a digital computer based on the binary rather than the decimal number system. The simplicity of the binary system, combined with the simplicity of physically representing two symbols (0, 1) instead of ten (0, 1, ..., 9) in a computer's electrical circuits, outweighed the inconvenience of having to convert between binary and decimal. In addition, using the binary system helped reduce the size of the computer and would have reduced its cost. In 1939 Atanasoff built a model of the device and began to seek financial help to continue the work. Atanasoff's machine was almost ready in December 1941, but it was disassembled; with the outbreak of World War II all work on the project ceased. Only in 1973 was Atanasoff's priority as the author of the first design of such a computer architecture confirmed, by decision of a US federal court.
Howard Aiken. In 1937 Aiken proposed a design for a large calculating machine and looked for people willing to finance the idea. The sponsor was Thomas Watson, president of IBM: his contribution to the project came to about 500 thousand US dollars. Design of the new machine, the Mark-1, based on electromechanical relays, began in 1939 in the laboratories of IBM's New York branch and continued until 1944. The finished computer contained about 750 thousand parts and weighed 35 tons. The machine operated on binary numbers up to 23 digits long and multiplied two numbers of maximum length in about 4 s. Since the creation of the Mark-1 took quite a long time, priority went not to it but to Konrad Zuse's Z3 relay-based binary computer, built in 1941. It is worth noting that the Z3 was much smaller than Aiken's machine and also cheaper to manufacture.

Konrad Zuse

In 1934, as a student at a technical university in Berlin and without the slightest knowledge of C. Babbage's work, K. Zuse began to develop a universal computer in many respects similar to Babbage's analytical engine. In 1938 he completed a machine that occupied an area of 4 square meters, called the Z1 (from the German spelling of his surname, Zuse). It was a fully electromechanical programmable digital machine with a keyboard for entering problem data; the results of calculations were displayed on a panel of many small lights. A restored version of it is kept in the Museum für Verkehr und Technik in Berlin, and it is the Z1 that in Germany is called the world's first computer. Zuse later began encoding the machine's instructions by punching holes in used 35 mm film; the machine that worked with this perforated tape was called the Z2. In 1941 Zuse built the Z3, a program-controlled machine based on the binary number system, which in many of its characteristics surpassed other machines built independently and in parallel in other countries. In 1942 Zuse, together with the Austrian electrical engineer Helmut Schreyer, proposed creating a computer of a fundamentally new type, based on vacuum tubes. Such a machine would work a thousand times faster than any of the machines then available in Germany. Speaking of potential applications of a high-speed computer, Zuse and Schreyer noted its possible use in decrypting coded messages (such work was already under way in various countries).

Alan Turing

An English mathematician, Turing gave a mathematical definition of the algorithm through a construction now called the Turing machine. During World War II the Germans used the Enigma machine to encrypt messages; without the key and wiring scheme (which the Germans changed three times a day) it was impossible to decipher a message. To uncover the secret, British intelligence assembled a group of brilliant and somewhat eccentric scientists, among them the mathematician Alan Turing. At the end of 1943 the group managed to build a powerful machine (using about 2000 vacuum tubes instead of electromechanical relays), named Colossus. Intercepted messages were encoded, transferred to punched tape, and entered into the machine's memory. The tape was read by photoelectric readers at 5000 characters per second; the machine had five such readers. In the process of searching for a match (decryption), the machine compared the encrypted message with already known codes (following the algorithm of a Turing machine). The group's work is still classified. Turing's role in it can be judged from the words of the mathematician I. J. Good, a member of the group: "I do not say that we won the war thanks to Turing, but I take the liberty of saying that without him we might have lost it." The Colossus was a vacuum-tube machine (a major step forward in the development of computer technology) and a specialized one (for breaking secret codes).

John Mauchly

Presper Eckert

(1919-1995)

The first general-purpose electronic computer was the ENIAC (Electronic Numerical Integrator and Computer). Its authors, the American scientists J. Mauchly and J. Presper Eckert, worked on it from 1943 to 1945. Intended for calculating projectile trajectories, it was one of the most complex engineering structures of the mid-20th century: more than 30 m long, 85 cubic meters in volume, and 30 tons in weight. ENIAC used 18 thousand vacuum tubes and 1500 relays and consumed about 150 kW. Then arose the idea of a machine with the program stored in its memory, which would change the principles of organizing computation and pave the way for modern programming languages: the EDVAC (Electronic Discrete Variable Automatic Computer), created in 1950. Its larger internal memory held both the data and the program. Programs were recorded electronically in special devices, delay lines. Most importantly, in EDVAC data were encoded not in the decimal system but in binary, which reduced the number of vacuum tubes required. After founding their own company, J. Mauchly and P. Eckert set out to create a universal computer for wide commercial use, the UNIVAC (Universal Automatic Computer). About a year before the first UNIVAC entered service at the US Census Bureau, the partners found themselves in financial difficulty and were forced to sell their company to Remington Rand. Even so, UNIVAC did not become the first commercial computer: that was the LEO machine (Lyons Electronic Office), which was used in England to calculate wages for employees of the Lyons chain of tea shops. In 1973 a US federal court invalidated their patent for the invention of the electronic digital computer, ruling that its key ideas had been borrowed from J. Atanasoff.
John von Neumann (1903-1957)

Working in the group of J. Mauchly and P. Eckert, von Neumann prepared the "First Draft of a Report on the EDVAC", in which he summarized the plans for work on the machine. This was the first work on digital electronic computers to become known to wider scientific circles (for reasons of secrecy, work in this area was not published). From then on, the computer was recognized as an object of scientific interest. In his report von Neumann singled out and described in detail five key components of what is now called the "von Neumann architecture" of the modern computer.

In our country, independently of von Neumann, more detailed and complete principles for the construction of electronic digital computers were formulated by Sergei Alekseevich Lebedev.

Sergei Alekseevich Lebedev

In 1946, S. A. Lebedev became the director of the Institute of Electrical Engineering and organized his own laboratory of modeling and regulation within it. In 1948, S. A. Lebedev focused his laboratory on the creation of MESM (Small Electronic Computing Machine). MESM was originally conceived as a model (the first letter in the abbreviation MESM) of the Large Electronic Computing Machine (BESM). However, in the process of its creation, the expediency of turning it into a small computer became obvious. Due to the secrecy of the work carried out in the field of computer technology, there were no relevant publications in the open press.

The basics of building a computer, developed by S. A. Lebedev, independently of J. von Neumann, are as follows:

1) a computer should include arithmetic, memory, input/output, and control devices;

2) the computation program is encoded and stored in memory in the same way as numbers;

3) to encode numbers and commands, the binary number system should be used;

4) computations should be carried out automatically on the basis of the program stored in memory and the operations it commands;

5) in addition to arithmetic operations, logical operations are introduced: comparisons, conditional and unconditional jumps, conjunction, disjunction, and negation;

6) memory is built on a hierarchical principle;

7) numerical methods for solving problems are used for calculations.
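Principles 2-4 can be illustrated with a toy stored-program machine (a modern didactic sketch, not Lebedev's actual design): the program and the data share one memory as numbers, and the machine executes commands fetched from that memory automatically.

```python
def run(memory):
    """Execute a program stored in the same memory as its data.
    Each command is a number: opcode * 100 + operand address.
    Opcodes: 0 HALT, 1 LOAD, 2 ADD, 3 STORE (accumulator machine)."""
    acc, pc = 0, 0
    while True:
        op, addr = divmod(memory[pc], 100)  # fetch and decode a command
        pc += 1
        if op == 0:
            break                   # HALT
        elif op == 1:
            acc = memory[addr]      # LOAD: cell -> accumulator
        elif op == 2:
            acc += memory[addr]     # ADD: accumulator += cell
        elif op == 3:
            memory[addr] = acc      # STORE: accumulator -> cell
    return memory

# Cells 0-3 hold the program (LOAD [5]; ADD [6]; STORE [7]; HALT),
# cells 5-7 hold the data; both are just numbers in one memory.
mem = [105, 206, 307, 0, 0, 20, 22, 0]
print(run(mem)[7])  # 42
```

Because commands are stored as ordinary numbers, a program could in principle modify its own instructions, one of the consequences of the stored-program idea that early machines exploited.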

On December 25, 1951, MESM was put into operation. It was the first high-speed electronic digital machine in the USSR.

In 1948, the Institute of Precision Mechanics and Computer Technology (ITM and CT) of the Academy of Sciences of the USSR was created, to which the government entrusted the development of new computer technology, and S. A. Lebedev was invited to head Laboratory No. 1 (1951). When the BESM was ready (1953), it was in no way inferior to the latest American models.

From 1953 until the end of his life, S. A. Lebedev was the director of the ITM and CT of the Academy of Sciences of the USSR, was elected a full member of the Academy of Sciences of the USSR and led the work on the creation of several generations of computers.

In the early 1950s the first computer in the series of large electronic calculating machines (BESM), the BESM-1, was created. Original scientific and design solutions were applied in it; thanks to this, it was then the most productive machine in Europe (8-10 thousand operations per second) and one of the best in the world. Under S. A. Lebedev's leadership, two more vacuum-tube computers, the BESM-2 and M-20, were created and put into production. In the 1960s semiconductor versions of the M-20 were created, the M-220 and M-222, as well as the BESM-3M and BESM-4.

When designing BESM-6, the method of preliminary simulation modeling was used for the first time (commissioning was carried out in 1967).

S. A. Lebedev was one of the first to understand the great importance of the joint work of mathematicians and engineers in the creation of computer systems. On the initiative of S. A. Lebedev, all the BESM-6 schemes were written in Boolean algebra formulas. This opened up wide opportunities for automating the design and preparation of installation and production documentation.

IBM. One cannot pass over a key stage in the development of computing tools and methods associated with the activities of IBM. The first computers of the classical structure and composition, the IBM System/360 family, were released in 1964 and, together with subsequent modifications (IBM/370, IBM/375), were supplied until the mid-1980s, when under the pressure of microcomputers (PCs) they gradually began to leave the scene. Computers of this series served as the basis for the development in the USSR and the CMEA member countries of the so-called Unified System of Computers (ES EVM), which for several decades was the backbone of domestic computerization.
ES-1045

The machines included the following components:

Central processing unit (32-bit) with two-address instruction set;

Main (RAM) memory (from 128 KB to 2 MB);

Magnetic disk drives (NMD) with removable disk packs (for example, IBM-2311 - 7.25 MB, IBM-2314 - 29 MB, IBM-3330 - 100 MB); similar (sometimes compatible) devices are known for the other series mentioned above;

Magnetic tape drives (NML) of the reel type: tape width 0.5 inch, length 2400 feet (720 m) or less (typically 360 and 180 m), recording density from 256 bytes per inch (typical) up to 2-8 times higher. The working capacity of a drive was thus determined by reel size and recording density and reached 160 MB per tape reel;

Printing devices: drum-type line printers with a fixed character set (usually 64 or 128 characters) including uppercase Latin and Cyrillic (or uppercase and lowercase Latin) and a standard set of service characters; output was onto paper 42 or 21 cm wide at speeds of up to 20 lines/s;

Terminal devices (video terminals, and originally electric typewriters) for interactive work with the user (IBM 3270, DEC VT-100, etc.), connected to the system both to manage the computing process (operator consoles, 1-2 per computer) and for interactive debugging of programs and data processing (user terminals, from 4 to 64 per computer).

The listed standard sets of computer devices of the 60-80s. and their characteristics are given here as a historical reference for the reader, who can evaluate them independently, comparing them with modern and known data.

IBM offered the first functionally complete operating system, OS/360, as a shell for the IBM/360 computer. The development and implementation of the OS made it possible to separate the functions of operators, administrators, programmers, and users, and also to increase significantly (by tens and hundreds of times) the performance of computers and the utilization of their hardware. The versions of OS/360/370/375 - MFT (multiprogramming with a fixed number of tasks), MVT (with a variable number of tasks), SVS (a virtual storage system), and SVM (a virtual machine system) - succeeded one another and largely shaped the modern understanding of the role of the OS.

Bill Gates and

Paul Allen

In 1974 Intel developed the widely adopted 8-bit microprocessor, the 8080, with 4500 transistors. Edward Roberts, a young electronics engineer and US Air Force officer, built the Altair microcomputer around the 8080 processor; sold by mail order and widely used at home, it was a huge commercial success. In 1975 the young programmer Paul Allen and the Harvard student Bill Gates implemented the BASIC language for the Altair. They subsequently founded Microsoft.
Stephen Paul Jobs and Stephen Wozniak

In 1976 the students Steve Wozniak and Steve Jobs set up a workshop in a garage and built the Apple-1 computer, marking the beginning of the Apple corporation. In 1983 Apple Computer built the Lisa personal computer, the first office computer controlled by a mouse.

In 2001, Steven Wozniak founded Wheels Of Zeus to create wireless GPS technology.

2001 - Steve Jobs introduced the first iPod.

2006 - Apple introduced the first laptop based on Intel processors.

2008 - Apple introduced the world's thinnest laptop, dubbed the MacBook Air.

3. Classes of computers

Computers are classified by their areas of application and methods of use (as well as by size and processing power) and by the physical representation of the information they process.

Physical representation of processed information

By the latter criterion, one distinguishes analog (continuous-action), digital (discrete-action), and hybrid computers (which use different physical representations of data at different stages of processing).

AVM (analog computers), or continuous-action computers, work with information presented in continuous (analog) form, i.e., as a continuous range of values of some physical quantity (most often electrical voltage).

CVM (digital computers), or discrete-action computers, work with information presented in discrete, or rather digital, form. Owing to the universality of the digital representation of information, the digital computer is the more universal means of data processing.

GVM (hybrid computers), or combined-action computers, work with information presented in both digital and analog form, combining the advantages of AVM and CVM. It is expedient to use hybrid computers for solving problems of controlling complex high-speed technical complexes.

Generations of computers

The idea of dividing machines into generations arose because, over the short history of its development, computer technology has undergone a great evolution both in its element base (tubes, transistors, integrated circuits, etc.) and in its structure, the emergence of new capabilities, and the expanding scope and nature of its use (Table 2).


Table 2. Stages of development of computer information technologies
(each row lists the values of the parameter in chronological order: 1950s, 1960s, 1970s, 1980s, present)

Purpose of computer use: scientific and technical calculations; technical and economic calculations; management, provision of information; telecommunications, information services.

Computer mode: single-program; batch processing; time sharing; personal work; network processing.

Data integration: low; medium; high; very high.

User location: machine room; separate room; terminal hall; desktop; anywhere (mobile).

User type: engineer-programmers; professional programmers; programmers; users with general computer training; minimally trained users.

Dialog type: work at the computer console; exchange of punched media and printouts; interactive (via keyboard and screen); interactive with rigid menus; interactive on-screen "question-answer" dialog.

The first generation usually includes machines created at the turn of the 1950s and based on vacuum tubes. These computers were huge, cumbersome, and extremely expensive machines that only large corporations and governments could purchase. The tubes consumed a significant amount of electricity and generated a lot of heat (Fig. 1).

The instruction set was limited, the circuits of the arithmetic-logic unit and the control unit were quite simple, and there was practically no software. RAM capacity and performance figures were low. Punched tapes, punched cards, magnetic tapes, and printing devices were used for I/O. Speeds were about 10-20 thousand operations per second.

Programs for these machines were written in the language of the particular machine. The mathematician who wrote a program sat at the machine's control console, entered and debugged the program, and ran the computation. Debugging was very time-consuming.

Despite the limited capabilities, these machines made it possible to perform the most complex calculations necessary for weather forecasting, solving problems of nuclear energy, etc.

Experience with the first generation of machines showed that there was a huge gap between the time spent developing programs and the time spent computing. These problems began to be overcome through the intensive development of tools for automating programming and the creation of systems of service programs that simplified work on the machine and increased the efficiency of its use. This, in turn, required significant changes in the structure of computers, aimed at bringing it closer to the requirements that emerged from operating experience.

In October 1945, the first computer, ENIAC (Electronic Numerical Integrator and Computer), was completed in the USA.

Domestic machines of the first generation: MESM (small electronic calculating machine), BESM, Strela, Ural, M-20.

The second generation of computer technology comprises machines designed in 1955-65. They are characterized by the use of both vacuum tubes and discrete transistor logic elements (Fig. 2). Their RAM was built on magnetic cores. At this time the range of input-output equipment began to expand, and high-performance devices for working with magnetic tapes (NML drives), magnetic drums (NMB drives) and the first magnetic disks appeared (Table 2).

These machines are characterized by speed up to hundreds of thousands of operations per second, memory capacity - up to several tens of thousands of words.

High-level languages appeared, whose facilities allow the entire necessary sequence of computational actions to be described in a visual, easily understood form.

A program written in an algorithmic language is incomprehensible to a computer, which understands only the language of its own instructions. Therefore, special programs called translators convert the program from the high-level language into machine language.
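A rough modern illustration of this idea (using today's Python rather than any machine described here): the standard `dis` module shows the lower-level instructions a translator produces from a one-line high-level expression.

```python
import dis

# A one-line "high-level" program: add two values.
source = "a + b"

# The translator (here, Python's bytecode compiler) turns the text
# into a sequence of low-level instructions for Python's virtual machine.
code = compile(source, "<example>", "eval")
opnames = [instr.opname for instr in dis.Bytecode(code)]

# The listing shows loads of 'a' and 'b', an addition, and a return.
print(opnames)
```

The same division of labor, source text in for machine instructions out, is what the translators of the 1950s performed.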

A wide range of library programs appeared for solving various problems, as well as monitor systems that controlled the translation and execution of programs, from which modern operating systems later grew.

The operating system is the most important part of computer software, designed to automate the planning and organization of the process of processing programs, input-output and data management, resource allocation, preparation and debugging of programs, and other auxiliary service operations.

Machines of the second generation were characterized by software incompatibility, which made it difficult to organize large information systems. Therefore, in the mid-60s there was a transition to the creation of computers that were software-compatible and built on a microelectronic technological base.

The highest achievement of domestic computer technology created by the team of S.A. Lebedev was the development in 1966 of the semiconductor computer BESM-6, with a speed of 1 million operations per second.

Third generation machines are families of machines with a common architecture, i.e., software compatible. As an element base, they use integrated circuits, which are also called microcircuits.

Third generation machines appeared in the 1960s. Since the development of computer technology was continuous, with many people in different countries working on different problems, it is difficult and pointless to try to pin down when a "generation" began and ended. Perhaps the most useful criterion for distinguishing second and third generation machines is one based on the concept of architecture.

Third generation machines have advanced operating systems. They have the capabilities of multiprogramming, i.e., the parallel execution of several programs. Many of the tasks of managing memory, devices and resources began to be taken over by the operating system or directly by the machine itself.

Examples of third-generation machines are the IBM-360, IBM-370, PDP-11, VAX, EC computers (Unified Computer System), SM computers (Small Computers Family), etc.

The speed of machines within the family varies from several tens of thousands to millions of operations per second. The capacity of RAM reaches several hundred thousand words.

The fourth generation comprises the bulk of computer technology developed after the 1970s.

Conceptually, the most important criterion by which these computers can be distinguished from third-generation machines is that fourth-generation machines were designed to make efficient use of modern high-level languages ​​and simplify the programming process for the end user.

In terms of hardware, they are characterized by the widespread use of large integrated circuits (LSI) as the element base and by high-speed random access memories with capacities of tens of megabytes (Fig. 3, b).

From the point of view of the structure, machines of this generation are multiprocessor and multimachine complexes that use a common memory and a common field of external devices. The speed is up to several tens of millions of operations per second, the capacity of RAM is about 1-512 MB.

They are characterized by:

Application of personal computers (PC);

Telecommunication data processing;

Computer networks;

Widespread use of database management systems;

Elements of intelligent behavior of data processing systems and devices.

Fourth-generation computers include the PC "Elektronika MS 0511" of the KUVT UKNTS educational computer set, as well as the modern IBM-compatible computers on which we work.

In accordance with the element base and the level of software development, four real generations of computers are distinguished, a brief description of which is given in table 3.

Table 3

Generations of computers

Comparison parameter | First | Second | Third | Fourth
Period | 1946-1959 | 1960-1969 | 1970-1979 | since 1980
Element base (CU, ALU) | electron (vacuum) tubes | semiconductors (transistors) | integrated circuits | large integrated circuits (LSI)
Main computer type | large | large | small (mini) | micro
Main input devices | console, punched-card and punched-tape input | alphanumeric display and keyboard added | alphanumeric display, keyboard | color graphic display, scanner, keyboard
Main output devices | alphanumeric printer (ATsPU), punched-tape output | alphanumeric printer (ATsPU), punched-tape output | plotter, printer | plotter, printer
External memory | magnetic tapes, drums, punched tapes, punched cards | magnetic disk added | punched tape, magnetic disk | magnetic and optical disks
Key software solutions | universal programming languages, translators | batch operating systems, optimizing translators | interactive operating systems, structured programming languages | user-friendly software, network operating systems
Operating mode | single-program | batch | time-sharing | personal work and network processing
Main use of the computer | scientific and technical calculations | technical and economic calculations | management and economic calculations | telecommunications, information services

Table 4

The main characteristics of domestic computers of the second generation

Parameter | Hrazdan-2 | BESM-4 | M-220 | Ural-11 | Minsk-22 | Ural-16
Number of addresses | 2 | 3 | 3 | 1 | 2 | 1
Data representation | floating point | floating point | floating point | fixed point, character | fixed point, character | floating and fixed point, character
Machine word length (bits) | 36 | 45 | 45 | 24 | 37 | 48
Speed (op/s) | 5,000 | 20,000 | 20,000 | 14,000-15,000 | 5,000 | 100,000
RAM: ferrite core, capacity (words) | 2,048 | 8,192 | 4,096-16,384 | 4,096-16,384 | | 8,192-65,536
External storage: type, capacity (words) | NML, 120,000 | NML, 16 million | NML, 8 million | NML, up to 5 million | NML, 12 million | NMB, 130,000

(NML = magnetic tape drive; NMB = magnetic drum drive.)

In fifth-generation computers, a qualitative transition from data processing to knowledge processing is supposed to take place.

The architecture of fifth-generation computers will contain two main blocks. One of them is the traditional computer, but stripped of direct communication with the user; that communication is handled by an intelligent interface. The problem of decentralizing computation with the help of computer networks will also be addressed.

Briefly, the basic concept of fifth-generation computers can be formulated as follows:

1. Computers based on ultra-complex microprocessors with a parallel-vector structure, simultaneously executing dozens of sequential program instructions.

2. Computers with many hundreds of processors operating in parallel, which make it possible to build data and knowledge processing systems, effective network computer systems.
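The idea of many processors working on one task in parallel can be sketched loosely in modern Python, with worker threads standing in for hardware processors (an illustration of data-parallel processing in general, not of any particular fifth-generation design):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker processes its own slice of the data independently.
    return sum(chunk)

data = list(range(1, 1001))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# Four workers compute partial results in parallel;
# the partial results are then combined into the final answer.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # 500500
```

Splitting the data, processing the pieces concurrently, and merging the results is the basic pattern behind the multiprocessor systems described above.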


Until the 17th century, the activity of society as a whole, and of each person individually, was aimed at mastering matter: learning its properties and making first primitive, and then ever more complex, tools of labor, up to the mechanisms and machines that produce consumer goods.

Then, in the process of the formation of an industrial society, the problem of mastering energy came to the fore - first thermal, then electrical, and finally atomic.

At the end of the 20th century, humanity entered a new stage of development: the building of an information society.

At the end of the 1960s, D. Bell described the transformation of industrial society into an information society.

The most important task of society is to restore communication channels in the new economic and technological conditions to ensure clear interaction between all areas of economic, scientific and social development, both in individual countries and on a global scale.

A modern computer is a universal, multifunctional, electronic automatic device for working with information.

In 1642, when Pascal was 19 years old, he built the first working model of an adding machine.

In 1673, Leibniz invented a mechanical device for calculations (a mechanical calculator).

1804 engineer Joseph-Marie Jacquard built a fully automated machine (Jacquard machine) capable of reproducing the most complex patterns. The operation of the machine was programmed using a deck of punched cards, each of which controlled one shuttle move.

In 1822, C. Babbage built a difference engine (a trial model) capable of calculating and printing large mathematical tables. He subsequently conceived a more powerful analytical engine, which was to solve not only mathematical problems of a particular type, but to perform various computational operations according to instructions given by the operator.

Countess Augusta Ada Lovelace, together with C. Babbage, worked on creating programs for his calculating machines. Her work in this area was published in 1843.

J. Boole is rightfully considered the father of mathematical logic. A section of mathematical logic, Boolean algebra, is named after him. J. Boole invented a kind of algebra - a system of notation and rules applied to all kinds of objects, from numbers and letters to sentences (1854).
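A minimal sketch of Boole's algebra in Python (the function names are ours): the operations behave like arithmetic on the two values 0 and 1, and laws of the algebra, such as De Morgan's, can be verified by checking every combination of inputs.

```python
# Boolean algebra over the values 0 (false) and 1 (true).
def AND(x, y): return x & y   # behaves like multiplication
def OR(x, y):  return x | y   # behaves like capped addition
def NOT(x):    return 1 - x   # complement

# De Morgan's law: NOT(x AND y) equals NOT(x) OR NOT(y).
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))

print("De Morgan's law holds for every input combination")
```

It is exactly this algebra of truth values that later became the mathematical foundation of digital circuit design.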

Chebyshev built models of adding machines, the first of which was designed no later than 1876. For its time, Chebyshev's adding machine was one of the most original calculating devices. In his designs, Chebyshev proposed the principle of continuous transfer of tens and automatic movement of the carriage from digit to digit during multiplication.

In 1904, Alexei Nikolaevich Krylov proposed the design of a machine for integrating ordinary differential equations. In 1912, such a machine was built.

And others.

An electronic computer is a set of technical means designed for the automatic processing of information in the course of solving computational and informational problems.

Computers can be classified according to a number of criteria, in particular:

Physical representation of the processed information;

Generations (stages of creation and element base).


The ENIAC machine Mauchly and Eckert created worked a thousand times faster than the Mark-1. But it turned out that this computer stood idle most of the time, because setting up a calculation method (a program) on it required connecting the wires in the right way, which took hours or even days, while the calculation itself might then take only minutes or even seconds.

To simplify and speed up programming, Mauchly and Eckert began to design a new computer that could store a program in its memory. In 1945, the famous mathematician John von Neumann joined the work and prepared a report on this computer. The report was sent to many scientists and became widely known, because in it von Neumann clearly and simply formulated the general principles of the functioning of computers, that is, of universal computing devices. To this day, the vast majority of computers are made in accordance with the principles John von Neumann set out in his 1945 report. The first computer embodying von Neumann's principles was built in 1949 by the English researcher Maurice Wilks.

The development of the first serial electronic machine, UNIVAC (Universal Automatic Computer), was begun around 1947 by Eckert and Mauchly, who founded the ECKERT-MAUCHLY company in December of that year. The first model (UNIVAC-1) was built for the US Census Bureau and put into operation in the spring of 1951. The UNIVAC-1 synchronous, sequential computer was created on the basis of the ENIAC and EDVAC computers. It ran at a clock frequency of 2.25 MHz and contained about 5,000 vacuum tubes. An internal storage device with a capacity of 1,000 12-digit decimal numbers was built on 100 mercury delay lines.

Shortly after the commissioning of the UNIVAC-1 machine, its developers put forward the idea of ​​automatic programming. It boiled down to the fact that the machine itself could prepare such a sequence of commands that is needed to solve a given problem.

A strong limiting factor for computer designers in the early 1950s was the lack of high-speed memory. In the words of one of the pioneers of computing, J. P. Eckert, "the architecture of a machine is determined by memory." Researchers focused their efforts on the storage properties of ferrite rings strung on wire matrices.

In 1951, J. Forrester published an article on the use of magnetic cores for storing digital information. The Whirlwind-1 machine was the first to use magnetic core memory. It consisted of 2 cubes 32 x 32 x 17 with cores, which provided the storage of 2048 words for 16-bit binary numbers with one parity bit.
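The parity bit mentioned above lets the memory detect a single flipped bit in a stored word. A small sketch of the principle in modern Python (even parity; an illustration of the idea, not of Whirlwind's actual circuitry):

```python
def parity_bit(word):
    # Even parity: the extra bit makes the total count of 1-bits even.
    return bin(word).count("1") % 2

word = 0b1011001110001101            # a 16-bit value
stored_parity = parity_bit(word)     # kept alongside the word in memory

# Simulate a single-bit memory fault and recheck parity on readout.
corrupted = word ^ (1 << 5)          # flip bit 5
assert parity_bit(word) == stored_parity
assert parity_bit(corrupted) != stored_parity
print("single-bit error detected")
```

Any single-bit flip changes the count of 1-bits by one, so the recomputed parity no longer matches the stored bit and the error is caught.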

Soon IBM joined the development of electronic computers. In 1952, it released its first industrial electronic computer, the IBM 701, a synchronous parallel machine containing 4,000 vacuum tubes and 12,000 germanium diodes. An improved version, the IBM 704, was faster, used index registers, and represented data in floating-point form.

IBM 704
After the IBM 704 computer, the IBM 709 machine was released, which, in architectural terms, approached the machines of the second and third generations. In this machine, indirect addressing was first used and I / O channels appeared for the first time.

In 1956, IBM developed floating magnetic heads on an air cushion. Their invention made it possible to create a new type of memory: disk storage devices, whose importance was fully appreciated over the following decades of computer development. The first disk memories appeared in the IBM 305 and RAMAC machines. The latter had a pack of 50 magnetically coated metal disks rotating at 12,000 rpm. Each disk surface held 100 data tracks of 10,000 characters each.

Following the first serial computer UNIVAC-1, Remington-Rand in 1952 released the UNIVAC-1103, which worked 50 times faster. The UNIVAC-1103 was also the first computer to use software interrupts.

Employees of Remington-Rand used an algebraic form of writing algorithms called "Short Code" (the first interpreter, created in 1949 by John Mauchly). Note should also be made of Grace Hopper, a US Navy officer and head of a programming team, at that time a captain (and later the only female admiral in the Navy), who developed the first compiler program. Incidentally, the term "compiler" was first introduced by G. Hopper in 1951. This compiling program translated into machine language an entire program written in an algebraic form convenient for processing. G. Hopper is also credited with the term "bug" as applied to computers. One day an insect (in English, a bug) flew into the laboratory through an open window and, landing on the contacts, short-circuited them, causing a serious malfunction in the machine. The burnt insect was pasted into the administrative log where various malfunctions were recorded. Thus the first computer bug was documented.

IBM took the first steps in the field of programming automation, creating the "Quick Coding System" for the IBM 701 machine in 1953. In the USSR, A. A. Lyapunov proposed one of the first programming languages. In 1957, a group led by D. Backus completed work on FORTRAN, the first high-level programming language to become popular. The language, first implemented on the IBM 704, contributed to the expansion of the scope of computers.

Alexey Andreevich Lyapunov
In the UK in July 1951, at a conference at the University of Manchester, M. Wilks presented the report "The Best Method for Designing an Automatic Machine", which became a pioneering work on the basics of microprogramming. The method of designing control devices proposed by him has found wide application.

M. Wilks implemented his idea of ​​microprogramming in 1957 when creating the EDSAC-2 machine. M. Wilks, together with D. Wheeler and S. Gill, in 1951 wrote the first programming textbook "Programming for electronic calculating machines."

In 1956, the Ferranti company released the Pegasus computer, which for the first time embodied the concept of general purpose registers (RON). With the advent of RON, the distinction between index registers and accumulators was eliminated, and the programmer had at his disposal not one, but several accumulator registers.

The advent of personal computers

At first, microprocessors were used in various specialized devices, such as calculators. But in 1974, several companies announced the creation of a personal computer based on the Intel-8008 microprocessor, that is, a device performing the same functions as a large computer but designed for a single user. At the beginning of 1975, the first commercially distributed personal computer, the Altair-8800, based on the Intel-8080 microprocessor, appeared. It sold for about $500. Although its capabilities were very limited (the RAM was only 256 bytes, and there was no keyboard or screen), its appearance was greeted with great enthusiasm: several thousand sets of the machine were sold in the first months. Buyers supplied this computer with additional devices: a monitor for displaying information, a keyboard, memory expansion units, etc. Soon these devices began to be produced by other companies. At the end of 1975, Paul Allen and Bill Gates (the future founders of Microsoft) created a Basic-language interpreter for the Altair, which let users interact with the computer easily and write programs for it. This, too, contributed to the growing popularity of personal computers.

The success of the Altair-8800 forced many firms to take up the production of personal computers. Personal computers began to be sold complete with keyboard and monitor, and demand ran to tens, and then hundreds, of thousands of units a year. Several magazines devoted to personal computers appeared. Numerous useful programs of practical value contributed greatly to the growth in sales. Commercially distributed programs also appeared, such as the word-processing program WordStar and the spreadsheet VisiCalc (1978 and 1979, respectively). These and many other programs made buying a personal computer very profitable for business: with their help it became possible to perform accounting calculations, prepare documents, and so on. Using large computers for these purposes was too expensive.

In the late 1970s, the spread of personal computers even led to some decrease in demand for large computers and minicomputers. This became a matter of great concern to IBM, the leading producer of large computers, and in 1979 IBM decided to try its hand at the personal computer market. However, the company's management underestimated the future importance of this market and viewed the creation of a personal computer as just a small experiment, something like one of dozens of projects under way in the company to create new equipment. So as not to spend too much money on this experiment, management gave the unit responsible for the project unprecedented freedom within the company. In particular, it was allowed not to design the personal computer from scratch but to use blocks made by other firms. And the unit made full use of this opportunity.

The then-newest 16-bit Intel-8088 microprocessor was chosen as the computer's main microprocessor. Its use made it possible to significantly increase the potential capabilities of a computer, since the new microprocessor made it possible to work with 1 megabyte of memory, and all computers then available were limited to 64 kilobytes.
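The jump from 64 KB to 1 MB follows directly from address width: the Intel-8088 has a 20-bit address bus, while earlier 8-bit machines addressed memory with 16 bits (the bus widths are background facts, not stated in the text). A machine with n address lines can distinguish 2^n byte addresses:

```python
def addressable_bytes(address_bits):
    # n address lines distinguish 2**n distinct byte addresses.
    return 2 ** address_bits

KB = 1024
print(addressable_bytes(16) // KB)        # 64 KB: the old 8-bit limit
print(addressable_bytes(20) // KB // KB)  # 1 MB: the Intel-8088 limit
```

Four extra address bits thus multiply the addressable memory sixteenfold.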

In August 1981, a new computer called the IBM PC was officially presented to the public, and soon after that it gained great popularity among users. A couple of years later, the IBM PC took the lead in the market, displacing 8-bit computer models.

IBM PC
The secret of the IBM PC's popularity is that IBM did not make its computer a single one-piece device and did not protect its design with patents. Instead, it assembled the computer from independently manufactured parts and did not keep the specifications of those parts, or the way they were connected, secret; the design principles of the IBM PC were available to everyone. This approach, called the open-architecture principle, made the IBM PC a tremendous success, although it deprived IBM of the sole benefit of that success. Here is how the open architecture of the IBM PC influenced the development of the personal computer.

The prospects and popularity of the IBM PC made the manufacture of components and peripheral devices for it very attractive. Competition among manufacturers led to cheaper components and devices. Very soon, many firms were no longer content with the role of component makers and began assembling IBM PC-compatible computers themselves. Since these firms did not have to bear IBM's huge costs for research and for maintaining the structure of a huge company, they could sell their computers much cheaper (sometimes 2-3 times) than comparable IBM machines.

IBM PC-compatible computers were initially referred to contemptuously as "clones," but the label soon lost its sting, since many makers of IBM PC-compatible computers began implementing technological advances faster than IBM itself. Users gained the ability to upgrade their computers independently and to equip them with additional devices from hundreds of different manufacturers.

Personal computers of the future

The basis of computers of the future will not be silicon transistors, where information is transmitted by electrons, but optical systems. Photons will become the carrier of information, as they are lighter and faster than electrons. As a result, the computer will become cheaper and more compact. But most importantly, optoelectronic computing is much faster than what is used today, so the computer will be much more productive.

The PC will be small and have the power of today's supercomputers. The PC will become a repository of information covering all aspects of our daily lives, it will not be tied to electrical networks. This PC will be protected from thieves thanks to a biometric scanner that will recognize its owner by fingerprint.

The main way to communicate with a computer will be voice. The desktop computer will turn into a "monoblock", or rather, into a giant computer screen - an interactive photonic display. The keyboard will not be needed, since all actions can be performed with the touch of a finger. But for those who prefer the keyboard, a virtual keyboard can be created on the screen at any time and deleted when it is not needed.

The computer will become the operating system of the house, and the house will begin to respond to the needs of the owner, will know his preferences (make coffee at 7 o’clock, play your favorite music, record the right TV show, adjust temperature and humidity, etc.)

Screen size will no longer matter in the computers of the future: it can be as big as your desktop, or small. Larger computer screens will be based on photon-excited liquid crystals, with much lower power consumption than today's LCD monitors. Colors will be vivid and images accurate (plasma displays are possible). In fact, today's concept of "resolution" will largely lose its meaning.

The first device designed to facilitate counting was the abacus. With its beads one could perform addition and subtraction and simple multiplication.

1642 - French mathematician Blaise Pascal designed the first mechanical calculating machine, the "Pascaline", which could perform mechanical addition of numbers.

1673 - Gottfried Wilhelm Leibniz designed an adding machine that could perform all four arithmetic operations mechanically.

First half of the 19th century - English mathematician Charles Babbage tried to build a universal computing device, that is, a computer. Babbage called it the Analytical Engine. He determined that a computer should contain memory and be controlled by a program. According to Babbage, a computer is a mechanical device, the programs for which are set by means of punch cards - cards made of thick paper with information applied using holes (they were already widely used in looms at that time).

1941 - German engineer Konrad Zuse builds a small computer based on several electromechanical relays.

1943 - in the USA, at one of the IBM enterprises, Howard Aiken created a computer called the "Mark-1". It allowed calculations to be carried out hundreds of times faster than by hand (with an adding machine) and was used for military calculations. It used a combination of electrical signals and mechanical actuators. The "Mark-1" measured about 15 x 2.5 m and contained 750,000 parts. The machine could multiply two 32-bit numbers in 4 seconds.

1943 - in the USA, a group of specialists led by John Mauchly and Presper Eckert began to design the ENIAC computer, based on vacuum tubes.

1945 - mathematician John von Neumann was involved in work on ENIAC, who prepared a report on this computer. In his report, von Neumann formulated the general principles of the functioning of computers, that is, universal computing devices. Until now, the vast majority of computers have been made in accordance with the principles that John von Neumann outlined.

1947 - Eckert and Mauchly began development of the first serial electronic machine, UNIVAC (Universal Automatic Computer). The first model (UNIVAC-1) was built for the US Census Bureau and put into operation in the spring of 1951. The UNIVAC-1 synchronous, sequential computer was created on the basis of the ENIAC and EDVAC computers. It ran at a clock frequency of 2.25 MHz and contained about 5,000 vacuum tubes. An internal storage device with a capacity of 1,000 12-digit decimal numbers was built on 100 mercury delay lines.

1949 - English researcher Maurice Wilks built the first computer embodying von Neumann's principles.

1951 - J. Forrester published an article on the use of magnetic cores for storing digital information. The Whirlwind-1 machine was the first to use magnetic-core memory, consisting of two 32 x 32 x 17 core arrays providing storage of 2,048 16-bit binary words with one parity bit.

1952 - IBM released its first industrial electronic computer IBM 701, which was a synchronous parallel computer containing 4,000 vacuum tubes and 12,000 diodes. An improved version of the IBM 704 machine was fast, it used index registers, and data was represented in floating point form.

After the IBM 704 computer, the IBM 709 machine was released, which, in architectural terms, approached the machines of the second and third generations. In this machine, indirect addressing was first used and input-output channels appeared for the first time.

1952 - Remington Rand released the UNIVAC-1103 computer, the first to use software interrupts. Remington Rand employees used an algebraic form of writing algorithms called "Short Code" (the first interpreter, created in 1949 by John Mauchly).

1956 - IBM developed floating magnetic heads on an air cushion. Their invention made it possible to create a new type of memory: disk storage devices, whose importance was fully appreciated over the following decades of computer development. The first disk memories appeared in the IBM 305 and RAMAC machines. The latter had a pack of 50 magnetically coated metal disks rotating at 12,000 rpm. Each disk surface held 100 data tracks of 10,000 characters each.

1956 - Ferranti released the Pegasus computer, which for the first time embodied the concept of general purpose registers (RON). With the advent of RON, the distinction between index registers and accumulators was eliminated, and the programmer had at his disposal not one, but several accumulator registers.

1957 - a group led by D. Backus completed work on the first high-level programming language, called FORTRAN. The language, implemented for the first time on the IBM 704 computer, contributed to the expansion of the scope of computers.

1960s - the 2nd generation of computers; logical elements are implemented on the basis of semiconductor devices (transistors), and algorithmic programming languages such as Algol, Pascal and others are developed.

1970s - the 3rd generation of computers: integrated circuits containing thousands of transistors on a single semiconductor wafer. Operating systems and structured programming languages began to be created.

1974 - several companies announced the creation of a personal computer based on the Intel-8008 microprocessor - a device that performs the same functions as a large computer, but is designed for one user.

1975 - the first commercially distributed personal computer Altair-8800 appeared based on the Intel-8080 microprocessor. This computer had only 256 bytes of RAM and no keyboard or screen.

Late 1975 - Paul Allen and Bill Gates (the future founders of Microsoft) created a Basic language interpreter for the Altair computer, which allowed users to simply communicate with the computer and easily write programs for it.

August 1981 - IBM introduced the IBM PC. A 16-bit Intel-8088 microprocessor was used as the main microprocessor of the computer, which allowed working with 1 megabyte of memory.

1980s - 4th generation of computers, built on large integrated circuits. Microprocessors are implemented in the form of a single microcircuit, mass production of personal computers.

1990s — 5th generation of computers, ultra-large integrated circuits. Processors contain millions of transistors. Emergence of global computer networks of mass use.

2000s — 6th generation of computers. Computer integration and household appliances, embedded computers, development of network computing.
