Distributed Algorithms and Optimization
1 Background

This section briefly covers, or points to, some of the background definitions and concepts required to appreciate the course. Most of this material is normally encountered in a first course on Algorithms and Data Structures in any undergraduate program in Computer Science.
	
1.1 Computers and Algorithms

A computer is a machine that can be instructed to carry out sequences of arithmetic or logical operations to perform a specific task, such as solving a particular problem. A computer program is a collection of instructions that can be executed by a computer. The method underlying a program to solve a particular problem, specified as a finite sequence of well-defined, unambiguous, computer-implementable instructions, is known as an algorithm. Thus an algorithm can be viewed as a computational procedure that allows us to solve a computational problem specified by a given input/output relationship. For example, consider the sorting problem, specified by the following input/output relationship:
	
Input: A sequence of n numbers (x_1, x_2, . . . , x_n)

Output: A reordering (x_(1), x_(2), . . . , x_(n)) of the input sequence such that x_(1) ≤ x_(2) ≤ · · · ≤ x_(n)
	
For example, given (4, 3, 1, 2) as the input sequence, or instance, of the problem, the sorting algorithm should return as output the sorted sequence (1, 2, 3, 4). An algorithm is said to be correct if it halts with the expected output for every input instance. For example, a sorting algorithm is correct if it produces the sorted sequence for every one of the n! possible orderings of any given sequence of n numbers.
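As a concrete illustration (a minimal sketch in Python, not the course's reference implementation), insertion sort solves this sorting problem, and the input/output relationship above can be checked directly on any instance:

```python
def insertion_sort(xs):
    """Return a sorted reordering of the input sequence."""
    out = list(xs)  # work on a copy; the input instance is left unchanged
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        # Shift larger elements one slot right to make room for key.
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out

def is_correct_output(inp, out):
    """Check the I/O relationship: out is a nondecreasing reordering of inp."""
    same_elements = sorted(inp) == sorted(out)
    nondecreasing = all(a <= b for a, b in zip(out, out[1:]))
    return same_elements and nondecreasing

print(insertion_sort([4, 3, 1, 2]))  # → [1, 2, 3, 4]
```

Correctness here is checked per instance; establishing it for all n! orderings requires an argument (e.g. a loop invariant), not testing alone.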
	
A correct algorithm is said to solve the given computational problem. Two algorithms can solve a problem, but one may solve it with fewer computer resources than the other. An algorithm that uses fewer resources than another is said to be more efficient. To appreciate algorithmic efficiency we need some understanding of the computer resources that are crucial to solving a computational problem.
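To make "fewer resources" concrete, the sketch below (function names of our own choosing) counts element comparisons, one natural resource measure, for two correct algorithms solving the same search problem on a sorted sequence; both return the right answer, but one uses far fewer comparisons:

```python
def linear_search(xs, target):
    """Scan left to right; return (index, number of comparisons used)."""
    comps = 0
    for i, x in enumerate(xs):
        comps += 1
        if x == target:
            return i, comps
    return -1, comps

def binary_search(xs, target):
    """Halve the sorted range each step; return (index, comparisons used)."""
    lo, hi, comps = 0, len(xs) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comps += 1
        if xs[mid] == target:
            return mid, comps
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comps

xs = list(range(1000))            # a sorted instance of size n = 1000
i1, c1 = linear_search(xs, 999)   # n comparisons in the worst case
i2, c2 = binary_search(xs, 999)   # roughly log2(n) comparisons
```

On this instance the linear scan makes 1000 comparisons while binary search makes about 10; both are correct, but binary search is more efficient.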
	
This course assumes you have taken an intermediate to advanced undergraduate course in *Analysis of Algorithms* in a sequential, single-machine setting. A timeless course is the one from the authors of the classic book on the topic[1], or its modern version[2]. It is impossible to learn the contents of this course without such basic undergraduate foundations in the analysis of algorithms.
	
1.2 Computer architecture

Computers one encounters today include desktops, laptops, tablets, and smartphones. All such computers have the following common features. A processor is an electronic device capable of manipulating data as specified by a program. The processor requires memory to store the program and data, and input/output (I/O) devices such as keyboards, monitors, hard drives, etc., along with support logic. These three main components are depicted in Fig. 1.

[1] https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/index.htm
[2] https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-design-and-analysis-of-algorithms-spring-2015/index.htm
	
Nearly all modern processors are implemented on a single integrated circuit and are known as microprocessors or CPUs (Central Processing Units); examples include Intel, ARM, and Sun SPARC processors. The memory of the computer contains both the instructions that the processor will execute (to run the program) and the data it will manipulate. Thus, while instructions are only read from memory, the data to be manipulated is both read from and written to memory by the processor. Most modern computers follow this computer architecture, known as the von Neumann architecture, and are termed control-flow computers as they follow a step-by-step program that governs their operation.
	
Figure 1: Simplified architecture of a computer (processor, memory, and I/O devices)
	
There are four basic types of operations that a processor can perform. The processor can (1) write data to memory or to an I/O device, (2) read data from memory or from an I/O device, (3) read instructions from memory, and (4) perform internal manipulation of data within the processor using the Arithmetic Logic Unit (ALU). The processor has limited internal data storage, known as its registers or caches, and these are used to hold the data or operands that the processor is currently manipulating. The ALU performs the internal arithmetic and logical operations that manipulate the data in the processor. The instructions that are read from memory and executed by the processor control not only the data flow between the registers and the ALU but also the arithmetic operations performed by the ALU.
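This control-flow cycle can be sketched as a toy machine (an illustrative model of our own devising, not any real CPU's instruction set): instructions are fetched from the program one by one, and each directs a read, a write, or an ALU operation on a small set of registers:

```python
def run(program, memory):
    """Execute a toy instruction list over registers and memory (a dict of cells)."""
    regs = {"r0": 0, "r1": 0}      # limited internal storage: the registers
    for op, *args in program:       # fetch each instruction in order (control flow)
        if op == "LOAD":            # (2) read data from memory into a register
            reg, addr = args
            regs[reg] = memory[addr]
        elif op == "ADD":           # (4) ALU: internal manipulation of register data
            dst, src = args
            regs[dst] = regs[dst] + regs[src]
        elif op == "STORE":         # (1) write register data back to memory
            reg, addr = args
            memory[addr] = regs[reg]
    return memory

mem = {0: 4, 1: 3, 2: 0}
prog = [("LOAD", "r0", 0), ("LOAD", "r1", 1),
        ("ADD", "r0", "r1"), ("STORE", "r0", 2)]
run(prog, mem)   # mem[2] becomes 4 + 3 = 7
```

Note how the instructions themselves, read from one part of memory, govern how data moves between memory, the registers, and the ALU.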
	
Clearly, the total number of operations performed by a processor in the course of executing an algorithm is directly proportional to the time taken to produce the desired output from a given input. Thus, the time taken by an algorithm is a clear measure of its efficiency.
	
We need to be aware of other crucial aspects of computer architecture that can determine the efficiency of an algorithm. A bus is a data path in a computer, consisting of various parallel wires to which the processor, memory, and all input/output devices are connected. Thus, a bus determines how much data can move across the devices, and how fast. For example, a processor can access data stored in caches much faster than it can access memory through a bus. In many systems, the processor makes no distinction between reading or writing data between memory and an I/O device such as a hard disk. However, there can be significant differences in performance: reading data from memory can be orders of magnitude faster than reading data from an external hard disk. We will have to be mindful of such differences when designing efficient algorithms in sequential, parallel, and distributed computing models.
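The memory-versus-disk gap can be observed directly with a rough, machine-dependent sketch (the file name and data size below are arbitrary choices): the same data is summed repeatedly, once from a buffer already in memory and once re-read from disk on every pass. On typical machines the in-memory passes are much faster, although the operating system's page cache may narrow the gap:

```python
import os
import tempfile
import time

data = bytes(range(256)) * 4096                     # ~1 MB of data in memory
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:                         # place a copy on disk
    f.write(data)

def time_passes(read, n=20):
    """Time n summation passes, re-fetching the data via read() each pass."""
    start = time.perf_counter()
    total = sum(sum(read()) for _ in range(n))
    return total, time.perf_counter() - start

# Same computation, different levels of the storage hierarchy:
mem_total, mem_secs = time_passes(lambda: data)                       # in memory
disk_total, disk_secs = time_passes(lambda: open(path, "rb").read())  # from disk

assert mem_total == disk_total   # same answer either way; only the time differs
```

The algorithm (summing bytes) is identical in both cases; only where the data lives changes, which is exactly the kind of difference an efficient sequential, parallel, or distributed algorithm must account for.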