Operating Systems Course Homework Answers


Homework 1

Review Question 1.2: Define the two main categories of processor registers.

User-visible registers: using these registers allows the machine-language or assembly-language programmer to minimize the number of references to main memory. For high-level languages, an optimizing compiler decides which variables should be assigned to registers and which to main memory. Some high-level languages, such as C, allow the programmer to suggest to the compiler which variables should be kept in registers.

Control and status registers: used to control the operation of the processor; they are employed mainly by privileged operating-system routines to control the execution of programs.

Problem 1.6: What are the characteristics of the elements of the memory hierarchy?

a) The CPU periodically checks FGI. If FGI = 1, the CPU transfers the character stored in INPR to the AC and sets FGI to 0. When the CPU wants to send data to the typewriter, it checks FGO. If FGO = 0, the CPU waits; if FGO = 1, the CPU transfers the contents of the AC to OUTR and sets FGO to 0. After the character has been printed, the typewriter sets FGO back to 1.

b) The procedure described in a) is very wasteful. The CPU, which is much faster than the typewriter, must repeatedly check FGI and FGO. If interrupts are used, the typewriter can issue an interrupt request to the CPU whenever it is ready to accept or send data. The IEN (interrupt-enable) register can be set by the CPU (under programmer control).

Review Question 2.1: What are three objectives of operating system design?

Convenience: an operating system makes a computer more convenient to use.

Efficiency: an operating system allows the computer system resources to be used in an efficient manner.

Ability to evolve: an operating system should be constructed in a way that permits the effective development, testing, and introduction of new system functions without interfering with service.

Review Question 2.9: Explain the distinction between a monolithic kernel and a microkernel.

A monolithic kernel is a large kernel that provides most of the functionality an operating system should offer, including scheduling, the file system, networking, device drivers, and memory management. All of its functional components have access to all of its internal data structures and routines. Typically it is implemented as a single process, with all elements sharing the same address space.

A microkernel is a small, privileged operating-system core that provides only essential functions such as process scheduling, memory management, and interprocess communication; the remaining operating-system services are provided by other processes that interact with the kernel.

Problem 2.1

Problem 2.3

a) Why simple batch systems evolved into multiprogrammed batch systems

I/O devices are very slow compared with the processor. In a simple batch system only one program executes at a time, so the processor sits idle for most of the time and utilization is low. Multiprogrammed batch processing switches among several programs and keeps several batch jobs in progress at once, which makes batch processing much more efficient.

b) Why multiprogrammed batch systems evolved into time-sharing systems

A time-sharing system gives every process a short slice of processor time, which avoids the situation in a multiprogrammed batch system where one job occupies the processor for a long time while other jobs wait, and thus effectively reduces response time.

Homework 2 Review Questions

1. For the process model of Figure 3.6, briefly define each state.

New: the process has just been created and has not yet been admitted to the pool of executable processes by the operating system.
Ready: the process is prepared to execute as soon as it is given the opportunity.
Running: the process is currently being executed.
Blocked: the process cannot execute until some event occurs, such as the completion of an I/O operation.
Exit: the process has been released from the pool of executable processes.

2. List three general categories of information in a process control block.
1) Process identification. 2) Processor state information. 3) Process control information.

3. What are the steps performed by an OS to create a new process?
1) Assign a unique process identifier to the new process.
2) Allocate space for the process.
3) Initialize the process control block.
4) Set the appropriate linkages.
5) Create or expand other data structures.

4. What is the difference between a mode switch and a process switch?

A mode switch is a switch between kernel mode and user mode; it may occur without changing the state of the process that is currently in the Running state. In a process switch, the currently executing process is interrupted and the operating system assigns another process to the Running state. A process switch requires much more state information to be saved.

Problems

1. Consider a computer with N processors in a multiprocessor configuration.

a. How many processes can be in each of the Ready, Running, and Blocked states at one time?

b. What is the minimum number of processes that can be in each of the Ready, Running, and Blocked states at one time?

a. There is no limit on the number of processes in the Ready and Blocked states other than the amount of memory available to hold their images. At most N processes can be in the Running state, since each processor executes at most one process at a time.

b. The minimum is zero for each state. It is possible that all processes are Running or Ready, with no process Blocked; conversely, it is possible that every process is Blocked waiting for some event, with no process Ready or Running.

2. Figure 3.9b contains seven states. In principle, one could draw a transition between any two states, for a total of 42 different transitions.

a. List all of the possible transitions and give an example of what could cause each transition.

b. List all of the impossible transitions and explain why.

3. In [PINK89], the following states are defined for processes: Execute

(running),Active (ready), Blocked, and Suspend. A process is blocked if it is waiting for permission to use a resource, and it is suspended if it is waiting for an operation to be completed on a resource it has already acquired. In many operating systems, these two states are lumped together as the blocked state, and the suspended state has the definition we have used in this chapter. Compare the relative merits of the two sets of definitions.

Suppose a process has been executing for a while and now needs an additional tape drive so that it can write out a temporary file. Before it can begin writing, it must be granted permission to use a drive. When it makes the request, a tape drive may not be available, and in that case the process is blocked. At some point the operating system assigns a drive to the process, which then becomes active again. When the process next runs, it issues a write to the newly acquired drive and is then suspended, waiting for the current write operation on that tape to complete. Distinguishing these two different reasons for waiting on a device can be very useful to the operating system in organizing its work. However, it does not indicate which processes are swapped in and which are swapped out; that distinction is also required and should be reflected in the process state in some form.

Homework 3

1.Table 3.5 lists typical elements found in a process control block for an unthreaded OS. Of these, which should belong to a thread control block and which should belong to a process control block for a multithreaded system?

This often differs from one system to another, but in general a process is the owner of resources, while each thread has its own execution state. Some conclusions about each item in Table 3.5 follow:

Process identification: the process must be identified, and each thread within the process must also have its own ID.
Processor state information: this information is usually related to the process.
Process control information: scheduling and state information is mainly at the thread level; data structuring can appear at both levels; interprocess and inter-thread communication can both be supported; privileges may exist at both levels; memory management is usually at the process level; and resource information is also usually at the process level.

2.List reasons why a mode switch between threads may be cheaper than a mode switch between processes.

A switch between threads involves less state information than a switch between processes.

3.It was pointed out that two advantages of using multiple threads within a process are that (1) less work is involved in creating a new thread within an existing process than in creating a new process, and (2) communication among threads within the same process is simplified. Is it also the case that a mode switch between two threads within the same process involves less work than a mode switch between two threads in different processes?

Yes, because a switch between threads of the same process involves less state information.

4.In the discussion of ULTs versus KLTs, it was pointed out that a disadvantage of ULTs is that when a ULT executes a system call, not only is that thread blocked, but also all of the threads within the process are blocked. Why is that so?

Because with user-level threads the thread structure of a process is not visible to the operating system, which schedules at the granularity of processes. When one thread blocks in a system call, the kernel blocks the entire process, so every thread belonging to that process is blocked.

5. Can a multithreaded solution using multiple user-level threads achieve better performance on a multiprocessor system than on a single-processor system? Explain.

No. With a pure ULT strategy, a multithreaded application cannot take advantage of multiprocessing, because the kernel assigns only one process to one processor at a time; therefore only a single thread within each process can execute at a time.

Homework 4 Review Questions

1. What are three contexts in which concurrency arises?
Multiple applications, structured applications, operating-system structure.

2. What is the basic requirement for the execution of concurrent processes?
The ability to enforce mutual exclusion.

3. List three degrees of awareness between processes and briefly define each.
Processes unaware of each other: independent processes that are not intended to work together.
Processes indirectly aware of each other: processes that are not necessarily aware of each other by process ID but that share access to some object, such as an I/O buffer.
Processes directly aware of each other: processes that can communicate with each other by process ID and that are designed to work jointly on some activity.

4. List the requirements for mutual exclusion.

1. Mutual exclusion must be enforced: only one process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object.

2. A process that halts in its noncritical section must do so without interfering with other processes.

3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no deadlock or starvation.

4. When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay.
5. No assumptions are made about relative process speeds or the number of processors.
6. A process remains inside its critical section for a finite time only.

5. What operations can be performed on a semaphore?
1) A semaphore may be initialized to a non-negative value.
2) The wait operation decrements the semaphore value. If the value becomes negative, the process executing the wait is blocked.
3) The signal operation increments the semaphore value. If the resulting value is less than or equal to zero, a process blocked by a wait operation, if any, is unblocked.
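The C sketch below illustrates these three operations with a counting semaphore built from a Pthreads mutex and condition variable. It is only an illustration of the semantics described above; this variant keeps the count non-negative instead of letting it go negative to record waiters, and the names sem_sketch, sk_init, sk_wait, and sk_signal are invented for this example.

#include <pthread.h>

/* Illustrative counting semaphore: initialize, wait (decrement), signal (increment). */
typedef struct {
    int count;                          /* initialized to a non-negative value */
    pthread_mutex_t lock;
    pthread_cond_t  positive;
} sem_sketch;

void sk_init(sem_sketch *s, int initial) {
    s->count = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->positive, NULL);
}

void sk_wait(sem_sketch *s) {           /* wait: block while nothing is available, then decrement */
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->positive, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void sk_signal(sem_sketch *s) {         /* signal: increment and unblock one waiting process, if any */
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->positive);
    pthread_mutex_unlock(&s->lock);
}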

Problems

1. Consider a concurrent program with two processes, p and q, defined as follows. A, B, C, D, and E are arbitrary atomic (indivisible) statements. Assume that the main program (not shown) does a parbegin of the two processes.

void p() { A; B; C; }
void q() { D; E; }

Show all the possible interleavings of the execution of the preceding two processes.
ABCDE, ABDCE, ABDEC, ADBCE, ADBEC, ADEBC, DEABC, DAEBC, DABEC, DABCE

2. Is busy waiting always less efficient (in terms of using processor time) than a blocking wait? Explain.

As a general rule, yes, because busy waiting consumes useless instruction cycles. There is, however, a special case: if, when a process reaches the point where it must wait for a condition, the condition happens to be satisfied already, a busy wait yields a result immediately, whereas a blocking wait would consume operating-system resources switching the process out and back in.

3. Now consider a version of the bakery algorithm without the variable choosing. Then we have:

int number[n];
while (true) {
    number[i] = 1 + getmax(number[], n);
    for (int j = 0; j < n; j++) {
        while ((number[j] != 0) && ((number[j], j) < (number[i], i))) { };
    }
    /* critical section */;
    number[i] = 0;
    /* remainder */;
}

Does this version violate mutual exclusion? Explain why or why not.

Suppose we have two processes just beginning; call them p0 and p1. Both reach line 3 at the same time. Now, we'll assume both read number[0] and number[1] before either addition takes place. Let p1 complete the line, assigning 1 to number[1], but let p0 block before the assignment. Then p1 gets through the while loop at line 5 and enters the critical section. While in the critical section, it blocks; p0 unblocks and assigns 1 to number[0] at line 3. It proceeds to the while loop at line 5. When it goes through that loop for j = 1, the first condition on line 5 is true. Further, the second condition on line 5 is false, so p0 enters the critical section. Now p0 and p1 are both in the critical section, violating mutual exclusion. The reason for choosing is to prevent the while loop in line 5 from being entered when process j is setting its number[j]. Note that if the loop is entered and then process j reaches line 3, one of two situations arises. Either number[j] has the value 0 when the first test is executed, in which case process i moves on to the next process, or number[j] has a non-zero value, in which case at some point number[j] will be greater than number[i] (since process i finished executing statement 3 before process j began). Either way, process i will enter the critical section.
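For contrast, here is a sketch of the algorithm with the choosing flag restored, following the discussion above. It is written in the same informal C style as the fragment in the problem: getmax is assumed to return the largest value in the array, n is the number of processes, and on real hardware the shared variables would additionally need volatile/atomic qualifiers.

bool choosing[n];   /* all initially false */
int  number[n];     /* all initially 0 */

void bakery_lock(int i) {
    choosing[i] = true;                    /* announce that number[i] is being chosen */
    number[i] = 1 + getmax(number, n);
    choosing[i] = false;
    for (int j = 0; j < n; j++) {
        while (choosing[j]) { }            /* wait until j has finished choosing its ticket */
        while ((number[j] != 0) &&
               ((number[j] < number[i]) ||
                (number[j] == number[i] && j < i))) { }   /* (number[j], j) < (number[i], i) */
    }
    /* critical section follows the call */
}

void bakery_unlock(int i) {
    number[i] = 0;                         /* give up the ticket */
}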

4. Consider the first instance of the statement bolt = 0 in Figure 5.2b.
a. Achieve the same result using the exchange instruction.
b. Which method is preferable?

a. exchange(keyi, bolt)

b. The statement bolt = 0 is preferable. An atomic statement such as exchange will use more resources.

Homework 5

Review Question 6.1: Give examples of reusable and consumable resources.

Examples of reusable resources are processors, I/O channels, main and secondary memory, devices, and data structures such as files, databases, and semaphores.

Examples of consumable resources are interrupts, signals, messages, and information in I/O buffers.

Review Question 6.2: What are the three conditions that must be present for deadlock to be possible?
1) Mutual exclusion. 2) Hold and wait. 3) No preemption.

Review Question 6.3: What are the four conditions that create deadlock?
1) Mutual exclusion. 2) Hold and wait. 3) No preemption. 4) Circular wait.

Problem 6.6
1. W = (2 1 0 0)
2. Mark P3; W = (2 1 0 0) + (0 1 2 0) = (2 2 2 0)
3. Mark P2; W = (2 2 2 0) + (2 0 0 1) = (4 2 2 1)
4. Mark P1; no deadlock detected.
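The detection algorithm used in this trace can be sketched in C as follows; detect() is an invented name and the Allocation and Request matrices from the problem are not reproduced here. It marks any process whose outstanding request can be satisfied from the working vector W, adds that process's current allocation back into W, and repeats; any process left unmarked at the end is deadlocked.

#include <stdbool.h>
#include <string.h>

#define NP 3   /* processes P1..P3 in the trace above */
#define NR 4   /* resource types */

int detect(const int alloc[NP][NR], const int request[NP][NR],
           const int available[NR], bool marked[NP]) {
    int W[NR];
    memcpy(W, available, sizeof W);

    /* Initially mark processes that hold no resources at all. */
    for (int i = 0; i < NP; i++) {
        marked[i] = true;
        for (int j = 0; j < NR; j++)
            if (alloc[i][j] != 0) { marked[i] = false; break; }
    }

    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < NP; i++) {
            if (marked[i]) continue;
            bool can_satisfy = true;
            for (int j = 0; j < NR; j++)
                if (request[i][j] > W[j]) { can_satisfy = false; break; }
            if (can_satisfy) {
                for (int j = 0; j < NR; j++)
                    W[j] += alloc[i][j];   /* e.g. marking P3 adds (0 1 2 0) to W */
                marked[i] = true;
                progress = true;
            }
        }
    }

    int deadlocked = 0;                    /* unmarked processes are deadlocked */
    for (int i = 0; i < NP; i++)
        if (!marked[i]) deadlocked++;
    return deadlocked;
}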

Problem 6.11: Suppose there are four processes and four types of resources in a system, and the system uses the banker's algorithm to avoid deadlock. The maximum claim matrix is

Claim =
    [  4  4  2  1 ]
    [  4  3  1  1 ]
    [ 13  5  2  7 ]
    [  6  1  1  1 ]

where Claim(i, j) (1 <= i <= 4, 1 <= j <= 4) is the maximum demand of process i for resource j. The total amount of each resource type in the system is given by the vector [16, 5, 2, 8]. The current allocation is given by the matrix

Allocation =
    [  4  0  0  1 ]
    [  1  2  1  0 ]
    [  1  1  0  2 ]
    [  3  1  1  0 ]

where Allocation(i, j) is the number of units of resource j currently allocated to process i.

a) Show that this state is safe.

b) Determine whether a request by process 1 for 1 unit of resource 2 can be granted.

c) Determine whether a request by process 3 for 6 units of resource 1 can be granted. (This part is independent of part b.)

d) Determine whether a request by process 2 for 2 units of resource 4 can be granted. (This part is independent of parts b and c.)

Solution:
a) The available vector is the total [16, 5, 2, 8] minus the sum of the allocation rows, i.e. Available = [7, 1, 0, 5].

Need (Claim - Allocation) =
    [  0  4  2  0 ]
    [  3  1  0  1 ]
    [ 12  4  2  5 ]
    [  3  0  0  1 ]

(1) Process 2 can run first; after it releases its resources, Available = [8, 3, 1, 5].
(2) Process 4 can then run; after it releases its resources, Available = [11, 4, 2, 5].
(3) Process 1 can then run; after it releases its resources, Available = [15, 4, 2, 6].
(4) Process 3 can then run and release its resources.
So a safe sequence (P2, P4, P1, P3) exists, and the state is safe.

b) After process 1 is granted 1 unit of resource 2, Available becomes [7, 0, 0, 5].

Allocation =
    [  4  1  0  1 ]
    [  1  2  1  0 ]
    [  1  1  0  2 ]
    [  3  1  1  0 ]

Need (Claim - Allocation) =
    [  0  3  2  0 ]
    [  3  1  0  1 ]
    [ 12  4  2  5 ]
    [  3  0  0  1 ]

1) Process 4 can run; after it releases its resources, Available = [10, 1, 1, 5].
2) Process 2 can then run; after it releases, Available = [11, 3, 2, 5].
3) Process 1 can then run; after it releases, Available = [15, 4, 2, 6].
4) Process 3 can then complete.

The resulting state is safe, so the request by process 1 for 1 unit of resource 2 can be granted.

c) If process 3 were granted 6 units of resource 1, Available would become [1, 1, 0, 5] and process 3's allocation row would become [7, 1, 0, 2], leaving it a remaining need of [6, 4, 2, 5]; the needs of the other processes are unchanged from part a). With Available = [1, 1, 0, 5], no process's remaining need can be satisfied, so the resulting state would be unsafe and the request is not granted.

d) Process 2 requests 2 units of resource 4, but process 2's maximum claim for resource 4 is only 1. The request exceeds the declared maximum claim, so it cannot be granted.
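The safety test applied in parts a) through d) can be sketched in C as follows; is_safe is an invented name, and the arguments are the Claim and Allocation matrices and Available vector given above. A request is granted only if, after provisionally making the allocation, the resulting state still passes this test; with the part a) data the sketch finds the safe sequence P2, P4, P1, P3.

#include <stdbool.h>
#include <string.h>

#define NP 4
#define NR 4

bool is_safe(const int claim[NP][NR], const int alloc[NP][NR], const int avail[NR]) {
    int  work[NR];
    bool finished[NP] = { false };
    memcpy(work, avail, sizeof work);

    for (int done = 0; done < NP; ) {
        int pick = -1;
        for (int i = 0; i < NP && pick < 0; i++) {
            if (finished[i]) continue;
            bool fits = true;
            for (int j = 0; j < NR; j++)
                if (claim[i][j] - alloc[i][j] > work[j]) { fits = false; break; }
            if (fits) pick = i;            /* process i's remaining need fits in work */
        }
        if (pick < 0) return false;        /* no process can finish: state is unsafe */
        for (int j = 0; j < NR; j++)
            work[j] += alloc[pick][j];     /* pick runs to completion and releases */
        finished[pick] = true;
        done++;
    }
    return true;                           /* a complete safe sequence exists */
}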

Problem 6.14
a) Three processes share four resource units that can be reserved and released only one at a time. Each process needs a maximum of two units. Show that deadlock cannot occur.
b) N processes share M resource units that can be reserved and released only one at a time. The maximum need of each process does not exceed M, and the sum of all maximum needs is less than M + N. Show that deadlock cannot occur.

Solution:
a) Each process needs at most 2 units. In the worst case, each process has obtained one unit, leaving one unit still available. Whichever process receives that remaining unit can complete; when it finishes and releases its resources, the remaining processes can complete as well, so the system cannot deadlock.

b) Suppose each process needs at most X units. In the worst case every process holds X - 1 units and is requesting one more, so the number of units still available is M - N(X - 1). As long as at least one unit remains, i.e. M - N(X - 1) >= 1, some process can obtain all the resources it needs, run to completion, and release them, after which the remaining processes can be satisfied in turn, so no deadlock occurs. The condition M - N(X - 1) >= 1 is equivalent to NX <= M + N - 1, that is, the sum of the maximum needs is less than M + N. Therefore, whenever the sum of all processes' maximum needs is less than M + N, the system cannot deadlock.

Problem 6.18: Suppose there are two types of philosophers. One type always picks up the left fork first (a lefty), and the other type always picks up the right fork first (a righty). The behavior of a lefty is as defined in Figure 6.12. The behavior of a righty is as follows:

begin
  repeat
    think;
    wait(fork[(i+1) mod 5]);
    wait(fork[i]);
    eat;
    signal(fork[i]);
    signal(fork[(i+1) mod 5]);
  forever
end;

Prove the following:

a) Any seating arrangement of lefties and righties with at least one of each avoids deadlock.
b) Any seating arrangement of lefties and righties with at least one of each prevents starvation.

Proof:

a) Suppose, to the contrary, that deadlock occurs: there are D philosophers, each of whom holds one fork and is waiting for the other fork, which is held by a neighbor. Without loss of generality, assume Pj is a lefty. Pj clutches his left fork and does not have his right fork, so his right-hand neighbor Pk has not finished eating and must therefore also be a lefty. Continuing this reasoning around the table, all D philosophers must be lefties. This contradicts the assumption that there is at least one lefty and at least one righty, so the assumption fails and no deadlock exists.

b) Suppose a lefty Pj starves, that is, some philosophers keep eating while Pj never eats. First suppose Pj never obtains a fork. Then Pj's left-hand neighbor Pi must hold that fork continuously and never finish eating, so Pi is a righty who clutches his right fork but never obtains his left fork to complete a meal; that is, Pi also starves. Pi's left-hand neighbor must then likewise be a righty who continuously holds his right fork. Continuing this reasoning to the left, all philosophers would be starving righties, contradicting the fact that Pj is a lefty. Therefore Pj must hold his left fork and be waiting forever for his right fork, which means Pj's right-hand neighbor Pk keeps holding his left fork and never completes a meal; that is, Pk is a starving lefty (if Pk did not keep holding his left fork, Pj could eat). Continuing this reasoning to the right, all philosophers would be starving lefties, which again contradicts the condition. Hence the assumption fails, and no philosopher starves.
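As an illustration, the righty's loop can be written with POSIX semaphores roughly as follows, with one binary semaphore per fork, each initialized to 1; a lefty simply swaps the order of the two sem_wait calls. The names right_hander, think(), and eat() are placeholders for this sketch.

#include <pthread.h>
#include <semaphore.h>

#define N 5
sem_t fork_sem[N];                          /* sem_init(&fork_sem[i], 0, 1) for each fork */

void *right_hander(void *arg) {
    int i = *(int *)arg;                    /* philosopher index 0..N-1 */
    for (;;) {
        /* think(); */
        sem_wait(&fork_sem[(i + 1) % N]);   /* pick up the right fork first */
        sem_wait(&fork_sem[i]);             /* then the left fork */
        /* eat(); */
        sem_post(&fork_sem[i]);
        sem_post(&fork_sem[(i + 1) % N]);
    }
    return NULL;
}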

Homework 6 Review Questions

7.1 What requirements is memory management intended to satisfy?

Relocation, protection, sharing, logical organization, physical organization.

7.2 Why is the capability to relocate processes desirable? (textbook p. 210)

Typically, the programmer does not know in advance which other programs will be resident in main memory while his or her program is executing. In addition, we would like to be able to swap active processes into and out of main memory, so as to maximize processor utilization by providing a large pool of ready processes. In both cases, the specific location of a process in main memory is unpredictable.

7.6 What is the difference between internal and external fragmentation?

Internal fragmentation: the block of data loaded into a partition is smaller than the partition, so space inside the partition is wasted. It refers to memory that has been allocated but cannot be used.

External fragmentation: a phenomenon associated with dynamic partitioning, in which the memory external to all partitions becomes increasingly fragmented. It refers to free memory areas that have not been allocated but are too small to satisfy the requests of new processes.

7.7 What are the distinctions among logical, relative, and physical addresses?

A logical address is a reference to a memory location independent of the current physical assignment of data to memory; a translation to a physical address must be made before the memory access can be performed.

A relative address is a particular example of a logical address, expressed as a location relative to some known point, usually the beginning of the program.

A physical address (absolute address) is the actual location of the data in main memory.

Problems

7.2 Consider a fixed partitioning scheme with equal-size partitions of 2^16 bytes and a total main memory size of 2^24 bytes. A process table is maintained that includes a pointer to a partition for each resident process. How many bits are required for the pointer?

Answer: The number of partitions equals the number of bytes of main memory divided by the number of bytes in each partition: 2^24 / 2^16 = 2^8. Eight bits are needed to identify one of the 2^8 partitions.

7.5 Another placement algorithm for dynamic partitioning is referred to as worst-fit. In this case, the largest free block of memory is used for bringing in a process. Discuss the pros and cons of this method compared to first-, next-, and best-fit. What

is the average length of the search for worst-fit?

Answer: A criticism of the best fit algorithm is that the space remaining after allocating a block of the required size is so small that in general it is of no real use. The worst fit algorithm maximizes the chance that the free space left after a placement will be large enough to satisfy another request, thus minimizing the frequency of compaction. The disadvantage of this approach is that the largest blocks are allocated first; therefore a request for a large area is more likely to fail.

7.13: a = p*z + w, where p = floor(a/z) (the integer part of a/z) and w = a mod z.

7.14

a) Segment 0 begins at location 660; adding the offset gives the physical address 660 + 198 = 858.

b) Segment 2 begins at location 222 (and is 198 bytes long), so the physical address is 222 + 156 = 378.

c) Segment 1 is 422 bytes long, so this reference falls outside the segment and triggers a segmentation fault.
d) 996 + 444 = 1440
e) 660 + 222 = 882

Homework 7

Answers:

8.1. Simple paging: all the pages of a process must be in main memory for the process to run, unless overlays are used.

Virtual memory paging: not all pages of a process need be in main memory frames for the process to run; pages may be read in as needed.

8.3. Algorithms can be designed to exploit the principle of locality to avoid thrashing. In general, the principle of locality allows the algorithm to predict which resident pages are least likely to be referenced in the near future and are therefore good candidates for being swapped out.

8.7. Resident set management deals with the following two issues: (1) how many page frames are to be allocated to each active process; and (2) whether the set of pages to be considered for replacement should be limited to those of the process that caused the page fault or encompass all the page frames in main memory.

Page replacement policy deals with the following issue: among the set of pages considered, which particular page should be selected for replacement.

8.11. The resident set of a process is the current number of pages of that process in main memory.

The working set of a process is the number of pages of that process that have been referenced recently.

Answer:

a. A virtual address generally consists of a page number and an offset. The page number is used to index the page table and obtain the corresponding frame number, which is then combined with the offset of the virtual address to form the physical address.

b. i) 1052 / 1024 = 1 and 1052 mod 1024 = 28, so the page number is 1 and the offset is 28. The page table shows frame 7, so the physical address is 7 * 1024 + 28 = 7196.

ii) 2221 / 1024 = 2 and 2221 mod 1024 = 173, so the page number is 2 and the offset is 173. The page table shows that this page is not in main memory, so a page fault occurs.

iii) 5499 / 1024 = 5 and 5499 mod 1024 = 379, so the page number is 5 and the offset is 379. The page table shows frame 0, so the physical address is 0 * 1024 + 379 = 379.
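The translation in part b can be sketched in C as below. The frame numbers for pages 1, 2, and 5 come from the worked answer above; the other page_table entries and the helper name translate() are invented for this illustration.

#include <stdio.h>

#define PAGE_SIZE   1024
#define NOT_PRESENT (-1)

/* Frame number for each page; -1 means the page is not in main memory.
 * Only entries 1, 2, and 5 are taken from the answer above; the rest are arbitrary. */
static const int page_table[] = { 3, 7, NOT_PRESENT, 2, 4, 0 };

int translate(unsigned vaddr) {
    unsigned page   = vaddr / PAGE_SIZE;     /* e.g. 1052 / 1024 = 1  */
    unsigned offset = vaddr % PAGE_SIZE;     /* e.g. 1052 % 1024 = 28 */
    if (page_table[page] == NOT_PRESENT)
        return -1;                           /* page fault */
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void) {
    printf("%d\n", translate(1052));    /* 7 * 1024 + 28 = 7196 */
    printf("%d\n", translate(2221));    /* -1: page fault        */
    printf("%d\n", translate(5499));    /* 0 * 1024 + 379 = 379  */
    return 0;
}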

8.2

a. Virtual memory can hold (2^32 bytes of main memory)/(2^10 bytes/page) = 2^22 pages, so 22 bits are needed to specify a page in virtual memory. Each page table contains (2^10 bytes per page table)/(4 bytes/entry) = 2^8 entries. Thus, each page table can handle 8 of the required 22 bits. Therefore, 3 levels of page tables are needed.

b. Tables at two of the levels have 2^8 entries; tables at one level have 2^6 entries (8 + 8 + 6 = 22).

c. Less space is consumed if the top level has 2^6 entries. In that case, the second level has 2^6 pages with 2^8 entries each, and the bottom level has 2^14 pages with 2^8 entries each, for a total of 1 + 2^6 + 2^14 = 16,449 pages. If the middle level has 2^6 entries, then the number of pages is 1 + 2^8 + 2^14 = 16,641 pages. If the bottom level has 2^6 entries, then the number of pages is 1 + 2^8 + 2^16 = 65,793 pages.

8.4 a) Under FIFO, the page in frame 3 is replaced, because it was loaded at time 20 and is therefore the earliest-loaded page.

b) Under LRU, the page in frame 1 is replaced, because its last access time is 160, making it the least recently used.
c) Under the clock algorithm, the R (use) bit of frame 3 is cleared because it was loaded earliest, and the R bit of frame 2 is cleared because it was loaded next earliest; the page in frame 0 is replaced because its R bit is 0.
d) Under the optimal algorithm, the page in frame 3 is replaced, because it will be referenced furthest in the future.
e)

Answer: a. P b. N

Answer:
a. Page number: 5 bits; offset: 11 bits.

b. 32 entries, each entry 9 bits wide.

c. If the total number of entries stays at 32 and the page size does not change, then each entry becomes 8 bits wide.

Discussion question:

Consider four frames and the following page access pattern: A, B, A, C, D, E, F, A, B, C, F, A, E, F

a) Assume we employ the FIFO replacement policy. Show the content of the frames at every access. Give the number of page faults.

Reference: A  B  A  C  D  E  F  A  B  C  F  A  E  F
Frame 1:   A  A  A  A  A  E  E  E  E  C  C  C  C  C
Frame 2:      B  B  B  B  B  F  F  F  F  F  F  E  E
Frame 3:            C  C  C  C  A  A  A  A  A  A  F
Frame 4:               D  D  D  D  B  B  B  B  B  B
Fault:     *  *     *  *  *  *  *  *  *        *  *

Page faults = 11 of the 14 references (7 if the four compulsory misses that fill the initially empty frames are not counted).

b) Assume we employ the LRU replacement policy. Show the content of the frames at every access. Give the number of page faults.

Reference: A  B  A  C  D  E  F  A  B  C  F  A  E  F
Frame 1:   A  A  A  A  A  A  F  F  F  F  F  F  F  F
Frame 2:      B  B  B  B  E  E  E  E  C  C  C  C  C
Frame 3:            C  C  C  C  A  A  A  A  A  A  A
Frame 4:               D  D  D  D  B  B  B  B  E  E
Fault:     *  *     *  *  *  *  *  *  *        *

Page faults = 10 of the 14 references (6 if the four compulsory misses are not counted).
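A small C sketch that reproduces these two traces is given below; simulate() is an invented helper. With use_lru = 0 it evicts the page loaded earliest (FIFO); with use_lru = 1 it evicts the page referenced least recently (LRU). Both counts include the compulsory misses.

#include <stdio.h>

#define FRAMES 4

static int simulate(const char *refs, int use_lru) {
    char page[FRAMES];
    long loaded[FRAMES], used[FRAMES];   /* load time / last reference time per frame */
    int  filled = 0, faults = 0;
    long t = 0;

    for (const char *r = refs; *r; r++, t++) {
        int hit = -1;
        for (int f = 0; f < filled; f++)
            if (page[f] == *r) { hit = f; break; }
        if (hit >= 0) { used[hit] = t; continue; }          /* hit: only update recency */

        faults++;
        int victim;
        if (filled < FRAMES) {
            victim = filled++;                              /* still an empty frame */
        } else {
            victim = 0;
            for (int f = 1; f < FRAMES; f++) {
                long key_f = use_lru ? used[f] : loaded[f];
                long key_v = use_lru ? used[victim] : loaded[victim];
                if (key_f < key_v) victim = f;              /* evict the oldest key */
            }
        }
        page[victim] = *r;
        loaded[victim] = used[victim] = t;
    }
    return faults;
}

int main(void) {
    const char *refs = "ABACDEFABCFAEF";
    printf("FIFO faults: %d\n", simulate(refs, 0));
    printf("LRU  faults: %d\n", simulate(refs, 1));
    return 0;
}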

c) What is the relation between page fault rate and the locality principle?
If the locality principle is exploited, the page fault rate is reduced.

d) To reduce the page fault rate, what do you suggest?
1. Number of page frames: increase the number of memory frames allocated to the job.
2. Page size: the larger the pages, the lower the page fault rate.
3. Program locality: better program locality reduces the number of page faults.

Homework 8

9.1 Briefly describe the three types of processor scheduling

Long-term scheduling, medium-term scheduling, and short-term scheduling.

Long-term scheduling: the decision to add a process to the pool of processes to be executed.

Medium-term scheduling: the decision to add a process to the set of processes that are partially or fully in main memory.
Short-term scheduling: the decision as to which available process will be executed by the processor.

9.3 What is the difference between turnaround time and response time?

Turnaround time is the total time that a request spends in the system, including both waiting time and service time.

Response time: for an interactive process, this is the time from the submission of a request until the response begins to arrive; often the process can begin producing some output to the user while it continues processing the request.

Turnaround time is the amount of time elapsed from the time of submission to the time of completion whereas Response time is the Average time elapsed from submission till the first response is produced.

9.5 What is the difference between preemptive and nonpreemptive scheduling?

Nonpreemptive: in this case, once a process is in the Running state, it continues to execute until it terminates or blocks itself to wait for I/O or to request some operating-system service.

Preemptive: the currently running process may be interrupted by the operating system and moved to the Ready state. The decision to preempt may be made when a new process arrives, when an interrupt occurs that places a blocked process in the Ready state, or periodically, based on a clock interrupt.

Preemptive scheduling allows a process to be interrupted in the midst of its execution, taking the CPU away and allocating it to another process. Nonpreemptive scheduling ensures that a process relinquishes control of the CPU only when it finishes with its current CPU burst.

9.1 Consider the following set of processes:

Process Name    Arrival Time    Processing Time
A               0               3
B               1               5
C               3               2
D               9               5
E               12              5

Perform the same analysis as depicted in Table 9.5 and Figure 9.5 for this set. In the timing chart, each cell represents one time unit and the number in the box indicates the currently running process.

9.3 Prove that, among non preemptive scheduling algorithms, SPN provides the minimum average waiting time for a batch of jobs that arrive at the same time. Assume that the scheduler must always execute a task if one is available.

We can prove this for a batch of n jobs that arrive at the same time, ignoring jobs that arrive later. Suppose the jobs are served in the order t1 <= t2 <= ... <= tn (shortest first). The first job delays all n users until it completes, the second delays the remaining n - 1 users, and so on, so the average response time is [n*t1 + (n-1)*t2 + ... + tn]/n. If we alter this schedule by exchanging two jobs j and k with j < k (so tj <= tk), running the longer job earlier, the average response time changes by (k - j)(tk - tj)/n >= 0. In other words, if the SPN order is not used, the average response time can only increase.

9.16 Five batch jobs, A through E, arrive at a computer center at essentially the same time. They have estimated running times of 15, 9, 3, 6, and 12 minutes, respectively. Their (externally defined) priorities are 6, 3, 7, 9, and 4, respectively, with a lower value corresponding to a higher priority. For each of the following scheduling algorithms, determine the turnaround time for each process and the average turnaround time for all jobs. Ignore process switching overhead. Explain how you arrived at your answers. In the last three cases, assume that only one job at a time runs until it finishes and that all jobs are completely processor bound.
a. round robin with a time quantum of 1 minute
b. priority scheduling
c. FCFS (run in order 15, 9, 3, 6, and 12)
d. shortest job first
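A sketch for computing the turnaround times asked for in parts c and d (FCFS and shortest-job-first, with every job arriving at time 0) is shown below; extending it to round robin and priority scheduling is straightforward but not shown. Only the job running times come from the problem; average_turnaround is an invented helper.

#include <stdio.h>

/* Jobs A..E: estimated running times in minutes, all arriving at time 0. */
static const int runtime[5] = { 15, 9, 3, 6, 12 };

/* For a given execution order (indices into runtime[]), each job's turnaround
 * time equals its completion time, since every job arrives at time 0. */
static double average_turnaround(const int order[5]) {
    int clock = 0, total = 0;
    for (int k = 0; k < 5; k++) {
        clock += runtime[order[k]];   /* job finishes here */
        total += clock;
    }
    return total / 5.0;
}

int main(void) {
    int fcfs[5] = { 0, 1, 2, 3, 4 };   /* run in order 15, 9, 3, 6, 12 */
    int sjf[5]  = { 2, 3, 1, 4, 0 };   /* shortest job first: 3, 6, 9, 12, 15 */
    printf("FCFS average turnaround: %.1f min\n", average_turnaround(fcfs));
    printf("SJF  average turnaround: %.1f min\n", average_turnaround(sjf));
    return 0;
}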

Homework 9

10.2 List and briefly define four techniques for thread scheduling.

Load sharing: processes are not assigned to a particular processor. A global queue of ready threads is maintained, and each processor, when idle, selects a thread from the queue. The term load sharing distinguishes this strategy from load-balancing schemes, in which work is allocated on a more permanent basis.

Gang scheduling: a set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis.

Dedicated processor assignment: each program is allocated a number of processors equal to the number of threads in the program, for the duration of the program's execution. When the program terminates, the processors return to the general pool for allocation to another program.

Dynamic scheduling: the number of threads in a process can be altered during the course of execution, and the application may be involved in assigning its threads to processors.

10.5 What is the difference between periodic and aperiodic real-time tasks?

An aperiodic task has a deadline by which it must finish or start, or it may have a constraint on both start and finish times. For a periodic task, the requirement may be stated as "once per period T" or "exactly T time units apart". In other words, a periodic real-time task recurs at a regular interval, whereas an aperiodic task arises from a randomly occurring event.

10.1 Consider a set of three periodic tasks with the execution profiles of Table 10.5. Develop scheduling diagrams similar to those of Figure 10.5 for this set of tasks.

Process   Arrival Time   Execution Time   Ending Deadline
A(1)      0              10               20
A(2)      20             10               40
B(1)      0              10               50
B(2)      50             10               100
C(1)      0              15               50
C(2)      50             15               100

The scheduling diagrams (drawn in the style of Figure 10.5, showing arrival times, execution times, and deadlines for A, B, and C over the interval 0 to 100) give the following execution orders:

Fixed-priority scheduling, A has highest priority: A1 B1 A2 C1 B2 C2
Fixed-priority scheduling, B has highest priority: B1 A1 C1 A2 B2 C2
Fixed-priority scheduling, C has highest priority: C1 A1 B1 A2 C2 B2 (A1 misses its deadline)
Earliest-deadline scheduling using completion deadlines: A1 B1 A2 C1 B2 C2

10.2 Consider a set of five aperiodic tasks with the execution profiles of Table 10.6. Develop scheduling diagrams similar to those of Figure 10.6 for this set of tasks.

Process   Arrival Time   Execution Time   Starting Deadline
A         10             20               100
B         20             20               30
C         40             20               60
D         50             20               80
E         60             20               70

The diagrams (in the style of Figure 10.6, showing arrival times, service intervals, and starting deadlines on a time axis from 0 to 120) correspond to the following schedules:

Earliest deadline: A is served 10-30, B 30-50, C 50-70, E 70-90; D misses its starting deadline of 80.

Earliest deadline with unforced idle time: the processor idles from 10 to 20, then B is served 20-40, C 40-60, E 60-80, D 80-100, and A 100-120; every starting deadline is met.

First-come first-served (FCFS): A is served 10-30, B 30-50, C 50-70, D 70-90, E 90-110; E misses its starting deadline of 70.
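The earliest-starting-deadline schedule above can be reproduced with a small simulation; this is only a sketch, and names such as task and the printed format are invented. Whenever the processor is free it starts the arrived, unserved task with the earliest starting deadline; a task whose starting deadline has already passed is marked as missed.

#include <stdio.h>
#include <stdbool.h>

typedef struct { char name; int arrival, exec, start_deadline; } task;

int main(void) {
    task t[5] = {
        { 'A', 10, 20, 100 }, { 'B', 20, 20, 30 }, { 'C', 40, 20, 60 },
        { 'D', 50, 20, 80 },  { 'E', 60, 20, 70 }
    };
    bool done[5] = { false };
    int clock = 0;

    for (int served = 0; served < 5; served++) {
        int pick = -1;
        /* Among tasks that have arrived by 'clock', pick the earliest starting deadline;
         * if none has arrived yet, advance the clock to the next arrival. */
        for (;;) {
            for (int i = 0; i < 5; i++)
                if (!done[i] && t[i].arrival <= clock &&
                    (pick < 0 || t[i].start_deadline < t[pick].start_deadline))
                    pick = i;
            if (pick >= 0) break;
            int next = -1;
            for (int i = 0; i < 5; i++)
                if (!done[i] && (next < 0 || t[i].arrival < t[next].arrival))
                    next = i;
            clock = t[next].arrival;
        }
        const char *note = (clock > t[pick].start_deadline) ? " (missed)" : "";
        printf("%c: %d-%d%s\n", t[pick].name, clock, clock + t[pick].exec, note);
        clock += t[pick].exec;
        done[pick] = true;
    }
    return 0;
}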

Homework 10 Review Questions

11.2 What is the difference between logical I/O and device I/O?

Logical I/O: the logical I/O module deals with the device as a logical resource and is not concerned with the details of actually controlling the device. It manages general I/O functions on behalf of user processes, allowing them to deal with the device in terms of a device identifier and simple commands such as open, close, read, and write.

Device I/O: the requested operations and data (buffered data, records, and so on) are converted into appropriate sequences of I/O instructions, channel commands, and controller orders. Buffering techniques may be used to improve utilization.

11.3 What is the difference between block-oriented devices and stream-oriented devices? Give a few examples of each.

Block-oriented devices store information in blocks that are usually of fixed size, and transfers are made one block at a time. Generally, it is possible to reference data by its block number. Disks and tapes are examples of block-oriented devices.
Stream-oriented devices transfer data in and out as a stream of bytes, with no block structure. Terminals, printers, communications ports, mice and other pointing devices, and most other devices that are not secondary storage are stream oriented.

11.4 Why would you expect improved performance using a double buffer rather than a single buffer for I/O?

Double buffering allows two operations to proceed in parallel rather than in sequence. Typically, while a process is transferring data to (or from) one buffer, the operating system is emptying (or filling) the other.

Problems
11.3

a. Perform the same type of analysis as that of Table 11.2 for the following sequence of disk track requests: 27, 129, 110, 186, 147, 41, 10, 64, 120. Assume that the disk head is initially positioned over track 100 and is moving in the direction of decreasing track number.

b. Do the same analysis, but now assume that the disk head is moving in the direction of increasing track number.

a.
FIFO:   27, 129, 110, 186, 147, 41, 10, 64, 120
        tracks traversed: 73, 102, 19, 76, 39, 106, 31, 54, 56; average seek length = 61.8
SSTF:   110, 120, 129, 147, 186, 64, 41, 27, 10
        tracks traversed: 10, 10, 9, 18, 39, 122, 23, 14, 17; average seek length = 29.1
SCAN:   64, 41, 27, 10, 110, 120, 129, 147, 186
        tracks traversed: 36, 23, 14, 17, 100, 10, 9, 18, 39; average seek length = 29.6
C-SCAN: 64, 41, 27, 10, 186, 147, 129, 120, 110
        tracks traversed: 36, 23, 14, 17, 176, 39, 18, 9, 10; average seek length = 38

b. If the head is initially moving in the direction of increasing track number, only the SCAN and C-SCAN results change:

SCAN:   110, 120, 129, 147, 186, 64, 41, 27, 10
        tracks traversed: 10, 10, 9, 18, 39, 122, 23, 14, 17; average seek length = 29.1
C-SCAN: 110, 120, 129, 147, 186, 10, 27, 41, 64
        tracks traversed: 10, 10, 9, 18, 39, 176, 17, 14, 23; average seek length = 35.1
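The SSTF column above can be checked with a short sketch like the following (sstf_average is an invented name); at each step the pending request closest to the current head position is served next.

#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>

/* Shortest-service-time-first: always serve the pending request nearest the head. */
static double sstf_average(int head, const int req[], int n) {
    bool served[16] = { false };
    int total = 0;
    for (int k = 0; k < n; k++) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (!served[i] && (pick < 0 || abs(req[i] - head) < abs(req[pick] - head)))
                pick = i;
        total += abs(req[pick] - head);
        head = req[pick];
        served[pick] = true;
    }
    return (double)total / n;
}

int main(void) {
    int req[] = { 27, 129, 110, 186, 147, 41, 10, 64, 120 };
    printf("SSTF average seek length: %.1f\n", sstf_average(100, req, 9));
    return 0;
}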

12.3 What is a file management system?

A file management system is the set of system software that provides services to users and applications in the use of files.

12.6 Why is the average search time to find a record in a file less for an indexed sequential file than for a sequential file?

In a sequential file, a search examines each record in turn until one with the matching key field value is found. An indexed sequential file provides an index structure that allows a far less exhaustive search.

Problems

12.1 Define: B = block size, R = record size, P = size of block pointer, F = blocking factor (expected number of records within a block). Give a formula for F for the three blocking methods depicted in Figure 12.6.

Fixed blocking: F = floor(B/R), the largest integer not exceeding B/R.

When variable-length records are packed into blocks, data to mark the record boundaries must be added so that individual records can be identified. When spanned records bridge block boundaries, some structure is also needed to link a record to its successor block. One possibility is a length indicator preceding each record; another is a special separator marker between records. In either case we can assume that each record requires a marker, and we assume the marker is about the same size as a block pointer of size P. For spanned blocking, a block pointer of size P to the successor block is also included in each block, so that a spanned record can easily be retrieved. Thus:

Variable-length spanned blocking: F = (B - P) / (R + P)

Since spanning is not used, variable-length unspanned blocking wastes on average R/2 of space per block, but it needs no pointer to a successor block:

Variable-length unspanned blocking: F = (B - R/2) / (R + P)
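The three formulas above can be checked numerically with a tiny sketch; the sample values of B, R, and P below are invented for illustration.

#include <stdio.h>

int main(void) {
    double B = 2048, R = 100, P = 4;   /* illustrative block size, record size, pointer size */

    double fixed     = (double)((int)(B / R));   /* largest whole number of records per block */
    double spanned   = (B - P) / (R + P);
    double unspanned = (B - R / 2) / (R + P);

    printf("fixed blocking:            F = %.2f\n", fixed);
    printf("variable-length spanned:   F = %.2f\n", spanned);
    printf("variable-length unspanned: F = %.2f\n", unspanned);
    return 0;
}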

12.3 What file organization would you choose to maximize efficiency in terms of speed of access, use of storage space, and ease of updating (adding/deleting/modifying) when the data are

a. updated infrequently and accessed frequently in random order?
b. updated frequently and accessed in its entirety relatively frequently?
c. updated frequently and accessed frequently in random order?
Answer: a. An indexed file. b. An indexed sequential file. c. An indexed file or a hashed file.
