This test has ten questions; answer at least eight of them.
Up to the first 75 words appearing after a question are taken to be
the answer to the question. Words following the 75th word are not
part of the answer and will be ignored.
-
Is it possible for an operating system to distinguish between a process that's
waiting for a long time (possibly forever) and a process that's deadlocked? If
it is, explain how; if it's not, give an example the operating system would
fail to distinguish. Your answer need not consider efficiency or performance.
Yes, it is. If a process is deadlocked, the OS can trace an allocation-request
chain through processes and resources starting and ending at the blocked
process; that is, the OS can find a cycle in the resource-allocation graph.
Because a cycle is a necessary condition for deadlock, a process not on such a
cycle can't be deadlocked and must instead simply be waiting for a long time.
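The model answer's cycle test can be sketched as a small graph search. This is
only an illustration: the process and resource names and the graph encoding are
invented, and a real OS would build the graph from its allocation and request
tables.

```python
# Resource-allocation graph as a dict of directed edges:
# resource -> process means "allocated to"; process -> resource
# means "requested by".  All names are invented for illustration.

def find_cycle(edges, start):
    """Return True if some directed cycle passes through `start`."""
    stack = [(start, iter(edges.get(start, [])))]
    seen = set()
    while stack:
        _, successors = stack[-1]
        advanced = False
        for nxt in successors:
            if nxt == start:
                return True               # back at the blocked process
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, iter(edges.get(nxt, []))))
                advanced = True
                break
        if not advanced:
            stack.pop()                   # dead end: backtrack
    return False

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: deadlock.
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(find_cycle(deadlocked, "P1"))       # True

# P3 requests R3, which nothing holds: merely waiting.
waiting = {"P3": ["R3"], "R3": []}
print(find_cycle(waiting, "P3"))          # False
```

A process whose node lies on no cycle is, per the model answer, merely waiting.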
The answers, approximately verbatim,
and in no particular order:
-
Yes, it is possible for an operating system to distinguish between a process
that's waiting for a long time and a process that's deadlocked. A deadlocked
process must meet four requirements: 1) mutual exclusion condition, 2) hold and
wait condition, 3) No preemption condition, 4) Circular wait condition. All
four of these conditions must be present for a deadlock to occur. If one of
them is absent a deadlock is impossible. Therefore by having the
[…]1
-
It is possible. A deadlocked process would be demanding resources that are
indefinitely unavailable, but the waiting process is a victim of the algorithm
used by the system,2 and thus would not hold resources in the same way a
deadlocked process would.3
-
A process is deadlocked usually when two processes are left open.4 An example would be if
process A is running and is left open, then process B, which needs
the same resources, starts running. This process is considered
deadlocked.5 The operating system can distinguish between a process and a
deadlock. If a process is running forever6 the operating system can check
the processes being repeated and what processes are running. If multiple
processes are running […]7
-
An operating system can tell if a process is deadlocked. The operating system
manages and allocates resources so it can tel if a resource is used and where
it is. The operating system can check if a requested resource is in use and if
two processes are requesting resources held by another process it can detect
that there is a deadlock.8
-
Yes, the operating system can distinguish between the two by keeping track of
the resources the process is using and requesting in a resource graph as well
as all other processes.9 Let arrows lead from the process to
the resources they are requesting and waiting for. If you can follow the lines
of waiting requests in a cycle at any point from the process in question, then
there is a knot, meaning a deadlock.
-
Yes, an operating system can do this by not only periodically checking for long
waits10 but also check
if there is cyclical resource allocation that can lead to deadlock. If there
is in fact cyclical resource allocation and both processes have been waiting a
very long time it is almost safe to assume deadlock.11
-
No, it is not possible for a operating system to distinguish if a process has
been waiting longer than another process.12 An example that I can think of is if an operating system
stored process that are waiting for the CPU in a stack, last in first
out.13 It is possible that
the first process never gets executed.14
-
It is possible to tell the difference. A process that is waiting has no
resources allocated to it,15 and is simply waiting for resources to be freed up. A deadlocked process is
a process that has resources but needs more in order to complete. An operating
system could tell the difference by seeing which processes have no allocated
resources and which ones do.16 This will show which processes are deadlocked and
which processes are waiting […]17
-
The operating system should be able to distinguish between a deadlocked process
and a process that has been waiting by looking at the resources each process
posses.18 If process t1 needs resource A but A is in
use by another process, then it's deadlocked if the process wanting A also
needs a resource from t119 (*). So if the operating system knows that a process is
waiting for a resource it has20 being executed and […]21
-
No. Process A is waiting for process B to release an event or
information that only process B can provide. The OS has no way of knowing
that process A is waiting is due to deadlock, circular wait condition, or
for any other reason.
-
Yes, the OS could terminate all other processes22 to free
up resources for the potentially deadlocked process to use and remove itself
from deadlock.23 Or
the OS could take note of the amount of time the potentially deadlocked
process has been running,24 terminate it, and restart it to see if it waits for the
same amount of time or more to determine if deadlock exists.25
-
It is not possible for an operating system to distinguish between a process
that is waiting for a long time and a process that is deadlocked in the OS. An
example of what the OS would fail to distinguish is between the processes since
deadlock is kind of a long process26 the OS
would be confused as to whether if is one or the other.27 The OS would categorize the two
processes as the same […]28
-
Yes it is possible by utilizing a deadlock detection algorithm much like the
Banker's algorithm is used for deadlock prevention. By monitoring available
resources, what is allocated to each process, and the current requests for each
process, a detection algorithm29 can be used to investigate all possible allocation sequences for the
processes that remain to be completed.30
Consider allocation and requests to both be m×n
matrices, and work and finished vectors of lengths m and
[…]31
-
No, there is no way to distinguish between the two; waiting versus deadlock.
In the case such that three process A, B, and C each access[?] a
resource, but only 232 are
available at any one time. A + B will both eventually use both
resources, and C wants both after A and B finish.33 A will request and then B. Once both
has a resource they are deadlocked34 +
C forever waits. The OS […]35
-
No, this is not possible because of congestion control for best throughput is
to send everything at once and keep the number of unacknowledged packets at max
value, the window size.
-
Yes, the OS can distinguish between the two possible as when they are
deadlocked or just waiting to allocation that[?] is usually deleted.36 The system should have an algorithm that runs[?] periodically to
detect how many processes are waiting.37 The algorithm executes such that if the number of processes are still
there38 and then the work hasn't been completed,39 then system I-O [?] [?] but if it processes numbers are more than
needed it's deadlocked.
-
Yes, an OS could tell the difference by examining the resource matrix.
Resource matrices keep track of which process went/own a given
resource.40 If the operating system kept track of process idle time,
time spent not executing, and cross referenced this to the resource matrix, the
OS would know if the process is waiting for a resource or not.41
Process with high idle time but not waiting for a resource may be hung, but
with […]42
-
Every once in a while a file system is compacted. After compaction, the
file-system free-list size is unchanged. Assuming compaction changes the file
system for the better, how is the free-list implemented? Justify your answer.
The free-list implementation choices are bit maps and interval lists. A bit
map never changes size, regardless of what happens during compaction, so it is
consistent with the observation. More tellingly, after compaction there's one
free-space region, so an interval list would shrink to a single entry; its size
would change. The free list must therefore be implemented as a bit map.
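The size argument can be made concrete with a toy layout; the block count and
the used-block sets below are invented:

```python
# Two free-list representations of the same 12-block disk, before
# and after compaction (disk contents invented for illustration).

def to_bitmap(used, nblocks):
    """One bit per block: 1 = allocated, 0 = free."""
    return [1 if b in used else 0 for b in range(nblocks)]

def to_intervals(used, nblocks):
    """Free space as a list of (start, length) intervals."""
    intervals, b = [], 0
    while b < nblocks:
        if b in used:
            b += 1
            continue
        start = b
        while b < nblocks and b not in used:
            b += 1
        intervals.append((start, b - start))
    return intervals

NBLOCKS = 12
fragmented = {0, 1, 4, 5, 8, 9}   # used blocks scattered
compacted = {0, 1, 2, 3, 4, 5}    # same blocks, slid together

print(len(to_bitmap(fragmented, NBLOCKS)),
      len(to_bitmap(compacted, NBLOCKS)))  # 12 12 -- size never changes
print(to_intervals(fragmented, NBLOCKS))   # [(2, 2), (6, 2), (10, 2)]
print(to_intervals(compacted, NBLOCKS))    # [(6, 6)] -- one entry
```

The bit map is the same length either way; the interval list collapses from
three entries to one, which is the size change the question rules out.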
The answers, approximately verbatim,
and in no particular order:
-
When a file system is compacted or defragmented the operating system takes all
of the data written to the disk and shifts it down to the starting address and
begins to assemble it into logical chunks of data without space between them.
In doing this the free list size is in fact unchanged because the same amount
of data still resides on the disk.1 The free
list is implemented in a way that changes the […]2
-
Since the size of the free list is unchanged and set to a certain amount, the
free list can be implemented to accept objects of a certain size3 and know
how much space it can fill.4 This manages how
much information a file system can take it and even set priorities over what
information is saved. Data compaction allows information that needs to be
saved for long periods of time and data that has to
[…]5
-
It is free list implemented because middleware. This is a layer between an OS
and host-resident system. A middleware hosts OS functions and spreads them
over a network.
-
[ unanswered ]
-
One way is to use a bit map. A disk with n blocks requires a bitmap with n
bits. Free blocks are represented by 1s in the map, allocated blocks by 0s.
We can use free blocks to hold the free list. Only one block of pointers6 is needed to be kept in main memory.
When a file is created, the needed blocks are taken from the block of
pointers.7 When it runs out, a […]8
-
Better performance is achieved when files are closer together and as contiguous
as possible.9 Once
compaction has taken place, the files in the disk are listed closer together;
the remaining free space can be listed in the free-list implementation in a
contiguous manner as well. A linked list implementation would do.10
-
The free list is implemented in a way in which the size of the file would not
change.11 The implementation would not focus[?] on the
size of the file but however compact the details or content within the
file.12 Some
example would be that if change some of the content in it to make it easier to
read or distinguish. The compaction process in this case would make it13 better
without changing its size.
-
The free list should be implemented as an array14 because the size does not change15 and there is
no need for it to be dynamic like a linked list or a tree.
-
The free list has nothing to do with the file system being
compacted.16 That is why the free list didn't change. The
free-list must take a broad view of the file system, meaning that compaction
changed the location of the files but it didn't not change the amount of files
there was.17
-
One of the main factors of compaction is the cost, and it can only be done
dynamically and while the system is in execution.18
Compaction moves all the free space in one big block. The free list is
implemented the same way19 but the big
difference is instead[?] of having 1 & 0 periodically now all the 0s are
clustered together e.g. 000000011111.
-
The free list would be implemented as a hash table20 such that reorganizing the list
would not be too complex.21 When the system is compacted, items in the list that need to be moved
for optimization are given a new hash key[?] and tacked on to the back of the
list in that spot.
-
[ unanswered ]
-
The free list is implemented like a journal.22 The journal keeps track of the
file system regardless of its size and will allow for the same performance
before and after the compression.23
-
The free list is implemented in a double array24 where it acts as an access control
list.25
It provides the file system with a list of agents and objects where it provides
the actions each agent can perform on an object.26 Since the file system is completed the size of the free list
is unchanged because there still the same number of agents and objects.
-
[ unanswered ]
-
There is no free list being implemented,27 or the free list is not being recalculated after
compaction.28 Not sure if compaction is
defragmentation or compression or either.29 Compaction changing the file system for the
better does not imply it uses all the space in a contiguous manner.30
-
The free list is most likely implemented as an array.31 Compaction will shuffle the files or "squeeze" the free space
or fragmentation out between files. For this to happened and the free-space
count remain the same it32 must know exactly
how many free blocks exist between files.33 The easiest way to know
this is to know how many blocks are in use subtracted from the total blocks.
-
A colleague of yours thinks that capabilities are nothing more than access
control lists applied to agents. Is your colleague correct? Justify your
answer.
No, the colleague is wrong. An ACL contains (agent, operation) pairs, while a
capability contains (object, operation) pairs. An ACL describes what agents
can perform what operations on a particular object; a capability describes what
operations on what objects a particular agent can perform.
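The two slicings of the same access matrix can be sketched as follows; the
agents, objects, and operations are all invented for illustration:

```python
# One access matrix, keyed (agent, object) -> operations.  An ACL is
# a column (one object, many agents); a capability list is a row
# (one agent, many objects).  Names invented for illustration.

matrix = {
    ("alice", "file1"): ("read", "write"),
    ("alice", "file2"): ("read",),
    ("bob",   "file1"): ("read",),
}

def acl(obj):
    """(agent, operations) pairs for one object: the object's ACL."""
    return {a: ops for (a, o), ops in matrix.items() if o == obj}

def capabilities(agent):
    """(object, operations) pairs for one agent: its capability list."""
    return {o: ops for (a, o), ops in matrix.items() if a == agent}

print(acl("file1"))          # {'alice': ('read', 'write'), 'bob': ('read',)}
print(capabilities("alice")) # {'file1': ('read', 'write'), 'file2': ('read',)}
```

Both views come from the same matrix, but they pair different things: the ACL
pairs agents with operations, the capability list pairs objects with
operations, which is why the colleague's claim fails.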
The answers, approximately verbatim,
and in no particular order:
-
Yes, my colleague is correct. Not being able to access something in a control
list1
means that you don't have the capability to access it.2 When you do have access to the control
list means you are capable of accessing it.
-
He is correct. Capabilities specify what the agent is able to do, in the same
way that access control lists control what actions are valid for an agent to
perform during an operation.3
-
Because of framework tools. This can be integrated via data, control of
presentation. It also exploits component portability manage context to
simplify services provides wizard and other tools.
-
Capabilities and ACL are not the same. Capability for an agent is described by
cutting along a row [diagram]. Access control: when cutting along the column
creates ACL. [diagram]4
-
He is correct. An ACL is nothing more than a list where it provides an object
and a series of agents and in each row what operation that agent can perform in
that object. Capabilities are what the agent is allowed to do or perform,
since this list does exactly what ACLs are basically list of agents
capabilities on objects.5 [diagram]
-
Not quite, although they appear similar, they are entirely different
structures. An ACL deals with a single object and every agent that can access
it with certain permissions. However, a capability takes the opposite
approach, breaking down objects and operations by agent. They seem the same,
but ACLs are much more efficient,6 and
capabilities don't allow for the same implementations.7
-
Yes, my colleague is correct, an access control list is a matrix of permissions
between agents and objects.8 With agents on the rows and objects on the columns. If you go along a row
for a single agent it lists in each column the operations that agent can
perform on the object. The entire row holds the capabilities for that agent on
all objects and is known as the agent's capabilities.9
-
In the most basic sense, yes, my colleague is right. When considering a
capability-based system, the system must often implements a set of possible
access rights of the access control list.10 This set
controls the ability to read, write, or execute a memory segment. There can be
additional user defined rights as well as system rights, all of which are
controlled by the set of defined access rights.11
-
The colleague who thinks capabilities are nothing more than access control
lists applied to agents would happen to be incorrect. This reason is being is
because capabilities are more capable to do other things other than control
access lists to agents.12 Capabilities are other
things that exist in networks and OS such as when connectivity and capable
things.13 They can also access needs in the operating system.
-
It is similar but work totally different. ACL uses the column in a matrix to
allow access to different ways when a request is processed. the system checks
the column if user has access or not. Capabilities uses rows in a matrix where
each row has its own domain[?] & access possession to each [?] Capabilities
are also very[?] expensive[?] to the [?] process directly.
-
My colleague is incorrect. Capabilities are the list of all agents and what
things they can do to every object.14 ACLs
are what agents can do to a single object. Capabilities determine what an
agent can do to every object, even if it is nothing.15 ACLs
determine what agents can do to one object.
-
He is correct. During[?] matrix selection[?] each column is an access control
list that shows each agent and each agent's permissible operations.16 This applies
directly to capabilities. In order to get a permission to transfer17 you18 need to
receive a certificate. A certificate can be directly related to an ACL19 which gives you access to certain agents or transferable
rights. These two systems are very much alike.
-
No, the fundamental difference between the two is one of implementation. A
capability still carries with it the problem of having to seek out the object
that the agent must perform operations on whereas ACLs detail the permission
for each agent about to operate on the object in question making them far
easier to implement and less prone to permission confusion.
-
The colleague is not correct in thinking access control list are only
capabilities applied to agents. Capabilities show how objects can be worked on
by a specific agent. Access control lists show what a specific object can have
done to it by agents. Access control contains all the permissions an object
and show who can do what to it. A capability shows what an agent can do but
not what all agents can do.
-
Capabilities partition access matrices into separate rows so that information
for a single subject are all together.20 An
access list is separated into columns separated by objects, each with all
subjects and their privileges over that object. The difference between the two
is how the subject is listed. Capabilities separates the subject based all the
info given on the subject.21 Access control lists all the subjects and the
privileges over the objects. The colleague based
[…]22
-
Yes. Both of these things can explain how a process is able to gain
rights.23 The capabilities
approach is a way of giving a process a specific certificate or right. The use
of agents and the ACL is an abstraction of how this is done as it can give a
process the same rights but in an indirect manner.24
-
My colleague is incorrect. An ACL associates with each object an ordered list
containing all the domains that may access the object and how. The ACL method
slices up the protection matrix by columns. Compared to the ACL, capability
slices up the protection matrix by rows, with the method associated with each
process is a list of object that may be accessed, along with an indication of
which operations are permitted on each (its domain).
[…]25
-
True or false: the worst-case utilization for free space by first-fit is
approximately 50%. Assume that 1) allocation requests are approximately the
same size (that is, are randomly distributed with small variance) and much less
(< 10%) than total free-space size, and 2) there is sufficient demand to
have 100% free-space utilization with some perfect allocation scheme. Justify
your answer.
Ignoring the ends (or wrapping them to form a ring), the free space is broken
up by \(M\) contiguous allocated-block sequences. Each allocated sequence is
surrounded by free-space runs; arbitrarily assign the trailing free-space run
to its allocated sequence. Because the allocation sizes are more or less the
same, each unusable free run is smaller than one request while each allocated
sequence holds at least one request; in the worst case, a single request
alternating with a just-too-small gap, roughly half the space is wasted, and
the approximately 50% utilization follows.
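A back-of-the-envelope check of that worst case, with invented numbers: every
allocated run of \(s\) blocks is trailed by a free gap of \(s-1\) blocks, just
too small for any further request.

```python
# Worst case under the stated assumptions (all numbers invented):
# each allocated run of s blocks is followed by a free gap of s - 1
# blocks, too small to satisfy any request of size ~s, so first-fit
# can place nothing even though nearly half the space is free.

s = 10     # typical request size, much less than total free space
runs = 20  # allocated runs around the ring

allocated = runs * s          # blocks doing useful work
wasted = runs * (s - 1)       # one unusable gap per allocated run
utilization = allocated / (allocated + wasted)
print(round(utilization, 3))  # 0.526, i.e. approximately 50%
```

Making the gaps any larger would let a request fit, raising utilization, so
roughly 50% is as bad as first-fit gets under these assumptions.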
The answers, approximately verbatim,
and in no particular order:
-
False. The worst case utilization for free space by first fit is not
approximately 50%. The reason why it is true is because there is sufficient
demand to have 100% free space utilization with some perfect allocation scheme
within the OS.1 This also gives or takes the much less (< 10%) than the total
free space size. 100% - (< 10%)2 would
not approximate to 50% in this case.
-
True, assuming requests are about the same size means that there should be
close to even distribution.3 Of course they can
be allocated in a way that is not helpful, but even at the worst you will find
a next fit, or first fit that's not much bigger than what you already have
allocated.4 Therefore you will always have 50% or
higher utilization.5 [diagram] By allowing for bigger
or smaller file size variations it […]6
-
False. Based on those assumptions, worst-case for first fit would be much
lower.7 If all
allocations were 9.99% of total free space and there was high demand for 100%
utilization of free space, then just to subsequent allocations would already be
better than 50% with some[?] perfect allocation scheme something will have had
to one horribly wrong to achieve only 50% utilization.8
-
True. First-fit allocation finds the first space big enough to fit the block
and places it there. If all block sizes are virtually the same size, the disk
would be split up symmetrically, making disk arm movement minimal.9 Therefore, the worst possible case
for this utilization is 50%.10 There would also be very little unused space on the disk.
-
Since all request are approximately the same size that means some one just
slightly bigger than others with demand for 100% utilization these allocations
will be a tightly packed as possible.11
When the smaller processes are done they will not be able to be replaced by the
larger ones. If the smaller ones finish in an alternating pattern and no small
process can replace them then will only be about 50% utilization.
-
False, there is no way that you could have all of the free space used
up.12 Some disk fragments are too small
to store data. You probably will not allocate the same amount of data with the
size of the disk sector.13 Data is in many different sizes and will take up random parts of the
disk.14
-
True, even assuming the system allocates requests in decreasing order from 9%
to 5% (Full of 9% to half full of 5%, allowing for maximum external
fragmentation),15 and is
then hit with requests for nothing but 9% allocation there will still be 10 5%
allocations in the free space keeping the utilization at exactly
50%.16
-
True. Given enough time, with many requests receiving and releasing space,
eventually there will be small gaps between available allocations.17 Worst case would be every other allocation is of size 1, leaving only
50% utilization of free space. Also if the first request was for 50% of
available space18 and
all subsequent requests were equal or larger, only 50% would be utilized, even
with a normal distribution those extremes are possible.
-
False - assuming both conditions are true, the worst-case utilization would be
higher than 50%.19
-
A good idea would be to have location transparency logical objects are location
agnostic. Because remote file caches can be in primary store.
-
False. First off, assuming that each request is approximately the same size,
then you can assume that each request can more or less replace each other
requests in thinking about size since they are the same. [1] The worse case
utilization would be about 75-80% since all requests are the same size20 they can all fit in
each other places.21 first fit means first available space. With this there won't be much
overlap22 or lost
[…]23
-
True. By assuming that the requests are of similar size and are small, there
is less chance of ending up with too many spaces that cannot fit a
request.24 For example, if the space is 100 units and there are 5 requests all
of size less than 10 units, even if they are placed en to end in order that
they are received, it is likely that 10 of the request will fit being that25
[…]26
-
True. First-fit is the simplest algorithm.27 This scheme will result in memory filling up with
tiny useless holes. First-fit generates the largest holes on the
average.28
-
True. The problem with the first fit is that it searches[?] for the first
biggest hole of memory. In many cases it can be too big to what may be needed.
Using best fit it allows the OS to chose the most fitted size for the
storage.29 In case where the
requested size are similar[?], then it would not be a big problem such as 50%
as mentioned[?].30
-
The more free space that is utilized the better. As for 50% being the
worst-case utilization is true. If free-space utilization is lower than 50%
more allocation requests can come in.31 Being at 50% not all the space is
utilized, and not enough is utilized to accept a lot of requests.32 Having only 50%
utilized can be expensive and useless because not all the space is filled but
it is still taking up space that […]33
-
It is false that the worst-case utilization is with each request being under
20% of total free space34 and there being enough demand for 100% utilization given the right
algorithm that means there is a high number of requests as each request is
under 10% you could fit any request until you have 90% utilization35 meaning worst case is at utilization is at least 90%. This is
well above 50%.
-
[ unanswered ]
-
A colleague of yours discovers that a simple "immediately grant the maximum
request" style of resource allocation is as effective as the Banker's algorithm
at avoiding deadlock over some resource. Your colleague modifies the operating
system accordingly, and is dismayed to discover that, while the system still
doesn't deadlock, some of its operating characteristics are worse than they
were in the old, unmodified system using the Banker's algorithm. Name one of
these characteristics, and explain why it differs between the modified and
unmodified versions of the system.
The Banker's algorithm allocates resources in response to explicit requests;
the modified algorithm always allocates the maximum, whether needed or not.
Assuming processes don't always use their resource maximums, the modified
algorithm leads to lower resource utilization than does the Banker's algorithm.
Reduced resource utilization leads to reduced process throughput as the
modified algorithm makes processes wait for resources that would have been
allocated to them by the Banker's algorithm.
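The utilization difference can be sketched with a toy workload. The totals,
maximum claims, and actual peak usages below are invented, and the simple sum
check stands in for a full Banker's safety test (which, for these numbers,
reaches the same verdict: a safe completion order exists).

```python
# Toy comparison (all numbers invented): 8 units of one resource,
# two processes whose declared maximums are 5 each but whose actual
# peak usage is only 3 each.

TOTAL = 8
max_claim = {"P1": 5, "P2": 5}
actual_peak = {"P1": 3, "P2": 3}

# Modified system: each process is handed its full maximum at start,
# so both can run together only if the maximums fit simultaneously.
can_overlap_modified = max_claim["P1"] + max_claim["P2"] <= TOTAL
print(can_overlap_modified)  # False -- P2 idles until P1 finishes

# Banker's algorithm: grants follow actual requests, refusing only a
# grant that would leave no safe completion order.  Here both actual
# peaks fit at once, and either process can still finish first.
can_overlap_bankers = actual_peak["P1"] + actual_peak["P2"] <= TOTAL
print(can_overlap_bankers)   # True -- both run, higher utilization
```

With the Banker's algorithm the two processes overlap; under grant-the-maximum
they serialize, which is exactly the lost utilization and throughput the model
answer describes.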
The answers, approximately verbatim,
and in no particular order:
-
One of the characteristics of the system1 is that you are granting the
maximum request of only granting a certain amount of the maximum which is the
Banker's algorithm,2 because the Banker's algorithm assumes
that a process won't need all of the resources. Using maximum resources will
slow down the system.3
-
With the Banker's algorithm in use there is no starvation[?] of processes, as
long as there are enough resources to run the processes. The algorithm will
allow to run. With the modified version immediately granting the max request
every time will cause starvation of some processes.4 Say there is a
process that requires 5 resources and one that requires 2. The one needing 5
will be started first.5 Halfway through another
process is added […]6
-
One of the characteristics in this case would be the effects of the Banker's
algorithm avoiding deadlock over some other resources. The operating
characteristics are more than they were when it was old7
because the modified and unmodified solutions do not give the immediate grants
to the maximum request style of resource allocation.8 This makes the Banker's algorithm uneffective,9 but perhaps effects it in a
negative way because it came out worse. The characteristics
change[?] […]10
-
[ unanswered ]
-
One characteristic that will significantly decrease is the operating system's
throughput. If all required resources are being allocated at once that means
that there are some that are being held onto to use after others.11 These resources could be used to get other
processes started and make them finish faster than if they had to wait for the
process holding them finish entirely before getting a hold of them.
-
The old system would run more smoothly[?] (faster) because the Banker's
algorithm is designed to allow as many processes as possible to run based on
resource supply and needs. Bulk allocation reduces deadlock,12 but also slows down the system. The new
implementation may free up too many processes,13 which leaves other processes waiting. This takes too
much time, and therefore makes the system slower than with the Banker's
algorithm.
-
The resource allocation ensures that the system doesn't deadlock, but this
method of resource allocation doesn't allow for multiple processes.14 It allows the
larger processes15
to complete but leaves the smaller processes waiting if other large processes
come along.16 This will
leave smaller processes waiting, possibly for forever, if there are larger
processes constantly being called by the system. this makes the OS hang while
larger processes compete and smaller ones wait.
-
[ unanswered]
-
Assuming my colleague is utilizing a cycle detection algorithm to prevent
deadlock in his maximum first request algorithm, they have most likely found
that while the system may remain in a safe state and in perfect conditions
should even be more efficient than the Banker's algorithm;17 the system may not be as efficient under less than
perfect conditions.18 In an event where
they cannot allocate all their resources to more than one processes, the cycle
[…]19
-
One of the system's operating characteristics that differs in both versions of
the system would be throughput. It differs because in the old system processes
could grow up to their maximum allowance of resources and there was always
enough to accommodate the largest possible request. In the new system all
resources can immediately be lent out, leaving nothing to be used by smaller
processes since there is less to go around, throughput degrades.20
-
It violates the [?] [?] [?] of the operating system . Using the maximum
request style OS can get stuck[?] in a unsafe[?] state in a case where there
are not enough resources to finish or process21 & there is a process that can be
finished[?] with the currently available resources. Banker's algorithm insures
that before a process starts OS has maximum resources that may be needed by a
process.22
-
An operating characteristic that is worse with the new allocation than with the
old is efficiency.23 With resources being
allocated to the maximum request, resources are not available at a rate allowed
by the Banker's algorithm.24
-
Utilization25 will certainly suffer if each
request is granted its maximum amount of resources immediately. The system
will be able to allocate too much new requests before using all its available
resources. The Banker's algorithm always keeps around resources on hand to
satisfy some maximum request.26 But if all resources are allocated immediately the system must
constantly wait on a process to finish before allowing a waiting one to
execute.
-
Granting a maximum request or resources can cause deadlock when every resource
request is granted to requests done at the same time.27 (Dinning philosopher
problem).28 The Banker's algorithm is a better
solution. 1) No process clams more than total resources. 2) No process holds
more than it needs. 3) Sum is no more than total resources available.
-
Process wait time will be impacted. Process wait time is how long a process
has to wait for the resources it needs. In the unmodified system, total
resources 10, process 1 needs 3 and process 2 needs 5. These two processes
could run concurrently because there are enough resources. In the new model,
process 2 needs to wait until process 1 completes.29 This is added unnecessary latency, and turns the OS into
sequential like.
-
What differs between the modified and unmodified versions of the system is that
the maximum request is immediately granted. Therefor since there is not enough
resources for another process they are not issued because this would put[?]
system into an unsafe state.30 This implementation does prevent deadlock, but since it
immediately grants the maximum request it can only service one process at a
time,31 which does not give any process resources until
the existing one […]32
-
[ unanswered ]