NTFS File System




Directories

A directory on NTFS is a special file that stores references to other files and directories, establishing the hierarchical structure of the disk data. The directory file is divided into blocks, each of which contains a file name, basic attributes, and a reference to the MFT element that holds the complete information on that directory entry. The internal structure of a directory is a binary tree. In a linear directory, such as FAT's, the operating system has to look through all the entries until it finds the right one when searching for a file by name. A binary tree arranges file names so that the search is faster: it proceeds by obtaining binary answers to questions about the file's position. The tree can answer the question of which half the required name falls into, above or below a given element. We start by asking this question of the middle element, and each answer halves the search area. The files are sorted alphabetically, so the question is answered in the obvious way, by comparing initial letters. The halved search area is then examined the same way, again starting from its middle element.



To appreciate the gain: to find one file among 1000, for example, FAT has to perform about 500 comparisons (most probably the file will be found halfway through the search), while a tree-based system needs only about 10 (since 2^10 = 1024). The saving of search time is real. Don't think, however, that everything is so neglected in traditional systems (FAT): firstly, maintaining the file list as a binary tree is rather complex, and secondly, even FAT in a modern implementation (Windows 2000 or Windows 98) uses a similar search optimization. This is just one more fact to add to your knowledge. It is also worth clearing up a widespread misconception (which until recently I fully shared) that adding a file to a tree-structured directory is harder than adding it to a linear one. The operations take comparable time: to add a file to a directory, the system must first make sure that no file with that name exists yet, and in a linear directory this runs into exactly the search problems described above, which more than compensate for the ease of appending an entry to a linear list.
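The halving is easy to demonstrate. The following Python sketch (hypothetical names, nothing to do with the actual NTFS code) counts the name comparisons that a linear scan and a binary search need over the same sorted list of 1000 names:

    def linear_search(names, target):
        """Scan entries one by one, as in a linear (FAT-style) directory."""
        comparisons = 0
        for name in names:
            comparisons += 1
            if name == target:
                break
        return comparisons

    def binary_search(names, target):
        """Repeatedly halve the sorted list, as in a tree-organized directory."""
        comparisons = 0
        lo, hi = 0, len(names) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            comparisons += 1
            if names[mid] == target:
                break
            elif names[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return comparisons

    names = sorted("file%04d.txt" % i for i in range(1000))
    print(linear_search(names, "file0777.txt"))   # 778 comparisons
    print(binary_search(names, "file0777.txt"))   # at most 10, since 2**10 = 1024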

What information can be obtained just by reading a directory file? Exactly what the dir command gives. For elementary navigation around the disk there is no need to go to the MFT for every file; it is enough to read the most general information about files from the directory files themselves. The main directory of the disk, the root, differs from ordinary directories in nothing except a special reference to it from the beginning of the MFT metafile.

Journaling

NTFS is a fail-safe system that can bring itself back to a correct state after practically any real failure. Any modern file system is based on the concept of a transaction: an action that is either performed wholly and correctly or not performed at all. NTFS simply has no intermediate (erroneous or inconsistent) states: the quantum of data change cannot be split into a "before the failure" and an "after the failure" that would bring corruption and confusion; it is either accomplished or cancelled.

Example 1. Data is being written to the disk. Suddenly it turns out that the place we have just chosen for the next chunk of data cannot be written to because of physical surface damage. NTFS behaves quite logically in this case: the write transaction is rolled back entirely, so the system knows the write did not take place; the place is marked as bad, the data is written to another place, and a new transaction starts.

Example 2. A more complex case: data is being written to the disk when the power suddenly goes off and the system reboots. At what phase did the write stop, and where is the data? The transaction log comes to the rescue. When the system formed the intention to write to the disk, it flagged that state in the metafile $LogFile. At reboot this file is examined to find the uncompleted transactions that were interrupted by the crash and whose result is unpredictable. All these transactions are cancelled: the place where the write was carried out is marked as free again, MFT indexes and elements are returned to the state they were in before the failure, and the system as a whole remains stable. And what if the error occurred while writing to the journal itself? That is not terrible either: the transaction either has not started (there was only an attempt to record the intention to perform it) or was already completed (there was an attempt to record that the transaction was already fulfilled). In the latter case, at the next boot the system will work out on its own that everything was in fact written correctly and will pay no attention to the "unfinished" transaction.
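The principle, write-ahead logging, fits in a few lines. Here is a minimal Python sketch with entirely hypothetical names and format (the real $LogFile is far more involved): the intention is recorded before the data is touched, so on restart anything that never reached "commit" can be rolled back.

    import json, os

    LOG = "logfile.json"    # stand-in for $LogFile
    DATA = "data.bin"       # stand-in for the volume (assumed to exist)

    def write(offset, old, new):
        # 1. Log the intention (and the old bytes) before touching the data.
        with open(LOG, "w") as f:
            json.dump({"state": "begin", "offset": offset, "old": old.hex()}, f)
            f.flush(); os.fsync(f.fileno())
        # 2. Perform the actual change.
        with open(DATA, "r+b") as f:
            f.seek(offset); f.write(new)
            f.flush(); os.fsync(f.fileno())
        # 3. Mark the transaction as completed.
        with open(LOG, "w") as f:
            json.dump({"state": "commit"}, f)
            f.flush(); os.fsync(f.fileno())

    def recover():
        # On reboot: roll back any transaction that never reached "commit".
        if not os.path.exists(LOG):
            return
        with open(LOG) as f:
            entry = json.load(f)
        if entry["state"] == "begin":   # interrupted mid-write: restore old bytes
            with open(DATA, "r+b") as f:
                f.seek(entry["offset"]); f.write(bytes.fromhex(entry["old"]))
        os.remove(LOG)

A crash before step 1 completes means the transaction never started; a crash after step 3 means it already succeeded; either way recover() leaves the data consistent.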

Nevertheless, remember that journaling is not an absolute panacea, only a means of reducing the number of errors and system failures. An ordinary NTFS user will hardly ever notice a file system error or be forced to launch chkdsk. Experience shows that NTFS is restored to a completely correct state even after failures at moments of very heavy disk activity. You can even defragment the disk and press reset at the peak of the process, and the probability of data loss even in this case will be very low. It is important to understand, however, that the NTFS recovery system guarantees only the correctness of the file system as a whole, not of your data. If you were writing to the disk and got a crash, the data often cannot be restored. Miracles do not happen.

Compression

Files on an NTFS volume have one rather useful attribute: "compressed". NTFS has built-in support for disk compression, for which Stacker or DoubleSpace used to be employed. Any file or directory can individually be stored on the disk in compressed form, and the process is completely transparent to applications. Compression is very fast and has only one large drawback: the huge virtual fragmentation of compressed files, which, however, does not really bother anybody. Compression is carried out in blocks of 16 clusters and uses so-called "virtual clusters". This decision is extremely flexible and permits interesting effects: for example, half of a file may be compressed while the other half is not. It is possible because the information about the compression rate of particular fragments is stored very much like ordinary file fragmentation. For example, here is the typical physical layout record for a real, uncompressed file:

  • File clusters 1 to 43 are stored in disk clusters starting at 400
  • File clusters 44 to 52 are stored in disk clusters starting at 8530

Physical layout of a typical compressed file:

  • File clusters 1 to 9 are stored in disk clusters starting at 400
  • File clusters 10 to 16 are not stored anywhere
  • File clusters 17 to 18 are stored in disk clusters starting at 409
  • File clusters 19 to 36 are not stored anywhere


As you can see, the compressed file has "virtual" clusters that carry no real information. As soon as the system sees such virtual clusters, it realizes that the data of the preceding block, aligned to a multiple of 16, must be decompressed; the decompressed data fills exactly the virtual clusters, and that is the whole algorithm.
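The run list above can be modeled in a few lines of Python (hypothetical structures, not the on-disk format): runs that have a real starting cluster hold stored data, while runs with no location are the virtual clusters that mark a compressed 16-cluster block.

    # Each run: (first file cluster, last file cluster, starting disk cluster).
    # None marks "virtual" clusters that are not stored anywhere.
    runs = [
        (1, 9, 400),      # 9 stored clusters
        (10, 16, None),   # virtual: block 1-16 was squeezed into 9 clusters
        (17, 18, 409),    # 2 stored clusters
        (19, 36, None),   # virtual: the next block was squeezed into 2
    ]

    def block_is_compressed(runs, block_start, block_size=16):
        """A 16-cluster block is compressed if any part of it is virtual."""
        block_end = block_start + block_size - 1
        return any(disk is None and first <= block_end and last >= block_start
                   for first, last, disk in runs)

    print(block_is_compressed(runs, 1))    # True: clusters 10-16 are virtual
    print(block_is_compressed(runs, 17))   # True: clusters 19-32 are virtual

On reading, the driver decompresses each such block and the result fills exactly the stored plus virtual clusters; an uncompressed file simply has no virtual runs.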

Security

NTFS contains a great many means of differentiating the rights to objects and is believed to be the most sophisticated file system of all currently existing. In theory this is undoubtedly so, but in the current implementations, unfortunately, the rights system is rather far from ideal: it is a rigid but not always logical set of characteristics. The rights are assigned to any object and are interpreted unambiguously by the system itself. Large revisions and additions to the rights have already been carried out several times, and by the creation of Windows 2000 they finally arrived at a reasonably rational set.

NTFS file system rights are closely tied to the system itself, which means they are not necessarily honored by another system that is given physical access to the disk. To prevent physical access, Windows 2000 (NT5) introduced a standard feature (discussed below). The rights system in its current state is rather complex, and I doubt I can tell the readers anything interesting and useful about it. If you are interested in the topic, you will find many books on the NT network architecture where it is described in more than sufficient detail.

This completes the description of the file system's structure; it remains only to describe a few purely practical or original features.

Hard Links

This feature has existed in NTFS for quite a long time but has been used very seldom. A hard link means that the same file has two names (several directory entries point to the same MFT record). Suppose the same file has the names 1.txt and 2.txt: if the user deletes file 1, file 2 remains; if he deletes file 2, file 1 remains. In other words, both names are completely equal from the moment of creation, and a file is physically deleted only when its last name is deleted.
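This behavior is easy to observe from Python (a sketch; on NTFS, os.link creates a real hard link through the Win32 API):

    import os

    with open("1.txt", "w") as f:
        f.write("the same data")

    os.link("1.txt", "2.txt")      # a second name for the same file
    os.remove("1.txt")             # delete the first name

    print(open("2.txt").read())    # still prints "the same data":
                                   # the file lives until its last name is gone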

Symbolic Links (NT5)

A much more practical feature permits making virtual directories, very much like the virtual disks made by the subst command in DOS. The applications are wide enough: first of all, simplification of the directory system. If you don't like the directory Documents and settings\Administrator\Documents, you can link it into the root directory, and the system will keep communicating with the directory in the old way, while you use a much shorter, completely equivalent name. To create such links you can use the junction program (junction.zip, 15 KBytes), written by the well-known specialist Mark Russinovich (http://www.sysinternals.com/). The program works only in NT5 (Windows 2000), as does the feature itself.
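For example, to expose the documents directory above under a short name in the root (a hypothetical invocation; check the usage output of your copy of the tool, the link name comes first and the target second):

    C:\> junction C:\Docs "C:\Documents and settings\Administrator\Documents"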

Attention! Please keep in mind that such symbolic links can be deleted correctly with the rm command. Be very careful: deleting the link with Explorer or any other file manager that does not understand the concept of a symbolic link will delete the information the link points to!

Encryption (NT5)

There is a useful feature for people who worry about their secrets: every file or directory can also be encrypted, and thus cannot be read by another NT installation. In combination with a standard and quite safe password on the system itself, this feature provides security, sufficient for most applications, for the important data you select.
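Programmatically the feature is exposed through the Win32 API. A minimal Python sketch, assuming Windows 2000 or later (EncryptFileW and DecryptFileW live in advapi32.dll):

    import ctypes

    advapi32 = ctypes.windll.advapi32

    def encrypt(path):
        """Encrypt a file or directory with EFS under the current account."""
        if not advapi32.EncryptFileW(path):
            raise ctypes.WinError()

    def decrypt(path):
        """Remove EFS encryption (the second argument is reserved, must be 0)."""
        if not advapi32.DecryptFileW(path, 0):
            raise ctypes.WinError()

    encrypt("secret.txt")   # readable only under this account's EFS keys

The same can be done interactively through the file's Properties dialog or with the cipher command shipped with Windows 2000.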

