Microsoft KB Archive/222189


Article ID: 222189

Article Last Modified on 2/21/2007



APPLIES TO

  • Microsoft Windows 2000 Server
  • Microsoft Windows 2000 Advanced Server
  • Microsoft Windows 2000 Professional Edition
  • Microsoft Windows 2000 Datacenter Server



This article was previously published under Q222189


SUMMARY

This article describes Dynamic Disks and Disk Groups in Windows.

MORE INFORMATION

Windows 2000 uses a new feature called Dynamic Disks, which introduces the concept of Disk Groups.

Disk Groups help you organize Dynamic Disks and help prevent data loss. Windows allows only one Disk Group per computer (although this may change). Disk Groups can also be used to organize storage when you use Veritas LDM-Pro.

A Disk Group uses a name that consists of the computer name plus a suffix of Dg0. If you use LDM-Pro, the suffix can be incremental, such as Dg1 or Dg2. To view the name of your Disk Group, see the following registry entry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\dmio\Boot Info\Primary Disk Group\Name
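
If you want to check this value from a script, the name can also be read programmatically. The following is a minimal Python sketch that uses the standard winreg module; it assumes it is run locally, with sufficient rights, on the computer in question.

    import winreg

    # Registry location documented above; the value holds the Disk Group name.
    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\dmio\Boot Info\Primary Disk Group"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        name, value_type = winreg.QueryValueEx(key, "Name")
        print("Disk Group name:", name)   # for example, MYSERVERDg0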

Physical Management of Disks (Adding, Removing, Moving)

Basic Disks

Basic disks store their configuration information in the master boot record (MBR), which is stored on the first sector of the disk. The configuration of a Basic disk consists of the partition information on the disk. Basic Fault Tolerant sets inherited from Windows NT 4.0 are based on these simple partitions, but they extend the configuration with some simple partition relationship information, which is stored on the first track of the disk.
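
To illustrate how compact this on-disk metadata is, the following Python sketch decodes the standard four-entry partition table from a copy of the MBR sector. It is not part of any Microsoft tool, and reading the sector directly from a physical disk requires administrative rights; the sketch assumes you already have the 512-byte sector in memory.

    import struct

    def parse_mbr(sector: bytes):
        # The partition table occupies offsets 446-509; the 0x55AA signature ends the sector.
        if len(sector) < 512 or sector[510:512] != b"\x55\xaa":
            raise ValueError("not a valid MBR sector")
        partitions = []
        for i in range(4):                                   # four 16-byte entries
            entry = sector[446 + i * 16 : 446 + (i + 1) * 16]
            status, ptype = entry[0], entry[4]
            lba_start, sector_count = struct.unpack_from("<II", entry, 8)
            if ptype != 0:                                   # type 0 means an unused slot
                partitions.append({"slot": i, "type": hex(ptype),
                                   "bootable": status == 0x80,
                                   "lba_start": lba_start, "sectors": sector_count})
        return partitions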

Dynamic Disks

Dynamic disks are associated with Disk Groups. A Disk Group is a collection of disks that are managed together. Each disk in a Disk Group stores a replica of the same configuration data. This configuration data is stored in a 1-megabyte (MB) region at the end of each Dynamic disk.

The information for Simple, Mirrored, RAID-5, Striped, and Spanned volumes is contained in a private database that is stored at the end of each Dynamic disk. This private database is replicated across all Dynamic disks for fault tolerance. Because the information about the disks is contained on the disks themselves, you can move the disks to another computer or install another disk without losing it. All Dynamic disks in a computer are members of the same Disk Group.

Configuring New Dynamic Disks

In Windows 2000, you can convert a Basic disk to a Dynamic disk. When you convert a disk, Windows looks for any existing partitions or fault-tolerance structures on the disk. Windows then initializes the disk with a Disk Group identity and a copy of the current Disk Group configuration. Windows also adds Dynamic volumes to the configuration, which represent the old partitions and fault-tolerant structures on the disk. If there are no pre-existing Dynamic/Online disks, a new Disk Group is created. If there are existing Dynamic/Online disks, the converted disk is added to the existing Disk Group. Brand-new disks are Basic disks with no partitions. When you use the Disk Management MMC utility, you are prompted to convert any Basic disks to Dynamic disks.
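
The group-membership rule described above can be summarized in a few lines. The following Python fragment is only an illustration of that rule, using hypothetical data structures; it is not how Windows implements the conversion.

    def assign_disk_group(converted_disk, online_dynamic_disks, computer_name):
        if online_dynamic_disks:
            # Existing Dynamic/Online disks: the converted disk joins their group.
            group = online_dynamic_disks[0]["group"]
        else:
            # No Dynamic/Online disks yet: a new group named after the computer is created.
            group = computer_name + "Dg0"
        converted_disk["group"] = group
        return group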

Moving Basic Disks

You can move both Basic and Dynamic disks from one computer to another. For a Basic disk, you need to physically remove the disk from the computer, install it in the new computer, and then either reboot, or use the Rescan Disks command on the Action menu of the Disk Management MMC utility. Partitions on the Basic disk are available immediately. Microsoft recommends that you move any disks containing Basic fault tolerant sets as a group.

NOTE: When you move Basic fault tolerant sets from a Windows NT 4.0 computer, you must save the configuration to a floppy disk, and then use the Disk Management MMC utility to restore the hard disk.

If you remove a disk from a computer, and then you install a different disk using the same hardware address (for example, with the same SCSI target ID and logical unit number), Windows may not recognize the disk. If the Disk Management MMC utility or the file system writes to that disk, the contents of the new disk may be damaged. With some types of disks, such as PCMCIA or IEEE 1394 disks, Windows recognizes the removal and the insertion of the new disk. However, SCSI and IDE disks have no hardware notification, so these disks can be damaged.

There are cases in which the removal of SCSI and IDE disks is recognized automatically. However, Microsoft recommends that you do not rely on automatic recognition for these types of disks.

Moving Dynamic Disks

Removing disks from the original computer:

When you remove a Dynamic disk from a computer, information about the disk and its volumes is retained by the remaining online Dynamic disks. The removed disk is displayed in the Disk Management MMC utility as a "Dynamic/Offline" disk with the name "Missing." You can remove this Missing disk entry by removing all volumes or mirrors on that disk, and then using the Remove Disk menu item associated with that disk.

You must have at least one online Dynamic disk to retain information about Missing disks and their volumes. When you physically remove the last Dynamic disk, you lose the information and the Missing disks are no longer displayed in the Disk Management MMC utility.

Connecting disks to a new computer:

After you physically connect the disks to the new computer, click Rescan Disks on the Action menu in the Disk Management MMC utility. When you physically connect a new Dynamic disk, it is displayed in the Disk Management MMC utility as Dynamic/Foreign.

"Importing" Foreign disks:

If you move one Disk Group to another computer that contains its own Disk Group, the Disk Group you moved is marked as Foreign until you manually import it into the existing group.

To use Foreign/Dynamic disks, use the "Import Foreign Disks" operation associated with one of the disks. This operation lists one or more Disk Groups, identified by the name of the computer where they were created. If you expand the details for a Disk Group, it lists the locally connected disks that are its members. Click the appropriate Disk Group, and then click OK. A dialog box then lists the volumes that were found in the Disk Group, along with an indication of the status of those volumes.

Because volumes can span multiple disks, by using simple disk spanning, striping, mirroring, or RAID-5 redundancy mechanisms, the status displayed for a volume in the Import Foreign Disks dialog box can become complicated if some of the disks have not been moved. Another complication arises when you move one disk, and then move additional disks at a later time. This is supported, but can be complicated. For example, if one active mirror of a volume is moved from one system to another, and the other mirror is moved later, one mirror appears to be up-to-date on one system, and the other mirror appears to be up-to-date on the other system. When the two mirrors are put together on the same system, they both appear up-to-date, but they may have different contents. LDM handles this particular situation by using the mirror that was moved first.
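
The "mirror that was moved first wins" rule can be modeled as follows. This Python fragment is purely illustrative and uses hypothetical plex records, not LDM's actual on-disk structures.

    def resolve_duplicate_plexes(imported_first, imported_later):
        # Both plexes may claim to be up-to-date, but their contents can differ
        # because each computer may have written to its own copy after the split.
        if imported_first["up_to_date"] and imported_later["up_to_date"]:
            return imported_first          # keep the plex that arrived first
        # Otherwise prefer whichever plex is still marked up-to-date.
        return imported_first if imported_first["up_to_date"] else imported_later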


NOTE: Given the complexity of the issues surrounding partial moves, it is recommended that you move all the disks at the same time.

The operation of Import Foreign Disks differs slightly, depending on whether there are pre-existing online Dynamic disks on the target computer. If there are no pre-existing online Dynamic disks, the Disk Group is brought online directly as it is, except that any unmoved volumes are deleted, along with any unmoved disks that have no volumes defined. If only some disks of a volume are moved, the remaining disks become Missing disks. The Disk Group retains the same identity that it had before. If there are pre-existing online Dynamic disks, the configuration information is read from those disks, and the moved configuration data (with unrelated information removed, as in the case with no pre-existing disks) is merged into the existing online Disk Group. The moved disks then become members of the existing Disk Group instead of members of their original Disk Group.
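
The two import paths can be sketched as follows. This is a simplified Python model under assumed data structures (disk and volume records with moved flags); it is not the actual LDM implementation.

    def import_foreign_group(foreign_group, local_group=None):
        # A volume none of whose disks were moved is deleted from the configuration;
        # partially moved volumes survive and their absent members become Missing disks.
        foreign_group["volumes"] = [v for v in foreign_group["volumes"] if v["moved_disks"] > 0]
        # An unmoved disk that carries no volumes is also dropped.
        foreign_group["disks"] = [d for d in foreign_group["disks"]
                                  if d["moved"] or d["has_volumes"]]
        if local_group is None:
            return foreign_group            # no online Dynamic disks: keep the old identity
        # Otherwise merge the pruned configuration into the existing online Disk Group.
        local_group["disks"].extend(foreign_group["disks"])
        local_group["volumes"].extend(foreign_group["volumes"])
        return local_group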

States of volumes after an Import:

The state of a volume after import depends on whether the volume is simple, mirrored, RAID-5, or spans disks in some way (simple striping behaves like spanning in this respect). It also depends on whether the volume is moved in its entirety or only partially, and on whether part of a volume is moved in one step and the rest in a later step. The state can also depend on changes that are made to the configuration of a partially moved volume on either the original computer or the new computer.

  • When all disks that contain parts of a volume are moved from one computer to another, all at the same time, the state of a volume after the import should be identical to the original state of the volume. All simple volumes on any moved disks will be recovered to their original state.
  • With a non-redundant volume that spans multiple disks, if some, but not all disks are moved from one system to another, the volume will be disabled on import (it will also become disabled on the original system). As long as the volume is not deleted on either the original or the target system, the remaining disks can be moved later. When all disks are finally moved over, the volume will be recovered to its original state.
  • In an alternative case, start by moving part of a non-redundant volume from one computer to another, and then delete the volume on the original or the target computer. If the space used by the deleted volume is reused by a new volume, when the remaining disks are moved over, the volume is deleted. If the space used by the deleted volume remains free (or the space is reused by a volume, and that new volume is then deleted, making the space free again), then the volume is put back on that free space (after you move the remaining disks). However, LDM cannot distinguish between the case where the space was reused and then freed again (which means that the original volume's data has probably been changed), and the case where the space was not reused (which means that the original volume data is still intact). To signal this, LDM leaves the volume in a Failed state. To restart the volume, use "Reactivate Volume" on the volume's menu.
  • RAID-5 behaves in a manner similar to non-redundant volumes, except that the volume may become online on the new system after you move all but one disk, or may remain online on the original system after you move just one disk. Whether it remains online depends on whether the parity is known to be valid. Parity starts out as invalid when a RAID-5 volume is first created, because the parity blocks must be computed, which takes some time. Parity is also marked as invalid after a system crash, because an in-progress write (at the time of the crash) may leave a discrepancy between parity blocks and the corresponding data blocks. If the parity of a RAID-5 volume is valid, then one disk can be missing and the RAID-5 volume will still become (or remain) online. If parity is not valid, then all parts of the RAID-5 volume must be available for the volume to become (or to remain) online. (A short sketch of how valid parity allows one missing member follows this list.)
  • If all but one disk of a RAID-5 volume is moved from one system to another, and the space on the remaining disk (on the original system) is then reused for a new volume, the RAID-5 volume is retained, but a new, special Missing disk (which corresponds to no physical disk) is created to "store" the region that has now been orphaned.
  • The state of a partially moved mirrored volume depends on the state of the original mirror. Mirrors are listed in the LDM configuration as either up-to-date or out-of-date. If a mirror that is marked up-to-date is moved, the volume comes up online automatically. If a mirror that is marked out-of-date is moved, the volume comes up in the Failed state (though it can be started by using Reactivate Volume).
  • If both mirrors of a volume start as up-to-date, and one is moved, then the moved mirror becomes marked out-of-date on the original computer, and the unmoved mirror becomes marked out-of-date on the target computer. At that point, if the second mirror is then moved to the target computer, both mirrors are listed as up-to-date even though they may be different. Different file updates may have occurred on each computer. In that case, the target system favors the mirror that it already has and overwrites the more recently added mirror with the contents of the mirror that was moved first.
  • If an out-of-date mirror is moved from one computer to another, and later an up-to-date mirror of the same volume is moved, then the volume will be online automatically.
  • If one up-to-date mirror is moved first, the mirror on the resulting Missing disk (for the non-moved mirror) can be removed and reallocated to another disk. This leaves a fully mirrored volume on the target computer. In this case, if the second original mirror is moved over, it conflicts in a way that cannot be resolved readily. When this happens, the second mirror comes over as a new volume.
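
The parity rule mentioned in the RAID-5 item above can be shown with a few lines of Python. This is a generic illustration of XOR parity, not LDM code: the parity block of a stripe is the XOR of its data blocks, so any one missing block can be rebuilt from the others, but only if that parity was actually computed and is consistent.

    def parity_block(blocks):
        # XOR all blocks in the stripe together, byte by byte.
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                parity[i] ^= b
        return bytes(parity)

    def rebuild_missing(surviving_blocks, parity):
        # XOR of the surviving data blocks with the parity yields the missing block.
        return parity_block(list(surviving_blocks) + [parity])

    stripe = [b"\x01\x02", b"\x0f\x0f", b"\xaa\x55"]
    p = parity_block(stripe)
    assert rebuild_missing([stripe[0], stripe[2]], p) == stripe[1]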



WARNING: Use caution when removing and then moving disks with volume mirrors.

Consider two disks that have mirrors of a volume. If you remove one disk from a computer, the mirror on that disk becomes marked as out-of-date. However, the configuration, which is stored on that disk, cannot be updated, so the copy of the configuration stored on that disk still lists that mirror as up-to-date. Then remove the second disk. At that point, you have two removed disks: one lists both mirrors as up-to-date; the other lists its mirror as up-to-date and the mirror on the other disk as out-of-date. However, the disk that lists the other mirror as out-of-date was updated more recently.

Regardless of whether you add the first disk or the second disk to the target computer first (followed by the other disk), or add both disks at the same time, one of the mirrors is considered out-of-date on the target system. Consequently, the volume is not redundant until you perform a recovery operation. This recovery operation copies all blocks from the mirror that is up-to-date to the mirror that is out-of-date. This can be quite expensive (for a 10-GB volume, it copies 10 GB between disks). The reason that the recovery is needed, even when both disks are moved over at the same time, is that the most recently updated configuration copy (the one that lists the other mirror as out-of-date) is favored over a less recently updated configuration copy.
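
To give a feel for the cost, the following back-of-the-envelope Python calculation assumes an arbitrary sustained copy rate; the actual rate depends on the disks and the bus.

    volume_gb = 10          # size of the mirrored volume, as in the example above
    rate_mb_per_s = 40      # assumed sustained copy rate between the two disks
    seconds = volume_gb * 1024 / rate_mb_per_s
    print(f"Full resync of {volume_gb} GB at {rate_mb_per_s} MB/s takes about {seconds / 60:.0f} minutes")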

It is better to remove all disks at the same time, and to add all disks at the same time. With SCSI disks, this is fairly easy: stop using the disks, and then defer the "Rescan Disks" request until after all disks are removed. When you add the disks to the new computer, again defer the "Rescan Disks" request until all disks are physically inserted. With PCMCIA disks, or other disks that trigger direct operating-system recognition of removals, this can be more difficult. When you pull a disk, LDM is signaled and processes the removal. It is difficult to remove all disks at exactly the same time. However, there is some delay in LDM's processing, so if you remove the disks quickly (within a few seconds), there should not be a problem.

With any kind of disk, the safest way to move them is to power off the original system before removing the disks, and to then power off the target system before adding the disks.


Advanced reading: Disk Group configuration copies

The complete Disk Group configuration is replicated on each member disk. This configuration data is stored in configuration copies. These copies take up the bulk of the 1-MB space that LDM reserves for its use on each disk. This amount of space is required so that a copy can hold configuration data for a large number of Dynamic disks and volumes.

Every update to the configuration of a Disk Group is written to the configuration copies of all online disks in the disk group. If the system crashes during an update, and only some copies were written, then a best copy is chosen based on which copy appears to have the most recent update. Any copies that differ from that best copy are updated with the most recent configuration data.
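
The "best copy" selection can be sketched as follows. The field names here are hypothetical; LDM's real records differ, but the principle is the same: pick the copy with the highest update sequence, then bring every other usable online copy up to that level.

    def reconcile_config_copies(copies):
        usable = [c for c in copies if c["online"] and not c["failed"]]
        best = max(usable, key=lambda c: c["update_seq"])
        for c in usable:
            if c["update_seq"] < best["update_seq"]:
                c["data"] = best["data"]            # rewrite the stale copy
                c["update_seq"] = best["update_seq"]
        return best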

It is possible for a section of configuration data to become unusable. For example, a bad sector can yield a write error if the revector table of the disk is full. In such a case, that configuration copy becomes "failed" and updates to that copy stop. As long as there are other non-failed copies on other online Dynamic disks, this does not present a significant problem, because the copies are stored identically on each disk, and the other configuration copies can take the place of the failed copy.

This does mean, however, that the configuration copy of a single disk should not be totally trusted. For example, a transient error can cause write errors to a configuration copy. At that point, LDM will stop updating the copy, but since the error is transient, a later attempt to read the configuration copy will not necessarily encounter an error. For example, when a single disk is moved from one system to another, the target system might read an out-of-date configuration copy that does not reflect the state of the volumes on that disk.

Such cases of out-of-date configuration copies are very rare, but they are possible. This is another reason why it is a good idea to move all disks at the same time: LDM then chooses the most up-to-date of a set of configuration copies, rather than presuming the validity of a single copy.

A more likely problem is that a bad sector of a configuration copy is persistent, or is first encountered on a read when no revector data is available. In that case, LDM encounters an error when it tries to read the configuration copy. As long as there is a valid configuration copy on another Dynamic disk from the same Disk Group at the same update level as the disk with the bad copy, everything runs without error.


Additional query words: ldm veritas

Keywords: kbenv kbfaq kbinfo KB222189