Next-Generation Secure Computing Base

The Next-Generation Secure Computing Base (codenamed Palladium)[1] is a hardware and software[2][3] architecture originally slated to be included in the Microsoft Windows "Longhorn" operating system. Development of the architecture began in 1997.[4][5]

The NGSCB was the result of years of research within Microsoft to create a secure computing solution that equaled the security of more closed systems while preserving the openness and flexibility of the Windows platform.[6] The NGSCB relied on new software components and specially designed hardware to create a new execution environment where more sensitive operations could be performed securely.[7] Microsoft's primary stated objective with the NGSCB was to "protect software from software."[6][2][8]

History

The idea of creating an architecture where software components can be loaded in a known and protected state predates the development of NGSCB.[9] A number of attempts were made in the 1960s and 1970s to produce secure computing systems,[10][11] with variations of the idea emerging in more recent decades.[12][13]

In 1999, the Trusted Computing Platform Alliance (TCPA), a consortium of various technology companies, was formed in an effort to promote trust in the PC platform.[14] The TCPA released several detailed specifications for a trusted computing platform, with a focus on features such as code validation and encryption based on integrity measurements, hardware-based key storage, and attestation to remote entities. These features required a new hardware component designed by the TCPA called the Trusted Platform Module (referred to as a Security Support Component,[15] Secure Cryptographic Processor,[6] or Security Support Processor[6] in earlier Microsoft documentation). While most of these features would later serve as the foundation for Microsoft's NGSCB architecture, they differed in terms of implementation.[4] The TCPA was superseded by the Trusted Computing Group in 2003.[16]

Development

Development of the NGSCB began in 1997 after Microsoft developer Peter Biddle conceived of new ways to protect content on personal computers.[7]

Microsoft later filed a number of patents related to elements of the NGSCB design.[17] Patents for a digital rights management operating system,[9] loading and identifying a digital rights management operating system,[18] key-based secure storage,[19] and certificate based access control[20] were filed on January 8, 1999. A method to authenticate an operating system based on its central processing unit was filed on March 10, 1999.[21] Patents related to the secure execution of code[22] and protection of code in memory[23] were filed on April 6, 1999.

During its Windows Hardware Engineering Conference of 2000, Microsoft showed a presentation titled Privacy, Security, and Content in Windows Platforms which focused on the protection of end user privacy and intellectual property.[24] The presentation mentioned turning Windows into a "platform of trust" designed to protect the privacy of individual users.[24] Microsoft made a similar presentation during WinHEC 2001.[25]

The NGSCB was publicly unveiled under the name "Palladium" on 24 June 2002 in an article by Steven Levy of Newsweek that focused on its origin, design and features.[26][27][28] Levy stated that the technology would allow users to identify and authenticate themselves, encrypt data to protect it from unauthorized access, and allow users to enforce policies related to the use of their information. As examples of policies that could be enforced, Levy stated that users could send e-mail messages accessible only by the intended recipient, or create Microsoft Word documents that could only be read a week after they were created. To provide this functionality, the technology would require specially designed hardware components, including updated processors, chipsets, peripherals, and a Trusted Platform Module.[26] In August 2002, Microsoft posted a recruitment advertisement seeking a group program manager to provide vision and industry leadership in the development of several Microsoft technologies, including its NGSCB architecture.[29]

Encrypted memory was once considered for the NGSCB, but the idea was later discarded because the only threat deemed serious enough to warrant its inclusion was the circumvention of digital rights management technology.[30][31]

In 2003, Microsoft publicly demonstrated the NGSCB for the first time at its Windows Hardware Engineering Conference[32][33][34] and released a developer preview of the technology later that year during its Professional Developers Conference.[35][36][37] (See In builds of Windows "Longhorn".)

At PDC 2003, Microsoft announced that NGSCB would ship as part of "Longhorn", and that betas and other releases would be synchronized with and delivered alongside "Longhorn". Version 1 of NGSCB would have focused on enterprise applications, with example opportunities including document signing, secure instant messaging, internal applications for viewing secure data, and a secure e-mail plug-in.[3]

Timeline of NGSCB by the time of WinHEC 2004 (Source: Peter N. Biddle, Microsoft)[38]

During WinHEC 2004, Microsoft announced that it would revise the technology in response to feedback from customers and independent software vendors who stated that they did not want to rewrite their existing programs in order to benefit from its functionality.[39][40] After the announcement, some reports stated that Microsoft would cease development of the technology.[41][42] (Notably, it was reported that the NGSCB code would not be updated in the Longhorn developer preview due out at WinHEC 2004; see In builds of Windows "Longhorn".[42]) Microsoft denied the claims and reaffirmed its commitment to delivering the technology.[43][44] Later that year, Microsoft's Steve Heil stated that the company would make additional changes to the technology based on feedback from the industry.[45]

In 2005, Microsoft's lack of continued updates on its progress with the technology led some in the industry to speculate that it had been cancelled.[46] At the annual Microsoft Management Summit event, then Microsoft CEO Steve Ballmer said that the company was building on the foundation it had started with the NGSCB to create a new set of hypervisor technologies for its Windows operating system.[47] During WinHEC 2005, Microsoft announced that it had scaled back its plans for the technology in order to ship the post-reset Windows "Longhorn" operating system within a reasonable timeframe. Instead of providing an isolated software environment, the NGSCB would offer full operating system volume encryption with a feature known as Secure Startup (later renamed BitLocker Drive Encryption).[48] Microsoft stated that it planned to deliver other aspects of its NGSCB architecture at a later date.[49]

In July 2008, Peter Biddle stated that negative perception was the main contributing factor responsible for the cancellation of the architecture.[50]

Name

In Greek and Roman mythology, the term "palladium" refers to an object on which the safety of a city or nation was believed to depend.[51]

On 24 January 2003, Microsoft announced that "Palladium" had been renamed the "Next-Generation Secure Computing Base." According to NGSCB product manager Mario Juarez, the new name was chosen not only to reflect Microsoft's commitment to the technology in the upcoming decade, but also to avoid any legal conflict with an unnamed company that had already acquired the rights to the Palladium name. Juarez acknowledged that the previous name had been a source of criticism, but denied that the decision was made by Microsoft in an attempt to deflect criticism.[1]

Reception

Architecture

Note: this section discusses NGSCB before WinHEC 2004.

NGSCB essentially would have divided the computing environment into two separate and distinct operating modes.[52] Thus, NGSCB would have been composed of two parts: the traditional "left-hand side" (LHS) and the "right-hand side" (RHS) security system. The LHS and RHS would have been a logical, but physically enforced, division or partitioning of the computer.[53]

The LHS would have been composed of traditional applications such as Microsoft Office,[53] along with a conventional operating system, such as Windows.[53][52] Drivers, viruses, and, with minor exceptions, all other software would also have run on the LHS. However, the new hardware memory controller would not have allowed certain "bad" behaviors, such as code that copied all of memory from one location to another, or that put the CPU into real mode.[2][3] Another term for the LHS is standard mode.[52]

Meanwhile, the RHS would have worked in conjunction with the LHS system and the central processing unit (CPU). With NGSCB, applications would have run in a protected memory space that is highly resistant to software tampering and interference.[53] The RHS[53] or nexus mode[52] would have been composed of a “nexus” and trusted agents,[53] called Nexus Computing Agents.[52]

Hardware would have created this secure space.[2] Creating a nexus thus required modification of the CPU, the memory controller or chipset, the keyboard, and the graphics adapter, as well as the addition of a new component called the trusted platform module (TPM). The TPM would be permanently attached to the motherboard and could not be removed. However, the TPM would have shipped with its functionality disabled, making NGSCB an opt-in system. Users could also independently choose to disable all TPM functionality, effectively disabling NGSCB.[54] (See Trusted Platform Module for more information.)

The RHS would also comprise a security support component that would have used a public key infrastructure key pair along with encryption functions to provide a secure state.[53] Other terms for the RHS are the nexus mode or the isolated execution space, in which the nexus and NCAs would have executed.[52]

Typically, there would have been one chipset in the computer that both the LHS and RHS would have used.[52] The LHS and RHS would have also shared hardware resources, including the CPU, RAM, and some I/O devices.[55] The RHS was required not to rely on the LHS for security: even if adversarial code were present on the LHS, NGSCB was required not to leak secrets. However, the RHS was required to rely on the LHS for stability and services; NGSCB would not have run without LHS cooperation.[2][3] NGSCB needed the following from the LHS:

What NGSCB Needs From The LHS

  • Basic OS services - scheduler
  • Device Driver work for Trusted Input / Video
  • Memory Management additions to allow nexus to participate in memory pressure and paging decisions
  • User mode debugger additions to allow debugging of agents (explained later)
  • Window Manager coordination
  • Nexus Manager Device driver (nexusmgr.sys)
  • NGSCB management software and services

— Brandon Baker, A Technical Introduction to NGSCB, [2]

NGSCB would not have changed the device driver model, instead securely reusing LHS driver stacks whenever possible (i.e., an RHS encrypted channel passing through the unprotected LHS). NGSCB would have needed very minimal access to real hardware. Every line of privileged code was considered a potential security risk; therefore, there would have been no third-party code or kernel-mode plug-ins.[2]

The nexus would have halted and exited upon receiving an authorized request to stop from the standard side,[54] or LHS. Whenever a nexus halted, whether through this process or because of a system exception, all nexus and NCA memory would have been cleared.[54]

Nexus

Diagram of the Nexus design.

The nexus, previously referred to as the "Nub"[4] or "Trusted Operating Root",[57][58] would have hosted, protected, and controlled NCAs.[52] It would have provided NCAs with security services so that they could provide users with trustworthy computing.[2] The nexus would have isolated trusted agents, managed communications to and from trusted agents, and cryptographically sealed stored data (e.g., data stored on a hard disk drive). More particularly, the nexus would have executed in kernel mode in trusted space (see Strong process isolation) and provided basic services to trusted agents, such as an interprocess communication (IPC) mechanism for communicating with trusted agents and other applications,[53] memory mapping, and thread management. The IPC mechanism would have provided communication channels among NCAs and between NCAs and untrusted programs operating on the same computer or on different computers.[54]

The nexus would have also provided special trust services, such as attestation of a hardware/software platform or execution environment and the sealing and unsealing of secrets.[53] The nexus would have stored one or more secrets (private keys and symmetric keys) that it would only provide to the cryptographically identified NCA running on a specific hardware platform.[54] Simply stated, the nexus would have offered services to store cryptographic keys and to encrypt and decrypt information,[56] and it would have identified[56] and cryptographically[54] authenticated NCAs.[56]

The nexus was also intended to control access to trusted applications and resources using a security reference monitor, a part of the nexus security kernel, and to manage all essential NGSCB services, including memory management, exclusive access to device memory and secure input and output, and access to any non-NGSCB system services.[56]

The nexus was variously described as a kernel,[2][55][56] as kernel-like,[2] as a "high assurance" operating system,[53] as not a complete operating system,[56] as an operating system component, or as a secure system component.[52]

The nexus was required to be small so that every NCA owner could, in principle, examine and trust the implementation of the nexus. To keep the nexus small, it was required to meet the following design criteria:

  • Contain code that could be duplicated among NCAs;
  • Multiplex its time among NCAs;
  • Use the hardware sealed storage and attestation functions to store keys on behalf of NCAs, and attest to the combined nexus-NCA software stack more flexibly than would be possible with the hardware primitives (this would have allowed the hardware primitives to be simple and inexpensive, while allowing the NCA primitives to be much more flexible and extensible without sacrificing security); and
  • Provide (at each NCA's request) standard management tasks, such as key migration between nexus versions and NCA versions.[54]

In order to minimize its size, the nexus would have only implemented the operating system services, such as access control and memory isolation, that would have been necessary to preserve its integrity. Beyond these components, the nexus and its trusted applications would have relied on services provided by the main operating system, such as the physical storage of data,[56] opening communication channels to identified standard processes, or performing I/O operations that are not trusted, such as hard-disk access.[54] Typically, the nexus would have cryptographically protected this data before exposing it to the main operating system.[56]

The nexus could have been booted at any time, shut down when not needed, and restarted later.[2][3] Nexus startup would have been atomic and protected, beginning in a controlled initial state.[2] The nexus would have been authenticated during computer startup. Once authenticated, the nexus would have created the protected operating environment within Windows. Programs could then request that the nexus perform trusted services such as starting an NCA.[56]

The nexus could have provided encryption technology to authenticate and protect data that would be entered, stored, communicated, or displayed on the computer and to help ensure that the data would not be accessed by other applications and hardware devices.[55][56] (For more detail, see Strong process isolation.)

The nexus would have provided a limited set of application programming interfaces (APIs) and services for trusted applications, including sealed storage and attestation functions. The set of nexus system components was chosen to guarantee the confidentiality and integrity of the nexus data, even when the nexus encountered malicious behavior from the main operating system.[56]

NGSCB would have allowed a PC to run one nexus at a time.[52] The hardware would have loaded any nexus, but only one at a time, and each nexus would have received the same services. The hardware would have kept each nexus's secrets separate, and nothing about the architecture would have prevented any particular nexus from running.[2]

NGSCB would have enforced policy but would not have set it. The owner could therefore have controlled which nexuses were allowed to run.[2]

On the software side, Microsoft would have built a nexus designed to complement Windows, and expected other developers and vendors to build nexuses of their own.[52] The Microsoft nexus would have run any agent; the platform owner could have set policy to limit this, or could have picked some other delegated evaluator if they chose.[2]

Microsoft pledged to make the source code of the nexus available for public review,[54] so that it could be evaluated and validated by third parties for both security and privacy considerations.[59]

Users would have had the independent choice to identify which nexuses could run.[54] This meant they could run any nexus, or write their own and run it, on the hardware. That nexus could only report the attestation provided by the TPM; as Baker put it, "The TPM won't lie". The nexus would not have been able to pretend to be another nexus; other systems would have needed to decide whether they trusted the new, derived nexus, and users would only have needed to prove to others that their derivative was legitimate.[2] Users could also have independently chosen to specify which nexuses could access the public key certificate for the TPM, which nexuses would have access to PKSeal and PKUnSeal (thereby enabling the Attestation function in NGSCB), and which nexuses were authorized to change the foregoing selections. These choices would have been made through a secure user interface presented by the computer early in the startup sequence, not through standard software, which could be subject to a software attack.[54]

The nexus might have permitted all applications to run, or a machine owner might have configured a machine policy in which the nexus permitted only certain agents to run. In other words, the nexus would have run any agent that the machine owner told it to run; the machine owner might also have told the nexus what not to run.[56] Alternatively stated, users could have run any agent, or written their own and run it on the nexus. That agent could report the attestation provided by the nexus; "The nexus won't lie," according to Baker. The agent could not pretend to be another agent; other systems would have needed to decide whether they trusted the new, derived agent, and users would only have needed to prove to others that their derivative was legitimate.[2]

Nexus Computing Agents

Nexus Computing Agents (NCAs),[52] or trusted agents,[53] would have been application processes strictly managed by the nexus. They would have consisted of user-mode code executing within the isolated execution space (nexus mode).[52] They were trusted software components, hosted by the nexus, that would have run in the protected operating environment. NCAs would have been used to process data and transactions in curtained memory.[55][56]

An NCA could have been an application in and of itself, or part of an application that would have also run in the standard Windows environment.[52] In other words, an NCA or trusted agent could have been a program, a part of a program, or a service that runs in user mode in trusted space.[53] Each NCA would have had access only to the memory allocated to it by the nexus. This memory would not have been shared with other processes on the system, unless explicitly allowed by the NCA.[52]

The NCA would have been represented by a signed[3] extensible markup language (XML) document called a manifest.[54] The manifest would have identified the program hash either by naming it directly in the manifest or by naming a public key; naming a public key would have meant, informally, "any cryptographic hash signed by the corresponding private key has the same software identity." In addition, the manifest would have defined the code modules that could be loaded for each NCA, associated human-readable names with the program, noted version numbers, and expressed program-debugging policy[54] (i.e., whether the NCA was debuggable). Debugging an agent really meant debugging via the LHS shadow process.[3]

The manifest would have also provided information about an application that a machine user would have used to determine if the app should run, and defined agent components, agent properties (e.g. system requirements and descriptive properties such as the version number), and agent policy requests. The machine owner's policy might, however, have overridden the policy requests in the manifest.[3]

NCAs would have been monolithic, with no DLLs; code could have been shared using statically linked libraries. The composition of NCAs would have been based on IPC, which was blocking and message-oriented. Both NCAs and LHS processes could have used IPC: NCAs to communicate with other NCAs, and LHS applications to communicate with the NCAs they started. Access to IPC channels would have been controlled by policy.[3]
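
As a rough illustration of this blocking, message-oriented IPC model, the following C++ sketch shows an LHS application sending a request to an NCA over a policy-gated channel. All names here (IpcPolicy, Channel) are hypothetical stand-ins; the real NGSCB interfaces were only previewed in the PDC 2003 SDK and were never finalized.

```cpp
// Hypothetical sketch of NGSCB-style IPC: blocking, message-oriented,
// with access gated by machine-owner policy. Not a real NGSCB API.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>

struct IpcPolicy {              // machine-owner policy gating channel access
    bool lhsMayConnect = true;  // e.g., "LHS apps may talk to this NCA"
};

class Channel {                 // one blocking, message-oriented channel
    std::queue<std::string> msgs_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void Send(const std::string& msg) {
        std::lock_guard<std::mutex> lk(m_);
        msgs_.push(msg);
        cv_.notify_one();
    }
    std::string Receive() {     // blocks until a message arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !msgs_.empty(); });
        std::string msg = msgs_.front();
        msgs_.pop();
        return msg;
    }
};

int main() {
    IpcPolicy policy;           // in NGSCB, set by the machine owner
    Channel toAgent;
    if (policy.lhsMayConnect) { // policy check before the LHS may use the channel
        toAgent.Send("sign-document request from an LHS word processor");
        std::cout << "NCA received: " << toAgent.Receive() << "\n";
    }
}
```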

An NCA or trusted agent would have called,[53] or could have made requests to,[56] the nexus for security-related services and critical general services[53] or essential NGSCB services,[56] such as memory management.[53] An NCA or trusted agent would have been able to store secrets using sealed storage and to authenticate itself using the attestation services of the nexus. Each trusted agent or entity would have controlled its own domain of trust, and they need not have relied on each other.[53] Alternatively stated, each NCA would have controlled its own trust relationships, and NCAs would not have needed to trust or rely on each other.[56]

Each NCA was required to operate without forcing the computer to restart, shutting down running programs, or causing compatibility problems for existing programs. This would have allowed NCAs to run without restricting the other software that was currently running, or could potentially run, on a computer.[54]

The nexus would have determined the unique code identity of each NCA, which would have enabled the NCA to be specifically designated as trusted or not trusted by multiple entities, such as the user, IT department, a merchant, or a vendor. The mechanism for identifying code in NGSCB would have been unambiguous and policy-independent.[56]

NCAs could have been written in C or C++, using any compiler. Agents could have been instantiated from managed or unmanaged code. An RHS CLR was planned, which would have allowed agents to be written in any .NET language.[3]

NCAs were divided into three categories: "Application," "Component," and "Trusted Service Provider."[60] Application agents were stand-alone applications, well suited to clients in multi-tier applications such as an online banking client; the entire application would have run on the RHS. Component agents were components of a larger application: most of the application would have run on the LHS, but agents would have been used for specific trusted operations. Component agents were suitable for adding trusted features to existing Windows applications, such as a document signing component of a word processor.[3] Trusted Service Provider NCAs would have run entirely in nexus mode and provided services to other NCAs; one example would have been the Trusted UI Engine, which would have rendered and managed the UI of an NCA and alerted it when events occurred on active UI elements.[60]

Features

Note: this section discusses NGSCB before WinHEC 2004.

Developers could have used four main capabilities to protect data against software attacks on NGSCB systems: strong process isolation, sealed storage, secure paths to and from the user,[52] also called secure input and output,[53] and attestation.[52] All NGSCB-enabled application capabilities would have been built on these four key features[2][3] or pillars.[3] The first three would have been needed to protect against malicious code,[2][3] while attestation would have broken new ground in distributed computing.[3]

Strong process isolation

"Typical NGSCB Configuration" (Source: Microsoft)[56]. Note how the diagram corresponds with the earlier diagrams. Note also that attacks only affect the LHS, not the RHS.

Strong process isolation would have been a mechanism for protecting data in memory. It would have been created and maintained by the nexus and enforced by NGSCB hardware[52][3] and software.[2][3] Hardware would have notified the nexus of certain operations, and the nexus would have arbitrated page tables, control registers, and other sensitive structures.[2][3]

Agents and the nexus would have run in curtained memory, inaccessible to other agents, to the standard Windows kernel,[2][3] and to hardware direct memory access (DMA) devices,[2][3][56] which would have been blocked using a special structure called the DMA exclusion vector.[56]
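
As a purely illustrative sketch, the following C++ fragment shows one way such an exclusion vector could work: a bitmap with one entry per physical page that the chipset would consult before permitting a DMA transfer. The per-page layout and all names are assumptions, since the final hardware interface was never published.

```cpp
// Hypothetical model of a DMA exclusion vector: the memory controller
// refuses DMA to any physical page whose bit is set by the nexus.
#include <cstdint>
#include <iostream>
#include <vector>

constexpr std::size_t kPageSize = 4096;

class DmaExclusionVector {
    std::vector<bool> excluded_;                 // one bit per page frame
public:
    explicit DmaExclusionVector(std::size_t numPages) : excluded_(numPages) {}
    void Curtain(std::size_t page) { excluded_[page] = true; }  // nexus/NCA page
    bool DmaAllowed(std::uint64_t physAddr) const {
        return !excluded_[physAddr / kPageSize]; // checked on every transfer
    }
};

int main() {
    DmaExclusionVector dev(1024);                // models 4 MiB of physical memory
    dev.Curtain(42);                             // page holding nexus secrets
    std::cout << dev.DmaAllowed(42 * kPageSize) << "\n";  // 0: transfer blocked
    std::cout << dev.DmaAllowed(43 * kPageSize) << "\n";  // 1: transfer permitted
}
```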

Strong process isolation would have provided an execution and memory space,[52] a trusted space, by carving out a secure area (the RHS).[53] This space would have been a specific portion of RAM within the address space.[56] It would have been protected from external access and software-based attacks (even those launched from the kernel),[52] and would have provided a restricted and protected address space for applications and services with higher security requirements.[56] This curtained memory would have guaranteed that no information leaked to the world outside of the nexus, and would have permitted only certain certified applications to execute under the nexus and access the curtained memory.[53]

Operations that run on the RHS would have been protected and isolated from the LHS, which would have made them significantly more secure from attack.[53] In other words, strong process isolation would have prevented rogue applications from changing NGSCB data or code while it was running.[2][3]

Because of strong process isolation and curtained memory, the main operating system would have been largely unaware of the NGSCB system. The nexus could have been started at any time through authenticated startup, which would have enabled hardware and software components to authenticate themselves within the system. After the nexus was authenticated, the nexus and its trusted applications would have been protected in isolated memory and could not have been accessed by the main operating system.[56] The nexus was required not to interact with the main operating system in any way that would allow events happening at the main operating system to compromise the behavior of the nexus.[53] Thus, this protected operating environment would have provided a higher level of secure processing while leaving the rest of the computer's hardware and software unaffected.[56]

In user interface terms, when users ran trusted applications within this curtained memory, all processes would have operated within special "trusted windows." A non-writable banner with a trusted icon and program name would have appeared at the top of each trusted window. The trusted window could not have been covered by windows of programs running in the standard operating system environment, and if more than one trusted window were open on the desktop, they would not have overlapped.[56]

Sealed storage

Sealed storage would have been a mechanism for protecting data in storage.[52] It would have allowed the user to encrypt information[53] with a key rooted in the hardware,[2][3] so that it could only be accessed by a trustworthy application,[53] the designated trusted entity that stored it,[52] or by authenticated entities.[2][3]

NGSCB would have provided sealed data storage by using a special security support component (SSC). The SSC would have provided the nexus with individualized encryption services to manage the cryptographic keys, including the NGSCB public and private key pairs and the Advanced Encryption Standard (AES) key from which keys would have been derived for trusted applications and services.[56] Each nexus would have generated a random keyset on first load. The TPM chip on the motherboard would have protected the nexus keyset.[2][3] An NCA would have used these derived keys for data encryption.[56] Simply put, agents would have used nexus facilities to seal (encrypt and sign) private data. The nexus would have protected the key from any other agent/application, and the hardware would have prevented any other nexus from gaining access to the key.[2][3] File system operations by the standard operating system would have provided the storage services.[56]

The trustworthy application could have included just the application that created the information in the first place, or any application trusted by the application that owned the data. Therefore, sealed storage would have allowed a program to store secrets that could not be retrieved by nontrusted programs, such as a virus or Trojan horse,[53] and would have prevented rogue applications from getting at encrypted data.[2][3] In addition, sealed data could not have been read if another operating system was started or if the hard disk was moved to another computer. NGSCB would have provided mechanisms for backing up data and for migrating secure information to other computers.[56]

Sealed storage would have also verified the integrity of data when unsealing it.[2][3]

Sealed storage would have bound long-lived confidential information to code. Long-lived information referred to confidential information for which the lifetime of the information would have exceeded the lifetime of the process that accesses it. For example, a banking application might need to store confidential banking records for use at a later time, or a browser might need to store user credentials on the hard disk and protect that data from tampering.[56]

Encrypted files would have been useless if stolen or surreptitiously copied.[56]
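
The seal/unseal flow described above can be sketched as follows, assuming a per-agent key derived from a nexus master key and the agent's code identity. The toy XOR cipher and hash below merely stand in for the real AES and RSA primitives, and every name here is hypothetical.

```cpp
// Hypothetical sketch of sealed storage: a per-agent key is derived from
// a nexus master key and the agent's code identity; unsealing checks the
// caller's identity and verifies integrity. Toy crypto, illustration only.
#include <cstdint>
#include <iostream>
#include <string>

static std::uint64_t ToyHash(const std::string& s) {    // stand-in for SHA-1
    std::uint64_t h = 1469598103934665603ull;           // FNV-1a
    for (unsigned char c : s) { h ^= c; h *= 1099511628211ull; }
    return h;
}

struct SealedBlob { std::string cipher; std::uint64_t mac; std::uint64_t owner; };

class Nexus {
    std::uint64_t master_;                              // protected by the TPM
    static std::string XorStream(std::string d, std::uint64_t key) {
        for (std::size_t i = 0; i < d.size(); ++i)
            d[i] ^= char(key >> (8 * (i % 8)));
        return d;
    }
public:
    explicit Nexus(std::uint64_t master) : master_(master) {}
    SealedBlob Seal(std::uint64_t agentId, const std::string& secret) {
        std::uint64_t k = master_ ^ agentId;            // per-agent derived key
        return { XorStream(secret, k), ToyHash(secret), agentId };
    }
    bool Unseal(std::uint64_t agentId, const SealedBlob& b, std::string& out) {
        if (b.owner != agentId) return false;           // wrong code identity
        std::string plain = XorStream(b.cipher, master_ ^ agentId);
        if (ToyHash(plain) != b.mac) return false;      // blob was tampered with
        out = plain;
        return true;
    }
};

int main() {
    Nexus nexus(0x1234);
    std::uint64_t bankAgent = ToyHash("banking-nca");
    SealedBlob blob = nexus.Seal(bankAgent, "account credentials");
    std::string secret;
    std::cout << nexus.Unseal(bankAgent, blob, secret) << " " << secret << "\n";
    std::cout << nexus.Unseal(ToyHash("virus"), blob, secret) << "\n"; // 0: denied
}
```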

Secure paths to and from the user

Secure paths to and from the user would have been mechanisms for protecting data moving from input devices to the NCAs, and from NCAs to the monitor screen.[52] NGSCB would have supported secure input through upgraded keyboards and Universal Serial Bus (USB) devices,[2][55][56][3] allowing a local user at a local keyboard or other device to communicate privately with an NCA[55] or a trusted application.[56] Other protected input devices included mice, and integrated input for laptops.[2][3]

Data entered by the user and presented to the user could not have been read by software such as spyware or "Trojan horses,"[52] programs that could read keystrokes or allow a remote user or program to act as a legitimate local user.[55] Malicious software could not have mimicked or intercepted input, or intercepted, obscured, or altered output;[52] in the keyboard setting, malicious software could not have been used to record, steal, or modify keystrokes.[53] A secure path would have enabled one to be sure of dealing with the real user, not an application spoofing the user.[2][3]

Secure output would have been similar.[53] Graphics adapters were generally optimized for performance rather than security, allowing software to read or write video memory easily and making video very difficult to secure.[56] A secure channel would have existed between the nexus and the graphics adapter,[2][3] so that information appearing onscreen could be presented to the user without anyone else being able to intercept and read it. Taken together, these features would have allowed a user to know with a high degree of confidence that the software on his computer was doing what it was supposed to do.[53] NGSCB dialog boxes could not have been obscured, and they would have carried visual cues allowing users to be certain that a window was not being displayed by the standard side[54] or LHS.

An NGSCB system configured for two-factor authentication (Source: Microsoft).[55] Note again how the diagram corresponds with the two diagrams above.

Two-factor authentication, combining NGSCB-enabled smart cards and biometric devices with the trustworthy computing capabilities of an NGSCB system, could have provided the strongest user authentication at reasonable cost. This combination of NGSCB and authentication input devices would have addressed the human-factor problems of password credentials and the hijacking scenario in biometric authentication.[55] The NGSCB-enabled devices would have been attached to the system, and the user authentication software component would have run as an NCA in the protected operating environment.[55]

NGSCB would have added the following secure input capabilities to two-factor authentication to provide the strongest possible user authentication:

  • Integrity: The NGSCB system verifies that user authentication information was not modified after it was submitted. For example, the system verifies that another entity did not substitute different information.
  • Confidentiality: The NGSCB system maintains the security of the user authentication information by ensuring that no other entity can read the information.
  • Authentication: The NGSCB system verifies that the user authentication information it receives is submitted by secure input devices. No other entity could have sent the information.


— Microsoft, Secure User Authentication for the Next-Generation Secure Computing Base, [55]

Smart cards, biometrics, and other authentication input devices could have been made trustworthy by embedding an input security support component into the device or into the hub to which the device connected. When these devices were plugged into the computer and the NGSCB system was turned on, the system could have determined whether the devices were secure and could have set up a path for the exchange of authentication information between the devices and the user authentication software component (an NCA) running in the computer's protected operating environment.[55]

Attestation

Attestation would have been a mechanism for authenticating a given software and hardware configuration, either locally or remotely.[52] Attestation referred to the ability of a piece of code to digitally sign or otherwise attest to a piece of data and further assure the recipient that the data was constructed by an unforgeable, cryptographically identified software stack.[53][56] It would have enabled a user to verify that they were dealing with an application and machine configuration they trusted.[3]

Attestation would have let other computers know that a computer was really the computer it claimed to be, and was running the software it claimed to be running.[53] It would have been based on secrets rooted in hardware, combined with cryptographic representations (hash vectors) of the nexus and/or software running on the machine. Attestation would have been a core feature for enabling many of the privacy benefits in NGSCB.[52] Because NGSCB software and hardware would have been cryptographically verifiable to the user and to other computers, programs, and services, the system could have verified that other computers and processes were trustworthy before engaging them or sharing information. Thus, when used in conjunction with a certification and licensing infrastructure,[56] attestation would have allowed the user to reveal selected characteristics of the operating environment to external requestors,[53][56] and to prove to remote service providers that the hardware and software stack was legitimate. By authenticating themselves to remote entities, trusted applications could have created, verified, and maintained a security perimeter that would not have required trusted administrators or authorities.[56]

For example, a banking company might have provided NGSCB-capable computers to its high-profile customers to help provide secure remote access and processing for Internet banking transactions containing highly sensitive and valuable information. The banking company would then have built its own NGSCB-trusted application that used a secure network protocol, enabling the customers to communicate with a server application on the company's servers.[56]

When requested by an agent, the nexus could have prepared a chain that authenticated: the agent by digest, signed by the nexus; the nexus by digest, signed by the TPM; and the TPM by public key, signed by the OEM or the IT department. The machine owner would have set policies to control which forms of attestation each agent or group of agents could use. A secure communications agent would have provided higher-level services to agent developers. It would have opened a secure channel to a service using a secure session key, and responded to an attestation challenge from the service based on user policy.[2][3]
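
The chain Baker described can be illustrated with a short verification sketch. The keyed toy hash below stands in for 2048-bit RSA signatures, and all names and message layouts are assumptions for illustration only.

```cpp
// Hypothetical verification of the NGSCB attestation chain: the OEM vouches
// for the TPM, the TPM vouches for the nexus, and the nexus vouches for the
// agent. A verifier trusts only the OEM key a priori. Toy crypto only.
#include <cstdint>
#include <iostream>
#include <string>

static std::uint64_t ToyHash(const std::string& s) {    // stand-in for SHA-1
    std::uint64_t h = 1469598103934665603ull;
    for (unsigned char c : s) { h ^= c; h *= 1099511628211ull; }
    return h;
}
// Toy keyed "signature" over msg; real NGSCB would use RSA.
static std::uint64_t Sign(const std::string& key, const std::string& msg) {
    return ToyHash(key + "|" + msg);
}

bool VerifyChain(const std::string& oemKey,
                 const std::string& tpmKey, std::uint64_t tpmCert,
                 const std::string& nexusDigest, const std::string& nexusKey,
                 std::uint64_t nexusAtt,
                 const std::string& agentDigest, std::uint64_t agentAtt) {
    return tpmCert  == Sign(oemKey, tpmKey)                        // OEM -> TPM
        && nexusAtt == Sign(tpmKey, nexusDigest + "|" + nexusKey)  // TPM -> nexus
        && agentAtt == Sign(nexusKey, agentDigest);                // nexus -> agent
}

int main() {
    std::string oem = "oem-root-key", tpm = "tpm-key";
    std::string nexusDigest = "digest-of-nexus", nexusKey = "nexus-key";
    std::string agentDigest = "digest-of-banking-nca";
    std::uint64_t tpmCert  = Sign(oem, tpm);                          // by OEM
    std::uint64_t nexusAtt = Sign(tpm, nexusDigest + "|" + nexusKey); // by TPM
    std::uint64_t agentAtt = Sign(nexusKey, agentDigest);             // by nexus
    std::cout << VerifyChain(oem, tpm, tpmCert, nexusDigest, nexusKey,
                             nexusAtt, agentDigest, agentAtt) << "\n"; // 1: valid
}
```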

All trust relationships could have been traced back to a "root of trust", and trust relationships are only as strong as their root. For example, if a certification authority gave away all its secrets to untrustworthy entities, a requestor could potentially download a malicious program without knowing that the trust relationship had been compromised. The NGSCB root of trust would have been made as strong as possible by embedding a 2048-bit RSA public and private key pair in the SSC that would have stored the shared secrets. The coprocessor's private key could not have been accessed; it could only have been used to encrypt and decrypt secrets.[56] The computer's RSA private key would have been embedded in hardware and never exposed. If computer-specific secrets were somehow accessed by a sophisticated hardware attack, they would only have applied to data on the compromised computer and could not have been used to develop widely deployable programs that could compromise other computers. In the case of an attack, a compromised computer could have been identified by IT managers, service providers, and other systems, and then excluded from the network.[56]

Neither the nexus nor an agent could have directly determined whether it was running in the secured environment; they would have been required to use remote attestation or sealed storage to cache credentials or secrets proving that the system was sound.[2]

Attestation involved secure authentication of subjects (e.g., software, machines, services)[2][52][3] through code ID.[2][52] This would have been separate from user authentication.[2][52][3]

WinHEC 2004

Biddle announced during his presentation at WinHEC 2004:

The original plans required users to update both their hardware and software.

But Peter Biddle, product unit manager for Microsoft's security business, told delegates at the WinHEC show: "Customers have told us that they require the benefits 'out of the box', without having to write or rewrite applications."

As a result, NGSCB will not shield individual applications but will create 'secure compartments'. The operating system will contain compartments for elements such as the actual operating system, computing tasks and administration and management.

— Tom Sanders, Microsoft shakes up Longhorn security, [40]

Security would still have come first. The same nexus would have supported both Windows client and server, and "Longhorn" would have run with or without NGSCB. NGSCB would have had more direct support for Windows (e.g., "Cornerstone") and would have been more closely aligned with Windows components (e.g., compartments). Isolation would have been provided per compartment, rather than per process.[38]

System services would have been provided to the operating system in the system-only compartment, called through an IPC mechanism. The same services described in Features would still have been provided, namely isolation, sealed storage, trusted path, and attestation. TPM 1.2 would have been used to root sealed storage.[38]

The nexus would have managed sufficient hardware to provide useful isolation, including the CPU, memory, and TPM (crypto processor). The secure compartment would have managed secure I/O. The primary operating system would have managed all other hardware.[38]

"Cornerstone" would have prevented a thief who booted another operating system or ran a hacking tool from breaking core Windows protections, and provided a root key which could be used by third-parties to protect their secrets against the same attack. User login and authentication would have been done in a secure compartment. Meanwhile, under Code Integrity Rooting (CIR), boot and system files would have been validated and their integrity checked prior to the release of the SYSKEY into the legacy operating system.[38]

The Integrated Security Support Component intended for secure input could have protected against both software and hardware attacks, but it required new keyboards, new mice, and user retraining, among other costs, which would have been out of proportion to the problems most users would face. It was scrapped in favor of Intel's Trusted Mobile Keyboard Controller for mobile devices, along with work to get the necessary changes for USB into chipsets without requiring new USB devices or new drivers.[61]

Trusted Platform Module

The Trusted Platform Module (TPM) is a cryptographic co-processor specified by the Trusted Computing Group. It contains cryptographic keys, performs basic cryptographic services,[52][62] and stores cryptographic hashes,[52] or platform measurements. The TPM anchors the chain of trust for keys, digital certificates, and other credentials.[62]

The public-private key pair created as part of the TPM's manufacturing process is called the endorsement key (EK).[52] The EK, a 2048-bit RSA key pair, is unique, generated only once, and is the root key for establishing the identity used for attestation. Microsoft claimed that it would not be involved in generating the EK. The TPM owner could enable or disable access to the EK, thus enabling or disabling attestation.[62]

The private key component of the EK never leaves,[52] and is known only to,[62] the TPM. The private key would never have been accessible to software executing in the operating system. It would have been used only to instantiate the NGSCB environment and to provide services to the nexus.[52]

In an NGSCB-capable computer, even the public key on the TPM (also referred to as platform credentials) would have been secured against accidental disclosure or unauthorized access. The public key component of the EK would have been used by NGSCB only to create “alias” keys (called attestation identity keys (AIKs) in the TPM specification) that could have been used to ensure anonymity. The public key would have been accessible only by software that the machine owner explicitly trusted (trust being established by the user taking overt action to run this software). Trusted software could then implement policies as determined by the machine owner. These policies control access to the computer's public key by other clients, servers or services. In contrast to most public key infrastructure systems, the public key in an NGSCB-capable system would not be made widely available. This design was implemented to prevent indiscriminate tracking of users or computers on the Internet through their public keys.[52] It would have been protected to mitigate against identity profiling and tracking.[62]

According to Microsoft, version 1.2 of the Trusted Platform Module is the first version compatible with NGSCB. Previous versions do not include the required functionality.[52]

The TPM is an implementation of a root of trust: a system can be trusted if it behaves in the expected manner for the intended purpose. A root of trust allows third parties to rely on this trust, and serves to anchor a certificate verification chain that is unique to a given system.[62]

During boot, the TPM would have gathered measurements about the running environment: namely, the BIOS, the loader, the trusted operating system, and applications. "To measure" meant to compute a hash of the component, log it, and extend the appropriate Platform Configuration Register (PCR); the TPM would only measure the running environment. The collected PCR values would later have been used for sealed storage and attestation: a remote entity could decide whether to trust the running platform based on the PCR values, and secrets would be sealed to a particular state of the platform using these measurements, with the PCRs forming part of the sealed message. For large data blocks, the data would be encrypted and only the key sealed; unseal would return decrypted data only when the PCR(s) matched.[62]
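
The measure-and-extend step can be sketched as follows. In TPM 1.2, a Platform Configuration Register is extended as PCR = SHA-1(PCR || measurement), so the final value depends on every component measured and on their order; the toy hash below stands in for SHA-1, so the values are illustrative only.

```cpp
// Sketch of measure-and-extend over a boot chain. Each stage is hashed and
// folded into a PCR before control transfers to it; secrets sealed to the
// resulting PCR value unseal only if the same chain runs again.
#include <cstdint>
#include <iostream>
#include <string>

static std::uint64_t ToyHash(const std::string& s) {    // stand-in for SHA-1
    std::uint64_t h = 1469598103934665603ull;
    for (unsigned char c : s) { h ^= c; h *= 1099511628211ull; }
    return h;
}

struct Tpm {
    std::uint64_t pcr = 0;                              // one PCR, reset at boot
    void Extend(const std::string& component) {         // measure, log, extend
        std::uint64_t m = ToyHash(component);           // hash of the loaded code
        pcr = ToyHash(std::to_string(pcr) + "|" + std::to_string(m));
        std::cout << "measured " << component << " -> PCR=" << pcr << "\n";
    }
};

int main() {
    Tpm tpm;
    for (const std::string& stage : {"BIOS", "loader", "nexus", "agent"})
        tpm.Extend(stage);                              // boot chain, in order
}
```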

In builds of Windows "Longhorn"

In released pre-reset builds of Windows "Longhorn", NGSCB components reside in %SYSTEMDRIVE%\WINDOWS\NGSCB. The first build of Windows "Longhorn" known to include the NGSCB directory and subfolders is either 4015.main or 4039, and the last is build 4066.[63] A report dated 5 May 2004 stated, "The NGSCB code won't be updated in the enhanced Longhorn developer's preview update, due out later this week".[42] That build, 4074, does not contain the \Windows\NGSCB\ folder.

The build version of NGSCB that shipped with build 4051, distributed to PDC 2003 attendees, was 6.0.3252.1. It had to be configured manually by running ngconfig.exe, located in \Windows\NGSCB\, from the Command Prompt.

The Longhorn SDK, also distributed to PDC 2003 attendees, also contained APIs for NGSCB.[3]

The NGSCB developer preview SDK was provided so that developers could understand the features and APIs of NGSCB, but did not demonstrate the security of NGSCB. The SDK enabled developers to prototype most applications they might write on Version 1 of NGSCB, but with a warning that the SDK might change before RTM. The developer preview also included a software emulator which simulated the NGSCB environment; new hardware was not necessary to run it.[3]

The developer preview supported creating an agent in Visual Studio (debugging had to be done on the command line), simulated sealed storage, simulated attestation, IPC, and standard Windows- and CRT-style APIs, but it did not provide secure path or strong process isolation.[3]

Configuring NGSCB in build 4051

Sample apps

Secure Chat

Legacy

The development of the Next-Generation Secure Computing Base ultimately led to the creation of Microsoft's BitLocker Drive Encryption feature, which was one of the first mainstream device encryption features to support version 1.2 of the Trusted Platform Module, and the first device encryption feature to be integrated with the Windows operating system.[64]

"Cornerstone", discussed in WinHEC 2004, was the codename of the Secure Startup-Full Volume Encryption security technologies included with Windows Vista Enterprise.[65]

Certain design elements of the NGSCB would become part of Microsoft's virtualization technologies.[47][66] Windows 8, released in 2012, includes a feature called Measured Boot which allows a trusted server to verify the integrity of the Windows startup process.[67][68][69] Although it is not directly related to the NGSCB architecture, it serves a purpose comparable to the architecture's attestation feature in that they are both designed to validate a platform's configuration.[70]

In addition, features based on those originally intended for the NGSCB would later become available in competing operating systems. In 2012, Giesecke & Devrient produced a parallel execution environment called MobiCore for the Android operating system designed to host secure user applications and protect confidential data.[71] In 2013, Apple released a new feature for its iOS operating system called Secure Enclave to protect a user's biometric information.[72]

References

  1. Lemos, Robert (24 January 2003). What's in a name? Not Palladium. CNET News.com. Retrieved on 15 April 2021.
  2. Baker, Brandon (2003). A Technical Introduction to NGSCB. Security Summit East 2003, Washington, DC. Archived from the original on 23 July 2014. Retrieved on 16 April 2021.
  3. Keith Kaplan and Ellen Cram (October 2003). Next Generation Secure Computing Base - Overview and Drilldown. Professional Developers Conference 2003. Retrieved on 17 April 2021.
  4. Biddle, Peter (5 August 2002). Re: Dangers of TCPA/Palladium. Retrieved on 15 April 2021.
  5. Merritt, Rick (15 July 2002). Microsoft scheme for PC security faces flak. EE Times. Retrieved on 15 April 2021.
  6. Aday, Michael. Palladium. Retrieved on 15 April 2021.
  7. Schoen, Seth (5 July 2002). Palladium summary?. Retrieved on 15 April 2021.
  8. PeterNBiddle (28 November 2012). How four Microsoft engineers proved that the “darknet” would defeat DRM (comment). Retrieved on 15 April 2021.
  9. Paul England, John D. DeTreville, Butler W. Lampson (inventors) (11 December 2001 (publication date)). US6330670B1 - Digital rights management operating system - Google Patents. Retrieved on 15 April 2021.
  10. William A. Arbaugh, David J. Farber, Angelos D. Keromytis, Jonathan M. Smith (inventors) (6 February 2001 (publication date)). US6185678B1 - Secure and reliable bootstrap architecture - Google Patents. Retrieved on 15 April 2021.
  11. Anderson, James (October 1972). Computer Security Technology Planning Study. Electronic Systems Division. Retrieved on 15 April 2021.
  12. David P. Jablon, Nora E. Hanley (inventors) (30 May 1995 (publication date)). US5421006A - Method and apparatus for assessing integrity of computer system software - Google Patents. Retrieved on 15 April 2021.
  13. Kuhn, Markus (30 April 1997). The TrustNo 1 Cryptoprocessor Concept. Retrieved on 15 April 2021.
  14. Trusted Computing Platform Alliance. TCPA - Trusted Computing Platform Alliance. Archived from the original on 2 August 2002. Retrieved on 15 April 2021.
  15. Microsoft (July 2003). Microsoft Next-Generation Secure Computing Base - Technical FAQ. Archived from the original on 24 October 2008. Retrieved on 15 April 2021.
  16. Trusted Computing Group (February 2011). TRUSTED COMPUTING GROUP (TCG) TIMELINE. Archived from the original on 17 August 2011. Retrieved on 15 April 2021.
  17. Lampson, Butler. Curriculum Vitae. Archived from the original on 9 June 2011. Retrieved on 15 April 2021.
  18. Paul England, John D. DeTreville, Butler W. Lampson (inventors) (4 December 2001 (publication date)). US6327652B1 - Loading and identifying a digital rights management operating system - Google Patents. Retrieved on 15 April 2021.
  19. Paul England, John D. DeTreville, Butler W. Lampson (20 March 2007 (publication date)). US7194092B1 - Key-based secure storage - Google Patents. Retrieved on 15 April 2021.
  20. Paul England, John D. DeTreville, Butler W. Lampson (inventors) (16 November 2004 (publication date)). US6820063B1 - Controlling access to content based on certificates and access predicates - Google Patents. Retrieved on 15 April 2021.
  21. Paul England, John D. DeTreville, Butler W. Lampson (inventors) (6 February 2007). US7174457B1 - System and method for authenticating an operating system to a central processing unit, providing the CPU/OS with secure storage, and authenticating the CPU/OS to a third party - Google Patents. Retrieved on 15 April 2021.
  22. Paul England, Butler W. Lampson (18 November 2003 (publication date)). US6651171B1 - Secure execution of program code - Google Patents. Retrieved on 15 April 2021.
  23. Paul England, Butler W. Lampson (10 August 2004 (publication date)). US6775779B1 - Hierarchical trusted code for content protection in computers - Google Patents. Retrieved on 15 April 2021.
  24. Microsoft. Privacy, Security And Content In Windows Platforms. Archived from the original on 2 April 2015. Retrieved on 15 April 2021.
  25. Microsoft. Privacy, Security, and Content Protection. WinHEC 2001. Archived from the original on 26 June 2017. Retrieved on 15 April 2021.
  26. Levy, Steven (24 June 2002). The Big Secret. Newsweek. Retrieved on 15 April 2021.
  27. Palladium: Microsoft’s big plan for the PC. Geek.com. Archived from the original on 24 June 2002. Retrieved on 15 April 2021.
  28. ExtremeTech Staff (24 June 2002). “Palladium”: Microsoft Revisits Digital-Rights Management. ExtremeTech. Retrieved on 15 April 2021.
  29. Lettice, John (13 August 2002). MS recruits for Palladium microkernel and/or DRM platform. Retrieved on 15 April 2021.
  30. Biddle, Peter (22 February 2008). Attack isn't news, and there are mitigations. Retrieved on 15 April 2021.
  31. Biddle, Peter (23 February 2008). Threat Model Irony. Retrieved on 15 April 2021.
  32. Bekker, Scott (6 May 2003). Palladium on Display at WinHEC. Redmond. Retrieved on 15 April 2021.
  33. Microsoft PressPass (7 May 2003). At WinHEC, Microsoft Discusses Details of Next-Generation Secure Computing Base. Archived from the original on 23 July 2017. Retrieved on 15 April 2021.
  34. Evers, Joris (7 May 2003). Microsoft turns to emulators for security demo. Networkworld. Retrieved on 15 April 2021.
  35. Paula Rooney, Elizabeth Montalbano (1 October 2003). Microsoft To Hand Over Early Whidbey, Yukon Code At PDC. CRN. Retrieved on 15 April 2021.
  36. Evers, Joris (30 October 2003). Developers get hands on Microsoft's NGSCB. Networkworld. Retrieved on 15 April 2021.
  37. Microsoft PressPass (15 December 2003). A Review of Microsoft Technology for 2003, Preview for 2004. Archived from the original on 8 September 2014. Retrieved on 15 April 2021.
  38. Biddle, Peter (May 2004). Next-Generation Secure Computing Base. WinHEC 2004. Archived from the original on 27 August 2006. Retrieved on 17 April 2021.
  39. Evers, Joris (5 May 2004). WinHEC: Microsoft revisits NGSCB security plan. Networkworld. Archived from the original on 18 November 2005. Retrieved on 15 April 2021.
  40. Sanders, Tom (6 May 2004). Microsoft shakes up Longhorn security. vnunet.com. Archived from the original on 29 November 2005. Retrieved on 15 April 2021.
  41. Bangeman, Eric (5 May 2004). Microsoft kills Next-Generation Secure Computing Base. Ars Technica. Retrieved on 15 April 2021.
  42. Rooney, Paula (5 May 2004). Microsoft Shelves NGSCB Project As NX Moves To Center Stage. CRN. Retrieved on 15 April 2021.
  43. eWeek Editors (5 May 2004). Microsoft: Palladium Is Still Alive and Kicking. eWeek. Retrieved on 15 April 2021.
  44. Thurrott, Paul (7 May 2004). WinHEC 2004 Show Report and Photo Gallery. Paul Thurrott's Supersite for Windows. Archived from the original on 23 July 2014. Retrieved on 15 April 2021.
  45. Fried, Ina (8 September 2004). Controversial Microsoft plan heads for Longhorn. CNET. Retrieved on 15 April 2021.
  46. Evers, Joris (24 February 2005). Silence Fuels Speculation on Microsoft Security Plan. PCWorld. Retrieved on 15 April 2021.
  47. Ballmer, Steve (20 April 2005). Steve Ballmer: Microsoft Management Summit. Microsoft. Archived from the original on 8 September 2014. Retrieved on 15 April 2021.
  48. Sanders, Tom (26 April 2005). Longhorn security gets its teeth kicked out. vnunet.com. Archived from the original on 28 April 2005. Retrieved on 15 April 2021.
  49. Microsoft Shared Source Initiative Home Page. Archived from the original on 6 June 2005. Retrieved on 15 April 2021.
  50. Biddle, Peter (16 July 2008). Perception (or, Linus gets away with being honest again). Retrieved on 15 April 2021.
  51. Myth Index. PALLADIUM. Greek Mythology Index. Archived from the original on 17 January 2008. Retrieved on 15 April 2021.
  52. Microsoft (November 2003). Privacy-Enabling Enhancements in the Next-Generation Secure Computing Base. Archived from the original on 28 December 2005. Retrieved on 15 April 2021.
  53. Bryan Mark Willman, Paul England, Kenneth D. Ray, Keith Kaplan, Varugis Kurien, Michael David Marr (inventors) (10 February 2005 (publication date)). US7530103B2 - Projection of trustworthiness from a trusted environment to an untrusted environment - Google Patents. Retrieved on 16 April 2021.
  54. Microsoft (2003). NGSCB: Trusted Computing Base and Software Authentication. Archived from the original on 13 January 2005. Retrieved on 17 April 2021.
  55. Microsoft (2003). Secure User Authentication for the Next-Generation Secure Computing Base. Archived from the original on 23 November 2015. Retrieved on 16 April 2021.
  56. Microsoft (2003). Security Model for the Next-Generation Secure Computing Base. Archived from the original on 8 September 2014. Retrieved on 16 April 2021.
  57. Amy Carroll, Mario Juarez, Julia Polk and Tony Leininger (June 2002). Microsoft "Palladium": A Business Overview. Microsoft PressPass. Archived from the original on 5 August 2002. Retrieved on 15 April 2021.
  58. Biddle, Peter (19 September 2002). Cryptogram: Palladium Only for DRM. Retrieved on 15 April 2021.
  59. Microsoft (April 2003). The Next-Generation Secure Computing Base: An Overview. Archived from the original on 7 June 2003. Retrieved on 17 April 2021.
  60. Cram, Ellen (October 2003). Security Developer Center: Next-Generation Secure Computing Base: Development Considerations for Nexus Computing Agents (Security (General) Technical Articles). Microsoft Developer Network (MSDN). Archived from the original on 2 December 2003. Retrieved on 15 April 2021.
  61. Wooten, David. Securing the User Input Path On NGSCB Systems. WinHEC 2004. Archived from the original on 27 August 2006. Retrieved on 17 April 2021.
  62. Stephen Heil and Pavel Zeman. TPM 1.2 Trusted Platform Module And Its Use In NGSCB. WinHEC 2004. Archived from the original on 27 August 2006. Retrieved on 17 April 2021.
  63. Builds of "Longhorn" with NGSCB?
  64. Microsoft (26 March 2012). Windows BitLocker Drive Encryption Frequently Asked Questions. Retrieved on 15 April 2021.
  65. Thurrott, Paul (9 September 2005). Pre-PDC Exclusive: Windows Vista Product Editions Revealed. Windows IT Pro. Archived from the original on 17 October 2014. Retrieved on 17 April 2021.
  66. Clarke, Gavin (19 April 2005). Microsoft running late in virtualization. The Register. Retrieved on 15 April 2021.
  67. Microsoft TechNet. Windows 8 Boot Process - Security, UEFI, TPM. Archived from the original on 7 April 2013. Retrieved on 15 April 2021.
  68. Microsoft TechNet. Windows 8 Boot Security FAQ. Archived from the original on 2 March 2014. Retrieved on 15 April 2021.
  69. Microsoft (31 May 2018). Measured Boot. Retrieved on 15 April 2021.
  70. Microsoft (7 September 2012). Secured Boot and Measured Boot: Hardening Early Boot Components against Malware. Retrieved on 15 April 2021.
  71. Giesecke & Devrient (4 May 2012). G&D; announces MobiCore® integrated security platform to support Samsung GALAXY S III in Europe. Archived from the original on 12 May 2012. Retrieved on 15 April 2021.
  72. Apple (10 September 2013). Apple Announces iPhone 5s—The Most Forward-Thinking Smartphone in the World. Archived from the original on 11 September 2013. Retrieved on 15 April 2021.

External links

Microsoft

BetaArchive