Next-Generation Secure Computing Base


The Next-Generation Secure Computing Base (codenamed Palladium)[1] is a software architecture originally slated to be included in the Microsoft Windows "Longhorn" operating system. Development of the architecture began in 1997.[2][3]

The NGSCB was the result of years of research within Microsoft to create a secure computing solution that equaled the security of more closed systems while preserving the openness and flexibility of the Windows platform.[4] The NGSCB relied on new software components and specially designed hardware to create a new execution environment where more sensitive operations could be performed securely.[5] Microsoft's primary stated objective with the NGSCB was to "protect software from software."[4][6][7]

History

The idea of creating an architecture where software components can be loaded in a known and protected state predates the development of NGSCB.[8] A number of attempts were made in the 1960s and 1970s to produce secure computing systems,[9][10] with variations of the idea emerging in more recent decades.[11][12]

In 1999, the Trusted Computing Platform Alliance, a consortium of various technology companies, was formed in an effort to promote trust in the PC platform.[13] The TCPA would release several detailed specifications for a trusted computing platform with focus on features such as code validation and encryption based on integrity measurements, hardware based key storage, and attestation to remote entities. These features required a new hardware component designed by the TCPA called the Trusted Platform Module (referred to as a Security Support Component,[14] Secure Cryptographic Processor,[4] or Security Support Processor[4] in earlier Microsoft documentation). While most of these features would later serve as the foundation for Microsoft's NGSCB architecture, they were different in terms of implementation.[2] The TCPA was superseded by the Trusted Computing Group in 2003.[15]

Development

Development of the NGSCB began in 1997 after Microsoft developer Peter Biddle conceived of new ways to protect content on personal computers.[5]

Microsoft later filed a number of patents related to elements of the NGSCB design.[16] Patents for a digital rights management operating system,[8] loading and identifying a digital rights management operating system,[17] key-based secure storage,[18] and certificate based access control[19] were filed on January 8, 1999. A method to authenticate an operating system based on its central processing unit was filed on March 10, 1999.[20] Patents related to the secure execution of code[21] and protection of code in memory[22] were filed on April 6, 1999.

During its Windows Hardware Engineering Conference of 2000, Microsoft showed a presentation titled Privacy, Security, and Content in Windows Platforms which focused on the protection of end user privacy and intellectual property.[23] The presentation mentioned turning Windows into a "platform of trust" designed to protect the privacy of individual users.[23] Microsoft made a similar presentation during WinHEC 2001.[24]

The NGSCB was publicly unveiled under the name "Palladium" on 24 June 2002 in an article by Steven Levy of Newsweek that focused on its origin, design and features.[25][26][27] Levy stated that the technology would allow users to identify and authenticate themselves, encrypt data to protect it from unauthorized access, and allow users to enforce policies related to the use of their information. As examples of policies that could be enforced, Levy stated that users could send e-mail messages accessible only by the intended recipient, or create Microsoft Word documents that could only be read a week after they were created. To provide this functionality, the technology would require specially designed hardware components, including updated processors, chipsets, peripherals, and a Trusted Platform Module.[25] In August 2002, Microsoft posted a recruitment advertisement seeking a group program manager to provide vision and industry leadership in the development of several Microsoft technologies, including its NGSCB architecture.[28]

In 2003, Microsoft publicly demonstrated the NGSCB for the first time at its Windows Hardware Engineering Conference[29][30][31] and released a developer preview of the technology later that year during its Professional Developers Conference.[32][33][34]

Diagram of NGSCB architecture revision shown during WinHEC 2004.

During WinHEC 2004, Microsoft announced that it would revise the technology in response to feedback from customers and independent software vendors who stated that they did not want to rewrite their existing programs in order to benefit from its functionality.[35][36] After the announcement, some reports stated that Microsoft would cease development of the technology.[37][38] Microsoft denied the claims and reaffirmed its commitment to delivering the technology.[39][40] Later that year, Microsoft's Steve Heil stated that the company would make additional changes to the technology based on feedback from the industry.[41]

In 2005, Microsoft's lack of continual updates on its progress with the technology had led some in the industry to speculate that it had been cancelled.[42] At the annual Microsoft Management Summit event, then Microsoft CEO Steve Ballmer said that the company was building on the foundation it had started with the NGSCB to create a new set of hypervisor technologies for its Windows operating system.[43] During WinHEC 2005, Microsoft announced that it had scaled back its plans for the technology in order to ship the post-reset Windows "Longhorn" operating system within a reasonable timeframe. Instead of providing an isolated software environment, the NGSCB would offer full operating system volume encryption with a feature known as Secure Startup (which would later be renamed BitLocker Drive Encryption).[44] Microsoft stated that it planned to deliver other aspects of its NGSCB architecture at a later date.[45]

In July 2008, Peter Biddle stated that negative perception was the main contributing factor responsible for the cancellation of the architecture.[46]

Name

In Greek and Roman mythology, the term "palladium" refers to an object that the safety of a city or nation was believed to be dependent upon.[47]

On 24 January 2003, Microsoft announced that "Palladium" had been renamed as the "Next-Generation Secure Computing Base." According to NGSCB product manager Mario Juarez, the new name was chosen not only to reflect Microsoft's commitment to the technology in the upcoming decade, but also to avoid any legal conflict with an unnamed company that had already acquired the rights to the Palladium name. Juarez acknowledged that the previous name had been a source of criticism, but denied that the decision was made by Microsoft in an attempt to deflect criticism.[1]

Reception

Architecture

NGSCB essentially would have divided the computing environment into two separate and distinct operating modes.[48] Thus, NGSCB would have been composed of two parts: the traditional "left-hand side" (LHS) and the "right-hand side" (RHS), the security system. The LHS and RHS would have been a logical, but physically enforced, partitioning of the computer.[49]

The LHS would have been composed of traditional applications such as Microsoft Office,[49] along with a conventional operating system, such as Windows.[49][48] Drivers, viruses, and, with minor exceptions, any other software would also have run on the LHS. However, the new hardware memory controller would not have allowed certain "bad" behaviors, such as code that copied all of memory from one location to another or put the CPU into real mode.[6] Another term for the LHS is standard mode.[48]

Meanwhile, the RHS would have worked in conjunction with the LHS system and the central processing unit (CPU). With NGSCB, applications would have run in a protected memory space highly resistant to software tampering and interference.[49] The RHS[49] or nexus mode[48] would have been composed of a "nexus" and trusted agents,[49] called Nexus Computing Agents.[48] The RHS would also have comprised a security support component that used a public key infrastructure key pair along with encryption functions to provide a secure state.[49] Other terms for the RHS are the nexus mode or the isolated execution space, in which the nexus and NCAs would have executed.[48]

Typically, there would have been one chipset in the computer that both the LHS and RHS would have used.[48]

The RHS was required not to rely on the LHS for security: even in the presence of adversarial LHS code, NGSCB was not to leak secrets. However, the RHS would have relied on the LHS for stability and services; NGSCB would not have run without LHS cooperation.[6] NGSCB needed the following from the LHS:

* Basic OS services - scheduler
* Device Driver work for Trusted Input / Video
* Memory Management additions to allow nexus to participate in memory pressure and paging decisions
* User mode debugger additions to allow debugging of agents (explained later)
* Window Manager coordination
* Nexus Manager Device driver (nexusmgr.sys)
* NGSCB management software and services

— Brandon Baker, A Technical Introduction to NGSCB, [6]

NGSCB would not have changed the device driver model, instead securely reusing LHS driver stacks whenever possible (i.e., an RHS encrypted channel carried through the unprotected LHS). NGSCB would have needed very minimal access to real hardware. Every line of privileged code was considered a potential security risk; therefore, there would have been no third-party code or kernel-mode plug-ins.[6]

Nexus

Diagram of the Nexus design.

The nexus, previously referred to as the "Nub"[2] or "Trusted Operating Root",[50][51] would have been the new kernel introduced by NGSCB. The nexus would have been responsible for the secure interaction between the specialized hardware components, as well as for the isolation and management of Nexus Computing Agents.

The nexus was variously described as being like a kernel,[6] as a "high assurance" operating system,[49] as an operating system component, or as a secure system component.[48] As a basic OS, the nexus would have included (and excluded) the following features:

* Section 1 of Intro to Operating Systems Textbook
  * Process and Thread Loader/Manager
  * Memory Manager
  * I/O Manager
  * Security Reference Monitor
  * Interrupt handling/Hardware abstraction
* But no Section 2??
  * No File System
  * No Networking
  * No Kernel Mode/Privileged Device Drivers
  * No Direct X
  * No Scheduling
  * No…
* Kernel mode has no pluggables
  * All of the kernel loaded at boot and hashed in the TPM

— Brandon Baker, A Technical Introduction to NGSCB, [6]

The nexus could have booted at any time, shut down when not needed, and restarted later. Nexus startup would have been atomic and protected, beginning in a controlled initial state.[6]

The nexus would have hosted, protected, and controlled NCAs.[48]

In a patent granted in favor of Microsoft, the nexus was described as:

A nexus is a “high assurance” operating system that provides a certain level of assurance as to its behavior and can comprise all the kernel mode code on the RHS. For example, a nexus might be employed to work with secret information (e.g., cryptographic keys, etc.) that should not be divulged, by providing a curtained memory that is guaranteed not to leak information to the world outside of the nexus, and by permitting only certain certified applications to execute under the nexus and to access the curtained memory. The nexus 251 should not interact with the main operating system 201 in any way that would allow events happening at the main operating system 201 to compromise the behavior of the nexus 251. The nexus 251 may permit all applications to run or a machine owner may configure a machine policy in which the nexus 251 permits only certain agents to run. In other words, the nexus 251 will run any agent that the machine owner tells it to run. The machine owner may also tell the nexus what not to run.

The nexus 251 isolates trusted agents 255, 260, manages communications to and from trusted agents 255, 260, and cryptographically seals stored data (e.g., stored in a hard disk drive). More particularly, the nexus 251 executes in kernel mode in trusted space and provides basic services to trusted agents 255, 260, such as the establishment of the process mechanisms for communicating with trusted agents and other applications, and special trust services such as attestation of a hardware/software platform or execution environment and the sealing and unsealing of secrets. Attestation is the ability of a piece of code to digitally sign or otherwise attest to a piece of data and further assure the recipient that the data was constructed by an unforgeable, cryptographically identified software stack.

— Bryan Mark Willman, Paul England, Kenneth D. Ray, Keith Kaplan, Varugis Kurien, Michael David Marr, Projection of trustworthiness from a trusted environment to an untrusted environment, [49]

NGSCB would have allowed a PC to run one nexus at a time.[48] The hardware would have loaded any nexus, but only one at a time.[6]

Each nexus would have gotten the same services. The hardware would have kept nexus secrets separate. Nothing about this architecture would have prevented any nexus from running.[6]

NGSCB would have enforced policy but would not have set it.[6] The owner could have controlled which nexuses were allowed to run.[6]

On the software side, Microsoft would have built a nexus designed to complement Windows, and expected other developers and vendors to build nexuses of their own.[48] The Microsoft nexus would have run any agent, although the platform owner could have set policy to limit this or delegated that evaluation to another party of their choosing.[6]
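As a rough illustration of this owner-set launch policy (not Microsoft code; the class, function names, and digest-based policy model below are illustrative assumptions), the following Python sketch decides whether a nexus or agent image may run by comparing its code digest against owner-maintained allow and deny lists:

```python
import hashlib

# Hypothetical owner policy: allow everything unless explicitly denied,
# or restrict to an explicit allow list. Digests stand in for the
# cryptographic code identity NGSCB would have measured.
class LaunchPolicy:
    def __init__(self, allowed=None, denied=None):
        self.allowed = set(allowed or [])   # empty set => allow any image
        self.denied = set(denied or [])

    def permits(self, code_image: bytes) -> bool:
        digest = hashlib.sha256(code_image).hexdigest()
        if digest in self.denied:
            return False
        return not self.allowed or digest in self.allowed

# Example: the owner denies one known-bad agent image.
bad_agent = b"malicious agent image"
good_agent = b"banking agent image"
policy = LaunchPolicy(denied=[hashlib.sha256(bad_agent).hexdigest()])

print(policy.permits(good_agent))  # True  - the owner has not restricted it
print(policy.permits(bad_agent))   # False - explicitly denied by the owner
```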

Nexus Computing Agents

Nexus Computing Agents (NCAs),[48] or trusted agents,[49] would have been application processes strictly managed by the nexus. They would have consisted of user-mode code executing within the isolated execution space (nexus mode).[48] An NCA could have been an application in and of itself, or part of an application that also ran in the standard Windows environment.[48] In other words, an NCA or trusted agent could have been a program, a part of a program, or a service running in user mode in trusted space.[49] Each NCA would have had access only to the memory allocated to it by the nexus; this memory would not have been shared with other processes on the system unless explicitly allowed by the NCA.[48]

An NCA or trusted agent would have called the nexus for security-related services and critical general services, such as memory management.[49] An NCA or trusted agent would have been able to store secrets using sealed storage and to authenticate itself using the attestation services of the nexus. Each trusted agent or entity would have controlled its own domain of trust, and they need not have relied on each other.[49]

Nexus Computing Agents were divided into three categories: "Application," "Component," and "Trusted Service Provider."[52]

Features

Developers could have used four main capabilities to protect data against software attacks on NGSCB systems: strong process isolation, sealed storage, secure paths to and from the user,[48] also called secure input and output,[49] and attestation.[48] All NGSCB-enabled application capabilities would have been built on these four key features, or pillars. The first three would have been needed to protect against malicious code.[6]

Strong process isolation

Strong process isolation would have been a mechanism for protecting data in memory. It would have been created and maintained by the nexus and enforced by NGSCB hardware[48] and software.[6] The hardware would have notified the nexus of certain operations, and the nexus would have arbitrated access to page tables, control registers, and other such structures.[6]

Agents and the nexus would have run in curtained memory, inaccessible to other agents, to the standard Windows kernel, and to hardware DMA.[6] Strong process isolation would have provided an execution and memory space,[48] a trusted space carved out as a secure area (the RHS).[49] This space would have been protected from external access and software-based attacks (even those launched from the kernel).[48] Operations run on the RHS would have been protected and isolated from the LHS, which would have made them significantly more secure from attack.[49] In other words, strong process isolation would have prevented rogue applications from changing NGSCB data or code while it was running.[6]
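A minimal conceptual sketch of curtained memory follows, assuming a simplified software model in which the "nexus" tracks which principal owns each page and rejects access from anyone else (including the standard Windows kernel or a DMA device). In the real design this rule would have been enforced by the CPU and memory controller, not by lookups like these; all names here are hypothetical.

```python
# Conceptual model of curtained memory: every page has an owner, and the
# nexus arbitrates access. This only illustrates the access rule.
class CurtainedMemory:
    def __init__(self):
        self.owner = {}    # page number -> owning principal
        self.pages = {}    # page number -> contents

    def allocate(self, page, principal):
        self.owner[page] = principal
        self.pages[page] = b""

    def write(self, page, principal, data):
        if self.owner.get(page) != principal:
            raise PermissionError(f"{principal} may not touch page {page:#x}")
        self.pages[page] = data

    def read(self, page, principal):
        if self.owner.get(page) != principal:
            raise PermissionError(f"{principal} may not touch page {page:#x}")
        return self.pages[page]

mem = CurtainedMemory()
mem.allocate(0x10, "agent-A")
mem.write(0x10, "agent-A", b"secret")
print(mem.read(0x10, "agent-A"))          # b'secret'
try:
    mem.read(0x10, "lhs-kernel")          # blocked, like a kernel-mode attack
except PermissionError as e:
    print("denied:", e)
```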

Sealed storage

Sealed storage would have been a mechanism for protecting data in storage.[48] It would have allowed the user to encrypt information[49] with a key rooted in the hardware,[6] so that it could only be accessed by a trustworthy application,[49] the designated trusted entity that stored it,[48] or by authenticated entities.[6] The trustworthy application could have been just the application that created the information in the first place, or any application trusted by the application that owned the data. Therefore, sealed storage would have allowed a program to store secrets that could not be retrieved by nontrusted programs, such as a virus or Trojan horse,[49] and would have prevented rogue applications from getting at the user's encrypted data. Sealed storage would also have verified the integrity of data when unsealing it.[6]

Each nexus would have generated a random keyset on first load. The TPM chip on the motherboard would have protected the nexus keyset. Agents would have used nexus facilities to seal (encrypt and sign) private data. The nexus would have protected the key from any other agent/application, and the hardware would have prevented any other nexus from gaining access to the key.[6]
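The sealing idea can be sketched as authenticated encryption under a key derived from a hardware-rooted secret plus the code identities of the sealing nexus and agent, so that a different program, or the same program under a different nexus, derives a different key and cannot unseal the data. The HMAC-based construction below is a stand-in built from the Python standard library; NGSCB did not specify this construction, and every name here is hypothetical.

```python
import hashlib, hmac

HARDWARE_SECRET = b"tpm-rooted secret (stand-in)"   # never leaves the chip in the real design

def _derive_key(nexus_digest: bytes, agent_digest: bytes) -> bytes:
    # The key depends on the hardware secret AND on who is sealing, so a
    # different agent or nexus derives a different key.
    return hmac.new(HARDWARE_SECRET, nexus_digest + agent_digest,
                    hashlib.sha256).digest()

def _keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

def seal(nexus_digest, agent_digest, plaintext):
    key = _derive_key(nexus_digest, agent_digest)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()   # integrity check on unseal
    return ct + tag

def unseal(nexus_digest, agent_digest, blob):
    key = _derive_key(nexus_digest, agent_digest)
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("integrity check failed or wrong caller identity")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))

nexus = hashlib.sha256(b"nexus image").digest()
agent = hashlib.sha256(b"banking agent").digest()
blob = seal(nexus, agent, b"account password")
print(unseal(nexus, agent, blob))                       # original secret
other = hashlib.sha256(b"trojan horse").digest()
try:
    unseal(nexus, other, blob)                          # different code identity
except ValueError as e:
    print("unseal refused:", e)
```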

Secure paths to and from the user

Secure paths to and from the user would have been mechanisms for protecting data moving from input devices to the NCAs, and from NCAs to the monitor screen.[48] The protected input devices would have been the keyboard and mouse; secure input would have meant USB devices for desktops and integrated input for laptops.[6] Data entered by the user and presented to the user could not have been read by software such as spyware or "Trojan horses." Malicious software could not have mimicked or intercepted input, or intercepted, obscured or altered output.[48] The secure path would have enabled software to be sure it was dealing with the real user, not an application spoofing the user.[6]

With NGSCB, keystrokes would have been encrypted before they could be read by software and decrypted once they reached the RHS. This meant that malicious software could not have been used to record, steal or modify keystrokes.[49]

Secure output would have been similar.[49] A secure channel would have existed between the nexus and the graphics adapter.[6] Information appearing onscreen could have been presented to the user so that no one else could intercept and read it. Taken together, these capabilities would have allowed a user to know with a high degree of confidence that the software on their computer was doing what it was supposed to do.[49]
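A rough sketch of the secure input path: keystrokes are encrypted and authenticated at the device under a session key shared only with the nexus-side consumer, so LHS software relaying the packets sees only ciphertext and cannot inject spoofed input. The packet format and HMAC-derived keystream below are purely illustrative assumptions, not the actual NGSCB input protocol.

```python
import hashlib, hmac, os

# Session key negotiated between the (hypothetical) trusted keyboard and the
# nexus; LHS code relaying the packets never sees it.
session_key = os.urandom(32)

def encrypt_keystroke(key: bytes, counter: int, scancode: int) -> bytes:
    pad = hmac.new(key, b"ks" + counter.to_bytes(8, "big"), hashlib.sha256).digest()
    ct = bytes([scancode ^ pad[0]])
    tag = hmac.new(key, counter.to_bytes(8, "big") + ct, hashlib.sha256).digest()[:8]
    return counter.to_bytes(8, "big") + ct + tag

def decrypt_keystroke(key: bytes, packet: bytes) -> int:
    counter, ct, tag = packet[:8], packet[8:9], packet[9:]
    expect = hmac.new(key, counter + ct, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("tampered or spoofed input packet")
    pad = hmac.new(key, b"ks" + counter, hashlib.sha256).digest()
    return ct[0] ^ pad[0]

packet = encrypt_keystroke(session_key, 1, 0x1E)   # scancode for 'A'
# An LHS keylogger observing this packet sees only ciphertext bytes.
print(decrypt_keystroke(session_key, packet))      # 30 (0x1E) recovered on the RHS
```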

Attestation

Attestation would have been a mechanism for authenticating a given software and hardware configuration, either locally or remotely.[48] Attestation would have let other computers know that a computer was really the computer it claimed to be, and was running the software it claimed to be running.[49] It would have been based on secrets rooted in hardware, combined with cryptographic representations (hash vectors) of the nexus and/or software running on the machine. Attestation would have been a core feature for enabling many of the privacy benefits in NGSCB.[48] Because NGSCB software and hardware would have been cryptographically verifiable to the user and to other computers, programs, and services, the system could have verified that other computers and processes were trustworthy before engaging them or sharing information. Thus, attestation would have allowed the user to reveal selected characteristics of the operating environment to external requestors.[49] Attestation would have involved secure authentication of subjects (e.g., software, machines, services) through code identity; this would have been separate from user authentication.[6][48]

When requested by an agent, the nexus could have prepared a chain that authenticated: the agent by digest, signed by the nexus; the nexus by digest, signed by the TPM; and the TPM by public key, signed by the OEM or the IT department. The machine owner would have set policies to control which forms of attestation each agent or group of agents could use. A secure communications agent would have provided higher-level services to agent developers: it would have opened a secure channel to a service using a secure session key and responded to an attestation challenge from the service based on user policy.[6]
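That chain of signed statements can be modelled with the sketch below, which uses the third-party "cryptography" Python package. The key names, message formats, and chain layout are illustrative assumptions about the agent-nexus-TPM-OEM relationship described above, not the actual NGSCB or TPM wire format.

```python
# Conceptual model of the attestation chain (agent <- nexus <- TPM <- OEM).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization
import hashlib

def keypair():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)

def sign(priv, data: bytes) -> bytes:
    return priv.sign(data, padding.PKCS1v15(), hashes.SHA256())

def verify(pub, sig: bytes, data: bytes) -> bool:
    try:
        pub.verify(sig, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except Exception:
        return False

def pub_bytes(priv):
    return priv.public_key().public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)

oem, tpm, nexus_key = keypair(), keypair(), keypair()
nexus_digest = hashlib.sha256(b"nexus image").digest()
agent_digest = hashlib.sha256(b"banking agent image").digest()

chain = {
    "tpm_pub_signed_by_oem": sign(oem, pub_bytes(tpm)),
    "nexus_signed_by_tpm":   sign(tpm, nexus_digest + pub_bytes(nexus_key)),
    "agent_signed_by_nexus": sign(nexus_key, agent_digest),
}

# A remote verifier that trusts the OEM key can now walk the chain link by link.
ok = (verify(oem.public_key(), chain["tpm_pub_signed_by_oem"], pub_bytes(tpm))
      and verify(tpm.public_key(), chain["nexus_signed_by_tpm"],
                 nexus_digest + pub_bytes(nexus_key))
      and verify(nexus_key.public_key(), chain["agent_signed_by_nexus"], agent_digest))
print("attestation chain verifies:", ok)
```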

Neither the nexus nor an agent could have directly determined whether it was running in the secured environment. They would have been required to use remote attestation or sealed storage to cache credentials or secrets proving that the system was sound.[6]

Hardware

Encrypted memory was once considered for the NGSCB, but the idea was later discarded as the only threat conceived of that would warrant its inclusion was the circumvention of digital rights management technology.[53][54]

Trusted Platform Module

The Trusted Platform Module is the hardware component that securely stores the cryptographic keys for the nexus and Nexus Computing Agents, making the sealed storage and attestation features of the nexus possible.

The Trusted Platform Module includes an asymmetric 2048-bit RSA key pair, referred to as the Endorsement Key (EK), which is unique to each particular module and is generated as part of its manufacturing process. The public key is accessible to applications or services that have established a trusted relationship with the owner, and is also used to provide the owner with Attestation Identity Keys (AIKs).
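As a rough illustration of the EK/AIK relationship, the sketch below generates a 2048-bit RSA pair as a stand-in for an Endorsement Key and has a hypothetical privacy authority issue an Attestation Identity Key credential after recognising the EK public key. It again uses the "cryptography" package and greatly simplifies the real TPM 1.2 protocol; the authority, its key list, and the credential format are assumptions made for the example.

```python
# Stand-in for the EK/AIK provisioning idea; not the real TPM 1.2 protocol.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization

def der(pub):
    return pub.public_bytes(serialization.Encoding.DER,
                            serialization.PublicFormat.SubjectPublicKeyInfo)

# Endorsement Key: unique 2048-bit RSA pair created at manufacture.
ek = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Hypothetical privacy authority that knows which EK public keys are genuine.
authority = rsa.generate_private_key(public_exponent=65537, key_size=2048)
known_endorsement_keys = {der(ek.public_key())}

# The platform creates an AIK and asks the authority for a credential.
aik = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def issue_aik_credential(aik_pub, ek_pub):
    if der(ek_pub) not in known_endorsement_keys:
        raise ValueError("unknown TPM endorsement key")
    # Credential = authority's signature over the AIK public key.
    return authority.sign(der(aik_pub), padding.PKCS1v15(), hashes.SHA256())

credential = issue_aik_credential(aik.public_key(), ek.public_key())
# A verifier trusting the authority can accept AIK signatures without ever
# learning the EK, which is the privacy point of AIKs.
authority.public_key().verify(credential, der(aik.public_key()),
                              padding.PKCS1v15(), hashes.SHA256())
print("AIK credential issued and verified")
```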

According to Microsoft, version 1.2 of the Trusted Platform Module is the first version compatible with its NGSCB architecture. Previous versions do not include the required functionality.[48]

In builds of Windows "Longhorn"

A successful attempt to configure the Next-Generation Secure Computing Base in Windows "Longhorn" build 4053.

In released pre-reset builds of Windows "Longhorn", NGSCB components reside in %SYSTEMDRIVE%\WINDOWS\NGSCB. The last build of Windows "Longhorn" known to include the NGSCB directory and subfolders is build 4066.[55]

Legacy

The development of the Next-Generation Secure Computing Base ultimately led to the creation of Microsoft's BitLocker Drive Encryption feature, which was one of the first mainstream device encryption features to support version 1.2 of the Trusted Platform Module, and the first device encryption feature to be integrated with the Windows operating system.[56] Certain design elements of the NGSCB would become part of Microsoft's virtualization technologies.[43][57] Windows 8, released in 2012, includes a feature called Measured Boot which allows a trusted server to verify the integrity of the Windows startup process.[58][59][60] Although it is not directly related to the NGSCB architecture, it serves a purpose comparable to the architecture's attestation feature in that they are both designed to validate a platform's configuration.[61]

In addition, features based on those originally intended for the NGSCB would later become available in competing operating systems. In 2012, Giesecke & Devrient produced a parallel execution environment called MobiCore for the Android operating system designed to host secure user applications and protect confidential data.[62] In 2013, Apple released a new feature for its iOS operating system called Secure Enclave to protect a user's biometric information.[63]

References

  1. Lemos, Robert (24 January 2003). What's in a name? Not Palladium. CNET News.com. Retrieved on 15 April 2021.
  2. Biddle, Peter (5 August 2002). Re: Dangers of TCPA/Palladium. Retrieved on 15 April 2021.
  3. Merritt, Rick (15 July 2002). Microsoft scheme for PC security faces flak. EE Times. Retrieved on 15 April 2021.
  4. Aday, Michael. Palladium. Retrieved on 15 April 2021.
  5. Schoen, Seth (5 July 2002). Palladium summary?. Retrieved on 15 April 2021.
  6. Baker, Brandon (2003). A Technical Introduction to NGSCB. Security Summit East 2003 Washington, DC. Archived from the original on 23 July 2014. Retrieved on 16 April 2021.
  7. PeterNBiddle (28 November 2012). How four Microsoft engineers proved that the “darknet” would defeat DRM (comment). Retrieved on 15 April 2021.
  8. Paul England, John D. DeTreville, Butler W. Lampson (inventors) (11 December 2001 (publication date)). US6330670B1 - Digital rights management operating system - Google Patents. Retrieved on 15 April 2021.
  9. William A. Arbaugh, David J. Farber, Angelos D. Keromytis, Jonathan M. Smith (inventors) (6 February 2001 (publication date)). US6185678B1 - Secure and reliable bootstrap architecture - Google Patents. Retrieved on 15 April 2021.
  10. Anderson, James (October 1972). Computer Security Technology Planning Study. Electronic Systems Division. Retrieved on 15 April 2021.
  11. David P. Jablon, Nora E. Hanley (inventors) (30 May 1995 (publication date)). US5421006A - Method and apparatus for assessing integrity of computer system software - Google Patents. Retrieved on 15 April 2021.
  12. Kuhn, Markus (30 April 1997). The TrustNo 1 Cryptoprocessor Concept. Retrieved on 15 April 2021.
  13. Trusted Computing Platform Alliance. TCPA - Trusted Computing Platform Alliance. Archived from the original on 2 August 2002. Retrieved on 15 April 2021.
  14. Microsoft (July 2003). Microsoft Next-Generation Secure Computing Base - Technical FAQ. Archived from the original on 24 October 2008. Retrieved on 15 April 2021.
  15. Trusted Computing Group (February 2011). TRUSTED COMPUTING GROUP (TCG) TIMELINE. Archived from the original on 17 August 2011. Retrieved on 15 April 2021.
  16. Lampson, Butler. Curriculum Vitae. Archived from the original on 9 June 2011. Retrieved on 15 April 2021.
  17. Paul England, John D. DeTreville, Butler W. Lampson (inventors) (4 December 2001 (publication date)). US6327652B1 - Loading and identifying a digital rights management operating system - Google Patents. Retrieved on 15 April 2021.
  18. Paul England, John D. DeTreville, Butler W. Lampson (20 March 2007 (publication date)). US7194092B1 - Key-based secure storage - Google Patents. Retrieved on 15 April 2021.
  19. Paul England, John D. DeTreville, Butler W. Lampson (inventors) (16 November 2004 (publication date)). US6820063B1 - Controlling access to content based on certificates and access predicates - Google Patents. Retrieved on 15 April 2021.
  20. Paul England, John D. DeTreville, Butler W. Lampson (inventors) (6 February 2007). US7174457B1 - System and method for authenticating an operating system to a central processing unit, providing the CPU/OS with secure storage, and authenticating the CPU/OS to a third party - Google Patents. Retrieved on 15 April 2021.
  21. Paul England, Butler W. Lampson (18 November 2003 (publication date)). US6651171B1 - Secure execution of program code - Google Patents. Retrieved on 15 April 2021.
  22. Paul England, Butler W. Lampson (10 August 2004 (publication date)). US6775779B1 - Hierarchical trusted code for content protection in computers - Google Patents. Retrieved on 15 April 2021.
  23. Microsoft. Privacy, Security And Content In Windows Platforms. Archived from the original on 2 April 2015. Retrieved on 15 April 2021.
  24. Microsoft. Privacy, Security, and Content Protection. WinHEC 2001. Archived from the original on 26 June 2017. Retrieved on 15 April 2021.
  25. Levy, Steven (24 June 2002). The Big Secret. Newsweek. Retrieved on 15 April 2021.
  26. Palladium: Microsoft’s big plan for the PC. Geek.com. Archived from the original on 24 June 2002. Retrieved on 15 April 2021.
  27. ExtremeTech Staff (24 June 2002). “Palladium”: Microsoft Revisits Digital-Rights Management. ExtremeTech. Retrieved on 15 April 2021.
  28. Lettice, John (13 August 2002). MS recruits for Palladium microkernel and/or DRM platform. Retrieved on 15 April 2021.
  29. Bekker, Scott (6 May 2003). Palladium on Display at WinHEC. Redmond. Retrieved on 15 April 2021.
  30. Microsoft PressPass (7 May 2003). At WinHEC, Microsoft Discusses Details of Next-Generation Secure Computing Base. Archived from the original on 23 July 2017. Retrieved on 15 April 2021.
  31. Evers, Joris (7 May 2003). Microsoft turns to emulators for security demo. Networkworld. Retrieved on 15 April 2021.
  32. Paula Rooney, Elizabeth Montalbano (1 October 2003). Microsoft To Hand Over Early Whidbey, Yukon Code At PDC. CRN. Retrieved on 15 April 2021.
  33. Evers, Joris (30 October 2003). Developers get hands on Microsoft's NGSCB. Networkworld. Retrieved on 15 April 2021.
  34. Microsoft PressPass (15 December 2003). A Review of Microsoft Technology for 2003, Preview for 2004. Archived from the original on 8 September 2014. Retrieved on 15 April 2021.
  35. Evers, Joris (5 May 2004). WinHEC: Microsoft revisits NGSCB security plan. Networkworld. Archived from the original on 18 November 2005. Retrieved on 15 April 2021.
  36. Sanders, Tom (6 May 2004). Microsoft shakes up Longhorn security. vnunet.com. Archived from the original on 29 November 2005. Retrieved on 15 April 2021.
  37. Bangeman, Eric (5 May 2004). Microsoft kills Next-Generation Secure Computing Base. Ars Technica. Retrieved on 15 April 2021.
  38. Rooney, Paula (5 May 2004). Microsoft Shelves NGSCB Project As NX Moves To Center Stage. CRN. Retrieved on 15 April 2021.
  39. eWeek Editors (5 May 2004). Microsoft: Palladium Is Still Alive and Kicking. eWeek. Retrieved on 15 April 2021.
  40. Thurrott, Paul (7 May 2004). WinHEC 2004 Show Report and Photo Gallery. Paul Thurrott's Supersite for Windows. Archived from the original on 23 July 2014. Retrieved on 15 April 2021.
  41. Fried, Ina (8 September 2004). Controversial Microsoft plan heads for Longhorn. CNET. Retrieved on 15 April 2021.
  42. Evers, Joris (24 February 2005). Silence Fuels Speculation on Microsoft Security Plan. PCWorld. Retrieved on 15 April 2021.
  43. Ballmer, Steve (20 April 2005). Steve Ballmer: Microsoft Management Summit. Microsoft. Archived from the original on 8 September 2014. Retrieved on 15 April 2021.
  44. Sanders, Tom (26 April 2005). Longhorn security gets its teeth kicked out. vnunet.com. Archived from the original on 28 April 2005. Retrieved on 15 April 2021.
  45. Microsoft Shared Source Initiative Home Page. Archived from the original on 6 June 2005. Retrieved on 15 April 2021.
  46. Biddle, Peter (16 July 2008). Perception (or, Linus gets away with being honest again). Retrieved on 15 April 2021.
  47. Myth Index. PALLADIUM. Greek Mythology Index. Archived from the original on 17 January 2008. Retrieved on 15 April 2021.
  48. Microsoft (November 2003). Privacy-Enabling Enhancements in the Next-Generation Secure Computing Base. Archived from the original on 28 December 2005. Retrieved on 15 April 2021.
  49. Bryan Mark Willman, Paul England, Kenneth D. Ray, Keith Kaplan, Varugis Kurien, Michael David Marr (inventors) (10 February 2005 (publication date)). US7530103B2 - Projection of trustworthiness from a trusted environment to an untrusted environment - Google Patents. Retrieved on 16 April 2021.
  50. Amy Carroll, Mario Juarez, Julia Polk and Tony Leininger (June 2002). Microsoft "Palladium": A Business Overview. Microsoft PressPass. Archived from the original on 5 August 2002. Retrieved on 15 April 2021.
  51. Biddle, Peter (19 September 2002). Cryptogram: Palladium Only for DRM. Retrieved on 15 April 2021.
  52. Cram, Ellen (October 2003). Security Developer Center: Next-Generation Secure Computing Base: Development Considerations for Nexus Computing Agents (Security (General) Technical Articles). Microsoft Developer Network (MSDN). Archived from the original on 2 December 2003. Retrieved on 15 April 2021.
  53. Biddle, Peter (22 February 2008). Attack isn't news, and there are mitigations. Retrieved on 15 April 2021.
  54. Biddle, Peter (23 February 2008). Threat Model Irony. Retrieved on 15 April 2021.
  55. Builds of "Longhorn" with NGSCB?
  56. Microsoft (26 March 2012). Windows BitLocker Drive Encryption Frequently Asked Questions. Retrieved on 15 April 2021.
  57. Clarke, Gavin (19 April 2005). Microsoft running late in virtualization. The Register. Retrieved on 15 April 2021.
  58. Microsoft TechNet. Windows 8 Boot Process - Security, UEFI, TPM. Archived from the original on 7 April 2013. Retrieved on 15 April 2021.
  59. Microsoft TechNet. Windows 8 Boot Security FAQ. Archived from the original on 2 March 2014. Retrieved on 15 April 2021.
  60. Microsoft (31 May 2018). Measured Boot. Retrieved on 15 April 2021.
  61. Microsoft (7 September 2012). Secured Boot and Measured Boot: Hardening Early Boot Components against Malware. Retrieved on 15 April 2021.
  62. Giesecke & Devrient (4 May 2012). G&D announces MobiCore® integrated security platform to support Samsung GALAXY S III in Europe. Archived from the original on 12 May 2012. Retrieved on 15 April 2021.
  63. Apple (10 September 2013). Apple Announces iPhone 5s—The Most Forward-Thinking Smartphone in the World. Archived from the original on 11 September 2013. Retrieved on 15 April 2021.

External links

Microsoft

BetaArchive