
CHAPTER 1

INTRODUCTION TO HARDWARE SECURITY

CONTENTS
1.1 Overview of a Computing System
1.2 Layers of a Computing System
    1.2.1 Electronic Hardware
    1.2.2 Types of Electronic Hardware
1.3 What Is Hardware Security?
1.4 Hardware Security vs. Hardware Trust
    1.4.1 What Causes Hardware Trust Issues?
    1.4.2 What Security Issues Result From Untrusted Entities?
1.5 Attacks, Vulnerabilities, and Countermeasures
    1.5.1 Attack Vectors
    1.5.2 Attack Surface
    1.5.3 Security Model
    1.5.4 Vulnerabilities
    1.5.5 Countermeasures
1.6 Conflict Between Security and Test/Debug
1.7 Evolution of Hardware Security: A Brief Historical Perspective
1.8 Bird's Eye View
1.9 Hands-on Approach
1.10 Exercises
    1.10.1 True/False Questions
    1.10.2 Short-Answer Type Questions
    1.10.3 Long-Answer Type Questions
References

Computer security has become an essential part of the modern electronic world. Hardware security, which deals with the security of electronic hardware, encompassing its architecture, implementation, and validation, has evolved alongside it into an important field of computer security. In the context of this book, "hardware" indicates electronic hardware. Like any field of security, the topic of hardware security focuses on attacks crafted to steal or compromise assets and approaches designed to protect these assets. The assets under consideration are the hardware components themselves, for instance, integrated circuits (ICs) of all types, passive components (such as resistors, capacitors, and inductors), and printed circuit boards (PCBs); as well as the secrets stored inside these components, for instance, cryptographic keys, digital rights management (DRM) keys, programmable fuses, sensitive user data, firmware, and configuration data.

Hardware Security. https://doi.org/10.1016/B978-0-12-812477-2.00006-X

Copyright © 2019 Elsevier Inc. All rights reserved.

FIGURE 1.1

The landscape of security in modern computing systems.

Figure 1.1 illustrates different fields of security related to a modern computing system. Network security focuses on the attacks on a network connecting multiple computer systems, and the mechanisms to ensure its usability and integrity under potential attacks. Software security focuses on malicious attacks on software, often exploiting implementation bugs, such as inconsistent error handling and buffer overflows, and on techniques to ensure reliable software operation in the presence of potential security risks. Information security focuses on the general practice of providing confidentiality, integrity, and availability of information through protection against unauthorized access, use, modification, or destruction. Hardware security, on the other hand, focuses on attacks on, and protection of, hardware. It forms the foundation of system security, providing a trust anchor for the other components of a system that closely interact with it. The remaining chapters of the book illustrate how a variety of attacks on hardware challenge this notion, and how effective countermeasures against these attacks can be employed to ensure the security and trust of hardware.

The book covers all topics related to electronic hardware and systems security, encompassing various application domains, including embedded systems, cyber-physical systems (CPS), internet of things (IoT), and biomedical systems (for example, implants and wearables). It describes security and trust issues, threats, attacks, vulnerabilities, and protection approaches, including design, validation, and trust monitoring solutions for hardware at all levels of abstraction: from hardware intellectual properties (IPs) to ICs to PCBs and systems. The coverage also includes associated metrics, tools, and benchmarks.

1.1 OVERVIEW OF A COMPUTING SYSTEM

A computing system is a system of interconnected components. The major components of such a system and their roles are: memory for information storage; a processor for information processing; and input/output devices (for example, peripheral devices, such as keyboards, printers, and displays) for interfacing with human users or other systems. These systems are capable of capturing and transforming information, and of communicating it to other computing systems. Information storage and processing are often performed on digital data. However, in many applications, an analog front-end acquires analog signals from the physical world, conditions them, and then digitizes them. A digital processing unit then performs specific operations on the digital form. Optionally, a back-end unit transforms the processed digital signal back into analog form to interface with the physical world. Traditionally, computing systems have been broadly classified into two categories: (a) general-purpose systems and (b) embedded systems. The first category includes systems, such as desktops, laptops, and servers, which have the following characteristics: (1) a complex and optimized architecture, (2) versatility and easy programmability, and (3) suitability for diverse use-case scenarios. The second category includes systems, such as digital cameras, home automation devices, wearable health monitors, and biomedical implants, which have the following characteristics: (1) a highly customized design, (2) tight hardware-software integration, and (3) unique use-case constraints.

Over the years, the gap between these two categories has narrowed, with embedded systems becoming more flexible and gaining more computing power to handle general-purpose applications. Two new classes of systems have emerged, which borrow features from both categories: (1) cyber-physical systems and (2) the internet of things. In the first class, computer-based information processing systems are deeply intertwined with the Internet and its users, and with the physical world. Examples of such systems include the smart grid, autonomous vehicles, and robotic systems. The second class, on the other hand, includes computing systems that connect to the Internet, the cloud, and other endpoint devices, and interact with the physical world by collecting and exchanging data using embedded sensors and controlling physical devices through actuators. Such devices include smart home automation devices and personal health monitors. Both classes of devices increasingly rely on artificial intelligence to make autonomous decisions, maintain situational awareness, and better respond to different usage patterns through learning. The distinction between these two classes is gradually blurring, with CPS having characteristics similar to those of IoT devices. Devices falling into these classes share many features with security implications, such as: (1) a long and complex life, during which the security requirements may change; (2) machine-to-machine communication without any human in the loop, which may create an insecure communication link and a need for novel authentication approaches; and (3) mass production in the millions with identical configurations, which can help an attacker identify the vulnerabilities of one device and use that knowledge to break into many.

Moreover, modern computing systems usually do not operate in isolation. They are connected with other computers and/or the cloud, which is a collection of computers that provides shared computing or storage resources to other machines. Figure 1.2 shows the different components of a modern computing system, for example, a CPS or IoT system, starting from hardware units up to the cloud and the data/applications in the cloud. Each component in this organization is associated with diverse security issues and corresponding solutions. The weakest link in this complex, often physically distributed system usually determines the security of the whole system. Achieving security of the entire system requires a significant rethinking of how to integrate specific security solutions for each component into a holistic protection approach.

FIGURE 1.2

Different layers in the organization of modern computing systems.

FIGURE 1.3

Attack impact and difficulty at different layers of a computing system.

1.2 LAYERS OF A COMPUTING SYSTEM

Modern computing systems can be viewed as an organization consisting of multiple layers of abstraction, as illustrated in Fig. 1.3. The hardware layer lies at the bottom, followed by the firmware that interfaces with the physical hardware layer. The firmware layer is followed by the software stack, comprising an optional virtualization layer, the operating system (OS), and then the application layer. All types of computing systems discussed in the previous sections share this common structure. The data being processed by a computing system is stored in the hardware layer in volatile (for example, static or dynamic random access memory) or non-volatile (such as NAND or NOR flash) memory and accessed by the software layers. A system is connected to another system or to the Internet using networking mechanisms that are realized by a combination of hardware and software components. Computer security issues span all these layers. While hardware security issues are relatively fewer than those at other layers (as shown in Fig. 1.3), they usually have a much larger impact on system security. In particular, they typically affect a much larger number of devices than software and network security issues, as manifested by recent discoveries, such as the Spectre and Meltdown bugs [9] in modern processors.

1.2.1 ELECTRONIC HARDWARE

The hardware in a computing system can itself be viewed as consisting of three layers, as illustrated in Fig. 1.4. At the top, we have system-level hardware, that is, the integration of all physical components (such as PCBs, peripheral devices, and enclosures) that make up a system, such as a smart thermostat or a smartphone. At the next level, we have one or more PCBs, which provide mechanical support and electrical connections to the electronic components required to meet the functional and performance requirements of a system. PCBs are typically constructed with multiple layers of an insulating substrate (for example, fiberglass) that allow power and signals to be routed among components using conductive metal (for example, copper) traces. At the bottom-most layer, we have active components (such as ICs, transistors, and relays) and passive electronic components. Different layers of hardware abstraction bring in diverse security issues and require commensurate protections. The book covers major security issues and solutions at all levels of hardware abstraction.

FIGURE 1.4

Three abstraction layers of modern electronic hardware (shown for two example devices).

1.2.2 TYPES OF ELECTRONIC HARDWARE

The ICs, or chips, used on a PCB perform various tasks, such as signal acquisition, transformation, processing, and transfer. Some of these chips (for example, an encryption or image compression chip) work on digital signals and are called digital ICs, whereas others work on analog signals, or on both types, and are called analog/mixed-signal (AMS) chips. Examples of the latter type include voltage regulators, power amplifiers, and signal converters. ICs can also be classified based on their usage model and availability in the market. Application-specific integrated circuits (ASICs) represent a class of ICs that contain customized functionalities, such as signal processing or security functions, and meet specific performance targets that are not readily available in the market. On the other hand, commercial off-the-shelf (COTS) ICs are those already available in the market, often providing the flexibility and programmability to support diverse system design needs. These products can be used out of the box, but often need to be configured for a target application. Examples of COTS components include field programmable gate arrays (FPGAs), microcontrollers/processors, and data converters. The distinction between ASIC and COTS is often subtle: when a chip manufacturer decides to sell its ASICs on the open market, they become "off-the-shelf" to the original equipment manufacturers (OEMs), who build various computing systems using them.


1.3 WHAT IS HARDWARE SECURITY?

Information or data security has remained an issue of paramount concern for system designers and users alike since the beginning of computers and networks. Consequently, protection of systems and networks against various forms of attacks, targeting corruption/leakage of critical information and unauthorized access, has been widely investigated over the years. Information security, primarily based on cryptographic measures, has been analyzed and deployed in a large variety of applications. Software attacks in computer systems have also been extensively analyzed, and a large variety of solutions have been proposed, including static authentication and dynamic execution monitoring. The study of hardware security, on the other hand, is relatively new, since hardware has traditionally been considered immune to attacks, and hence used as the trust anchor or "root-of-trust" of a system. However, various security vulnerabilities and attacks on hardware have been reported over the last three decades. Early reports primarily focused on implementation-dependent vulnerabilities in cryptographic chips leading to information leakage. However, emerging trends in electronic hardware production, such as intellectual-property-based (IP-based) system-on-chip (SoC) design, and a long and distributed supply chain for the manufacturing and distribution of electronic components, which reduces a chip manufacturer's control over the design and fabrication steps, have given rise to many growing security concerns. These include malicious modifications of ICs, also referred to as hardware Trojan attacks [12], in an untrusted design house or foundry. This is an example of a hardware security issue that can potentially provide a kill switch to an adversary. Other examples include side-channel attacks, in which secret information of a chip can be extracted through measurement and analysis of side channels, that is, physical signals, such as power, signal propagation delay, and electromagnetic emission; IP piracy and reverse engineering; counterfeiting; microprobing attacks on ICs; physical tampering of traces or components in PCBs; bus snooping in PCBs; and access to privileged resources through the test/debug infrastructure. These attacks span the entire life cycle of hardware components, from design to end-of-life, and all abstraction levels, from chips to PCBs to systems. These attacks, the associated vulnerabilities and root causes, and their countermeasures form the field of hardware security [1,2,10,13,14].
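To make the notion of a side-channel attack concrete, consider the following toy sketch (hypothetical code, invented for illustration; not from the book): a non-constant-time comparison reveals, through its running time, how many leading bytes of a guess are correct, letting an attacker recover a secret one byte at a time.

```python
def naive_check(secret: bytes, guess: bytes):
    """Non-constant-time compare; 'work' models elapsed time units."""
    work = 0
    for s, g in zip(secret, guess):
        if s != g:
            return False, work   # early exit: time reveals mismatch position
        work += 1
    return secret == guess, work

def recover(secret: bytes, length: int) -> bytes:
    """Attacker's view: only the timing ('work') side channel is observed."""
    known = b""
    for _ in range(length):
        # try every byte; the correct one makes the comparison run longest
        best = max(range(256),
                   key=lambda b: naive_check(secret, known + bytes([b]))[1])
        known += bytes([best])
    return known
```

A constant-time comparison, which always inspects every byte regardless of mismatches, removes this particular leak.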

Another important aspect of hardware security relates to hardware design, implementation, and validation that enable the secure and reliable operation of the software stack. It deals with protecting sensitive assets stored in hardware from malicious software and network access, and with providing an appropriate level of isolation between secure and insecure data and code, in addition to providing separation between multiple user applications [1]. Two major topics in this area are as follows. (1) Trusted execution environments (TEEs), such as ARM TrustZone, Intel SGX, and Samsung Knox, which protect the code and data of an application from other, untrusted applications with respect to confidentiality (controlling the ability to observe data), integrity (controlling the ability to change it), and availability (ensuring the rightful owner can access certain data/code). Confidentiality, integrity, and availability are referred to as the CIA requirements. They form three important pillars for the secure execution of software on a hardware platform. Establishing these requirements is enabled by a joint hardware-software mechanism, with hardware providing architectural support for such isolation and facilitating the effective use of cryptographic functions, and software providing efficient policies and protocols. (2) Protection of security-critical assets in an SoC through the appropriate realization of security policies, such as access control and information flow policies, which govern the CIA requirements for these assets. Figure 1.5 depicts these focus areas of the hardware security field.


FIGURE 1.5

Scope of hardware security and trust.

1.4 HARDWARE SECURITY VS. HARDWARE TRUST

Hardware security issues arise from hardware's own vulnerability to attacks (e.g., side-channel or Trojan attacks) at different levels (such as chip or PCB), as well as from a lack of robust hardware support for software and system security. Hardware trust issues, on the other hand, arise from the involvement of untrusted entities in the life cycle of hardware, including untrusted IP or computer-aided design (CAD) tool vendors, and untrusted design, fabrication, test, or distribution facilities. These parties are capable of violating the trustworthiness of a hardware component or system. They can potentially cause deviations from intended functional behavior, performance, or reliability. Trust issues often lead to security concerns; for example, an untrusted IP vendor can include a malicious implant in a design, which can lead to denial-of-service (DoS) or information leakage attacks during field operation. However, trust issues can also lead to other incidents, such as poor parametric behavior (for example, reduced performance or energy efficiency), degraded reliability, or safety issues. The evolving nature of the global supply chain and the horizontal semiconductor business model are making hardware trust issues ever more significant. This, in turn, is driving new research and development efforts in trust verification and hardware design for trust assurance.
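As a simple illustration of how a malicious implant can stay dormant, the following sketch (hypothetical code, invented for illustration) models a hardware Trojan in a 16-bit adder: the payload activates only under a rare internal condition chosen by the attacker, so ordinary functional testing is unlikely to expose it.

```python
TRIGGER = 0xCAFE  # rare internal pattern known only to the attacker

def trojaned_add(a: int, b: int) -> int:
    """A 16-bit adder with an embedded Trojan payload."""
    result = (a + b) & 0xFFFF
    if (a ^ b) == TRIGGER:     # trigger: fires on 1 of 65,536 XOR patterns
        result ^= 0x0001       # payload: silently flip the low output bit
    return result
```

For all but one in 65,536 input patterns the adder behaves correctly, which is why random functional tests rarely catch a well-hidden Trojan and why dedicated trust verification techniques are needed.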

1.4.1 WHAT CAUSES HARDWARE TRUST ISSUES?

Figure 1.6 shows the major steps in the life cycle of an IC. It starts with a design house creating the functional specifications (e.g., data compression, encryption, or pattern recognition) and parametric specifications (e.g., the operating frequency or standby power) of a design. Next, the design goes through a sequence of design and verification steps, where the high-level description of a design (for instance, an architecture-level description) is transformed into logic gates, then into a transistor-level circuit, and finally, into a physical layout. During this transformation process, a design is verified for correct functional behavior and for performance, power, and other parametric constraints. The layout is then transferred to a fabrication facility, which creates a mask for the layout and then goes through a complex sequence of lithography, etching, and other steps to produce a "wafer", typically a circular silicon disk containing a batch of ICs. Each IC in the wafer, referred to as a "die" at this stage, is then individually tested for certain defects using special test patterns. The dies are then cut from the wafer with a diamond saw and assembled into packages made of ceramic or other materials. The packaged dies, or ICs, are then tested for compliance with functional and parametric features using another set of test patterns in a manufacturing test facility. This step is vital in the life cycle of an IC, since it ensures that defective chips not meeting functional or parametric specifications are discarded and do not enter the supply chain. During the early stages of IC development, this step is also used to identify and debug design bugs (as opposed to manufacturing defects), and information on identified bugs is fed back to the design team so that appropriate corrections can be incorporated. The testing and debug process for a complex IC is usually facilitated by incorporating specialized structures into a design, called the design-for-test (DFT) and design-for-debug (DFD) infrastructure, respectively. The primary goal of inserting these structures is to increase the controllability and observability of internal nodes in a design, which are difficult to access from a fabricated chip.

FIGURE 1.6

Major steps in the electronic hardware design and test flow.
However, as we discuss later, this inherently creates a conflict with security goals, which aim to minimize the controllability and observability of these nodes so that an attacker cannot easily access or control internal circuit nodes. For example, direct access to the read/write control for embedded memory in a processor through the DFT/DFD interface can help an attacker leak or manipulate sensitive data stored in protected regions of memory.

FIGURE 1.7

Attack vectors and countermeasures for each stage in an IC's life span.
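The test-versus-security conflict can be sketched in a few lines (a hypothetical toy model, not a real DFT interface): a scan chain stitches internal flip-flops into one long shift register, so if scan access is left enabled in the field, the same port that serves test engineers lets an attacker serially shift out secret register contents.

```python
class ScanChain:
    """Toy model of internal flip-flops connected as a scan chain."""

    def __init__(self, flip_flops):
        self.bits = list(flip_flops)   # internal state, e.g., a key register

    def shift_out(self):
        """Clock the chain: each cycle, one bit appears at scan-out."""
        leaked = []
        for _ in range(len(self.bits)):
            leaked.append(self.bits.pop())  # bit at the scan-out pin
            self.bits.insert(0, 0)          # zeros fill in behind
        return leaked

key_register = [1, 0, 1, 1, 0, 0, 1, 0]      # secret internal state
observed = ScanChain(key_register).shift_out()
recovered = list(reversed(observed))          # attacker reorders the stream
```

Countermeasures therefore gate or scramble scan access after manufacturing test, for instance by blowing a fuse or requiring authentication before the chain can be clocked.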

The chips that pass manufacturing testing then go into the supply chain for distribution. In current business models, most OEMs acquire these chips from the supply chain, integrate them on a PCB, install firmware or configuration bitstreams into COTS components, and create a complete system. This long development cycle of hardware involves multiple third-party vendors and facilities, which are often untrusted and globally distributed. In Fig. 1.6, the stages marked by a red (medium gray in print version) box are usually untrusted, stages marked with yellow (light gray in print version) may or may not be untrusted, and the ones marked with green (dark gray in print version) are usually trusted. In the next section, we describe what kinds of attacks can be mounted on hardware in these stages. It is worth noting that the PCB design, fabrication, and test process follows a similar flow and a similar horizontal business model, in which, as in the case of ICs, design and manufacturing companies are spread around the world to reduce the total production cost. Hence, PCBs are often subject to a similar set of vulnerabilities as ICs.

1.4.2 WHAT SECURITY ISSUES RESULT FROM UNTRUSTED ENTITIES?

Figure 1.7 illustrates some of the key security issues resulting from an untrusted design, fabrication, or test process for an IC. Closely related is the life cycle of an SoC, which integrates a number of IPs, typically acquired from third-party IP vendors, into a design that meets functional and performance criteria. These IP vendors are often physically distributed across the globe. Since chip manufacturers do not publish information about their IP sources for business reasons, we considered several example SoCs that go into mobile computing platforms (such as cell phones) and created a list of common IP blocks integrated into these SoCs [1]. Figure 1.8 shows the map of possible sources of these IPs. Usually, an IP design house specializes in a specific class of IP (for example, memory, communication, or crypto IP). From this map, it is fair to assume that the IPs used in an SoC are very likely to come from different, physically distributed third-party IP vendors, making these IPs untrusted from an SoC designer's point of view. Note that a foundry would have access to the entire unencrypted design file for an SoC, consisting of all IP blocks, the interconnect fabric, and the DFT/DFD structures. While a third-party IP vendor can possibly insert a malicious design component or hardware Trojan, untrusted design, fabrication, and test facilities have several attack options, such as piracy of a design, reverse engineering, and Trojan implantation. As shown in Fig. 1.7, these security issues can be addressed through targeted design or test solutions, which we describe later in this book.

FIGURE 1.8

Long and globally distributed supply chain of hardware IPs makes SoC design increasingly vulnerable to diverse trust/integrity issues.

1.5 ATTACKS, VULNERABILITIES, AND COUNTERMEASURES

In this section, we briefly introduce the main types of hardware attacks, the threat models for these attacks, the known functional and non-functional vulnerabilities, and the countermeasures that can be taken to protect against these attacks.

1.5.1 ATTACK VECTORS

Attack vectors, as they relate to hardware security, are means or paths by which bad actors (attackers) gain access to hardware components for malicious purposes, for example, to compromise them or to extract secret assets stored in hardware. Examples of hardware attack vectors are side-channel attacks, Trojan attacks, IP piracy, and PCB tampering. Attack vectors enable an attacker to exploit implementation-level issues (such as side-channel attacks and PCB tampering) or to take advantage of a lack of control over the hardware production cycle (such as Trojan attacks).

FIGURE 1.9

Possible attack surfaces in a computing system.

1.5.2 ATTACK SURFACE

The attack surface is the sum of all possible security risk exposures. It can also be described as the aggregate of all known, unknown, and potential vulnerabilities and controls across all hardware, software, and network components. Tapping into different locations, components, and layers (including hardware and software) of the target system, an attacker can exploit one or more vulnerabilities and mount an attack, for example, to extract secret information from the system. Figure 1.9 illustrates the major attack surfaces of a smartphone, composed of software, network, data, and hardware components. From the figure, it is evident that the total attack surface of a system can be large, and that hardware is a critical part of it. In the context of hardware security, the attack surface defines the level of abstraction at which the attacker focuses on launching a hardware attack. Keeping the attack surface as small as possible is a common goal when developing countermeasures. With respect to hardware security, the three main attack surfaces are as follows.

Chip-Level Attacks: Chips can be targeted for reverse engineering, cloning, malicious insertion, side-channel attacks, and piracy [10,11]. Counterfeit or fake chips can be sold as original units if the attacker can create a copy with a similar appearance and features to the original. Trojan-infected chips can also find their way into the supply chain, posing a threat of unauthorized access or malfunction. Side-channel attacks can be mounted on a chip with the goal of extracting secret information stored inside it. For example, a cryptochip performing encryption with a private key, or a processor running protected code and/or operating on protected data, are both vulnerable to leakage of secret information through this attack.

PCB-Level Attacks: PCBs are common targets for attackers, as they are much easier to reverse engineer and tamper with than ICs. The design information of most modern PCBs can be extracted through relatively simple inspection (for example, optical imaging or X-ray tomography) and efficient signal processing. The primary goals of these attacks are to reverse engineer the PCB and obtain the schematic of the board, in order to redesign it and create fake units. Attackers may also physically tamper with a PCB (for instance, cut a trace or replace a component) to make it leak sensitive information or bypass DRM protection.

System-Level Attacks: Complex attacks involving the interaction of hardware and software components can be mounted on a system. By directly targeting the most vulnerable parts of a system, such as the DFT infrastructure at the PCB level (for example, JTAG) and memory modules, attackers may be able to compromise the system's security by gaining unauthorized control and access to sensitive data.

1.5.3 SECURITY MODEL

Attacks on hardware systems can take many forms. An attacker's capabilities, physical or remote access to the system, and assumptions about system design and usage scenarios all play essential roles in the techniques that can be used to launch an attack. To describe a security issue or solution, it is important to unambiguously describe the corresponding security model. A security model should have two components: (1) a threat model, which describes the threats, including the purpose and mechanism of an attack; and (2) a trust model, which describes the trusted parties or components. For example, to describe the security issues arising from malicious implants in third-party IPs, the threat model needs to describe the objective of the attackers, such as leaking a secret from an SoC or disrupting its functional behavior, and the way the attack is mounted, for instance, through the insertion of a Trojan that triggers a malicious memory write operation under a rare internal condition. The trust model needs to describe which parties are trusted; in this case, for example, the SoC designer and the CAD tools are trusted.

1.5.4 VULNERABILITIES

Vulnerabilities refer to weaknesses in hardware architecture, implementation, or the design/test process that can be exploited by an attacker to mount an attack. These weaknesses can be either functional or nonfunctional, and they vary based on the nature of a system and its usage scenarios. A typical attack consists of identifying one or more vulnerabilities and then exploiting them. Identifying vulnerabilities is usually the hardest step in the attack process. The following describes some typical vulnerabilities in hardware systems:

Functional Bug: Most vulnerabilities are caused by functional bugs and poor design/testing practices. They include weak cryptographic hardware implementations and inadequate protection of assets in an SoC. Attackers may find these vulnerabilities by analyzing the functionality of a system under different input conditions, looking for any abnormal behavior. Vulnerabilities may also be discovered accidentally, which makes it easier for an attacker to perform malicious activities using these newly discovered issues in the system.
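The input-driven exploration described above can be sketched as follows (a hypothetical example; the PIN checker and its bug are invented for illustration): a randomized search feeds a flawed checker many inputs and flags any input that is accepted without matching the stored value.

```python
import random

def pin_check(entered: str, stored: str) -> bool:
    """Flawed checker: an empty entry is accepted (missing validation)."""
    if entered == "":
        return True   # BUG: unintended bypass path
    return entered == stored

def fuzz(checker, stored: str, trials: int = 10_000):
    """Randomly search for an input that bypasses authentication."""
    rng = random.Random(1)
    for _ in range(trials):
        length = rng.randrange(0, 5)
        guess = "".join(rng.choice("0123456789") for _ in range(length))
        if guess != stored and checker(guess, stored):
            return guess   # abnormal behavior: accepted without matching
    return None
```

Randomized testing of this kind finds shallow functional bugs cheaply; the deliberately rare conditions used by Trojan triggers, by contrast, are designed to evade it.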

Side-Channel Bug: These bugs represent implementation-level issues that leak critical information stored inside a hardware component (for example, a processor or cryptochip) through different forms of side channels [4]. Attackers may find these vulnerabilities by analyzing the side-channel signals during the operation of a hardware component. Many powerful attacks based on side-channel bugs rely on statistical methods to analyze the measured traces of a side-channel parameter [2]. The criticality of a side-channel bug depends on the amount of information leaked through the side channel.

FIGURE 1.10

State of the practice in security design and validation along the life cycle of a system on chip.
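The statistical flavor of such attacks can be illustrated with a toy differential power analysis (a hypothetical setup: the traces are simulated with a Hamming-weight leakage model plus noise, not measured from real hardware): the attacker correlates the predicted leakage for each key guess against the traces and keeps the best-matching guess.

```python
import random

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def best_key_guess(plaintexts, traces):
    """Key byte whose predicted leakage correlates best with the traces."""
    return max(range(256),
               key=lambda k: pearson([hamming_weight(p ^ k) for p in plaintexts],
                                     traces))

rng = random.Random(0)
SECRET_KEY = 0x3C
plaintexts = [rng.randrange(256) for _ in range(1000)]
# simulated power draw: leakage of (plaintext XOR key) plus Gaussian noise
traces = [hamming_weight(p ^ SECRET_KEY) + rng.gauss(0, 0.5)
          for p in plaintexts]
```

Masking and hiding countermeasures aim to break exactly this correlation between the processed data and the measurable power consumption.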

Test/Debug Infrastructure: Most hardware systems provide a reasonable level of testability and debuggability, which enables designers and test engineers to verify correct operation. These facilities also provide a means to study internal operations and processes running in hardware, which is essential for debugging. These infrastructures, however, can be misused by attackers, who can extract sensitive information or gain unwanted control of a system through the test/debug features.

Access control or information-flow issues: In some cases, a system may not distinguish between authorized and unauthorized users. This vulnerability may give an attacker access to secret assets and functionality that can be misused or leveraged. Moreover, an intelligent adversary can monitor the information flow during system operation to decipher security-critical information, such as the control flow of a program or the memory address of a protected region.
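To make the information-flow concern concrete, the sketch below implements a toy taint-tracking check in software; the class and port names are invented for illustration. Hardware information-flow tracking works on the same principle: a "secret" label is propagated through computation, and labeled data is flagged at observable boundaries:

```python
class Tainted:
    """Wraps a value with a taint label; anything derived from a secret stays tainted."""
    def __init__(self, value, tainted=False):
        self.value, self.tainted = value, tainted

    def __xor__(self, other):
        other_value = other.value if isinstance(other, Tainted) else other
        other_taint = other.tainted if isinstance(other, Tainted) else False
        # Taint propagates: the result is secret if either operand is.
        return Tainted(self.value ^ other_value, self.tainted or other_taint)

def output_port(x):
    # An access-controlled boundary: secret-derived data must never leave.
    if isinstance(x, Tainted) and x.tainted:
        raise RuntimeError("information-flow violation: secret reaches output")
    return x.value if isinstance(x, Tainted) else x

key = Tainted(0x2F, tainted=True)   # secret asset, such as a crypto key
data = Tainted(0x11)                # ordinary public data

assert output_port(data) == 0x11    # public data flows out freely
try:
    output_port(data ^ key)         # mixing in the key taints the result
except RuntimeError as err:
    print(err)
```

A real design would enforce such policies in hardware (for example, with label bits carried alongside data paths), but the labeling-and-checking logic is analogous.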

1.5.5 COUNTERMEASURES
As hardware attacks have emerged over the past years, countermeasures to mitigate them have also been reported. Countermeasures can be employed at either design or test time. Figure 1.10 shows the current state of the practice in the industry for SoCs in terms of: (a) incorporating security measures in a design (referred to as “security design”), and (b) verifying that these measures protect a system against known attacks (referred to as “security validation”). The SoC manufacturing flow can be viewed as consisting of four conceptual stages: (1) exploration, (2) planning, (3) development, and (4) production. The first two stages and part of the development stage form the pre-silicon part of the SoC life cycle, which consists of exploring the design space, defining the architecture, and then deriving a design that meets the design targets. The remainder of the development stage, followed by production, forms the post-silicon part of the SoC's life, which consists of verifying and fabricating the chips. Security assessment is performed during the exploration stage; it identifies the assets in an SoC, possible attacks on them, and requirements for secure execution of software, when applicable. This step produces a set of security requirements. Next, an architecture (referred to as a “security architecture”) is defined to address these requirements, which includes protecting test/debug resources against malicious access, and safeguarding cryptographic keys, protected memory regions, and configuration bits. Once the architecture is defined and the design is gradually created, pre-silicon security validation is performed to make sure the architecture and its implementation adequately fulfill the security requirements. Similar security validation is performed after the chips are fabricated (referred to as “post-silicon security validation”) to ensure that the manufactured chips do not have security vulnerabilities and, hence, are protected against known attacks. Both pre- and post-silicon security validation come in various forms, which vary in terms of coverage of security vulnerabilities, the resulting confidence, and the scalability of the approach to large designs. These techniques include code review and formal verification during pre-silicon validation, and fuzzing and penetration testing during post-silicon validation [16].

Design solutions: Design-for-security (DfS) practices have emerged as powerful countermeasures. DfS offers effective low-overhead design solutions that can provide active or passive defense against various attacks. DfS techniques, such as obfuscation [6], the use of reliable security primitives, side-channel resistance (for example, masking and hiding techniques), and hardening schemes against Trojan insertion, can reliably protect against many major attack vectors. Likewise, SoC security architectures that are resilient against software attacks have become a significant aspect of SoC platform security.
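As a simple illustration of the masking idea mentioned above, the sketch below splits a secret into two random Boolean shares so that neither share alone correlates with the secret; the function names are illustrative. Real masked hardware applies the same principle to every intermediate value of a cipher:

```python
import secrets

def mask(value, bits=8):
    # Split the secret into two shares: value == share0 XOR share1.
    # Each share alone is uniformly random and leaks nothing by itself.
    share0 = secrets.randbits(bits)
    return share0, value ^ share0

def masked_xor(a_shares, b_shares):
    # XOR is linear, so it can be computed share-wise,
    # without ever recombining the secret values.
    return (a_shares[0] ^ b_shares[0], a_shares[1] ^ b_shares[1])

def unmask(shares):
    return shares[0] ^ shares[1]

a, b = 0x3A, 0x5C
result = unmask(masked_xor(mask(a), mask(b)))
assert result == (a ^ b)  # computed correctly without exposing a or b
```

Nonlinear operations (such as AND gates or S-boxes) require more elaborate masked gadgets and fresh randomness; this sketch covers only the linear case.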

Test and verification solutions: Test and verification techniques constitute a major category of protection approaches against diverse security and trust issues. Both pre-silicon verification, functional as well as formal, and post-silicon manufacturing testing have been considered as mechanisms to identify security vulnerabilities and trust issues in chips, PCBs, and systems. This book covers various DfS and test/verification solutions developed to protect hardware against many of these vulnerabilities.

1.6 CONFLICT BETWEEN SECURITY AND TEST/DEBUG
Security and test/debug of an SoC often impose conflicting design requirements during its design phase. Post-manufacturing test and debug using DFT structures (for example, scan chains) and DFD structures constitute some of the important activities in an SoC life cycle. Effective debug demands that internal signals of IP blocks be observable during execution in silicon. However, security constraints often severely restrict internal signal observability, making debugging a challenge. These constraints arise from the need to protect many critical assets, such as locks for high-assurance modules, encryption keys, and firmware. While these security assets themselves are difficult to observe during debugging, they also create observability challenges for other signals, for example, signals from an IP containing low-security assets that need to be routed through an IP block holding a high-security asset.
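The observability conflict can be illustrated with a toy scan-chain model; the register names and widths are invented. If a key register is naively stitched into the scan chain, test mode makes the secret serially observable:

```python
def to_bits(value, width):
    # Decompose a register value into its flip-flop bits, LSB first.
    return [(value >> i) & 1 for i in range(width)]

def scan_dump(chain):
    # In test mode, the whole chain's state shifts out through scan-out,
    # exactly the observability that test and debug engineers need.
    return list(chain)

status_reg = to_bits(0x3, 4)   # harmless internal state (hypothetical)
key_reg = to_bits(0xA5, 8)     # secret key register (hypothetical)

# Naive DFT insertion: the key register is stitched into the scan chain.
chain = status_reg + key_reg

dumped = scan_dump(chain)
recovered_key = sum(bit << i for i, bit in enumerate(dumped[4:]))
assert recovered_key == 0xA5   # the secret leaks through the test infrastructure
```

Countermeasures, such as excluding secret registers from the chain or scrambling scan data, restore security, but at the cost of the very observability that makes debug effective; this is the conflict in miniature.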


Unfortunately, in current industrial practice, this problem is difficult to address. First, there is a lack of formal centralized control over security assets, since they are determined on a per-IP basis. Second, debug requirements are usually not considered during the integration of security assets, which often leads to debug issues being discovered very late, during actual debug on silicon. Fixing the problem at that point may require a silicon “respin”, that is, design correction followed by re-fabrication, which is expensive and often an unacceptably long process. Hence, there is a growing emphasis on developing hardware architectures that ensure the security of the DFT and DFD infrastructure, while preserving its intended role in the SoC test/debug process.

1.7 EVOLUTION OF HARDWARE SECURITY: A BRIEF HISTORICAL PERSPECTIVE

Over the past three decades, the field of hardware security has been evolving rapidly with the discovery of many vulnerabilities and attacks on hardware. Figure 1.11 provides a brief timeline of the evolution of hardware security. Before 1996, there were only sporadic instances of hardware IP piracy, primarily cloning of ICs, leading to the development of some IP watermarking and other anti-piracy techniques. In 1996, a groundbreaking hardware attack was introduced in the form of the timing analysis attack [3], which aims to extract information from cryptographic hardware through a systematic analysis of the computation time of different operations. In 1997, fault injection analysis was reported as an attack vector that can compromise the security of a system [7]. This attack applies environmental stress to the system in order to force it to leak sensitive data. The first power-analysis-based side-channel attack was introduced in 1999 [2]; it analyzed power dissipation at runtime to retrieve secrets from a cryptochip.

In 2005, there were reports of the production and supply of counterfeit ICs, including cloned and recycled chips, which created major security and trust concerns. The concept of hardware Trojans was introduced in 2007 [12], which unveiled the possibility of inserting malicious circuits into a hardware design with the aim of disrupting normal functional behavior, leaking sensitive information, granting unauthorized control, or degrading the performance of the system. Recent hardware vulnerabilities that have received significant attention from industry and the academic community include “Meltdown” and “Spectre” [9]; they exploit implementation-dependent side-channel vulnerabilities in modern processors to access private data from a computer, such as user passwords. These vulnerabilities have been disclosed and addressed by different processor manufacturers, who have introduced software fixes for them.

FIGURE 1.11

The evolution of hardware security over the past decades.

Similar to the realm of software security, countermeasures for hardware attacks have been developed in a reactive manner. Over the years, many design and test solutions have evolved to mitigate known attacks. The idea of hardware tagging was introduced in 1998, whereby every IC instance is assigned a unique ID. Hardware security primitives, such as physical unclonable functions (PUFs) and true random number generators (TRNGs), were introduced in the early 2000s to improve the level of protection against hardware attacks [5,15]. The United States Department of Defense introduced several sponsored research programs to facilitate growth in hardware security solutions. In 2008, DARPA introduced the Integrity and Reliability of Integrated Circuits (IRIS) program to develop techniques for hardware integrity and reliability assurance through destructive and nondestructive analysis. In 2012, a report published by the Senate Armed Services Committee showed that a set of counterfeit devices had been discovered in different branches of the US Air Force [8], accentuating the gravity of the problem. The total number of these counterfeits exceeded one million, and the investigation concluded with an amendment that enforces counterfeit-avoidance practices. The Supply Chain Hardware Integrity for Electronics Defense (SHIELD) program was introduced by DARPA in 2014 to develop technology to trace and track electronic components (PCBs, chips, and even small passive components) as they move through the supply chain. Over the past decade, many such efforts by both government and industry to enable secure and trusted hardware platforms have been observed, with more to come in the near future.

1.8 BIRD’S EYE VIEW
Table 1.1 provides a bird's-eye view of the major hardware security issues and countermeasures covered in this book. For each attack, it provides information on the adversary, the attack surface, and the attack objective; for each countermeasure, it lists the stage of the hardware life cycle at which it is applied, the goal, and the associated overhead. This table is expected to serve as a quick reference for readers to some of the key concepts presented in the book.

1.9 HANDS-ON APPROACH
We have included hands-on experiments for several major hardware security topics in this book. We believe a practical learning component is crucial to understanding the diverse security vulnerabilities and defense mechanisms in a complex system. For the experiments, we have custom-designed an easy-to-understand, flexible, and ethically “hackable” hardware module: a printed circuit board (PCB) with basic building blocks that can emulate a computer system and create a network of connected devices. It is called “HaHa”, that is, the Hardware Hacking module. Appendix A provides a detailed description of the HaHa board and its associated components. Relevant chapters of the book include a short description of the experiments that can be performed to better understand the topic of the chapter. We also hope the experiments will stimulate students' interest in further investigating security issues and exploring effective countermeasures. In addition to the board, the hands-on experiment platform includes corresponding software modules and well-written instructions for realizing diverse security attacks on this platform, all of which are available as companion materials on the book's website.


Table 1.1 Bird's-eye view of the hardware attacks & countermeasures

ATTACKS

Type of Attack | What it is | Adversary | Goal | Life-cycle stages | Chapter #

Hardware Trojan Attacks | Malicious design modification (in chip or PCB) | Untrusted foundry, untrusted IP vendor, untrusted CAD tool, untrusted design facilities | Cause malfunction; degrade reliability; leak secret information | Design; Fabrication | Chapter 5

IP Piracy | Piracy of the IP by an unauthorized entity | Untrusted SoC designer, untrusted foundry | Produce unauthorized copies of the design; use an IP outside authorized use cases | Design; Fabrication | Chapter 7

Physical Attacks | Causing physical change to hardware, or modifying operating conditions, to produce various malicious impacts | End user, bad actor with physical access | Impact functional behavior; leak information; cause denial of service | In field | Chapter 11

Mod-chip Attack | Alteration of a PCB to bypass restrictions imposed by the system designer | End user | Bypass security rules imposed through the PCB | In field | Chapter 11

Side-Channel Attacks | Observing parametric behaviors (i.e., power, timing, EM) to leak secret information | End user, bad actor with physical access | Leak secret information being processed inside the hardware | In field | Chapter 8

Scan-based Attacks | Leveraging DFT circuits to facilitate a side-channel attack | End user, bad actor with physical access | Leak secret information being processed inside the hardware | In field; Test-time | Chapter 9

Microprobing | Using microscopic needles to probe internal wires of a chip | End user, bad actor with physical access | Leak secret information residing inside the chip | In field | Chapter 10

Reverse Engineering | Process of extracting the hardware design | Design house, foundry, end user | Extract design details of the hardware | Fabrication; In field | Chapter 7

COUNTERMEASURES

Type of Countermeasure | What it is | Parties involved | Goal | Life-cycle stages | Chapter #

Trust Verification | Verifying the design for potential vulnerabilities to confidentiality, integrity, and availability | Verification engineer | Provide assurance against known threats | Pre-silicon verification; Post-silicon validation | Chapter 5

Hardware Security Primitives (PUFs, TRNGs) | Providing security features to support supply-chain protocols | IP integrator; value-added reseller (for enrollment) | Authentication; key generation | Throughout IC supply chain | Chapter 12

Hardware Obfuscation | Obfuscating the original design to prevent piracy and reverse engineering | Design house; IP integrator | Prevent piracy; prevent reverse engineering; prevent Trojan insertion | Design-time | Chapter 14

Masking & Hiding | Design solutions to protect against side-channel attacks | Design house | Prevent side-channel attacks by reducing leakage or adding noise | Design-time | Chapter 8

Security Architecture | Design-for-security solutions to prevent potential and emerging security vulnerabilities | Design house; IP integrator | Address confidentiality, integrity, and availability issues with design-time solutions | Design-time | Chapter 13

Security Validation | Assessment of security requirements | Verification and validation engineer | Ensure data integrity, authentication, privacy requirements, and access-control policies | Pre-silicon verification; Post-silicon validation | Chapter 16

1.10 EXERCISES

1.10.1 TRUE/FALSE QUESTIONS
1. Hardware is not considered as the “root-of-trust” for system security.

2. Hardware security should not matter if a strong software tool is used to protect user’s data.

3. Hardware contains different forms of assets that can be accessed by bad actors.


4. Meltdown and Spectre are two newly discovered vulnerabilities found in most modern processors.
5. Hardware development lifecycle involves a number of untrusted entities.
6. Hardware trust issues do not lead to any security issue.
7. Side-channel attacks are attack vectors that exploit implementation-level weakness.
8. Test and debug features in a hardware often represent a conflict with security objectives.
9. A functional bug can be exploited by an attacker for extracting assets in a SoC.

10. Verification solutions can protect against several hardware security issues.

1.10.2 SHORT-ANSWER TYPE QUESTIONS
1. Describe different levels of abstraction of electronic hardware.
2. State the differences: (1) general-purpose systems vs. embedded systems, (2) ASIC vs. COTS.
3. Describe two major areas of focus for hardware security.
4. What are the hardware trust issues, and how do they impact the security of a computing system?
5. What are the differences between functional and side-channel bugs?
6. Why and how do security and test/debug requirements conflict?
7. Provide examples of some security assets inside SoCs.

1.10.3 LONG-ANSWER TYPE QUESTIONS
1. Describe different aspects of a system's security, and briefly discuss their relative impact.
2. Explain the current state of practice in the security design of and verification process for SoCs.
3. Describe the major steps of the electronic hardware design and test flow, and discuss the security issues in each stage.
4. What are the different attack surfaces for a computing system (say, a smartphone), and for the hardware components inside it?
5. Describe different types of security vulnerabilities in hardware.

REFERENCES
[1] S. Ray, E. Peeters, M.M. Tehranipoor, S. Bhunia, System-on-chip platform security assurance: architecture and validation, Proceedings of the IEEE 106 (1) (2018) 21–37.
[2] P. Kocher, J. Jaffe, B. Jun, Differential power analysis, in: CRYPTO, 1999.
[3] P. Kocher, Timing attacks on implementations of Diffie–Hellman, RSA, DSS, and other systems, in: CRYPTO, 1996.
[4] F. Koeune, F.X. Standaert, A tutorial on physical security and side-channel attacks, in: Foundations of Security Analysis and Design III, 2005, pp. 78–108.
[5] M. Barbareschi, P. Bagnasco, A. Mazzeo, Authenticating IoT devices with physically unclonable functions models, in: 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, 2015, pp. 563–567.
[6] A. Vijayakumar, V.C. Patil, D.E. Holcomb, C. Paar, S. Kundu, Physical design obfuscation of hardware: a comprehensive investigation of device and logic-level techniques, IEEE Transactions on Information Forensics and Security (2017) 64–77.
[7] J. Voas, Fault injection for the masses, Computer 30 (1997) 129–130.
[8] U.S. Senate Committee on Armed Services, Inquiry into counterfeit electronic parts in the Department of Defense supply chain, 2012.
[9] Meltdown and Spectre: Here's what Intel, Apple, Microsoft, others are doing about it, https://arstechnica.com/gadgets/2018/01/meltdown-and-spectre-heres-what-intel-apple-microsoft-others-are-doing-about-it/.
[10] M. Tehranipoor, U. Guin, D. Forte, Counterfeit integrated circuits, in: Counterfeit Integrated Circuits, 2015, pp. 15–36.
[11] R. Torrance, D. James, The state-of-the-art in semiconductor reverse engineering, in: ACM/EDAC/IEEE Design Automation Conference (DAC), 2011, pp. 333–338.
[12] M. Tehranipoor, F. Koushanfar, A survey of hardware Trojan taxonomy and detection, IEEE Design and Test of Computers (2010) 10–25.
[13] Y. Alkabani, F. Koushanfar, Active hardware metering for intellectual property protection and security, in: Proceedings of the 16th USENIX Security Symposium, 2007, pp. 291–306.
[14] G. Qu, F. Koushanfar, Hardware metering, in: Proceedings of the 38th Annual Design Automation Conference, 2001, pp. 490–493.
[15] R. Pappu, B. Recht, J. Taylor, N. Gershenfeld, Physical one-way functions, Science (2002) 2026–2030.
[16] F. Wang, Formal verification of timed systems: a survey and perspective, Proceedings of the IEEE (2004) 1283–1305.
