                    PICMG 2.9 R1.0
CompactPCI
System Management Specification
February 2, 2000
including:
ECN 2.9-1.0-001: Slot Connectivity Data
May 20, 2002





PICMG 2.9R1.0: ECN 2.9-1.0-001

PICMG Specification Engineering Change Notice ECN 2.9-1.0-001

Topic: Slot Connectivity Data
Affected Specification: PICMG 2.9 R1.0
Sponsor(s): Software Interoperability Subcommittee
Participants in Final Ballot Group: APW Electronic Solutions, Artela Systems, Astec Power, Brooktrout, Hybricon, Ibus/Phoenix, Intel, Interphase, Motorola Computer Group, Pentair, Pigeon Point Systems, Rittal/Kaparel, Sanmina, StarGen, Sun Microsystems

ADOPTED May 20, 2002

Description

The focus of this ECN is to enable hardware-independent software to determine the following connectivity for slots within a CompactPCI chassis:
• Geographic addresses (or physical slot numbers)
• Logical PCI addresses (based on the bus, device and function numbers) by which operating systems and device drivers access the boards in those slots
• PCI and H.110 bus connectivity
• HA Hot Swap capabilities
• PICMG 2.16 fabric and/or node connectivity
• PICMG 2.17 fabric and/or node connectivity

This ECN addresses systems with any combination of CompactPCI buses, H.110 buses, PICMG 2.16 fabrics and PICMG 2.17 fabrics. Although the ECN is focused on backplane connectivity, the text and data structures are defined such that future revisions of the PICMG 2.9 specification can easily add other types of descriptive information.

Justification

Software access to the above information is crucial to achieving three higher-level goals:
• For systems based on PICMG 2.9, the CompactPCI System Management specification, enabling integration of information from two critical domains. The system management domain is based on the Intelligent Platform Management Interface (IPMI) and inherently uses physical slot numbers to identify boards. Meanwhile, the operating system domain typically uses PCI logical addresses to refer to boards on a CompactPCI bus. PICMG 2.9 R1.0 does not provide a mechanism for hardware-independent software to correlate information between
these domains, each of which may contain unique information critical to the overall management of a system.
• Enabling application and system software to communicate with an operator about specific boards in the system by designating them in a simple and precise manner that is consistent across systems and vendors. Currently there is no hardware-independent method of determining physical slot numbers for specific boards for the different architectural configurations that vendors are building today and in the near future. The consequences of operator action on the wrong slot can be severe.
• Enabling application and system software to determine the various capabilities of specific slots. Currently there is no hardware-independent method of determining the types of connectivity provided by a slot. This information may be crucial to overall management of the system.

In summary: the overall situation is that there are many possible combinations of features that a slot may implement, but there is no hardware-independent means for software to determine which features are implemented. Additionally, there are many possible ways that logical addresses may be mapped on a backplane, but there is no hardware-independent means to map logical addresses to physical slot numbers for all the possible backplane configurations.

The Software Interoperability subcommittee intends to develop some simple supporting material that may be helpful to implementers. This material could include, for instance, a few examples of backplanes and boards, showing how they could be described by the data structures specified in this ECN. Interested PICMG members can check for this material in the “Software Interoperability Materials” directory on the members-only side of the PICMG website.

Style

Specific proposed changes are provided in the next section, in the style described below. ECN text describing changes in the affected document uses this font.
Text intended for inclusion in the body of the specification uses this font. Section headers for text intended for inclusion in the body of the specification are preceded with the notation:

<Header Level x> y.z Title
where: x indicates the level of header to be used, and y.z indicates the anticipated number that would be automatically generated by Microsoft Word by the inclusion of this new section named Title.

Newly inserted Figure and Table items are assigned numbers in a special series for this ECN. Figures are numbered E01-Fn, with Tables numbered E01-Tn.

Specific Proposed Changes

Section 1.5 Supporting Documents

Update the reference to PICMG 2.1 for the current release number.
• PICMG 2.1 R2.0 CompactPCI Hot Swap Specification
• PICMG 2.0 R3.0 CompactPCI Core Specification, as amended by ECN 2.0-3.0-002

Add references to IPMI V1.5 Revision 1.1, PICMG 2.5, 2.7, 2.16, 2.17, and the PCI-to-PCI Bridge spec.
• Intelligent Platform Management Interface Specification V1.5, Document Revision 1.1 [1]
• PCI-to-PCI Bridge Architecture Specification. PCI Special Interest Group, 5200 N.E. Elam Young Parkway, Hillsboro OR 97124-6497, Phone: (503) 696-2000, http://www.pcisig.com/
• PICMG 2.5 R1.0 CompactPCI Computer Telephony Specification
• PICMG 2.7 R1.0 CompactPCI Dual System Slot Specification
• PICMG 2.16 R1.0 CompactPCI Packet Switching Backplane Specification
• PICMG 2.17 R1.0 CompactPCI StarFabric Specification

New Chapter 5 IPMI Extensions

Add a new chapter to specify optional commands and FRU information records, introduced by this ECN, to describe the connectivity supported by the backplane and boards.

<Header Level 1> 5 IPMI Extensions

This section defines IPMI extensions to provide additional functionality needed for CompactPCI systems. These extensions are implemented via commands and FRU information records.

<Header Level 2> 5.1 Command Extensions

[1] The IPMI v1.5 reference is in addition to the IPMI v1.0 reference. IPMI v1.5 provides necessary context for some of the additions made by this ECN. This ECN, however, does not change the original focus of the other chapters from IPMI v1.0.
This section describes commands to provide functionality beyond that defined by the Intelligent Platform Management Interface Specification that is useful for CompactPCI systems.

<Header Level 3> 5.1.1 Standard Command Format

The Intelligent Platform Management Interface Specification V1.5 Document Revision 1.1 defines Group Extension network functions (2Ch/2Dh). Within the Group Extension network function is a value to identify the defining body. A value of 00h is used to identify PICMG [2] as the defining body. Refer to the Intelligent Platform Management Interface Specification for more details.

All commands defined in this specification shall be sent using a network function of 2Ch and a defining body identifier of 00h (PICMG). All responses shall be sent using a network function of 2Dh and a defining body identifier of 00h (PICMG). Table E01-T1 shows the standard command and response formats.

Table E01-T1 - Standard Command and Response Formats

Request Data:
byte  data field
1     PICMG Identifier (00h)
2:n   Optional command specific data

Response Data:
byte  data field
1     Completion Code
2     PICMG Identifier (00h)
3:n   Optional command specific response data

Table E01-T2 lists the PICMG-defined command values.

Table E01-T2 - PICMG Command Values

Command Name            Value  Section  PICMG Spec
Get PICMG Properties    00h    5.1.2
Get Address Info        01h    5.1.3    2.0/2.5/2.16/2.17
Get Shelf Address Info  02h    5.1.4    2.0/2.5/2.16/2.17

<Header Level 3> 5.1.2 Get PICMG Properties Command

The Get PICMG Properties command returns miscellaneous properties about the implementation of PICMG-defined commands and FRU information. IPM devices and BMCs should implement the Get PICMG Properties command. The Get PICMG Properties command can be used to reduce the amount of polling for FRU Device IDs behind a given IPM Device or BMC, especially if the IPM Device developer allocates FRU Device IDs densely and with lower numerical values.
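To make the standard framing concrete, the following Python sketch encodes a PICMG group-extension request and decodes a response per Table E01-T1. All function and constant names here are illustrative only (they come from neither this ECN nor any real IPMI library), and transport-level framing (IPMB checksums, KCS, slave addresses) is deliberately out of scope.

```python
# Illustrative sketch of the PICMG group-extension command wrapper
# (Table E01-T1). Names are ours, not from the specification.

NETFN_GROUP_REQ = 0x2C   # Group Extension network function, requests
NETFN_GROUP_RSP = 0x2D   # Group Extension network function, responses
PICMG_ID = 0x00          # Defining body identifier for PICMG
CMD_GET_PICMG_PROPERTIES = 0x00

def build_request(cmd, data=b""):
    # Returns (netfn, cmd, request bytes). Request byte 1 is the PICMG
    # Identifier (00h); bytes 2:n are optional command-specific data.
    return NETFN_GROUP_REQ, cmd, bytes([PICMG_ID]) + bytes(data)

def parse_response(raw):
    # Response bytes arrive under NETFN_GROUP_RSP: completion code,
    # PICMG Identifier (00h), then command-specific response data.
    completion_code, picmg_id = raw[0], raw[1]
    if picmg_id != PICMG_ID:
        raise ValueError("not a PICMG-defined group extension response")
    return completion_code, raw[2:]

# Example: a hypothetical Get PICMG Properties response reporting
# PICMG Extensions Version 01h (v1.0), Max FRU Device ID 3, and
# FRU Device ID 0 for the FRU containing the IPM device itself.
cc, data = parse_response(bytes([0x00, 0x00, 0x01, 0x03, 0x00]))
version_minor = data[0] >> 4    # bits 7:4 = BCD encoded minor version
version_major = data[0] & 0x0F  # bits 3:0 = BCD encoded major version
```

Because every PICMG command shares this wrapper, the same `parse_response` step applies unchanged to the Get Address Info and Get Shelf Address Info responses described below.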
Table E01-T3 shows the format of the Get PICMG Properties command.

[2] The IPMI Specification v1.5 defines a value of 0 to indicate the defining body as “CompactPCI.” PICMG has arranged with the IPMI organization to rename this value to “PICMG.”
Table E01-T3 - Get PICMG Properties Command

Request Data:
byte  data field
1     PICMG Identifier - Indicates that this is a PICMG-defined group extension command. A value of 00h shall be used.

Response Data:
byte  data field
1     Completion Code
2     PICMG Identifier - Indicates that this is a PICMG-defined group extension command. A value of 00h shall be used.
3     PICMG Extensions Version - Indicates the version of PICMG extensions implemented by the IPM device or BMC.
      7:4 = BCD encoded minor version
      3:0 = BCD encoded major version
      This specification defines version 1.0 of the PICMG extensions. IPM devices and BMCs implementing the extensions as defined by this specification shall report a value of 01h. The value 00h is reserved.
4     Max FRU Device ID - The numerically largest FRU Device ID implemented by this IPM Device or BMC.
5     FRU Device ID for IPM Device - Indicates a FRU device ID for the FRU containing the IPM device or BMC.

<Header Level 3> 5.1.3 Get Address Info Command

The Get Address Info command returns addressing information for the FRU containing the specified FRU device. IPM devices and BMCs should implement the Get Address Info command.

Multi-board sets that include a BMC or IPM device implementing the Get Address Info command shall:
• Implement a BMC or IPM device on only one of the boards in the set,
• Implement a FRU device for each board in the set,
• Return the valid hardware address for the board containing the specified FRU device, and
• Return the IPMB address(es) of the single BMC or IPM device.

Table E01-T4 shows the format of the Get Address Info command.

Table E01-T4 - Get Address Info Command

Request Data:
byte  data field
1     PICMG Identifier - Indicates that this is a PICMG-defined group extension command. A value of 00h shall be used.
2     FRU Device ID - Indicates an individual FRU device. This byte is optional. If this byte is not present, the command shall return addressing information for the FRU containing the IPM device or BMC.
Response Data:
byte  data field
1     Completion Code
2     PICMG Identifier - Indicates that this is a PICMG-defined group extension command. A value of 00h shall be used.
3     Hardware Address - The hardware address of the FRU containing the specified FRU device. For PICMG 2.x boards this is the geographic address for the slot in which the FRU is installed.
4     IPMB0 Address - Indicates the IPMB address for IPMB0 if implemented. This address applies to the IPM device or BMC that implements this command and is the same irrespective of the FRU Device ID specified in the request. A value of FFh indicates that IPMB0 is not implemented.
5     IPMB1 Address - Indicates the IPMB address for IPMB1 if implemented. This address applies to the IPM device or BMC that implements this command and is the same irrespective of the FRU Device ID specified in the request. A value of FFh indicates that IPMB1 is not implemented.

A normal Completion Code shall be returned when the command executes successfully. A “Parameter out of range” Completion Code shall be returned for any FRU device IDs that are not implemented. A “Requested data not present” Completion Code shall be returned for FRU device IDs that are implemented but not populated.

The Hardware Address field is defined as 8 bits to allow for future expansion. If an address is less than 8 bits, the lower bits shall be used and the upper bits shall be set to zero (0).

<Header Level 3> 5.1.4 Get Shelf Address Info Command

The Get Shelf Address Info command returns the shelf address information known by the IPM device or BMC. IPM devices and BMCs that support a method of determining shelf geographic address information should implement the Get Shelf Address Info command.

Table E01-T5 shows the format of the Get Shelf Address Info command.

Table E01-T5 - Get Shelf Address Info Command

Request Data:
byte  data field
1     PICMG Identifier - Indicates that this is a PICMG-defined group extension command.
A value of 00h shall be used.

Response Data:
byte  data field
1     Completion Code
2     PICMG Identifier - Indicates that this is a PICMG-defined group extension command. A value of 00h shall be used.
3:4   Shelf Address - Indicates the shelf address of the IPM device or BMC. LS-Byte first. For PICMG 2.x systems this is the shelf geographic address.
The Shelf Address field is defined as 16 bits to allow for future expansion. If an address is less than 16 bits, the lower bits shall be used and the upper bits shall be set to zero (0).

<Header Level 2> 5.2 FRU Information Extensions

This section defines FRU information records to describe various features of backplanes, chassis, boards, and other modules. Some of the FRU information records defined by this specification are oriented towards backplanes and chassis, while other records are oriented towards boards or other modules. It is expected that a given module will only implement an appropriate subset of the defined records.

This specification does not define the physical location of any supported chassis or backplane FRU information. System designers should implement backplane and/or chassis FRU information as appropriate for their specific design.

<Header Level 3> 5.2.1 Standard Record Format

The IPMI Platform Management FRU Information Storage Definition defines the format for FRU information, and more specifically, the MultiRecord area. Within the MultiRecord area there are provisions for OEM-defined records. This specification uses OEM records within the MultiRecord area to describe various properties of the backplane, chassis and boards. A specific “manufacturer ID” (12634d / 00315Ah) is used within the OEM records to identify the records as PICMG defined. Refer to the IPMI Platform Management FRU Information Storage Definition for more details.

Each FRU information record defined in this specification begins with a standard MultiRecord header as shown in Table E01-T6.

Table E01-T6 - Standard MultiRecord Header

Byte Offset  Name     Description
0            RTI      Record Type ID - Defined in the IPMI Platform Management FRU Information Storage Definition. For all the records defined in this specification, a value of C0h (OEM) shall be used.
1            EOL/VER  End of List / Version - Indicates the record format version and if this is the last record in the MultiRecord area.
                      7:7 - End of List: 0 if more records exist, 1 if this is the last record.
                      6:4 - Reserved, write as 000b
                      3:0 - Record format version (=010b unless otherwise specified)
2            RL       Record Length - The length of the data following the standard header in bytes.
3            RC       Record Checksum - Used to calculate a zero checksum of the data following the header.
4            HC       Header Checksum - Used to calculate a zero checksum of the standard MultiRecord header.
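As a hedged illustration of the two checksum bytes in Table E01-T6, the Python sketch below builds a standard MultiRecord header. The zero-checksum convention (the checksum byte is chosen so that the bytes it covers sum to zero modulo 256) follows the IPMI Platform Management FRU Information Storage Definition; all names in the sketch are ours, not from the specification.

```python
# Illustrative builder for the standard MultiRecord header of
# Table E01-T6; names are ours, not from the specification.

def zero_checksum(data: bytes) -> int:
    """Return the byte that makes sum(data) + checksum == 0 (mod 256)."""
    return (-sum(data)) & 0xFF

def build_multirecord_header(record_type: int, end_of_list: bool,
                             version: int, record_data: bytes) -> bytes:
    rl = len(record_data)                 # RL: Record Length
    rc = zero_checksum(record_data)       # RC: Record Checksum
    eol_ver = ((1 if end_of_list else 0) << 7) | (version & 0x0F)
    partial = bytes([record_type, eol_ver, rl, rc])
    hc = zero_checksum(partial)           # HC: Header Checksum
    return partial + bytes([hc])

# Example: an OEM (C0h) last-in-list record, format version 010b,
# wrapping three arbitrary data bytes.
hdr = build_multirecord_header(0xC0, True, 0b010, b"\x5a\x31\x00")
```

A reader of FRU data can verify a header by checking that its five bytes sum to zero modulo 256 before trusting the Record Length, and then apply the same test to the record data plus the RC byte.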
Immediately following the standard header is the record-specific data. The record-specific data follows the format shown in Table E01-T7.

Table E01-T7 - General Format of Record-Specific Data

Byte Offset  Name  Description
0:2          PID   PICMG ID - A three byte ID assigned to PICMG. For all records defined in this specification, a value of 12634 (00315Ah) shall be used. This value is stored LS-Byte first.
3            PRI   PICMG Record ID - Indicates the record type as defined in this specification.
4            RFV   Record Format Version - Indicates the version of the record format.
5:(RL-1)     Data  Record type specific data. Variable length.

PRI indicates the specific record type. Values for PRI are defined in Table E01-T8.

Table E01-T8 - PICMG Record ID Values

Record Type                            Value  Section  PICMG Spec
PCI Connectivity                       00h    5.2.2    2.0
PCI Device Mapping                     01h    5.2.3    NA
HA Hot Swap Connectivity               02h    5.2.4    2.1
H.110 Connectivity                     03h    5.2.5    2.5
Backplane Point-to-Point Connectivity  04h    5.2.6    2.16/2.17
On-board StarFabric Connectivity       05h    5.2.7    2.17

<Header Level 3> 5.2.2 PCI Connectivity Record

The PCI Connectivity record describes PCI bus segmentation and IDSEL connection as implemented on the backplane. For each slot a descriptor is provided that describes the PCI connectivity for the associated slot. Backplanes that have PCI buses should provide FRU information containing a PCI Connectivity record in the MultiRecord area. If the backplane FRU information does not include a PCI Connectivity record, but does include other backplane connectivity records defined in this specification, software may assume that the backplane does not support backplane PCI buses.

The format of the PCI Connectivity record data is shown in Table E01-T9.
Table E01-T9 - PCI Connectivity Record Data

Byte Offset                 Name        Description
0:2                         PID         PICMG ID - A three byte ID assigned to PICMG. For all records defined in this specification, a value of 12634 (00315Ah) shall be used. This value is stored LS-Byte first.
3                           PRI         PICMG Record ID - Indicates a PCI Connectivity record (00h).
4                           RFV         Record Format Version - Shall be 0 for this version of the PCI Connectivity record.
5                           EPSDC       Extended PCI Slot Descriptor Count - Indicates the number of Extended PCI Slot Descriptors (EPSDs) present in this record.
6:(n+5)                     PSD[1..n]   PCI Slot Descriptor - An array of n PCI Slot Descriptors indexed by GA, starting with GA=1, where n is the number of slots. Each PCI slot descriptor describes a single slot’s PCI connectivity.
(n+6):(n+2m+5) if present   EPSD[1..m]  Extended PCI Slot Descriptor - If present, an array of m EPSDs. Each EPSD provides additional PCI slot connectivity information for an individual slot.

Each PSD entry provides bit-fields indicating the segment to which the slot belongs as well as the ADxx signal that is connected to the slot’s IDSEL pin. The combination of these two identifiers and the PCI Device Mapping record data from the board that hosts the segment allows a full physical/logical (PCI device number) mapping. Table E01-T10 shows the format of PSD entries.

Table E01-T10 - PCI Slot Descriptor

Bits  Name  Description
3:0   IC    IDSEL Connection - Indicates which ADxx line is connected to the IDSEL pin. IC is set to xx minus 16, where xx is the number of the address line. A value of 0 indicates a standard system slot (which has no IDSEL connection). If the slot is either: 1) system slot capable with a non-zero IC value or 2) a dual system slot, one or more EPSDs for the slot shall be present. In the dual system slot case, one EPSD shall be present for each of the P1/P2 and P4/P5 segments.
7:4   SID   Segment ID - Indicates the physical backplane PCI bus segment.
A value of 0Fh in this field indicates no PCI bus connectivity.

If present, each 16-bit EPSD entry provides bit-fields indicating the slot to which they apply and additional information about the PCI connectivity of that slot. Table E01-T11 shows the format of EPSD entries.
Table E01-T11 - Extended PCI Slot Descriptor

Bits   Name  Description
4:0    GA    Geographic Address - Indicates the geographic address of the slot to which this EPSD applies. Also identifies the PSD that this EPSD extends, when used as an index into the PSD array.
5      SSC   System Slot Capable - A value of 1 indicates that the slot is system slot capable.
9:7    SI    Segment ID - Identifies the physical backplane PCI bus segment to which this EPSD refers.
10     IN    Interface Number - Identifies the backplane interface in this slot that hosts the Segment ID bus segment:
             0 = P1/P2
             1 = P4/P5
15:11  RSVD  Reserved - Shall be 0

<Header Level 3> 5.2.3 PCI Device Mapping Record

Boards that host one or more backplane segments implement a specific mapping from PCI device numbers to ADxx lines for Type 0 configuration cycles targeting each segment. Refer to PICMG 2.0 R3.0 ECN02 for more details. PCI Device Mapping records describe how the board maps PCI device numbers into ADxx lines during PCI configuration cycles, along with other board-specific information about the backplane segment(s) hosted by the board. Boards that provide the enumeration service for backplane PCI segments should provide FRU information containing a PCI Device Mapping record in the MultiRecord area for each hosted backplane segment. The combination of the PCI Connectivity record data from the backplane and the PCI Device Mapping record data from the board enables a full physical/logical (PCI device number) mapping.

Each PCI Device Mapping record is associated with a board in a specific slot that provides enumeration services for a backplane segment through an interface in that slot. The Geographic Address of the slot can be determined by issuing the Get Address Info command to the IPM Device responsible for the board.

The format of the PCI Device Mapping record data is shown in Table E01-T12.
Table E01-T12 - PCI Device Mapping Record Data

Byte Offset  Name  Description
0:2          PID   PICMG ID - A three byte ID assigned to PICMG. For all records defined in this specification, a value of 12634 (00315Ah) shall be used. This value is stored LS-Byte first.
3            PRI   PICMG Record ID - Indicates a PCI Device Mapping record (01h).
4            RFV   Record Format Version - Shall be 0 for this version of the PCI Device Mapping record.
5:6          BPID  Backplane PCI Interface Descriptor - This field describes how
PCI device numbers are mapped into ADxx lines during PCI configuration cycles to the backplane PCI segment.
7 if present         RI  Root ID - This byte is only present if RL is greater than 7. This byte indicates the PCI root from which the slot path (SP) is based. The interpretation of this byte depends on the value of the On-Board Host (OBH) field.
                         For a segment that is hosted on-board (OBH = 1): This value will be 0 for board sets with a single PCI tree. For board sets with multiple PCI trees, this value will be 0 for the first PCI tree, and will be unique and different from 0 for additional PCI trees.
                         For a segment that is hosted by a J1/J2 interface to a peripheral slot (OBH = 0): This value is a FRU Device ID for the board that implements the J1/J2 interface.
8:(RL-1) if present  SP  Slot Path - These bytes are only present if RL is greater than 8. Each byte identifies a PCI to PCI bridge device by encoding the device number and function number by which that bridge is accessed on its primary bus. The device number is in bits 7:3. The function number is in bits 2:0. Successive bytes identify bridges successively further from the root of the on-board PCI tree and closer to a leaf bridge. A zero length slot path indicates that there are no PCI to PCI bridges between the root of the on-board PCI tree and the backplane interface. Multi-board sets shall report the entire PCI slot path on the combined board set as a single on-board PCI slot path.

The 16-bit BPID field provides bit-fields indicating how the board connects to the backplane PCI segment. Table E01-T13 specifies the format of the BPID field.
Table E01-T13 - Backplane PCI Interface Descriptor Field

Bits  Name  Description
4:0   AC    ADxx Constant - This 5-bit field provides the constant that is combined with the PCI device number to determine xx, and thereby the ADxx line used to select a device on this segment.
5     AO    ADxx Operator - This 1-bit field indicates whether a PCI device number is added to (AO=1) or subtracted from (AO=0) the ADxx Constant to determine xx on this segment. That is, ADxx Operator determines “oper” in xx = ADxx Constant oper Device Number.
6     IN    Interface Number - Indicates the interface to which this record applies:
            0 = J1/J2 interface
            1 = J4/J5 interface
7     OBH   On-Board Host - Indicates whether this bus segment is hosted on-board or by a J1/J2 backplane interface:
            0 = This bus segment is hosted by a J1/J2 backplane interface to a peripheral slot.
            1 = This bus segment is hosted by an on-board host.
15:8  RSVD  Reserved - Shall be 0.

<Header Level 3> 5.2.4 HA Hot Swap Connectivity Record

The HA Hot Swap Connectivity record identifies the high availability hot swap capabilities supported by the backplane. For each slot a descriptor is provided that describes the hot swap capabilities for the associated slot. The standard backplane as defined in PICMG 2.0 R3.0 supports Full Hot Swap operation. The absence of an HA Hot Swap Connectivity record in the backplane FRU information implies a standard backplane.

Table E01-T14 specifies the format of the HA Hot Swap Connectivity record data.

Table E01-T14 - HA Hot Swap Connectivity Record Data

Byte Offset  Name  Description
0:2          PID   PICMG ID - A three byte ID assigned to PICMG. For all records defined in this specification, a value of 12634 (00315Ah) shall be used. This value is stored LS-Byte first.
3            PRI   PICMG Record ID - Indicates an HA Hot Swap Connectivity record (02h).
4            RFV   Record Format Version - Shall be 0 for this version of the HA Hot Swap Connectivity record.
5:(n+4)      HSCD  Hot Swap Connectivity Descriptor - An array of n Hot Swap Connectivity Descriptors (HSCDs) indexed by GA, starting with GA=1, where n is the number of slots. Each HSCD describes a single slot’s hot swap connectivity as follows:
                   0 = No radial slot control signals
                   1 = Radial BDSEL# and HEALTHY#
                   2 = Radial BDSEL#, HEALTHY#, and PCI RST#
                   3 = Radial BDSEL#, HEALTHY#, PCI RST#, and M66EN

<Header Level 3> 5.2.5 H.110 Connectivity Record

The H.110 Connectivity record describes the H.110 bus connectivity implemented by the backplane. For each slot a descriptor is provided that specifies the H.110 bus segment connectivity for the associated slot. Backplanes that implement H.110 buses should provide FRU information containing an H.110 Connectivity record in the MultiRecord area.
If the backplane FRU information does not include an H.110 Connectivity record, but does include other backplane connectivity records defined in this specification, software may assume that the backplane does not support backplane H.110 buses.

The format of the H.110 Connectivity record data is shown in Table E01-T15.
Table E01-T15 - H.110 Connectivity Record Data

Byte Offset  Name       Description
0:2          PID        PICMG ID - A three byte ID assigned to PICMG. For all records defined in this specification, a value of 12634 (00315Ah) shall be used. This value is stored LS-Byte first.
3            PRI        PICMG Record ID - Indicates an H.110 Connectivity record (03h).
4            RFV        Record Format Version - Shall be 0 for this version of the H.110 Connectivity record.
5:(n+4)      SID[1..n]  Segment ID - An array of n Segment IDs indexed by GA, starting with GA=1, where n is the number of slots. Each Segment ID indicates to which H.110 bus segment the associated slot is connected. A value of FFh in this field indicates no H.110 bus connection.

<Header Level 3> 5.2.6 Backplane Point-to-Point Connectivity Records

Backplane Point-to-Point Connectivity records describe the point-to-point connections as implemented on the backplane. Backplanes that support point-to-point links should provide FRU information containing Backplane Point-to-Point Connectivity records in the MultiRecord area. If the backplane FRU information does not include a Backplane Point-to-Point Connectivity record, but does include other backplane connectivity records defined in this specification, software may assume that the backplane does not support point-to-point connectivity.

The format of the Backplane Point-to-Point Connectivity record data is shown in Table E01-T16.

Table E01-T16 - Backplane Point-to-Point Connectivity Record Data

Byte Offset  Name  Description
0:2          PID   PICMG ID - A three byte ID assigned to PICMG. For all records defined in this specification, a value of 12634 (00315Ah) shall be used. This value is stored LS-Byte first.
3            PRI   PICMG Record ID - Indicates a Backplane Point-to-Point Connectivity record (04h).
4            RFV   Record Format Version - Shall be 0 for this version of the Backplane Point-to-Point Connectivity record.
5:(m+4)      PTPSDL  Point-to-Point Slot Descriptor List - A list of variable length Point-to-Point Slot Descriptors (PTPSDs) totaling m bytes in length. Each PTPSD describes the number of links and the connectivity for a specific type of point-to-point link in one slot.
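Because the descriptor list is variable length, software must walk it sequentially, using each PTPSD’s link count to find the next descriptor. The following Python sketch illustrates one way to do that against the PTPSD and PTPLD formats detailed in Tables E01-T17 and E01-T18 below. All names are ours, and the sketch assumes the 24-bit PTPLD is stored LS-byte first, consistent with the other multi-byte fields in these records; the specification text shown here does not state the byte order explicitly.

```python
# Hedged sketch of walking a Point-to-Point Slot Descriptor List
# (Tables E01-T16 through E01-T18). Names are ours; LS-byte-first
# ordering of the 24-bit PTPLD is an assumption.

def parse_ptpsd_list(data: bytes):
    """Decode a PTPSDL into a list of per-slot dictionaries."""
    slots, offset = [], 0
    while offset < len(data):
        link_type = data[offset]       # PTPLT: point-to-point link type
        slot_addr = data[offset + 1]   # SA: hardware address (GA in 2.x)
        link_count = data[offset + 2]  # PTPLC: number of PTPLDs following
        offset += 3
        links = []
        for _ in range(link_count):
            # 24-bit PTPLD, assumed LS-byte first (see note above)
            word = int.from_bytes(data[offset:offset + 3], "little")
            links.append({
                "remote_slot": word & 0xFF,         # bits 7:0  (RS)
                "remote_link": (word >> 8) & 0x1F,  # bits 12:8
                "local_link": (word >> 13) & 0x1F,  # bits 17:13 (LL)
            })
            offset += 3
        slots.append({"type": link_type, "slot": slot_addr, "links": links})
    return slots

# Example: one PICMG 2.16 Ethernet Node Slot (type 0) at GA 3 with a
# single link to remote slot 7; remote and local link numbers are both
# 1Fh, i.e. link port T at each end.
example = parse_ptpsd_list(bytes([0x00, 0x03, 0x01, 0x07, 0xFF, 0x03]))
```

The per-type descriptors make it straightforward to build, for instance, a table mapping each node slot’s geographic address to the fabric slot and port it is cabled to.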
Each variable length PTPSD provides fields indicating a specific type of point-to-point connectivity for the corresponding slot. Slots with multiple types of point-to-point connectivity should have one PTPSD for each type of point-to-point connectivity. Table E01-T17 shows the PTPSD format.

Table E01-T17 - Point-to-Point Slot Descriptor

Byte Offset  Name         Description
0            PTPLT        Point-to-Point Link Type - Indicates the type of point-to-point connectivity described by this PTPSD, as follows:
                          0 - Indicates a PICMG 2.16 Ethernet Node Slot
                          1 - Indicates a PICMG 2.16 Ethernet Fabric Slot
                          2 - Indicates an OEM-defined Ethernet slot
                          3 - Indicates a PICMG 2.17 StarFabric Basic Node Slot
                          4 - Indicates a PICMG 2.17 StarFabric Multi-Segment Node Slot
                          5 - Indicates a PICMG 2.17 StarFabric Fabric-Native Node Slot
                          6 - Indicates a PICMG 2.17 StarFabric Fabric Slot
                          7 - Indicates an OEM-defined StarFabric slot
                          All other values are reserved.
1            SA           Slot Address - Indicates the hardware address for this slot. For PICMG 2.x systems this is the geographic address for the slot.
2            PTPLC        Point-to-Point Link Count - Indicates the number of point-to-point links in this slot of the type specified in PTPLT.
3:(3n+2)     PTPLD[1..n]  Point-to-Point Link Descriptors - An array of n Point-to-Point Link Descriptors (PTPLDs) where n is specified in the PTPLC byte. Each PTPLD describes a point-to-point link within the associated slot.

Each 24-bit PTPLD entry provides bit-fields indicating the remote slot and the remote link within the remote slot to which the local link is connected. The term ‘remote’ refers to slots and links of the specified type in slots other than the slot associated with the PTPLD. The term ‘local’ refers to links of the specified type within the slot associated with the PTPLD. Table E01-T18 shows the format of PTPLD entries.
Table E01-T18 - Point-to-Point Link Descriptor

Bits   Name  Description
7:0    RS    Remote Slot - Indicates the hardware address of the remote slot to which this point-to-point link is connected. In PICMG 2.x systems, this is the GA of the remote slot.
12:8   RL    Remote Link - Indicates the link number within the remote slot to which this point-to-point link is connected. For PICMG 2.16 links, a value of 1Fh is used to indicate connection to link port T of the remote slot. For PICMG 2.17 links, links shall be referenced in accordance with the numbering defined for the relevant slot type, except for the special treatment of FabricA and FabricB links defined in this section. A value of 1Fh is used to indicate connection to the FabricA link of the remote slot. A value of 1Eh is used to indicate connection to the FabricB link of the remote slot.
17:13  LL    Local Link - Indicates the link number within the local slot. For PICMG 2.16 links, a value of 1Fh indicates that the local link is link port T of the local slot. For PICMG 2.17 links, links shall be referenced in accordance with the numbering defined for the relevant slot type, except for the special treatment of FabricA and FabricB links defined in this section. A value of 1Fh is used to indicate the FabricA link of the local slot. A value of 1Eh is used to indicate the FabricB link of the local slot.
23:18  RSVD  Reserved - Shall be 0

- END -

PICMG 2.9 R1.0 CompactPCI System Management Specification February 2, 2000
©Copyright 1995, 1996, 1997, 1998, 1999, 2000 PCI Industrial Computers Manufacturers Group (PICMG). The attention of adopters is directed to the possibility that compliance with or adoption of PICMG ® specifications may require use of an invention covered by patent rights. PICMG shall not be responsible for identifying patents for which a license may be required by any PICMG specification, or for conducting legal inquiries into the legal validity or scope of those patents that are brought to its attention. PICMG specifications are prospective and advisory only. Prospective users are responsible for protecting themselves against liability for infringement of patents. Special attention is called to the fact that implementation of an IPMI-based system requires a royalty-free, reciprocal patent license. Additional information on the licensing requirements for IPMI through the IPMI adopter’s agreement can be found in section 1.5 of this document. I2C is a trademark of Philips Semiconductors. I2C is a two-wire communications bus/protocol developed by Philips. IPMB is a subset of the I2C bus/protocol and was developed by Intel. Implementations of the I2C bus/protocol or the IPMB bus/protocol may require licenses from various entities, including Philips Electronics N.V. and North American Philips Corporation. NOTICE: The information contained in this document is subject to change without notice. The material in this document details a PICMG specification in accordance with the license and notices set forth on this page. This document does not represent a commitment to implement any portion of this specification in any company's products. WHILE THE INFORMATION IN THIS PUBLICATION IS BELIEVED TO BE ACCURATE, PICMG MAKES NO WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, WITH REGARD TO THIS MATERIAL INCLUDING, BUT NOT LIMITED TO ANY WARRANTY OF TITLE OR OWNERSHIP, IMPLIED WARRANTY OF MERCHANTABILITY OR WARRANTY OF FITNESS FOR PARTICULAR PURPOSE OR USE. 
In no event shall PICMG be liable for errors contained herein or for indirect, incidental, special, consequential, reliance or cover damages, including loss of profits, revenue, data or use, incurred by any user or any third party. Compliance with this specification does not absolve manufacturers of CompactPCI equipment from the requirements of safety and regulatory agencies (UL, CSA, FCC, IEC, etc.). PICMG and the PICMG and CompactPCI logos are registered trademarks of the PCI Industrial Computers Manufacturers Group. All other brand or product names may be trademarks or registered trademarks of their respective holders.
Table of Contents
1. Overview (Environment)...........................................................6
  1.1 Description and Goals of the Specification....................................6
  1.2 Justification.................................................................6
  1.3 Using this Specification......................................................7
    1.3.1 The Developers............................................................7
    1.3.2 The I2C bus and capacitive loading........................................8
    1.3.3 Hot-swap..................................................................9
    1.3.4 The BMC...................................................................9
    1.3.5 Intelligent versus non-intelligent devices...............................10
  1.4 Definitions..................................................................10
  1.5 Supporting Documents.........................................................11
2. Electrical Characteristics......................................................13
  2.1 Standard Node................................................................13
    2.1.1 Standard Node Parameters.................................................13
    2.1.2 Hot-swap Capability......................................................14
      2.1.2.1 Initialization.......................................................14
      2.1.2.2 Signal Transient Rejection...........................................14
      2.1.2.3 Transmission Violations..............................................14
      2.1.2.4 Protocol Violations..................................................15
    2.1.3 Node Power...............................................................15
  2.2 Non-Standard Node............................................................16
  2.3 Management Bus Topology......................................................16
    2.3.1 Line Loading Limitations.................................................16
    2.3.2 Line Biasing Requirements................................................17
3. System Management Requirements..................................................18
  3.1 Chassis......................................................................18
    3.1.1 Backplane................................................................18
      3.1.1.1 IPMB0................................................................18
      3.1.1.2 IPMB1................................................................18
      3.1.1.3 Treatment of ALERT#..................................................19
      3.1.1.4 IPMB Extension connector.............................................19
    3.1.2 System Management Power..................................................19
    3.1.3 Bridging and Extending...................................................20
  3.2 System Board Computer and BMC................................................20
    3.2.1 Baseboard Management Controller..........................................21
      3.2.1.1 System Interface.....................................................21
      3.2.1.2 Single-ported IPMB...................................................21
      3.2.1.3 Dual-ported IPMB.....................................................21
      3.2.1.4 Optional and Private Busses..........................................22
      3.2.1.5 Repository Storage...................................................22
      3.2.1.6 IPMI Compatibility and Interoperability..............................22
    3.2.2 BMC Deployment...........................................................22
      3.2.2.1 BMC Power............................................................23
      3.2.2.2 System Interface.....................................................23
      3.2.2.3 Single-ported BMC....................................................23
      3.2.2.4 Dual-ported BMC......................................................23
      3.2.2.5 Ancillary BMC Support................................................23
  3.3 Address Allocation for Peripherals...........................................24
    3.3.1 General Allocation Principles............................................24
    3.3.2 Programmatic Allocation of Peripheral Addresses..........................24
      3.3.2.1 Power Supply Management Node Address Mapping.........................25
      3.3.2.2 CompactPCI Peripheral Management Node Address Mapping................25
  3.4 CompactPCI peripheral cards..................................................26
    3.4.1 Peripheral Management Node Minimum Functionality.........................26
  3.5 Peripheral Management Controllers............................................26
4. IPMI Functional Requirements....................................................27
  4.1 BMC Functional Requirements..................................................27
    4.1.1 BMC Management of Message Transfers......................................27
      4.1.1.1 System Interface to IPMB transfers...................................27
      4.1.1.2 IPMB to System Interface Transfers...................................28
      4.1.1.3 IPMB to IPMB transfers...............................................28
      4.1.1.4 System Interface to Optional Bus Transactions........................28
    4.1.2 IPMI Requirements for the BMC............................................29
    4.1.3 Hot-swap Requirements for the BMC........................................29
    4.1.4 I2C Error Recovery Requirements of the BMC...............................29
    4.1.5 Optional BMC Functions...................................................29
      4.1.5.1 Local sensor support.................................................29
      4.1.5.2 FRU Commands.........................................................29
      4.1.5.3 ALERT# Function......................................................30
    4.1.6 IPM Command Functions....................................................30
  4.2 Peripheral Management Controller Functional Requirements.....................31
    4.2.1 PM Address Configuration.................................................31
    4.2.2 Hot-swap Transient Tolerance.............................................31
    4.2.3 IPM Device Functions.....................................................32
    4.2.4 Sensor Device Functions..................................................32
    4.2.5 FRU Device Functions.....................................................32
Figures
Figure 1 - CompactPCI System Management Block Diagram...............................8
Figure 2 - Idealized Schematic of a Standard Node..................................13
Figure 3 - BMC Block Diagram.......................................................21

Tables
Table 1 - Standard Node Parameters.................................................13
Table 2 - I2C Transmission Violation Timeout Limits................................14
Table 3 - System Management Line Parameters........................................16
Table 4 - CompactPCI Backplane Pin Assignments for IPMBs...........................18
Table 5 - IPMB Connector...........................................................19
Table 6 - General Address Allocation per IPMB......................................24
Table 7 - Power Supply Address Allocation..........................................25
Table 8 - CompactPCI Peripheral Card Address Allocation............................25
Table 9 - Reference to IPMI defined commands.......................................30
Table 10 - Dummy Message Format....................................................32
1. Overview (Environment)

1.1 Description and Goals of the Specification

This document defines an implementation of a system management bus in a CompactPCI system. The bus uses an I2C hardware layer and is based on the Intelligent Platform Management Interface (IPMI) and Intelligent Platform Management Bus (IPMB) specifications. The remainder of this chapter is devoted to a survey of the architecture, theory, and issues behind system management, with guidance on how best to use this specification. The following chapters are organized to present first hardware, then software, specifications, requirements, and options.

The main goal of this specification is to provide the PICMG community with the minimum requirements necessary to guarantee interoperability of the system management components that each of them develops. A secondary goal of this specification is to define these requirements in such a way as to allow the maximum use of commercial components. In so doing, developers will have the opportunity to capitalize on the economies of scale provided by the mainstream computer markets. A significant aspect of this specification is the requirement that system management operate in an environment where CompactPCI peripheral cards can be hot-swapped. The additional requirements in both hardware and firmware to achieve this are a key component of this specification. The target audience for this specification is described in section 1.3.1. Readers of this document should be familiar with the specifications referenced in section 1.5.

1.2 Justification

The Intelligent Platform Management Interface (IPMI) was announced by Intel, Dell, Hewlett-Packard Company, and NEC on February 17, 1998 to provide a standard interface to hardware used for monitoring a server's physical characteristics, such as temperature, voltage, fans, power supplies and chassis.
The IPMI specification defines a common interface and message-based protocol for accessing platform management hardware. IPMI is comprised of three specifications: Intelligent Platform Management Interface, Intelligent Platform Management Bus (IPMB) and Intelligent Chassis Management Bus (ICMB). The IPMI specification defines the interface between system software and platform management hardware, the IPMB specification defines the internal Intelligent Platform Management Bus, and the ICMB specification defines the Intelligent Chassis Management Bus, an external bus for connecting additional IPMI-enabled systems. Although IPMI is not tied to a specific operating system or management application, it is complementary to higher level management software interfaces such as:
• the Simple Network Management Protocol (SNMP);
• the Desktop Management Interface (DMI);
• the Common Information Model (CIM); and
• the Windows Management Interface (WMI),
which facilitates the development of cross-platform solutions.
By incorporating this technology into CompactPCI, PICMG leverages the work being done by the IPMI consortium and expands it to include management of the CompactPCI cards themselves, something not being done in standard PCI. For telecommunications applications that require alarming of the type found in the central office telecommunications environment, IPMI is a natural building block.
1.3 Using this Specification

To succeed, this specification must coordinate the design efforts of various parties involved in the design of computer systems that support system management. This section defines and assigns the design responsibilities for the several types of developers whose activities must be coordinated. It concludes with a general discussion, for the benefit of all developers, of the architecture, design requirements, and considerations addressed by the specification.

1.3.1 The Developers

The term developer is used throughout this document as a collective noun for those who create a system or subsystem through design effort, integration of existing designs, or some combination of the two. Within the broad community of developers, the following types are defined here on the basis of the unique goals and responsibilities each has. The developer is a virtual entity. It is quite likely that an actual company fulfills the role of several if not all of the following types of developers. It is just as likely, particularly in the case of the system developer, that more than one company participates in a development function.

System developers are the designers or integrators of the mechanical enclosures, backplanes, and manageable subsystems such as power supplies and peripheral cards that comprise a CompactPCI based product. Commonly, more than one company in a vendor-client relationship may participate in system development. These companies must communicate amongst each other the decisions each has made as part of the overall system development effort. This group is responsible for ensuring that the system management network functions in accordance with this specification and with the intended management applications for all configurations envisioned for the product. This group relies on all the other developers to deliver compliant subsystems or products for incorporation in the final product.
Members of this group should direct particular attention to the electrical characteristics in section 2.3 and the system management requirements of section 3.1. Software developers, in this specification, are the authors who write the management applications, services layers, device drivers, and related code that manages the system. The boundary between system management code and the management network is the system interface of the BMC (see below). Members of this group should direct particular attention to the IPMI functional requirements of section 4. BMC developers are the designers of the Baseboard Management Controller (BMC) that interfaces the host processor running a management application to the system management network. The BMC developer is working at the IC level, principally as a firmware designer. This document and the IPMI and I2C specifications define the interface between the BMC and the system management network. Peripheral developers rely on BMC adherence to these specifications to ensure compatible operation with the BMC. The interface between the BMC and the host processor is the system interface and is largely defined by the IPMI specification. The developer of the subsystem, usually the SBC, in which the BMC is used is most concerned with this facet of the BMC. BMC developers should direct particular attention to the electrical characteristics in section 2.1 and the system management requirements of section 3.2.1. SBC developers are the designers and manufacturers of the System Board Computer (SBC). In many cases, the SBC is the host for the BMC so this group represents the designers of any subsystem that is host to the BMC. Accordingly, this specification only addresses the design requirements for deploying a BMC. Members of this group should direct particular attention to the electrical characteristics in section 2.1 and the system management requirements of section 3.2.2. 
Peripheral developers are the designers and manufacturers of the manageable subsystems that contain IPM devices presented to the management network. Developers of standard peripherals, such as CompactPCI add-in cards, implement a standard node as defined in section 2.1. Developers of other manageable subsystems, e.g. power supplies, fans, etc., are not required to implement a standard node in their product, although it is recommended that they do. Developers that elect to implement non-standard nodes are obligated to publish relevant information for the benefit of the system developer seeking to use their product (see section 2.2). In addition to these electrical characteristics, members of this group should direct particular attention to the system management requirements of sections 3.5 and 4.2.
Figure 1 - CompactPCI System Management Block Diagram

The figure above presents a generic block diagram of a CompactPCI system with system management. The glossary provides an explanation of the acronyms used in the figure. This block diagram and the remainder of this section present the technology and design considerations for CompactPCI System Management. As the design considerations are presented, the need for decisions on feature set and technical tradeoffs will become apparent. The guiding principle of this specification is that architectural requirements placed on the chassis, of which the backplane is a significant component, are minimal in order to provide maximum flexibility in defining products. Conversely, functional requirements placed on the manageable subsystems are comprehensive to ensure operability over a broad range of implementations.

1.3.2 The I2C bus and capacitive loading

The electrical foundation for CompactPCI System Management is the Inter-IC (I2C) bus developed by Philips Semiconductor. The relevant features of this bus are:
• Two-wire serial interface (clock and data);
• Open-collector/drain drivers; the bus is pulled up by biasing;
• Multi-mastering capability; devices arbitrate for the bus using a collision detection scheme; and
• 100 Kilobit data rate; a 400 Kilobit rate is defined but not used in this specification.
The I2C bus is protected by patents held by Philips Semiconductor. IC manufacturers that sell devices incorporating the technology will already have secured the rights to use it, relieving the purchaser of that burden. Regarding this specification, the most significant technical issue with the I2C bus is that it has a defined limit to the capacitive load it is required to drive. The capacitive load is composed of loading presented by the devices as well as loading presented by the transmission medium (signal traces) of the bus itself.
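As a rough illustration of the capacitive budgeting described above, the check below totals device and trace loading against the 400 pF bus limit from the I2C specification. The per-node and per-inch capacitance values used in the example are invented numbers for the sake of illustration, not figures from Table 1 of this specification.

```c
/* Illustrative I2C load-budget feasibility check. The 400 pF limit comes
 * from the I2C specification; all other numbers a caller supplies are the
 * system developer's own budget allocations. */
int bus_load_ok(int node_count, double pf_per_node,
                double trace_inches, double pf_per_inch)
{
    const double budget_pf = 400.0; /* I2C bus capacitance limit */
    double load_pf = node_count * pf_per_node + trace_inches * pf_per_inch;
    return load_pf <= budget_pf; /* 1 if the budget holds, 0 otherwise */
}
```

A system developer splitting the budget between interconnect and devices could, for instance, test whether eight assumed 20 pF nodes on 40 inches of assumed 2 pF/inch trace still fit.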
For large systems, tradeoffs between device count and transmission line type and length may be required. The system developer is ultimately responsible for establishing how much of the load budget is allocated to the interconnection and how much goes to devices. To manage this requirement, this specification defines a "standard node" (section 2.1.1), which establishes a common unit for which system developers can budget and to which peripheral developers can design. The system developer may elect to use non-standard-node-based subsystems and even discrete devices. In such cases, the system developer is obligated to establish the electrical and functional suitability of the non-standard nodes and devices. For this reason, peripheral developers that offer subsystems which do not
implement a standard node must assist this process by providing the information a system developer will need to properly incorporate the subsystem into their product (section 2.2). The open-collector nature of the I2C line drivers requires that each line of the I2C bus be biased by a pull-up resistor. The pull-up biasing is distributed among the various physical elements that comprise the management network. The amount of line biasing required of each element is that which is necessary to drive the capacitive loading that the element brings. Because of this pay-as-you-go approach, the management network exhibits a relatively uniform rise time regardless of how much or how little the network is loaded.

1.3.3 Hot-swap

Besides ensuring the electrical integrity of the I2C bus during normal operation, section 2.1.2 also defines the requirements for a system management node such that it may be plugged into an active system management network. This requirement comes from the desire to support system management on hot-swappable CompactPCI peripherals without compromising their capacity for hot-swapping. This is achieved by a combination of limiting the line transients during the insertion event as well as having the connected devices tolerate a certain amount of glitch energy. The additional constraints placed on the device to make it hot-swap capable - and tolerant - are not difficult to achieve; however, existing devices may not satisfy those constraints. The system developer must be mindful of the potential incompatibility between pre-existing devices and a hot-swap capable system management bus segment. Only the system developer knows whether or not a product requires hot-swap capability. Therefore only the system developer is allowed to compromise the hot-swap capability of the system management bus. As such, both BMC developers and peripheral developers implementing the standard node are obligated to support hot-swapping.
CompactPCI peripherals that are system-management capable must implement a standard node if they are also designed to be hot-swap capable. In cases where the system developer perceives a requirement to use a pre-existing device and to support hot-swap activity on the system management bus, the system developer must make architectural accommodations. For example:
• Verify first that the pre-existing device does not already satisfy the constraints for hot-swap;
• If not, place the device behind a hot-swap capable IPM device which can bridge from the system management bus to the device's I2C interface; or
• Use a BMC with dual ports and use one bus for hot-swap activity and the other for the non-hot-swap activity.
In any case, the system developer must recognize that any device which does not satisfy the constraints for hot-swap renders the bus on which it resides incapable of reliably supporting hot-swap activity.

1.3.4 The BMC

Section 3.2 defines the architectural requirements for the BMC, which is responsible for interfacing between the host of the system management application and the management network. The BMC definition in this specification is an extension of that in the IPMI specification, which defines, but does not require, an IPMB port on the BMC. Indeed, this specification requires one IPMB port and provides for a second IPMB port as an option. Since both a single-ported and a dual-ported BMC are allowed, the system developer must consider the tradeoffs of specifying a system management network around a single IPMB or dual IPMBs when deciding on the architecture of a system. System management networks designed around a single IPMB may use System Board Computers based on either type of BMC and would therefore work with the largest population of SBCs. Systems designed to use dual IPMBs can segregate devices that are incompatible, e.g. legacy devices and hot-swap capable devices (see below).
Also, the electrical constraints imposed by the I2C bus are less confining when two busses are available.
1.3.5 Intelligent versus non-intelligent devices

The IPMI specification is a layer on top of the I2C interface protocol and uses only "Master Transmitter" data transfer formats. The IPMI specification maintains backward compatibility with pre-existing I2C devices by tasking the BMC with low-level communications with such devices. The term "legacy device" is used in this document to refer to those devices, sometimes referred to as non-intelligent or dumb devices, which do not support the full interface protocol of the IPMI specification. Two issues in using legacy devices are the inflexibility in their addressing and the method of exposing them to the management application. Legacy devices were originally hardwired to respond to a particular address. Subsequent versions offered a choice of several predetermined addresses, and recent designs allow address configuration. Intelligent devices provide for address configuration, which gives greater flexibility in managing address allocation. This specification defines in sections 3.3.1 and 3.3.2 the address allocation method for System Management that avoids addressing conflicts among standard nodes. The system developer must ensure against address conflicts when deploying legacy devices of limited addressing flexibility within CompactPCI system management networks. The IPMI specification provides two methods to interface with legacy devices residing on the system management bus. The first is to have the management application be specifically aware of the device and to have it manage the device using IPMI commands; the IPMI specification defines commands which can direct the BMC to pass messages between the management application and a legacy device. The second method has the BMC created with specific awareness of the device. The BMC is then responsible for managing the device and presenting it as a virtual IPM device to the application.
This second approach requires knowledge of the specific devices to be managed prior to developing the BMC. This specification does not require the existence of any legacy devices, but neither does it prohibit them. As such, the BMC is not required to support any legacy devices, although custom BMCs may elect to do so. The decision to develop a custom BMC with legacy device support must be coordinated with the hot-swap objectives of the system in which it is to be deployed. In order to promote a market for a "standard" BMC, this specification recommends that, where possible, legacy devices be located on private I2C busses off the BMC and, in any case, that the system management application be responsible for managing these devices.

1.4 Definitions

Glossary
BMC - Baseboard Management Controller
FRU - Field Replaceable Unit
ICMB - Intelligent Chassis Management Bus
IPM device - Intelligent Platform Management compliant device
IPMB - Intelligent Platform Management Bus
IPMI - Intelligent Platform Management Interface
PM - Peripheral Management controller
RAID - Redundant Array of Inexpensive Drives
SBC - System Board Computer
SDR - Sensor Data Record
SRT - Synchronous Receiver / Transmitter
Terms used in this document

Shall - Indicates a mandatory requirement in order to claim conformance with this specification.
Should - Indicates a preferred implementation but with a flexibility of choice.
May - Indicates a flexibility of choice with no implied preference.
Peripheral Management Controller (PM) - Any intelligent IPM device that is not the BMC and resides within the chassis. A special class of PMs are those that reside on the CompactPCI peripheral cards.
Management Network - Refers to the full extent of interfaces to which the peripheral management controllers and the BMC are connected. The management network consists of at least one IPMB but may refer to more than one IPMB.
Management Bus - Refers to a single, physical interconnection of management controllers. In the electrical section of the specification, the term "line" is used interchangeably. This specification contemplates an implementation where multiple bus segments may be electrically buffered together into a single functioning IPMB. The IPMB consists of at least one management bus but may refer to more than one such bus.
Standard Node - A parametrically defined connection to the management network which includes an IPMI compliant device.
System Board Computer (SBC) - The computer that runs the system management application software and acts as host for the management network's BMC. Traditionally, this is the hardware that resides in the system slot of the CompactPCI chassis. While a product with more than one system board can be contemplated, only one system board (at least, at a time) will serve as the host for the BMC.

1.5 Supporting Documents

PICMG maintains a System Management Website with additional supporting documentation, application information, updates to CompactPCI specific commands, industry links and other up-to-date information. As of the date of issue of this document, this website is located at http://www.picmg.org/gcompactpcisms.htm.
As of the date of issue of this document, the following specifications can be found at http://developer.intel.com/design/servers/ipmi/spec.htm
• IPMI Specification V1.0, Document Revision 1.1 - Defines the messages and system interface to platform management hardware. (August 26, 1999)
• IPMB Specification V1.0 - Defines an internal management bus for extending platform management within a chassis. (September 16, 1998)
• Platform Event Trap Specification V1.0, revision 1.00 - Defines a common format for SNMP Traps generated by platform management hardware, BIOS, or system boot agents. (December 7, 1998)
• IPMI Platform Management FRU Information Storage Definition V1.0, Document Revision 1.1 - Defines and describes the common format and use of the FRU (Field Replaceable Unit) Information storage in platforms using IPMI. (September 27, 1999)
• IPMB Address Allocation V1.0 - Presents the allocation and use of I2C slave addresses for devices on the IPMB. (September 16, 1998)
As of the date of issue of this document, the following specification can be found at http://developer.intel.com/design/servers/ipmi/tools.htm
• IPMI Developer's Guide (Draft rev 0.7) - A companion document for the IPMI Specification, this manual contains a brief introduction to the key elements of IPMI and provides information on how to use the IPMI Specification and implement IPMI as part of a management system. (September 16, 1998)
As of the date of issue of this document, instructions for executing the IPMI adopters agreement can be found at http://developer.intel.com/design/servers/ipmi/index.htm - contributor and copies of the agreement are available at http://developer.intel.com/design/servers/ipmi/adopter.pdf

As of the date of issue of this document, the following specification can be found at http://www-us.semiconductors.philips.com/i2c/support/ - general

• The I2C-bus specification version 2.0 (December 1998)

As of the date of issue of this document, the following specifications are available from PICMG at https://www.picmg.org/gspecorderformsec.htm or by contacting PCI Industrial Computer Manufacturers Group (PICMG), 401 Edgewater Place, Suite 600, Wakefield, MA 01880 USA, Tel: 781.246.9318, Fax: 781.224.1239.

• PICMG 2.0 R3.0 CompactPCI Core Specification
• PICMG 2.1 R1.0 CompactPCI Hot Swap
• PICMG 2.11 R1.0 Power Interface Specification
2. Electrical Characteristics

This section defines electrical parameters, limitations, and requirements for all the components of the system management network. The peripheral developer is the target audience for sections 2.1 and 2.2. The system developer is the target audience for section 2.3.

2.1 Standard Node

The standard node establishes generic limits within which a peripheral developer can deploy a PM. The standard node is defined by its electrical characteristics and also by certain functional requirements. The functional requirements are necessary to extend IPMI to the hot-swap environment and to establish a convention for managing chassis with power faults.

2.1.1 Standard Node Parameters

The elements of the standard node are depicted in Figure 2. Standard nodes shall be implemented within the parametric limits supplied in Table 1. Additional explanatory text follows.

The "I2C Transceiver (Xcvr)" in Figure 2 represents only the line interface portion of one of the IPMB signals of the PM. The term "driver" may be used to refer to the transceiver, particularly when describing its output behavior. The transceiver shall comply with the electrical characteristics of the I2C specification. Additionally, it shall comply with the hot-swap requirements of section 2.1.2.

Figure 2 - Idealized Schematic of a Standard Node

RP is the pull-up resistor for the standard node. RP shall be connected directly to VSM so that line biasing is not affected by a load fault local to the node. The node series resistor RS is optional as indicated in Table 1. It is provided for controlling electrical transients that may occur during insertion associated with hot-swapping. It also may be used to keep strong drivers at or above the minimum node fall time. The location of the series resistor along the interconnection has negligible effect on the performance of the node and is therefore not specified.
  Standard Node Parameters       Min     Max     Units
  Sys Mgt. Voltage (VSM)         4.85    5.25    Volts
  VSM Ramp Time (TSLEW)          -       100     ms
  DC Node Current (ISM)          -       100     mA
  Peak Node Current (ISMPK)      -       500     mA
  Pull-up (RP)                   34000   72000   Ohms
  Series resistor (RS)           0       200     Ohms
  Capacitive Loading             10      20      pF
  Node Fall time                 5       250     ns
  Input Glitch rejection         -       50      ns

Table 1 - Standard Node Parameters

The maximum capacitive load that the standard node can present is specified in Table 1. This figure includes the capacitive loading of the I2C transceiver, the transmission line (PCB trace) between the device and the connector, any vias associated with the PCB trace, and the connector itself. The amount of allowable interconnection is therefore derived from the available capacitance remaining after accounting for all the capacitive loading except for that of the PCB trace. Nominally, device capacitance is half the capacitive load of the node. It is the responsibility
of the peripheral developer to account for the effects of transmission line segments on the board when computing total capacitive loading. Since line loading is a function of the transmission line geometry, the peripheral designer must know the distributed loading of the transmission line used and must determine the implementation-specific length limitation. This effective length limit applies to the sum of the lengths of the interconnection segments. The standard node is powered from the system management voltage domain as detailed in section 2.1.3.

2.1.2 Hot-swap Capability

The requirement to maintain stable system management operation in a hot-swap environment places additional constraints on devices joining the system management network as well as devices already resident on the network during an insertion event. The standard node is required to be hot-swap capable. To be hot-swap capable, a device shall minimize line disturbance as defined in section 2.1.2.1 and shall tolerate insertion anomalies as defined in section 2.1.2.2 below. Additionally, the device shall recover from irregular bit streams as defined in sections 2.1.2.3 and 2.1.2.4 below. Power requirements for hot-swap capability are defined in section 2.1.3.

2.1.2.1 Initialization

The standard node shall implement adequate safeguards against excessive disturbance of the system management network as the node is being inserted into a live system. This requirement is met by ensuring that the clock and data lines of the I2C device are held in their high-impedance state from the time the node is powered until the node is initialized and ready to initiate a transaction on the bus. Protection against excessive disturbance of the system management power domain is addressed in section 2.1.3.

2.1.2.2 Signal Transient Rejection

The design limits for signal transients produced by the insertion and extraction events associated with hot-swapping are defined in Table 1.
Hot-swap capable I2C devices are required to reject any line transient within these limits. Rejecting the transient means the device makes the same interpretation of the signaling level as it would if the transient were not there. This requirement is essentially the same as the glitch rejection requirement for fast-mode I2C devices. Although this specification is based on the 100 Kilobit/sec data rate of standard-mode I2C devices, the fact that 400 Kilobit/sec, fast-mode devices can be clocked at the lower rate suggests one means of achieving the glitch rejection required by this section. In any case, peripheral developers are responsible for selecting the appropriate I2C device in implementing their standard management node.

2.1.2.3 Transmission Violations

Transmission violations are signal sequences that will cause management controllers to behave abnormally, blocking or corrupting some or all transmissions on the I2C bus. These sequences do not occur normally but could occur as a result of a device being removed at an inopportune time. These violations rely on a timeout mechanism for detection and recovery. Table 2 lists nominal values for the timeout parameters, for which the I2C and IPMB specifications are the controlling documents. Hot-swap capable I2C devices are required to recover from the violations defined below.

  Parameter                           Min   Max   Units   Source Specification
  Overall Message Duration T1         -     20    ms      IPMB Chapter 4
  Time-out waiting for bus free T2    60    -     ms      IPMB Chapter 4
  I2C Clock Low hold T8               -     3     ms      IPMB Chapter 4
  I2C Clock Low hold T8               4.7   -     µs      I2C Table 5

Table 2 - I2C Transmission Violation Timeout Limits
Aborted Transfer is the case when the bus goes dormant any time after a start condition and before a corresponding stop condition. Normally, once a master asserts the start condition, all other devices not already arbitrating for the bus will recognize the bus as being busy until a stop condition is detected. The master, having won the arbitration, is required to complete the transmission within the maximum timeout parameters of chapter 4 of the IPMB specification. The bus is in a busy state immediately after the start condition. The bus is dormant if, before a stop condition occurs, the SDA and SCL lines are high for a period of time greater than the minimum "Time-out waiting for bus free" period, T2, specified in chapter 4 of the IPMB specification. A master shall be capable of detecting a dormant bus and shall not consider a dormant bus to be busy; i.e., the master is free to arbitrate for a dormant bus. The selection of the base device used to implement the standard node will have a bearing on the means used to tolerate an aborted transfer. See the implementation note in section 4.2.2. If the standard node uses a dummy transaction to resynchronize the bus, the means for transmitting the dummy transaction (e.g. connecting additional ports of a microcontroller to SDA and SCL) shall not cause the standard node to violate the electrical loading requirements of section 2.1.1.

Stuck Line is the case when either the clock or data line is held active low beyond its respective limit. An I2C device shall not hold the clock low longer than the maximum "I2C Clock Low hold" limit, T8, specified in chapter 4 of the IPMB specification. It shall not hold the data line low longer than the maximum "Overall Message Duration" limit, T1, in the IPMB specification.
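The timeout-based detection of these two violations reduces to tracking how long a line condition persists. The sketch below is illustrative only: the sampling interface (a list of timestamped line states) is hypothetical, while the T1, T2 and T8 thresholds are the nominal limits from Table 2.

```python
# Illustrative monitor for the Table 2 timeout violations. The trace
# format -- a list of (t_ms, sda, scl) samples taken after a START
# condition -- is an assumption of this example, not part of the spec.

T1_MS = 20.0   # max overall message duration (bounds how long SDA may stay low)
T2_MS = 60.0   # min time-out waiting for bus free (dormant-bus threshold)
T8_MS = 3.0    # max I2C clock-low hold

def _held(since, cond, t):
    """Return the start time of a condition that is continuously true,
    or None once the condition is broken."""
    return (since if since is not None else t) if cond else None

def classify(samples):
    """Return the set of violation labels observed in the trace."""
    seen = set()
    idle = scl_low = sda_low = None
    for t, sda, scl in samples:
        idle = _held(idle, sda and scl, t)        # both lines high
        scl_low = _held(scl_low, not scl, t)      # clock held low
        sda_low = _held(sda_low, not sda, t)      # data held low
        if idle is not None and t - idle > T2_MS:
            seen.add("dormant_bus")   # a master may re-arbitrate
        if scl_low is not None and t - scl_low > T8_MS:
            seen.add("stuck_scl")
        if sda_low is not None and t - sda_low > T1_MS:
            seen.add("stuck_sda")
    return seen
```

A trace in which both lines stay high for more than 60 ms after a START is flagged as a dormant bus; a clock held low past 3 ms is flagged as a stuck line.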
See the implementation note in section 4.2.2. It has been observed that some devices with integrated I2C hardware (Synchronous Receiver/Transmitter or SRT) can lock up when exposed to a clock signal with an extended active low duration. Accordingly, all I2C devices are required by this specification to function properly when either the clock or data line is held low up to its respective maximum limit. It is recommended that IPM devices be capable of isolating their clock and data lines from the bus. Devices with the capability of isolating their clock and data lines should remove themselves from the bus if they have reason to believe they are the offending device.

2.1.2.4 Protocol Violations

Protocol violations are message sequences that do not follow the format prescribed by the IPMB specification. These anomalies occur above the I2C protocol, i.e., protocol violations can occur with perfectly valid I2C transactions. The Functional Specification chapter of the IPMB specification describes various protocol violations from which intelligent devices are required to recover. In several cases, the protocol violation is defined in terms of a time limit on expected actions. The Timing Specifications chapter of the IPMB specification sets these timeout limits. Hot-swap IPM devices are required to meet the protocol violation recovery requirements of the IPMB specification.

2.1.3 Node Power

The standard node powers its PM from the system management power domain denoted as VSM in Figure 2. The VSM power domain is separated from other power domains in order to enable the system developer to supply power from an auxiliary source, such as a battery, during a failure of chassis power. It is the responsibility of the system developer to provide a stable, non-interrupted VSM power domain. If the power domain is battery-backed, or otherwise powered from an auxiliary source, VSM shall not be interrupted any time system management is intended to operate.
If no auxiliary power is provided, the system developer shall connect VSM to a 5V power source. System developers and backplane designers should refer to section 3.1.2 for additional information on providing VSM. Figure 2 illustrates a typical application of VSM to a standard node. Take special note that the connection of pull-up resistor RP to VSM shall be directly to the pin of the connector. This is done in order to retain the integrity of the pull-up function in the face of an on-board VSM fault. All other components of the node shall be isolated from VSM through the protection circuitry. Table 1 specifies the limits for maximum DC current ISM and peak instantaneous current ISMPK that may be drawn by the PM. Maximum DC current defines the limit for average current draw over an appreciable
period of time. Maximum peak instantaneous current limit is defined to protect the connector, to bound system startup current, and to budget for power decoupling required for hot-swap transient limiting. Current that exceeds the DC limit constitutes a load fault. Any current in excess of ISM but less than ISMPK is considered surge current, and the peripheral shall ensure that the VSM domain is only subjected to surge current for a limited amount of time. The duration of a surge is unspecified and is a function of the current limiting circuitry described below. The peripheral shall never subject VSM to current draws in excess of ISMPK, particularly during hot-swap insertion. Note that for this reason, decoupling capacitance for the node shall be isolated through the protection circuitry from VSM and should be kept to a minimum. The VSM domain shall be guarded from the load faults defined above by means of appropriate current limiting circuitry. The current limiting circuitry should be designed as much as practical to minimize surge current duration, e.g. through careful sizing of a fuse or through active means such as a FET switch. The current limiting circuitry shall ensure that instantaneous current draw never exceeds ISMPK, e.g. through appropriate series resistance. The less current load the node presents, the more quickly VNODE will ramp to a useable level on board. This is desirable in that the node will more quickly begin its monitoring chores. However, the node may also be exposed to relatively slow ramp rates of VSM during global power up of the system. The worst case (maximum) ramp-on time TSLEW is defined in Table 1. The standard node shall implement a power-on reset circuit that operates within the limits of TSLEW.

2.2 Non-Standard Node

Any device resident on the management network that does not meet all the requirements of section 2.1 is defined to be a non-standard node.
The peripheral developer that produces a product containing a non-standard management node shall publish those characteristics of section 2.1 with which the node does not comply. All non-complying electrical characteristics, including any non-compliant loading presented by the non-standard node, shall be made available to the system designer for this purpose. In the case where non-compliance results from exceeding a parametric limit (e.g. line loading), the peripheral developer shall publish the new value of the parametric limit, which the product will meet. Note that this section is not applicable to CompactPCI peripheral cards, as they are required to have standard nodes.

2.3 Management Bus Topology

This section specifies the electrical requirements the system developer shall meet to produce a compliant management network within which the nodes of the previous sections will function properly. In the course of allocating load budget to devices and network interconnection, the system developer shall comply with the loading requirements of section 2.3.1 and the biasing requirements of section 2.3.2.

2.3.1 Line Loading Limitations

Table 3 lists the maximum capacitive loading for each signal of the management network. The parameter is called signal loading and is defined as the capacitive loading resulting from devices connected to the trace as well as the capacitive loading of the trace itself. The signal loading value in Table 3 is included here for reference only. The I2C specification is the controlling document for maximum capacitive loading.

  Line Parameters    Min   Max    Units
  Signal Loading     -     400    pF
  Line resistance    -     1      ohm
  Current Capacity   5     -      mA
  Bias Time Const.   700   1400   ns

Table 3 - System Management Line Parameters

The system management network shall be designed such that the signal loading shall be less than or equal to the maximum signal loading value of the I2C specification as referenced in Table 3.
The total capacitive load shall be calculated from the worst-case device loading plus the line loading of the network. The worst-case
device loading is determined from all intended configurations of the product that affect the system management network, including build options and intended installations of removable subsystems. In addition to the line loading constraints discussed above, each line of the system management bus shall provide satisfactory conducting capabilities as follows. No path between any two nodes of the management network shall exhibit a resistance in excess of the line resistance defined in Table 3. All signal lines of the management network shall be capable of carrying no less than the current capacity defined in Table 3 over the temperature range defined for the product.

2.3.2 Line Biasing Requirements

This specification has adopted a distributed approach to biasing the open-collector lines of the I2C bus. This pay-as-you-go approach requires that each element of the bus provide pull-up biasing proportional to the capacitive loading that the element brings. The proportion is chosen so that, as the management network is loaded toward its capacitive limit, it is biased more strongly to its DC drive limit. The proportion is expressed as a time constant, which is the product of the bias resistance and load capacitance. The line biasing requirements of this section apply to the fixed media of the system, such as backplane traces, cables and related conducting paths between field-replaceable nodes. Each line of the bus shall be biased to the system management voltage through a resistance such that the product of the medium's line capacitance and the biasing resistance shall be within the minimum and maximum bias time constants defined in Table 3.
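The two Table 3 checks that fall to the system developer, total signal loading and the bias time constant, amount to simple arithmetic. The sketch below is illustrative: the node and trace capacitance inputs are assumed example values for one candidate design, not figures from this specification.

```python
# System-developer checks against Table 3. Capacitance inputs are
# assumed example values; only the limits themselves come from the spec.

MAX_SIGNAL_LOAD_PF = 400.0            # per the I2C specification (Table 3)
TAU_MIN_NS, TAU_MAX_NS = 700.0, 1400.0

def signal_load_ok(node_loads_pf, trace_pf):
    """Worst-case device loading plus line loading must stay in budget."""
    return sum(node_loads_pf) + trace_pf <= MAX_SIGNAL_LOAD_PF

def bias_pullup_window_ohms(line_pf):
    """Pull-up range giving an RC product inside the Table 3 window.
    tau[ns] = R[ohm] * C[pF] / 1000, so R = 1000 * tau / C."""
    return 1000.0 * TAU_MIN_NS / line_pf, 1000.0 * TAU_MAX_NS / line_pf

# Eight standard nodes at their 20 pF maximum plus an assumed 100 pF of
# backplane trace:
ok = signal_load_ok([20.0] * 8, trace_pf=100.0)
r_lo, r_hi = bias_pullup_window_ohms(100.0)
```

For a 100 pF fixed medium, the 700-1400 ns window corresponds to a backplane pull-up between 7 kOhm and 14 kOhm per line.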
3. System Management Requirements

This chapter contains operational requirements for the overall system and the various subsystems within a managed chassis.

3.1 Chassis

The system developer is the target audience for this section. In this section, chassis refers to the collection of all components in the product except for the SBC, CompactPCI add-in cards and other pluggable, manageable subsystems. Generally, what remains is the backplane, cables and related conducting paths between the PMs and possibly some integrated, non-removable system management devices.

3.1.1 Backplane

If a product contains an SBC and at least one CompactPCI peripheral, then the backplane topology requirements of this section apply. As discussed previously, the architectural decision to use one or both channels of the BMC rests with the system developer and is reflected in the routing of the IPMBs on the backplane. The management network interface between the SBC and the backplane is defined by the pinout of Table 4.

  Connector        J1                             J2
  Bus              IPMB 0              VSM        IPMB 1             ALERT#
  Signal           SCLK      SDAT      -          SCLK     SDAT      -
  Pin name         IPMB_SCL  IPMB_SDA  IPMB_PWR   SMB_SCL  SMB_SDA   SMB_ALERT#
  System Slot      B17       C17       A4         D19      C19       E19
  Peripheral Slot  B17       C17       A4         -        -         -

Table 4 - CompactPCI Backplane Pin Assignments for IPMBs

3.1.1.1 IPMB 0

Routing of IPMB 0 is mandatory to all J1 connectors on the CompactPCI bus per the pinout of Table 4. Nonetheless, the routing of IPMB 0 is still subject to the topology limitations of section 2.3, and the system developer shall ensure both these requirements are satisfied. The topology limitations are sufficient to support a standard CompactPCI configuration of seven peripherals and a system slot. Attention must be paid to the topology limitations when more involved configurations are contemplated. See section 3.1.3. Within the constraints of those same topology limitations, the system developer may elect to route IPMB 0 to additional manageable resources in the chassis.
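For software or scripts that need the Table 4 routing, the assignments reduce to a small lookup table. This encoding is purely an illustrative restatement of Table 4, not a normative data format.

```python
# Table 4 pin assignments keyed by slot type and signal name.
# None marks signals that are not routed to peripheral slots.

IPMB_PINS = {
    "system": {
        "IPMB_SCL": "J1-B17", "IPMB_SDA": "J1-C17", "IPMB_PWR": "J1-A4",
        "SMB_SCL": "J2-D19", "SMB_SDA": "J2-C19", "SMB_ALERT#": "J2-E19",
    },
    "peripheral": {
        "IPMB_SCL": "J1-B17", "IPMB_SDA": "J1-C17", "IPMB_PWR": "J1-A4",
        "SMB_SCL": None, "SMB_SDA": None, "SMB_ALERT#": None,
    },
}

def pin_for(slot, signal):
    """Backplane pin for a signal, or None where Table 4 leaves it unrouted."""
    return IPMB_PINS[slot][signal]
```

The lookup makes the asymmetry of the table explicit: IPMB 0 and IPMB_PWR reach every slot on J1, while the IPMB 1 and ALERT# pins exist only on the system slot's J2.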
Pursuant to section 2.3.2, the chassis shall contain appropriate biasing for the capacitive loading associated with this bus. If the management network contains a bridge to an ICMB, locating the bridge on this IPMB segment is recommended. See section 3.1.3.

3.1.1.2 IPMB 1

Support for IPMB 1 on the backplane is optional. If the SBC in the system slot is equipped with a dual-ported BMC, IPMB 1 is available to the backplane at the J2 connector of the system slot per Table 4. The routing of this IPMB segment is unspecified and is therefore at the disposal of the system developer, subject to the topology limitations of section 2.3. Note that the pin numbers defined in Table 4 for IPMB 1 on the system slot correspond to reserved pins on the J2 connectors of the peripheral slots per the PICMG core specification. Therefore a compliant backplane will not bus IPMB 1 to the J2 connectors of the peripheral cards. This IPMB shall be electrically biased per section 2.3.2.
3.1.1.3 Treatment of ALERT#

The ALERT# signal is a low-true, wire-OR'ed open collector signal. This signal is generated by certain legacy sensor devices to indicate to their management applications that they are in need of service. This signal is used at the option of the system developer that has a requirement to support a population of this class of device. ALERT# is routed in common from these devices to J2 as SMB_ALERT# (refer to Table 4) and ultimately connected to the ALERT# pin on the BMC. See section 4.1.5.3. Source current for ALERT# is provided by a 4.7K 5% resistor pulled up to VSM located on the SBC as described in section 3.1.8 of the CompactPCI Core Specification.

Implementation Note

Backplane providers do not typically deploy this class of sensor device on their own behalf. For maximum applicability of their product, these providers may elect to provide IPMB connectors for IPMB 0 or IPMB 1 (or both). If they elect to provide these connectors, they shall route ALERT# to the appropriate pins per section 3.1.1.4. Note that regardless of which IPMB the connector supports, the ALERT# pin is always routed to J2 per section 3.1.1.

3.1.1.4 IPMB Extension Connector

The IPMB may be implemented over multiple physical segments subject to the topology limitations of section 2.3. This section defines a common connection method for cabling remote system management components. The standard connector for PICMG System Management off-board extension is the Molex 1.25mm pitch, Micro Wire-to-Board and Wire-to-Wire series or mateable equivalent in a five-pin configuration. This connector series is available in vertical and right angle, surface-mount and through-hole versions. The standard extension connector for CompactPCI system management is Molex part number 53398-0590 or equivalent. The reference receptacle is Molex part number 51021-0500 or equivalent.
Table 5 lists the pinout of the connector when used in a PICMG System Management application.

  Extension Connector Pin Assignment
  Pin   Signal
  1     SCL
  2     GND
  3     SDA
  4     VSM
  5     ALERT#

Table 5 - IPMB Connector

The system developer that chooses to route an IPMB to a connector is reminded that, with the addition of cable and device(s) attached to the connector, the composite network is still subject to the total loading limits of section 2.3. With this in mind, the system developer shall determine and communicate the restrictions that apply to whatever may be connected to the connector. The maximum current through any pin shall be less than or equal to 1 Amp. Power faults (e.g. a bad cable with a short between the power and ground wires) may be applied to the connector, and appropriate measures shall be taken to respect the per-pin current limitations. The wiring should be chosen so that crosstalk and interference are minimized between the bus lines. The pin-out arrangement has been chosen such that SCL can be paired with GND, and SDA can be paired with VSM. Bypass capacitors must be placed close to the VSM and GND pins of the connector at each end of the extension wiring to ensure that VSM provides an effective return path for signal current.

3.1.2 System Management Power

This power domain is defined as the primary power source for all nodes on the system management bus per section 2.1.3. At the system developer's option, this power domain may be sustained during chassis power failures through battery backup or similar means. While this is the intended purpose of this power domain, those system developers not requiring this degree of system management support shall connect this domain directly to a 5V power source. It is preferred that this domain is sustained during a chassis power outage, in which case the voltage shall remain in tolerance as chassis power transitions in and out of operation.
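A worked sizing of the VSM supply per the rules of this section: total DC demand plus one node's surge, and the droop the backplane decoupling must cover while the supply responds. The per-node limits are from Table 1 and the 400 uF / 100 us figures from the implementation note in this section; the eight-node count is an assumed example configuration (one system slot plus seven peripherals).

```python
# Worked VSM supply budget. All limits are from Table 1 and the section's
# implementation note; the node count is an assumed example configuration.

ISM   = 0.100         # amps, max DC draw per node
ISMPK = 0.500         # amps, max peak draw per node
C_DECOUPLE = 400e-6   # farads of backplane decoupling capacitance
T_RESPONSE = 100e-6   # seconds of power-supply transient response

def vsm_budget(n_nodes):
    dc = n_nodes * ISM            # steady-state supply current
    surge = dc + ISMPK            # DC total plus one node at full surge
    # Droop while the decoupling alone carries a full one-node current
    # step, before the supply responds: dV = I * t / C.
    droop = ISMPK * T_RESPONSE / C_DECOUPLE
    return dc, surge, droop

dc_a, surge_a, droop_v = vsm_budget(8)
```

For eight nodes this gives 0.8 A of DC demand, 1.3 A of required surge capacity, and a 0.125 V droop during the supply's response window; a rail regulated at 5.0 V drooping that much stays above the 4.85 V floor of Table 1, which is exactly the coordination this section asks the system developer to verify.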
Voltage and current tolerances for system management power are defined in Table 1. Total power demand for system management power is calculated by totaling the DC current draw for each node. Required surge current capacity is calculated by adding to the total DC current the maximum surge current for one node. Surge current is defined to provide for the clearing of a power fault, e.g. by blowing a fuse or tripping a breaker. Because the duration of the surge current is not specified, the delivery of surge current must be coordinated between the decoupling capacitance on the backplane and the source of VSM. System management power shall be decoupled and distributed through conductors adequately sized to allow the surge current to flow while maintaining system management voltage at all points in the domain within the tolerances defined in Table 1.

Implementation Note

In order to enable the development of standard backplanes, a predetermined decoupling requirement provides for a standard allocation of surge response between the backplane and the supply for VSM. These backplane developers should provide a minimum of 400 µF of decoupling capacitance on the VSM voltage domain. System developers using such backplanes should use a power supply with adequate transient response time to meet the voltage transient response to an ISMPK current load step on VSM as required in the previous paragraph of this section. Typically, this implies a transient response time of no more than 100 microseconds on the part of the power supply. In any case, the system developer maintains the responsibility for ensuring that the transient response of VSM is properly coordinated between the backplane and power supply used.

3.1.3 Bridging and Extending

The existence of repeater devices for the I2C bus provides a means of extending the IPMB over multiple physical segments. Through these repeaters, the multiple electrical segments function as a single, virtual IPMB.
In such a case, the electrical constraints of section 2.3 apply to each electrical segment individually. Conversely, the addressing requirements that follow in this section apply globally to all the devices connected to the one, virtual IPMB. The IPMI specification defines a device to bridge from an IPMB to an Intelligent Chassis Management Bus (ICMB). The bridge device allows the IPMB to join a community of IPMBs in a multi-chassis CompactPCI system management network. The intent in bridging IPMI segments is to allow all devices to communicate with each other. Refer to the ICMB specification for guidance on how multiple chassis can be interconnected using this technology. Since IPMB 0 on J1 is specified to be available and bussed to all CompactPCI cards, it is recommended that ICMB bridges reside either on IPMB 0 or on both IPMB 0 and IPMB 1.

3.2 System Board Computer and BMC

The target audience of this section is principally the designer of the BMC. The SBC developer is included here because the SBC is commonly the host for the BMC. Section 3.2.1 is targeted at the BMC designer while section 3.2.2 addresses the requirements for deploying the BMC. A chief function of the BMC is that of a bridge between the system interface and managed resources (see Figure 3). The hardware level of the system interface is described in the IPMI specification for platforms based on Intel-style processors. However, the CompactPCI specification does not require the use of any particular processor. For the purpose of this specification, then, the IPMI specification for the hardware level of the system interface is considered a recommendation only.
3.2.1 Baseboard Management Controller

This section defines the interface requirements of a BMC for the CompactPCI system management network. The basis for this specification is the BMC as defined in chapter 3 of the IPMI specification. In summary, that specification requires the BMC to provide:

• a system interface,
• the capacity to receive events,
• an SDR repository and SEL,
• a watchdog timer function capable of system reset action,
• and several other internal functions.

The IPMI specification defines an optional IPMB interface. A block diagram of the BMC for the CompactPCI system management network is depicted in Figure 3. The interface requirements are defined below.

3.2.1.1 System Interface

As previously discussed, this specification does not define the system interface. The SBC developer is free to define a suitable interface in view of the processor technology of the SBC. The system interface shall support the BMC in its requirement to implement the command set defined in section 4.1. The system interface options defined in the IPMI specification serve as examples of a compliant system interface.

3.2.1.2 Single-ported IPMB

A minimum of one IPMB interface shall be provided by the BMC. This interface shall comply with the IPMB definition called out as an option in the IPMI specification. Per the IPMI specification, this IPMB is accessed through the system interface as channel 0 and is referred to as IPMB 0 throughout this specification. The physical interface of IPMB 0 shall meet all the electrical specifications of section 2.1.1. In particular, the port is hot-swap capable and I2C compliant. IPMB 0 is a standard means by which the BMC receives events and is a component of the event receiver function. The BMC shall locate itself at address 0x20 on IPMB 0 in accordance with the IPMI specification.

3.2.1.3 Dual-ported IPMB

A second IPMB interface may be provided by the BMC.
If this option is implemented, the BMC is called dual-ported. The dual-ported BMC shall provide IPMB 0 per section 3.2.1.2 and a second IPMB accessed through the system interface as channel 1. This interface shall comply with the IPMB definition called out in the IPMI specification as well as the electrical specifications of section 2.1.1. This IPMB is referred to as IPMB 1 throughout this specification. IPMB 1 is another means by which the BMC receives events and is a component of the event receiver function. The BMC shall support transactions between IPMB 1 and the system interface. The BMC is not required to support transactions between IPMB 1 and IPMB 0. Any traffic between IPMB 0 and IPMB 1 is neither required nor forbidden. No addressing mechanism is currently specified for any such inter-IPMB communication. See section 4.1.1.3.
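The IPMB transactions these channels carry are I2C writes with a fixed header and two 2's-complement checksums, as defined in the IPMB specification. The sketch below frames a request to the BMC at its required 0x20 address; the requester address 0x80 and the choice of Get Device ID as the example command are assumptions of this example, not requirements of this specification.

```python
# Illustrative framing of an IPMB request per the IPMB specification's
# message format. Only the BMC responder address 0x20 is fixed by this
# specification; the requester address and command are example choices.

def checksum(data):
    """IPMB 2's-complement checksum: (sum of bytes + checksum) mod 256 == 0."""
    return (-sum(data)) & 0xFF

def ipmb_request(rs_sa, netfn, cmd, rq_sa=0x80, rq_seq=0, data=()):
    """Build the byte sequence written on the bus after the START."""
    hdr = [rs_sa, (netfn << 2) | 0]              # responder addr, NetFn/rsLUN
    body = [rq_sa, (rq_seq << 2) | 0, cmd, *data]  # requester addr/seq, cmd
    return bytes(hdr + [checksum(hdr)] + body + [checksum(body)])

# Get Device ID (App NetFn 0x06, command 0x01) addressed to the BMC:
msg = ipmb_request(rs_sa=0x20, netfn=0x06, cmd=0x01)
```

Both checksummed spans sum to zero modulo 256, which is the property a receiver uses to validate the header and the data portion independently.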
The BMC shall locate itself at address 0x20 on IPMB 1 in accordance with the IPMI specification.

3.2.1.4 Optional and Private Busses

The BMC may provide additional, unspecified interfaces to resources managed by the BMC and the system management application. An example of the use of a private bus is given by the requirements of section 3.2.1.5. In this example, the BMC may implement repository storage as a FLASH EPROM in which the interface is a number of address, data, and control bits suitable for controlling the storage device. Another BMC may implement local storage as a serial EPROM on an I2C or IPMB interface. In all cases, the specific interface is defined by the BMC developer.

3.2.1.5 Repository Storage

The BMC is required to provide access to a repository for Sensor Data Records (SDR) and to maintain a System Event Log (SEL) per the IPMI specification. The type and location of the SDR and SEL storage medium is unspecified except that the storage must be nonvolatile. The minimum storage requirement specified by IPMI is to hold 16 entries in the SEL. This allows for minimal, low cost solutions. Most systems will benefit from an SDR and a larger SEL. The minimum storage requirements suggest the possibility of reserving storage directly in the BMC; however, this may prove to be impractical or to yield inadequate event logging. SBC and BMC developers are encouraged to use an auxiliary storage device to hold the SDRs and SEL. For example, a private I2C bus may be implemented out of the BMC in order to access an external serial electrically erasable PROM (SEEPROM). In any case, the BMC developer must have a priori knowledge of non-volatile storage devices to use them as SDR repository, SEL, or FRU storage. This a priori knowledge suggests that the storage be located on the same board as the BMC for use in any chassis, but this location is not required by this specification.
Placing supplementary storage on the IPMBs is possible but poses its own hazards. Pursuant to section 2.3.1, the device capacitance of the auxiliary storage on IPMBs shall be included in the loading budget for the bus. The IPMI specification allows repository storage to consist of SEEPROM devices residing on the IPMB. However, because the BMC is required to support hot-swap and is required to manage storage, an implementation of the BMC that places auxiliary storage on the IPMBs shall meet the electrical requirements of section 2.1.1. The relevance of this requirement is underscored by IPMB 0 where system developers are likely to require hot-swap capability. Developers are cautioned that the capability for meeting the requirements of section 2.1.1 is unverified for non-intelligent (legacy) devices. Use of these devices on IPMBs where hot-swapping is required places responsibility on the BMC developer and the system developer to confirm section 2.1.1 compliance of that device. 3.2.1.6 IPMI Compatibility and Interoperability In addition to the preceding requirements, the BMC is required to comply with all requirements of the IPMI specification with the exception of the system interface requirements as noted in section 3.2.1.1. Refer to the IPMI specification for a definition of these remaining requirements such as watchdog timer functions, capability of system-reset action, and so forth. The requirements of this specification define a core functionality for the BMC and generally do not preclude the addition of OEM-specific resources to the basic BMC. When implementing such additional resources, channel 1 is reserved for accesses to IPMB 1 and may not be used for access to these resources. Single-ported BMCs shall, when transactions on channel 1 are attempted, return an appropriate error code indicating this channel is unsupported. Refer to section 4.1.1.1. 
3.2.2 BMC Deployment
The target audience for this section is the SBC developer or anyone developing a subsystem that is the host for the BMC.
3.2.2.1 BMC Power
The BMC shall draw power from the system management power domain VSM in accordance with the standard node power requirements of section 2.1.3. In addition to those requirements, the BMC shall draw power from its host's local power domain in the event of a failure in the system management power domain.

3.2.2.2 System Interface
The host processor for the system management application shall be capable of communicating with the BMC. As previously stated, the hardware level of the system interface is described in the IPMI specification for platforms based on Intel-style processors. However, the CompactPCI specification does not require the use of any particular processor. For the purpose of this specification, then, the IPMI specification for the hardware level of the system interface is considered a recommendation only. Note that the BMC, in conjunction with its host board, shall implement a standard node.

3.2.2.3 Single-ported BMC
IPMB 0 of the BMC shall be routed to the J1 connector of the system slot per the pinout assignment for the system slot IPMB 0 of Table 4. The combined electrical characteristics of the BMC IPMB interface, signal trace and connector J1 shall meet the standard node requirements of section 2.1.

3.2.2.4 Dual-ported BMC
For those SBCs which elect to deploy a dual-ported BMC, in addition to the IPMB 0 requirements of section 3.2.2.3, IPMB 1 of the BMC shall be routed to the J2 connector of the system slot per the pinout assignment for the system slot IPMB 1 of Table 4. The combined electrical characteristics of the BMC IPMB interface, signal trace and connector J2 shall meet the standard node requirements of section 2.1.

3.2.2.5 Ancillary BMC Support
The SBC shall provide the capability for the BMC to reset the system per the IPMI specification. Additional ancillary support may be specified by the BMC, e.g. use of private busses for local management, interface to auxiliary storage, etc.
3.3 Address Allocation for Peripherals

3.3.1 General Allocation Principles
The basic principle of address allocation is found in the IPMB v1.0 Address Allocation specification in a note that states, in essence, that management controllers should be configurable for a minimum of eight possible addresses. The total address space within which the devices reside is so limited, and the potential users so varied, that reserved addresses are not practical. Instead, the management controller is expected to provide enough flexibility in address location that the system developer can define resource locations on a per-design basis. The means by which the management controller is configured for a given address is unspecified. Two example methods are:
• providing pins on the management controller to allow jumper selection of one of a set of predetermined addresses; or
• providing an in-circuit programmable device, which has been programmed at manufacturing time, from which the management controller retrieves the preprogrammed address during initialization.
These and other methods may be used to achieve the flexibility in address location that the system developer needs. PICMG has established default address allocations for certain classes of peripherals. These are defined in the following section.

    10h-1Eh   Available for third party add-ins
    30h-3Eh   Available for third party add-ins
    52h-60h   Available for chassis-specific functions
              (allocable to power supplies)
    62h-6Ch   Available for chassis-specific functions
    80h-8Eh   Reserved for Board Set manufacturer use
    B0h       Available for third party add-ins
    B2h-C0h   Available for third party add-ins
              (allocable to CompactPCI peripheral cards)
    C2h       Reserved for SMBus auto-allocation
    C4h-ECh   Available for third party add-ins
              (allocable to CompactPCI peripheral cards)
    EEh       Reserved for Board Set manufacturer use

Table 6 - General Address Allocation per IPMB
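The first example method, strap pins selecting one of a set of predetermined addresses, might be sketched as follows. The strap-reading interface and the address table are hypothetical: a real controller reads its own GPIO port, and the system developer defines the table per design.

```c
/* Jumper-selection sketch: three strap pins choose one of eight
 * preconfigured IPMB addresses.  The table values here are merely an
 * example set; the system developer picks them per design. */
#include <stdint.h>

static const uint8_t addr_table[8] = {
    0xB0, 0xB2, 0xB4, 0xB6, 0xB8, 0xBA, 0xBC, 0xBE
};

/* strap_bits: raw 3-bit value read from the controller's strap pins. */
uint8_t ipmb_addr_from_straps(uint8_t strap_bits)
{
    return addr_table[strap_bits & 0x07]; /* 3 pins -> 8 addresses */
}
```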
For system management nodes not addressed by the programmatic address allocation of the next section, the assignment of addresses shall be made under the direction of the IPMB v1.0 Address Allocation specification referenced in section 1.5. That document is the controlling specification but is excerpted herein for reference. Generally, address ranges not listed are already defined for legacy devices or components of the IPMI specification.

3.3.2 Programmatic Allocation of Peripheral Addresses
This specification provides for the establishment of a defined, autonomous address allocation mechanism for a given class of peripherals. For such a class of peripherals, a predetermined address range is established and a means of selecting a unique address from the range is defined. In effect, this is a special case of the jumper-selection method of address configuration described in the previous section. This specification does not define the means by which the unique address is selected, but a key premise is that the chassis drives the selection means, e.g. via selector pins on a connector. In this way the system developer is able to manage programmatic address allocation while peripheral developers are able to design to the defined address range. The limited range of available addresses, however, means that these programmatic ranges must coexist with more flexible allocation methods. Even though a programmatic allocation scheme reserves a range of IPMB addresses for an architecturally limited, maximum number of instances of a peripheral, the system developer is only required to reserve the IPMB addresses for the peripherals that may actually exist in a
specific product. For example, a system that has no CompactPCI peripheral slots at geographical addresses above 7 is free to use the IPMB addresses above BCh (see Table 8) for other IPM devices. The peripheral classes for which a programmatic address allocation is defined are listed in the sections that follow. Manufacturers' interest groups that would benefit from a programmatic address assignment are encouraged to establish it. The PCI Industrial Computer Manufacturers Group (PICMG) is available to assist in these efforts.

3.3.2.1 Power Supply Management Node Address Mapping
The PICMG Power Interface Specification defines a set of three geographical addressing signals as the selection means for a programmatic allocation of IPMB addresses for its power supplies. For a given power supply bay, the backplane straps these geographical address pins to one of eight states as shown in Table 7. This table defines the IPMB address assigned to each geographical address. The reservation of these addresses is made under the direction of the IPMB v1.0 Address Allocation specification.

    Geo. Addr.   IPMB Addr.
    0            52h
    1            54h
    2            56h
    3            58h
    4            5Ah
    5            5Ch
    6            5Eh
    7            Disabled

Table 7 - Power Supply Address Allocation

3.3.2.2 CompactPCI Peripheral Management Node Address Mapping
In a similar fashion to the power supply geographical address assignments, the CompactPCI specification defines signals which uniquely define the slot into which the peripheral is installed. These geographical addressing signals are available on the J2 connector and shall be used per Table 8 to define the IPMB address that the PM will use. The reservation of these addresses is made under the direction of the IPMB v1.0 Address Allocation specification. Note the discontinuity in IPMB addresses between geographical addresses 9 and 10. This is to avoid the reserved address C2h.
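A sketch of the two programmatic mappings (Tables 7 and 8) as C functions follows. The function names and the disabled-address sentinel are illustrative assumptions; the address arithmetic itself matches the tables, including the C2h gap.

```c
#include <stdint.h>

#define IPMB_ADDR_DISABLED 0x00  /* sentinel: IPM device not enabled */

/* Power supply bays (Table 7): 52h, 54h, ... 5Eh for geographic
 * addresses 0-6; geographic address 7 disables the node. */
uint8_t ps_ipmb_addr(uint8_t geo)
{
    if (geo >= 7)
        return IPMB_ADDR_DISABLED;
    return (uint8_t)(0x52 + 2u * geo);
}

/* CompactPCI peripheral slots (Table 8): B0h..C0h for geographic
 * addresses 1-9, then C4h..ECh for 10-30, skipping C2h (reserved
 * for SMBus auto-allocation).  Addresses 0 and 31 are disabled. */
uint8_t cpci_ipmb_addr(uint8_t geo)
{
    if (geo == 0 || geo >= 31)
        return IPMB_ADDR_DISABLED;
    if (geo <= 9)
        return (uint8_t)(0xB0 + 2u * (geo - 1u));
    return (uint8_t)(0xC4 + 2u * (geo - 10u));
}
```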
The geographical addresses 0 and 31 are reserved by the CompactPCI specification and should not be encountered in compliant backplanes. A compliant peripheral installed in a legacy backplane that does not support geographical addressing will interpret the unconnected lines as the highest geographical address, i.e. 31, and shall not enable its IPM device under these circumstances.

    Geo. Addr.  IPMB Addr.     Geo. Addr.  IPMB Addr.
    0           Disabled       16          D0h
    1           B0h            17          D2h
    2           B2h            18          D4h
    3           B4h            19          D6h
    4           B6h            20          D8h
    5           B8h            21          DAh
    6           BAh            22          DCh
    7           BCh            23          DEh
    8           BEh            24          E0h
    9           C0h            25          E2h
    10          C4h            26          E4h
    11          C6h            27          E6h
    12          C8h            28          E8h
    13          CAh            29          EAh
    14          CCh            30          ECh
    15          CEh            31          Disabled

Table 8 - CompactPCI Peripheral Card Address Allocation

For 3U CompactPCI peripheral cards, the use of J2 is optional and J2 may not be installed on a given product. 3U CompactPCI peripheral cards that do not have J2 populated shall provide an alternate, field-adjustable means of setting the geographical address in lieu of using the signals off J2. The mechanism to set the geographical address shall be constrained to allow only valid addresses to be specified, particularly with regard to addresses 0 and 31. This option introduces the possibility of human error in assigning geographical addresses that properly correspond to the slot location. Also, non-unique assignment of a geographical address creates a non-configurable system management
network. Therefore this exception is not a preferred embodiment for address mapping.

3.4 CompactPCI Peripheral Cards
The target audience for this section is the peripheral developer. Compliant CompactPCI peripheral cards shall contain a PM deployed such that the combined electrical characteristics of the device, signal trace and connector meet the standard node requirements of section 2.1. The PM's IPMB signals shall be routed to the J1 connector of the card per the pinout assignment for the peripheral slot IPMB 0 of Table 4. The PM shall establish the unique address it will use in the management network in accordance with section 3.3.2.2.

3.4.1 Peripheral Management Node Minimum Functionality
The minimum functional requirements of the IPM device are defined by the IPMI specification. These amount to little more than acknowledging ID commands targeted at the peripheral. The peripheral developer is encouraged, therefore, to augment the minimum functionality with additional status and control features that enhance the diagnostic and management capabilities of the product. In the main, these features will be unique to each product.

3.5 Peripheral Management Controllers
This section applies to PMs not resident on a CompactPCI peripheral card. Its target audience is the peripheral developer. The PMs of this section may reside on alarm cards, power supplies, fans, RAIDs, etc. The minimum functional requirements of these devices are defined by the IPMI specification and are identical to those of section 3.4 except for address allocation. Allocation of IPMB addresses for devices in this category is controlled by the IPMB v1.0 Address Allocation specification. See section 3.3.1.
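The minimum PM behavior of section 3.4.1, little more than acknowledging the IPM Device Global commands, might be sketched as below. The NetFn and command numbers (App NetFn 06h, Get Device ID 01h) and completion codes (00h "OK", C1h "invalid command") follow IPMI conventions; the response field values and function name are illustrative assumptions.

```c
/* Minimal PM sketch: acknowledge the mandatory Get Device ID command
 * and reject everything else with "invalid command". */
#include <stdint.h>
#include <stddef.h>

#define NETFN_APP         0x06
#define CMD_GET_DEVICE_ID 0x01
#define CC_OK             0x00
#define CC_INVALID_CMD    0xC1

/* Fill resp with a completion code plus device identity bytes;
 * returns the response length in bytes. */
size_t pm_handle_request(uint8_t netfn, uint8_t cmd, uint8_t *resp)
{
    if (netfn == NETFN_APP && cmd == CMD_GET_DEVICE_ID) {
        resp[0] = CC_OK;
        resp[1] = 0x00;  /* device ID (unspecified) */
        resp[2] = 0x00;  /* device revision; no device SDRs */
        resp[3] = 0x01;  /* firmware revision, major (example) */
        resp[4] = 0x00;  /* firmware revision, minor (example) */
        resp[5] = 0x01;  /* IPMI version supported (example) */
        return 6;
    }
    resp[0] = CC_INVALID_CMD;
    return 1;
}
```

A production PM would extend this dispatch with the optional status and control commands the section encourages.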
4. IPMI Functional Requirements
This chapter defines the required logical behavior for each type of IPM device. The target audience for this chapter is the firmware developer undertaking the implementation of any of these devices. Generally, these implementations will start with some form of programmable microcontroller that must have sufficient resources to meet the electrical and architectural requirements of chapters 2 and 3. The actual behavior and command set capability are then added through firmware to achieve the minimum functionality defined in this chapter. Additional functionality may be defined by the firmware developer and by the developer of the host board upon which the IPM device will reside. These developers are encouraged to do so and are directed to the IPMI specification for guidance on extending the feature set of their device.

4.1 BMC Functional Requirements
Fundamentally, the minimum functional requirements of the BMC are defined by the IPMI specification. Chapter three of that document is devoted to the BMC. In addition, this section provides clarification and supporting requirements where needed to apply the IPMI specification to the CompactPCI environment. Note that IPMI requirements identified in this section are for reference only; the IPMI specification is the controlling document for any IPMI requirements. The additional functional requirements of this section are based on the architectural issues and resource requirements of section 3.2.1. Figure 3 may be referenced for the context of the requirements that follow. Pursuant to chapter three of the IPMI specification, the BMC provides the system interface between the host processor and the system management network. In addition, the BMC provides IPM device command, SDR repository, watchdog timer, event receiver, system event log, and internal event generation functions. The BMC provides at least one IPMB interface for CompactPCI system boards.
The BMC may optionally support sensor and external event generator functions. A brief description of these functions is included, but the IPMI specification governs the required functions.

4.1.1 BMC Management of Message Transfers
Section 3.1 of the IPMI specification requires the BMC to have a system interface (see section 3.2.1.1 of this document). Chapter six of the IPMI specification, 'BMC-System Messaging Interface', defines the messaging interface between the BMC and system software. To the IPMI specification, there is only one system interface and there is only one (optional) IPMB. By defining a second IPMB, this specification extrapolates the mechanism for transferring messages among the system interface and multiple IPMBs.

4.1.1.1 System Interface to IPMB Transfers
The IPMI specification defines the format for the "Send Message" command used by the system interface to send a message to the IPMBs. The IPMI specification already specifies that channel 0 in the Send Message command directs the message to the IPMB. This specification further specifies channel 0 to direct the message to IPMB 0 (see section 3.2.1.2). Additionally, this specification defines channel 1 to direct the message to IPMB 1 if it exists (see section 3.2.1.3). Other protocol types may be supported (e.g. Ethernet, serial, etc.); however, their messaging must take place on channels 2 through 7. The IPMI specification defines the format for the Send Message command response in section 6.5, entitled 'Sending Messages to the IPMB from System Software'. The IPMI specification also defines a list of generic completion codes and command-specific completion codes for the response. A single-ported BMC shall respond to a "Send Message" command addressed to channel 1 with the error code C9h, "Parameter out of range. One or more parameters in the data field of the request are out of range." Access of legacy devices by system software is accomplished using the "Master Write-Read I2C" command.
The structure of this command includes fields, 'bus type' and 'bus ID', to specify the bus to which the message is directed. Legacy devices on IPMB 0 are accessed per the IPMI specification, i.e. with
bus type 'public' and bus ID = 0. Legacy devices on IPMB 1 shall be accessed with bus type 'public' and bus ID = 1.

4.1.1.2 IPMB to System Interface Transfers
For messages originating on the IPMBs destined for the system interface, the BMC is required to implement a Receive Message Queue. The BMC shall pass these messages to the system interface in the order that they were received on each of the individual IPMB channels that it supports; however, the timing relationship between messages received on different channels is not guaranteed. For the case where abnormal traffic on a channel threatens to overrun the queue and block traffic from the other channel(s), the IPMI specification suggests strategies which the BMC developer may take under advisement. In defining the queue overrun policy, the BMC developer is expected to preserve the chronological order of the message stream as much as possible. The "Get Message" command is defined by the IPMI specification to pull a message from the Receive Message Queue. The format of the Get Message command response includes the channel from which the message came. The queuing mechanism works the same regardless of how many channels (typically IPMBs) the BMC supports since the source channel is part of the information in the queue. For example, a single-ported BMC will simply have no messages queued from channel 1 (IPMB 1), which is a consistent presentation to system software.

4.1.1.3 IPMB to IPMB Transfers
IPMB bridging is unspecified, i.e. neither defined nor prohibited by this specification. The system management concept embraces scenarios where emergency management controllers and ICMB bridge devices work to allow remote management applications to diagnose failed systems. These types of devices are capable of accessing management devices without the intervention of the primary processor. However, with two IPMBs, the management device population may be divided between the two busses.
In order for management controllers and ICMB bridge devices to fulfill their mission, they may require access to devices on both busses. Consider that access to both populations of devices can be achieved by creating emergency management controllers and ICMB bridge devices that, like the BMC, are dual-ported and can connect to both busses subject to the constraints of section 2.1. Also, system developers may find that access to the devices on only one IPMB might provide adequate access to diagnostic information. In view of the variables associated with remote management access, and given the complexity of defining a messaging protocol to overlay the IPMI specification, a proper IPMB-to-IPMB bridging definition exceeds the scope of this specification.

4.1.1.4 System Interface to Optional Bus Transactions
Section 3.2.1.4 discusses the potential benefit of optional and private busses for a given BMC. For those busses which are controlled by the BMC and are accessible to system software, the IPMI specification describes two methods of access. In the case where an optional bus is an I2C bus or an IPMB, access to legacy I2C devices on it is through the "Master Write-Read I2C" command. Refer to chapter 14 of the IPMI specification, entitled 'BMC-System Interface Support Commands'. Alternatively, "public busses", as the term is defined in the IPMI specification, are defined and implemented at the option of the BMC developer and may be based on any suitable interface protocol. Public busses are identified by channel number and are accessed through the "Get Message" and "Send Message" commands. As the IPMI specification declares, the contents of the message data in the Get Message and Send Message commands are dependent on the protocol associated with the target channel of the command. As such, for a given public bus, if the bus is not an IPMB, it falls to the BMC developer to define the interface protocol and message body for the "Get Message" and "Send Message" commands.
Firmware uses the channel number in the "Get Message" and "Send Message" commands to select the interface protocol and message body template to apply. Note that channels 0 and 1 are assigned to IPMB 0 and IPMB 1 and shall not be used for optional busses. See section 4.1.1.1.
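The channel rules of sections 4.1.1.1 and 4.1.1.4 suggest a dispatch of roughly the following shape. The C9h value is the completion code this specification requires of a single-ported BMC for channel 1; the function and callback names are illustrative, and channels 2 through 7 are left unmodeled.

```c
#include <stdint.h>

#define CC_OK          0x00
#define CC_PARAM_RANGE 0xC9  /* "parameter out of range" */

/* Channel assignments fixed by this specification. */
#define CHAN_IPMB0 0
#define CHAN_IPMB1 1

/* Hypothetical driver hook that clocks a message onto an IPMB. */
typedef void (*ipmb_tx_fn)(int bus, const uint8_t *msg, uint8_t len);

/* Route a Send Message payload by channel; dual_ported selects the
 * BMC variant.  Returns an IPMI completion code. */
uint8_t bmc_send_message(int dual_ported, uint8_t channel,
                         const uint8_t *msg, uint8_t len, ipmb_tx_fn tx)
{
    switch (channel) {
    case CHAN_IPMB0:
        if (tx) tx(0, msg, len);
        return CC_OK;
    case CHAN_IPMB1:
        if (!dual_ported)
            return CC_PARAM_RANGE;  /* single-ported: unsupported */
        if (tx) tx(1, msg, len);
        return CC_OK;
    default:
        /* channels 2-7: optional public busses, not modeled here */
        return CC_PARAM_RANGE;
    }
}
```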
4.1.2 IPMI Requirements for the BMC
This section is offered as a reference into the IPMI specification with respect to the core requirements for the BMC. Chapter three of that specification defines the mandatory and optional requirements for a BMC. The following are requirements defined in that chapter that have not already been discussed.
• SDR repository - Refer to chapter 20 of the IPMI specification. The BMC shall support the sensor data record repository functions as defined in that section. The BMC shall support SDR information access from both the system interface and the IPMB(s).
• Watchdog timer - Refer to chapter 15 of the IPMI specification.
• Event receiver and system event log - Refer to chapter 18 of the IPMI specification. The BMC shall allow access to the SEL through the system interface, IPMB 0 and, if present, IPMB 1. The BMC shall act as an event receiver over these same interfaces.
• Internal event generation functions - Refer to chapter 11 of the IPMI specification.

4.1.3 Hot-swap Requirements for the BMC
The BMC is required to have the same tolerance for hot-swap transients as the CompactPCI peripheral management controllers for both IPMB 0 and, if supported, IPMB 1. The full discussion of these requirements as they relate to the firmware developer is presented in section 4.2.2.

4.1.4 I2C Error Recovery Requirements of the BMC
Although it is anticipated that future releases of this specification may offer more sophisticated error detection and correction schemes, there are only two requirements of the BMC to monitor for, and attempt recovery from, abnormal I2C related conditions. The first is the detection of a "dormant" bus condition. This condition (described in section 2.1.2.3), and the recovery from it (described in detail in section 4.2.2), is identical for both BMCs and Peripheral Management Controllers. The second of these is monitoring for an I2C clock or data stuck line condition.
The BMC shall periodically monitor for the stuck line condition described in section 2.1.2.3. Upon detection, the BMC shall first ensure that its own lines are not the offenders and, second, place an event in the system event log describing the stuck line condition. The management application can subsequently use the existence of the event to attempt to clear the error at the system level.

4.1.5 Optional BMC Functions
As with every management controller on the system management bus, the utility of the BMC may be enhanced beyond the minimum requirements of this specification. The following sections present optional functions for this purpose.

4.1.5.1 Local Sensor Support
It is anticipated that in many cases the board that is hosting the BMC (e.g. the SBC) would benefit from a BMC that is capable of monitoring various sorts of sensors local to the host. BMC developers are encouraged to implement optional busses with which they could monitor local sensors. These sensors require the implementation of sensor support as defined in chapter 22 of the IPMI specification.

4.1.5.2 FRU Commands
An early and important feature of system management is its inventory management capability. The IPMI specification provides for a broad range of possible implementations, although the typical approach is that each management controller located on a field replaceable unit (FRU) would provide inventory data for that FRU. See sections 1.5.11 through 1.5.13 of the IPMI specification for an overview of FRU principles and implementations. Although not specifically required by IPMI, incorporation of FRU inventory management support is highly recommended.
Generally, the BMC will be located on a field replaceable unit, e.g. the SBC. The BMC developer may elect to implement FRU inventory functions for such a case. The BMC may provide storage for FRU information for other devices or only for itself. The amount of FRU storage provided will be determined by the expectation of FRU storage on other managed devices and by physical constraints of the BMC implementation. FRU inventory functions shall comply with chapter 21 of the IPMI specification.

4.1.5.3 ALERT# Function
The ALERT# signal is a low-true, wire-OR'ed, open-collector signal optionally generated by certain chassis sensor devices to indicate to their management applications that they are in need of service. ALERT# is routed on the backplane per the pinout in Table 4 and connected to the ALERT# pin on the BMC. The BMC should implement the ALERT# signal as a single-bit digital sensor. At the developer's option, the BMC may directly service the device upon receipt of the ALERT#. Developers that elect to support ALERT# but that do not have a priori knowledge of the devices to be supported should send an event to the system event log whenever ALERT# is detected asserted. This event can be employed by the management application to stimulate service of the asserting device. Deassertion of the ALERT# signal is device specific and the responsibility of the management application.

4.1.6 IPM Command Functions
The IPMI specification defines the minimum command function set that must be supported by the BMC. That specification also defines optional commands that may enhance the utility of the BMC. Table 9 below lists the commands defined in the IPMI specification by functional group with reference to the relevant chapters in that specification. For reference, the commands that are mandatory at the time of this writing are bullet listed under the group heading.
Nonetheless, the IPMI specification is the controlling document for defining these commands and defining the optional or mandatory status of each.

    Command Group                               Reference
    Device Global Commands                      IPMI Spec Chapter 13
      • Get Device ID
      • Get Self Test Results
      • Broadcast Get Device ID
    BMC-System Interface Support Commands       IPMI Spec Chapter 14
      • Set BMC Global Enables
      • Get BMC Global Enables
      • Clear Message Flags
      • Get Message Flags
      • Get Message
      • Send Message
      • Master Write-Read I2C commands
    BMC Watchdog Timer Commands                 IPMI Spec Chapter 15
      • Reset Watchdog Timer
      • Set Watchdog Timer
      • Get Watchdog Timer
    SDR Repository Commands                     IPMI Spec Chapter 20
      • Get SDR Repository Info
      • Reserve SDR Repository
      • Get SDR
      • Add SDR
      • Clear SDR Repository

Table 9 - Reference to IPMI defined commands
Refer to the PICMG System Management Website identified in section 1.5 for the latest information on optional commands which the BMC developer may elect to implement.

4.2 Peripheral Management Controller Functional Requirements
The minimum requirements of any Peripheral Management Controller (PM), including a CompactPCI Management Controller, are that it shall be configurable as to the address by which it is accessed and it shall support the mandatory IPM Device Global commands as defined in chapter 13 of the IPMI specification. The additional requirement of hot-swap tolerance is mandated for PMs that are implemented as part of a Standard Node Device (see section 2.1.2). The remaining topics of this section are not required; however, the real benefit of implementing a PM is that it can provide inventory data, or access to local sensor data, and preferably both. Advanced uses for the PM will define OEM commands to effect implementation-specific utilities such as in-circuit fault diagnosis and isolation.

4.2.1 PM Address Configuration
The PM shall ensure that it only responds to the address intended for it. Refer to sections 3.3.1 and 3.3.2 regarding the address allocation policy for the system management network. A CompactPCI Management Controller is responsible for ensuring that it identifies itself by the IPMB address that is assigned to the slot in which it is installed. Section 3.3.2.2 specifies the address assigned to each slot. The peripheral developer is responsible for delivering the geographical address (slot ID) to the PM. The PM shall be implemented such that it neither initiates nor responds to any IPMB transactions until it has initialized itself to the IPMB address appropriate to its geographical address.

4.2.2 Hot-swap Transient Tolerance
A PM that is implemented as part of a Standard Node, including a CompactPCI Management Controller, is required to be hot-swap tolerant.
Section 2.1.2 of this specification defines three levels of aberrant behavior that can result from hot-swap activity. Depending on the inherent capabilities of the hardware upon which the PM is based, the firmware developer generally contributes at all three levels to hot-swap transient tolerance, and is cautioned to pay close attention to the capabilities of the target hardware and incorporate additional software filtering as required. The lowest level of hot-swap transient is the signal transient defined in section 2.1.2.2. If the PM is based on a microcontroller that contains an integrated I2C SRT, then the SRT must meet the criteria of section 2.1.2.2 with nothing required of the firmware developer. If, however, the serial transmission of the data is under firmware control, then the serial receive routine is responsible for ensuring the glitch rejection of section 2.1.2.2. A basic approach to this requirement is to oversample the data input stream so that a valid logic level is defined by multiple successive readings at the same level. The next level of hot-swap transients is embodied in the transmission violations defined and described in section 2.1.2.3. The violations identified at this level are the aborted transfer and the stuck line (either clock or data). These violations rely on a timeout mechanism for detection and recovery. Unless the PM is based on custom hardware that automatically performs a timeout on these violations, the firmware must perform the function of monitoring the bus, detecting these violations through a timeout mechanism, and recovering the SRT function in the PM so as to ignore the violation. The aborted transfer, defined as the bus going dormant any time after a start condition and before a corresponding stop condition, is described in detail in section 2.1.2.3.
This condition is detected only when a management controller needs to use the IPMB to transmit but is unable to obtain a not-busy condition after the appropriate timeout (see Table 2). Upon detecting the dormant condition, this specification allows the transmitting controller to transmit its message. The transmitted message will be interpreted by other controllers on the IPMB as a legal restart condition, and the message will be properly received. This action has the added benefit of clearing the bus-busy condition upon detection of the stop condition.
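The decision logic above can be sketched as a pure function that a transmit routine polls. The timeout constant is a placeholder; the normative value comes from Table 2.

```c
#include <stdbool.h>
#include <stdint.h>

/* Placeholder; the normative not-busy timeout is specified in Table 2. */
#define IPMB_NOT_BUSY_TIMEOUT 60u

typedef enum {
    IPMB_WAIT,        /* bus busy, timeout not reached: keep waiting      */
    IPMB_SEND,        /* bus free: transmit normally                      */
    IPMB_SEND_FORCE   /* bus dormant (aborted transfer): transmit anyway; */
                      /* other controllers see a legal restart, and our   */
                      /* stop condition clears their bus-busy state       */
} ipmb_action_t;

ipmb_action_t ipmb_tx_action(bool bus_busy, uint32_t ticks_waited)
{
    if (!bus_busy)
        return IPMB_SEND;
    if (ticks_waited >= IPMB_NOT_BUSY_TIMEOUT)
        return IPMB_SEND_FORCE;
    return IPMB_WAIT;
}
```

Keeping the policy separate from the hardware access makes the timeout behavior easy to unit-test without a live bus.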
Implementation Note - It should be noted that IPM devices with integrated I2C hardware (an SRT) may not provide a means for firmware to reset the internal logic of the SRT when trying to transmit on a dormant bus. The developer of the PM is ultimately responsible for devising a means of restoring the SRT for transmission. Ideally, the microcontroller upon which the design is based will have a resettable SRT. When this is not the case, the IPM device may need to implement an alternate means of sending a dummy transaction with a stop condition onto the dormant bus to clear the SRT for transmission. This alternate means shall not cause the PM to violate the electrical loading requirements of section 2.1.1.

The I2C specification, in Note 5 of chapter 9, "Formats with 7-bit Addressing", specifies that void messages (a start condition immediately followed by a stop condition) are illegal. Accordingly, a recommended format for the dummy transaction is shown in Table 10. As long as the IPM device targets itself for the dummy message, no other device will try to respond. The possibility exists that two IPM devices may attempt to send their dummy transactions simultaneously, so the I2C requirements for arbitration must be respected. Regardless of which device transmitted the dummy command, all SRTs on the bus should indicate a free bus once the stop condition is transmitted.

          Clk 1  Clk 2  Clk 3  Clk 4  Clk 5  Clk 6  Clk 7   Clk 8   Clk 9
  Start   <------- Master's own 7-bit address -------->        1       1      Stop

                       Table 10 - Dummy Message Format

The stuck line case, where either the clock or data line is stuck low, is critical because only the perpetrator can clear the fault. Although it is anticipated that future releases of this specification may offer more sophisticated error correction techniques, each peripheral management controller shall monitor the clock and data lines for the stuck line condition and, if it is found, take whatever measures it can to ensure that it is not the offending device.
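The single byte clocked out between the start and stop of the Table 10 dummy transaction can be built as shown below; clocks 1-7 carry the master's own 7-bit address, clock 8 is driven to 1, and clock 9 carries the (expected) NACK.

```c
#include <stdint.h>

/* Builds the address byte for the Table 10 dummy transaction:
 * the master's own 7-bit address in bits 7:1 (clocks 1-7) and
 * a 1 in bit 0 (clock 8).  Because the device addresses itself,
 * no other device on the IPMB will respond. */
uint8_t dummy_msg_byte(uint8_t own_addr_7bit)
{
    return (uint8_t)((own_addr_7bit << 1) | 0x01u);
}
```

The byte would then be shifted out by whatever alternate (e.g. bit-banged) transmit path the design uses to bypass a non-resettable SRT, framed by a start and a stop condition, and subject to normal I2C arbitration.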
The highest level of hot-swap transient is that of the protocol violations defined in section 2.1.2.4. Recovery from these violations is required by the IPMB specification and is an integral part of the firmware implementing the IPMI commands for the PM in question. Refer to that specification for guidance in handling protocol violations.

4.2.3 IPM Device Functions

The PM shall support the mandatory IPM Device global commands as defined in chapter 13 of the IPMI specification. These commands are referenced in the Device Global Commands entry of Table 9. The IPMI specification is the controlling document for defining the set of IPM Device global commands and for defining the optional or mandatory status of each.

4.2.4 Sensor Device Functions

As stated in the introduction to this section, it is anticipated that, in general, the system component within which the PM is located will have sensors that the PM is required to monitor. These sensors require the implementation of sensor support as defined in chapter 22 of the IPMI specification.

4.2.5 FRU Device Functions

Chapter 2 of the IPMI specification, Logical Management Device Types, introduces the concept of a virtual FRU device that is accessed through IPMI commands to yield inventory information. The peripheral developer is strongly encouraged to support the inventory capability of the system management network by implementing a FRU device within the peripheral management controller (PM). The PM should implement FRU inventory storage for the host component as well as the FRU inventory functions to access it. FRU inventory functions shall comply with chapter 21 of the IPMI specification. See sections 1.5.11 through 1.5.13 of the IPMI specification for an overview of FRU principles and implementations. Refer to the specification "Platform Management FRU Information Storage Definition v1.0" for implementation details.
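As a minimal illustration of the global-command requirement in section 4.2.3 above, a PM's command dispatcher must at least recognize the mandatory IPM Device commands such as Get Device ID (Application network function, per the IPMI specification). The response field values below are illustrative placeholders, not real product identifiers.

```c
#include <stdint.h>

#define NETFN_APP       0x06u   /* Application request network function */
#define CMD_GET_DEV_ID  0x01u   /* Get Device ID (mandatory)            */
#define CC_OK           0x00u   /* completion code: success             */
#define CC_INVALID_CMD  0xC1u   /* completion code: command not supported */

/* Fills rsp (completion code first) and returns the response length.
 * All identity fields below are placeholders for illustration. */
int pm_handle_cmd(uint8_t netfn, uint8_t cmd, uint8_t *rsp)
{
    if (netfn == NETFN_APP && cmd == CMD_GET_DEV_ID) {
        rsp[0] = CC_OK;
        rsp[1] = 0x00;  /* device ID (placeholder)             */
        rsp[2] = 0x00;  /* device revision, no device SDRs     */
        rsp[3] = 0x01;  /* firmware revision, major            */
        rsp[4] = 0x00;  /* firmware revision, minor (BCD)      */
        rsp[5] = 0x01;  /* IPMI version                        */
        rsp[6] = 0x01;  /* additional device support: sensors  */
        return 7;
    }
    rsp[0] = CC_INVALID_CMD;    /* unrecognized command */
    return 1;
}
```

A real PM would extend the same dispatcher with the sensor (chapter 22) and FRU (chapter 21) command sets discussed in sections 4.2.4 and 4.2.5.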