
TR 332 : ISSUE 6

Superseded

A superseded Standard is one that has been fully replaced by another Standard, typically a new edition of the same Standard.

RELIABILITY PREDICTION PROCEDURE FOR ELECTRONIC EQUIPMENT
Superseded date: 01-05-2001
Published date: 12-01-2013

1. Introduction
1.1 Purpose and Scope
1.2 Changes
1.3 Requirements Terminology
2. Purpose of Reliability Predictions
3. Guidelines for Requesting Reliability Predictions
3.1 Required Parameters
3.2 Choice of Method
3.3 Operating Conditions and Environment
3.4 System-Level Information
3.5 Procedure Verification
4. Guidelines for the Reliability Prediction Methods
4.1 Preferred Methods
4.2 Inquiries
5. Overview of Method I: Parts Count Method
5.1 General Description
5.2 Case Selection
5.3 Additional Information
5.4 Operating Temperature Definition
6. Method I: Parts Count
6.1 Available Options
6.2 Steady-State Failure Rate
6.2.1 Device Steady-State Failure Rate
6.2.2 Unit Steady-State Failure Rate
6.3 First-Year Multipliers
6.3.1 Device Effective Burn-in Time
6.3.2 Device First-Year Multipliers
6.3.3 Unit First-Year Multiplier
6.4 Worksheets
6.5 Examples
6.5.1 Example 1: Case 1 (Forms 2 and 3)
6.5.2 Example 2: Case 2 (Forms 2 and 4)
6.5.3 Example 3: Case 3, General Case (Forms 5 and 6)
6.6 Instructions for Device Types/Technologies Not in Table A
6.7 Items Excluded From Unit Failure Rate Calculations
6.7.1 Default Exclusions
6.7.2 Approved Exclusions
6.7.3 Example 4
7. Method II: Combining Laboratory Data With Parts Count Data
7.1 Introduction
7.2 Method II Criteria
7.3 Cases for Method II Predictions
7.4 Case L1 - Devices Laboratory Tested (Devices Have Had No Previous Burn-In)
7.5 Case L2 - Units Laboratory Tested (No Previous Unit/Device Burn-In)
7.6 Example 5
7.7 Case L3 - Devices Laboratory Tested (Devices Have Had Previous Burn-In)
7.8 Case L4 - Devices Laboratory Tested (Units/Devices Have Had Previous Burn-In)
7.9 Example 6
7.10 Calculation of Number of Units or Devices on Test
8. Method III: Predictions From Field Tracking
8.1 Introduction
8.2 Applicability
8.3 Definitions and Symbols
8.3.1 Definitions
8.3.2 Symbols
8.4 Method III Criteria
8.4.1 Source Data
8.4.2 Study Length and Total Operating Hours
8.4.3 Subject Unit or Device Selection
8.4.4 Quality and Environmental Level
8.5 Field Data and Information
8.6 Method III Procedure
8.7 Examples
8.7.1 Example 7: Unit Level, Method III(a)
8.7.2 Example 8: Unit Level, Method III(b)
9. Serial System Reliability (Service Affecting Reliability Data)
9.1 Steady-State Failure Rate
9.2 First-Year Multiplier
9.3 Applicability
9.4 Assumptions and Supporting Information
9.5 Reporting
10. Form/Worksheet Exhibits and Preparation Instructions
11. Tables
References
Glossary
LIST OF FIGURES
Figure 6-1. Example 1 and 2, Case 1 (Worked Form 2)
Figure 6-2. Example 1, Case 1 (Worked Form 3)
Figure 6-3. Example 2, Case 2 (Worked Form 4)
Figure 6-4. Example 3, Case 3 (Worked Form 5)
Figure 6-5. Example 3, Case 3 (Worked Form 6)
Figure 6-6. Example 4 (Worked Form 7)
Figure 10-1. Request for Reliability Prediction (Form 1)
Figure 10-2. Device Reliability Prediction, Case 1 or 2 (Form 2)
Figure 10-3. Unit Reliability Prediction, Case 1 (Form 3)
Figure 10-4. Unit Reliability Prediction, Case 2 (Form 4)
Figure 10-5. Device Reliability Prediction, General Case (Form 5)
Figure 10-6. Unit Reliability Prediction, General Case (Form 6)
Figure 10-7. Items Excluded from Unit Failure Rate Calculations (Form 7)
Figure 10-8. System Reliability Report (Form 8)
Figure 10-9. Device Reliability Prediction, Case L-1 (Form 9)
Figure 10-10. Unit Reliability Prediction, Case L-2 (Form 10)
Figure 10-11. Device Reliability Prediction, Case L-3 (Form 11)
Figure 10-12. Unit Reliability Prediction, Case L-4 (Form 12)
Figure 10-13. Additional Reliability Data Report (Form 13)
Figure 10-14. List of Supporting Documents (Form 14)
LIST OF TABLES
Table A. Device Failure Rates* (Sheet 1 of 15)
Table B. Hybrid Microcircuit Failure Rate Determination (Sheet 1 of 2)
Table C. Device Quality Level Description (Sheet 1 of 2)
Table D. Device Quality Factors
Table E. Guidelines for Determination of Stress Levels
Table F. Stress Factors
Table G. Temperature Factors (Sheet 1 of 2)
Table H. Environmental Conditions and Multiplying Factors
Table I. First Year Multiplier
Table J. Reliability Conversion Factors
Table K. Upper 95% Confidence Limit (U) for the Mean of a Poisson Distribution

Sets forth the recommended methods for predicting product and system reliability.
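
The table of contents above centers on Method I (Parts Count), in which a unit's steady-state failure rate is built up from per-device generic failure rates adjusted by quality, stress, temperature, and environment factors (Sections 5 and 6). The Python sketch below illustrates that general structure only; all device names and factor values are hypothetical placeholders for illustration, not values taken from Tables A through H of the document.

```python
# Minimal sketch of the Method I (Parts Count) structure: a device
# steady-state failure rate is a generic rate scaled by quality, stress,
# and temperature factors, and the unit rate is the environment-factor-
# weighted sum over all devices. Numbers below are illustrative only.

from dataclasses import dataclass


@dataclass
class DeviceType:
    name: str
    quantity: int      # number of devices of this type in the unit
    lambda_g: float    # generic steady-state failure rate (FITs)
    pi_q: float        # quality factor
    pi_s: float        # electrical stress factor
    pi_t: float        # temperature factor

    def steady_state_failure_rate(self) -> float:
        """Device steady-state failure rate: lambda_G * pi_Q * pi_S * pi_T."""
        return self.lambda_g * self.pi_q * self.pi_s * self.pi_t


def unit_failure_rate(devices: list[DeviceType], pi_e: float) -> float:
    """Unit steady-state failure rate: pi_E times the sum of quantity-weighted device rates."""
    return pi_e * sum(d.quantity * d.steady_state_failure_rate() for d in devices)


if __name__ == "__main__":
    # Hypothetical example unit with two device types (placeholder values).
    devices = [
        DeviceType("resistor, film", quantity=40, lambda_g=1.0, pi_q=1.0, pi_s=1.0, pi_t=1.1),
        DeviceType("IC, CMOS digital", quantity=5, lambda_g=10.0, pi_q=2.0, pi_s=1.0, pi_t=1.4),
    ]
    pi_e = 1.0  # assumed controlled, fixed ground environment
    print(f"Unit steady-state failure rate: {unit_failure_rate(devices, pi_e):.1f} FITs")
```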

Development Note: Included in FR-796 (04/2001); supersedes TR NWT 000332 (03/2004)
Document Type: Standard
Publisher Name: Telcordia Technologies
Status: Superseded
Supersedes

EN 62308 : 2006 EQUIPMENT RELIABILITY - RELIABILITY ASSESSMENT METHODS
BS EN 62308:2006 Equipment reliability. Reliability assessment methods
GR 454 CORE : ISSUE 1 GENERIC REQUIREMENTS FOR SUPPLIER-PROVIDED DOCUMENTATION
GR 2957 CORE : ISSUE 1 GENERIC REQUIREMENTS FOR BELOW-GROUND FLYWHEEL ENERGY STORAGE SYSTEMS
GR 2952 CORE : ISSUE 1 REV 1 GENERIC REQUIREMENTS FOR PORTABLE WAVELENGTH DIVISION MULTIPLEXER ANALYZERS
GR 2903 CORE : ISSUE 1 RELIABILITY ASSURANCE PRACTICES FOR FIBER OPTIC DATA LINKS
GR 761 CORE : ISSUE 1 GENERIC CRITERIA FOR CHROMATIC DISPERSION TEST SETS
I.S. EN 62308:2006 EQUIPMENT RELIABILITY - RELIABILITY ASSESSMENT METHODS
GR 512 CORE : ISSUE 2 LSSGR: RELIABILITY, SECTION 12
IEC 62308:2006 Equipment reliability - Reliability assessment methods

MIL-HDBK-217 Revision F:1991 RELIABILITY PREDICTION OF ELECTRONIC EQUIPMENT
