CIS 375 SOFTWARE ENGINEERING

UNIVERSITY OF MICHIGAN-DEARBORN

DR. BRUCE MAXIM, INSTRUCTOR

Date: 9/29/97

Week 4

SOFTWARE QUALITY:

ASSESS QUALITY:

  1. Correctness;
  2. Maintainability;
  3. Integrity = 1 - threat * (1 - security), where threat is the probability an attack of a given type occurs within a given time, and security is the probability an attack of that type is repelled (summed over threat types);
  4. Usability - ease of learning and using the product;
  5. DRE - defect removal efficiency = E / (E + D), where E = errors found before delivery and D = defects found after delivery (see the sketch below).
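
These measures reduce to simple arithmetic. Here is a minimal Python sketch, assuming the standard textbook definitions (integrity per threat type as given above; DRE from E and D); the function names and numbers are illustrative only:

```python
def integrity(threat, security):
    """Integrity for one threat type: threat is the probability an
    attack of this type occurs in a given interval; security is the
    probability such an attack is repelled."""
    return 1 - threat * (1 - security)

def dre(errors_before_delivery, defects_after_delivery):
    """Defect removal efficiency: E / (E + D)."""
    e, d = errors_before_delivery, defects_after_delivery
    return e / (e + d)

# Hypothetical numbers: a threat occurring with probability 0.25
# that is repelled with probability 0.95, and a project that found
# 90 errors before release but let 10 defects slip through.
print(integrity(0.25, 0.95))  # -> 0.9875
print(dre(90, 10))            # -> 0.9
```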

QUALITY:

  • (Engineering definition)
    1. Of Design - the characteristics that the designer specifies for the product.
    2. Of Conformance - the degree to which the specifications are followed during manufacture.

    QUALITY ASSURANCE:

  • Consists of the auditing and reporting functions of management.
  • Goal: to provide management with the data needed to make informed decisions about the artifact (software).
    TOTAL QUALITY MANAGEMENT:

  • (TQM)
    1. Develop visible and repeatable processes.
    2. Examine intangibles affecting processes and optimize their impact.
    3. Concentrate on user's application of the product and improve the process.
    4. Examine uses of the product in the marketplace.

    SOFTWARE QUALITY ASSURANCE:

    1. Conformance to software requirements is the basis for measuring quality.
    2. Specified standards define development criteria and guide the software engineering process.
    3. There is also an implicit set of requirements (e.g., ease of use) that must be met.

    SOFTWARE QUALITY ASSURANCE GROUP:

  • Duties:
    1. Prepare SQA plan.
    2. Participate in developing project software description.
    3. Review engineering activities for process compliance.
    4. Audit software product for compliance.
    5. Ensure product deviations are well documented.

    STRUCTURED WALKTHROUGH: (DESIGN REVIEW)

  • Peer review of a work product - found to eliminate up to 80% of all errors if done properly.
    WHY DO PEER REVIEW?

    1. To improve quality.
    2. Catches both coding errors and design errors.
    3. Enforce the spirit of any organizational standards.
    4. Training and insurance.

    FORMALITY AND TIMING:

  • Formal presentations - resemble conference presentations.
  • Informal presentations - less detailed, but equally correct.
  • Early reviews are informal (too early, and there may not be enough information).
  • Later reviews are formal (too late, and there may be a "bunch of crap" to wade through).
  • When?
    1. After analysis.
    2. After design.
    3. After first compilation.
    4. After first test run.
    5. After all test runs.
    ROLES IN WALKTHROUGH:

    1. Presenter (designer/producer).
    2. Coordinator (not person who hires/fires).
    3. Secretary (person to record events of meeting, build paper trail).
    4. Reviewers
      1. Maintenance oracle.
      2. Standards bearer.
      3. User representative.

    GUIDELINES FOR WALKTHROUGH:

    1. Keep it short (< 30 minutes).
    2. Don't schedule two in a row.
    3. Don't review product fragments.
    4. Use standards to avoid style disagreements.
    5. Let the coordinator run the meeting.

    FORMAL APPROACHES TO SQA:

    1. Proof of correctness.
    2. Statistical quality assurance (see the sketch after this list):
      1. Collect and categorize a sample of defects.
      2. Trace each defect back to its cause in the code.
    3. Cleanroom process (combines #1 and #2).
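
Statistical quality assurance is essentially Pareto analysis: tally defects by underlying cause and attack the "vital few" categories first. A minimal sketch, with made-up defect categories:

```python
from collections import Counter

# Hypothetical defect log: each entry is the cause a reported
# defect was traced back to.
causes = ["incomplete spec", "logic error", "incomplete spec",
          "interface error", "incomplete spec", "logic error",
          "standards violation", "incomplete spec"]

counts = Counter(causes)
total = sum(counts.values())

# List categories from most to least frequent; the top few (the
# "vital few") get corrective attention first.
for cause, n in counts.most_common():
    print(f"{cause:20s} {n:2d}  {100 * n / total:5.1f}%")
```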

    SOFTWARE RELIABILITY:

  • Measures (see the sketch below):
    1. MTBF (mean time between failures) = MTTF (mean time to failure) + MTTR (mean time to repair).
    2. Availability = MTTF / (MTTF + MTTR) * 100% = (MTTF / MTBF) * 100%.
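
Since MTBF = MTTF + MTTR, the two forms of the availability formula give the same number; a quick sketch with hypothetical figures:

```python
def availability(mttf, mttr):
    """Percentage of time the system is up and usable."""
    return mttf / (mttf + mttr) * 100

# Hypothetical: failures 900 hours apart on average, 4 hours to repair.
mttf, mttr = 900.0, 4.0
mtbf = mttf + mttr

print(availability(mttf, mttr))  # -> 99.557...
print(mttf / mtbf * 100)         # same result via MTTF / MTBF
```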
    RELIABILITY MODELS:

  • Two broad classes:
    1. Models that predict reliability as a function of chronological (calendar) time.
    2. Models that predict reliability as a function of elapsed processor (execution) time.
    RELIABILITY MODELS DERIVED FROM HARDWARE RELIABILITY ASSUMPTIONS:

    1. Debugging time between errors follows an exponential distribution, with a rate proportional to the number of remaining errors.
    2. Each error is removed as soon as it is discovered.
    3. The failure rate between successive error discoveries is constant.
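
These three assumptions are the basis of models such as Jelinski-Moranda, where the failure rate after some errors have been removed is proportional to the number still left. A minimal simulation sketch; the error count and per-error rate phi are made-up values:

```python
import random

def simulate_gaps(n_errors=20, phi=0.01, seed=1):
    """Simulate inter-failure times under the assumptions above:
    exponentially distributed gaps, rate proportional to the number
    of remaining errors, each error removed on discovery, and a
    constant rate between discoveries."""
    rng = random.Random(seed)
    gaps = []
    for remaining in range(n_errors, 0, -1):
        gaps.append(rng.expovariate(phi * remaining))
    return gaps

gaps = simulate_gaps()
print(gaps[:3])   # early gaps tend to be short: many errors remain
print(gaps[-3:])  # later gaps stretch out as errors are removed
```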

    MODELS DERIVED FROM INTERNAL PROGRAM CHARACTERISTICS:

    SEEDING MODELS:

  • Deliberately seed known errors into the code and track how long they take to catch; the fraction of seeded errors found during testing estimates the fraction of real errors found, and hence how many errors remain (see the sketch below).
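
The usual seeding arithmetic (a Mills-style estimate, my assumption about the variant intended): if testing finds s of S seeded errors along with n real errors, the estimated total number of real errors is n * S / s. A sketch with hypothetical numbers:

```python
def estimate_total_errors(seeded_total, seeded_found, real_found):
    """Assume real errors are caught at the same rate as seeded
    ones, so real_total ~= real_found * seeded_total / seeded_found."""
    return real_found * seeded_total / seeded_found

# Hypothetical: 25 errors were seeded; testing caught 20 of them
# plus 48 genuine errors.
estimate = estimate_total_errors(25, 20, 48)
print(estimate)       # -> 60.0 real errors estimated in total
print(estimate - 48)  # -> 12.0 estimated to still be latent
```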
    CRITERIA FOR EVALUATING MODELS:

    1. Predictive validity.
    2. Capability (produce useful data).
    3. Quality of assumptions.
    4. Applicability (can it be applied across domain types).
    5. Simplicity.

    Date: 10/1/97

    Week 4

    MAINTENANCE:

  • (Longest life cycle phase)
    EVOLVING VS. DECLINING SYSTEMS:

  • Questions for judging whether a system is still evolving or is declining:

    1. High maintenance cost.
    2. Unacceptable reliability.
    3. Is the system adaptable to change?
    4. Time required to effect a change.
    5. Unacceptable performance.
    6. System functions limited in usefulness.
    7. Are there other systems? (faster, cheaper)
    8. High cost to maintain hardware.

    MAINTENANCE TEAM:

    1. Understand system.
    2. Locate information in documentation.
    3. Keep system documentation up to date.
    4. Extend existing functions.
    5. Add new functions.
    6. Find sources of errors.
    7. Correct system errors.
    8. Answer operations questions.
    9. Restructure design and code.
    10. Delete obsolete design and code.
    11. Manage changes.

    TYPES OF MAINTENANCE:

    1. Corrective (21%) - fixing errors in the original product.
    2. Adaptive (25%) - adapting the product to changes in its environment (new hardware, OS, data formats).
    3. Perfective (50%) - changes requested by the client to enhance the product.
    4. Preventive (4%) - changes made to prevent future errors and ease future maintenance.

    MAINTENANCE DIFFICULTY FACTORS:

    1. Limited understanding of hardware and software (maintainer).
    2. Management priorities (maintenance is low priority).
    3. Technical problems.
    4. Testing difficulties (finding problems).
    5. Morale problems (maintenance is boring).
    6. Compromise (decision making problems).

    MAINTENANCE COST FACTORS:

    1. Type of applications supported.
    2. Staff turnover.
    3. System life span.
    4. Dependence on changing environments.
    5. Hardware characteristics.
    6. Quality of design.
    7. Quality of code.
    8. Quality of documentation.
    9. Quality of testing.

    CONFIGURATION MANAGEMENT:

  • (Tracking changes)
    CONFIGURATION MANAGEMENT TEAM:

    CONFIGURATION CONTROL BOARD:

  • (Change control board)
    PROCESS:

  • (Of changes)
    1. Problem is discovered.
    2. Problem is reported to configuration control board.
    3. The board discusses the problem (is it a failure or an enhancement? who should pay for it?).
    4. Assign the problem a priority or severity level, and assign staff to fix it.
    5. Programmer or analyst locates the source of the problem, and determines what is needed to fix it.
    6. Programmer works with the librarian to control the installation of the changes in the operational system and the documentation.
    7. Programmer files a change report documenting all changes made.
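
This process maps naturally onto a change-request record that moves through states, with the secretary's paper trail attached. A minimal sketch; the state and field names are hypothetical:

```python
from dataclasses import dataclass, field

# One state per major step of the process above.
STATES = ["reported", "board_review", "assigned",
          "fixed", "installed", "closed"]

@dataclass
class ChangeRequest:
    problem: str
    kind: str = "unclassified"   # "failure" or "enhancement"
    severity: int = 0            # set by the control board
    state: str = "reported"
    history: list = field(default_factory=list)  # the paper trail

    def advance(self, note=""):
        """Move to the next state, recording a trail entry."""
        i = STATES.index(self.state)
        if i + 1 < len(STATES):
            self.state = STATES[i + 1]
            self.history.append((self.state, note))

cr = ChangeRequest("report prints wrong totals", kind="failure", severity=2)
cr.advance("board: classified as failure; vendor pays")
cr.advance("assigned to maintenance programmer")
print(cr.state, cr.history)
```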

    ISSUES:

  • (Of change process)
    1. Synchronization (when?).
    2. Identification (who?).
    3. Naming (what?).
    4. Authentication (done correctly?).
    5. Authorization (who O.K.'d it?).
    6. Routing (who's informed?).
    7. Cancellation (who can stop it?).
    8. Delegation (responsibility issue).
    9. Valuation (priority issue).

    AUTOMATED TOOLS FOR MAINTENANCE:

    1. Text editors (better than punch cards).
    2. File comparison tools (see the sketch after this list).
    3. Compilers and linkage editors.
    4. Debugging tools.
    5. Cross reference generators.
    6. Complexity calculators.
    7. Control Libraries.
    8. Full life cycle CASE tools.
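
Tool #2 is simple to demonstrate: Python's standard difflib module produces a unified diff between two versions of a file, which is what a maintainer reads to audit a change (the file contents here are invented):

```python
import difflib

old = ["total = price * qty\n", "print(total)\n"]
new = ["total = price * qty\n", "total += tax\n", "print(total)\n"]

# unified_diff marks lines added or removed between the versions,
# so the maintainer sees exactly what a change touched.
for line in difflib.unified_diff(old, new, "v1/calc.py", "v2/calc.py"):
    print(line, end="")
```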

    COMPUTER SYSTEM ENGINEERING:

    COMPUTER SYSTEM ELEMENTS:

    1. Software.
    2. Hardware.
    3. People.
    4. Databases.
    5. Documentation.
    6. Procedures (steps humans follow, not code).

    COMPUTER SYSTEM ENGINEER/ANALYST TASKS:

    SOFTWARE ENGINEERING:

    1. Definition phase:
      1. Produce software project plan.
      2. Produce software requirements plan.
      3. Revise the software project plan.
    2. Development phase:
      1. Deliver first design specifications.
      2. Module descriptions added to design specs after review.
      3. Coding after design is complete.
    3. Verification, release, and maintenance:
      1. Validation of source code.
      2. Test plan.
      3. Customer testing and acceptance.
      4. Maintenance.

    HUMAN FACTORS AND HUMAN ENGINEERING:

  • (HCI - human computer interaction)
    1. Activity analysis (watch the people you're supporting).
    2. Semantic analysis and design (what & why they do things).
    3. Syntactic and lexical design (hardware & software implementation).
    4. User environment design (physical facilities and HCI stuff).
    5. Prototyping.