CIS 375 SOFTWARE ENGINEERING

University Of Michigan-Dearborn

Dr. Bruce Maxim, Instructor

System Design:

  1. Conceptual Design (function) {for the customer}.
  2. Technical Design (form) {for the hw & sw experts}.

Characteristics Of Conceptual Design:

    1. Written in customer’s language.
    2. No techie jargon.
    3. Describes system function.
    4. Should be implementation independent.
    5. Must be derived from requirements document.
    6. Must be cross-referenced to the requirements document.
    7. Must incorporate all the requirements in adequate detail.

Technical Design Contents:

    1. System architecture (sw & hw).
    2. System software structure.
    3. Data.

Design Approaches:

    1. Decomposition (function-based, well-defined), or
    2. Composition (O.O.D., working from data types rather than values).

Program Design:

Program Design Modules:

    1. Must contain detailed algorithms.
    2. Must show data relationships & structures.
    3. Must show relationships among functions.

Program Design Guidelines:

    1. Top-down approach vs. bottom-up approach (there are risks and benefits to both).
    2. Modularity & independence (abstraction).
    3. Algorithms:
         1. Correctness.
         2. Efficiency.
         3. Implementation.
    4. Data types:
         1. Abstraction.
         2. Encapsulation.
         3. Reusability.

What Is A Module?

A set of contiguous program statements with:

    1. A name.
    2. The ability to be called.
    3. Some type of local environment (variables).

(Self-contained; sometimes can be compiled separately.)

Modules either contain executable code or create data.

Determining Program Complexity:

{More modules means fewer errors within each module, but more errors in the interfaces between them}

Program Complexity Factors:

    1. Span of a data item (too many lines of code between uses of a variable).
    2. Span of control (too many lines of code in one module).
    3. Number of decision statements (too many "if-then-else" statements, etc.).
    4. Amount of information which must be understood (too much information to have to know).
    5. Accessibility of information and standard presentation (x,y -> x,y; not y1,y2).
    6. Structured information (good presentation).

Software Reliability:

  1. Probability estimation.
  2. Reliability: R = MTBF / (1 + MTBF), where 0 < R < 1.
  3. Availability: A = MTBF / (MTBF + MTTR).
  4. Maintainability: M = 1 / (1 + MTTR).
  5. Software Maturity Index (SMI):

Mt = # of modules in the current release.

Fc = # of changed modules.

Fa = # of modules added since the previous release.

Fd = # of modules deleted from the previous release.

SMI = [Mt – (Fa + Fc + Fd)] / Mt

{As SMI approaches 1, the software becomes more stable}
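
A minimal Python sketch of the four formulas above, with made-up sample values (the function names and inputs are ours, not from the notes):

    # MTBF and MTTR must be in the same time units (e.g., hours).
    def reliability(mtbf):
        # R = MTBF / (1 + MTBF), so 0 < R < 1
        return mtbf / (1 + mtbf)

    def availability(mtbf, mttr):
        # A = MTBF / (MTBF + MTTR)
        return mtbf / (mtbf + mttr)

    def maintainability(mttr):
        # M = 1 / (1 + MTTR)
        return 1 / (1 + mttr)

    def smi(mt, fa, fc, fd):
        # SMI = [Mt - (Fa + Fc + Fd)] / Mt
        return (mt - (fa + fc + fd)) / mt

    print(reliability(100.0))                # ~0.990
    print(availability(100.0, 2.0))          # ~0.980
    print(maintainability(2.0))              # ~0.333
    print(smi(mt=120, fa=4, fc=6, fd=2))     # 0.9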

Estimating # Of Errors:

    1. Error seeding: (s / S) = (n / N)

       s = # of seeded errors found, S = total # of seeded errors,
       n = # of real errors found, N = total # of real errors.

    2. Two groups of independent testers:

       x = # of real errors found by tester 1, y = # found by tester 2,
       q = # found by both (the overlap).

E(1) = (x / n) = (# of real errors found by 1 / total # of real errors) = q / y = (# overlap / # found by 2)

E(2) = (y / n) = q / x = (# overlap / # found by 1)

n = q / (E(1) * E(2)) = (x * y) / q

x = 25, y = 30, q = 15

E(1) = (15 / 30) = .5

E(2) = (15 / 25) = .6

n = [15 / (.5)(.6)] = 50 errors
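
The same estimate as a small Python sketch (variable and function names are ours):

    # Two independent testers: x errors found by tester 1, y by tester 2,
    # q found by both (the overlap).
    def estimate_total_errors(x, y, q):
        e1 = q / y              # E(1) = q / y estimates x / n
        e2 = q / x              # E(2) = q / x estimates y / n
        return q / (e1 * e2)    # n = q / (E(1) * E(2)) = x * y / q

    print(estimate_total_errors(x=25, y=30, q=15))  # 50.0 errors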

confidence in software:

S = # of seeded errors.

s = # of seeded errors found.

N = # of actual errors (the claimed maximum).

n = # of actual errors found so far.

    1. If all S seeded errors have been found:

       C (confidence level) = 1 if n > N

       C (confidence level) = S / (S – N + 1) if n ≤ N

    2. If only s of the S seeded errors have been found so far:

       C = 1 if n > N

       C = C(S, s – 1) / C(S + N + 1, N + s) if n ≤ N

       (C(a, b) here is the binomial coefficient "a choose b")
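
A Python sketch of both confidence computations, assuming the standard Mills-style error-seeding formulation given above (math.comb supplies the binomial coefficients; names are ours):

    from math import comb

    def confidence_all_seeds_found(S, N, n):
        # Case 1: all S seeded errors have been found.
        if n > N:
            return 1.0
        return S / (S - N + 1)

    def confidence_partial_seeds(S, s, N, n):
        # Case 2: only s of the S seeded errors found so far
        # (binomial-coefficient form; an assumption based on the
        # usual presentation of Mills' error seeding).
        if n > N:
            return 1.0
        return comb(S, s - 1) / comb(S + N + 1, N + s)

    # Claim "no errors remain" (N = 0), with 10 seeded errors:
    print(confidence_all_seeds_found(S=10, N=0, n=0))     # ~0.909
    print(confidence_partial_seeds(S=10, s=8, N=0, n=0))  # ~0.727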

intensity of failure:

    1. Suppose that failure intensity is proportional to the # of faults (errors) present at the start of testing.
    2. Weight each function's faults by its share of duty time:

       duty time: function A 90%, function B 10%

       Suppose 100 total errors: 50 in A, 50 in B.

       Initial intensity = (.9)(50K) + (.1)(50K) = 50K, where K is the proportionality constant.

       Fix all the faults in A and (.1)(50K) = 5K remains,

       or fix all the faults in B and (.9)(50K) = 45K remains.

       {Fixing faults in the heavily used function reduces failure intensity far more}

    3. Zero-failure testing:

failures observed up to time t = a * e^(-b * t)

# of failure-free test hours needed = [ln (failures / (0.5 + failures)) / ln ((0.5 + failures) / (test failures + failures))] * (test hours to last failure)

Example:

33,000-line program.

15 errors found.

500 total hours of testing.

50 hours since the last failure (so 450 test hours up to the last failure).

If we need a failure rate of 0.03 failures / 1000 LOC:

failures = (.03)(33) ≈ 1

[ln (1 / (0.5 + 1)) / ln ((0.5 + 1) / (15 + 1))] * 450 ≈ 77 hours of failure-free testing needed
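
A quick Python sketch of this computation (names are ours):

    from math import log

    def zero_failure_hours(target_failures, test_failures, hours_to_last_failure):
        # [ln(f / (0.5 + f)) / ln((0.5 + f) / (tf + f))] * test hours to last failure
        f = target_failures
        return (log(f / (0.5 + f)) /
                log((0.5 + f) / (test_failures + f))) * hours_to_last_failure

    # 33 KLOC, 15 failures in 500 test hours, last failure 50 hours ago,
    # target rate 0.03 failures/KLOC -> about 1 tolerable failure:
    print(zero_failure_hours(target_failures=0.03 * 33,
                             test_failures=15,
                             hours_to_last_failure=450))  # ~77 hours

Software Quality: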

McCall's software quality factors (each factor estimated as a regression function, i.e., a weighted sum of measurable metrics).

Software Quality Measurement Principles:

    1. Formulation.
    2. Collection of information.
    3. Analysis of the information.
    4. Interpretation.
    5. Feedback.

Attributes Of Effective Software Metrics:

    1. Simple and computable.
    2. Empirically & intuitively persuasive.
    3. Consistent & objective.
    4. Consistent in use of units & dimension.
    5. Programming language independent.
    6. Provide mechanism for quality feedback.

Sample Metrics:

    1. Function-based (function points).
    2. Bang metric (DeMarco).

RE / FuP, where RE = # of relationships in the data model and FuP = # of functional primitives.

(< 0.7) function-strong applications.

(> 1.5) data-strong applications.
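
A tiny Python sketch of this classification rule (thresholds from the notes; treating the in-between range as a hybrid application is our assumption):

    def classify_application(re, fup):
        # RE = # of relationships, FuP = # of functional primitives
        ratio = re / fup
        if ratio < 0.7:
            return "function-strong"
        if ratio > 1.5:
            return "data-strong"
        return "hybrid"  # assumed label for the middle range

    print(classify_application(re=12, fup=20))  # 0.6 -> function-strong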

Specification Quality Metrics:

nr = nf + nnf

where:

nr = total # of requirements in the specification.

nf = # of functional requirements.

nnf = # of non-functional requirements.

Specificity, Q1 = nai / nr

nai = # of requirements for which all reviewers agreed on the interpretation.

Completeness, Q2 = nu / (ni * ns)

nu = # of unique function requirements.

ni = # of inputs (stimuli) specified.

ns = # of states specified.

Overall completeness, Q3 = nc / (nc + nnv)

nc = # of requirements validated as correct.

nnv = # of requirements not yet validated.
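
A small Python sketch of Q1-Q3 with made-up counts (function names are ours):

    def specificity(nai, nr):
        # Q1 = nai / nr
        return nai / nr

    def completeness(nu, ni, ns):
        # Q2 = nu / (ni * ns)
        return nu / (ni * ns)

    def overall_completeness(nc, nnv):
        # Q3 = nc / (nc + nnv)
        return nc / (nc + nnv)

    print(specificity(nai=40, nr=50))           # 0.8
    print(completeness(nu=30, ni=10, ns=4))     # 0.75
    print(overall_completeness(nc=45, nnv=5))   # 0.9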

Software Quality Indices:

IEEE standard - SMI (Software Maturity Index)

SMI = [Mt – (Fa + Fc + Fd)] / Mt

Mt = number of modules in current release.

Fa = modules added.

Fc = modules changed.

Fd = modules deleted.

Component Level Metrics:

    1. Cohesion metrics:

       Data slice.

       Data tokens.

       Glue tokens.

       Superglue tokens.

       Stickiness.

    2. Coupling metrics:

       For data & control flow coupling.

       For global coupling.

       For environmental coupling.

    3. Complexity metrics.
    4. Interface design metrics.