What is the hardest kind of software to program?

Software technology

multi-person construction of multi-version software

Parnas (1987)

Software engineering - characteristic properties and goals

Software engineering is about fixing errors, extending functionality, removing obsolete features, and adapting software to new environments. Common principles are:

  1. Separation of concerns / requirements
  2. Modularization (decomposing complex problems top-down, or assembling a complex system from prefabricated parts bottom-up)
  3. Abstraction (highlight the important, ignore details and, if necessary, represent different abstractions of the same reality)
  4. Anticipation of changes (when developing, bear in mind that requirements will change over time; version management is therefore necessary)
  5. Generalization (try to generalize problems in order to address them at a higher level)
  6. Incrementality (a step-by-step approach enables an evolutionary development process in which early feedback flows back into the initial requirements)

Areas: SW development, SW management, SW quality assurance, and maintenance

Programming on a small and large scale, differences

  • Requirements are known on a small scale
  • Programs can be written and designed by individuals
  • With large applications, requirements are no longer trivial and manageable
  • Precise specification becomes necessary and a model of the application is worked out
  • We are now working in a team

Engineering - systematic approach to problem solving

  • Systematic approach
  • Precise definition of the problem
  • Learn from existing solutions
  • You have to be able to recognize repeating patterns
  • Define formal models, as these promote automation
  • Standard tools are used for development
  • Develop techniques with which classes of problems can be solved

Why is software engineering so complicated?

  • Specifications are often unclear or ambiguous
  • The semantics are very complex and there is little mathematical support

History of software engineering

  • Previously problems between only one user and computer and no other people
  • Thus problems were simple and manageable
  • Users specified the problem
  • Programmers interpreted the specification and programmed according to it

Problems of the development of large software systems and causes

  • Specifications are often unclear or ambiguous
  • The semantics are very complex and there is little mathematical support
  • Maintenance alone accounts for about 67% of the costs

Life cycle of a software system

  • The development and evolution of a software system is systematically implemented according to a specific method
  • The development and evolution of a software system consists of well-defined phases
  • Each phase has well-defined start and end points which clearly define the transition to the next phase

Waterfall model of the life cycle of a software system

  1. analysis
  2. design
  3. Implementation and module test
  4. Integration and system tests
  5. Commissioning and maintenance

Phases of the waterfall model

  • Requirements analysis and specification (what's the problem?)
  • Design and specification (how to solve it?)
  • Coding and module tests (function bodies are programmed)
  • Integration and system test (all modules are put together)
  • Handover (GoLive) and maintenance (changes after customer handover)

Programming languages and SE (influence)

  • Central tools for development
  • Should enable modularity, software architecture, separate and independent compilation of modules, GUI design and specification independent of implementation

Data independence, i.e. using data without having to know the underlying data representation

Separation of specification and implementation

Databases and SE (influence)

  • Large structured objects can be saved (sources)
  • Large unstructured objects can be saved (Object Programs)
  • Configuration and version management easily possible
  • Database schemas are also documentation

Management and software engineering - goals and structure

Technical management

  • Cost estimation
  • Project planning
  • Resource planning
  • Workflows
  • Project monitoring

Personnel management

  • Motivation
  • Hiring
  • Choosing the right people

"Collective term for programs that are available for the operation of computing systems, including the associated documentation" (Brockhaus Encyclopedia)

Special properties of the software as a product

  • People think software is easily customizable, which is wrong
  • In software, engineering effort goes into development; in other sectors it goes into production
  • A product is not only what the customer receives, but also the requirements, design, sources and test data

Properties of software

  1. Correctness
  2. Reusability
  3. Reliability
  4. Growth / evolution
  5. Robustness
  6. Portability
  7. Efficiency / performance
  8. Comprehensibility
  9. User friendliness
  10. Interoperability
  11. Verifiability
  12. Productivity
  13. Maintainability
  14. Punctuality / timeliness
  15. Correctability
  16. Visibility

Correctness of software

  • A program adheres to its specification and the functions defined there
  • Therefore, the specification must be present and correct
  • Correctness is thus a mathematical property that characterizes the relationship between specification and software

Software reliability

The program does what it should and what is expected.

Reliability is a relative quality attribute: even incorrect software can still be reliable, as long as the small amount of incorrect behavior can be tolerated.

Software robustness

A program is robust if it behaves "sensibly" even under conditions that were not taken into account in the requirements specification. (e.g. incorrect input data, hardware malfunction).

  • Robustness against expected events, such as incorrect inputs, is part of correctness
  • Robustness proper means coping with unexpected events outside of the specification

Performance of software

  • Performant behavior means that a system works efficiently
  • It is efficient when it makes economical use of the computer's resources
  • This concerns both speed and storage requirements
  • Measure via monitoring to discover bottlenecks
  • Analyzing a model is cheap but imprecise
  • Simulating a model is expensive but more precise
  • Performance tests can only be completed after the first version exists, not before, since things may still change

Ease of use of software

  • A program is user-friendly if its users find it easy to use (subjective)
  • Attention must be paid to inexperienced (menus) and experienced users (commandline)
  • Embedded systems often have no user interface
  • It is important to standardize the user interface in order to promote recognition

Verifiability of software

  • A software system is verifiable if its properties can be easily checked and verified
  • Formal analysis, testing, tracing and debugging ...

Software maintainability

  • Maintenance should not be seen as repairing wear and tear, but as making changes
  • Changes to software are extremely expensive and account for about 60% of all costs
  • Software evolution is the result
  • Thinking ahead is required! (Anticipation of changes)

Categories:

  1. Corrective maintenance
  2. Adaptive maintenance
  3. Perfective maintenance

Correctability of software

  • Errors must be easy to rectify without having to make a great deal of effort
  • Also represents a major design goal
  • Reduce the number of individual parts and use standard parts as far as possible, as these contain fewer defects

Extensibility of software (growth, evolution)

  • Software evolution to add new functions and change old ones
  • Extensibility is favored by modularization

Software reusability

  • Indicates how easy it is to be able to reuse parts of the software system (possibly with minor changes) in other software systems
  • Create libraries that can be reused
  • Software re-use is still rare
  • Reusable components are also much more expensive than the ones that aren't

Software portability

  • Software can run in different environments (hardware, software, OS)
  • It should therefore require minimal configuration
  • The penalty for portability can be poorer runtime behavior, since the code is no longer optimized for a specific platform

Comprehensibility of software

A program is understandable when its behavior is predictable

Internal:

  1. How understandable is the SW system?
  2. Influence on expandability and verifiability

External:

  1. SW system is understandable when it shows predictable behavior
  2. Understandability is part of usability

Software interoperability

Is the possibility of a system to coexist and cooperate with other systems. Open systems have standardized interfaces that enable communication between systems from different manufacturers.

Software development productivity

  • Indicates how much high-quality output can be produced in a given period of time
  • Efficiency means that a good, high-quality product can be produced in a short time
  • Producing reusable systems lowers productivity, as it is more complex
  • Productivity is difficult to measure

Possibility of timely delivery of the product (punctuality / timeliness)

This requires careful cost calculation, effort estimation and planning, setting of milestones and project management with computer-aided project management tools.

  • Problems are estimating the time required, productivity and setting sensible milestones
  • Customer requirements grow faster than they can be implemented
  • Solving the problems through incremental development methods
  • SW crisis: SW products delivered late and incorrectly

Visibility

  • Every step is clearly documented
  • In requirement and specification
  • Individual steps and the status of the project are available at any time
  • External quality (Presentation of the status)
  • Internal quality (allows the programmers a more precise estimation and division of their actions)
  • = Maintenance of the requirements and design specifications

Principles of Design

  • Principle of filtering and separating
  • Principle of modularization
  • Principle of abstraction
  • Principle of anticipating changes
  • Principle of generality
  • Principle of incrementality

The principle of filtering and separating - breaking down complexity

  • Filtering and separating the most relevant key points
  • Omission of unnecessary information
  • This is the only way to decompose the problem (e.g. in modules)
  • It is often easier to grasp the sub-problems than the whole
Decomposition based on:
  1. Time: schedule of activities
  2. Quality features: e.g. first correctness, then efficiency
  3. Views: e.g. data flow on the one hand and control flow on the other
  4. Size: modularization
  5. Responsibilities: Distribution of work among different people
  6. People with different abilities

Principle of modularization

  • Decomposition of the problem into individual modules
  • Complex systems can be divided into small pieces or building blocks = modules -> the system is modular
  • Can be processed and tested in parallel
  • Communication via a clear interface
  • Vertical modularity (each module described in different levels of abstraction)
  • Horizontal modularity (system description on the same level of abstraction)
  • Separation of functional specification (ER diagrams for data, DF diagrams for functions, Petri nets for control)
Decomposition of a system:
  • Subdivide the original problem into sub-problems, then
  • recursively subdivide the sub-problems (top-down) -> divide et impera
Composition of a system:
  • Assemble the system from elementary, prefabricated components (bottom-up)

Principle of abstraction

  • highlight important things and ignore details
  • Division into relevant, logical parts
  • possibly many different abstractions of the same reality: different perspectives and intentions
  • Models: abstractions of reality

Principle of anticipation of changes

Is a principle that makes software very different from other types of industrial products.

  1. new requirements are found
  2. old requirements are updated
Therefore, make provision for changes (e.g. by means of a corresponding draft)
  • Tools needed to manage the various versions and revisions of the software and its documentation
  • Configuration management problem
  • Requirements that the customer would like to have at some point with a high degree of probability or requirements that arise for other reasons, such as changes in the environment, etc.
  • the system must be programmed in such a way that changes can be made easily, without having to redesign half the system

Principle of generality / general validity

If a particular problem is to be solved, one should try to identify a more general problem than this and solve the more general problem (because that is probably more reusable).

  • Discover similarities and generalize, i.e. be able to use standards
  • Increases reusability and reduces costs
  • Without generality, similar modules would have to be developed over and over again

Principle of incrementality

  • Step-by-step approach to maintain an overview and reduce errors
  • Is an evolutionary process
  • "All at once" method is not feasible for large projects, therefore a step-by-step approach
  • Early feedback from the customer: important as initial requirements may not be stable or not fully understood

Why is software changed so often?

  1. Customer requirements grow
  2. Mistakes are made
  3. Parts were forgotten when specifying
  • Clearly define the problem to be solved.
  • Develop standard tools, technologies, and methodologies to solve the problem

Function modeling: Data flow diagrams, data dictionary, process specifications

Data modeling: ER model

Event modeling: Finite automatons, Petri nets

Objectives, structure and process of the analysis

  • The aim is to find out what the real problem is
  • Possible approaches are structured analysis (top-down analysis of a process) or OO analysis (bottom-up analysis of an object)
  • Requirements analysis (feasible? costs, effort, etc.)

The analysis includes the following points:

  1. Specification of the main objectives
  2. Obtain sources of information
  3. Requirements analysis
  4. Delimitation of the problem area
  5. Determination of the parties involved
  6. Content description
  7. Create use cases
  8. Set priorities
  9. look for alternative solutions
  10. Make recommendations

Collection of requirements

  • Requirements are recorded in natural language
  • User manuals emerge from the specification
  • System test plans should be derived from specification
  • Black-box tests (tests against the specification, as there is no code yet)

Categories of requirements

  1. functional: What should the new system do?
  2. qualitative: Quality factors that have to be met
  3. operational: expected conditions in practical use

Phases of analysis

1. As-is analysis:

  • Finding the requirements
  • Gather knowledge of the existing system
  • Specify customer requirements

2. Target concept:

  • Analyze the requirements
  • Analyze knowledge

2 procedures:

Structured and object-oriented analysis

Structured analysis

  • Structuring through repeated breaking down into components of lower complexity until elementary components have been found that can be clearly and completely described (top-down analysis)
  • Starts with behavior analysis of the processes
  • Functions are divided into groups
  • The result is a hierarchy of functions
  • Data flow diagrams, data dictionary, process description

modified structured analysis:

  • Function modeling: static properties of the system behavior (data flow diagrams, data dictionary, process specifications)
  • Data modeling: Properties of the data and their semantic relationships
  • Event modeling: dynamic properties of the system behavior

Data flow diagram - construction, advantages, disadvantages

  • Shows how data moves in the system and where it causes changes
  • Top-down model (iterative decomposition)
  • is a model of the system functionality as a directed graph
  • Control of the process not apparent

Consistency of a data flow diagram

  • process description (every function is described)
  • Data Dictionary (all data types and data flows are defined)
  • Balanced DFD (I / O data flows of the parent process are known to the child DFD)

What is in a data catalog?

  • Central place where all information about data types and their transformations is stored
  • Case tools check whether these are identical
  • All properties of the data are listed in the DD
  • Data elements are grouped in data flow records (name, record, source, destination, description)
  • Contains information about processes, external entities and all records
Example:
  Appointment = Date + Start + End + Status + Purpose
  Start = Time
  Time = Hour + Minute
  Date = Day + Month + Year
  Address = ZIP + City + Street
  Group data = Group name
  Options = {Color + Appointment type}
  Minute = Number + Number
  Employee_ID = Number + Number
  Username = {characters}
  Password = {characters}
  Message = {characters}
  ZIP code = {number}

Process specification

How do you describe processes in structured analysis? Process specifications are made for all elementary processes.

elementary process:

Process at the lowest level of the process hierarchy, which is no longer refined (leaf of the hierarchy tree)

four ways of description

  1. structured language (pseudo-code)
  2. Decision tables
  3. finite automata
  4. logical specification

What is a decision table?

  • If the conditions are fulfilled, then perform the corresponding actions (with n conditions there are 2^n possible combinations)
  • compact and clear presentation of actions to be taken or actions that depend on the fulfillment or non-fulfillment of several conditions

four components

  1. Conditions: logical expressions
  2. Actions: semantic units are implemented later by appropriate program sections
  3. Rules: A rule is a combination of conditions
  4. Action indicator: An action indicator represents an association between a rule and a selection of actions that are carried out when the rule is valid
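The four components above can be sketched in code; the order-processing conditions and actions below are assumptions for illustration. Each rule pairs a combination of condition outcomes with the actions that the action indicators select.

```python
# A decision table: rules map a vector of condition outcomes to the
# actions to be carried out when that rule is valid.
def evaluate(rules, conditions):
    """Return the actions of the first rule matching the condition vector."""
    for rule_conditions, actions in rules:
        if rule_conditions == conditions:
            return actions
    return []  # no rule applies

# Hypothetical example with two conditions: (in stock?, credit ok?)
rules = [
    ((True,  True),  ["ship", "invoice"]),
    ((True,  False), ["request_prepayment"]),
    ((False, True),  ["backorder"]),
    ((False, False), ["reject"]),
]
```

With two conditions there are 2^2 = 4 rules, so the table is complete: every combination of condition outcomes selects exactly one set of actions.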

What do you model with a finite automaton, how and why?

  • Control aspects, in which a system has different states
  • Hypothetical machine suitable for modeling a control flow, which describes reactions (actions) of a process to occurring events

Display options

  1. State diagram
  2. State table
  3. State matrix
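A state table translates directly into code; the traffic-light states and the `timer` event below are illustrative assumptions, not from the source. The dictionary is the state matrix: it maps (state, event) pairs to successor states.

```python
# Finite automaton as a state-transition table (a traffic-light controller).
transitions = {
    ("red",    "timer"): "green",
    ("green",  "timer"): "yellow",
    ("yellow", "timer"): "red",
}

def step(state, event):
    """Fire one transition; unknown events leave the state unchanged."""
    return transitions.get((state, event), state)

# Three timer events cycle the light back to its initial state.
state = "red"
for _ in range(3):
    state = step(state, "timer")
```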

Finite automata - limits of use, advantages, disadvantages

  • Quickly becomes too complex if there are too many states
  • With 20 states, 1 million different assignments are possible
  • Sometimes an action depends on several states and not just one
  • Synchronization only possible with difficulty (e.g. producer-consumer problem)

Petri nets

A tool well suited to modeling a system of competing / cooperating processes.

Problem of synchronization and Petri nets

Tokens represent the flow of control, but they are anonymous, so the content of a token is unknown.

E.g.:

A token in a buffer describes the presence of a message, but not whether it is well-formed. A selection based on the content of the token cannot be described by the Petri net if several transitions can switch.

Furthermore, no description of priorities or time restrictions is possible.

Example: If you wait longer than a second, a message should be generated

Construction of a model with a Petri net

  • Input Place -> Transition -> Output Place
  • Marks are used for synchronization

Display elements:

  1. Place: state of a process; represents a condition
  2. Transition: controlled state transition; models an event that is triggered by switching ("firing") the transition
  3. Arc: between a place and a transition and vice versa
  4. Token: its presence in all places preceding a transition is the necessary condition for the transition to switch (fire)
Mutual exclusion is easy to implement with Petri nets, but deadlocks are possible ...
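The place/transition mechanics above can be sketched in a few lines; the producer/consumer place names are assumptions for illustration. A transition is enabled when every input place holds a token; firing consumes input tokens and produces output tokens.

```python
# A minimal place/transition net.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        for p in inputs:               # consume one token per input place
            self.marking[p] -= 1
        for p in outputs:              # produce one token per output place
            self.marking[p] = self.marking.get(p, 0) + 1

# Producer/consumer synchronization: consuming is only enabled
# once the buffer place holds a token.
net = PetriNet({"ready_to_produce": 1, "buffer": 0, "ready_to_consume": 1})
net.add_transition("produce", ["ready_to_produce"], ["buffer", "ready_to_produce"])
net.add_transition("consume", ["buffer", "ready_to_consume"], ["ready_to_consume"])
```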

Why and how are Petri nets expanded?

  • Tokens are represented by values
  • Transitions are assigned predicates that refer to the values of the tokens in the input places and help decide whether the transition may switch (transitions can only fire when certain tokens are present)
  • Transitions are assigned functions that compute the values of the tokens in the output places from the values of the tokens in the input places
  • Specification of priorities if several transitions can switch

"The focus is on objects that represent images of real-world objects in the problem area to be analyzed. Each object has a unique identity (object identity), certain properties (attributes) and certain behavior (methods). The objects communicate with one another by sending messages. How an object reacts to received messages is defined by the methods it contains. Other important concepts of OOA include: class, data encapsulation, inheritance and polymorphism (overloading)."

Object-oriented analysis

  • Starts with the analysis of the object structure
  • Functionalities are grouped and encapsulated in classes
  • Bottom-up analysis
  • Object-oriented design targets a concrete HW and SW system, architecture, language and OO library

Object Oriented Programming Concepts

  • Encapsulation, classes, messages, methods, inheritance, overriding, overloading of functions
  • Association models relationships between objects of one or more classes
  • Aggregation is a special form of association in which the objects of the classes involved do not have an equivalent relationship, but rather represent a whole-part hierarchy (describes how a whole is composed of its parts = part_of)
  • Classes can be derived and objects can be created from them
  • Static binding (the addresses of methods are fixed at compile time) or dynamic binding (the addresses of methods are determined only at runtime; a virtual declaration tells the compiler that, among several polymorphic methods, the one to call is selected at runtime via the object's actual type)
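Dynamic binding and overriding can be illustrated briefly; the `Shape` hierarchy below is an assumed example, not from the source. In Python every method call is dynamically bound: the method executed depends on the object's runtime class, not on the declared type of the reference.

```python
class Shape:
    """Base class: subclasses override area()."""
    def area(self) -> float:
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:          # overrides Shape.area
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:          # overrides Shape.area
        return 3.14159 * self.radius ** 2

# The same call s.area() dispatches to different code per object.
shapes: list[Shape] = [Square(2.0), Circle(1.0)]
areas = [s.area() for s in shapes]
```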

Steps of object-oriented analysis

  • Initial problem statement (goals, interface, technical aspects, features)
  • Object identification, class identification with attributes, relationship modeling
  • Define class hierarchy (aggregations and associations) and define constraints
  • Define methods and services, define functionality and OO scheme

Grammatical inspection

  • The specification documents are grammatically parsed
  • Nouns become objects, i.e. classes (objects with common properties)
  • Verbs later represent methods of the classes or relationships
  • And adjectives are attributes of objects

Modeling relationships

  • Represent semantic relations between objects
  • Aggregation (Is part of) to create hierarchies
  • Specialization / generalization (Is-A relationship, i.e. inheritance)
  • Message flow structure (sender and receiver for messages)

Modeling of restrictions (constraints)

Restrictions defined on the classes (e.g. maximum or minimum values ...)

What is the risk analysis for and how is it carried out?

  • Is there any competition?
  • Which technologies are used? (web, DBMS, languages)
  • Which market influences are influencing the project?
  • Number of expected users
  • Include future trends
  • Did the customer understand the problem?
  • Problem dimensioning (transactions per time, assumed runtime for various functions)
  • Which legacy systems need to be connected?

How do you identify system boundaries and actors?

There are internal system boundaries and external ones (not our job, but possibly interfaces to it). Actor is everything that has to interact with the system.

  1. Who is using the system (human or system)?
  2. Who maintains it?
  3. Who is shutting it down?
  4. Who gets information from the system?
  5. Who feeds it with data?
  6. What happens automatically in the system?

How do you identify use cases? (Use cases)

A use case describes what an actor wants to do. What happens, what is exchanged and what events are there?

Modeling aids

Scenarios, UML and class diagrams ...

What's a scenario?

  • Is a special path through the use cases from the user's point of view
  • Primary (Basic Path) and Secondary (other concurrent paths)
  • If you combine all paths you get a complete use case

What is the scope of a project and why is it so important?

Scenarios should be complete in order to be able to identify errors quickly.

Modeling in UML - fundamental questions

Based on OO

  • data-centered methods such as ERD, data flow diagrams, status diagrams
  • structural methods
  • scenario based methods (behavior analysis)
  • UML (object models from the use cases)
  • UCD - the use case diagram shows actors, use cases and their relationships
  • Structure diagrams for classes and packages (groups of classes)
  • Behavioral diagrams for dynamic behavior and system activities
  • Implementation diagram with component diagram (relations between program units) and deployment diagram (communication between components)

Questions

  1. Who are the users of the system? = actors
  2. What are the main objects?
  3. Which objects are needed for which use case?

What do you describe everything in a class diagram?

  • Provides the static view of the software - problematic, since OO complexity lies in the interaction of the methods
  • Domain class modeling
  • Business model (define the most important units of the system and their relationships)
  • Static semantics (class hierarchy describes the technical structure of the problem domain) can be based on a business model
  • A class diagram contains all classes and relationships (generalization, association, aggregation, composition) and thus also represents the hierarchy

How do you model a GUI (Graphical User Interface)?

  • First the manual is written, then the code
  • GUIs are designed in parallel that can be used for use cases
  • Window navigation diagrams show relationships between windows

What is the robustness analysis for?

Three types of objects are introduced here ...
  1. Boundary objects (interfaces through which actors communicate with the system)
  2. Entity objects (objects of the domain model, the main entities)
  3. Control objects (connect boundary and entity objects)
  • The robustness diagram is the interface between the dynamic and static part of a project
  • The dynamic part includes the use cases and sequence diagrams and the static part includes the domain model and the class diagram
  • The robustness diagram makes it easier to keep an overview

Why and how do you construct sequence diagrams?

Describe how the objects work together and the time sequence in which the methods are called. Sequence diagrams are consistent with the class diagrams. They relate to a specific process (scenario) of a use case!


An alternative to sequence diagrams are collaboration diagrams. They clarify the structure of the diagram better, but the chronological order of the method calls is more difficult to recognize than in sequence diagrams.

What do the collaboration and state diagrams model?

  • Collaboration diagrams show how individual parts work together (structure)
  • State diagrams (finite state machine) describe the Life cycle of an object
  • State diagrams are useful when the behavior of an object changes significantly
  • There is a start state, transition (transition) and target state in the state diagram

Activity diagrams are relatively new in UML and are used to describe processes, and not just using the example of a use case. Rather, the behavior of objects is shown in several use cases. There is synchronization and Concurrency of activities can be modeled.

Specifications

"Describes the planned, systematic approach to software development using methods, processes and software tools with the aim of producing and using high-quality software products economically."

What role do specifications play in software development?

  • Is a reference in the implementation
  • A product is only successful if it adheres to the specification
  • Often kind of contract between customer and developer

What properties should a specification have?

  • Contains a description of what the implementation must contain
  • Specification, requirement specification, design specification and module specification
  • Specification is the reference point during product maintenance
Quality features of a specification
  1. Consistency (no contradictions)
  2. Completeness (all required requirements)
  3. Unambiguous (explain and define all words used)
  4. Clear and understandable

How does an operational specification differ from a descriptive specification?

  • Operational specifications describe the desired behavior
  • Descriptive specifications describe the desired properties (i.e. describes the what but not the how)

Data flow diagrams describe operations but not the data structures or relations used. A prototype is an operational model

How formal are specifications formulated?

Informal (normal language), semiformal (precise syntax but imprecise semantics) or formal (precise syntax and semantics).

What is the verification of a specification?

  • The functionality, completeness and consistency of a specification must be checked
  • Monitoring of the dynamic behavior of the specified system
  • Analysis of the properties of the system

ER model

Entities (object types), relationships and attributes (object properties)

Logical specification

  • = FOT (first-order-theory or first-level predicate logic theory)
  • Describing in the form of formulas and logical expressions (1st order logic)
  • Variables, constants, functions, predicates, quantifiers, etc.
  • Always a Boolean result
  • Pre and post conditions
  • The advantage is that if the logical specification is correct, all derivations thereof are correct

Source: University of Münster
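The pre- and postcondition style mentioned above can be approximated with executable checks; the integer square root function below is purely illustrative. The precondition constrains the input, the postcondition characterizes the result without saying how it is computed.

```python
def integer_sqrt(n: int) -> int:
    """Return the largest r with r*r <= n, checked against its specification."""
    assert n >= 0, "precondition: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # postcondition: r is the integer square root of n
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r
```

If the logical specification (the two assertions) is correct, any implementation passing them is correct with respect to it — the advantage named above.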

Logical prototyping - methods, problems

  • Interpreter for the operational specification
  • But only the behavior can be tested, not the properties
  • Better to use a logic language such as Prolog, which approximates FOT

Algebraic specification - how do you describe the syntax?

Elements (Char, Nat, Bool) and operations (functions of the algebra such as new, append, add, length, isEmpty ...)

An algebraic specification is operational modeling based on an algebra.

Example: Algebraic specification of a stack

Source: University of Zurich

Algebraic specification - how do you describe the semantics?

Axioms of algebra, which must always be true.
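Typical stack axioms — e.g. top(push(s, x)) = x, pop(push(s, x)) = s, isEmpty(new()) = true — can be turned into executable checks; the tuple-based implementation below is an assumption for illustration, not the referenced specification.

```python
# Operations of the stack algebra over an immutable tuple representation.
def new():
    return ()

def push(s, x):
    return s + (x,)

def top(s):
    return s[-1]

def pop(s):
    return s[:-1]

def is_empty(s):
    return s == ()

# The axioms must hold for all stacks s and values x; we spot-check them.
s = push(push(new(), 1), 2)
assert top(push(s, 42)) == 42      # top(push(s, x)) = x
assert pop(push(s, 42)) == s       # pop(push(s, x)) = s
assert is_empty(new())             # isEmpty(new()) = true
assert not is_empty(push(s, 42))   # isEmpty(push(s, x)) = false
```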

Algebraic Specification Problems

Incomplete, overspecified, inconsistent, redundancy

design

"Conversion of the requirements specification obtained during the analysis phase into the architecture of the future software product."

Later serves as a template for implementation ...

Design - goals, structure and process

  • The aim is to specify the "how" (How is the problem solved?)
  • The design brings the requirements specification into templates for implementation
  • Decisions about HW and SW platform
  • Software architectural design phase (basic components and patterns)
  • Detailed draft of the modules (refining the components - specification only)

What possible future changes should the draft anticipate?

Changes in the algorithm, data representation, peripheral devices or social environment

What are patterns?

Repetitive patterns that have to be recognized in order to be able to use templates. A pattern description always contains context, problem and solution.

What is a software architecture?

  • describes the structure of the software system through system components (e.g. subroutines, abstract data types, classes) and their relationships with one another
  • Consists of components, the structure of the system and the communication between the components
  • Architectural design requires a lot of intuition and experience
Motivation:
  • Mastery of the complexity of the SW system
  • Division of work within a team
  • Reusability
  • easy maintainability

Which patterns are used in the software architecture (examples)?

  • Layered architecture (like the layer model of TCP/IP)
  • Pipes-and-filters architecture (e.g. scanner, parser, semantic analysis, optimization in a compiler)
  • Broker architecture (communication-based, for distributed systems)
  • Model-View-Controller architecture (interactive applications with model, view and events)
  • Presentation-Abstraction-Control (agents with presentation, abstraction and control)

Modularization

What is a module (program block)?
  • is a logical unit with a clearly defined area of responsibility
  • communicates with other modules only via its export/import interface
  • Information hiding - internal functions and ADTs are hidden from the environment
  • exchangeable with any other module that has the same export interface
  • can be developed separately from other parts of the program
  • beneficial for maintainability, reusability, testability and portability

Modularization - USES, IS_USED, IS_COMPONENT_OF

  • Module is a well-defined component of a software system
  • Functional modules (functions) or ADT modules / data-object modules
  • USES relation for outgoing edges
  • IS_USED for incoming edges
  • A hierarchy is built with IS_COMPONENT_OF or CONSISTS_OF

How do modules communicate (classification of modules)?

  • Via interfaces (variables, functions, etc.) that are explicitly marked as interfaces (exported)
  • Export (offer a service) and import (request a service)
  • The advantage is the abstraction of the modules, since details remain hidden
  • As long as the interface is unchanged, changes inside a module cause no additional effort elsewhere

Interface - EXPORTED, IMPORTED, design

An interface should offer only what is absolutely necessary, and as concisely as possible:

  1. otherwise too complex and incomprehensible
  2. otherwise susceptibility to errors in implementation
  3. too much information could be misused

Notation of the module design

  • No standardized notation yet
  • Each notation is only as formal as its syntax
  • the semantics of the exports are not formally specified
  • Textual notation includes function bodies and a description in natural language
  • Graphical design notation shows a diagram with USES and CONSISTS_OF relations

What is a generic module or subroutine?

  • Generic modules only contain structural information
  • Application-specific data fields are added by the user

With the help of generic modules, powerful and reusable program parts can be produced. Modules that provide similar functionality and differ only in their interface with respect to the types and exported functions are implemented at a higher level of abstraction as generic modules.

For this purpose, the varying parts are collected and abstracted in one module. The resulting generic module avoids inconsistencies between modules and can be changed easily. A generic module is parameterized with respect to a type - one also speaks of a program template.

A client cannot use such a generic module directly; it must first be instantiated with the current parameters.
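As a sketch of this idea in Python (the Queue template and its operations are invented for illustration; typing.Generic plays the role of the parameterization mechanism), a client "instantiates" the template with a concrete element type before use:

```python
# A generic (type-parameterized) module sketched with typing.Generic.
from typing import Generic, List, TypeVar

T = TypeVar("T")

class Queue(Generic[T]):
    """Program template: the structure is fixed, the element type T varies."""

    def __init__(self) -> None:
        self._items: List[T] = []

    def enqueue(self, item: T) -> None:
        self._items.append(item)

    def dequeue(self) -> T:
        return self._items.pop(0)

# Instantiation with actual parameters: the same template yields an
# int queue and a str queue without any duplicated code.
ints: Queue[int] = Queue()
ints.enqueue(1)
names: Queue[str] = Queue()
names.enqueue("a")
assert ints.dequeue() == 1 and names.dequeue() == "a"
```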

Step-by-step approach and refinement - advantages, disadvantages

  • Stepwise top-down refinement is the most popular method
  • In each step, the problem is broken down into subproblems
  • The subsolutions are then combined via control structures
  • A decomposition tree represents the result
Problems:
  1. difficult to implement in large programs
  2. does not always lead to the best solution
  3. Design is a creative process that cannot be bypassed
  4. discovering other solutions often falls by the wayside
Disadvantage:
  1. Sub-problems are considered in isolation, so generalization is not possible
  2. hardly any reusability
  3. no union of several subproblems
  4. no information hiding as there is no encapsulation
  5. if something changes in structure, everything has to be revised

Handling anomalies - exceptions

  • Unexpected and unforeseen circumstances must be shielded or intercepted (exceptions)
  • In the event of an anomaly, a module should send an exception signal to the client, which calls an exception handler (e.g. for overflow, division by zero, etc.)
  • try { ... } catch (Exception e) { /* handle the exception */ }
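A minimal sketch of this signal/handler protocol, here in Python rather than the Java-style notation above; the divide function and its recovery value are illustrative assumptions:

```python
# The module raises an exception signal; the client installs a handler.

def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("div by zero")  # exception signal to the client
    return a / b

try:
    result = divide(10, 0)
except ZeroDivisionError:          # exception handler in the client
    result = float("inf")          # recover instead of crashing
assert result == float("inf")
```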

Verification and testing

  • motivation
  • Test cases and test methods
  • black box testing
  • white box testing

Verification, small programs, large systems, approaches

  • Small programs are checked through several sample use cases
  • Experimenting with the behavior of the program = testing
  • Analyzing the program = deducing correctness (verification)
  • The problem is that the verification itself must be validated

Testing - problem of continuity

  • The system is tested in representative cases
  • It is not possible to test all states
  • The continuity problem: software does not behave continuously, so correct results for the tested values allow no conclusions about neighboring, untested values

Theoretical problems of testing, complete test selection criterion

  • A consistent and complete test selection criterion is not achievable in practice

Empirical testing

  • Different use cases are tested with different test series (groups)
  • Since a complete test is not possible, the ideal test set is approximated
  • The difficulty is to find all the important groups in a particular problem class

Differences Between White Box Testing and Black Box Testing

  • White-box tests use the internal structure of the software and may ignore the specification
  • Black-box tests are based solely on the specification, as there is no knowledge of code or design

Coverage of instructions, edges, conditions, paths

  • Statement coverage (an error cannot be detected if the part of the program that contains it is never executed)
  • Therefore every elementary instruction should be executed at least once
  • Edge coverage (each edge of the control flow graph has to be traversed at least once, which is not always sufficient, e.g. when a loop condition consists of several conjunctions)
  • Condition coverage (every edge of the control flow graph is traversed and every possible truth value is used for all conditions)
  • Path coverage (all paths from the initial to the final state of the control flow graph are traversed; usually there are too many paths due to loops in the program - each iteration count is an extra path!)
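The criteria above can be made concrete on a small function (the function and the test values below are hypothetical, chosen only to show the difference between statement and condition coverage):

```python
def classify(x, y):
    if x > 0 and y > 0:     # compound condition: two conjuncts
        return "both"
    if x > 0:
        return "x only"
    return "neither"

# Statement coverage: every elementary instruction executed at least once.
statement_tests = [(1, 1), (1, -1), (-1, -1)]

# Condition coverage additionally needs each conjunct to take both truth
# values, e.g. (x > 0, y > 0) in {TT, TF, FT, FF}:
condition_tests = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

assert {classify(*t) for t in statement_tests} == {"both", "x only", "neither"}
```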

Syntax-controlled testing, testing of limit values

  • Syntax-controlled testing means that a test case is processed for each BNF production, i.e. all grammar rules are exercised

Testing on a large scale, bottom-up integration and top-down integration, advantages, disadvantages

Module tests to verify correct implementation

Integration tests verify the cooperation of the modules involved

  • Big bang - no integrated testing at all
  • Incremental - easier to find bugs

System tests check the entire system with all modules

  • bottom-up (USES hierarchy)
  • top-down (lower level simulation)

What is Debugging?

  • Activity of finding and fixing bugs after they are discovered

Design of the user interface

The UI is the yardstick by which systems are rated by users. If software is difficult to use:

  • Software is discarded
  • Mistakes can be made quickly
  • Today a necessary component of every interactive system

High resolution graphic displays, mice and various interfaces for different user classes are used

  • GUIs should support the skills of the individual users
  • Consistency and help system

User interface consistency

System commands, menus and other interfaces should
  • Have the same format
  • Always pass parameters in the same way
  • Subsystems should be similar
  • When learning a command, the user should be able to draw conclusions about the operation of all others

Built-in HELP support

  • Accessible from the user terminal
  • "How to get started?"
  • Full description of the system and how to use it
  • Structured help

User interface templates

Metaphors are used that are easy to associate

  • Control panel
  • Desktop
  • trash

WIMP interface

WIMP = Windows, icons, menus, pointing (mouse click)

advantages

  • Easy to learn
  • Lots of interfaces for interaction
  • Manageable and uncomplicated
disadvantage
  • No standards
  • Difficult to find meaningful icons for abstract components

Systems with menu

  • Users don't need to know the exact command name
  • Less typing
  • The system cannot be brought into an error state
  • Context sensitive help
  • Pull-down and pop-up menus
Disadvantage:
  • Logical combinations of commands cannot be expressed
  • For experts menu slower than command line
  • Menu hierarchy quickly becomes complex when there are many choices

Graphical interface

  • Graphics are used to represent information
  • pictures say more than words
  • Trends are visible

Textual interface

  • A command line interpreter is cheap and easy to implement
  • Combining commands extends the functionality (at the price of complexity)
  • External procedures and programs can be integrated
  • Experienced users work very quickly
Disadvantage:
  • It takes a lot of effort to learn
  • Errors in commands are possible
  • Keyboard interaction (slow)
  • not useful for inexperienced users

Design of error messages

  • First impression to users
  • Difficult for inexperienced users
  • Properties: Consistent, Constructive, Clear and Precise
  • Message should contain a description of how to correct the problem
  • On-line help should be available

Use of color monitors

It is a new dimension, but is of no use to blind or color-blind people and there are no standards.

Tips:

  • Do not use too many colors (4-5)
  • Use colors consistently
  • Colors should be adjustable by the user

Problems of industrial software production

  • Software development is a process of planning and management
  • Implementation will begin after the problem is understood
  • The problem is that software development does not proceed linearly (feedback loops are required)

Life cycle of a software product

... - Task - Development - Use - Task - ...

The development stage is refined, e.g., by the phases of the waterfall model:

  • analysis
  • design
  • implementation
  • integration
  • installation
  • maintenance

Software production process - process models

Code-and-fix model, waterfall model

Code-and-fix model and software crisis

  • In the early days there was one person per software project
  • The problem was easy to survey and understand
  • Software development consisted only of coding
  • 2 phases: programming and removing errors
  • Problems of the code-and-fix model:
  • After several changes, the structure of the code becomes confusing and poor
  • It becomes more and more difficult to incorporate new changes
Software crisis
  • Around 1960 the development of large software systems begins
  • Basic problem everywhere: over budget and behind schedule
  • The problems to be solved were not well identified and understood
  • Employees spent more time communicating with one another than coding
  • The original system requirements were changed subsequently
  • Developers left projects

Software engineering was born to look for a way out of these problems

"A phenomenon that is constantly accompanying us in various forms and expresses that the effort to be made for software production and operation exceeds the resources available for this or will soon exceed it."

Waterfall model

  1. Feasibility study
  2. Requirements specification and analysis
  3. Design
  4. Coding and module tests
  5. Integration and system tests
  6. Delivery and maintenance
The disadvantage is the strictly sequential course of the phases.

Feasibility study

Estimate costs and look for alternative solutions.

Aim:

  • Problem definition
  • Alternative solutions
  • Required resources and costs

Specification and analysis of requirements

Specify the necessary quality features:
  • Functionality
  • Performance
  • Usability
  • Portability ...

Describes the "what" is to be implemented, not the "how"!

Aim: Requirements specification (functional specification)

Design

Here the system is decomposed into modules. The design specification includes

  • Description of the software architecture (modules, relationships)
  • Abstraction level (Uses, Is-Component Of ...)
  • Detailed module interfaces

Coding and testing

  • Programming the modules
  • Testing and debugging modules
  • System tests after implementation
  • Alpha tests (realistic conditions, but in-house users)
  • Beta tests (realistic conditions with selected customers as users)

Delivery and maintenance

Maintenance accounts for over 60% of product costs:

  • Corrective maintenance (errors)
  • Perfective maintenance (performance)
  • Adaptive maintenance (extensions or changes) -> evolution

Advantages and disadvantages of the waterfall model

  • No feedback on previous phases
  • Rigidity of the phases
  • Early mistakes have serious consequences
  • Requirements specification mostly incomplete
  • Limited information available
  • Estimating costs and planning can only take place after a certain amount of analysis
  • Users often do not know the exact requirements of an application
  • Waterfall model does not include anticipation of changes
  • Document-driven process, as each phase requires documents to be produced

Evolutionary model

  • "Do it twice" principle
  • First version of a product is a disposable prototype
  • Is considered an attempt to analyze and verify feasibility and requirements
  • The problem is the time gap between the definition of requirements and the final delivery of the product
  • Solution to the problem: incremental approach

Incremental model for implementation

  • Waterfall model is applied through to design
  • Then proceed gradually
  • Interfaces that allow subsystems to be added later
  • Code-And-Tests + Integrate + Tests
  • Each incremental step is designed, coded, tested and integrated separately
  • The steps are sketched out one at a time
  • Implemented only after feedback from customers

Prototyping

  • Evolutionary principle to structure the life cycle
  • Display or GUI included
  • Dummy functions
  • Good tool for refining client's requirements

Transformation model

Software development here is a series of steps that convert the specification into implementation. The transformations are carried out manually or semi-automatically.

Two main steps:

  1. Requirements analysis (and verification of this)
  2. Optimization (and tuning on this)

Spiral model

  • Is a metamodel which can be applied to all models
  • Identify and remove risks at the design stage
  • It's cyclical and not linear like the waterfall model
  • A risk analysis is carried out again in each run
  • A new prototype is created with each run
  • Stage 1: Identify goals, requirements and alternatives
  • Stage 2: Investigating alternatives, prototyping and simulations
  • Stage 3: Development and verification
  • Stage 4: Examine results and plan the next iteration

Evaluation of process models

  • Waterfall model: documentation based
  • Evolutionary model: Incremental
  • Transformation model: specification-based
  • Spiral model: risk based
  • Prototyping: end-user related
Prototyping:
  • Supports GUI design
  • Reduces risks such as stopping at unnecessary aspects
  • Helps focus on relevant issues
  • 40% less development time and source instructions

Software methodologies - advantages, disadvantages

  • Method = a way of doing something
  • Methodology = a system of methods supported by tools
  • Standards: methodologies are combined into company-wide reusable packages
Advantages:
  • Guides the programmer through all phases
  • Teaches inexperienced people how to solve problems systematically
  • Standardized problem-solving strategies
Disadvantage:
  • Lack of formal investigation
  • Consumes a lot of staff - expensive
  • Sometimes it takes more time to understand the methodology than the problem

Comparison of Life Cycle Models

Build and fix
  • Strength: fine for small programs that do not require much maintenance
  • Weakness: totally unsatisfactory for nontrivial programs
Waterfall
  • Strength: disciplined, document-driven approach
  • Weakness: the delivered product may not meet the client's needs
Rapid prototyping
  • Strength: ensures that the delivered product meets the client's needs
  • Weakness: a need to build twice; cannot always be used
Incremental
  • Strength: maximizes early return on investment; promotes maintainability
  • Weakness: requires an open architecture; may degenerate into build and fix
Synchronize and stabilize
  • Strength: future users' needs are met; ensures components can be successfully integrated
  • Weakness: has not been widely used other than at Microsoft
Spiral
  • Strength: incorporates features of all the above models
  • Weakness: can be used only for large-scale products; developers have to be competent at risk analysis
Source: University Of California

Structured analysis and design

  • Function modeling: Data flow diagrams, data dictionary, process specifications
  • Data modeling: ER model
  • Event modeling: Finite automata, Petri nets

Integration, usability, adequacy, learnability, reusability, tool support and economic efficiency

Jackson's structured programming

  • Jackson Program Design Methodology or Jackson Structured Programming (JSP)
  • Method for design and modeling "on a small scale"
  • Design begins by examining what is known
  • A model based on data-structured methods is created in several iterations
  • The detailed structure of the control flow is derived directly from the description of the input / output data structures and their relationships to one another

Configuration management

  • Usually many versions of a product are created
  • Products are subject to many changes during maintenance
  • A software house must be able to trace every change
  • Only then can it be certain how mistakes are corrected and decisions made
  • Release = group of components that are created together
  • How should this be controlled? -> Configuration management

Configuration management - terms

  • Version = instance of an object
  • Configuration item = special version of a component group
  • Baseline = frozen release, which represents specific status
  • Variant = configuration item, which is slightly different
  • Change request = online form that records all change notes
  • CM controls changes and the evolution of a product
  • Problems are sharing of components and handling of product families

Division of components

Problem of multiple access to a common pool of components

  • Multiple accesses must be matched and synchronized
  • Others need to be informed of changes
  • There must be no data loss or inconsistencies in the code

Management of product families

  • CM not only manages source codes, but also docs, test data and manuals
  • The problem is that a component can exist in several versions
Possible solutions:
  • Each family member is made up of different versions of the components
  • Each family member includes their own copy of all the necessary components

Versioning

  • Central shared database which is associated with the project
  • Every programmer has his own developer space, where his intermediate versions are stored that he is currently working on
  • Modules are obtained and returned only via check-in / check-out functionality

CASE tools - advantages, disadvantages

  • Computer aided software engineering
  • Independent of hardware, OS and language
  • Supports analysis and definition of data flows
  • Help with drafting detailed specifications
  • Helps with programming and simplifies and supports documentation in every phase
  • Helps with project management and controls the project
  • Data analysis is integrated in CASE Tools

CASE tools guarantee that nothing is forgotten:

  • Every data item with its attributes
  • Every process
  • Every relationship between data
Disadvantage:
  • CASE tools are complex - a lot of training time is necessary
  • CASE tools are aids, not solution generators

Reverse engineering

  • Some CASE tools support this
  • An attempt is made to get back from implementation to specification
  • Everything exactly the other way around ...

Selection of a CASE tool

  • Complete integration of analysis, design, coding and testing?
  • Which standards and methods are supported?
  • Teamwork possible?
  • Client / server concept?
  • How user friendly? (easy to use with GUI, icons, help system)
  • Working with CASE shouldn't take longer than without

Architecture of a CASE tool

  • For each phase there are different editors, which create the documents of that phase
  • A database as data store makes sense, as it protects against inconsistencies and supports multi-user operation
  • A relational DBMS is inefficient, because there are too many relationships and the objects are too complex (too many joins necessary to fetch an object)
  • Central data dictionary which contains definitions of all data and attributes
  • An OODB has many advantages (eliminates the disadvantages of a relational DBMS; stored objects are not split up by normalization)

Project appraisal, project planning, planning of personnel requirements, division and assignment of tasks, project management and monitoring

Hire staff, motivate employees, assign the right people to the right tasks

Project management - goals and problems

  • Planning, estimating, classifying, monitoring, reporting -> control
  • Use of PM software
  • Process of coordinating all steps throughout the software lifecycle
Project managers can be
  • Senior Systems Analyst
  • Large IS => single manager
  • Small IS => programmer himself

General management tasks

  • Planning of all activities (identifying tasks and time / cost estimation)
  • Distribute tasks (put together a team)
  • Organizational matters (structuring and dividing project work)
  • Monitoring the processes (leading, advising, coordinating)
  • Checking the work (checking results ...)

Planning a project

  • Planning at the beginning of each phase
  • An activity requires different resources (personnel, time, money)
  • Define events (milestones etc.)
  • Planning at the end of a phase to verify cost estimates

Methods of project appraisal

  • The most difficult part of project management
  • Project size and required resources are no longer simply proportional!
  • Communication, changes, interfaces, etc. add overhead
  • Communication graph with n(n-1)/2 edges (i.e. quadratic effort)
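A quick check of the quadratic-effort claim, using the edge-count formula from the bullet above:

```python
# A team of n people has n(n-1)/2 potential communication paths
# (edges of the complete communication graph).

def communication_paths(n):
    return n * (n - 1) // 2

assert communication_paths(2) == 1    # one pair
assert communication_paths(10) == 45  # grows quadratically with n
```

Doubling the team from 5 to 10 people raises the paths from 10 to 45, which is why effort does not scale linearly with team size.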

3 methods

  • Quantitative estimation method
  • Experience-based method
  • Constraint method

Quantitative estimation method

  • Tables and formulas are used for the estimate
  • Tables: numbers and types of files, functions, etc. serve as an indicator, which is divided by productivity
  • Numerical weights are assigned to the individual problems
  • Work * experience / productivity = person-days

Method of estimation based on experience

  • Based on experience from previous projects
  • Doesn't work for large projects because it is too complex

Method of restrictions

  • Project requirements serve as the basis

Project schedule (scheduling)

  • Determine the order of the tasks
  • Which tasks depend on which results?
  • Dependencies are determined by placing activities in a logical sequence
  • Gantt charts and PERT/CPM

Gantt charts

  • Can get too big quickly
  • Decomposition: one plan per team, one plan per task
Disadvantage:
  • No dependencies shown
  • No personnel information or person-days given

PERT / CPM charts

The "critical path" method with a directed graph with:
  • Activities / tasks as edges
  • Events as nodes and for synchronization (e.g. dependencies) dummy nodes
  • Earliest Completion Time and Latest Completion Time are entered in the node
  • This creates buffers that are used for reallocation in the event of shifts
Disadvantage:
  • It gets very complicated even for small projects
  • No person scheduling

Critical path

Critical path is a path which consists of nodes with no buffer available (ECT == LCT)
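A minimal sketch of the ECT/LCT computation on an invented four-activity graph (durations and dependencies below are assumptions, not data from the text):

```python
# Forward pass computes earliest completion times (ECT), backward pass
# computes latest completion times (LCT); the critical path is where
# ECT == LCT, i.e. no buffer is available.

duration = {"A": 3, "B": 2, "C": 4, "D": 1}
preds = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}
order = ["A", "B", "C", "D"]  # a topological order of the tasks

ect = {}
for t in order:  # forward pass: finish as early as predecessors allow
    ect[t] = duration[t] + max((ect[p] for p in preds[t]), default=0)

project_end = max(ect.values())
lct = {}
for t in reversed(order):  # backward pass: finish as late as successors allow
    succs = [s for s in order if t in preds[s]]
    lct[t] = min((lct[s] - duration[s] for s in succs), default=project_end)

buffers = {t: lct[t] - ect[t] for t in order}
critical = [t for t in order if buffers[t] == 0]
assert critical == ["A", "C", "D"]  # B has one time unit of buffer
```

Shifting B by up to its buffer of one unit does not delay the project; any delay on A, C or D delays the whole project.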

Cost-benefit analysis

  • The costs are compared with the benefits
  • Determine economic benefits and compare them with alternative solutions
Strategies:
  • Payback analysis
  • Return on investment analysis
  • Present value analysis

Pay-back analysis

  • How much time does it take to recover the money invested?
  • Disadvantage: Disregards costs after the payback period

Return on investment analysis

  • ROI = (Total Benefit - Total Cost) / Total Cost
  • Percentage indicating how profitable the project would be
  • Projects must exceed a minimum ROI
  • Disadvantage: average values as a basis

Present value analysis

  • Money today = money yesterday + interest
  • PV (present value)
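The three strategies can be illustrated with invented figures (all numbers below are assumptions, not data from the text):

```python
cost, yearly_benefit = 100_000, 40_000

# Payback analysis: how long until the investment is recovered?
payback_years = cost / yearly_benefit            # 2.5 years

# Return on investment over an assumed 5-year horizon.
total_benefit = 5 * yearly_benefit
roi = (total_benefit - cost) / cost              # 1.0, i.e. 100 %

# Present value: a benefit received in 3 years is worth less today.
rate = 0.05                                      # assumed interest rate
pv_year3 = yearly_benefit / (1 + rate) ** 3      # about 34,554

assert payback_years == 2.5 and roi == 1.0
```

Note how payback analysis ignores everything after year 2.5, while the present value analysis discounts each year's benefit individually.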

Software metrics

  • Measure different aspects to make them easier to understand
  • Productivity metrics
  • Tries to make predictions easier
  • How was the productivity in previous projects?
  • How can past productivity be extrapolated to current projects?
  • Help to understand the technical process used for production and the product itself
  • Measurements take place to improve the product
  • Necessary for planning, costs -> estimates possible

Reasons for measuring

  • Determine the quality of a product
  • Check employee productivity
  • Investigate the usefulness of new SE tools and practices
  • Create a baseline for assessments

Direct and indirect measurement

Direct measurement

  • Lines Of Code (LOC or KLOC)
  • Execution speed
  • Memory usage
  • Reported defects / time period

Indirect measurement

  • Functionality
  • Quality
  • Complexity
  • Maintainability

Categories of software metrics

  • Productivity metrics (focus on SE process)
  • Quality metrics (how close is the product to customer requirements)
  • Technical metrics (focus on the product itself, e.g. modularity)
  • Size-oriented metrics (direct measured values)
  • Function-oriented metrics (indirect metrics)
  • People-oriented metrics (information about the workers who produce it)

Size-oriented metrics

  • Direct measurements which correspond to development
  • Productivity = KLOC / person-month
  • Quality = errors / KLOC
  • Cost = $ / KLOC
  • Documentation = pages of documentation / KLOC
  • Advantage: easy to measure
Problems:
  • Not accepted as the best way to measure
  • Language dependent
  • Planner needs to estimate LOC well before analysis is complete

Function-oriented metrics

  • Indirect measurements of the software and the process with a focus on program functionality
  • Empirical relationships are based on countable measurements
Example: (one count and 3 weightings per measurement, such as simple, average and complex)
  • Number of inputs
  • Number Of Outputs
  • Number Of Files
  • Number Of External Interfaces
  • Result: Total Count

Function points method

  • FP = count-total * [0.65 + 0.01 * Sum(i=1..14) Fi]
  • The Fi are factors between 0 and 5 that describe the semantics and complexity of the problem
  • 0.65 and 0.01 are empirical constants
  • The 14 factors cover 14 different aspects, e.g.:
  • Q1: Does the system need backup and recovery
  • Q2: Is the performance critical?
  • Q3: Do you want the code to be reusable?
  • Productivity = FP / Person-Month (analogous to size-orientated direct measurement)
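A worked, hypothetical function-point calculation following the formula above (the counts, weights and factor values are invented for illustration):

```python
# Weighted count-total from the four countable measurements, then the
# adjustment with the 14 complexity factors Fi (each between 0 and 5).

counts = {"inputs": 10, "outputs": 7, "files": 4, "interfaces": 2}
weights = {"inputs": 4, "outputs": 5, "files": 10, "interfaces": 7}  # "average"
count_total = sum(counts[k] * weights[k] for k in counts)

factors = [3] * 14  # assume every Fi is rated "average" = 3
fp = count_total * (0.65 + 0.01 * sum(factors))

assert count_total == 129
assert abs(fp - 138.03) < 1e-6   # 129 * (0.65 + 0.42)
```

With all factors at 0 the multiplier bottoms out at 0.65; with all at 5 it tops out at 1.35, so the adjustment can shift the raw count by ±35 %.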

Feature Points Method

  • Extension of the function point method by an additional algorithmic consideration
  • The count-total also includes an estimate of the algorithmic complexity
  • Language-independent and based on data from early development phases
Disadvantage:
  • Subjective estimates
  • no direct physical meaning
  • Data is difficult to collect

Metrics and Languages

Estimates of how many LOC are required on average for one function point, e.g.:
  • Assembler (300)
  • FORTRAN (100)
  • Pascal (90)
  • Code Generators (15)

Arguments in favor of software metrics

Without it, there would be no clue on which to improve. Answers to the following questions are possible:
  • Which user requirements change most frequently?
  • Which modules in the system are most prone to errors?
  • How much test time do you have to plan for each module?
  • How many errors can I expect on average when testing?
  • The most difficult part is collecting data

How can the quality of software be defined?

  • Conformity to previously specified functionality and performance requirements
  • So quality is measured by the requirements
  • Today, standards dictate which aspects must always be met
  • Implicit quality requirements are also important (such as good maintainability)

Software quality factors

  • KLOC (direct measurement)
  • Usability, maintainability ... (indirect measured values)

Measure the quality

  • Any measurement of software quality can only be incomplete
  • The number of words in a verbal description of the functionality can serve as an indicator of complexity (the more words, the more complex)
  • How many branches in the code? -> How high is the test effort?
  • Module calls from other modules -> maintainability
  • Accesses to global data or local variables provide information about the comprehensibility of the code
  • Number of non-standardized features -> portability

Test plan

  • Test plan is required for quality control
  • Contract between customer and developer, and between developer and the quality assurance team

Audit

The quality assurance staff checks whether the tests were carried out sensibly.

Systematic, independent investigation to determine whether the quality-related activities and the associated results correspond to the planned arrangements and whether these arrangements are effectively implemented and suitable to achieve the objectives. [ISO 8402]

Software reviews

  • Reviews are made at various points in the development process
  • Should reveal errors in order to be able to eliminate them as early as possible
  • 50-60% of all errors arise in the draft (reviews repair up to 75% of these errors)

Cost of errors in the software

  • Increase exponentially as development progresses
  • Cost during design = 1, during tests 15 and after release 60-100

Increase in errors in the course of software development

  • Errors from each phase are passed on to the next
  • In this new errors arise again
  • Without reviews, hardly any errors in specification and design were discovered
  • With reviews, the error is recognized and corrected, since it is "tested" in the draft

Quality certificate

  • ISO 9001 is not industry-specific
  • ISO 9000-3 is derived from 9001 for software
  • 20 main features are covered, such as document control, design control, process control, training, contract review, quality system
  • A certificate assures customers that the company works according to standards

Software metrics and software quality

  • Always incomplete
  • Partly subjective assessment
  • In some cases only indirect, derived information about quality
E.G:
  • Number of branches -> testability
  • Access to global variables -> low maintainability of the system
  • Number of local variables -> time-consuming debugging
  • Non-standard features -> portability

McCabe's cyclomatic complexity

  • Hypothesis: Complexity depends on the complexity of the control flow graph (program graph)
  • Experiments show relationships between McCabe's metric and the number of errors in the source code (and the time to correct them)
  • V (G) is an indicator for the maximum module size and should not exceed a certain value, otherwise difficult to test
  • Problem: Metric measured very late in the software project
  • Therefore, metrics have to be found that can already be measured in the design
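As a sketch: for a connected control flow graph with a single entry and exit, V(G) = E - N + 2, or equivalently one plus the number of binary decisions. The graph below is invented for illustration:

```python
# McCabe's cyclomatic complexity from an invented control flow graph.
edges, nodes = 9, 7
v_g = edges - nodes + 2
assert v_g == 4

# Equivalent count from the source text: a module with three if/while
# decisions has V(G) = 3 + 1 = 4 linearly independent paths.
decisions = 3
assert decisions + 1 == v_g
```

A common rule of thumb caps V(G) per module (often at 10); above that, the module is considered too hard to test and should be split.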

Metrics for the design

  • The basic idea is that the external complexity of a module is related to the number of flows (import/export) between the module and its environment
  • Internal complexity = LOC
Local flows
  • Direct => module A passes parameters to B
  • Indirect => module A returns a value to B
Global flows
  • Module A writes a data structure and module B reads from it
  • Fan-in / fan-out (everything that goes in and out, regardless of whether it is local or global)
  • Complexity = length * (fan-in * fan-out)^2

Use of metrics

  • Filter that identifies the most complex modules
  • To study developer methods
  • Investigate the increasing complexity during maintenance
  • Find the parts / modules that make the most mistakes / work -> redesign + reimplementation

Legacy systems - characteristics and problems

  • Developed with the assumption of a short lifetime, but are still in use
  • Legacy systems are complex and replacement would be very expensive
  • Legacy systems cause high maintenance costs
  • Complex software, a lot of support, little documentation
  • When exchanging, there is a risk that business knowledge will be lost
  • Wrong decisions can have serious consequences

Problems with the outdated software architecture

  • Systems consist of many different individual systems that share data
  • Systems partly use files and no DBMS
  • Data is often redundant

Reengineering - when and why?

  • Software evolution lies between the two extreme cases of complete system replacement and ongoing maintenance
  • Reengineering is usually cheaper than developing a new system
  • The starting point for RE is the existing system
  • Lower risk (incremental) and costs

Methods of reengineering

Source code translation
  • Program translation into other programming language
  • Update target language to a new version (e.g. if old compilers are no longer compatible)
Program restructuring
  • Programs with "spaghetti" structure are difficult to study
  • Many automatic restructuring tools available
System restructuring
  • Program restructuring does not improve the system architecture as it is only viewed in isolation
  • System has to be restructured itself
Incremental evolution with gradual system replacement is less risky than a Big Bang replacement

Maintenance of software - causes, methods

  • To keep a system in good condition
  • Making changes after the system has been sold

The three types of maintenance

  1. Corrective (fixing bugs)
  2. Adaptive (new environment in HW / SW)
  3. Perfective (new requirements)
  • Predictive maintenance (structure it simply and well beforehand so that maintenance becomes cheap)
  • Costs for changes during maintenance are extremely high
  • Data reengineering (data transfer, data comparison, data redefinition)
  • Data centralization (RDBMS ...)

Programming concepts

Generic programming, templates, patterns and frameworks ...

Generic programming and templates

  • Generic units are parameterized templates
  • They have to be instantiated in order to inform the compiler of the data types used
  • The data types can also be abstract
  • Example: a sorting algorithm in which the data type is a formal parameter

STL Library - the idea

  • Combine common components in a library
  • The STL spans a multi-dimensional space (algorithms, containers/data structures, data types) from which the required procedure with the required data type and data structure (array, list, ...) can be selected
  • The STL reduces the necessary LOC: without templates, i * j * k different code versions would be needed
  • With template functions, only j * k pieces of code have to be programmed
  • With template classes for containers, even only j + k

Patterns

  • Much more general than a template
  • Not a piece of software, but a concept of proven solutions
  • Each pattern consists of a name, a problem, a solution, and the consequences
Types of patterns
  • Conceptual
  • Architectural
  • Design pattern
  • Programming pattern
  • Documentation sample

Frameworks

  • The idea is to expand the virtual machine
  • Framework is reusable mini architecture which abstracts the generic structure and behavior of software families
  • Context specifies structure
  • Divided into classes
  • How classes and objects work together
  • Framework (Executable) is an implementation of design patterns
  • Framework and reusability
  • Dynamic binding, e.g. dynamic configuration of features during runtime
Reusability increases, but there are also disadvantages:
  • Difficult to see through and to understand
  • Increased complexity
  • All features of parent classes are inherited, even if not required -> thus increasing size
  • Performance reduction through dynamic binding

Metaprogramming (generative programming)

  • Applications where the context is known at compile time, thus static binding instead of dynamic binding is possible
  • Metaprogramming uses template specialization (as an if-tool) and template recursion (as a loop-tool) to write C++ programs that are evaluated by the C++ compiler at compile time
  • Compile-time computation and code generation

Adaptive programming

  • OO technology encapsulates data and functions in classes
  • Implementation is protected against changes to DS, since access is only possible via interfaces
  • Some applications suffer from periodic changes in class structures and class hierarchies
  • Adaptive programming loosens the coupling between the application and the concrete class hierarchy, so that changes to the class structure require fewer changes to the code

Aspect-oriented programming

With OO, functionality is encapsulated in classes

Problem:

  • There are services that cannot be encapsulated in just one class (e.g. tracing)
  • Such a service is spread over many objects
  • How can such a service still be implemented in a centralized manner?
  • Aspects are functionalities, models of such services
  • Aspects are encapsulated in their own separate units
  • During preprocessing, the aspects are distributed in the program by a WEAVER

Petr Kroha: Software technology, lecture notes
L. Rosenhainer: Lecture notes