Database System vs. Traditional File Processing
1. Self-Describing Nature of the Database System
Database System
- A DBMS stores both:
  - the actual data, and
  - a complete description of the data (metadata) in a catalog.
- Metadata includes the structure of the files, the types of data items, constraints, etc.
- The DBMS software reads these definitions from the catalog, so the same DBMS can work with any database (see the catalog sketch below).
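A minimal sketch of this idea using Python's built-in sqlite3 module as the DBMS (the STUDENT table and its columns are illustrative, not from any particular system): the table definition is submitted once, and its description can afterwards be read back from the system catalog instead of being hard-coded in any application program.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # throwaway in-memory database for the sketch
con.execute("""
    CREATE TABLE STUDENT (
        Name        TEXT    NOT NULL,
        Student_no  INTEGER PRIMARY KEY,
        Class       INTEGER,
        Major       TEXT
    )
""")

# The DBMS keeps the table's description (metadata) in its own catalog.
# Here the program reads that description back instead of hard-coding it.
for cid, name, col_type, notnull, default, pk in con.execute(
        "PRAGMA table_info(STUDENT)"):
    print(name, col_type)
# -> Name TEXT, Student_no INTEGER, Class INTEGER, Major TEXT
```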
Traditional File Processing
- The data description is embedded inside the application programs.
- Each program contains its own file structure definitions (e.g., C++ structs), as in the sketch below.
- Programs are tied to a single specific file format and cannot work with other data files.
- There is no central catalog or metadata repository.
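For contrast, a sketch of the file-processing approach in Python, with the struct module standing in for the C++ structs mentioned above (field names and widths are made up): the record layout exists only inside the program's source code, and any other program that reads this file must repeat the same definition.

```python
import struct

# Record layout hard-coded in the program: 30-byte name, 4-byte student
# number, 4-byte class code. No catalog describes this file; only the
# source code "knows" the format.
STUDENT_RECORD = struct.Struct("<30sii")     # 38 bytes per record

def write_student(f, name, student_no, class_code):
    # struct pads/truncates the name to exactly 30 bytes
    f.write(STUDENT_RECORD.pack(name.encode(), student_no, class_code))

def read_students(f):
    while chunk := f.read(STUDENT_RECORD.size):
        name, student_no, class_code = STUDENT_RECORD.unpack(chunk)
        yield name.rstrip(b"\x00").decode(), student_no, class_code

with open("students.dat", "wb") as f:
    write_student(f, "Smith", 17, 1)

with open("students.dat", "rb") as f:
    print(list(read_students(f)))            # [('Smith', 17, 1)]
```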
2. Insulation Between Programs and Data (Program-Data Independence)
Database System
- The structure of the data is kept separate from the application programs.
- If the structure changes (e.g., adding Birth_date to STUDENT), only the catalog changes; see the sketch below.
- Programs continue to work without modification.
- This independence is achieved through the data abstraction provided by the data model.
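A sketch of the Birth_date example with sqlite3 (table and column names assumed for illustration): after the structure changes, the existing query is untouched and still runs, because it never depended on the physical record layout.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE STUDENT (Name TEXT, Student_no INTEGER, Major TEXT)")
con.execute("INSERT INTO STUDENT VALUES ('Smith', 17, 'CS')")

def list_names(con):
    # An "application program" written before the structure change.
    return [row[0] for row in con.execute("SELECT Name FROM STUDENT")]

print(list_names(con))                       # ['Smith']

# Structure change: only the catalog entry for STUDENT is updated.
con.execute("ALTER TABLE STUDENT ADD COLUMN Birth_date TEXT")

# The old program still works, unmodified.
print(list_names(con))                       # ['Smith']
```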
Traditional File Processing
- The file structure is hard-coded in each program.
- Any change in the file format requires changing all programs that access that file (illustrated below).
- There is no mechanism for program-data independence.
- Details such as byte positions and record formats are visible to the programs.
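By contrast, a sketch of what the same change costs in the file-processing approach, continuing the made-up fixed-length layout from section 1: adding Birth_date shifts the byte positions and record size, so every program that still assumes the old 38-byte layout fails on the new file and must be edited and redeployed.

```python
import struct

OLD_RECORD = struct.Struct("<30sii")         # Name, Student_no, Class: 38 bytes
NEW_RECORD = struct.Struct("<30sii10s")      # ... plus Birth_date: 48 bytes

# A record written by a program that was updated for the new format:
new_bytes = NEW_RECORD.pack(b"Smith", 17, 1, b"2000-01-31")

# A program that was never updated still assumes the old layout:
try:
    OLD_RECORD.unpack(new_bytes)             # wrong record size -> error
except struct.error as e:
    print("old program breaks:", e)
```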
3. Support for Multiple Views
Database System
- The DBMS can define multiple views for different users.
- A view:
  - can be a subset of the data, or
  - can be virtual data derived from the stored files.
- Users need not know whether the data they see is stored or computed (see the sketch below).
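A sketch of both kinds of view with sqlite3 (table, column, and view names are illustrative): one view is a subset of the stored data, the other returns derived "virtual" data; a user querying either one cannot tell whether the rows are stored or computed.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE STUDENT
               (Name TEXT, Major TEXT, Birth_date TEXT)""")
con.execute("INSERT INTO STUDENT VALUES ('Smith', 'CS', '2000-01-31')")

# View 1: a subset of the data (only CS students, only two columns).
con.execute("""CREATE VIEW CS_STUDENTS AS
               SELECT Name, Birth_date FROM STUDENT WHERE Major = 'CS'""")

# View 2: virtual data derived from stored data (Age is computed, not stored).
con.execute("""CREATE VIEW STUDENT_AGE AS
               SELECT Name,
                      CAST((julianday('now') - julianday(Birth_date)) / 365.25
                           AS INTEGER) AS Age
               FROM STUDENT""")

print(list(con.execute("SELECT * FROM CS_STUDENTS")))   # subset of stored rows
print(list(con.execute("SELECT * FROM STUDENT_AGE")))   # computed on the fly
```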
Traditional File Processing
- Each application defines its own files and programs, so views are limited.
- There is no built-in way to provide different external views of the same data.
- If two users need different views, they usually maintain separate files.
4. Data Sharing and Multiuser Transaction Processing
Database System
- Multiple users can access and update the database simultaneously.
- This requires:
  - concurrency control,
  - transaction management, and
  - enforcement of the ACID properties (especially isolation and atomicity).
- Supports online transaction processing (OLTP) environments.
- Ensures correctness, e.g., preventing two agents from booking the same seat (see the sketch below).
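A sketch of the seat-booking example using sqlite3 transactions (the SEAT schema is invented for illustration): the conditional UPDATE inside a transaction guarantees that only one of two competing bookings can succeed, and it is the DBMS, not the application code, that enforces this.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SEAT (Seat_no TEXT PRIMARY KEY, Booked_by TEXT)")
con.execute("INSERT INTO SEAT VALUES ('12A', NULL)")
con.commit()

def book(con, seat_no, agent):
    """Book a seat atomically; returns True only if it was still free."""
    with con:  # transaction: commits on success, rolls back on error
        cur = con.execute(
            "UPDATE SEAT SET Booked_by = ? "
            "WHERE Seat_no = ? AND Booked_by IS NULL",
            (agent, seat_no))
        return cur.rowcount == 1

print(book(con, "12A", "agent-1"))   # True  -> booking succeeds
print(book(con, "12A", "agent-2"))   # False -> seat already taken
```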
Traditional File Processing
- Data is usually isolated in separate files for each application.
- Multiuser access is difficult and often requires custom code.
- There is no automatic concurrency control or transaction mechanism.
- There is a risk of inconsistent or incorrect updates when many users access the files simultaneously (see the lost-update sketch below).
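A deterministic sketch of that risk (the seats file and its layout are made up): two programs both read the same seat record, both see it as free, and both write a booking; whichever writes last silently overwrites the other, and nothing in the file system detects the conflict.

```python
import json, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "seats.json")
with open(path, "w") as f:
    json.dump({"12A": None}, f)              # seat 12A starts out free

def read_seats():
    with open(path) as f:
        return json.load(f)

def write_seats(seats):
    with open(path, "w") as f:
        json.dump(seats, f)

# Two agents' programs interleave: both read before either writes.
seats_agent1 = read_seats()                  # sees 12A free
seats_agent2 = read_seats()                  # also sees 12A free

seats_agent1["12A"] = "agent-1"
write_seats(seats_agent1)                    # agent-1 books the seat

seats_agent2["12A"] = "agent-2"
write_seats(seats_agent2)                    # agent-2's write silently wins

print(read_seats())                          # {'12A': 'agent-2'} -- lost update
```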
Summary Table
| Feature | Database System (DBMS) | Traditional File Processing |
|---|---|---|
| Metadata storage | Stored in catalog; self-describing | Inside program code; no catalog |
| Program-data independence | Yes; programs need not change if structure changes | No; programs must change if file structure changes |
| Data abstraction | High; internal details hidden | Low; programs deal with record formats directly |
| Multiple views | Supported | Not supported or requires separate files/programs |
| Data sharing | Centralized, shared database | Separate files for each application |
| Concurrency control | Built-in (transactions, isolation) | Must be manually programmed; often absent |
| Redundancy | Reduced | High (duplicate data across apps) |
| Integration | High; one unified database | Low; separate independent files |