MANAGING CHANGE
Donald J. Haderle
IBM Fellow, Vice President
IBM
555 Bailey Avenue
San Jose, CA 95141
haderle@us.ibm.com
Performance (scalability) of transaction systems was THE concern for customers in the past. It still concerns them, especially with newer technologies, but their major concern has shifted to availability of service (no outage or degradation of service) and to their ability to respond to new business requirements. One of the major demands on our technology base is therefore the ability to absorb changes rapidly with no downtime; viz., any conceivable schema change, physical or logical; code fixes to the base software (transaction managers, database managers, application servers, etc.); hardware changes (memory, processors, storage); applications; and myriad other artifacts.
SAP R/3, for example, ships fixes that modify schemas (e.g., changing the length or data type of a column, deleting a column, etc.). These changes are not exotic, yet they have a terrible impact on the underlying systems. Most DBMSs invalidate views that depend on a modified table, forcing the views to be redefined after the base table is altered and all authorizations on the views to be reestablished. When the base table definition is modified, most DBMSs either force the data to be reloaded to get it into the new format, or leave the data in the original format and convert it on the fly as it is accessed or updated. If a record is updated, it is generally moved to a different physical page, destroying the clustering characteristics of the data and harming performance, which in turn forces a reorganization. If the changed column is indexed, the index may need reconstruction. All of this is downtime.
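The on-the-fly conversion strategy mentioned above can be sketched as follows. This is a minimal Python illustration, not any particular DBMS's implementation; the per-record version tag `_v` and the int-to-float column widening are assumptions made purely for illustration:

```python
# Minimal sketch of lazy, on-the-fly schema migration: each stored
# record carries the schema version it was written under, and readers
# upgrade stale records as they touch them, avoiding an offline reload
# of the whole table.

CURRENT_VERSION = 2  # hypothetical: "qty" widened from int to float at v2

def upgrade(record):
    """Convert a record to the current schema version on access."""
    if record["_v"] == 1:
        record["qty"] = float(record["qty"])  # int -> float widening
        record["_v"] = CURRENT_VERSION
    return record

table = [
    {"_v": 1, "id": 1, "qty": 3},    # written under the old schema
    {"_v": 2, "id": 2, "qty": 4.5},  # already in the current format
]

def read(table, rid):
    """Fetch a record by id, upgrading its format in place if stale."""
    for rec in table:
        if rec["id"] == rid:
            return upgrade(rec)
    return None

row = read(table, 1)  # old record is upgraded in place on first access
```

Note the cost this sketch hides: in a real system the upgraded record may no longer fit on its original page, which is exactly the relocation-and-declustering problem described above.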
These changes are trivial in comparison to substantive schema changes. I assert that this is the BIG problem to work on in the context of our future transaction systems, and I would be glad to elaborate, given that y'all invite me.