One of those seamless migrations
So when the client said, a couple of years ago, that it had to work on Oracle but that they would eventually migrate everything in the whole enterprise to Microsoft SQL Server (sigh), we went with Doing The Simplest Thing That Could Possibly Work (DTSTTCPW). In retrospect, that was kind of brilliant.
Working alongside my geek pal Beth, I wrote the Web Service to work only with Oracle, figuring the Microsoft issue was for later. You literally can’t save time as such–time goes by whether you’re doing anything with it or not!–so it didn’t make sense to write both interfaces at once. Doing both up front could hypothetically have “saved time,” but only in the sense of doing unnecessary work in the present.
Let’s not get crazy here
There’s a difference between “the simplest thing” and “the simplest thing that could possibly work,” though, depending on what you mean by “work.” For this project, making it “work” definitely meant not painting ourselves into the proverbial corner. Obviously it made sense to separate the database-specific stuff from most of the business logic. And we implemented a fair amount of code in stored procedures, which would definitely have to be rewritten after the engine migration. That’s okay.
Then a funny thing happened with implementation. The DBA group informed Beth that they’d have to review all of our stored procedures and charge the hourly cost back to our project. Which didn’t have a budget for that. She went back to the client and said we’d have to rewrite a lot of our code to do without stored procedures or they’d have to figure out the chargeback issue with the DBA staff.
The client chose the former, so we took all those lovingly hand-crafted Oracle stored procedures and converted them to ugly C# logic sprinkled with OracleCommand objects.
Right there, we more than paid for the decision not to support both Oracle and Microsoft at the outset. We would have written stored procedures for Microsoft SQL Server, or at least the client-side support for them, for no reason at all. That must have saved eighty hours or so.
A couple of weeks ago, Beth let me know the client was finally ready for the Microsoft migration. We decided that we wanted both Oracle and Microsoft support in the runtime, so the switch between database engines could be done in configuration rather than at build time. Which meant we couldn’t just drop in MS equivalents of all our Oracle client calls. We had to set up both.
The technique we came up with was pretty clean, probably about what you’re thinking of.
- Hit every Web Method, wrapping every command parameter in an overloaded method called AddInputParameter() or one called AddInputParameterWithValue(), or in a few cases AddOutputParameter(). These methods instantiated an OracleParameter, initialized it where required, and added it to the indicated IDbCommand’s parameter list.
- Every place that used an OracleConnection was actually fine with an IDbConnection. Search and replace.
- Ditto OracleCommand and IDbCommand.
- (Later on) modified the getConnection() factory method we’d already made to return an OracleConnection object so that it returned either an OracleConnection or a SqlConnection depending on a compiler setting. But the declared return type was IDbConnection for compatibility. That meant we could throw connection objects around without caring about where they came from. (The magic of polymorphism.)
- Ditto the getCommand() factory method.
- Had to rewrite one particularly gnarly method that copied a record in a table, more or less in place except for the primary key.
- Safely ignored all the stored procedure client calls. We knew they weren’t being used anyway.
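The parameter-wrapping helpers from the first step might look roughly like this. The method names come from the description above; the bodies are my guess at one reasonable implementation, written against IDbCommand.CreateParameter() to stay provider-neutral, where the original code new’d up OracleParameter objects directly:

```csharp
using System.Data;

static class DbHelpers
{
    // Adds a typed input parameter with no value bound yet.
    public static IDbDataParameter AddInputParameter(
        IDbCommand command, string name, DbType type)
    {
        // The original instantiated an OracleParameter here; CreateParameter()
        // asks the command for its own provider's parameter type instead.
        IDbDataParameter p = command.CreateParameter();
        p.ParameterName = name;
        p.DbType = type;
        p.Direction = ParameterDirection.Input;
        command.Parameters.Add(p);
        return p;
    }

    // Adds an input parameter and binds a value in one call.
    public static IDbDataParameter AddInputParameterWithValue(
        IDbCommand command, string name, DbType type, object value)
    {
        IDbDataParameter p = AddInputParameter(command, name, type);
        p.Value = value;
        return p;
    }

    // Adds an output parameter to be read back after the command executes.
    public static IDbDataParameter AddOutputParameter(
        IDbCommand command, string name, DbType type)
    {
        IDbDataParameter p = command.CreateParameter();
        p.ParameterName = name;
        p.DbType = type;
        p.Direction = ParameterDirection.Output;
        command.Parameters.Add(p);
        return p;
    }
}
```

Because everything goes through the IDbCommand interface, the same helpers work unchanged whether the command underneath is an OracleCommand or a SqlCommand.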
Now this got us to the point where the Web Service class had methods that touched Oracle and methods that touched those methods. Period.
It was easy to drag the Oracle methods into a class called FooDbOracle. It was also fairly easy to convert all existing calls to those methods to go through the FooDbOracle object. Thus:

IDbCommand com = getCommand();

became

IDbCommand com = db.getCommand();

The db pseudo-variable was itself just a property of the main class, whose getter did nothing but:

return new FooDbOracle();
We ran the unit tests–the few that we’d actually bothered to write ahead of time anyway–and found success.
Given that the FooDbOracle class worked so well, it was easy to abstract an interface called FooDb. And then I re-implemented that interface as FooDbMsSql, so we then had two engine-specific classes that implemented the same FooDb interface.
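A minimal sketch of that extraction, under the assumption that FooDb exposes factory-style methods like the getCommand() described earlier (the exact members of the real interface aren’t given, and the Oracle body is stubbed so the sketch doesn’t need an Oracle client library):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

// Engine-neutral surface extracted from the working FooDbOracle class.
interface FooDb
{
    IDbCommand getCommand();
}

class FooDbMsSql : FooDb
{
    // Callers see only IDbCommand, never the concrete SqlCommand.
    public IDbCommand getCommand() { return new SqlCommand(); }
}

class FooDbOracle : FooDb
{
    public IDbCommand getCommand()
    {
        // In the real code this returns an OracleCommand; stubbed here
        // so the sketch compiles without an Oracle provider installed.
        throw new NotImplementedException("requires the Oracle provider");
    }
}
```

The calling code holds a FooDb reference and never mentions either concrete class, which is what makes the engine swappable.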
The next thing to do was to figure out at runtime which implementation of the FooDb interface to use. Since we set things up so the calling code always went through that localized db property, it was really easy to modify its getter to act as something like a factory method. It produces a FooDbOracle or a FooDbMsSql depending on a configuration setting (which is not interesting here).
switch (which_engine_config_file_says_to_use) // <== obvious pseudocode
{
    case "Oracle":
        return new FooDbOracle();
    case "MsSql":
        return new FooDbMsSql();
    default:
        throw new NotImplementedException("Only Microsoft SQL Server and Oracle are supported!");
}
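Wired into the service class, the whole arrangement ends up looking something like the sketch below. The engine-specific classes are reduced to empty stubs and the configuration read is faked as a plain field, since that plumbing is the uninteresting part:

```csharp
using System;

// Empty stand-ins for the real engine-specific classes.
interface FooDb { }
class FooDbOracle : FooDb { }
class FooDbMsSql : FooDb { }

class FooService
{
    // The real service reads this from its configuration file.
    public string Engine = "MsSql";

    // The db property doubles as a factory: callers never know (or care)
    // which concrete FooDb implementation they are handed.
    public FooDb db
    {
        get
        {
            switch (Engine)
            {
                case "Oracle":
                    return new FooDbOracle();
                case "MsSql":
                    return new FooDbMsSql();
                default:
                    throw new NotImplementedException(
                        "Only Microsoft SQL Server and Oracle are supported!");
            }
        }
    }
}
```

Switching engines is now a one-line configuration change, with no rebuild.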
We didn’t really have enough unit tests to make this a safe upgrade. I wrote much of the original code before being comfortable enough with NUnit to rely on it, and in the intervening couple of years Beth just didn’t feel like maintaining the unit tests to keep up with all her refactoring. So there’s an awkward homemade test suite that doesn’t cover very much.
Also, this code is clearly not optimized for performance. We’re creating that db object over and over again when it should obviously be cached. And there should be connection pooling too. But the service wasn’t designed to support persistence, and we’d probably have to pay some attention to concurrency if we’re going to pool connections, and it honestly just wasn’t worth the time to think about as this application doesn’t handle a very high volume of interactions.
Again, DTSTTCPW dictates we go with the techniques that work pretty well in the simple cases that we’re actually facing in production. We can worry about the load issues later if they ever arise, but they probably won’t. (I’m pretty sure that about 95% of anticipated performance issues are imaginary.)
Is this perfect?
Definitely not. It could be faster. It could be easier to maintain and more flexible than it is. But it got done! When we tally up the time for billing purposes it might end up around, oh, like fifty hours total. Best of all, we can roll this out in every shop that is ready, and flip the configuration switch when the DBAs give the all-clear.
May 29, 2010 @ 9:04 am
A contractor can slowly nickel-and-dime a client into spending way more money on lots of extra little things they don’t need. Sometimes we forget that the opposite is also true: we can save them nickels on things they don’t need in order to deliver a more focused product under budget.