Q: When did you get involved in technology?
Craig: I had the good fortune to enter the technology industry when it still primarily existed in garages. In the mid-80s, I founded a company catering to the early computing needs of mid-sized businesses. No sooner had I installed my first system than something went horribly wrong: due to a loose power cord, my customer experienced data loss. This was certainly not a disaster as experienced today, but absolutely traumatizing to my customer, who had just spent the day entering financial information in VisiCalc.
Q: What were the early days of infrastructure like?
Craig: The early ’90s were computer networking’s halcyon days. Working for one of the largest integrators, I collaborated with some of the brightest minds in architecting redundant networks. What was most fascinating about my time there was the wide variety of applications these networks were carrying. Regardless of industry or application there was a common theme: any application valuable enough to consume precious network resources required redundancy. In fact, the two usually went hand in hand. Strangely enough, in these same projects the processing and storage infrastructure often was not redundant, which began my appreciation for the differences between network and systems architects. The network group knows portions of their infrastructure will fail; the systems people often believe theirs won’t.
Q: Tell us about your first disaster
Craig: In the mid-90s, I joined Storage Technology Corporation (“StorageTek”), a vendor of data storage and protection products, and experienced my first personal disaster. In a period of three months my laptop was stolen twice. After the first theft, StorageTek’s IT group stated that despite the company selling data backup solutions, my laptop’s data was not protected.
Q: How bad was it?
Craig: I had lost everything, all that was needed to perform my work. But it started a thought process: being so tethered to my laptop meant losing it was similar to losing a job; the only choice was starting over. It was a real eye-opener: were there ways to start protecting the data? I vowed never to undergo the same experience again. After some research I found early-stage technology that allowed StorageTek to build an on-line backup service catering to mobile users and small businesses.
Q: What was going on in computing then?
Craig: Computing was changing; hackers were becoming a problem, as was “fat fingers,” or human error. There just weren’t solutions to these problems. Later, after joining another company, we built solutions to address the high-availability needs of customers facing these issues. We succeeded because companies sought different approaches: leveraging co-located development and testing resources at a second site, with the ability to repurpose those computing assets in a disaster. Focusing primarily on the disaster recovery space, I saw how underserved it was. The juggernauts were SunGard, IBM and, to a lesser degree, Hewlett-Packard. They all catered to large enterprises’ traditional disaster recovery needs, but none appeared to be demonstrating creativity.
Q: What was the turning point in disaster recovery?
Craig: In 2005, SunGard acquired Inflow, where I was working, and for the first time I could merge my high-availability infrastructure design and managed service delivery experience and offer customers an availability continuum, as opposed to one-size-fits-all. In the next four years at SunGard, I worked on a succession of interesting customer issues. It was a great opportunity to learn and increase my awareness.
I spoke at numerous events, met many interesting people and listened to their challenges, and did a tremendous amount of reading. Clearly, disaster recovery, security, high availability and business continuity were beginning to merge.
Q: You are also an expert in data centre relocation. Tell us about that.
Craig: After more than 10 years in the hosting industry, I have developed a very deep expertise in application infrastructure relocation, having been directly involved in (or managing) at least 500 such projects.
I’ve realized that application infrastructure relocation and disaster recovery planning are very similar. Both are filled with trepidation and a never-ending naysayers’ parade. In fact, the only difference is the planner’s vision of the outcome. In relocation’s case, those charged with organizing the effort know it will happen and who will be involved. The people preparing for a disaster fool themselves into believing there is a chance of it never happening, hoping that’s true and intending to be away when it does.
Ironically, there can be so much effort expended on the first situation and so little expended on the second. The outcome needs to be the same – success. Regardless of the technology or the nature of the business, through trial and error I have learned that the plan must be executed with as little interruption as possible.
Q: You discovered something important when you relocated data centres?
Craig: Yes. Often I find myself speaking with application developers who can seldom, if ever, produce good documentation, let alone describe the inter-dependencies between applications. A quick scan of tools in the market produced an appreciation of the problem but no comprehensive solution; I was left to create rudimentary tools myself. More often than not these “tools” were really just the outcomes of extensive developer interviews, asking how application “A” relates to application “B.” However, when developers created undocumented dependencies there was no easy way to address the risks, since they weren’t even identified.
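The bookkeeping behind those developer interviews can be surprisingly simple. A minimal sketch, with hypothetical application names standing in for interview answers (this is illustrative, not any tool Craig actually built): record each “A depends on B” answer as an edge, then walk the graph to surface the indirect dependencies that interviews alone tend to miss.

```python
from collections import defaultdict

def transitive_deps(edges, app):
    """Return every application that `app` directly or indirectly depends on.

    `edges` is a list of (dependent, dependency) pairs gathered from
    developer interviews.
    """
    graph = defaultdict(set)
    for src, dst in edges:
        graph[src].add(dst)

    seen = set()
    stack = [app]
    while stack:
        node = stack.pop()
        for dep in graph[node]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical interview results: billing depends on auth and reports,
# and auth in turn depends on a directory service.
interview_edges = [
    ("billing", "auth"),
    ("auth", "directory"),
    ("billing", "reports"),
]
print(sorted(transitive_deps(interview_edges, "billing")))
# → ['auth', 'directory', 'reports']
```

Even this crude walk shows why undocumented dependencies are dangerous: “directory” never comes up when you interview the billing team, yet moving billing without it breaks the application.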
Q: The industry began to change then?
Craig: When I joined Hosting.com, the world of cloud computing was just beginning to develop. Initially, I was impressed with virtualization’s operational benefits, but the cloud is different. Cloud’s principal premise is a true compute utility, where resources are consumed as needed. What better use case can there be for cloud computing than high availability and business continuity?
Q: You changed your idea of applications then?
Craig: An application can be configured to replicate data in real time to the cloud, where a minimal number of resources are devoted to the effort. At the moment of a disaster (real or imagined), the workload normally running at a customer location can be transitioned to the cloud. It will consume the appropriate amount of resources until the workload can be migrated back to the original location.
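The replicate, fail over, fail back pattern Craig describes can be sketched as a small state machine. The class and state names below are illustrative assumptions, not any vendor’s API: the point is that failover is only safe when replication has kept the cloud copy current.

```python
class AvailabilityController:
    """Minimal sketch of the replicate-then-fail-over pattern.

    States are illustrative: the workload is "active" at exactly one
    site, while continuous replication keeps a low-footprint copy in
    the cloud ready to take over.
    """

    def __init__(self):
        self.active_site = "customer"      # workload normally runs on-premises
        self.cloud_replica_current = False  # no usable replica yet

    def replicate(self):
        # Real-time replication keeps the cloud copy current while
        # consuming only minimal cloud resources.
        self.cloud_replica_current = True

    def declare_disaster(self):
        # Transition the workload to the cloud, but only if the
        # replica is actually usable.
        if not self.cloud_replica_current:
            raise RuntimeError("cloud replica is stale; cannot fail over")
        self.active_site = "cloud"

    def fail_back(self):
        # Once the original location is restored, migrate the
        # workload back and resume the low-footprint posture.
        self.active_site = "customer"

controller = AvailabilityController()
controller.replicate()
controller.declare_disaster()   # disaster, real or imagined
print(controller.active_site)   # → cloud
controller.fail_back()
print(controller.active_site)   # → customer
```

The guard in `declare_disaster` captures the economic argument: the cloud side stays cheap until the moment it must absorb the full workload, which is what makes this a natural fit for a compute utility.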
Q: That started off your journey that led to ThinkOn?
Craig: Now I can combine the best of technological economics and operational efficiency to truly change the way companies manage their application availability strategies. It’s a fundamental re-think of how companies can cost-effectively deliver an always-on application experience for their bet-the-business applications.