Wednesday, July 30, 2008

The Goldman Sachs SaaS scorecard

For the past couple of weeks I have been poring over reams and reams of investment analysis regarding 'cloud computing'. It has surprised me that cloud computing as a concept is being misused as a label that lumps disjointed offerings together. For example, viewing Google App Engine and Amazon EC2 as conceptually the same is a fantastically bad characterization, and one that is ripe for bad decision making by IT managers, software vendors, and investors.

In my mind, the debate over what cloud computing is or is not is somewhat misguided, and only useful for the software professionals who are leveraging the different offerings. Those folks should be sophisticated enough to recognize the different underlying technologies. From an end user perspective, what you can do with the end result of cloud computing is much more interesting, and in that regard the offerings are much more easily understood. For example, using Google PicasaWeb or SmugMug to store your family's pictures on sharable, replicated, backed-up storage is an end user value you can measure.

In all this research I did come across a nice and simple scorecard that Goldman Sachs uses to educate its clients about one form of cloud computing: SaaS. Goldman Sachs uses the following characteristics to judge the value proposition of a SaaS vendor or on-premise ISV.


Does the application provide value as a stand-alone workflow or does it require extensive integration with other applications?

Integration is the biggest hurdle a SaaS provider encounters. When applications, and the workflows they automate, become more standardized with accepted APIs, this will become less of a hurdle, but right now stand-alone value is the litmus test for success.

Does the application represent an industry best-known-method or does it require extensive customization?

Because SaaS business models can only create value if they aggregate multiple users on the same software and/or hardware instance, customization dilutes profitability.
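
As a sketch of the alternative that preserves the multi-tenant economics, consider expressing per-tenant variation as configuration data over one shared code path instead of custom code per customer. Everything below (tenant names, settings, functions) is hypothetical:

    # Hypothetical sketch: per-tenant variation as data, not code forks.
    DEFAULTS = {"currency": "USD", "net_terms_days": 30}

    # One shared software instance; each tenant only overrides settings.
    TENANT_SETTINGS = {
        "acme":   {"currency": "EUR", "net_terms_days": 45},
        "globex": {"net_terms_days": 60},
    }

    def settings_for(tenant_id):
        """Merge tenant overrides onto shared defaults: one code path for all."""
        return {**DEFAULTS, **TENANT_SETTINGS.get(tenant_id, {})}

    def render_invoice(tenant_id, amount):
        s = settings_for(tenant_id)
        return "[%s] %.2f %s, due in %d days" % (
            tenant_id, amount, s["currency"], s["net_terms_days"])

    print(render_invoice("acme", 100))     # EUR, 45-day terms
    print(render_invoice("initech", 100))  # falls back to shared defaults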

Is the application used by a distributed workforce and non-badge employees?

This is clearly the driving force behind SaaS and other forms of cloud computing. In my mind, this is rooted in a pattern that has been driving IT innovation for the past decade. Consumers have become accustomed to mobility of information in their personal lives. Universal access to email or travel itineraries is so natural that it is aggravating when corporate data systems don't provide the same sophistication. It is hard to have confidence in your IT department if it pushes applications on you that look and work horribly compared to the applications you use in your personal life.

Does the data managed by the application cross a firewall?

This is the security aspect of SaaS applicability. If much of the data comes from outside the firewall as aggregated information to help the business unit, then this is a simple decision. Many B2B workflows have this attribute. If the data being manipulated are the crown jewels of the company, then attributes such as security, SLAs, accountability, and rules and regulations become big hurdles for adoption.

Does the application benefit from customer aggregation?

This is the benchmarking opportunity of SaaS. If you host thousands of similar businesses, then you have the raw data to compare these companies against one another, and possibly against industry-wide metrics.
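
A toy illustration, with a made-up schema and numbers, of how a provider hosting many similar tenants could compute peer benchmarks from data it already holds:

    # Illustrative only: table and column names are invented.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (tenant_id TEXT, amount REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)", [
        ("acme", 120.0), ("acme", 80.0),
        ("globex", 300.0), ("globex", 260.0),
    ])

    # Each tenant's average order value compared against the peer group.
    peer_avg = db.execute("SELECT AVG(amount) FROM orders").fetchone()[0]
    rows = db.execute(
        "SELECT tenant_id, AVG(amount) FROM orders GROUP BY tenant_id")
    for tenant, avg in rows:
        print("%s: avg order %.2f vs peer average %.2f" % (tenant, avg, peer_avg))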

Does the application deployment show dramatically lower upfront costs than an on-premise solution?

Low upfront costs can defuse deployment resistance. This is a very important attribute for a SaaS offering that is targeting the small and medium-sized business (SMB) segment.
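
A back-of-the-envelope calculation, with entirely made-up numbers, shows why this resonates with an SMB buyer: the subscription can run for years before its cumulative cost catches up with the on-premise upfront outlay.

    # All figures are assumptions for illustration, not market data.
    onprem_upfront = 50000   # license + hardware + installation
    onprem_monthly = 500     # maintenance contract
    saas_monthly   = 1500    # subscription, no upfront cost

    for month in range(1, 121):
        onprem = onprem_upfront + onprem_monthly * month
        saas = saas_monthly * month
        if saas >= onprem:
            print("cumulative SaaS cost catches up in month %d" % month)
            break
    else:
        print("SaaS stays cheaper over the whole 10-year horizon")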

Does the application require training to make users productive?

If it does, that is bad news: internet applications need to do their task extremely well, particularly if access to the application is through a seriously constrained interface like a smartphone or MID.

Can the application be adopted in isolation?

This is similar to the stand-alone requirement, but more focused on the procurement question for the SaaS solution. If the solution can be adopted by a department instead of having to be screened for applicability across the whole enterprise, it clearly will be easier to get to revenue.

Is the application compute-intensive/interactive?

SaaS applications don't do well with large compute requirements, mainly due to the multi-tenancy that is the basis of the value generation. If one customer makes a request that pegs the server on which the application runs, all the other customers will suffer.
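
One common mitigation is to cap what any single tenant can consume on the shared instance. The minimal sketch below (the limit and names are illustrative, not from any particular product) rejects requests from a tenant that already holds its fair share:

    # Hypothetical fair-share guard for a multi-tenant server.
    import threading
    from collections import defaultdict

    MAX_CONCURRENT_PER_TENANT = 2  # arbitrary example limit
    _slots = defaultdict(
        lambda: threading.BoundedSemaphore(MAX_CONCURRENT_PER_TENANT))

    def handle_request(tenant_id, work):
        sem = _slots[tenant_id]
        if not sem.acquire(blocking=False):
            return "429: tenant over its fair share, try again later"
        try:
            return work()  # the actual, possibly expensive, computation
        finally:
            sem.release()

    print(handle_request("acme", lambda: "result"))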

Sunday, July 20, 2008

SaaS business development

There is a great group discussion regarding cloud computing at Google Groups. In one thread titled "Brutal reality of SaaS..." I came across a term that was new to me: "adoption-led market".

Traditionally, the software acquisition process involves RFPs sent to vendors, whose responses are then run through some evaluation process leading to an acquisition. Money changes hands, and only at that point does business adoption start. Software selection in this procurement-driven market is a matter of faith.

The open source community had to resort to a different approach. Simon Phipps dubbed this the adoption-led market. The basic dynamic here is that developers try out different packages, often open source, to construct prototypes, with the goal of creating a deployable solution to their business problem.

It is clear that the availability of open source solutions had to cross a certain critical mass of functionality and reliability before this market could develop. The LAMP stack was the first reliable infrastructure on which business solutions could be deployed, but now we have a great proliferation of functional augmentations to this stack that accelerate the adoption-led market.

Simon concludes in his blog entry:

Written down like that, it seems pretty obvious, but having a name for it – an adoption-led market – has really helped pull together explanations and guide strategy. For example:

  • In a procurement-driven market you need to go out and sell and have staff to handle the sales process, but in an adoption-led market you need to participate in communities so you can help users become customers.

  • In a procurement-led market you need shiny features and great demos, whereas in an adoption-led market you need software that is alive, evolving and responsive to feedback.

  • In an adoption-led market you need support for older hardware and platforms because adopters will use what works on what they already have.

  • Adoption-led users self-support in the community until they deploy (and maybe afterwards if the project is still “beta”) so withholding all support as a paid service can be counter-productive.

To me, the change from a faith-based procurement process to a more agile, functionality-driven approach is at the basis of SaaS's attractiveness. A small business can do an evaluation in 15 minutes and get a sense of whether the software is going to solve a problem. The on-demand test drive facility is, to me, the great breakthrough, and I find myself looking for that facility in all software evaluations now.

Observing my own behavior, I conclude that building a SaaS business centers on this adoption-led approach. A potential client is looking for an easy-to-use trial capability, either as an open source package like MySQL or as a trial test drive like Bungee. Ease of use is the differentiator here, since most evaluations will be opportunistic: if you can't impress your customer in the time it takes to drink a cup of coffee, you may have lost that customer forever.

Friday, July 18, 2008

SaaS economics

While researching the offerings of rPath, I came across Billy Marshall's blog. Among the many tidbits of insight, two jumped out at me.

Complex server applications typically have so many configuration hooks that application deployment is not easily automated. This implies that bringing up and shutting down applications is not the same as what we are used to on the desktop. Applications become tightly coupled to the physical host configuration, internal IT processes, and the prowess of the admin. Billy blames this on OSFAGPOS, or One Size Fits All General Purpose Operating Systems. OSFAGPOS is deployed in unison with the physical host because the OS integrates the drivers that enable access to the physical host's hardware resources. A RAID or network stack can be configured to improve performance for specific application attributes, and it is this separation of roles that creates the root of all evil. rPath's vision is JeOS (read "juice"), or Just Enough OS, which packs all the metadata needed for the release engineering process to do the configuration automatically. This would make application startup fast, cheap, and reliable.
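
To illustrate the idea (this is not rPath's actual format or tooling, just a hypothetical sketch), imagine an appliance image that carries its own deployment metadata so that first boot can configure itself without an admin's knowledge of the host:

    # Hypothetical JeOS-style appliance: the image carries the metadata
    # that deployment needs, so bring-up is automated.
    APPLIANCE_MANIFEST = {
        "application": "crm-server",         # invented example application
        "base_os": "minimal-linux",          # just enough OS, no unused drivers
        "packages": ["python", "postgresql"],
        "ports": [443],
        "config_values": {"db_name": "crm", "worker_count": 4},
    }

    def first_boot(manifest):
        """Everything bring-up needs is in the manifest, not in an admin's head."""
        print("opening port(s) %s" % manifest["ports"])
        print("configuring with %s" % manifest["config_values"])
        print("starting %s" % manifest["application"])

    first_boot(APPLIANCE_MANIFEST)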

Here is an economic reason why new SaaS providers will siphon off a portion of the software universe. A typical ISV spends between 25 and 40% of its engineering and customer support expense on peripheral functionality such as installers, cross-platform portability, and system configuration. Since a SaaS provider solves these issues through a different business model, it frees up a significant portion of the development budget to work on core application features.

In my mind, the cross-platform aspect is not as clear-cut as Billy makes it appear. Whereas for an ISV the economic lock-in limits the TAM for its application, for a SaaS provider it can cut both ways. If the SaaS provider caters to customers for whom high availability is important, selecting gear from IBM or Sun might create a competitive advantage AND credibility. But if the SaaS provider caters to customers for whom cost is most important, selecting gear from Microsoft/Intel/AMD might be the better choice. The hardware platforms have different cost structures, and if a SaaS provider wants to straddle both customer groups, it still needs some form of cross-platform portability.

Wednesday, July 9, 2008

Virtualization and Cloud Computing

Virtualization in the data center allows hardware to be reused for multiple services and provides a mechanism to extract more value out of a hardware purchase. Virtualization also enables new business models like Amazon Web Services, where virtualization is leveraged to provide hardware for rent. That model, combined with scale, allows for a dynamic resource allocation that is very attractive for start-ups whose IT workload is neither constant nor predictable. Scientific research workloads are much like those of a web service start-up: potentially heavy but highly unpredictable. In this context, the work on virtual workspaces done by the Globus community is very interesting. The Virtual Workspace Service provides an open source infrastructure for the deployment and management of virtual machines. Among other things, the workspace service will allow you to do the following:

  • Create compute clouds

  • Flexibly combine VM deployment with job deployment on a site configured to manage jobs using a batch scheduler

  • Deploy one-click, auto-configuring virtual clusters

  • Interface to Amazon EC2 resources


My experience with virtual clusters on EC2 has not been positive, so I am very interested to see if the Globus approach can deliver. The idea of being able to allocate a cluster in the background whenever I want to run an MPI application is just too appealing to give up on.
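
For what it is worth, here is a rough sketch of what allocating such a cluster programmatically looks like against the EC2 API, shown with the boto3 Python library. The AMI id, instance type, and key name are placeholders, and a real MPI run would still need security groups and a hostfile built from the nodes' addresses:

    # Sketch: allocate a 4-node cluster on EC2 and wait until it is running.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(
        ImageId="ami-00000000",   # placeholder: an image with MPI installed
        InstanceType="c5.large",  # placeholder instance type
        MinCount=4, MaxCount=4,   # all four nodes or fail
        KeyName="my-keypair",     # placeholder SSH key
    )
    node_ids = [i["InstanceId"] for i in resp["Instances"]]
    ec2.get_waiter("instance_running").wait(InstanceIds=node_ids)
    print("cluster of %d nodes running: %s" % (len(node_ids), node_ids))
    # From here, build an MPI hostfile from the instances' addresses
    # and launch the job with mpirun.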