Tuesday, January 19, 2010
What Can Enterprise Software Learn From CES? - Embrace Ubiquitous Convergence
Thursday, January 29, 2009
Open Source Software Business Models On The Cloud
Powering the cloud: OSS can power cloud infrastructure just as it has powered on-premise infrastructure, letting cloud vendors minimize TCO. A less-discussed benefit of OSS for the cloud is the use of core algorithms such as MapReduce and Google Protocol Buffers, which are central to parallel computing and lightweight data exchange. There are hundreds of other open (source) standards and algorithms that are a perfect fit for powering the cloud.
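To make the MapReduce point concrete, here is a minimal word-count sketch in Python. It uses the standard-library multiprocessing module rather than a real cluster framework such as Hadoop, and the input chunks are invented for illustration:

```python
from collections import Counter
from multiprocessing import Pool

def map_count(text):
    """Map step: count words in one chunk of text."""
    return Counter(text.split())

def reduce_counts(counters):
    """Reduce step: merge the per-chunk counts."""
    total = Counter()
    for c in counters:
        total.update(c)
    return total

if __name__ == "__main__":
    # Illustrative input chunks; on a real cluster each chunk would be
    # a file split shipped to a worker node.
    chunks = ["the quick brown fox", "the lazy dog", "the fox"]
    with Pool() as pool:
        partials = pool.map(map_count, chunks)  # parallel map phase
    print(reduce_counts(partials).most_common(3))
```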
OSS lifecycle management: There is a disconnect between source code repositories, design-time tools, and the application runtime. Cloud vendors have the potential not only to provide an open source repository such as SourceForge but also to let developers build the code and deploy it on the cloud using the horsepower of cloud computing. Such centralized access to distributed computing makes it feasible to support the end-to-end OSS application lifecycle on a single platform.
OSS dissemination: Delivering pre-packaged and tested OSS bundles with support and upgrades has proven to be a successful business model for vendors such as Red Hat and SpikeSource. The cloud as an OSS dissemination platform could allow these vendors to scale up their infrastructure and operations to disseminate OSS to their customers. These vendors also have a strategic advantage in case their customers want to move their infrastructure to the cloud. This architectural approach will scale to support all kinds of customer deployments - cloud, on-premise, or side-by-side.
The distributed computing capabilities of the cloud can also be used to perform static scans to identify changes between versions, track dependencies, minimize the time to run regression tests, and so on. This could allow companies such as Black Duck to significantly shorten their code scans for a variety of their offerings. A sketch of such a parallel change scan follows.
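As a rough illustration of a parallelized change scan, the Python sketch below hashes every file in a source tree in parallel and compares the digests against a previous snapshot. The directory name and snapshot file are assumptions, and a real scanner such as Black Duck's does far more (license and dependency analysis):

```python
import hashlib
import json
from multiprocessing import Pool
from pathlib import Path

def digest(path):
    """Hash one file; returns (path, sha256 hex digest)."""
    return str(path), hashlib.sha256(path.read_bytes()).hexdigest()

def scan(tree):
    """Hash all files under a tree in parallel."""
    files = [p for p in Path(tree).rglob("*") if p.is_file()]
    with Pool() as pool:
        return dict(pool.map(digest, files))

if __name__ == "__main__":
    current = scan("./src")  # hypothetical source tree
    try:
        previous = json.loads(Path("snapshot.json").read_text())
    except FileNotFoundError:
        previous = {}  # first run: treat every file as changed
    changed = [f for f, h in current.items() if previous.get(f) != h]
    print(f"{len(changed)} file(s) changed since last scan")
```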
Compose and run on the cloud: Vendors such as Coghead and Bungee Connect provide composition, development, and deployment of tools and applications on the cloud. These are not OSS solutions, but OSS vendors can build a business model similar to that of commercial software to deliver the application lifecycle on the cloud.
OSS as SaaS: This is the holy grail of all the OSS business models mentioned above. Don't just build, compose, or disseminate, but deliver a true SaaS experience to all your users. In this kind of experience the "service" is free and open source. Monetization is not about charging for the services themselves but about using the OSS services as a base platform and providing a value proposition on top of them. Using the cloud as an OSS business platform would allow companies to experiment with their offerings in a true try-before-you-buy sense.
Monday, December 1, 2008
Does Cloud Computing Help Create Network Effect To Support Crowdsourcing And Collaborative Filtering?
O'Reilly comments on the cloud in the context of network effects:
"Cloud computing, at least in the sense that Hugh seems to be using the term, as a synonym for the infrastructure level of the cloud as best exemplified by Amazon S3 and EC2, doesn't have this kind of dynamic."
Nick argues:
"The network effect is indeed an important force shaping business online, and O'Reilly is right to remind us of that fact. But he's wrong to suggest that the network effect is the only or the most powerful means of achieving superior market share or profitability online or that it will be the defining formative factor for cloud computing."
Both of them also argue about applying power laws to cloud computing. I am with Nick on the power laws but strongly disagree with his view of cloud computing and network effects. The cloud at the infrastructure level will still follow power laws due to the inherently capital-intensive requirements of a data center, while the tools on the cloud will help create network effects. Let's make sure we all understand what power laws are:
"In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution."
Any network effect starts with a small set of something - users, content, etc. - that eventually grows bigger and bigger. That makes the cloud a great platform for systems that demand this kind of growth. The adoption barrier is close to zero for companies whose business model actually depends upon creating these effects. They can provision their users, applications, and content on the cloud, be up and running in minutes, and grow as the user base and the content grow. This actually shifts power to the smaller players and helps them compete with the big cloud players while still allowing them to create network effects.
The big cloud players, currently on the supply side of this utility model, have a few options on the table. They can keep themselves to the infrastructure business, where I would wear my skeptic hat and agree with many people on the poor viability of a capital-intensive business model with very high operational cost. That option alone does not make sense; the big companies have to have a strategic intent behind such a large investment.
The strategic intent could be to SaaS up their tools and applications on the cloud. The investment in and control over the infrastructure would provide a head start. They can also bring in a partner ecosystem and crowdsource a large user community to create a network effect of social innovation based on collective intelligence, which in turn would make the tools better. One of the challenges with recommendation systems that use collaborative filtering is mining massive information, including users' data and behavior, and computing correlations by linking it with massive information from other sources. The cloud makes a good platform for such requirements due to its inherent ability to store vast amounts of information and perform massively parallel processing across heterogeneous sources. There are obvious privacy and security issues with this kind of approach, but they are not impossible to resolve.
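For readers unfamiliar with collaborative filtering, here is a minimal user-based sketch in Python. The ratings data is made up, and a production system would compute these similarities over billions of data points - which is exactly where the cloud's parallel processing helps:

```python
from math import sqrt

# Hypothetical user -> item -> rating data.
ratings = {
    "alice": {"app_a": 5, "app_b": 3, "app_c": 4},
    "bob":   {"app_a": 4, "app_b": 3, "app_c": 5, "app_d": 4},
    "carol": {"app_b": 2, "app_d": 5},
}

def cosine(u, v):
    """Cosine similarity over the items two users both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    return dot / (sqrt(sum(r * r for r in u.values())) *
                  sqrt(sum(r * r for r in v.values())))

def recommend(user):
    """Suggest items the most similar other user rated that `user` has not."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    return [i for i in ratings[nearest] if i not in ratings[user]]

print(recommend("alice"))  # -> ['app_d'], borrowed from the most similar user
```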
Google, Amazon, and Microsoft are the supply-side cloud infrastructure players that are already moving into the demand side of the tools business, though I would not call them equal players in exploring all the opportunities.
And last but not least, there is a sustainability angle for the cloud providers. They can help consolidate thousands of data centers into a few hundred based on geographical coverage and the availability of water, energy, dark fiber, etc. This is similar to consolidating hundreds of dirty coal plants into a few non-coal green power plants that produce clean energy with an efficient transmission and distribution system.
Thursday, October 16, 2008
Greening The Data Centers
The energy efficiency of a data center can be classified into three main categories:
1. Efficiency of the facility: The PUE (power usage effectiveness, the ratio of total facility power to the power delivered to the IT equipment; the closer to 1.0 the better) is designed to measure this kind of efficiency, which is based on how the facility that hosts a data center is designed: its physical location, layout, sizing, cooling systems, etc. Some organizations have gotten quite creative in improving this kind of efficiency, setting up underground data centers to achieve consistent temperature, locating data centers near a power generation facility, or even setting up their own captive power plants to reduce the distribution loss from the grid and meet peak load demand.
2. Efficiency of the servers: This efficiency is based on the efficiency of the hardware components of the servers, such as CPUs, cooling fans, drive motors, etc. Sun has made significant progress in this area in providing energy-efficient solutions; it has backed the organization OpenEco, which helps participants assess, track, and compare energy performance, and it has also published its own carbon footprint.
3. Efficiency of the software architecture: To achieve this kind of efficiency, the software architecture is optimized to consume less energy while providing the same functionality. Optimization techniques have so far focused on performance, storage, and manageability, ignoring the architectural tuning that brings energy efficiency.
Round-robin is a popular load balancing algorithm for distributing load across servers, but it has been shown to be energy-inefficient (see the sketch below). Another example is compression: data compressed on disk requires CPU cycles to uncompress, while data stored uncompressed requires more I/O calls. All else being equal, which approach requires less power? These are not trivial questions.
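To illustrate why, here is a toy Python sketch contrasting round-robin dispatch with a consolidation-aware policy that packs requests onto the fewest servers, so idle machines can drop into low-power states. The server names and capacity numbers are invented:

```python
from itertools import cycle

SERVERS = ["s1", "s2", "s3", "s4"]
CAPACITY = 10  # max concurrent requests per server (illustrative)

def round_robin(requests):
    """Spread requests evenly; every server stays busy (and powered up)."""
    load = dict.fromkeys(SERVERS, 0)
    rr = cycle(SERVERS)
    for _ in range(requests):
        load[next(rr)] += 1
    return load

def pack_first_fit(requests):
    """Fill one server before waking the next; idle servers can sleep."""
    load = dict.fromkeys(SERVERS, 0)
    for _ in range(requests):
        target = next(s for s in SERVERS if load[s] < CAPACITY)
        load[target] += 1
    return load

print(round_robin(12))     # {'s1': 3, 's2': 3, 's3': 3, 's4': 3} - 4 servers awake
print(pack_first_fit(12))  # {'s1': 10, 's2': 2, 's3': 0, 's4': 0} - 2 servers awake
```

Both policies serve the same 12 requests; the packing policy leaves two servers free to be powered down, which is the energy argument against naive round-robin.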
I do not favor an approach where the majority of programmers are required to change their behavior and learn a new way of writing code. One way to optimize the energy performance of the software architecture is to adopt an 80/20 rule: 80% of applications use 20% of the code, and in most cases that is infrastructure or middleware code. It is relatively easy to educate and train this small subset of programmers to optimize the code and the architecture for energy efficiency. Virtualization could also help a lot in this area, since the execution layers can be abstracted into something that can be rapidly changed and tuned, without affecting the underlying code, while providing consistent functionality and behavior.
Energy efficiency cannot be achieved by tuning things in isolation; it requires a holistic approach. PUE ratios identify the energy lost before it reaches a server, an energy-efficient server requires less power to execute the same software compared to other servers, and an energy-efficient software architecture actually lowers energy consumption for the same functionality the software provides. We need to invest in all three categories.
Power consumption is just one aspect of being green. There are many other factors, such as how a data center handles e-waste, the building materials used, the greenhouse gases from the captive power plant (if any) and the cooling plants, etc. However, tackling energy efficiency is a great first step in greening the data centers.
Friday, September 12, 2008
Google Chrome Design Principles
- Embrace uncertainty and chaos: Google does not expect people to play nice. There are billions of pages with unique code, and rendering all of them perfectly is not what Google is after. Instead, Chrome puts people in charge of shutting down pages (applications) that do not behave. Empowering people to pick what they want and letting them filter out the bad experiences is a great design approach.
- Support the journey from pages to applications to the cloud: Google embraced the fact that the web is transitioning from pages to applications. Google took an application-centric approach to design the core architecture of Chrome and turned it into a gateway to the cloud and yet maintained the tab metaphor to help users transition through this journey.
- Scale through parallelism: Chrome's architecture makes each application a separate process. This allows Chrome to better tap into multi-core hardware, given enough help from the underlying operating system. Not choosing a multi-threaded architecture reinforces the fact that parallelism is the only way to scale on multi-core (see the process-isolation sketch after this list). I see an opportunity in designing a multi-core adaptation layer for Chrome to improve process context switching, since it still relies on a scheduler to get access to a CPU core.
- Don't change developers' behavior: JavaScript still dominates web development. Instead of asking developers to code differently, Google actually accelerated JavaScript via the V8 virtual machine. One of the major adoption challenges of parallel computing is composing applications to utilize multi-core architectures, because that composition requires developers to acquire and apply a new skill set and write code differently.
- Practice traditional wisdom: Java introduced a really good garbage collector that was part of the core language from day one and did not require developers to explicitly manage memory. Java also had a sandbox model for Applets (its client-side runtime) that made Applets secure. Google recognized this traditional wisdom and applied the same concepts to JavaScript to make Chrome secure and memory-efficient.
- Growing up as an organization: The Chrome team collaborated with Android to pick up WebKit rather than building a rendering engine of their own (actually not a common thing at Google). They used their existing search infrastructure to find the most relevant pages and tested Chrome against them, making it a good 80-20 browser (80% of the people always visit the same 20% of the pages). This approach demonstrates a high degree of cross-pollination. Google is growing up as an organization!
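As a toy Python sketch of the process-per-application idea mentioned above, the standard multiprocessing module shows the isolation property Chrome's design banks on: one misbehaving worker crashes without taking down its siblings or the parent. The page names are invented:

```python
from multiprocessing import Process

def render(page):
    """Pretend to render one page in its own process."""
    if page == "badpage.example":
        raise RuntimeError("misbehaving page")  # crashes only this process
    print(f"{page}: rendered fine")

if __name__ == "__main__":
    pages = ["news.example", "badpage.example", "mail.example"]
    procs = [Process(target=render, args=(p,)) for p in pages]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # The parent survives; only the bad page's process died.
    for page, proc in zip(pages, procs):
        status = "ok" if proc.exitcode == 0 else "crashed"
        print(f"{page}: {status}")
```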
Monday, July 21, 2008
SaaS platform pitfalls and strategy - Part 2
Don't simply reduce TCO, increase ROI: According to an enterprise customer survey carried out by McKinsey and SandHill this year, the buying centers for SaaS are expected to shift towards the business with less and less IT involvement. A SaaS vendor should design a platform that not only responds to the changing and evolving business needs of a customer but can also adapt to a changing macro-economic climate to serve customers better. Similarly, a vendor should carve out a go-to-market strategy that targets the business and demonstrates increased ROI, not just reduced TCO, even if the vendor is used to selling a highly technical component to IT.
The long tail: The SaaS approach enables a vendor to up-sell to existing customers a solution that is just a click away and requires no implementation effort. A vendor should design a platform that can identify a customer's ongoing needs based on current information consumption, usage, and challenges, and tap into a recommendation engine for up-selling. A well-designed platform should allow vendors to keep upgrades simple, customers happy, and users delighted.
Hybrid deployment: The world is not black and white for customers; the deployment landscape is almost never SaaS-only or on-premise-only. Customers almost always end up with a hybrid approach, so a SaaS platform should support integration scenarios that span from SaaS to on-premise. This is easier said than done, but if done correctly, SaaS can start replacing many on-premise applications by providing a superior (non)ownership experience. A typical integration scenario could be a recruitment process that an applicant begins outside the firewall on a SaaS application, with the process gradually moving that information into an enterprise application behind the firewall to complete the new-hire workflow and provision the employee into the system. Another scenario could be to process lead-to-order on SaaS and order-to-cash on-premise.
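A minimal sketch of such a handoff, assuming a hypothetical on-premise REST endpoint reachable through a secure relay. It uses the third-party requests library, and the URL, token, and payload fields are all invented for illustration:

```python
import requests  # third-party: pip install requests

# Hypothetical relay endpoint exposed by the on-premise integration layer.
ONPREM_RELAY = "https://relay.example.com/hr/new-hire"
API_TOKEN = "changeme"  # would come from a secrets store in practice

def hand_off_applicant(applicant):
    """Push a completed SaaS-side application into the on-premise workflow."""
    resp = requests.post(
        ONPREM_RELAY,
        json={"event": "application.accepted", "applicant": applicant},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()       # surface relay/firewall failures loudly
    return resp.json()["workflow_id"]  # assumed response field

# Example: the SaaS recruitment app calls this once an offer is accepted.
# hand_off_applicant({"name": "J. Doe", "position": "Engineer"})
```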
Ability to connect to other platforms: It would be a dire mistake to assume the standalone existence of any platform. Any and all platforms should have open, flexible, and high-performance interfaces to connect to other platforms. Traditionally the other platforms were standard enterprise software platforms, but social networking platforms are now proliferating, and a successful SaaS player will be the one who can tap into such organically growing networks. The participants on these platforms are the connectors for an organization, and they could speed up cross-organizational SaaS adoption across silos that have been traditional on-premise consumers.
Built for change: Rarely is a platform designed to predict the technical, functional, and business impact of including a new feature or discarding an existing one. Take internationalization (i18n) as an example. The challenge of supporting i18n is not necessarily the resources or money required to translate content into many languages (Facebook crowdsourced that) but designing platform capabilities that can manage content in multiple languages efficiently. Many platform vendors make a conscious choice (rightfully so) not to support i18n in early versions of the platform. However, rarely does an architect design the current platform so that it can be changed predictably in the future to include a feature that was omitted. Designing a platform for current requirements and designing it for future requirements are not mutually exclusive, and a good architect should be able to draw a continuum that has change predictability.

Vendors should also virtualize the core components of the platform, such as multi-tenancy, and not just limit their virtualization efforts to the deployment options. Multi-tenancy can be designed in many different ways at each layer, such as partitioning the database, shared-nothing clusters, etc. The risks and benefits of these approaches for achieving non-functional characteristics such as scalability, performance, and isolation change over time. Virtualizing multi-tenancy allows a vendor to manage the implementation, deployment, and management of a platform independently of constantly moving building components and hence guarantee the non-functional characteristics. The sketch below illustrates the idea.
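One hedged reading of "virtualizing multi-tenancy" is hiding the tenancy scheme behind a common interface so it can be swapped without touching application code. Here is a minimal Python sketch using SQLite; the table layout and strategy names are my own illustration, not a prescribed design:

```python
import sqlite3

class SharedSchemaTenancy:
    """All tenants in one database, separated by a tenant_id column."""
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE accounts (tenant_id TEXT, name TEXT)")

    def add_account(self, tenant, name):
        self.db.execute("INSERT INTO accounts VALUES (?, ?)", (tenant, name))

    def accounts(self, tenant):
        rows = self.db.execute(
            "SELECT name FROM accounts WHERE tenant_id = ?", (tenant,))
        return [r[0] for r in rows]

class IsolatedDatabaseTenancy:
    """One database per tenant - stronger isolation, costlier upgrades."""
    def __init__(self):
        self.dbs = {}

    def _db(self, tenant):
        if tenant not in self.dbs:
            db = sqlite3.connect(":memory:")
            db.execute("CREATE TABLE accounts (name TEXT)")
            self.dbs[tenant] = db
        return self.dbs[tenant]

    def add_account(self, tenant, name):
        self._db(tenant).execute("INSERT INTO accounts VALUES (?)", (name,))

    def accounts(self, tenant):
        rows = self._db(tenant).execute("SELECT name FROM accounts")
        return [r[0] for r in rows]

# Application code depends only on the shared interface, so the vendor can
# change the tenancy model underneath without touching the callers.
for strategy in (SharedSchemaTenancy(), IsolatedDatabaseTenancy()):
    strategy.add_account("acme", "Road Runner Inc.")
    print(strategy.accounts("acme"))  # ['Road Runner Inc.'] in both cases
```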
Don't bypass IT: Instead, make friends with IT and empower them to serve users better. Even if IT does not influence many SaaS purchase decisions, it is a politically well-connected and powerful organization that can help vendors in many ways. Give IT what they really want in a platform, such as security, standardization, and easy administration, and make them mavens of your products and platform.
Platform for participation: Opening up a platform to the ecosystem should not be an afterthought; it should be a core strategy for platform development and consumption. In its early years eBay charged developers to use its API, which inhibited growth and later forced eBay to make the API free - a decision that helped eBay grow exponentially. I would even suggest open-sourcing a few components of the platform and allowing developers to use the platform the way they want, without SaaS being the only deployment option.
Platform agnostic: Programming languages, hardware and deployment options, and UI frameworks change every few years. A true SaaS platform should be agnostic to these building components and provide many upstream and downstream alternatives to build applications and serve customers. This may sound obvious, but vendors do fall into the "cool technology" trap, and that devalues the platform over time due to inflexibility in adapting to the changing technology landscape.
Monday, July 7, 2008
SaaS platform - design and architecture pitfalls - Part 1
1) Failing to design for rollback
"...you can only make one tweak to your current process, make it so that you can always roll back any code changes..."
This is a universal truth for any platform design decision, irrespective of the delivery model, SaaS or on-premise. eBay makes a good case study: its code change management process, called "trains," can track down the code in a production system responsible for a specific defect and roll back only those changes. A philosophical mantra for architects and developers would be to avoid decisions that are irreversible. Framed positively: prototype as fast as you can, fail early and often, and don't go for a big-bang design that you cannot reverse. Eventually the cumulative efforts will lead you to a sound and sustainable design. The feature-flag sketch below shows one common way to keep changes reversible.
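One widely used way to keep a change reversible is a feature flag: the new code path ships dark and can be switched off at runtime without a redeploy. A minimal Python sketch, with the flag store and function names invented for illustration:

```python
# Hypothetical runtime flag store; in production this would be backed by a
# config service so flags can flip without a redeploy.
FLAGS = {"new_checkout": True}

def checkout_legacy(cart):
    return f"legacy checkout for {len(cart)} item(s)"

def checkout_new(cart):
    return f"new checkout for {len(cart)} item(s)"

def checkout(cart):
    """Route to the new path, but keep the old one a flag-flip away."""
    if FLAGS.get("new_checkout", False):
        return checkout_new(cart)
    return checkout_legacy(cart)

print(checkout(["book"]))      # new checkout path
FLAGS["new_checkout"] = False  # instant rollback, no code change
print(checkout(["book"]))      # legacy checkout path
```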
2) Confusing product release with product success
"...Do you have “release” parties? Don’t — you are sending your team the wrong message. A release has little to do with creating shareholder value..."
I would not go to the extreme of celebrating only customer success and never release milestones. Product development folks work hard towards a release, and a celebration provides a sense of accomplishment and motivation that has indirect shareholder value. I would instead suggest a cross-functional celebration: invite the sales and marketing people to the release party. This creates empathy for the people in the field whom developers and architects rarely or never meet, and it gives the field an opportunity to mingle, discuss, and channel the customer's perspective. Similarly, include non-field people when celebrating field success. This helps developers, architects, and product managers understand their impact on the business and gives them an opportunity to get to know who actually bought and started using their products.
5) Scaling through third parties
"....If you’re a hyper-growth SaaS site, you don’t want to be locked into a vendor for your future business viability..."
I would argue otherwise. A SaaS vendor, or any other platform vendor, should really focus on their core competencies and rely on third parties for everything that is non-core.
"Define how your platform scales through your efforts, not through the systems that a third-party vendor provides."
This is partially true. SaaS vendors do want to use Linux, Apache, or JBoss and still be able to describe the scalability of their platform in the context of these external components (which happen to be open source in this case). The partial truth is that you can still use the right components the wrong way and fail to scale. My recommendation to a platform vendor would be to be open: tell customers why and how you are using third-party components and how that helps you focus on your core, and hence helps customers get the best out of your platform. A platform vendor should share best practices, gather feedback from customers and peers to improve its own processes and platform, and pass that feedback on to the third parties to improve their components.
6) Relying on QA to find your mistakes:
"QA is a risk mitigation function and it should be treated as such"
The QA function has always been underrated and misunderstood; its role extends way beyond risk mitigation. You can only fix defects that you can find, and yes, I agree that mathematically it is impossible to find all the defects. That's exactly why we need QA people. Smart and well-trained QA people think differently and find defects that developers would never have imagined. QA people have no code affinity or selection bias, so they can test for all kinds of conditions that would otherwise be missed. That said, I do agree that developers should put themselves in the shoes of QA, rigorously test their own code, run automated unit tests and code-coverage tools, and not just rely on QA to find defects.
8) Not taking into account the multiplicative effect of failure:
"Eliminate synchronous calls wherever possible and create fault-isolative architectures to help you identify problems quickly."
Eliminating synchronous calls and swimlane architectures are great concepts, but a vendor should really focus on automated recovery and self-healing, not just failure detection. Failure detection can help a vendor isolate a problem and mitigate the overall impact of that failure on the system, but for a competitive SaaS vendor that's not good enough. Lowering MTBF (mean time between failures) is certainly important, but lowering MDT (mean down time) is even more important. A vendor should design a platform based on some of the autonomic computing fundamentals; the sketch below shows the simplest form of the idea.
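As a toy illustration of self-healing versus mere detection, here is a Python supervisor that not only notices a failing task but restarts it with exponential backoff, directly attacking MDT. The flaky task and the retry parameters are invented:

```python
import time

def supervise(task, max_restarts=3, backoff=0.5):
    """Detect failures AND recover: restart the task until it succeeds."""
    for attempt in range(max_restarts + 1):
        try:
            return task()
        except Exception as exc:
            print(f"detected failure: {exc} (attempt {attempt + 1})")
            if attempt == max_restarts:
                raise  # escalate only after recovery attempts are exhausted
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

# A flaky task that fails twice before succeeding (illustrative).
calls = {"n": 0}
def flaky_worker():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient fault")
    return "recovered without human intervention"

print(supervise(flaky_worker))
```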
10) Not having a business continuity/disaster recovery plan:
"Even worse is not having a disaster recovery plan, which outlines how you will restore your site in the event a disaster shuts down a critical piece of your infrastructure, such as your collocation facility or connectivity provider."
Having a disaster plan is like posting a sign by an elevator instructing people not to use it when there is a fire. Any disaster recovery plan is, well, just a plan unless it is regularly tested, evaluated, and refined. Fire drills and post-drill debriefs are a must-have.
I will describe some of the design and architectural must-have characteristics of a SaaS platform in part 2 of this post.
Wednesday, February 20, 2008
Scenario-based enterprise architecture - CIO’s strategy to respond to a change
For CIOs, the key question is how to turn IT into an asset and a capability that supports the business, rather than a bottleneck that everyone wants to avoid or circumvent. Scenario-based strategic IT planning, transparent policies, and appropriate governance can keep the enterprise architecture from falling apart and build capabilities that serve business needs and provide a competitive advantage.
Being tactical and strategic at the same time is what could make many CIOs successful. In my interactions with CIOs, I have found that some of their major concerns are organizational credibility and empowerment. The CIO is oftentimes seen as an inhibitor by the business people, and it is the CIO's job to fix that perception. Being seen as a person who can respond to business needs quickly and proactively can go a long way toward fixing it. You cannot really plan for all the possible worst-case scenarios, but at least keep your strategy nimble, with measures in place to react to the scenarios you had not planned for and to act ahead of time on the ones you did.
Sunday, September 9, 2007
The eBay way to keep infrastructure architecture nimble
"Innovating for a community of our size and maintaining the reliability that's expected is challenging, to say the least. Our business and IT leaders understand that to build a platform strategy, we must continue to create more infrastructure, and separate the infrastructure from our applications so we can remain nimble as a business. Despite the complexity, it's critical that IT is transparent to our internal business customers and that we don't burden our business units or our 233 million registered users with worries about availability, reliability, scalability, and security. That has to be woven into our day-to-day process. And it's what the millions of customers who make their living on eBay every day are counting on us to do."
eBay's strategy of identifying the pain points early on, solving those problems first, and keeping the infrastructure nimble enough to adapt to growth has paid off. eBay focused on an automated process for rolling weekly builds into the production system and for tracking down the code change that could have destabilized a certain set of features. The most difficult aspect of sustaining engineering is isolating the change that is causing an error; fixing the error once the root cause is known is relatively easy most of the time. eBay also embraces the fact that if you want to roll out changes quickly, limited QA efforts, automated or otherwise, are not going to guarantee that there won't be any errors. Anticipating errors and having a quick plan to fix them is a smart strategy.
If you read the post closely, you will observe that all the efforts relate to the infrastructure architecture: high availability, change management, security, third-party APIs, concurrency, etc. eBay did not get distracted by the Web 2.0 bandwagon early on and instead focused on a platform strategy to support its core business. Many organizations could probably learn this lesson: be nimble, do what your business needs, and don't get distracted by disruptive changes; embrace them slowly instead. Users will forgive you if your web site doesn't have rounded corners and doesn't do AJAX, but they won't forgive you if they could not raise their bid and lost the auction because the web site was slow or unavailable.
One of the challenges eBay faced was the lack of good industry practices for similar requirements, since eBay was unique in the way it grew exponentially and had to keep changing its infrastructure based on what it believed was the right approach. eBay is still working on a grid infrastructure that could standardize some of its infrastructure and service delivery platform architecture. This would certainly alleviate some of the pain of its proprietary infrastructure and could potentially become the de facto set of best practices for the entire industry for achieving the best on-demand user experience.
eBay kept it simple: a small list of trusted suppliers, infrastructure that can grow with users, and a good set of third-party APIs and services to complete the ecosystem and empower users to get the maximum juice out of the platform. That's the eBay way!
Tuesday, September 4, 2007
SugarCRM hops on to multi-instance on-demand architecture bus
The multi-instance model resonates well with customers who are concerned about the privacy of their data. This model is very close to an on-premise model, except that the instance is managed by the vendor. It has all the upgrade and maintenance issues of any on-premise model, but a vendor can manage the slot more efficiently than a customer could, and can also use a utility hardware model and data center virtualization to a certain extent. Customizations are easy to preserve in this kind of deployment, but there is a support downside because each instance is unique.
Multi-tenant architecture has the benefit of easy upgrades and maintenance, since there is only one logical instance to maintain. This instance is deployed on clusters at the database and mid-tier levels for load balancing and high availability. As you can imagine, it is critical that the architecture supports "hot upgrade": take the instance down for scheduled or unscheduled downtime and all your customers are affected. Database vendors still struggle to provide good high-availability solutions that support hot upgrades. This also puts pressure on application architects to minimize upgrade and maintenance time. The sketch below shows the simplest shape of a hot swap.
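A minimal Python sketch of the hot-upgrade idea: run the new version alongside the old and atomically swap the pointer that requests are routed through, so no request ever sees a down instance. The handler names and versions are made up:

```python
import threading

class HotSwapRouter:
    """Route every request through a pointer that can be swapped atomically."""
    def __init__(self, handler):
        self._handler = handler
        self._lock = threading.Lock()

    def handle(self, request):
        with self._lock:
            handler = self._handler  # grab the current version
        return handler(request)

    def upgrade(self, new_handler):
        # The new version is already warmed up; the swap itself is instant,
        # so there is no downtime window for tenants.
        with self._lock:
            self._handler = new_handler

def app_v1(req):
    return f"v1 handled {req}"

def app_v2(req):
    return f"v2 handled {req}"

router = HotSwapRouter(app_v1)
print(router.handle("order-42"))  # served by v1
router.upgrade(app_v2)            # hot upgrade, zero downtime
print(router.handle("order-43"))  # served by v2
```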
And this is just the tip of the iceberg. As you dig deeper into the deployment options, you are basically opening a can of worms.
Tuesday, July 3, 2007
SOA ROI - interoperability and integration
If you are a SOA-enabled enterprise application vendor trying to sell SOA to your customers, you quickly realize that very few customers are interested in buying SOA by itself. Many customers regard SOA investment as non-differentiating and compare it with compliance: you have to have it, and there is no direct ROI. A vendor can offer ROI if it has the right integration and interoperability strategy. For customers it is all about lowering the TCO of the overall IT investment, not the TCO of individual applications. SOA-enabled applications with standardized, flexible, and interoperable interfaces work towards lower TCO and provide customers a sustainable competitive advantage. Generally speaking, customers are not interested in the "integration governance" of the application provider as long as the applications are integrated out of the box and expose the services necessary to support inbound and outbound integration with the customer's other software, in support of the customer's vision of true enterprise SOA.
It has long been debated what a good integration strategy for SOA-enabled products looks like. Organizations debate whether to use the same service interfaces for inter-application and intra-application integration. Intra-application integration has major challenges, especially in large organizations, where different stakeholders and owners need to work together to make sure the applications are integrated out of the box. That sounds obvious, but it is not easy. In most cases it is a trade-off between being able to "eat your own dog food" by using the published interfaces, and optimizing performance by compromising the abstraction with a different contract than the inter-application one. There are a few hybrid approaches that fall between these two alternatives, but it is always a difficult choice. Most customers do not pay much attention to the intra-application strategy, but it is still in a vendor's best interest to promote, practice, and advocate service-based composition over ad hoc integration. There are many ways to fine-tune runtime performance if this approach does result in degradation.
The other critical factor for ROI is interoperability. Internal service enablement doesn't necessarily have to be implemented as web services, but there is a lot of value in providing standardized service endpoints: essentially web services with published WSDLs and WS-I profile compliance. Interoperability helps customers with their integration efforts and establishes trust in and credibility for the vendor's offerings. I have also seen customers associate interoperability with transparency. Not all standards in the web services area have matured, which makes it difficult for a vendor to comply with a particular set of standards, but at a minimum vendors can follow the best practices and the standards that have matured. The snippet below shows what a published WSDL buys a customer.
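As a hedged Python sketch of what a published WSDL buys the customer, here is the third-party zeep SOAP client consuming a hypothetical endpoint; the service URL and operation name are invented, and the point is that a standards-compliant endpoint needs no vendor-specific SDK:

```python
from zeep import Client  # third-party SOAP client: pip install zeep

# Hypothetical WSDL published by a SOA-enabled vendor.
client = Client("https://vendor.example.com/orders?wsdl")

# Because the contract is standardized and published, the operations and
# types are discovered from the WSDL itself - no proprietary code needed.
order = client.service.GetOrderStatus(orderId="42")  # assumed operation
print(order)
```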
Sunday, June 17, 2007
SOA Governance - strategic or tactical?
Many architects view SOA governance as a technical challenge, but I beg to differ. Strategic SOA governance is not just a technical problem; it is a business and process problem with socioeconomic implications. I have already talked about the people part. As for SOA economics, there is no good way to calculate ROI based on SOA alone. A few people have actually tried, and I am not sure it is the right model: neither the number of services, nor the number of reusable services, nor any other QoS measure for SOA yields a useful economic metric. SOA is so intertwined with the business that it is your guess versus mine in extracting a monetary value out of it. Having said this, people do work hard on making a business case for their organizations, since SOA is hard to sell.
The strategic-to-tactical transformation of SOA is not easy. This is where people argue over reference architectures, policies, etc. These are time-consuming, messy efforts that involve several technical, domain, and functional discussions. A cross-functional team works well for this kind of governance problem, since it is critical to have a holistic (horizontal) view of SOA with enough help from experts in several (vertical) areas. SOA architects have to have good people and project management skills since, as I already mentioned, governance is not just a technical problem. If you are a purely technical architect, you end up with a diagram that mixes a lot of low-level details with high-level ones, and that helps no one because the information is difficult to consume. Communicating the architecture is one of the hardest challenges for an architect, and it becomes even harder when you are describing strategic SOA governance.