IT Governance Tool – Kintana and Clarity

IT governance focuses on information technology (IT) systems, their performance, and risk management. It emphasizes value delivery in line with the strategic alignment of IT and business goals: it ensures that IT delivers the promised benefits against strategy, and it enables execution of the value proposition throughout the IT life cycle while optimizing IT costs.

Project and Portfolio Management (PPM) is a sub-discipline of IT governance that deals with IT investments, their performance, and the associated risk management. PPM applications can provide visibility into the current state of organizational initiatives, resources, and spending through the centralized collection of data from multiple sources and perspectives. Integration across multiple business and IT process domains through PPM functions provides multidimensional views of this data for better visibility and understanding of resource supply versus project demand in IT and other project environments.

Three areas that IT organizations must target for change to gain the full benefits of cloud computing:

IT management at large scale, with minimal human intervention – the efficiency of a cloud rests on operating at large scale, with as little human intervention as possible.

Operating at cloud scale requires using pools of homogeneous resources that can be swapped between different workloads as needed, automatically. Using dedicated resources – servers – slows things down. IT management software, and especially automation software, needs to match these requirements, allowing you to deal with pools of resources and swap them around as needed.

Configuration Management & Automation – the configuration and provisioning needs of the cloud must be highly automated, and often must assume less-than-perfect process and diagnostics.

Provisioning and configuration are happening all the time. Instead of being a once-in-a-release activity, resources in your cloud are constantly being shifted around for different uses and updated, even as you begin to deliver new applications and features more frequently. Automation software needs to operate in an environment where it becomes part of the daily running, diagnosing, and repairing of your IT services.

Delivering Frequent Functionality – taking advantage of the cloud depends on delivering applications in a rapid, Agile fashion, which in turn depends on automation being an enabler instead of yet another moving part to wrestle with.

The most radical need that cloud computing is driving in IT management is the need to collaborate with developers. In addition to driving down costs, the ultimate goal of cloud computing is to allow IT to deliver more services more frequently to The Business, customers, or whoever IT’s customers and end-users are. Note the use of the term “feature” vs. “release”: orienting your automation around delivering small chunks of functionality is critical. While developers must re-orient their application architectures and project management to deliver frequent functionality, operations must re-orient as well. Operations has to start thinking of the pools of resources in their cloud(s) as something they “program” as part of the applications and services being delivered, not just infrastructure that’s manually tuned and configured.

There isn’t time for IT to give loving care to each resource or even orchestrate everything manually. Instead, you need your automation system to act like a compiler and runtime engine to do your infrastructure bidding at scale.
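The compiler analogy can be made concrete. Below is a minimal, hypothetical sketch (the resource names and spec format are invented for illustration) of how an automation engine might "compile" a declared desired state into provisioning actions by diffing it against what is currently running:

```python
def plan_actions(desired, current):
    """Diff a desired-state spec against the current state and emit
    the provisioning actions needed to converge, compiler-style."""
    actions = []
    # Provision anything declared but not yet running.
    for name, spec in desired.items():
        if name not in current:
            actions.append(("provision", name, spec))
        elif current[name] != spec:
            # Re-provision a fresh resource rather than hand-tuning it.
            actions.append(("reprovision", name, spec))
    # Tear down anything running that is no longer declared.
    for name in current:
        if name not in desired:
            actions.append(("deprovision", name, None))
    return actions

# Hypothetical workloads: two web nodes desired, one stale node running.
desired = {"web-1": {"cpu": 2}, "web-2": {"cpu": 2}}
current = {"web-1": {"cpu": 1}, "batch-9": {"cpu": 4}}
for action in plan_actions(desired, current):
    print(action)
```

Real automation suites layer scheduling, dependency ordering, and rollback on top of this diff-and-converge core, but the principle is the same: declare the end state and let the engine compute the work.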

“Needs” of the cloud are driving requirements for automation software. To pull out some general principles from the above, reaching the full benefits of cloud computing depends on IT changing the way it operates and thinks in several ways:

Failure is a Feature – assume failures are going to happen all the time, and build your automation system as one of the ways to recover by re-provisioning fresh, error-free resources.

Embrace Simplicity – try to keep pools of resources as homogeneous as possible instead of requiring extra attention for specialized IT.

Automate Everything – operating at cloud scale requires that humans intervene only at the most critical moments, not act as constant hand-holders for cloud resources.
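Taken together, these principles suggest automation that treats failed resources as disposable. The sketch below (node names and the health check are hypothetical) shows the failure-is-a-feature idea: rather than repairing a sick node by hand, swap in a fresh, identical spare from the homogeneous pool:

```python
def heal(workloads, spare_pool, is_healthy):
    """Replace failed resources with fresh ones from a homogeneous
    spare pool instead of hand-repairing them."""
    for role, node in list(workloads.items()):
        if not is_healthy(node):
            fresh = spare_pool.pop()   # grab an identical spare
            workloads[role] = fresh    # swap it in automatically
            # The failed node goes back for re-imaging, not manual care.
    return workloads

workloads = {"web": "node-a", "db": "node-b"}
spares = ["node-c", "node-d"]
healed = heal(workloads, spares, is_healthy=lambda n: n != "node-b")
print(healed)  # db now runs on a fresh node
```

Because the pool is homogeneous, any spare can serve any role, which is exactly why "embrace simplicity" and "failure is a feature" reinforce each other.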
Tools for IT Governance
CA Clarity PPM
CA Clarity PPM is a single system of record designed to help you manage IT services, projects, people, and financials. CA Clarity PPM for IT Governance, available on-demand or on-premises, helps you improve the quality of investment decision-making and ensures all IT services and investments are aligned with corporate strategy.

CA Clarity PPM for IT Governance eliminates the time-consuming manual processes and reporting. By bringing together demand, resource, portfolio and financial management, IT organizations can confidently:

• Select the right IT investments

• Satisfy increasing demand on scarce resources

• Effectively determine where to trim IT costs

• Prove the value IT delivers to the business

CA Clarity PPM supports Solaris, HP-UX, AIX, Windows, and Linux environments.

Portfolio management: Quickly assess and compare proposed investments and their impact on the business using side-by-side “what-if” scenario comparisons, visual alignment indicators, and scorecards.

Resource management: Prioritize your resources using capacity planning and skills-search capabilities to ensure the right people are available at the right time to support your initiative from the outset.

Budget management: Accurately forecast program spend by tracking and planning labor and non-labor program costs to minimize budget surprises.

Dashboards & business intelligence: Respond flexibly to changing requirements with dashboard visibility into critical program information and risks, and reallocate funds or people to minimize budget overruns and delays.

Project management: Accelerate delivery of your programs with leading project management capabilities, best practices, and configurable workflows.

Collaboration: Stay on top of everything that’s happening on the program with real-time notifications on scope changes, people, documents, best practices, and tips.

End-user training: Speed user training and adoption using a fast and flexible method for developing customer-specific documentation, training, and support.
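The side-by-side scorecard comparison described under portfolio management above boils down to simple weighted arithmetic. The following sketch uses invented proposals, criteria, and weights purely for illustration:

```python
def scorecard(investments, weights):
    """Weighted side-by-side scoring of proposed investments:
    the arithmetic behind a simple 'what-if' comparison."""
    scores = {}
    for name, attrs in investments.items():
        scores[name] = sum(weights[k] * attrs[k] for k in weights)
    # Rank the proposals, best score first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical proposals rated 1-5 on alignment, value, and low risk.
weights = {"alignment": 0.5, "value": 0.3, "low_risk": 0.2}
proposals = {
    "CRM upgrade": {"alignment": 5, "value": 4, "low_risk": 2},
    "Data center": {"alignment": 3, "value": 5, "low_risk": 4},
}
print(scorecard(proposals, weights))
```

Changing the weights and re-running is the "what-if": each scenario produces a new ranked list for side-by-side comparison.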
Mercury’s ITG (IT Governance) is now HP’s PPMC (Project and Portfolio Management Center)

HP Project and Portfolio Management

HP Portfolio Management module enables you to govern your portfolio of IT projects, applications, and opportunities in real-time with effective collaborative processes.

HP Program Management module enables you to automate processes for managing scope, risk, quality, issues, and schedules.

HP Project Management module integrates project management and process controls to reduce the number of project/schedule overruns, thereby reducing project risks and costs.

HP Financial Management module provides a single, real-time view into all financial attributes related to the programs, projects, and overall IT portfolio.

HP Resource Management module provides comprehensive resource analysis, which includes both strategic and operational activities in the work lifecycle.

HP Time Management module keeps the focus on value-added activities by streamlining time tracking and improving accuracy across work performed by IT.

HP Demand Management module captures all project and non-project requests of IT so you will know what the organization is asking for and have the information to focus your valuable IT resources on top business priorities.

HP Project and Portfolio Management Dashboard provides role-based, exception-oriented visibility into IT trends, status, and deliverables to help you make and execute real-time decisions.

HP PPM Center Mobility Access is the mobility and collaboration solution embedded in HP PPM Center software; it enables email notifications and approval actions from a user’s email client on any device that supports regular email.

Functionality                                                                   HP PPMC   CA Clarity
Support for project planning and project prioritization                         Yes       Yes
Workflow engine which converts business processes into digitized workflows      Yes       Yes
Dashboard functionality with drill-down capability (portlet technologies)       Yes       Yes
Strategic alignment to higher-level business objectives                         Yes       Yes
Collaboration functionality                                                     No        Yes
Document management functionality                                               Yes       Yes
Web services for third-party integration                                        Yes       Yes
Direct charging and billing functionality                                       No        Yes
Use as application deployment system                                            Yes       No
Resource, time, and cost tracking                                               Yes       Yes
Portfolio management                                                            Yes       Yes
Project management                                                              Yes       Yes

CA’s Cloud Solutions

Project and Portfolio Management
Project Management Office: CA Services specializes in designing, implementing, and optimizing PMO solutions, using a five-step, rapid time-to-value approach that delivers results quickly and incrementally to help you achieve the efficient IT performance that drives superior business results.

CA Clarity™ PPM for IT Governance: CA Clarity PPM provides a systematic and structured approach to quickly bring together people, processes, and technology to help optimize your business processes.

Project and Portfolio Management on Demand

• Increase alignment of investments with business strategy

• Accurately forecast and budget funds

• Accelerate delivery of your initiatives

• Improve agility to re-assign people and funds when priorities change

• Minimize manual efforts to gain business intelligence on initiatives and program practices

CA Agile Vision™: The integrated solution provides an accurate picture of all your deliverables, costs, and resources across your Agile and non-Agile projects—helping you save money, meet requirements faster, and work smarter.

CA Access Control: It is designed to provide a comprehensive solution for privileged user management, protecting servers, applications, and devices across platforms and operating systems.

It is designed to streamline policy deployment and management by helping administrators to construct logical policy sets and deployment rules.

CA Access Control is designed to support a wide range of virtualization platforms, including VMware ESX, Solaris 10 Zones and LDOMs, Microsoft Hyper-V, IBM VIO and AIX LPAR, HP-UX vPar, Linux Xen, and mainframe z/VM, providing for more consistent security management of access control risks across these virtual partitions.

CA Access Control is designed to audit all activity performed and track the activity based on the original user.

CA Enterprise Log Manager:

• User activity compliance reporting: Provides predefined and customizable reports mapped to common security auditing guidelines and compliance regulations (such as PCI DSS, SOX, HIPAA, FISMA, and NERC) that can be emailed and run on a schedule or on demand.

• User activity investigation: Delivers visual log analysis tools with drill-down capabilities that expedite the investigation of user and resource activities and the identification of policy violations.

• Automatic compliance report updates: Provides regular, automatic content and program updates, including new compliance reports, new queries, product integrations, release upgrades, and more.

• CA Identity and Access Management (IAM), Spectrum, and Mainframe integrations: CA Enterprise Log Manager helps extend the capabilities of leading CA solutions such as CA IAM (e.g., CA Access Control, RCM, DLP, and more), CA Spectrum, and mainframe systems by delivering integrated user and resource access activity reporting for these solutions.

• Cloud and virtualization support: Get granular user activity reports for virtualization hosts and guests, as well as for both private and public cloud environments.

The ability to report on and investigate what users, especially those with escalated privileges, do in cloud environments is essential. The user activity and compliance reporting solution must support commonly used virtualization platforms such as VMware, Citrix, Microsoft, and Cisco. Collecting user activity logs across the virtual environment, including virtual servers, network, storage, and management systems, is very important.
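At its core, privileged-user activity reporting is a filter-and-group pass over the collected event stream. The sketch below (the event fields, usernames, and source names are all invented for illustration) shows the idea:

```python
def privileged_activity(events, privileged_users):
    """Pull out what privileged users did, grouped per user,
    across logs collected from hosts, guests, and management systems."""
    report = {}
    for event in events:
        user = event["user"]
        if user in privileged_users:
            report.setdefault(user, []).append(
                (event["source"], event["action"]))
    return report

# Hypothetical event stream spanning a hypervisor host and a guest VM.
events = [
    {"user": "root",  "source": "esx-host-1", "action": "vm.delete"},
    {"user": "alice", "source": "guest-vm-7", "action": "login"},
    {"user": "root",  "source": "guest-vm-7", "action": "config.edit"},
]
print(privileged_activity(events, privileged_users={"root"}))
```

A real log manager adds normalization of the many source formats and mapping of the grouped activity onto compliance report templates, but the grouping step is the same.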

CA Federation Manager: It provides standards-based identity federation capabilities that enable the users of one organization to easily and securely access the data and applications of other organizations and cloud services.

CA Identity Manager: CA Identity Manager helps organizations automate identity-related processes throughout the enterprise—from the mainframe to the cloud, across employees, contractors, partners, and customers.

• Extending the enterprise up to the cloud (IAM up to the cloud)

• IAM to secure cloud service providers (IAM inside the cloud)

• IAM services delivered down from the cloud (IAM down from the cloud)

It provides identity administration, provisioning/deprovisioning, user self-service, and compliance auditing and reporting. It helps you establish consistent identity security policies, simplify compliance, and automate key identity management processes across multiple independent tenants.
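The provisioning/deprovisioning automation described above can be sketched as an event-driven joiner/leaver process. The event format and in-memory account store below are hypothetical, standing in for whatever connectors a real identity manager would use:

```python
def apply_identity_event(event, accounts):
    """Joiner/leaver automation: provision accounts on hire,
    deprovision everything on termination, and log for audit."""
    audit = []
    user = event["user"]
    if event["type"] == "hire":
        for system in event["entitlements"]:
            accounts.setdefault(system, set()).add(user)
            audit.append(("provision", user, system))
    elif event["type"] == "terminate":
        for system, members in accounts.items():
            if user in members:
                members.remove(user)
                audit.append(("deprovision", user, system))
    return audit

accounts = {}
apply_identity_event(
    {"type": "hire", "user": "jdoe", "entitlements": ["mail", "vpn"]},
    accounts)
trail = apply_identity_event({"type": "terminate", "user": "jdoe"}, accounts)
print(trail)  # every access removed, with an audit trail
```

The audit trail this produces is what compliance reporting then consumes: every grant and revocation is traceable to the originating identity event.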

CA SiteMinder®: CA SiteMinder provides a centralized security management foundation that enables the secure use of the web to deliver applications and cloud services to customers, partners, and employees.

CA SOA Security Manager: It provides the industry’s most comprehensive SOA/Web services security platform, delivering both identity-based Web services security—authentication, authorization, and audit (AAA)—and content-based, XML threat-centric security in a single integrated solution.
Service Assurance
CA Wily Customer Experience Manager™: Whether your business-critical applications are internal or external, inside the firewall or in the cloud, the CA Wily Application Performance Management solution — CA Wily Customer Experience Manager™ and CA Wily Introscope® — helps you deliver the online service performance that your end-users expect.

CA NetQoS SuperAgent: CA Service Assurance solutions uniquely link real end-user experience, transactions, and applications with the underlying systems and network infrastructure supporting them so you can understand the real-time performance, risk, and quality of business services across your physical, virtual, and cloud environments.

CA Infrastructure Management: It provides visibility into, and control over, the performance and availability of the service delivery infrastructure and the traffic it delivers. CA Infrastructure Management proactively manages voice and data networks, physical and virtual servers, databases, and client-server applications.

CA Spectrum® Infrastructure Manager: CA Spectrum Infrastructure Manager helps enterprises, government agencies and service providers avoid the risk of business interruptions and cost of business-service failures by integrating service, fault and configuration management into a single tool to provide better IT service at a lower cost.

The new release of CA Spectrum enables IT to leverage its existing investment in infrastructure management for both physical and virtual environments.
Service Automation

CA Automation Suite

CA Automation Suite for Hybrid Clouds : CA Automation Suite for Hybrid Clouds helps you dynamically deploy and elastically scale IT computing resources by providing a single interface to build and manage internal private clouds that integrate access to supplemental public cloud resources.

CA Server Automation: Designed to enable consistent deployment of applications and services across physical, virtual and public cloud environments with a comprehensive range of provisioning, patching and deployment methods.

CA Virtual Automation: Helps realize the promise of cloud computing with user-enabling automation aligned with key business applications and services.

CA Process Automation: Designed to automate, integrate and orchestrate operational processes across platforms, applications and IT groups to improve business service.

CA Service Catalog: Helps IT communicate value, set expectations, improve customer relationships, reduce costs, gain efficiencies, and make better decisions with financial insight.

3Tera AppLogic: It provides you with a fast track to private cloud with service assembly, dynamic provisioning and scaling, self-service, and resource metering, all in a single environment.

CA Workload Automation: It is designed to enable IT organizations to reduce costs in scheduling and managing workloads and help derive significant business benefits from offloading workload processing to cloud computing environments on-demand or through cloud bursts.

CA Workload Automation’s cloud-aware workload automation engine is designed to respond intelligently to business needs and events by managing and processing workloads in physical and virtual resource pools, regardless of the underlying platforms, thereby optimizing infrastructure, saving costs, and improving operational agility.

CA Server Automation: It is an integrated solution that automates provisioning, patching, and configuration of operating system and application components across physical, virtual and public cloud systems.

It automates provisioning of software stacks (OS and applications) and Cisco UCS platforms, as defined by service profiles, to provide robust cloud services.

CA Virtual Automation: It is also part of the CA Automation Suite for Data Centers, a modular approach to automation which also includes CA Server Automation, CA Process Automation, and CA Configuration Automation. The suite is designed to help you automate, orchestrate, integrate, and standardize critical data center operation activities across physical, virtual, and cloud environments.

It is designed to provide VM administrators and IT operations with an automation solution for building and managing a private cloud infrastructure.

CA Virtual Configuration: It is a stand-alone virtual environment configuration management product for the configuration of heterogeneous virtual machines and infrastructure. Virtual Configuration’s discovery capability is designed to identify server and application dependencies, and inventories that data for configuration baselining and validating configurations, as well as detecting and remediating configuration drift. Its virtualization dashboards facilitate change tracking and review, checking compliance for audits and then reporting the environment’s status.
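Configuration drift detection, as described for CA Virtual Configuration, amounts to comparing a discovered configuration against its baseline. A minimal sketch, with invented setting names and values:

```python
def detect_drift(baseline, current):
    """Compare a VM's current configuration against its baseline
    and report each drifted or missing setting."""
    drift = []
    for key, expected in baseline.items():
        actual = current.get(key)
        if actual != expected:
            drift.append((key, expected, actual))
    return drift

# Hypothetical baseline and discovered state for one virtual machine.
baseline = {"ntp": "pool.example.org", "ssh_root_login": "no", "vcpus": 2}
current  = {"ntp": "pool.example.org", "ssh_root_login": "yes"}
for key, expected, actual in detect_drift(baseline, current):
    print(f"{key}: expected {expected!r}, found {actual!r}")
```

Remediation then becomes re-applying the expected value for each reported drift, or simply re-provisioning the machine from the baseline outright.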
Service Management
CA CMDB: It improves business alignment by providing disparate groups across IT with a shared, essential view of how IT components are configured together to support your business-critical applications and services, spanning hardware and software, physical and virtual, mainframe and distributed.

CA On Demand Portal: A new easy-to-use Web interface offers deployment, metering, billing, and centralized user management, providing compelling technical and business advantages over home-grown or third-party solutions.

CA Service Desk Manager: CA IT Process Automation Manager is a robust, enterprise-class runbook automation tool providing repeatable and extensible automation that maximizes business efficiencies across departments and integrates and optimizes IT operations across physical, virtual, and cloud environments.


CA Oblicore Guarantee™: It is designed to automate, activate and accelerate the management, monitoring, and reporting of service level agreements (SLAs) and service delivery for enterprises and service providers.
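SLA reporting of the kind described here ultimately reduces to measuring attainment against a target. A toy example, with invented response-time samples and a hypothetical 500 ms target:

```python
def sla_attainment(samples, threshold_ms):
    """Percentage of response-time samples meeting the SLA target."""
    met = sum(1 for s in samples if s <= threshold_ms)
    return 100.0 * met / len(samples)

# Hypothetical response times (ms) collected over a reporting period.
samples = [120, 340, 95, 510, 180, 220, 90, 1050, 160, 130]
pct = sla_attainment(samples, threshold_ms=500)
print(f"{pct:.1f}% of requests within SLA")  # 2 of 10 samples breached
```

A production SLA manager adds measurement windows, per-service targets, and contractual penalty logic, but this ratio is the number that ends up on the report.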

CA Service Catalog: It automates your service requests. Process orchestration enables the automation of complex compound or bundled services such as employee onboarding activities that include touchpoints to multiple functional groups or external vendors.
Virtualization Management
CA ARCserve® Backup

CA ARCserve® Replication and High Availability

CA Virtual Assurance for Infrastructure Managers: It is an add-on product to CA’s Enterprise Managers supporting large production virtualization environments, enhancing operational visibility and control, and driving agility. It delivers a greater ROI by providing centralized, heterogeneous physical and virtual systems management and expanding support into networks, databases, and applications.

IT Governance and IT Management Reference Model

Before discussing the IT Governance and IT Management Reference Model, let us first discuss governance and management in general.

Governance is about vision, and the translation of vision into policy.

Governance can be said to be representing the owners, or the interest group of people, who represent a firm, company or any institution. The governing body, on the other hand, appoints management personnel.

Good governance is outcome and value focused. It helps an enterprise realize its goals and reap business benefits. It also helps to mitigate risk and improve team effectiveness by enabling effective measurement and control and promoting good communication.

Good governance does not consist of a set of shackles and controls that stifle creativity. Although it is based on repeatable measures, good governance should provide a context for guiding entrepreneurialism, quality achievement, and efficient execution. To be accepted by practitioners, governance measures must have demonstrable value.

The governing functions are those that provide the essential direction, resources and structure needed to meet specific needs in the community which include:

• Strategic direction: setting a direction for the organization that reflects community needs

• Resource development: developing financial resources that support program activities

• Financial accountability: managing financial resources to ensure honesty and cost-effectiveness

• Leadership development: developing the human resources that lead the organization today and in the future

Enterprise governance is the set of responsibilities and practices exercised by the board and executive management with the goals of providing strategic direction, ensuring that objectives are achieved, ascertaining that risks are managed appropriately and verifying that the enterprise’s resources are used responsibly.

While governance developments have primarily been driven by the need for transparency of enterprise risks and the protection of shareholder value, the pervasive use of technology has created a critical dependency on IT that calls for a specific focus on IT governance.

IT governance is the responsibility of the board and executive management, and it should be an integral part of enterprise governance. IT governance comprises a set of formal and informal rules and practices that determine how IT decisions are made, how empowerment is exercised, and how IT decision makers are held accountable for serving the corporate interest.

An IT governance program operationalizes mechanisms — in the form of decision-making structures, principles, policies, standards, and procedures — to make sure that transparent and well-informed decisions are rendered and the appropriate action taken.
Proposed Principles for the IT Governance Model

• Simple
  – Simple to understand and explain
  – Easy to maintain

• Participative and inclusive
  – Stakeholders must be part of the decision process
  – All parties concerned should be given the opportunity to provide input and feedback
  – All departments and agencies must be informed of the decisions made

• Formal
  – Governance, with its roles, responsibilities, and structures, is recognized and supported
  – The decision process follows a known process that ensures appropriate consultation and engagement from all stakeholders
  – Decisions made are universal and are inherited

• Flexible
  – The governance structure should be able to accommodate new directions, new decision areas, and new stakeholders

• Acting as One
  – The model should support the alignment of government-wide and departmental decisions
  – Departments and agencies will implement a governance structure for IT that is in line with the Treasury Board Secretariat (CIOB) to facilitate communication and alignment

IT Governance Reference Model

IT governance is the responsibility of the board of directors and executive management. It is an integral part of enterprise governance and consists of the leadership and organizational structures and processes that ensure that the enterprise’s IT sustains and extends the enterprise’s strategies and objectives.

IT governance can be seen as a structure of relationships and processes to direct and control the enterprise use of IT to achieve the enterprise’s goals by adding value while balancing risk vs. return over IT and its processes.

IT governance provides the structure that links IT processes, IT resources, and information to enterprise strategies and objectives. Furthermore, IT governance integrates and institutionalizes best practices of planning and organizing, acquiring and implementing, delivering and supporting, and monitoring and evaluating IT performance to ensure that the enterprise’s information and related technology support its business objectives. IT governance thus enables the enterprise to take full advantage of its information, thereby maximizing benefits, capitalizing on opportunities, and gaining competitive advantage. IT governance also identifies control weaknesses and assures the efficient and effective implementation of measurable improvements.

Information technology, in its turn, can influence strategic opportunities as outlined by the enterprise and can provide critical input to strategic plans. In this way, IT governance enables the enterprise to take full advantage of its information and can be seen as a driver for corporate governance.

Because every organization is unique, companies will differ in how they nurture an environment that encourages beneficial behavior in the use of IT.

Therefore, IT governance cannot be implemented according to a one-size-fits-all pattern but instead must be carefully architected based on an organization’s profile. For an IT governance program to be effective, it needs to be symbiotic with the prevailing culture and carefully interwoven into the organization’s operational structure.

Governance mechanisms, structures, relationships, and processes must be synergistically fused with the organization if IT governance is to be successful.

IT Governance

1.      Business drivers: A principal objective of IT governance is to ensure that the tactical direction of IT aligns with the company’s tactical business goals. Business drivers are the attributes of the business functions necessary to sustain the company’s strategic business needs, and they outline the IT governance framework.

2.      Guiding Principles: Guiding principles encapsulate the organization’s beliefs and philosophies and are enacted by controls in the form of policies, standards, and procedures that guide how decisions will be driven in both the business and IT organizations and at every level of the enterprise (i.e., strategic, tactical, or operational).

Every organization has a unique “personality profile” that reflects three interrelated dimensions:

• Culture: the manner in which a company characterizes itself; the company’s unique identity

• Business model: how the organization creates value for its customers

• Operating environment: the means by which value can be realized to sustain the business model

3.      Accountability framework: Central to IT governance are the notions of authority, empowerment, and accountability. An accountability framework includes clear assignment of roles and responsibilities for decision making.

4.      Decision model: A decision-making model helps ensure that IT decisions are logical, consistent with the corporate direction, and aligned with the overall business strategies. The decision model ensures clarity of, and accountability for, desired outcomes. Decision authorities are individuals or bodies (e.g., committees or boards) that are empowered to make and ratify decisions regarding the use of IT.

• IT governance frameworks embrace sound industry practices and are a blend of collective intelligence derived from a community of experts.

• Industry frameworks of best practices prove very useful enablers by providing the foundation of a governance program. For an IT governance program to be effective, however, it must be tailored and architected to shadow an organization’s “personality.”

• Each of the leading practice frameworks exhibits relative merits and strengths. Each tends to have been designed to serve a specific aspect of IT, and this shapes its construct and content.

• IT governance frameworks are not necessarily mutually exclusive. Components of different frameworks can coexist and complement each other. This federated approach can be particularly attractive when remediation efforts point to necessary improvements in diverse areas of governance, for example, business-technology alignment and vendor service-level management. In these instances, COBIT and ITIL can be synthesized into a unified framework.

• IT governance cannot be approached in a haphazard manner. There is no vanilla procedure that will magically embed IT governance into an organization. While not prescriptive, IT governance is top-down and principles-based. To be successful, it requires structured, systematic thinking and an understanding of an organization’s personality traits. It further requires ownership and sponsorship at the senior management/executive level. It is essential that business and IT senior and operational management create awareness of, and involvement in, the IT governance initiative.

• For IT governance to be successful, it should be a workable solution able to deal with the challenges and pitfalls presented by IT. It should not only prevent problems but also enable competitive advantage. IT risks are closely related to business risks, because IT is the enabler for most business strategies. The management and control of IT should, therefore, be a shared responsibility between the business and the IT functions, with the full support and direction of the board. IT governance provides the oversight and monitoring of these activities within a wider enterprise governance scheme.

IT Governance & IT Management
Management is about making the decisions needed to implement policy. While governance pertains to an organization’s vision and the translation of that vision into policy, management is about making the decisions that put those policies into practice.

Governance and Management

The management functions are those that provide the program activities and support to accomplish the goals of the organization. These usually include:

• Program planning and implementation: taking the strategic direction to the next level of detail and putting it into action

• Administration: ensuring the effective management of the details behind programs

IT Governance and IT Management

IT management is focused on the effective and efficient internal supply of IT services and products and the management of present IT operations.

IT governance, in turn, is much broader and concentrates on performing and transforming IT to meet present and future demands of the business (internal focus) and business customers (external focus).

Management comes second only to the governing body and is bound to act in accordance with the governing body’s wishes.

IT governance is about deciding and prioritizing what things to do, while management is about how to do them in an optimal manner.

Therefore, good IT management disciplines are a corollary of good IT governance.

IT Governance and IT Management


· Strategy alignment ensures that IT supports the demand for the products and services offered by the organization, which translates into a coherent business.

· Value delivery and management ensures that IT acquires, provisions, and deploys technology solutions on a timely, cost-effective, and high-quality basis to meet the needs of the business, and that organizations maximize value by optimizing the benefits of investments throughout their economic lifecycle within defined risk tolerance thresholds.

· Risk management ensures the practices of risk identification, quantification (likelihood and impact), and mitigation are effectively deployed across the organization. The influence of risk management permeates all aspects of the reference model.

· Resource management ensures optimal use and allocation of IT resources and capabilities in servicing the needs of the enterprise, maximizing the efficiency of these assets, and minimizing their costs.

· Performance management ensures that the performance and quality of IT services are adequately defined, monitored, and measured.

IT governance is a life cycle that, for a specific objective, can be entered at any point but is best started from the point of aligned business and IT strategy. Then, the implementation will be focused on delivering the value that the strategy promises and addressing the risks that need to be managed. To support this implementation, management should manage its IT resources such that the enterprise is capable of delivering business results/value at an affordable cost with an acceptable level of risk. At regular intervals the strategy needs to be monitored and the results measured, reported and acted upon. The strategy should be re-evaluated and realigned as required.

VCE-VMware, Cisco, EMC VBlock


VCE, the VMware, Cisco, EMC Coalition, is a partnership among these three companies to promote and accelerate the movement to cloud computing and fully virtualized environments. The VCE acronym has also been interpreted as “virtualization changes everything”. The Vblock is the building block offered by VCE to make this move to virtualization easier and faster (discussed below). You should care about VCE because, given the resources these companies are devoting to the effort, their solutions may be in your datacenter just around the corner.

With Vblock™ Infrastructure Packages, the VCE coalition delivers the industry’s first completely integrated IT offering that combines best-in-class virtualization, networking, computing, storage, security, and management technologies with end-to-end vendor accountability. These integrated units of infrastructure enable rapid virtualization deployment, so customers quickly see a return on investment.

VCE coalition services combine Cisco networking and compute expertise, EMC information management and storage expertise, and VMware virtualization expertise. Together the coalition uses proven methodologies and best practices to:

· Define the scope of a customer’s initiative and build the business justification for the transformation to private cloud
· Collaboratively define strategy and develop an architecture that is right for your business
· Address governance, technology, and operational issues to plan an on-demand service model that eliminates unnecessary IT investment
· Design and deploy a highly scalable Vblock to achieve the benefits of pooled resources

IT is adopting a private cloud model that delivers IT as a service—internally, externally by a service provider, or in combination.

Vblock is the platform built from the most efficient virtualization solution, converged server and network platform, and enterprise storage.

Vblock is the only platform that blends cloud computing with:

· Integrated, consistent enterprise security and compliance
· Comprehensive business continuity and disaster recovery capabilities
· Performance, consumption, and mobility management
· Management uniformity and consistency of infrastructure elements
· A single services and support model
· IT agility and responsiveness to changing priorities, for faster deployment and scalability

VCE-VMware, Cisco, EMC VBlock


Vblock Infrastructure Packages offer varying storage capacities and processing/network performance, and support such incremental capabilities as enhanced security and business continuity.

Vblock Infrastructure Packages are jointly tested and supported to deliver the right performance and capacity at the right price. Customers can integrate existing OS, applications, databases, and infrastructure software, using any protocol.

The Vblock unified customer engagement supports the transformation of existing infrastructures into a pervasive virtualized environment using Vblock Infrastructure Packages. The customer engagement offerings provide integrated sales, services, and support.

VCE-VMware, Cisco, EMC VBlock


The three Vblock Infrastructure Packages are:
· Vblock 2: Designed for massive sizes and numbers of virtual machines in a compact footprint, supporting high-end configurations that are completely extensible to meet the most demanding IT needs. Vblock 2 consists of the Cisco Unified Computing System, EMC Symmetrix VMAX storage, and VMware vSphere 4.
· Vblock 1: Designed for large sizes and numbers of virtual machines in a compact footprint, supporting midsized configurations that deliver a broad range of IT capabilities to organizations of all sizes. Vblock 1 consists of the Cisco Unified Computing System, EMC CLARiiON CX4 or EMC Celerra unified storage, and VMware vSphere 4.
· Vblock 0: Designed for moderate sizes and numbers of virtual machines in a compact footprint—ideal for a test and development platform or for remote data centers. Vblock 0 consists of the Cisco Unified Computing System, EMC Celerra unified storage, and VMware vSphere 4.

VCE-VMware, Cisco, EMC VBlock


Also note that these components MUST be configured exactly this way; otherwise it is no longer a supported Vblock configuration.

1. Min 4x UCS 5100 Blade Chassis with Max of 8x UCS 5100 Blade Chassis

2. 32-64 UCS B Blade Servers

3. Nexus 1000V

4. EMC Symmetrix V-Max Storage unit

5. 96–146 TB Storage capacity

6. EFD, FC and SATA drives are supported

7. Cisco MDS 9506

8. VMware vSphere 4 with Enterprise Plus Licenses (Must be “Plus”)

9. EMC Navisphere and PowerPath/VE

10. Cisco UCS and Fabric Manager

The one item that is an optional “extra cost” is UIM (Unified Infrastructure Manager).
Vblock1 – Configuration

So what is this VBlock all about?

Vblocks are pre-engineered, tested and validated units of IT infrastructure that have a defined performance, capacity and availability SLA.

Currently there are two types of Vblocks, Vblock 1 and Vblock 2 (Vblock 0 is reportedly on its way soon). More on the Vblock 1 configuration:

1. 2x Cisco UCS 5100 Blade Chassis (Minimum config, max 4)
2. 16-32 Cisco UCS B-series blades
3. 960 GB Memory (Max 1920 GB)
4. Nexus 1000V
5. 6100 Series fabric interconnects
6. EMC Clariion CX4-480 Storage unit
7. 38TB Storage (64TB Max)
8. Cisco MDS 9506
9. VMware vSphere 4 with Enterprise Plus Licenses (Must be “Plus”)
10. EMC Navisphere and PowerPath/VE
11. Cisco UCS and Fabric Manager

The above configuration can support from 512 up to 4096 VMs. Yes, 4096 does sound like a lot, but there are VM sizing guidelines in the Vblock 1 configuration on the maximum memory and CPU per VM.

Below is a high level topology overview of the design:

vBlock – EMC defined the vBlock as “a pre-architected and pre-qualified environment for virtualization at scale: storage, fabric, compute, hypervisor, management and security.” I think of it more as a “datacenter in a box” with all the pieces coming from VMware, Cisco, and EMC. The vBlock is shipped to you ready to virtualize your infrastructure. Here is what it might look like:

Think of vBlock as the plug-and-play data-center solution: it consists of Cisco UCS/Nexus/MDS, EMC storage, and VMware virtualization, all within a pre-designed and pre-built rack solution – simply drop it in your data center, plug it in, and deploy virtual machines. This is not really any different from purchasing HP servers, NetApp storage, and VMware licenses separately. Ultimately it is the same solution, but now with a single SKU that partners will be able to sell.

VCE-VMware, Cisco, EMC VBlock


That’s a Vblock Type 1, and below is a Vblock Type 2.

Consider the following two scaling models:

* You can scale out a Vblock by adding Vblocks.   EMC Unified Infrastructure Manager integrates the management – so you don’t have “islands” from a management standpoint.   This is the sweet-spot of the “Type 1/Type 0” Vblock – whose minimum/maximums are more tightly bound.   If you look at the  logical diagram, you could take 3 of the Vblock type 1s, and have the effective scale of a Type 2.
* You can scale out a Vblock by starting smaller and scaling a Vblock out horizontally – and only when you hit a maximum, build another Vblock.   This is the sweet spot of the Type 2 Vblock.   If you look at the diagram – the “minimum” configuration of a Type 2 is actually the same scale (physically and logically) as a single Type 1.

VCE-VMware, Cisco, EMC VBlock


So – Question 1: Why are there two core types of Vblocks?  Answer: Because there are two fundamentally different scaling models for customers – and at different scales, the two scaling models have differing requirements and efficiencies.

The data center is now moving towards a “private cloud” model, which is a new model for delivering IT as a service, whether that service is provided internally (IT today), externally (service provider), or in combination.

The benefits of private clouds are capturing the collective imagination of the business in organizations of all sizes around the world. The realities of outdated technologies, rampant incremental approaches, and the absence of a compelling end-state architecture are impeding adoption by customers.

By harnessing the power of virtualization, private clouds place considerable business benefits within reach.

o        Business enablement—Increased business agility and responsiveness to changing priorities; speed of deployment and the ability to address the scale of global operations with business innovation

o        Service-based business models—Ability to operate IT as a service

o        Facilities optimization—Lower energy usage; better (less) use of data center real estate

o        IT budget savings—Efficient use of resources through consolidation and simplification

o        Reduction in complexity—Moving away from fragmented, “accidental architectures” to integrated, optimized technology that lowers risk, increases speed, and produces predictable outcomes

o        Flexibility—Ability of IT to gain responsiveness and scalability through federation to cloud service providers while maintaining enterprise-required policy and control

By analogy with Moore’s Law, enterprise IT doubles in complexity and total cost of ownership (TCO) every five years, and IT gets more pinched by the pressure points.

Enterprise IT solutions over the past 30 years have become

  • More costly to analyze and design,
  • Procure,
  • Customize,
  • Integrate,
  • Inter-operate,
  • Scale,
  • Service, and
  • Maintain.

This is due to the inherent complexity in each of these lifecycle stages of the various solutions.

Within the last decade, we have seen the rise of diverse inter-networks—variously called “fabrics,” “grids,” and, generically, the “cloud”—constructed on commodity hardware, heavily yet selectively service-oriented with a scale of virtualized power never before contemplated, housed in massive data centers on- and off-premises.

What Constitutes Vblock Infrastructure Packages?

 Vblock Infrastructure Packages are pre-engineered, tested, and validated units of IT infrastructure that have a defined performance, capacity, and availability Service Level Agreement (SLA).

The goal is to deliver IT infrastructure in a new way and accelerate organizations’ migration to private clouds.

Removing choice is part of that simplification process. To that end, decisions about the current form factors may limit the scope to customize or remove components.

For example, substituting components is not permitted as it breaks the tested and validated principle.

While Vblock Infrastructure Packages are tightly defined to meet specific performance and availability bounds, their value lies in a combination of efficiency, control, and choice.

Another guiding principle of Vblock Infrastructure Packages is expandability: capacity can be increased because the architecture is very flexible and extensible.

Vblock Infrastructure Packages—A New Way of Delivering IT to Business

Vblock Infrastructure Packages accelerate infrastructure virtualization and private cloud adoption:

  • Production-ready
    • Integrated and tested units of virtualized infrastructure
    • Best-of-breed virtualization, network, compute, storage, security, and management products
  • SLA-driven
    • Predictable performance and operational characteristics
  • Reduced risk and compliance
    • Tested and validated solution with unified support and end-to-end vendor accountability

Customer benefits include:

  • Simplifies expansion and scaling
  • Add storage or compute capacity as required
  • Can connect to existing LAN switching infrastructure
  • Graceful, non-disruptive expansion
  • Self-contained SAN environment with known standardized platform and processes
  • Enables introduction of Fibre Channel over IP (FCIP), Storage Media Encryption (SME), and so on, later for Multi-pod
  • Enables scaling to multi-Vblock Infrastructure Packages and multi-data center architectures
  • Multi-tenant administration, role-based security, and strong user authentication
VCE-VMware, Cisco, EMC VBlock


Vblock 2 Components

Vblock 2 is a high-end configuration that is extensible to meet the most demanding IT requirements of large enterprises or service providers. By delivering high-performance and large-scale virtualization, Vblock 2 can support a substantial number of virtual machines in a compact footprint.

Vblock 1 Components


Cloud Computing Chargeback Models


Before examining cloud computing chargeback models, let’s first define chargeback. Chargeback and metering refer to the ability of an IT organization to track and measure IT expenses per business unit and to charge them back accordingly. Depending on one’s role, chargeback has different dimensions.

Chargeback in Traditional IT Environment


The IT department provides services to internal departments; it may also provide services to external customers as a service provider.

The idea of chargeback in the IT industry developed in the mainframe era. Mainframes were very expensive, and buying one was out of reach for small to medium-sized businesses. So the businesses that owned these mainframes began providing computing services to smaller businesses to cope with the operational costs and to better utilize these expensive machines.

The central idea is that computing resources and services are metered like electricity, so customers pay only for what they use. Internally, enterprises can charge back business units or at least use “showback” to educate managers about the costs of computing and strategic expenditures.

Traditionally, organizations funded server (and storage) acquisition as part of the new project process.

But virtualization breaks this model: it is a method of making a physical entity act as multiple, independent logical entities. Chargeback therefore needs to consider different dimensions.

In designing an accounting mechanism to support new technologies, two factors must be determined: 1) the resource metrics on which chargeback will be based, and 2) how to account for the excess capacity required to support a dynamic, shared usage model.

As newer technologies like de-duplication become widely adopted, they raise issues such as whether to charge by logical (virtual) gigabyte or by physical de-duplicated gigabyte, and how to predict or plan for either.

Accurate chargeback is instrumental in showing business units the direct benefits of virtualization.
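The second design question above—accounting for excess capacity—can be sketched as a simple rate calculation that folds idle headroom into the per-unit rate. The metric names and figures below are illustrative assumptions, not prescribed values:

```python
# Illustrative sketch: amortize the excess (idle) capacity that a dynamic,
# shared usage model requires into the per-unit chargeback rate, so that
# metered usage still recovers the full cost of the pool.
def unit_rate(pool_cost, total_capacity, expected_utilization):
    """Cost per capacity unit; expected_utilization is the fraction of
    total capacity you expect to be billable (0 < value <= 1)."""
    billable_capacity = total_capacity * expected_utilization
    return pool_cost / billable_capacity

# e.g. a $120,000/year pool of 100 capacity units, expected 75% utilized:
rate = unit_rate(120_000, 100, 0.75)  # $1,600 per unit-year
```

Note how the rate rises as expected utilization falls: the cost of the headroom is spread across the units that are actually consumed.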

Chargeback Policies


An IT chargeback system is a method of accounting for technology-related expenses that applies the costs of services, hardware or software to the business unit in which they are used. IT chargeback systems are sometimes called “responsibility accounting” because this sort of accounting demonstrates which departments or individuals are responsible for significant expenses.

Reporting systems that leverage IT chargeback provide end users with more transparency into which business decisions are creating expenses and help management identify how to achieve greater efficiency.

In the traditional chargeback model, an IT department might divide its budget for services by the total number of business units it serves. In the cloud, that scenario gets even more complex, because IT needs to consider the rate and time of consumption.
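The contrast between the two models can be sketched in a few lines; the budget figures and rates here are purely illustrative assumptions:

```python
# Traditional model: divide the IT services budget evenly across the
# business units, regardless of what each unit actually consumes.
def flat_allocation(it_budget, num_business_units):
    return it_budget / num_business_units

# Cloud model: meter the rate and time of consumption per unit.
def metered_charge(usage_hours, hourly_rate):
    return usage_hours * hourly_rate

flat = flat_allocation(1_200_000, 12)    # every unit pays $100,000
light = metered_charge(200, 0.50)        # a light consumer pays $100
heavy = metered_charge(40_000, 0.50)     # a heavy consumer pays $20,000
```

Under the flat model the light and heavy consumers pay the same; under the metered model each unit's bill tracks its actual consumption.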

Chargeback Goals:

  • Provide Business Units with Information
  • Identify Services Required
  • Control or Influence Costs
  • Use Resources More Effectively
  • Improve Productivity

Architecture and Implementation Technology Stack
A few key steps can help you implement this model and resolve some chargeback chores:


  • Evaluate the needs of the business
  • Anticipate how the business units would benefit from a chargeback system
  • Determine whether business units are just seeking more control over IT processes, or whether divisions are trying to determine if projects are viable
  • Anticipate the cost to implement a chargeback model and the business’ willingness to pay for it
  • Evaluate software that will help with a chargeback program and help you implement the process

If you are proposing a move to a chargeback system, costs that may be included, starting with the most obvious, are:

  • Materials and external costs—all the out of pocket costs associated with the project
  • Creative and production, fully burdened labor
  • Account management/project management/traffic, fully burdened labor
  • Department management and administration
  • Hardware and software costs
  • Overhead (e.g., space and associated facility costs)
  • Training and team events

Chargeback Challenges
Virtualization throws a very large wrench into IT chargeback, as the connection between a virtual server and its physical home is not necessarily clear. Almost any hypervisor offers the ability to extract the data needed to do accurate chargeback of virtual machines. However, most platforms do not make this process easy. Typically, a third-party tool or business intelligence reporting structure will need to be implemented to resolve this thorny issue.

Factors considering IaaS PaaS and SaaS usage plans


In a chargeback program, it may be too easy for the business units to start seeing the data center as just another utility service—one that exists solely for each business unit. This means it may become difficult to analyze technology trends, forecast services for the company, or even train staff adequately, because no business unit is willing to pay for that work.
What will the customer base really use? Virtual CPU capacity, disk I/O, network I/O, or even memory loading? What if your costing model is based on just one or two capacities, but all your clients end up using more of one of the others you did not base your cost model on?

Even worse, after you figure out a costing factor for virtual CPU, network, disk, and memory:

  • Did you really pre-provision enough of each?

Decide what you need and when you need it with reference to infrastructure, and pre-provision at least what you can for the first year; this is a sunk cost.

  • What about the time factor?
  • Network connectivity must be preconfigured—who gets that fixed cost per month until it is fully loaded?
  • Shared storage has the same issue, but network and disk resources must be available before the virtual host comes online.

Decide how or what you will charge against—again, by virtual CPU, disk, memory, and/or network resources—and include all costs for the defined period, including site costs and infrastructure costs.

Organizations first need to determine the capacity and power rating for each box in their data center. A unit-based measurement system provides a clearer picture of performance, so that consolidation plans can be adjusted and adapted as the technology is implemented.

The benefits of measurement extend beyond a consolidation project to performance management. Detailed measures like cost per CPU second enable organizations to see how other changes in the environment impact efficiency and cost. While these systems require an investment of time and, sometimes, the help of a third party, the ability to track and measure the impact of significant changes in the data center is invaluable. A key performance indicator (KPI) such as hardware cost per CPU second is the best way we have to indicate the unit cost of computing value delivered.
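The cost-per-CPU-second KPI can be sketched with illustrative numbers (the hardware cost and utilization figures below are assumptions for the example only):

```python
# KPI sketch: hardware cost per CPU second of computing value delivered.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def cost_per_cpu_second(annual_hw_cost, cores, avg_utilization):
    """Annual hardware cost divided by the CPU seconds actually delivered."""
    cpu_seconds = cores * SECONDS_PER_YEAR * avg_utilization
    return annual_hw_cost / cpu_seconds

# e.g. an $8,000/year server with 4 cores averaging 60% utilization:
kpi = cost_per_cpu_second(8_000, 4, 0.60)  # roughly $0.0001 per CPU second
```

Tracking this number over time shows whether consolidation or other environment changes are actually lowering the unit cost of computing.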

Following are the important points related to chargeback in the context of virtualization.

  • Incentives are as important as cost recovery: Chargeback methods also create incentives for the service provider. For example, a “one size fits all” VM price that assumes a certain average capacity will discourage the use of “large” VMs.
  • Separate infrastructure costs from services costs: Most successful chargeback methods do not co-mingle the cost of data center capacity (CPU, Memory, Storage, and Facilities) with the labor and tools to manage individual application and OS instances. E.g. the difference between 10 VMs with 1 GB RAM each and 1 VM with 10 GB of RAM. Each has a total of 10 GB of RAM capacity, but 10 VMs are usually much more work to manage than a single, large VM.
  • Unit costs change over time: Shared infrastructure is a combination of fixed costs and variable costs. As the environment grows, the proportion of fixed costs as a percentage of the whole will decrease. In a virtual environment, there is an entry cost that typically requires two or three hosts plus access to shared storage and network fabric, and incremental capacity is less expensive for each additional VM.

Chargeback methods need to account for the effect of decreasing marginal costs. The options are to (1) revise the model over time, (2) assume a long-term average steady-state environment, or (3) ignore the growth effect in the cost model, allowing cost reductions to improve margins over time.

  • Use tiered pricing: Chargeback for virtual infrastructure can be tiered on several dimensions. One dimension is on capacity: assuming the “slice” based model is used, then the capacity tiers are either carved up into some form of small/medium/large slice, or as slices measured in discrete units (e.g. a slice defined as 1 GHz CPU and 1 GB RAM, and a larger VM requiring 4 GB of RAM is counted as four slices).
  • Bundling is a necessity: In traditional IT environments, it is simple enough for the service provider to pass on the cost of network ports, storage, server hardware, power, and floor space. Virtualization abstracts the applications and operating systems from the physical infrastructure and moves everything into a shared infrastructure. Sending the customer a bill that shows usage of network ports, power (kWh), and cooling (BTU) is not only wrong but almost impossible to calculate properly (e.g., they could be using fractions of network ports and portions of de-duplicated storage). When it comes to infrastructure costs, it is best to roll up the core infrastructure (floor space, power, cooling, network connectivity, etc.) into a bundled infrastructure rate, since usage of those elements tends to be correlated with actual compute usage in a virtualized world.
  • Be prepared for a dynamic environment: Virtualization offers customers many more options for creating, cloning, powering on, and powering off of virtual systems.

With that in mind, chargeback in this more dynamic environment must take into account three major themes:

§ More self-service tools for IT infrastructure, with provisioning and management tasks automated

§ Capacity-based pricing must take into account capacity that is used on an on-demand basis

§ For non-automated functions, labor used for turn-on, turn-off, cloning, etc. should not be ignored (hence the need for some self-service automation such as VMware Lifecycle Manager when the business requires such a dynamic environment).
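The slice-based capacity tiering described earlier can be sketched as follows. The slice definition (1 GHz CPU, 1 GB RAM) comes from the example in the text; the per-slice rate is an illustrative assumption:

```python
import math

SLICE_CPU_GHZ = 1.0  # slice definition from the example: 1 GHz CPU
SLICE_RAM_GB = 1.0   # ...and 1 GB RAM

def slices_for_vm(cpu_ghz, ram_gb):
    """A VM is billed as enough slices to cover its larger dimension,
    so a 4 GB RAM VM counts as four slices even with modest CPU needs."""
    return max(math.ceil(cpu_ghz / SLICE_CPU_GHZ),
               math.ceil(ram_gb / SLICE_RAM_GB))

def monthly_charge(cpu_ghz, ram_gb, rate_per_slice):
    return slices_for_vm(cpu_ghz, ram_gb) * rate_per_slice

# A VM needing 2 GHz of CPU but 4 GB of RAM is billed as 4 slices:
charge = monthly_charge(2.0, 4.0, 25.0)  # 4 slices x $25 = $100
```

Billing by the larger dimension also addresses the incentive point above: large-memory VMs are no longer subsidized by a one-size-fits-all VM price.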

Functions, Features, Qualities
Chargeback is a way to put IT services in terms that businesspeople understand and value. When IT is bought and consumed like other services, IT can become a business within the business. And that is the path to true IT value.
Chargeback systems and procedures need to be user-oriented. There is a tendency to present billing information in terms of Number of I/Os, Processing Time or Memory Consumed. These units have little or no meaning to most users. A desired approach is to tie system resources to business entities. In this way, users see charges by items such as Customers Processed or Accounts Updated. These are areas in which users can communicate and control usage of their processing resources.

Key chargeback functions can be grouped into the following areas:

Data Collection Functions: For example, if a cost element is based on number of transactions, then the number of transactions executed each day will have to be collected.

Account Table Maintenance: There will most likely be some form of account table that describes the various users and departments and how system data that is collected is to be allocated across these entities. Periodic maintenance will need to be done on this table to ensure that charges are being allocated correctly based on the most current requirements of the business.

Rate Setting: Chargeback algorithms and rates may change over time due to increased costs for services or changes in user departments. These rates need to be fairly developed, agreed upon by key users and integrated into the current procedures and systems used to do chargeback.

Billing & Reporting: Chargeback bills and reports will have to be produced on a periodic basis.
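A minimal sketch tying these functions together—collected usage data, an account/rate table, and periodic billing. All department names, metrics, and rates are illustrative assumptions:

```python
# Data collection: usage metrics gathered per department for the period.
usage = {
    "web-team": {"transactions": 120_000, "gb_stored": 50},
    "finance":  {"transactions": 30_000,  "gb_stored": 200},
}

# Rate setting: agreed per-unit rates, revised periodically.
rates = {"transactions": 0.001, "gb_stored": 0.10}  # $ per unit

# Billing & reporting: apply the rate table to each department's usage.
def make_bill(usage, rates):
    return {dept: sum(qty * rates[metric] for metric, qty in metrics.items())
            for dept, metrics in usage.items()}

bill = make_bill(usage, rates)
# web-team: 120,000 x $0.001 + 50 x $0.10 = $125; finance: $30 + $20 = $50
```

In practice the account table would also map collected system data to departments and cost centers, and the rate table would be maintained through the rate-setting process above.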

Administrative functions

  • Budgeting Activities
  • Pricing Decisions
  • Usage Variance Analysis Requests
  • Business General Ledger & Accounting
  • Cost Allocation Decisions
  • Capacity Planning & Usage Trending Activities

Successful chargeback model

1. Organizational alignment.

2. Involvement of a business stakeholder, so that the business owns facets of the problem.

3. A set of clearly defined roles and deliverables to clarify who owns which components.

4. Definition of service level agreements on data, availability, etc.

Simple IT chargeback systems are little more than straight allocations of IT costs based on readily available information, such as user counts, application counts, or even subjective estimation. At a lesser degree of complexity, an organization trades some of the effectiveness of IT chargeback for a smaller burden, in terms of time and money required to perform the chargeback.

  • Chargeback Software Selection: Key requirements are to determine the charging algorithm flexibility of the product chosen, accuracy of collection and reporting and ease of maintenance and use.
  • Configuration versus Utilization Costing Strategies: When developing cost algorithms, a key decision is whether to charge based on utilization of resources (incremental charges, measuring use of cost resources) versus charges directly related to the costs of the configuration (distributing costs to user departments).
  • Fixed Versus Variable Cost Elements: It must be determined whether charging algorithms are to be based on fixed charges (Predictable usage) versus variable charges (Unpredictable usage). Reporting and billing will then have to show how rates have been applied and what amounts of the variable elements have been recorded and charged for.
  • Unused or Idle Resources: It should be determined how unused or idle resources will be covered by the chargeback system.
  • Overhead Resources: It should be determined how overhead resources will be accounted for. These resources tend to be used by all users (such as the Operating System). If these types of resources are to be charged to users, a fair allocation strategy needs to be determined.
  • Shared Resources: It should be determined how shared resources will be covered by the chargeback system.
  • Systems Software Customization: Some customization efforts for systems software may be needed to adequately measure and collect usage statistics.
  • Scope of Processing Costs: The scope of processing costs needs to be determined so that the true cost of delivering services can be understood and recovered. These categories include:

Equipment expenses: Rental, lease, maintenance or depreciation on equipment.

Communications expenses: Costs for lines, wiring, transmission facilities and services

Salary expenses: Costs for personnel used to support processing functions.

Occupancy and facilities expenses: Costs for building space and utilities

Supplies expenses: Costs for supplies, forms and other non-equipment materials

Other allocated expenses: Costs that have been allocated to operations from other activities within the business enterprise

  • Manual Services: Chargeback activities may need to include cost items for manual services such as data entry, tape mounts, etc.
  • Variable Charge Rates: It may be desirable to change charges depending on some criteria such as time of day, peak loads, priorities, etc.
  • Variance Analysis: It may be desirable to compare budgeted processing expenses with those actually incurred by each user department.
  • Charge Penalties: It may be desired to implement penalty costs for exceeding usage limits or processing certain types of functions at inconvenient times of the day.
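The last two points—variable charge rates and charge penalties—might be sketched like this; the peak window, rates, and usage limit are all illustrative assumptions:

```python
PEAK_HOURS = range(8, 18)        # assumed peak window: 08:00-18:00
PEAK_RATE = 0.20                 # $ per unit during peak (assumed)
OFF_PEAK_RATE = 0.08             # $ per unit off-peak (assumed)
USAGE_LIMIT = 1_000              # units per period before penalties apply
PENALTY_RATE = 0.05              # extra $ per unit over the limit

def charge(units, hour, units_already_used):
    """Price a batch of usage: time-of-day rate plus an overage penalty."""
    rate = PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE
    over = max(0, units_already_used + units - USAGE_LIMIT)
    return units * rate + over * PENALTY_RATE

# 100 units at 14:00 with 950 already used this period:
# 100 x $0.20 + 50 over-limit units x $0.05 = $22.50
cost = charge(100, 14, 950)
```

The rate differential itself is the incentive: it nudges users toward off-peak processing and discourages exceeding agreed usage limits.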

Charging back is not about creating a profit center or penalizing internal clients; it is about running your creative services department in the most efficient manner to support maximizing your company’s profits. Benefits of implementing a chargeback system include:

  • Increased awareness of how much it really costs to do those creative projects
  • Recognition of the value of an internal group compared to outside agencies—not just the cost, but things like brand knowledge and quick turnaround
  • Increased efficiencies by reducing those endless rounds of revision
  • Increased resource flexibility—the ability to add staff to meet demand if you are billing for their costs
  • Transition from a cost center to a value center
  • Redefinition of the group as a service provider
  • Replacement of lengthy customized Service Level Agreements (SLAs) with a service catalog
  • An improved costing strategy that provides the client with data for budgeting and gives your finance group the information they need to redefine their financial model
  • IT users know and understand the complications and charges for requesting an IT service.

Applications, Use Cases, Customer Case Studies
Different models, with different classes of service, can be used to drive more cost-efficient consumption of IT and to achieve more effective matching of service to business need. The basic methods for pricing IT value are described below.
No chargeback

The IT budget is a separate function approved as part of the organization’s planning process.
Advantages: A low-cost alternative.
Disadvantages: No accountability for demand; users do not necessarily understand the cost of the IT resources they are consuming.

Non-IT-based chargeback

IT costs are allocated to business units based on a non-IT allocation metric (e.g., % of revenue).
Advantages: A simple, low-cost approach to allocating IT costs.
Disadvantages: Cost allocations do not necessarily correlate to the cost of the service, and consumption/demand cannot be attributed to the business unit using the service.

IT-based chargeback

Uses IT measurements to allocate costs to user groups.
Advantages: Correlates to the cost of the service.
Disadvantages: IT measurements (e.g., operating system instances) are difficult and costly to implement. IT measurements can also be difficult for users to understand and relate to business activities.

Direct Chargeback

Allocates the specific costs for an entire service to a business unit.
Advantages: Easy to implement.
Disadvantages: Not conducive to shared environments that can reduce costs for an organization.

Profit-oriented pricing

Charges a fee for service, similar to an external service provider.
Advantages: IT competes on equal terms with external providers.
Disadvantages: Can lead to suboptimal decision making, as organizations may not invest internally to ensure long-term effectiveness.

Fixed Revenue

Fixed compensation for the services provided.
Advantages: The customer has greater predictability of Shared Service Center costs.
Disadvantages: Exposes the Shared Service Center to risks of cost/volume increases that are beyond its control.

Fixed Revenue within predefined range

Fixed compensation for the services provided, as long as resource utilization or transaction volumes stay within a predefined range of activity. If activity goes above or below the range, the price is adjusted accordingly.
Advantages: Favorable to the Shared Service Center if it has a cost advantage. Perceived to be fair to customers. Price is based on the volume of transactions (i.e., usage). Addresses the Shared Service Center’s exposure to uncontrollable cost/volume increases. The customer does not have to pay the fixed revenue if volume falls below the predefined range.
Disadvantages: The customer may perceive that cost savings or efficiency improvements are not passed on.

Subscription Pricing

Subscription pricing is a pay-per-use model. The operational cost of the IT facilities is calculated and amortized across a subscription period (for example, one year) and then divided between all the users of the service.
Advantages: Simple. If, for example, five lines of business were subscribing to a service that cost $60,000 per month to provide, the subscription charge (assuming a break-even business model) would be $60,000/5 = $12,000 per business unit per month.
Disadvantages: No usage monitoring or penalties. It assumes all parts of the business will use the service at the same level on a constant basis, with no penalties for excessive consumption or peak-time usage.
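The even split in the example above can be sketched as a small calculation (the $60,000 cost and five subscribers are the figures from the example; the break-even assumption means no margin is added):

```python
def subscription_charge(monthly_cost, subscribers):
    """Divide the operating cost of a service evenly across its
    subscribers (break-even model: no margin is added)."""
    return monthly_cost / subscribers

# Five lines of business share a service costing $60,000 per month:
charge = subscription_charge(60_000, 5)
print(charge)  # 12000.0 per business unit per month
```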

Peak-Level Pricing

A subscription model that adds a mechanism to monitor and record peak consumption. Consumers are billed according to their peak use, not according to their average use.
Advantages: Simple to meter, since only peak-level usage needs to be monitored and recorded. Easy to show when consumers are using more than the base-level resources.
Disadvantages: Penalizes variability. If there are just a few peaks of usage during a given period, the scheme can seem unfair. But shortening the analysis period—say from six months down to one—and the measurement intervals—from weekly to daily, for example—can solve the problem.
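The peak-versus-average distinction can be sketched in a few lines; the usage samples and the per-unit rate below are hypothetical:

```python
def peak_level_bill(usage_samples, rate_per_unit):
    """Bill a consumer on the highest recorded usage in the analysis
    period, regardless of the average level of use."""
    return max(usage_samples) * rate_per_unit

# Hourly usage samples for one consumer; a hypothetical rate of $2/unit.
samples = [10, 12, 35, 11, 9]
print(peak_level_bill(samples, 2))  # billed on the peak of 35, not the average
```

Note how a single spike (35) drives the whole bill, which is exactly the "penalizes variability" drawback the text describes.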

User-Based Pricing

Meters IT by the person rather than the machine.
Advantages: Easy to implement. Tracking the authentication of individual users to IT services is relatively simple, especially if a single sign-on system is in place. The authentication records provide the basis for cost justification.
Disadvantages: Ignores system load. If users make heavy demands on systems when they log on, this model shortchanges IT.

Ticket-Based Pricing

IT can meter and control usage using electronic “tickets” that carry a validity period (say, four hours).
Advantages: Consumption regulation—ticket-based pricing lets IT control system load to a fine degree, helping to eliminate usage peaks and ensure business continuity. Simple—all that is required to monitor ticket pricing is a low-latency (i.e., fast-responding) portal, most probably constructed as a Web service. Strongest cost justification. Pinpoint monitoring—tickets can be very specific, allowing both sides to monitor exact usage down to the specific application level. For example, network access could be offered under the ticket-based chargeback model at three price levels, each with varying degrees of bandwidth, service-level guarantees and peak-usage guarantees.
Disadvantages: Ticket hoarding. For the ticket-based model to operate effectively, it’s often necessary to implement “use-by” dates on tickets to avoid stockpiling.

Virtual Server Count-Based Pricing

Charges departments based on the number of VMs they have on a particular host server. If a host machine contains 10 VMs and 2 of them belong to department X, department X will be charged 20% of the server’s overall operating costs (plus any applicable software licenses).
Advantages: Easy to implement and simple to meter.
Disadvantages: It isn’t completely fair—not all VMs consume equal hardware resources.
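The proportional charge in the example works out as follows (the $5,000 host operating cost is a hypothetical figure; the 2-of-10 VM split is from the example):

```python
def vm_count_charge(dept_vms, total_vms, host_operating_cost):
    """Charge a department its share of a host's operating cost in
    proportion to the number of its VMs on that host."""
    return host_operating_cost * dept_vms / total_vms

# Department X owns 2 of the 10 VMs on a host costing $5,000/month:
print(vm_count_charge(2, 10, 5000))  # 20% of the host cost = 1000.0
```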

Resource Consumption-Based Pricing

It is more common to find resource-based chargeback used to determine equitable cost allocations for distributed systems. There are sets of metering records for the various operating systems, but metering data is also available to track and cost specialized environments such as Web servers, file servers and database servers. For example, one department may operate a virtual Web server that consumes sparse resources, while another department operates a SQL server that consumes nearly all of the host machine’s available CPU and memory resources. Because some VMs consume more hardware resources than others, some organizations have begun basing their IT chargeback on the number of virtual CPUs allocated to each particular virtual server.
Advantages: Assuming the chargeback method is driven down to the business unit level, there is some control over demand, and behavior can be affected by the “pay for what you use” principle. Perceived to be fair to customers. Price is based on usage of various resource factors.
Disadvantages: It is labor-intensive for IT to keep track of resources. Chargeback is less transparent, since it is derived from multiple resource factors.
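A charge derived from several metered resource factors can be sketched as a weighted sum; the resource names, rates and usage figures below are all hypothetical:

```python
def resource_consumption_charge(usage, rates):
    """Derive one charge from several metered resource factors
    (CPU-hours, GB of storage, GB of network transfer, ...).
    'usage' and 'rates' are dicts keyed by resource name."""
    return sum(usage[resource] * rates[resource] for resource in usage)

# Hypothetical unit rates and one department's metered usage for a month:
rates = {"cpu_hours": 0.05, "storage_gb": 0.10, "network_gb": 0.02}
usage = {"cpu_hours": 1000, "storage_gb": 500, "network_gb": 2000}
print(resource_consumption_charge(usage, rates))  # = 50 + 50 + 40
```

The multiple factors rolled into a single number illustrate why this model is less transparent to the customer than a flat per-VM rate.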

Static Capacity-Based Pricing

This is best implemented as “slices”: the customer pays for a set amount of capacity, regardless of its consumption. Slices can be aggregated (a resource pool) or granular (individual VMs).
Advantages: Relatively easy to measure and bill for the allocation of these slices. Customers are unlikely to be shocked by the bill at the end of the month.
Disadvantages: Chargeback is less transparent.

Hardware-Based Pricing

Departments pay for their own hardware. For example, an organization’s marketing department needs to deploy a SharePoint server. SharePoint would be installed within a virtual machine running on the server. The department would own the server and would be free to deploy additional VMs without incurring any additional hardware-related chargebacks until the server has been filled to capacity.
Advantages: Easy to implement and simple to meter.
Disadvantages: Guidelines need to be established as to what can be installed on a host server; for example, some organizations have security policies prohibiting Internet-facing VMs from being installed on the same host server as backend virtual servers. Departments may also be reluctant to blow their IT budgets on redundant hardware.

Flat-Rate Pricing

An IT chargeback model that works well in a virtual data center is to charge a flat rate for each server. Because some VMs consume more hardware resources than others, you could establish two different rates—basic and high capacity.
Advantages: It does not force IT to use complex and time-consuming methods for determining each department’s monthly bill. Makes it easy for the departments to stay on budget.
Disadvantages: Chargeback is less transparent.

Hybrid Model-Based Pricing

Hybrid models combine elements of the pricing models described above; two such approaches follow.
Transaction Ratio Funding

Transaction Ratio Funding starts with a fixed operating budget, which is then divided based on the ratio of transactions. If T1 hosts 20,000 transactions against the shared services and T2 hosts 30,000 transactions, then T1 would get 40% of the operating budget.
Advantages: Usage-based, transparent chargeback.
Disadvantages: Complex and time-consuming methods are needed to keep track of transactions.
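The ratio split in the example can be sketched as follows (the $100,000 operating budget is a hypothetical figure; the T1/T2 transaction counts are from the example):

```python
def transaction_ratio_allocation(budget, transactions):
    """Split a fixed operating budget between tenants in proportion to
    the transactions each hosts against the shared services."""
    total = sum(transactions.values())
    return {tenant: budget * count / total
            for tenant, count in transactions.items()}

# T1 hosts 20,000 transactions and T2 hosts 30,000:
shares = transaction_ratio_allocation(100_000, {"T1": 20_000, "T2": 30_000})
print(shares)  # T1 gets 40% of the budget, T2 gets 60%
```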

Activity Based Pricing

Activity-Based Costing (ABC) is excellent for shared-service chargeback, as it makes the cost allocation fair, transparent and predictable. It also provides the basis for a clear and easy-to-understand invoice.
Advantages: Helps shared service centers justify their costs. Helps business units understand what their usage of shared services costs. Provides shared service centers with better information for continuously improving internal efficiency and for benchmarking. Perceived to be fair to customers. Price is based on the volume of transactions (i.e., usage). To implement ABC well: establish meaningful activity centers (cost pools); ensure that the cost drivers selected have a strong positive correlation with the resource cost; charge for resources that are expensive (material); make sure data is easy and inexpensive to collect; provide sufficient detail to allow customers to influence their usage (cost); and track information by service (application) and customer (company, division, cost center, etc.).
Disadvantages: Revenues fluctuate according to demand.


Understanding Active Directory Federation Services

Active Directory Federation Services (ADFS) is based on the emerging, industry-supported Web Services Architecture, which is defined in WS-* specifications.

ADFS is a component in Microsoft® Windows Server™ 2003 R2 that provides Web single-sign-on (SSO) technologies to authenticate a user to multiple Web applications over the life of a single online session.

ADFS accomplishes this by securely sharing digital identity and entitlement rights, or “Claims,” across security and enterprise boundaries.

ADFS is not:

A database or repository for employee or customer identity data

An extension of the Active Directory™ directory service schema

A type of Windows domain or forest trust
Key features of ADFS

Federation and Web SSO: When an organization uses the Active Directory™ directory service, it currently experiences the benefit of SSO functionality through Windows-integrated authentication within the organization’s security or enterprise boundaries.

ADFS extends this functionality to Internet-facing applications, which enables customers, partners, and suppliers to have a similar, streamlined, Web SSO user experience when they access the organization’s Web-based applications.

Web Services (WS)-* interoperability: ADFS provides a federated identity management solution that interoperates with other security products that support the WS-* Web Services Architecture.

Extensible architecture: ADFS provides an extensible architecture that supports the Security Assertion Markup Language (SAML) token type and Kerberos authentication (in the Federated Web SSO with Forest Trust scenario).

Active Directory, Domain, Trust and Forest
Active Directory is a centralized and standardized system that automates network management of user data, security and distributed resources and enables interoperation with other directories. Active Directory is designed especially for distributed networking environments.

Windows Server 2003 Active Directory provides a single reference, called a directory service, to all the objects in a network, including users, groups, computers, printers, and shared files and folders.

Active Directory networks are organized using four types of divisions or container structures. These four divisions are forests, domains, organizational units and sites.

  • Forests: The collection of every object, its attributes and attribute syntax in the Active Directory.

Forests are not limited in geography or network topology. A single forest can contain numerous domains, each sharing a common schema. Domain members of the same forest need not even have a dedicated LAN or WAN connection between them. A single network can also be the home of multiple independent forests. In general, a single forest should be used for each corporate entity. However, additional forests may be desired for testing and research purposes outside of the production forest.

  • Domain: A collection of computers that share a common set of policies, a name and a database of their members.

Domains serve as containers for security policies and administrative assignments. All objects within a domain are subject to domain-wide Group Policies by default.

Furthermore, each domain has its own unique accounts database. Thus, authentication is on a domain basis. Once a user account is authenticated to a domain, that user account has access to resources within that domain.

A domain must have one or more servers that serve as domain controllers (DCs) and store the database, maintain the policies and provide the authentication of domain logons.

With Windows NT, primary domain controller (PDC) and backup domain controller (BDC) were roles that could be assigned to a server in a network of computers that used a Windows operating system.

The user need only log in to the domain to gain access to resources, which may be located on a number of different servers in the network.

One server, known as the primary domain controller, managed the master user database for the domain. One or more other servers were designated as backup domain controllers. The primary domain controller periodically sent copies of the database to the backup domain controllers. A backup domain controller could step in as primary domain controller if the PDC server failed and could also help balance the workload if the network was busy enough.

  • Organizational units: Containers into which objects within a domain can be grouped. They create a hierarchy for the domain and structure the Active Directory to mirror the company in geographical or organizational terms.

Organizational units are much more flexible and easier overall to manage than domains. OUs grant you nearly infinite flexibility, as you can move them, delete them and create new OUs as needed. Domains, however, are much more rigid in their existence. Domains can be deleted and new ones created, but this process is more disruptive to an environment than is the case with OUs and should be avoided whenever possible.

  • Sites: Physical groupings independent of the domain and OU structure. Sites distinguish between locations connected by low- and high-speed connections and are defined by one or more IP subnets.

By definition, sites are collections of IP subnets that have fast and reliable communication links between all hosts. Another way of putting this is a site contains LAN connections, but not WAN connections, with the general understanding that WAN connections are significantly slower and less reliable than LAN connections. By using sites, you can control and reduce the amount of traffic that flows over your slower WAN links.
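The "site = collection of IP subnets" definition can be sketched with Python's standard `ipaddress` module; the site names and subnets below are hypothetical examples, not part of any product's API:

```python
import ipaddress

# Hypothetical site definitions: each site is a list of IP subnets.
sites = {
    "HQ": [ipaddress.ip_network("10.1.0.0/16")],
    "Branch": [ipaddress.ip_network("192.168.5.0/24")],
}

def site_of(host_ip):
    """Return the site whose subnets contain the host. In Active
    Directory this mapping is how a client finds a nearby domain
    controller instead of crossing a slow WAN link."""
    addr = ipaddress.ip_address(host_ip)
    for site, subnets in sites.items():
        if any(addr in net for net in subnets):
            return site
    return None

print(site_of("10.1.4.7"))  # HQ
```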

A domain is a territory over which rule or control is exercised. Most organizations that have more than one domain have a legitimate need for users to access shared resources located in a different domain.

Controlling this access requires that users in one domain can also be authenticated and authorized to use resources in another domain. To provide authentication and authorization capabilities between clients and servers in different domains, there must be a trust between the two domains.

Trusts are the underlying technology by which secured Active Directory communications occur, and are an integral security component of the Windows Server 2003 network architecture.

Trusts help provide for controlled access to shared resources in a resource domain (the trusting domain) by verifying that incoming authentication requests come from a trusted authority (the trusted domain).

In this way, trusts act as bridges that allow only validated authentication requests to travel between domains.

Types of trust relationships:

ONE-WAY: provides access from the trusted domain to resources in the trusting domain

TWO-WAY: provides access from each domain to resources in the other domain

NONTRANSITIVE: trust exists only between the two trust-partner domains

TRANSITIVE: trust extends to any other domains that either of the partners trusts

In some cases, trust relationships are automatically established when domains are created; in other cases, administrators must choose a type of trust and explicitly establish the appropriate relationships.

Group Policy management and Active Directory

It’s difficult to discuss Active Directory without mentioning Group Policy. Admins can use Group Policies in Microsoft Active Directory to define settings for users and computers throughout a network. These settings are configured and stored in what are called Group Policy Objects (GPOs), which are then associated with Active Directory objects, including domains and sites. Group Policy is the primary mechanism for applying changes to computers and users throughout a Windows environment.

Through Group Policy management, administrators can globally configure desktop settings on user computers, restrict/allow access to certain files and folders within a network and more.

ADFS 2.0, released in May 2010, doesn’t require changes to the Active Directory server; it’s a separate server that knows how to talk to Active Directory.

ADFS 2.0 is a central piece of Microsoft’s identity management strategy, providing a two-way gateway for sending and receiving claims-based requests, as Microsoft calls them, using SAML-based tokens containing information about users and what they want in terms of information and access.

ADFS 2.0 supports the open standard protocol Security Assertion Markup Language (SAML) 2.0; SAML interoperability is built into ADFS 2.0.

ADFS 2.0 is expected to be baked into many future Microsoft application products, such as SharePoint 2010. But the reality is that today’s legacy applications can’t easily work under a SAML-based framework, though they can be made to work that way.

A policy framework, however, is not part of ADFS 2.0.

The authorization protocol Extensible Access Control Markup Language (XACML) from the Organization for the Advancement of Structured Information Standards (OASIS) has emerged as the preferred standard for fine-grained authorization.

IBM says it supports XACML in its Tivoli Federated Identity Manager product, but it is unclear whether Microsoft will go the XACML route.

Claims-Based Identity Model

When you build claims-aware applications, the user presents an identity to your application as a set of claims. One claim could be the user’s name, another might be an e-mail address. The idea here is that an external identity system is configured to give your application everything it needs to know about the user with each request she makes, along with cryptographic assurance that the identity data you receive comes from a trusted source.

Under this model, single sign-on is much easier to achieve

Under this model, your application makes identity-related decisions based on claims supplied by the user. This could be anything from simple application personalization with the user’s first name, to authorizing the user to access higher valued features and resources in your application.

Security Token

The user delivers a set of claims to your application along with a request. In a Web service, these claims are carried in the security header of the SOAP envelope. In a browser-based Web application, the claims arrive through an HTTP POST from the user’s browser, and may later be cached in a cookie if a session is desired. Regardless of how these claims arrive, they must be serialized, which is where security tokens come in. A security token is a serialized set of claims that is digitally signed by the issuing authority.
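The idea of a security token as a serialized set of claims signed by an issuing authority can be sketched as follows. This is purely illustrative: a real ADFS token is a SAML XML document signed with an X.509 certificate, whereas this sketch uses JSON claims signed with a hypothetical shared HMAC key.

```python
import base64
import hashlib
import hmac
import json

def issue_token(claims, issuer_key):
    """Serialize a set of claims and sign the result, so a receiving
    application can verify the claims came from the trusted authority."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(issuer_key, payload, hashlib.sha256).digest()
    return base64.b64encode(payload) + b"." + base64.b64encode(signature)

def verify_token(token, issuer_key):
    """Check the signature and return the claims if the token is genuine."""
    payload_b64, signature_b64 = token.split(b".")
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(issuer_key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(signature_b64)):
        raise ValueError("token was not signed by the trusted authority")
    return json.loads(payload)

key = b"issuing-authority-secret"  # hypothetical shared key
token = issue_token({"name": "Alice", "email": "alice@example.com"}, key)
print(verify_token(token, key)["name"])  # Alice
```

The application never trusts the claims directly; it trusts the signature, which is exactly the role the issuing authority plays in the claims-based model.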


Think of a claim as a piece of identity information, such as a name, an e-mail address, an age, or membership in the Sales role. The more claims your application receives, the more you’ll know about your user.



NOSQL and Cloud Computing


Cloud computing is moving from being an IT buzzword to a reasonable yet reliable way of deploying applications on the Internet. IT managers within companies are considering deploying some applications in the cloud. A cloud-related trend that developers have been paying attention to is the idea of “NoSQL”, a set of operational-data technologies based on non-relational concepts. “NoSQL” is a “sea change” idea: consider data storage options beyond the traditional SQL-based relational database.


Accordingly, a new set of open-source distributed databases is actively emerging to leverage the facilities and services provided through the cloud architecture. Thus, web applications and databases in the cloud are undergoing major architectural changes to take advantage of the scalability provided by the cloud. This article is intended to provide insight into NoSQL in the context of cloud computing.

Face off ~ SQL, NOSQL & Cloud Computing

A key disadvantage of SQL databases is that they operate at a high abstraction level: to execute a single statement, SQL often requires the data to be processed multiple times, which costs time and performance. For instance, multiple passes over SQL data occur when there is a ‘Join’ operation. Cloud computing environments need high-performing and highly scalable databases.

NoSQL Databases are built without relations. But is it really that “good” to go for NoSQL Databases? A world without relations, no joins and pure scalability!  NoSQL databases typically emphasize horizontal scalability via partitioning, putting them in a good position to leverage the elastic provisioning capabilities of the cloud.

The general definition of a NOSQL data store is that it manages data that is not strictly tabular and relational, so it does not make sense to use SQL for the creation and retrieval of the data. NOSQL data stores are usually non-relational, distributed, open-source, and horizontally scalable.

If we look at the big platforms on the Web, like Facebook or Twitter, there are some datasets that do not need any relations. The challenge for NoSQL databases is to keep the data consistent. Imagine that a user deletes his or her account. If the data is hosted in a NoSQL database, all the tables have to be checked for any data the user has produced in the past. With NoSQL, this has to be done by code.
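That cleanup-by-code can be sketched in a few lines; the plain dicts below are hypothetical stand-ins for NoSQL collections, standing in for what a relational database would do with foreign keys and cascading deletes:

```python
def delete_account(user_id, stores):
    """With no relations or cascading deletes, the application itself
    must walk every store and remove the records a user produced."""
    for store in stores:
        doomed = [key for key, record in store.items()
                  if record.get("user_id") == user_id]
        for key in doomed:
            del store[key]

# Hypothetical collections keyed by record id:
posts = {"p1": {"user_id": 1, "text": "hello"}, "p2": {"user_id": 2, "text": "hi"}}
comments = {"c1": {"user_id": 1, "text": "nice post"}}

delete_account(1, [posts, comments])
print(sorted(posts), sorted(comments))  # only user 2's data remains
```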

A major advantage of NoSQL databases is that data replication can be done more easily than it would be with SQL databases.

As there are no relations, tables don’t necessarily have to be on the same servers. Again, this allows better scaling than SQL databases. Don’t forget: scaling is one of the key aspects of cloud computing environments.

Another disadvantage of SQL databases is that there is always a schema involved. Over time, requirements will change, and the database somehow has to support these new requirements. This can lead to serious problems. Just imagine that an application needs two extra fields to store data. Solving this issue with SQL databases can get very hard. NoSQL databases support a changing environment for data and are a better solution in this case as well.

SQL Databases have the advantage over NoSQL Databases to have better support for “Business Intelligence”.

Cloud Computing Platforms are made for a great number of people and potential customers. This means that there will be millions of queries over various tables, millions or even billions of read and write operations within seconds. SQL Databases are built to serve another market: the “business intelligence” one, where fewer queries are executed.

This implies that the way forward for many developers is a hybrid approach, with large sets of data stored in, ideally, cloud-scale NoSQL storage, and smaller specialized data remaining in relational databases. While this would seem to amplify management overhead, reducing the size and complexity of the relational side can drastically simplify things.

However, it is up to the Use-Case to identify if you want a NoSQL approach or if you better stay with SQL.

“NOSQL” Databases for Cloud

The NoSQL (or “not only SQL”) movement is defined by a simple premise: Use the solution that best suits the problem and objectives.

If the data structure is more appropriately accessed through key-value pairs, then the best solution is likely a dedicated key value pair database.

If the objective is to quickly find connections within data containing objects and relationships, then the best solution is a graph database that can get results without any need for translation (O/R mapping).

Today’s availability of numerous technologies that finally support this simple premise is helping to simplify the application environment and enable solutions that actually exceed requirements, while also supporting performance and scalability objectives far into the future. Many cloud web applications have expanded beyond the sweet spot for relational database technologies. Many applications demand availability, speed, and fault tolerance over consistency.

Although the original emergence of NOSQL data stores was motivated by web-scale data, the movement has grown to encompass a wide variety of data stores that just happen to not use SQL as their processing language. There is no general agreement on the taxonomy of NOSQL data stores, but the categories below capture much of the landscape.

Tabular / Columnar Data Stores

Storing sparse tabular data, these stores look most like traditional tabular databases. Their primary data retrieval paradigm utilizes column filters, generally leveraging hand-coded map-reduce algorithms.

BigTable is a compressed, high-performance, proprietary database system built on Google File System (GFS), Chubby Lock Service, and a few other Google programs.

HBase is an open-source, non-relational, distributed database modeled after Google’s BigTable and written in Java. It runs on top of HDFS, providing a fault-tolerant way of storing large quantities of sparse data.

Hypertable is an open source database inspired by publications on the design of Google’s BigTable. Hypertable runs on top of a distributed file system such as the Apache Hadoop DFS, GlusterFS, or the Kosmos File System (KFS). It is written almost entirely in C++ for performance.

VoltDB is an in-memory database. It is an ACID-compliant RDBMS which uses a shared nothing architecture. VoltDB is based on the academic HStore project. VoltDB is a relational database that supports SQL access from within pre-compiled Java stored procedures.

Google Fusion Tables is a free service for sharing and visualizing data online. It allows you to upload and share data, merge data from multiple tables into interesting derived tables, and see the most up-to-date data from all sources.

Document Stores

These NOSQL data sources store unstructured (i.e., text) or semi-structured (i.e., XML) documents. Their data retrieval paradigm varies highly, but documents can always be retrieved by unique handle. XML data sources leverage XQuery. Text documents are indexed, facilitating keyword search-like retrieval.

Apache CouchDB, commonly referred to as CouchDB, is an open source document-oriented database written in the Erlang programming language. It is designed for local replication and to scale vertically across a wide range of devices.

MongoDB is an open source, scalable, high-performance, schema-free, document-oriented database written in the C++ programming language.

Terrastore is a distributed, scalable and consistent document store supporting single-cluster and multi-cluster deployments. It provides advanced scalability support and elasticity features without giving up consistency at the data level.

Graph Databases

These NOSQL sources store graph-oriented data with nodes, edges, and properties and are commonly used to store associations in social networks.

Neo4j is an open-source graph database implemented in Java. It is an “embedded, disk-based, fully transactional Java persistence engine that stores data structured in graphs.”

AllegroGraph is a Graph database. It considers each stored item to have any number of relationships. These relationships can be viewed as links, which together form a network, or graph.

FlockDB is an open source distributed, fault-tolerant graph database for managing data at webscale. It was initially used by Twitter to build its database of users and manage their relationships to one another. It scales horizontally and is designed for on-line, low-latency, high throughput environments such as websites.

VertexDB is a high-performance graph database server that supports automatic garbage collection. It uses the HTTP protocol for requests and JSON for its response data format, and its API is inspired by the FUSE file system API, plus a few extra methods for queries and queues.

Key/Value Stores

These sources store simple key/value pairs like a traditional hash table. Their data retrieval paradigm is simple; given a key, return the value.
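That retrieval paradigm can be sketched in a few lines; this is a toy in-memory store for illustration, not the API of any particular product:

```python
class KeyValueStore:
    """Minimal in-memory key/value store illustrating the paradigm of
    stores like Dynamo or Memcached: given a key, return the value.
    There are no tables, joins, or ad hoc queries."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

store = KeyValueStore()
store.put("session:42", {"user": "alice", "cart": ["book"]})
print(store.get("session:42")["user"])  # alice
```

Real key/value stores add persistence, replication, and partitioning of the key space across servers, but the client-facing contract remains this simple get/put interface.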

Dynamo is a highly available, proprietary key-value structured storage system. It has properties of both databases and distributed hash tables (DHTs). It is not directly exposed as a web service, but is used to power parts of other Amazon Web Services.

Memcached is a general-purpose distributed memory caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source must be read.

Cassandra is an open source distributed database management system. It is designed to handle very large amounts of data spread out across many commodity servers while providing a highly available service with no single point of failure. It is a NoSQL solution that was initially developed by Facebook and powers their Inbox Search feature.

Amazon SimpleDB is a distributed database written in Erlang by Amazon.com. It is used as a web service in concert with EC2 and S3 and is part of Amazon Web Services.

Voldemort is a distributed key-value storage system. It is used at LinkedIn for certain high-scalability storage problems where simple functional partitioning is not sufficient.

Kyoto Cabinet is a library of routines for managing a database. The database is a simple data file containing records; each record is a pair of a key and a value. There is no concept of data tables or data types. Records are organized in a hash table or a B+ tree.

Scalaris is a scalable, transactional, distributed key-value store. It can be used for building scalable Web 2.0 services.

Riak is a Dynamo-inspired database that is being used in production by companies like Mozilla.

Object and Multi-value Databases

These types of stores predate the NoSQL movement, but they have found new life as part of it. Object databases store objects (as in object-oriented programming). Multi-value databases store tabular data, but individual cells can store multiple values. Examples include Objectivity, GemStone, and Unidata. Proprietary query languages are used.

Miscellaneous NoSQL Sources

Several other data stores can be classified as NoSQL stores, but they don’t fit into any of the categories above. Examples include GT.M, IBM Lotus/Domino, and the ISIS family.

enStratus – Cloud Governance Tool

enStratus is a cloud infrastructure management platform from enStratus Networks LLC that addresses the governance issues associated with deploying systems in public, private, and hybrid clouds.

The enStratus tagline is “governance for the cloud”. The company defines cloud governance to mean:

v Security controls, including user management, encryption, and key management

v Financial controls, including cloud cost tracking, budgets, chargebacks, and multi-currency support

v Audit controls and reporting

v Monitoring and alerting

v Automation, including auto-scaling, cloud bursting, backup management, and change management

v Unified cross-cloud management

enStratus supports both SaaS and on-premise deployment models and manages multi-cloud infrastructures, including combinations of public and private clouds.
enStratus sits outside of the cloud and watches over your cloud infrastructure.

enStratus has four main components:

v The console: Multi-User Console

o The Console: “at-a-glance” window

o Cluster Management: enables you to define your uptime objectives, application architecture, and system configuration and rely on enStratus to manage the deployment and operation of applications.

o Cloud Management: direct control over the cloud resources you are managing.

o User Management:

§ enStratus removes the requirement to share cloud credentials by introducing a third layer of user management: the cloud credentials are held outside of the cloud in an encrypted database. Using enStratus, administrators can focus on access rights and permissions without having to worry about losing control of account credentials.

§ Role-based

§ LDAP and Active Directory support.

§ Seamless integration of cloud infrastructure user management with traditional datacenters

§ Security Groups

§ Roles Management

  • Reports
  • Configurator
  • Server Manager
  • Admin


v The provisioning system / the monitoring system: It stores all of your critical configuration data and takes actions such as backup management, auto-scaling, auto-recovery, and more on your behalf. It also monitors your cloud systems and alerts you when events occur that require your attention.
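The kind of rule such a provisioning/monitoring system applies can be sketched as a threshold check: compare observed load against limits and decide whether to add or remove capacity. The thresholds and names below are illustrative, not the actual enStratus logic:

```python
# Hypothetical auto-scaling decision rule: scale up under heavy load,
# scale down when idle, otherwise hold, while respecting fleet bounds.

def scaling_decision(cpu_percent, servers, min_servers=2, max_servers=10,
                     scale_up_at=80, scale_down_at=20):
    """Return 'scale-up', 'scale-down', or 'hold'."""
    if cpu_percent >= scale_up_at and servers < max_servers:
        return "scale-up"
    if cpu_percent <= scale_down_at and servers > min_servers:
        return "scale-down"
    return "hold"
```

Note that the bounds act as a safety net: at `max_servers` even sustained 100% CPU yields "hold", so a runaway workload cannot provision unbounded (and unbudgeted) capacity.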

Active intelligence system that executes actions on your behalf

v The credentials system: a storage system that is not routable from the Internet, used for storing all authentication and encryption credentials, all encrypted using customer-specific encryption keys that are never stored on the file system or otherwise accessible to humans.
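One way to realize that design point, keys that never touch the file system, is to re-derive each customer's key on demand from a customer secret and a stored salt, so only the salt (useless on its own) is ever persisted. This is a sketch of the general idea, not enStratus's actual scheme:

```python
import hashlib

# Derive a per-customer 256-bit key on demand. Only the salt would be
# persisted; the derived key lives in memory and is never written out.
def derive_customer_key(customer_secret: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", customer_secret, salt, 200_000)
```

The same secret and salt always reproduce the same 32-byte key, while different salts yield unrelated keys, which is what keeps one customer's data opaque to every other tenant.
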
Notification Support

enStratus supports RabbitMQ and an SMTP service through which it can optionally route emails.


You can install an SMS service and write an SMS service plug-in (or use the enStratus Twilio plug-in).

enStratus alerts integrate with Amazon SNS.

Future work: enable customers to manage SNS through the enStratus console so that they can take advantage of Amazon SNS for their own uses.

enStratus categorizes alerts on a scale of 1 to 10.

v LEVEL 1-3: LOW


v LEVEL 7-10: HIGH
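Mapping a numeric level to its band is a simple range check; note that the text above only names the LOW and HIGH bands, so the label for the middle levels below is an assumption:

```python
# Map a 1-10 enStratus alert level to a named band.
# LOW (1-3) and HIGH (7-10) come from the text; "MEDIUM" is assumed.
def alert_band(level: int) -> str:
    if not 1 <= level <= 10:
        raise ValueError("alert level must be between 1 and 10")
    if level <= 3:
        return "LOW"
    if level >= 7:
        return "HIGH"
    return "MEDIUM"  # assumed label for levels 4-6
```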

Service Automation
A service is a bundled software package with associated management scripts that handle the configuration, starting, stopping, and ongoing management of the application. Services are where the behavior of enStratus can be extended and customized to perform as desired.
Deployment Automation
A deployment is a group of inter-dependent servers/services in an automated configuration controlled by enStratus to ensure security, redundancy, scalability, and recoverability. enStratus governs the deployment according to the parameters defined in Automation > Deployments in the enStratus console.

v enStratus monitors all servers in your account.

v enStratus allows searching/starting of publicly available machine images

v IP Addresses: allows for reservation of static IP addresses

v enStratus supports a wide range of load balancers.

o Elastic Load Balancers

o HA-Proxy

o Zeus load balancer.

Firewalls, or security groups, in enStratus control accessibility to running servers. Each account has a firewall called ‘default’ that is the default firewall into which all servers are launched.
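A security group in this model is a named set of allow rules; a server launched into the group accepts only the listed traffic. The rule shape below is illustrative, not the enStratus data model:

```python
# A hypothetical 'default' security group and an accessibility check.
default_group = {
    "name": "default",
    "rules": [
        {"protocol": "tcp", "port": 22,  "source": "0.0.0.0/0"},  # SSH
        {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},  # HTTPS
    ],
}

def is_allowed(group, protocol, port):
    # traffic is permitted only if some rule explicitly matches it
    return any(r["protocol"] == protocol and r["port"] == port
               for r in group["rules"])
```

Under this sketch, SSH on port 22 is reachable but port 80 is not, because no rule allows it.
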
On-Premise Deployments
Private clouds are often the driving force behind a decision to deploy enStratus on-premise because of the communication channels that enStratus needs with some cloud virtual machines. With the on-premise option, you can install a fully functional version of the enStratus software behind your firewall and it will manage your infrastructure just like the SaaS product.
SaaS Product
With the SaaS product, enStratus requires either a VPN between its data center and yours or a server running an enStratus proxy that enables communication between enStratus and the private cloud.
How to Use it?
You need to contact enStratus to get an on-premise deployment license. enStratus offers trial installations for qualified prospects, but recommends leveraging the enStratus SaaS trial accounts if you are simply trying to get a feel for the product. For those doing an on-premise install, enStratus will provide the software and a license key for your setup.
v Ability to automatically attach, format, and mount RAID volumes through the enStratus console with the option to have those volumes automatically encrypted.

v Auto-scaling, auto-recovery, and automated backups

v Automated backups into the public cloud for “off-site” backups

v Monitoring and alerting of your private cloud infrastructure

v Intrusion detection system integration

v Automated DR into any public cloud

v Multi-tenancy

v Provisioning/de-provisioning of VMs based on pre-configured templates and VM sizes

v Creating custom templates based on running VMs

v Budgeting in chargebacks, including tracking costs for a given budget across all clouds

v Audit and report for compliance

v Manage to your service-level requirements

v Key Management

v enStratus stores all credentials outside the cloud in a private data center and encrypts all data sent into the cloud.

v enStratus utilizes the industry-standard Advanced Encryption Standard (AES) 256.

v By default, Linux instances are accessed using the SSH protocol; Windows instances are accessed through RDP.

v The OSSEC host intrusion detection system runs on all Linux-based images. OSSEC alerting is integrated into enStratus and is accomplished via email.

v SAML and LDAP integration

v enStratus supports a range of scripting languages, such as Bash and Python

v Range of logs for firewalls and other system activity

v Supports multiple server platforms: Ubuntu, Debian, CentOS, Fedora, Red Hat, Solaris, Windows 2003, and Windows 2008

v File System Encryption

v Backup Encryption

v Billing Alerts

v Unified Reporting

v Shared Resource Accounting

v The only cloud infrastructure management platform that provides full and equal functionality for Windows environments in the cloud as well as Unix systems

v Support for the Microsoft Azure cloud computing platform

v enStratus color labels can help you color code your cloud servers so that it’s easier to tell what servers can be shut down.

v The alerting system within enStratus is extensible and can be adapted to alert on client-specific rules or systems.

v API – The enStratus API allows you to extend, integrate or customize enStratus for your specific requirements

v enStratus supports all leading public and private clouds, including Amazon Web Services, AT&T Synaptic Storage, EMC Atmos, Eucalyptus, Google Storage, GoGrid, OpenStack, Rackspace, ReliaCloud, ServerExpress, Terremark, VMware, and Windows Azure.

v Protection from single-cloud vendor lock-in, allowing cross-cloud operations and migration

v Integration of cloud management with existing IT infrastructure management processes

v Posting of cloud health information to existing infrastructure management tools

v Extend security policies into the cloud environment

v Support for Citrix XenServer

v To meet unique needs of customers, enStratus now provides configurable alert thresholds as well as alerting on any changes to firewall rules

v Automated cloud-bursting from your private cloud into a public cloud

v Support for configurable virtual disks

v Configurable public IP address management

v Support for F5 load balancers

v Rich meta-data including user-friendly naming, color labeling, and descriptions

v Application configuration and deployment
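The chargeback capabilities in the list above boil down to attributing each resource's cost to a budget regardless of which cloud it runs in. A hypothetical sketch, with invented records:

```python
from collections import defaultdict

# Illustrative usage records spanning two clouds; in practice these
# would come from the providers' billing feeds.
usage = [
    {"cloud": "aws",       "budget": "marketing", "cost": 120.0},
    {"cloud": "rackspace", "budget": "marketing", "cost": 45.5},
    {"cloud": "aws",       "budget": "research",  "cost": 300.0},
]

def costs_by_budget(records):
    # roll costs up by budget code, ignoring which cloud incurred them
    totals = defaultdict(float)
    for r in records:
        totals[r["budget"]] += r["cost"]
    return dict(totals)
```

Here "marketing" is charged 165.5 across both clouds, which is the cross-cloud budget tracking described above.
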
CCSK & enStratus

enStratus announced that the CSA has selected the enStratus cloud management platform for its new User Certification Program system. According to Jim Reavis, Executive Director of the CSA: “The Cloud Security Alliance requires a cloud management platform that provides the critical cloud governance capabilities. For this reason, we selected enStratus and have deployed their cloud management platform to improve the resiliency and availability of our certification system.”

What vSphere + enStratus Means
It’s possible to take the enStratus SaaS offering, point it at a vSphere SDK endpoint, and have an instant cloud-like environment. enStratus will auto-discover all of the resources in your VMware infrastructure and immediately enable unified chargeback tracking between your VMware private “cloud” and public clouds.

v Setup of a DHCP host within the same VLAN(s) as your virtual machines

v Defining supported server “sizes” (e.g. 1 CPU with 512M RAM, 8 CPU with 64G RAM, etc.)

v Defining chargebacks for various server size, operating system, and software combos

v Setting up baseline templates that will be used to support new VMs.
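A chargeback definition for size/OS combinations, as in the step above, can be modeled as a rate table keyed by (size, os); the sizes and hourly rates below are invented for illustration:

```python
# Hypothetical rate table: (size, os) -> hourly rate in USD.
RATES = {
    ("1cpu-512m", "linux"):   0.05,
    ("1cpu-512m", "windows"): 0.09,
    ("8cpu-64g",  "linux"):   1.20,
    ("8cpu-64g",  "windows"): 1.60,
}

def monthly_chargeback(size, os, hours=730):
    # 730 hours approximates one month of continuous running
    return round(RATES[(size, os)] * hours, 2)
```

With these invented rates, a small Linux VM running all month would be charged back at 36.5, and changing only the OS or size picks a different row of the table.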

With the SaaS offering, you need to connect the enStratus SaaS environment to your VLANs via a VPN or a VPN proxy tool. No such intermediary is required for an on-premise deployment.

Supported Clouds and Cloud Platforms

Compute

v Amazon Web Services

v Cloud Central

v GoGrid

v Rackspace

v ReliaCloud

v Terremark vCloud Express

Storage

v Amazon Web Services

v AT&T Synaptic Storage

v Azure Services Platform

v Google

v Rackspace

Cloud Platforms

v Atmos

v Cloud.com

v Eucalyptus

v OpenStack

v vCloud

v vSphere

In addition, enStratus provides cloud-like support for traditional vSphere environments, with support for XenServer and Nimbula coming soon.