I would like to present our incredible Federal Government IaaS opportunity.
Cloud technologies are transforming the way computing power is bought, sold, and delivered. Rather than purchasing licenses or hardware, users may now obtain computing power as a service, buying only as much as they need, and only when they need it. This new business model promises vast efficiency and cost advantages.
The tremendous impact of cloud computing on business has not been lost on Congress. Its enormous potential has prompted the United States Federal Government to look to the cloud as a means to reorganize its IT infrastructure and decrease its IT budgets.
In December 2010, the Office of Management and Budget (OMB) issued a Cloud First strategy for Federal Government computing needs. Under this policy, government agencies will use cloud computing for their operations rather than building expensive data centers. This Federally mandated strategy requires that each agency chief information officer (CIO) fully migrate three services to a cloud solution by June 2012, and implement cloud-based solutions whenever a secure, reliable, and cost-effective cloud option exists.
At Autonomic Resources, we have applied our expertise in the commercial space to satisfy the unique cloud demands of the Federal Government. The Autonomic Resources Cloud Platform [ARC-P] allows us to deliver the most reliable lightweight application architecture for all government agencies, based on our proprietary open infrastructure.
ARC-P is our Infrastructure as a Service [IaaS] offering: it supplies our government clients with raw computing power, storage, and networking infrastructure as a service, and provides a fully patched and compliant [agency-dependent] hosting environment in which to run software.
ARC-P also provides value by supplying simplified computing power, storage, and supporting infrastructure that can be acquired and utilized on demand. Our government clients can now achieve rapid data center capabilities without hardware that must be provisioned, coordinated with contractor IT organizations, or purchased and owned by the government.
And here’s the big news: ARC-P is currently the sole access point to the cloud for the entire US Government and all of its various agencies and operations. It is an unprecedented [and, admittedly, short-lived] monopoly. It has been well-earned through the remarkable vision and tireless work of John Keese and his Autonomic Resources team, which makes it all the more meaningful and valuable. Remarkably, if you want to work in the cloud with the Federal Government today, there is only one access point, and it is Powered by ARC-P™. That’s it. One. Just one access point in the whole world. What’s the potential of that?
In order to make the cloud work in the Federal space for a vendor or agency, three things have to happen: first, you have to have the FedRAMP IaaS Authority to Operate [ATO]; second, you have to operate in a secure and authorized data center; and, third, you have to make your enterprise or software actually work with the certified IaaS. Currently, Powered by ARC-P™ is the only solution available to the Federal Government with a FedRAMP ATO on all three critical components. While, as a service provider, we can profit from all three components, the real value resides in providing the FedRAMP IaaS ATO to every service provider and data center that wants access to the Federal Government treasure trove of business.
The most remarkable aspect of this opportunity is that the GSA wants as many providers of cloud service made available to the Federal government as possible. They don’t want just one email supplier; they want dozens of alternatives. They want a competitive landscape. Here’s what they state as their number one program goal on their website: http://www.gsa.gov/portal/category/102375 “Program Goals. Accelerate the adoption of secure cloud solutions through re-use of assessments and authorizations.”
And since the GSA is limiting the program to merely 12 access points, it is up to this anointed dozen to go out and secure as many providers of services as possible for the Federal government. Yes, the fact of the matter is that they actually want us to go out and provide as many of these IaaS access points as possible. We have a limitless supply, and, not only is it a limitless supply, each individual access point comes replete with limitless capacity. I’m still in awe of this opportunity.
Let’s say, for example, that there is a company that wants access to the Federal Government with a FedRAMP IaaS ATO. They can go after it themselves and maybe spend $5-10M and 6-18 months trying to secure it [and all of them have actually failed to secure an ATO up to this point], or they can come to us and spend $1M and 15 minutes to secure full access. That’s right, $1M per access point Powered by ARC-P™. That’s an incredible savings, not to mention the incredible speed to benefit. And it doesn’t stop there. We can have industry-specific access points. We can have industry-exclusive access points. We can have non-exclusive access points. But it gets better: everyone who has an access point is now going to need an authorized data center as well as a service provider who can get them operational and supported.
Now, before I lose my breath, I have to mention that we are also close to getting an ATO for our Continuous Monitoring as a Service [CMaaS]. Yes, FedRAMP is requiring a continuous monitoring component for all of this. It just keeps getting better.
Some companies will buy the access point just to say they have it. Yahoo will say, “Yes, we have the FedRAMP Authority to Operate.” How much will that add to their stock value on the street even if they never use it? Our potential customer list is almost as limitless as the opportunity. No one on this list currently has the ATO: Microsoft, Google, Amazon, Yahoo, Apple, IBM, Oracle, Dell, SAS, SalesForce, Facebook, HP, etc. – and they all want it.
We will be providing our proprietary FedRAMP authorized IaaS access points for up to $1M each. We will require an army of agents to provide the GSA with the type of coverage they envision. There will be significant incentives to provide this Powered by ARC-P™ access, and there will be significant earnings on the ensuing services we provide. Each Powered by ARC-P™ engagement can earn the agent $250k. We need to get busy with this immediately. While the potential is limitless, it is still only potential until we realize it.
I am hoping you will be able to help me bring this Powered by ARC-P™ vision to fruition. If you are interested in providing these access points to various service providers and secure data centers that want to do business with the Federal Government in the cloud, contact Joe Kreuz at 716.445.2210.
Below is more information on FedRAMP, our authority to operate, and cloud computing. Please feel free to comment on this blog or email me directly at email@example.com. Thanks.
The Federal Risk and Authorization Management Program (FedRAMP) is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.
The FedRAMP Joint Authorization Board has granted its first provisional authorization to Autonomic Resources, who used Veris Group as their FedRAMP accredited 3PAO.
FedRAMP Program Goals
- Accelerate the adoption of secure cloud solutions through reuse of assessments and authorizations.
- Increase confidence in security of cloud solutions.
- Achieve consistent security authorizations using a baseline set of agreed-upon standards to be used for cloud product approval in or outside of FedRAMP.
- Ensure consistent application of existing security practices.
- Increase confidence in security assessments.
- Increase automation and near real-time data for continuous monitoring.

FedRAMP Benefits
- Increases reuse of existing security assessments across agencies.
- Saves significant cost, time, and resources – “do once, use many times.”
- Improves real-time security visibility.
- Provides a uniform approach to risk-based management.
- Enhances transparency between government and cloud service providers (CSPs).
- Improves the trustworthiness, reliability, consistency, and quality of the Federal security authorization process.
Autonomic Resources ARC-P Cloud Receives FedRAMP’s First Issued Authority to Operate
Autonomic Takes the Lead in Government Cloud Adoption
December 27, 2012
CARY, N.C., Dec. 27, 2012 (GLOBE NEWSWIRE) — Autonomic Resources, a Government Cloud Service Provider (CSP), has received a Federal Risk and Authorization Management Program (FedRAMP) provisional Authority to Operate (ATO) from the FedRAMP Joint Authorization Board (JAB) for their ARC-P cloud solution.
The FedRAMP program supports the U.S. government’s objective to enable U.S. federal agencies to use managed service providers that enable cloud computing capabilities. The program is designed to comply with the Federal Information Security Management Act of 2002 (FISMA). FedRAMP is governed by a Joint Authorization Board (JAB) that consists of representatives from the Department of Homeland Security (DHS), the General Services Administration (GSA), and the Department of Defense (DoD). The FedRAMP program is endorsed by the U.S. government’s CIO Council including the Information Security and Identity Management Committee (ISIMC).
FedRAMP provides a streamlined avenue for U.S. federal agencies to make use of cloud service provider platforms and offerings. The FedRAMP program provides an avenue for CSPs to obtain a Provisional Authorization after undergoing a third-party independent security assessment that has been reviewed by the JAB. By assessing security controls on candidate platforms, and providing Provisional Authorizations on platforms that have acceptable risk, FedRAMP enables federal agencies to leverage the security assessment process for the FedRAMP baseline of security controls.
Autonomic Resources now holds a FedRAMP provisional Authority to Operate, a demonstration that Autonomic meets the mandatory security requirements for cloud services housing Federal information. US Government agencies are rapidly moving toward cloud adoption as the preferred computing model, and FedRAMP certification is critical in positioning agencies to meet Cloud First/Future First and Office of Management and Budget (OMB) mandates. “Autonomic stands ready to assist US Government agencies in meeting both their security and budgetary objectives. The timeliness of our FedRAMP certification will help federal and state governments address serious needs to implement more cost-effective, elastic compute platforms and reduce their Information Technology spending. We are fully aware of the fiscal challenges our customers face and are uniquely positioned to be key to the solution going forward,” noted Autonomic Resources’ Founder and President John Keese. “Our team’s strict adherence to the FedRAMP requirements, coupled with our GSA ATO experience, enabled us to complete what much larger CSPs have yet to accomplish. Further, Autonomic has already begun the application and security process to extend our ARC-P EaaS, PaaS and SaaS offerings.”
The Autonomic Resources Cloud-Platform (ARC-P) provides U.S. Government customers with a government community Infrastructure as a Service (IaaS) cloud offering, supplying both managed and unmanaged virtual machines. Autonomic does not serve any non-U.S. Government entities with cloud services and utilizes only highly cleared U.S. citizens for cloud operations.
Autonomic is one of only a few vendors to have met the technical requirements necessary to be awarded two GSA contracts for cloud computing: Infrastructure as a Service (IaaS) and Email as a Service (EaaS). Both BPAs demonstrate that Autonomic has met pre-qualified technical and pricing requirements, making the procurement process fast, flexible, and cost-effective for US government agencies. Further, the Autonomic EaaS offering is already in FedRAMP processing, which will ensure the ability to rapidly enable its email platform for government use.
About Autonomic Resources
Autonomic Resources (www.autonomicresources.com) is a service integration firm and cloud provider serving the U.S. federal government. Core capabilities include the implementation of strategic technologies including data center automation, cloud computing, open source adoption, information assurance and compliance, advanced network infrastructure, and software development services.
At the Advantage Co, our G8 partnerships have led the way in maximizing our potential in internet technology, especially when it comes to cloud computing. If you are not up to speed on cloud computing, you will miss out on the next technology revolution. Every aspect of our business will be impacted by it. The potential is limitless.
I have posted this short tutorial as a starting point for your introduction into this emerging technology. Please learn as much as you can. It will prove invaluable to you as we build our business.
Cloud computing comes into focus when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. The cloud computing business model generally encompasses any subscription-based or pay-per-use service that extends IT’s existing capabilities in real time over the Internet.
Cloud computing is basically the delivery of computing as a service rather than a product. Shared software and information are provided to computers and other devices as a metered service over the Internet. A parallel to this concept can be drawn with the electricity grid, wherein end-users consume power without needing to understand the infrastructure required to provide the service.
Costs are generally reduced in a cloud delivery model, as capital expenditure is converted to operational expenditure. This also lowers barriers to entry: infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent compute-intensive tasks. Pricing on a utility computing basis is usage-based, and fewer in-house IT skills are required for implementation.
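The capital-versus-operational trade-off above is easy to see in a few lines of arithmetic. This is an illustrative sketch only; the hourly rate, purchase price, and upkeep figures are hypothetical, not any provider’s actual prices.

```python
# Illustrative only: all prices below are hypothetical assumptions.

def utility_cost(hours_used, rate_per_hour):
    """Pay-per-use (operational expenditure): cost tracks actual consumption."""
    return hours_used * rate_per_hour

def owned_cost(purchase_price, monthly_upkeep, months):
    """Ownership (capital expenditure): paid up front regardless of utilization."""
    return purchase_price + monthly_upkeep * months

# A workload that needs a server only 100 hours/month at $0.50/hour for a year:
cloud = utility_cost(hours_used=100 * 12, rate_per_hour=0.50)            # 600.0
# The same workload on a purchased $8,000 server with $150/month upkeep:
owned = owned_cost(purchase_price=8000, monthly_upkeep=150, months=12)   # 9800
```

For infrequent workloads the usage-based model wins by a wide margin; for a machine that would run flat out around the clock, the comparison narrows, which is why the text hedges with “generally reduced.”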
What is cloud computing?
There’s a good chance you already use some form of cloud computing. If you have an e-mail account with a Web-based e-mail service like Yahoo! or Gmail, then you’ve had some experience with cloud computing. Instead of running an e-mail program on your computer, you log in to a Web e-mail account remotely. The software and storage for your account doesn’t exist on your computer — it’s on the service’s computer cloud.
Let’s say you’re an IT Director at a large company. Your responsibilities include making sure that all of your employees have the right hardware and software they need to do their jobs. Buying computers for everyone isn’t enough — you also have to purchase software or software licenses to give employees the tools they require. Whenever you have a new hire, you have to buy more software or make sure your current software license allows another user. You find it difficult to manage it all effectively and economically. There is an alternative: cloud computing.
Instead of installing a suite of software for each computer, you’d only have to load one application. That application would allow workers to log into a Web-based service which hosts all the programs the user would need for their job. Remote machines [usually owned by a service provider] would run everything from e-mail to word processing to complex data analysis programs for you. This is cloud computing, and it has changed the entire computer industry.
In a cloud computing system, there’s a significant workload shift. Local computers no longer have to do all the heavy lifting when it comes to running applications. The network of computers that make up the cloud handles them instead. Hardware and software demands on the user’s side decrease. Maintenance of cloud computing applications is easier, because they do not need to be installed on each user’s computer. The only thing the user’s computer needs to be able to run is the cloud computing system’s interface software [like Google Docs], which can be as simple as a Web browser, and the cloud’s network takes care of the rest.
The applications of cloud computing are practically limitless. With the right middleware, a cloud computing system can execute all the programs a normal computer can run. Everything from generic word processing software to customized computer programs designed for a specific company can work on a cloud computing system.
Clients are able to access their applications and data from anywhere at any time. They can access the cloud computing system using any computer linked to the Internet. Data isn’t confined to a hard drive on one user’s computer or even a corporation’s internal network.
Cloud Computing Architecture
What makes up a cloud computing system? Although cloud computing is an emerging field of computer science, the idea has been around for years. It’s called cloud computing because the data and applications exist on a “cloud” of Web servers. When talking about a cloud computing system, it’s helpful to divide it into two sections: the front end and the back end. They connect to each other through a network, usually the Internet. The front end is the side the computer user, or client, sees. The back end is the “cloud” section of the system.
The front end includes the client’s computer (or computer network) and the application required to access the cloud computing system. Not all cloud computing systems have the same user interface. Services like Web-based e-mail programs leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique applications that provide network access to clients.
You’ve Been Virtually Served
Most of the time, servers don’t run at full capacity. That means there’s unused processing power going to waste. It’s possible to fool a physical server into thinking it’s actually multiple servers, each running with its own independent operating system. The technique is called server virtualization. By maximizing the output of individual servers, server virtualization reduces the need for more physical machines.
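The virtualization idea above can be sketched as a packing problem: a host with spare capacity accepts virtual servers until it is full. The capacity units, VM names, and greedy strategy here are illustrative assumptions, not how any particular hypervisor schedules.

```python
# Toy model of server virtualization: independent "servers" share one
# physical host so idle capacity is used instead of wasted.

def pack_vms(host_capacity, vm_demands):
    """Greedily place VM workloads (in abstract CPU units) onto one host."""
    placed, remaining = [], host_capacity
    for vm, demand in vm_demands:
        if demand <= remaining:       # only place a VM if capacity is left
            placed.append(vm)
            remaining -= demand
    return placed, remaining

# A host running one app at 10% load leaves 90 of 100 units free;
# virtualization lets other virtual servers soak up most of the rest.
placed, spare = pack_vms(100, [("web", 10), ("mail", 25), ("db", 40), ("batch", 30)])
# placed == ["web", "mail", "db"], spare == 25
```

Three of the four virtual servers fit on a single machine that previously hosted only one, which is exactly the “fewer physical machines” payoff the paragraph describes.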
On the back end of the system are the various computers, servers, and data storage systems that create the “cloud” of computing services. In theory, a cloud computing system could include practically any computer program you can imagine, from data processing to video games. Usually, each application will have its own dedicated server.
A central server administers the system, monitoring traffic and client demands to ensure everything runs smoothly. It follows a set of rules called protocols and uses a special kind of software called middleware. Middleware allows networked computers to communicate with each other. It is the software layer that lies between the operating system and applications on each side of a distributed computing system in a network.
Middleware is software that provides a link between separate software applications. Middleware is sometimes called plumbing because it connects two applications and passes data between them. Middleware allows data contained in one database to be accessed through another. This definition would also fit enterprise application integration and data integration software.
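The “plumbing” role described above is simple to sketch: a small layer sits between two applications and translates one store’s records into the shape the other expects. The store, field names, and lookup function below are all hypothetical, invented for illustration.

```python
# Minimal sketch of middleware as plumbing: a legacy system's records
# (hypothetical cryptic field names) are exposed to a newer application
# in that application's own vocabulary.

legacy_db = {"1001": {"CUST_NM": "Acme Corp", "CUST_ST": "NC"}}

def middleware_lookup(customer_id):
    """Translate a legacy record into the newer app's field names."""
    record = legacy_db.get(customer_id)
    if record is None:
        return None
    return {"name": record["CUST_NM"], "state": record["CUST_ST"]}

middleware_lookup("1001")  # {'name': 'Acme Corp', 'state': 'NC'}
```

Neither application changes: the legacy database keeps its format and the new one sees only clean records, which is why middleware is credited with connecting newer applications to older systems.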
Middleware gained popularity in the 1980s as a solution to the problem of linking newer applications to older legacy systems, although the term had been in use since 1968. It also facilitated distributed processing: the connection of multiple applications to create a larger application, usually over a network.
Middleware Organizations: IBM, Red Hat, Oracle Corporation, and Microsoft are major vendors of middleware software. Vendors such as Axway, SAP, TIBCO, Informatica, Pervasive, and webMethods were founded specifically to provide Web-oriented middleware tools. Groups such as the Apache Software Foundation, OpenSAF, and the ObjectWeb Consortium (now OW2) encourage the development of open source middleware. The Microsoft .NET Framework architecture is essentially middleware, with typical middleware functions distributed between its various products and most inter-computer interaction handled by industry standards, open APIs, or RAND software licenses. Solace Systems provides middleware in purpose-built hardware for implementations that require very high throughput.
Grids, Clouds, and Utilities
Cloud computing is closely related to grid computing and utility computing. In a grid computing system, networked computers are able to access and use the resources of every other computer on the network. In cloud computing systems, that usually only applies to the back end. Utility computing is a business model where one company pays another company for access to computer applications or data storage.
After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernizing its data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture produced significant internal efficiency improvements – small, fast-moving “two-pizza teams” could add new features faster and more easily – Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.
If a cloud computing company has a lot of clients, there’s likely to be a high demand for a lot of storage space. Some companies require hundreds of digital storage devices. A cloud computing system must make a copy of all its clients’ information and store it on other devices. The copies enable the central server to access backup machines to retrieve data that otherwise would be unreachable. Making copies of data as a backup is called redundancy.
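The redundancy scheme above can be sketched in a few lines: every write lands on more than one storage device, and a read falls back to a surviving copy when a device is unreachable. The dictionaries standing in for storage devices, and the function names, are illustrative assumptions.

```python
# Sketch of redundancy: write everything twice, read from whichever
# copy is still reachable. Dicts stand in for storage devices; a
# device that is down is represented as None.

def replicated_write(stores, key, value):
    """Store the same data on every device in the list."""
    for store in stores:
        store[key] = value

def fault_tolerant_read(stores, key):
    """Try each copy in turn, skipping devices that are down."""
    for store in stores:
        if store is not None and key in store:
            return store[key]
    raise KeyError(key)

primary, backup = {}, {}
replicated_write([primary, backup], "report.doc", b"quarterly figures")
# Even with the primary device down, the data survives on the backup:
recovered = fault_tolerant_read([None, backup], "report.doc")
```

The cost of this guarantee is the extra storage the paragraph mentions: a system with full redundancy needs at least twice the raw capacity of the data it protects.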
Why would anyone want to rely on another computer system to run programs and store data?
- Clients are able to access their applications and data from anywhere at any time. They can access the cloud computing system using any computer linked to the Internet. Data isn’t confined to a hard drive on one user’s computer or even a corporation’s internal network.
- It brings hardware costs down. Cloud computing systems reduce the need for advanced hardware on the client side. You don’t need to buy the fastest computer with the most memory, because the cloud system takes care of those needs for you. Instead, you can buy an inexpensive computer terminal. The terminal could include a monitor, input devices like a keyboard and mouse, and just enough processing power to run the middleware necessary to connect to the cloud system. You don’t need a large hard drive because you store all your information on a remote computer.
- Corporations that rely on computers have to make sure they have the right software in place to achieve goals. Cloud computing systems give these organizations company-wide access to computer applications. The companies don’t have to buy a set of software or software licenses for every employee. Instead, the company could pay a metered fee to a cloud computing company.
- Servers and digital storage devices take up space. Some companies rent physical space to store servers and databases because they don’t have it available on site. Cloud computing gives these companies the option of storing data on someone else’s hardware, removing the need for that physical space on their own premises.
- Corporations would save money on IT support. Streamlined hardware would, in theory, have fewer problems than a network of heterogeneous machines and operating systems.
- If the cloud computing system’s back end is a grid computing system, then the client could take advantage of the entire network’s processing power. Often, scientists and researchers work with calculations so complex that it would take years for individual computers to complete them. On a grid computing system, the client could send the calculation to the cloud for processing. The cloud system would tap into the processing power of all available computers on the back end, significantly speeding up the calculation.
Once an internet protocol connection is established among several computers, it is possible to share services within any one of the following layers:
A cloud client consists of computer hardware and/or computer software that relies on cloud computing for application delivery and that is in essence useless without it. Examples include some computers (example: Chromebooks), phones (example: Google Nexus series) and other devices, operating systems (example: Google Chrome OS), and browsers.
Cloud application services or “Software as a Service (SaaS)” deliver software as a service over the Internet, eliminating the need to install and run the application on the customer’s own computers and simplifying maintenance and support. A cloud application is software provided as a service. It consists of the following: a package of interrelated tasks, the definition of these tasks, and the configuration files, which contain dynamic information about tasks at run-time. Cloud tasks provide compute, storage, communication and management capabilities. Tasks can be cloned into multiple virtual machines, and are accessible through application programmable interfaces (API). Cloud applications are a kind of utility computing that can scale out and in to match the workload demand. Cloud applications have a pricing model that is based on different compute and storage usage, and tenancy metrics.
What makes a cloud application different from other applications is its elasticity. Cloud applications have the ability to scale out and in. This can be achieved by cloning tasks into multiple virtual machines at run-time to meet the changing work demand. Configuration Data is where dynamic aspects of cloud application are determined at run-time. There is no need to stop the running application or redeploy it in order to modify or change the information in this file.
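The scale-out/scale-in behavior described above is ultimately a control loop: watch the load, clone a task when demand is high, retire a clone when demand falls. The thresholds and function below are illustrative assumptions, not any platform’s actual autoscaling policy.

```python
# Minimal sketch of elasticity: adjust the number of cloned task
# instances so capacity follows demand. Thresholds are hypothetical.

def scale_decision(instances, load_per_instance, high=0.80, low=0.30):
    """Return the new instance count for the observed average load."""
    if load_per_instance > high:
        return instances + 1          # scale out: clone another VM
    if load_per_instance < low and instances > 1:
        return instances - 1          # scale in: retire an idle clone
    return instances                  # within band: hold steady

scale_decision(4, 0.91)  # 5 (demand spike: add a clone)
scale_decision(4, 0.12)  # 3 (demand slump: remove one)
scale_decision(4, 0.55)  # 4 (within band: no change)
```

Run repeatedly against live measurements, a rule like this is what lets a cloud application’s capacity, and therefore its usage-based bill, track the workload at run-time without stopping or redeploying the application.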
Cloud platform services, also known as Platform as a Service (PaaS), deliver a computing platform and/or solution stack as a service, often consuming cloud infrastructure and sustaining cloud applications. PaaS facilitates deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers. Cloud computing is driving a major change in our industry, and one of the most important parts of this change is the rise of cloud platforms. Platforms let developers write applications that run in the cloud or use services provided by the cloud. Different names are used for this kind of platform, including the on-demand platform, or Cloud 9. Regardless of the nomenclature, they all have great potential; without such a platform, every development team that creates applications for the cloud must build its own supporting infrastructure.
Cloud infrastructure services, also known as “infrastructure as a service” (IaaS), deliver computer infrastructure – typically a platform virtualization environment – as a service, along with raw (block) storage and networking. Rather than purchasing servers, software, data-center space or network equipment, clients instead buy those resources as a fully outsourced service. Suppliers typically bill such services on a utility computing basis; the amount of resources consumed (and therefore the cost) will typically reflect the level of activity.
The servers layer consists of computer hardware and/or computer software products that are specifically designed for the delivery of cloud services, including multi-core processors, cloud-specific operating systems and combined offerings.