Life Science Application, FDA and Cloud Computing


Life science applications fall under the life sciences industry.

Life sciences consist of all fields of science that involve the scientific study of living organisms, such as human beings, plants, and animals. The study of the behaviour of organisms is included only in so far as it involves a clearly biological aspect. Biology and medicine remain the main parts of the life sciences; that said, technological advances in molecular biology and biotechnology have led to a burgeoning of specializations and new interdisciplinary fields.


The R&D process in the life sciences can be a long and expensive undertaking. At a very high level, the product development process follows the basic steps described below:

• Phase 1 – identification of the molecule, initial testing, and toxicology studies

• Phase 2 – further development, formulation, and human testing

• Phase 3 – double-blind clinical trials to test efficacy, and submission for FDA approval

The life sciences industry operates under the regulatory guidelines put forward by the Food & Drug Administration (FDA).

The Food and Drug Administration is a federal agency within the Department of Health and Human Services, established to regulate the release of new foods and health-related products.

The IT organizations in life science companies must adhere to the FDA guidelines put forth in Title 21 of the Code of Federal Regulations, Part 11 (21 CFR Part 11). It defines how systems managing electronic records in life science firms must be validated and verified, to ensure that the operation of, and the information in, these systems can be trusted.

Title 21 CFR Part 11 of the Code of Federal Regulations deals with the Food and Drug Administration (FDA) guidelines on electronic records and electronic signatures in the United States. CFR Part 11, as it is called, defines the criteria under which electronic records and signatures are considered to be reliable and equivalent to paper records.

Part 11 requires drug makers, manufacturers, biologics developers, biotech companies, and other FDA-regulated industries to implement controls such as audits, system validations, audit trails, electronic signatures, and documentation for software and systems involved in processing electronic data that are (a) required to be maintained by FDA predicate rules or (b) used to demonstrate compliance with a predicate rule, with some specific exceptions.

The actual Part 11 compliance process for any application covers the software, hardware, and operational environment of the system itself. This allows an IT team to answer an auditor’s questions about how the system is controlled.

To prove these things, the system validation process has three primary components: Installation Qualification (IQ), Operational Qualification (OQ), and Performance Qualification (PQ) scripts. Organizations manage the IT environment for life science applications separately, with proper controls in place.

The CFR does not tell an organization how to do it; it states what needs to be done.

It all comes down to convincing the FDA auditor that the cloud environment conforms to the FDA compliance requirements.

Cloud computing can improve and speed up the process by reducing IT complexity and cost, while allowing R&D organizations to focus on the ‘what’ of the R&D process instead of the ‘how’.

But how cloud computing and the FDA can be brought to the same table is the biggest issue, because an audit trail is needed for the following items:

• Hardware serial number

• System configuration

• Equipment location

• Exact versions of all installed software

FDA compliance in a public cloud has been impossible so far, because you must know detailed information about the hardware and software your system will be running on, and even the exact physical location of the resources.

In a private cloud, the owner has control over all resources (hardware and software), so compliance is still possible.

In a nutshell, the public cloud model just does not fit the current validation practices in FDA-regulated organizations. However, a private cloud environment could be leveraged to give life science companies a shortcut to completing overall system validation, although the public cloud’s ‘economy of scale’ benefit will be out of reach in this case.

A community cloud may be established where several organizations have similar requirements and seek to share infrastructure so as to realize some of the benefits of cloud computing. With the costs spread over fewer users than a public cloud (but more than a single tenant), this option is more expensive but may offer a higher level of privacy, security, and/or policy (FDA) compliance.

What’s the latest?

Amazon EC2 Dedicated Instances

Dedicated Instances are Amazon EC2 instances launched within your Amazon Virtual Private Cloud (Amazon VPC) that run hardware dedicated to a single customer.

NOTE: hardware dedicated to a single customer

Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud – on-demand elastic provisioning, pay only for what you use, and a private, isolated virtual network, all while ensuring that your Amazon EC2 compute instances will be isolated at the hardware level.

You can easily create a VPC that contains dedicated instances only, providing physical isolation for all Amazon EC2 compute instances launched into that VPC, or you can choose to mix both dedicated instances and non-dedicated instances within the same VPC based on application-specific requirements.

To get started using Dedicated Instances within an Amazon VPC, perform the following steps (a scripted equivalent is sketched after the list):

  • Open and log into the AWS Management Console
  • Create an Amazon VPC if you do not already have one on the Amazon VPC tab
  • Click on Launch Instance from the EC2 Dashboard
  • Select Launch Instances Into Your Virtual Private Cloud
  • Modify the instance tenancy from Default to Dedicated in the Request Instances Wizard
  • Start using your instance with the knowledge it will not share hardware with instances launched by other customers
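
These console steps can also be scripted. Below is a minimal sketch using Python and boto3 (a newer AWS SDK than the wizard shown above); the AMI ID, subnet ID, and key pair name are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch an instance with dedicated tenancy into an existing VPC subnet.
    # ami-12345678, subnet-12345678, and mykeypair are hypothetical placeholders.
    response = ec2.run_instances(
        ImageId="ami-12345678",
        InstanceType="m1.large",
        MinCount=1,
        MaxCount=1,
        KeyName="mykeypair",
        SubnetId="subnet-12345678",
        Placement={"Tenancy": "dedicated"},  # hardware dedicated to a single customer
    )
    print(response["Instances"][0]["InstanceId"])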

Dedicated Instances certainly help in building a case for FDA compliance and are a step in the RIGHT direction.
Reference:

http://www.hpcinthecloud.com/blogs/Cloud-Infrastructure-and-FDA-Compliance-92894189.html

http://www.hpcinthecloud.com/blogs/The-Possibilities-of-Cloud-in-the-Life-Sciences-Industry-91439669.html

http://www.hpcinthecloud.com/blogs/Negotiating-IT-in-the-FDA-Regulatory-Environment-92093279.html

http://aws.amazon.com/dedicated-instances/

http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/cfrsearch.cfm


Amazon Web Services Certified Solutions Architect Certification – Notes – 1


Reference: AWS Documentation

I am just reading material from AWS and creating notes from it, sharing here so it can be useful to others as well. Screenshots are also taken from the documentation and other material from AWS.

When you deploy any type of application, you typically need to do the following:

• Set up a computer to run your application.

• Secure your application and resources.

• Set up your network for users to access your application.

• Scale your application.

• Monitor your application and resources.

• Ensure that your application is fault-tolerant.

An AMI is a template that contains a software configuration (e.g., operating system, application server, and applications).
When you launch your Amazon EC2 instances, you can store your root device data on Amazon Elastic Block Store (Amazon EBS) or the local instance store. Amazon Elastic Block Store (Amazon EBS) is a durable, block-level storage volume that you can attach to a single running Amazon EC2 instance. Amazon EBS volumes behave like raw, unformatted, external block devices you can attach.
Alternatively, the local instance store is a temporary storage volume and persists only during the life of the instance.
You can stop and restart an Amazon EBS-backed instance, but you can only run or terminate an Amazon EC2 instance store-backed instance. By default, any data on the instance store is lost if the instance fails or terminates.
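
As a sketch of the EBS model, the following boto3 snippet creates a volume and attaches it to a running instance; the instance ID is a hypothetical placeholder.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a 10 GiB EBS volume in the same Availability Zone as the instance.
    volume = ec2.create_volume(AvailabilityZone="us-east-1b", Size=10)

    # Wait until the volume is available, then attach it as a block device.
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
        Device="/dev/sdf",
    )
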
In AWS, a key pair is used to connect to your instance.
AWS has security groups that act like inbound network firewalls so you can decide who can connect to your Amazon EC2 instance over which ports.
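
A boto3 sketch of both ideas, creating a key pair and a security group that allows inbound SSH; the names are illustrative and assume a default VPC.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a key pair and save the private key locally (.pem file).
    key = ec2.create_key_pair(KeyName="mykeypair")
    with open("mykeypair.pem", "w") as f:
        f.write(key["KeyMaterial"])

    # Create a security group and allow inbound SSH (port 22) from anywhere.
    sg = ec2.create_security_group(
        GroupName="webappsecuritygroup",
        Description="Web app instances",
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )
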
Auto Scaling can automatically launch and terminate instances on your behalf according to the policies that you set. If you have defined a baseline AMI, Auto Scaling launches new instances with the exact same configuration.
Amazon CloudWatch monitors AWS cloud resources and the applications you run on AWS. You can collect and track metrics, analyze the data, and react immediately to keep your applications and business running smoothly. You can use information from Amazon CloudWatch to take action on the policies that you set using Auto Scaling. You can monitor the status of your instances by viewing status checks and scheduled events for your instances.
Elastic Load Balancing provides this service in the same way that an on-premises load balancer does. You can associate a load balancer with an Auto Scaling group. As instances are launched and terminated, the load balancer automatically directs traffic to the running instances. Elastic Load Balancing also performs health checks on each instance. If an instance is not responding, the load balancer can automatically redirect traffic to the healthy instances.
You can control access between the servers and subnets by using inbound and outbound packet filtering provided by network access control lists and security groups. Some other cases where you may want to use Amazon VPC include:

• Hosting scalable web applications in the AWS cloud that are connected to your data center

• Extending your corporate network into the cloud

• Disaster recovery

To make your web application fault-tolerant, you need to consider deploying your computers in different physical locations. Availability Zones are analogous to data centers. It’s even more advantageous to spread your instances across Regions. If a region, including all of its Availability Zones, becomes completely unavailable, your traffic is routed to another region.

 1. AWS Use Cases - 1
 1. AWS Use Cases - 2
AWS currently provides AMIs based on the following versions of Windows:

  • Microsoft Windows Server 2012 (64-bit)
  • Microsoft Windows Server 2008 R2 (64-bit)
  • Microsoft Windows Server 2008 (64-bit)
  • Microsoft Windows Server 2008 (32-bit)
  • Microsoft Windows Server 2003 (64-bit)
  • Microsoft Windows Server 2003 (32-bit)
3. Application Architecture in AWS

As an example, we’ll walk through a deployment of a simple web application. If you’re doing something else, you can adapt this example architecture to your specific situation. In this diagram, Amazon EC2 instances in a security group run the application and web server. The Amazon EC2 Security Group acts as an exterior firewall for the Amazon EC2 instances. An Auto Scaling group maintains a fleet of Amazon EC2 instances that can be automatically added or removed in order to handle the presented load. This Auto Scaling group spans two Availability Zones to protect against potential failures in either Availability Zone. To ensure that traffic is distributed evenly among the Amazon EC2 instances, an Elastic Load Balancer is associated with the Auto Scaling group. If the Auto Scaling group launches or terminates instances to respond to load changes, the Elastic Load Balancer automatically adjusts accordingly.

Install the Auto Scaling command line tools on your local computer.

PROMPT>as-cmd

This command returns a list of all the Auto Scaling commands and their descriptions.
Amazon EC2 instances created from a public AMI use a public/private key pair, rather than a password, for signing in. The public key is embedded in your instance. You use the private key (a file with a .pem extension) to sign in securely without a password.

A security group defines firewall rules for your instances. The new rules are automatically enforced for all running instances.

4. Security Group in AWS
For Windows, it can take up to 30 minutes to get the original password from the time you launched your Amazon EC2 instance.
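
If you script this, boto3’s get_password_data can poll for the encrypted administrator password (the instance ID is hypothetical); the field stays empty until the password is ready.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Returns an empty PasswordData field until the Windows password is generated.
    resp = ec2.get_password_data(InstanceId="i-0123456789abcdef0")
    if resp["PasswordData"]:
        # The value is RSA-encrypted with the instance's key pair; decrypt it
        # with your .pem private key (the console does this for you).
        print("Encrypted password is available")
    else:
        print("Password not ready yet; try again later")
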
Elastic Load Balancing automatically distributes and balances the incoming application traffic among all the instances you are running, improving the availability and scalability of your application.

5. Elastic Load Balancing in AWS

5. Listener Configuration in Elastic Load Balancing in AWS

This example uses a single forward slash so that Elastic Load Balancing sends the query to your HTTP server’s default home page, whether that default page is named index.html, default.html, or a different name.
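
For comparison, here is a hedged boto3 sketch of the same listener and health-check configuration against the classic Elastic Load Balancing API; the name and zones are placeholders.

    import boto3

    elb = boto3.client("elb", region_name="us-east-1")

    # Create a classic load balancer with an HTTP listener on port 80.
    elb.create_load_balancer(
        LoadBalancerName="MyLB",
        Listeners=[{
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
        }],
        AvailabilityZones=["us-east-1b", "us-east-1c"],
    )

    # Health check: ping the default home page ("/") over HTTP.
    elb.configure_health_check(
        LoadBalancerName="MyLB",
        HealthCheck={
            "Target": "HTTP:80/",
            "Interval": 30,
            "Timeout": 5,
            "UnhealthyThreshold": 2,
            "HealthyThreshold": 2,
        },
    )
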
After you create a load balancer, you can modify any of the settings except for Load Balancer Name and Port Configuration.

5. Port Configuration in Elastic Load Balancing in AWS
As a best practice, you should have sufficient instances across Availability Zones to survive the loss of any one Availability Zone.
The rules for this security group will be enforced when the instances that use these rules are launched.
Auto Scaling launches and terminates Amazon EC2 instances automatically according to user-defined policies, schedules, and alarms. For example, you can instruct Auto Scaling to launch an additional instance whenever CPU usage on one or more existing instances exceeds 60 percent for ten minutes, or you could tell Auto Scaling to terminate half of your website’s instances over the weekend, when you expect traffic to be low. Auto Scaling groups can even work across multiple Availability Zones.

With Auto Scaling, you can ensure that you always have at least one healthy instance running.

By setting the minimum and maximum number to be the same, you can ensure that you always have the desired number of instances even if one instance fails.

When you create your actual website, as a best practice you should launch sufficient instances across Availability Zones to survive the loss of any one Availability Zone. Additionally, the maximum number of instances must be greater than the minimum to make use of the Auto Scaling feature.

In this example, you will set up the basic infrastructure that must be in place to get Auto Scaling started for most applications. You’ll do the following:

• Create a launch configuration.

• Create an Auto Scaling group.

• Create a policy for your Auto Scaling group.

PROMPT>as-create-launch-config MyLC --image-id ami-191dc970 --instance-type m1.large --group webappsecuritygroup --key mykeypair --monitoring-disabled

--monitoring-disabled specifies that you want to use basic monitoring instead of detailed monitoring. By default, detailed monitoring is enabled.
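
The as-* command line tools shown here have since been deprecated; as a hedged boto3 equivalent of the launch configuration above, assuming the same names:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Equivalent of the as-create-launch-config command above.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="MyLC",
        ImageId="ami-191dc970",
        InstanceType="m1.large",
        SecurityGroups=["webappsecuritygroup"],
        KeyName="mykeypair",
        InstanceMonitoring={"Enabled": False},  # basic instead of detailed monitoring
    )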

To create an Auto Scaling group in which you can launch multiple Amazon EC2 instances, you will use the as-create-auto-scaling-group command. Use the following parameters to define your Auto Scaling group.

PROMPT>as-create-auto-scaling-group MyAutoScalingGroup --launch-configuration MyLC --availability-zones us-east-1b,us-east-1c --min-size 2 --max-size 2 --load-balancers MyLB

To create a policy to enlarge your fleet of instances, we will use the as-put-scaling-policy command. This policy applies to the Auto Scaling group you created in the previous step.

PROMPT>as-put-scaling-policy MyScaleUpPolicy --auto-scaling-group MyAutoScalingGroup --adjustment=1 --type ChangeInCapacity --cooldown 300

--adjustment is the number of instances by which you want to increment or decrement. For this example, use 1.

--cooldown is the time, in seconds, after an action before Auto Scaling should evaluate conditions again.

Auto Scaling can decrease the number of instances when your application doesn’t need the resources, saving you money. To create a policy for terminating an instance, start from the policy you just created, change the policy name, and then change the value of --adjustment from 1 to -1.

PROMPT>as-put-scaling-policy MyScaleDownPolicy --auto-scaling-group MyAutoScalingGroup "--adjustment=-1" --type ChangeInCapacity --cooldown 300

Amazon CloudWatch is a web service that enables you to monitor, manage, and publish various metrics and to configure alarm actions based on those metrics. The following diagram demonstrates how Amazon CloudWatch and Auto Scaling work together. The Amazon EC2 instance reports its NetworkOut metric to Amazon CloudWatch. Amazon CloudWatch fires an alarm if the specified threshold has been exceeded and reports this to the Auto Scaling group.

6. Amazon CloudWatch in AWS

6. Alarms in Amazon CloudWatch in AWS
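
To make the wiring concrete, here is a hedged boto3 sketch that creates the scale-up policy and then a CloudWatch alarm that invokes it when average NetworkOut crosses a threshold; the threshold value is illustrative only.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Create the scale-up policy and capture its ARN for the alarm action.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="MyAutoScalingGroup",
        PolicyName="MyScaleUpPolicy",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # Alarm: fire when average NetworkOut exceeds an illustrative threshold.
    cloudwatch.put_metric_alarm(
        AlarmName="HighNetworkOut",
        Namespace="AWS/EC2",
        MetricName="NetworkOut",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "MyAutoScalingGroup"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=5000000.0,  # bytes; illustrative value only
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )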

PROMPT>as-update-auto-scaling-group MyAutoScalingGroup --min-size 0 --max-size 0

PROMPT>as-describe-auto-scaling-groups MyAutoScalingGroup --headers

PROMPT>as-delete-auto-scaling-group MyAutoScalingGroup

Cost Benefits of Cloud Computing


Cloud Computing is a “newsworthy” term in the IT industry in recent times and it is here to stay!

Cloud Computing is not a technology, or even a set of technologies — it’s an idea. Cloud Computing is not a standard defined by any standards organization.

A basic understanding of the cloud: the “cloud” represents the Internet. Instead of using applications installed on your computer or saving data to your hard drive, you’re working and storing stuff on the Web. Data is kept on servers and used by the service you’re using; tasks are performed in your browser using an interface/console provided by the service.

A credit card and Internet access are all you need to make an investment in technology. Businesses will find it easier than ever to provision technology services without the involvement of IT.

There are many definitions available in the market for cloud computing, but we have aligned ours with the NIST publication and with our understanding. NIST defines cloud computing by describing five essential characteristics, three cloud service models, and four cloud deployment models.

NIST’s Architecture of Cloud Computing

Ref: NIST

“Cloud Computing is a self-service which is on-demand, elastic, measured, multi-tenant, pay-per-use, cost-effective and efficient.” It is the access of data, software applications, and computer processing power through a ‘cloud’: a group of many online, on-demand resources. Tasks are assigned to a combination of connections, software, and services accessed over a network. This network of servers and connections is collectively known as “the cloud.”

Cloud service delivery is divided among three fundamental classifications, referred to as the “SPI Model”:

Cloud Service Models – IaaS, PaaS, SaaS

Software as a Service is a model of software deployment where an application is hosted as a service provided to customers across the Internet. By eliminating the need to install and run the application on the customer’s own computer, SaaS alleviates the customer’s burden of software maintenance, ongoing operation, and support. Salesforce is a very popular Customer Relationship Management (CRM) application that is offered only as a service.

The PaaS model makes all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely available from the Internet. Google App Engine (GAE) is an example of PaaS. GAE provides a Python environment within which you can build, test, and then deploy your applications.
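
To give a flavor of the PaaS programming model, here is a minimal sketch of a request handler for the classic GAE Python runtime (webapp2); the platform handles the servers, scaling, and deployment around it.

    import webapp2

    # A single request handler: GAE routes HTTP requests to this class.
    class MainPage(webapp2.RequestHandler):
        def get(self):
            self.response.headers["Content-Type"] = "text/plain"
            self.response.write("Hello, PaaS world!")

    # The WSGI application that App Engine serves; "/" maps to MainPage.
    app = webapp2.WSGIApplication([("/", MainPage)], debug=True)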

Infrastructure as a Service (IaaS) is the delivery of computer infrastructure as a service. Rather than purchasing servers, software, data center space or network equipment, clients instead buy those resources as a fully outsourced service. Amazon Web Services (AWS) is one of the pioneers of such an offering. AWS’ Elastic Compute Cloud (EC2) is “a web service that provides resizable compute capacity”.

There are four deployment models for cloud services regardless of the service model utilized (SPI).

Public clouds refer to shared cloud services that are made available to a broad base of users. Although many organizations use public clouds for private business benefit, they don’t control how those cloud services are operated, accessed or secured. Popular examples of public clouds include Amazon EC2, Google Apps and Salesforce.com.

Private cloud describes an IT infrastructure in which a shared pool of computing resources—servers, networks, storage, applications and software services—can be rapidly provisioned, dynamically allocated and operated for the benefit of a single organization.

Hybrid cloud represents a composition of two or more cloud deployment models (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.

Community cloud represents infrastructure that is shared by several organizations and supports a specific community that has shared concerns, e.g., FDA compliance, which needs specific controls whose audit requirements can’t be met by other deployment models.

Cloud computing brings efficiencies and savings. The diverse benefits of cloud computing are undoubtedly worth pursuing. Cost-cutting is at the top of most companies’ lists of priorities in these challenging economic times. Having turned from revolutionary possibility into increasingly well-established custom, the cost of ‘outsourcing to the cloud’ is now falling dramatically.

By paying only for the resources used, operating costs can be reduced. After all, in-house data centres typically leave 85%–90% of available capacity idle. Cloud computing can lead to energy savings too, removing from individual companies the costly burden of running a data centre plus generator back-up and uninterruptible power supplies. This results in a reduction of both CAPEX and OPEX.

Cloud computing is in its formative years, but expect it to grow up quickly. The prospects of cloud computing are mind-boggling, and the technology and business options will increase exponentially.

Still, the question remains: how are clouds beneficial to enterprises?

  • Focus on core business
  • Cloud computing increases the profitability by improving resource utilization. Pooling resources into large clouds drives down costs and increases utilization by delivering resources only for as long as those resources are needed.
  • Cloud computing is particularly valuable to small and medium businesses, where effective and affordable IT tools are critical to helping them become more productive without spending lots of money on in-house resources and technical equipment.
  • Cost savings
  • Remote access
  • Ease of availability
  • Real-time collaboration capabilities
  • Gain access to latest technologies
  • We can leverage the sheer processing power of the cloud to do things that traditional productivity applications cannot do. For instance, users can instantly search over 25 GB worth of e-mail online, which is nearly impossible to do on a desktop.
  • To take another example, each document created through Google Apps is easily turned into a living information source, capable of pulling the latest data from external applications, databases and the Web. This revolutionizes processes as simple as creating a Google spreadsheet to compare stock prices from vendors over time, because the cells can be populated and updated as the prices change in real time.
  • Cloud computing offers almost unlimited computing power and collaboration at a massive scale for enterprises of all sizes.
  • “Salesforce.com has 1.2m users on its platform. If that’s not scalable show me something that is. Gmail is SaaS and how many users are on that?”
  • Multi-tenancy enables sharing of resources and costs among a large pool of users, allowing for:
  1. Centralization of infrastructure in areas with lower costs (such as real estate, electricity, etc.)
  2. Peak-load capacity increases (users need not engineer for highest possible load-levels)
  3. Utilization and efficiency improvements for systems that are often only 10-20% utilized.
  • Sustainability comes about through improved resource utilization, more efficient systems, and carbon neutrality.

But, are there any issues with Cloud Computing?

  • The benefits of cloud computing will not be realized if businesses are not convinced that it is secure. Trust is at the centre of success and providers have to prove themselves worthy of that trust if hosted services are going to work.
  • CIA (Confidentiality, Integrity, Availability)
  • Application performance
  • IT Security Standards – There are multiple standards for security protocol for IT systems that have yet to be implemented into cloud computing.
  • Regulatory compliance— the vendor will be required to participate in internal and external audits. They will need to find a way to accommodate auditors from all firms using their service. [FDA Compliance is not feasible yet.]

Let’s consider Facts and Figures before jumping into minor details of Cloud Computing. Compare the annual cost of Amazon EC2 with an equivalent deployment in co-located and on-site data centers by entering a few basic inputs (Ref: Amazon EC2 Cost Comparison Calculator).

Cost Component – Amazon EC2 (Public Cloud), Co-Location, On-Site

Configuration:

High-CPU Instances: Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.

20 High-CPU Extra Large Instances (75% utilization) plus 5 peak instances at 10% annual utilization, each with the following specs (a rough cost arithmetic sketch follows the list):

  • 7 GB of memory
  • 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
  • 1690 GB of instance storage
  • 64-bit platform
  • I/O Performance: High
  • Avg. Monthly Data Transfer “In” Per Instance (GB) -10 GB
  • Avg. Monthly Data Transfer “Out” Per Instance (GB)- 20 GB
  • Region: US-East
  • OS: Linux
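
To make the utilization arithmetic concrete, here is a small Python sketch estimating annual instance-hours and compute cost for this configuration; the hourly rate is a hypothetical placeholder, not a published Amazon price.

    # Estimate annual on-demand instance-hours for the configuration above.
    HOURS_PER_YEAR = 8760
    base_instances, base_utilization = 20, 0.75
    peak_instances, peak_utilization = 5, 0.10
    hourly_rate = 0.68  # hypothetical $/hour placeholder, not a published price

    base_hours = base_instances * base_utilization * HOURS_PER_YEAR
    peak_hours = peak_instances * peak_utilization * HOURS_PER_YEAR
    annual_cost = (base_hours + peak_hours) * hourly_rate

    print(f"Instance-hours/year: {base_hours + peak_hours:,.0f}")
    print(f"Estimated annual compute cost: ${annual_cost:,.2f}")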

Cost Details:

On-site

TCO On-Site

Co-Location

TCO Co-Location

Amazon EC2 (Cloud Computing)

TCO Amazon EC2

Annual Total Cost of Ownership (TCO) Summary


Hence … Proved 🙂
