LIONSGATE is a $2 billion diversified global entertainment corporation that produces feature films and television shows and distributes them worldwide. Its productions include the Emmy Award-winning TV show Mad Men and the film The Hunger Games, and they appear in theaters, on TV, and online.
The Challenge
As a successful media and entertainment company, LIONSGATE was faced with IT challenges that confront many growing businesses:
- Ever-expanding infrastructure and costs
- Increasing enterprise application workloads
- Tighter time-to-market requirements
Why Amazon Web Services
Theresa Miller, Executive Vice President, Information Technology for LIONSGATE, explains why the company decided to enlist Amazon Web Services (AWS) to help them meet these objectives: “The economics were compelling. AWS cloud services proved to be easy to use via the Management Console, APIs, and tools. The system is secure and flexible to work with. Also, working with AWS as a company was a very positive experience.”
LIONSGATE started using the following AWS products in 2010:
- Amazon Simple Storage Service (Amazon S3) for storage
- Amazon Elastic Compute Cloud (Amazon EC2) for compute
- Amazon Elastic Block Store (Amazon EBS) for Amazon EC2 storage
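As a rough illustration of how those three services fit together (a generic boto3 sketch with placeholder names and sizes, not LIONSGATE's actual configuration), the snippet below creates an S3 bucket for media assets, launches an EC2 instance, and attaches an EBS volume to it.

```python
# Minimal boto3 sketch of the three services named above.
# All identifiers (bucket name, AMI ID, instance/volume sizes) are
# illustrative placeholders, not LIONSGATE's actual configuration.
import boto3

region = "us-west-2"
ec2 = boto3.resource("ec2", region_name=region)
s3 = boto3.client("s3", region_name=region)

# Amazon S3: object storage for media assets.
s3.create_bucket(
    Bucket="example-media-archive",
    CreateBucketConfiguration={"LocationConstraint": region},
)

# Amazon EC2: compute capacity.
instance = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
)[0]
instance.wait_until_running()
instance.reload()  # refresh placement details

# Amazon EBS: block storage attached to the instance.
volume = ec2.create_volume(
    Size=100,
    VolumeType="gp3",
    AvailabilityZone=instance.placement["AvailabilityZone"],
)
ec2.meta.client.get_waiter("volume_available").wait(VolumeIds=[volume.id])
volume.attach_to_instance(InstanceId=instance.id, Device="/dev/sdf")
```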
"We have no concerns about security or compliance. It's not easy to replicate the same security levels that we have on premises, but working in AWS, we're confident that we're following best practices for data protection, network access, and other security measures", Leandro Gelasi, IT Officer
The Challenge
Despite its long-established roots, Corte dei conti (Cdc) isn’t an institution that has remained entrenched in the past. It understands that modernization is key to staying relevant in a fast-moving world, and as a result it has embraced change in its processes and structure. IT has been central to this. Leandro Gelasi, IT officer at Corte dei conti, says, “We have a deep commitment to continuous improvement, and to support this goal we need an agile and elastic IT infrastructure.” Gelasi and his team wanted to move away from time-consuming management of physical IT. “We wanted to focus on providing an excellent service, rather than on handling hardware,” he says. A larger initiative to boost employee productivity went hand in hand with this efficiency drive, as Gelasi continues, “We wanted to change the way our 3,000-plus employees worked, enabling them to access applications from anywhere, on any device. But we had to ensure that this flexibility for staff didn’t jeopardize the safety of data.” Given its high-profile role in keeping public finances in check—and with the Italian government requiring agencies to cut IT expenditure in line with wider budget cuts—Cdc also had to focus on reducing its own costs. With a largely Citrix-based infrastructure, Corte dei conti had invested a lot in training its staff in this technology. It wanted to make the most of this investment while at the same time making its architecture more agile.
Why Amazon Web Services
The answer was a hybrid cloud environment, and Cdc chose Amazon Web Services (AWS) and AWS Advanced Consulting Partner XPeppers to help it in this journey, starting with adopting a virtual desktop infrastructure (VDI) based on Amazon WorkSpaces. Gelasi says, “We looked at AWS and realized it was the perfect platform for our migration to the cloud. We had worked with XPeppers before, so it was our first choice to help us move to AWS and ensure seamless integration with our Citrix environment.” The infrastructure runs on 25 Amazon Elastic Compute Cloud (Amazon EC2) instances, which run only during office hours, between 8:00 am and 8:00 pm. Cdc uses AWS Lambda to orchestrate the startup and shutdown for each instance. Each department has a dedicated Amazon Virtual Private Cloud (Amazon VPC) and a virtual private network connection between the VPCs and Cdc’s data centers. Paolo Latella, solutions architect at XPeppers, says, “Because it deals with sensitive data, Corte dei conti needs a secure architecture. We worked with Cdc to explain best practices in the cloud, ensuring that it maintains the highest security levels.” For example, AWS Identity and Access Management (IAM) helps the court control access to resources, and Amazon CloudWatch allows the team to keep applications running smoothly. Plus, through the AWS Marketplace, Cdc can choose the software and services it needs to implement a security model that replicates its on-premises structure.
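The case study doesn’t include Cdc’s orchestration code, but the office-hours pattern it describes is straightforward to sketch. Below is a minimal, hypothetical AWS Lambda handler in Python using boto3; a scheduled rule would invoke it with an "action" of "start" at 8:00 am and "stop" at 8:00 pm, and the Schedule=office-hours tag used to select instances is an assumed convention, not Cdc’s actual setup.

```python
# Hypothetical sketch of office-hours scheduling for Amazon EC2 instances.
# Invoked by a scheduled rule with an event such as {"action": "start"} in
# the morning and {"action": "stop"} in the evening. The tag key/value used
# to select instances is an assumption for illustration only.
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    action = event.get("action", "stop")

    # Find instances opted in to office-hours scheduling via a tag.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Schedule", "Values": ["office-hours"]}]
    )["Reservations"]
    instance_ids = [
        i["InstanceId"] for r in reservations for i in r["Instances"]
    ]
    if not instance_ids:
        return {"action": action, "instances": []}

    if action == "start":
        ec2.start_instances(InstanceIds=instance_ids)
    else:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"action": action, "instances": instance_ids}
```

Two scheduled CloudWatch Events (EventBridge) rules pointing at this one function, each passing a different action, would cover both transitions.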
The Benefits
First and foremost, Gelasi and his team feel safe working in the cloud. “We have no concerns about security or compliance,” he says. “It’s not easy to replicate the same security levels that we have on premises, but working in AWS, we’re confident that we’re following best practices for data protection, network access, and other security measures.”
He continues, “The service that our users are getting is vastly improved. We have very little feedback, which is great for us. No news is good news in IT.” In addition, internal users have more flexibility and can access applications on their laptops, tablets, and smartphones from anywhere. “We have made it possible for court employees such as magistrates to work effectively from home. Previously, they could only access applications from the office, but now they can do this wherever they are. As a result, they’re much more productive. Decisions get made faster and the whole system works better. It’s a brilliant result for our entire organization,” says Gelasi. Managing processes is also easier, so the Cdc IT team can focus on developing services for both internal and external clients. One of the IT team’s goals in the organization’s larger drive to boost efficiency is to provide services to government agencies across Italy. Gelasi says, “With our AWS infrastructure, it’s easier for us to offer IT to other institutions, which helps them cut costs in line with government initiatives.” “We’re saving money in the cloud too,” he continues. “By moving to AWS, we avoided €40,000 in hardware costs.” Operating expenses are more difficult to determine, but Gelasi is convinced that with the VDI project, Cdc is cutting energy consumption and saving money on air conditioning and electricity. “One of the drivers of the project was to get better visibility of costs and be more accountable,” he says. “As we move more of our infrastructure to the AWS cloud, we’ll be able to do this too.” Having successfully deployed VDI to 250 users across Cdc, the team is now rolling it out across all of the organization’s regions, eventually giving its 3,000 employees the tools to be more productive. The court is also working with XPeppers to build its disaster recovery on AWS and move more workloads to the cloud for improved agility. “The biggest benefit of working in the AWS cloud? I can’t pinpoint just one,” says Gelasi. “It’s the whole package. We’ve got more flexibility, we can scale seamlessly, and we have more time to provide a great service to our customers.”
The Challenge
Since its founding in 2012, Coinbase has quickly become the leader in bitcoin transactions. As it prepared to respond to ever-increasing customer demand for bitcoin transactions, the company knew it needed to invest in the right underlying technology. “We’re now in the phase of legitimizing this currency and bringing it to the masses,” says Rob Witoff, director at Coinbase. “As part of that, our core tenets are security, scalability, and availability.”
Security is the most important of those tenets, according to Witoff. “We control hundreds of millions of dollars of bitcoin for our customers, placing us among the largest reserves in our industry,” says Witoff. “Just as a traditional bank would heavily guard its customers’ assets inside a physical bank vault, we take the same or greater precautions with our servers.”
Scalability is also critical because Coinbase needs to be able to elastically scale its services globally without consuming precious engineering resources. “As a startup, we’re meticulous about where we invest our time,” says Witoff. “We want to focus on how our customers interact with our product and the services we’re offering. We don’t want to reinvent solutions to already-solved foundational infrastructure.” Coinbase also strives to give its developers more time to focus on innovation. “We have creative, envelope-pushing engineers who are driving our startup with innovative new services that balance a delightful experience with uncompromising security,” says Witoff. “That’s why we need to have our exchange on something we know will work.”
Additionally, Coinbase sought a better data analytics solution. “We generate massive amounts of data from the top to the bottom of our infrastructure that would traditionally be stored in a remote and dated warehouse. But we’ve increasingly focused on adopting new technologies without losing a reliable, trusted core,” says Witoff. “At the same time, we wanted the best possible real-time insight into how our services are running.”
To support its goals, Coinbase decided to deploy its new bitcoin exchange in the cloud. “When I joined Coinbase in 2014, the company was bootstrapped by quite a few third-party hosting providers,” says Witoff. “But because we’re managing actual value and real assets on our machines, we needed to have complete control over our environment.”
Why Amazon Web Services
Coinbase evaluated different cloud technology vendors in late 2014, but it was most confident in Amazon Web Services (AWS). In his previous role at NASA’s Jet Propulsion Laboratory, Witoff gained experience running secure and sensitive workloads on AWS. Based on this, Witoff says he “came to trust a properly designed AWS cloud.” The company began designing the new Coinbase Exchange by using AWS Identity and Access Management (IAM), which securely controls access to AWS services. “Cloud computing provides an API for everything, including accidentally destroying the company,” says Witoff. “We think security and identity and access management done correctly can empower our engineers to focus on products within clear and trusted walls, and that’s why we implemented an auditable self-service security foundation with AWS IAM.” The exchange runs inside the Coinbase production environment on AWS, powered by a custom-built transactional data engine alongside Amazon Relational Database Service (Amazon RDS) instances and PostgreSQL databases. Amazon Elastic Compute Cloud (Amazon EC2) instances also power the exchange. The organization provides reliable delivery of its wallet and exchange to global customers by distributing its applications natively across multiple AWS Availability Zones. Coinbase created a streaming data insight pipeline in AWS, with real-time exchange analytics processed by Amazon Kinesis, a managed service for processing streaming data at scale. “All of our operations analytics are piped into Kinesis in real time and then sent to our analytics engine so engineers can search, query, and find trends from the data,” Witoff says. “We also take that data from Kinesis into a separate disaster recovery environment.” Coinbase also integrates the insight pipeline with AWS CloudTrail log files, which are sent to Amazon Simple Storage Service (Amazon S3) buckets, then to the AWS Lambda compute service, and on to Kinesis containers based on Docker images. This gives Coinbase complete, transparent, and indexed audit logs across its entire IT environment. Every day, 1 TB of data—about 1 billion events—flows through that path. “Whenever our security groups or network access controls are modified, we see alerts in real time, so we get full insight into everything happening across the exchange,” says Witoff. For additional big-data insight, Coinbase uses Amazon Elastic MapReduce (Amazon EMR), a web service that uses the Hadoop open-source framework to process data, and Amazon Redshift, a managed petabyte-scale data warehouse. “We use Amazon EMR to crunch our growing databases into structured, actionable Redshift data that tells us how our company is performing and where to steer our ship next,” says Witoff. All of the company’s networks are designed, built, and maintained through AWS CloudFormation templates. “This gives us the luxury of version-controlling our network, and it allows for seamless, exact network duplication for on-demand development and staging environments,” says Witoff. Coinbase also uses Amazon Virtual Private Cloud (Amazon VPC) endpoints to optimize throughput to Amazon S3, and Amazon WorkSpaces to provision cloud-based desktops for global workers. “As we scale our services around the world, we also scale our team. We rely on Amazon WorkSpaces for on-demand access by our contractors to appropriate slices of our network,” Witoff says. Coinbase launched the U.S. Coinbase Exchange on AWS in February 2015, and recently expanded to serve European users.
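To make the audit-log leg of that pipeline concrete, here is a minimal, hypothetical Lambda handler (not Coinbase’s code) that reacts to a CloudTrail file landing in S3, unpacks the gzipped JSON, and forwards each API event to a Kinesis stream; the stream name is a placeholder, and CloudTrail’s standard gzipped file layout is the only format assumed.

```python
# Hypothetical sketch of the CloudTrail -> S3 -> Lambda -> Kinesis leg of an
# audit pipeline. Stream name and event wiring are illustrative only.
import gzip
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
kinesis = boto3.client("kinesis")
STREAM_NAME = "audit-events"  # placeholder stream name

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # CloudTrail delivers gzipped JSON files with a top-level "Records" list.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail_records = json.loads(gzip.decompress(body))["Records"]

        # Forward each API event to Kinesis for downstream analytics.
        entries = [
            {"Data": json.dumps(r).encode("utf-8"), "PartitionKey": r["eventID"]}
            for r in trail_records
        ]
        # put_records accepts at most 500 records per call.
        for i in range(0, len(entries), 500):
            kinesis.put_records(StreamName=STREAM_NAME, Records=entries[i:i + 500])
```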
The Benefits
Coinbase is able to securely store its customers’ funds using AWS. “I consider Amazon’s cloud to be our own private cloud, and when we deploy something there, I trust that my staff and administrators are the only people who have access to those assets,” says Witoff. “Also, securely storing bitcoin remains a major focus area for us that has helped us gain the trust of consumers across the world. Rather than spending our resources replicating and securing a new data center with solved challenges, AWS has allowed us to hone in on one of our core competencies: securely storing private keys.” Coinbase has also relied on AWS to quickly grow its customer base. “In three years, our bitcoin wallet base has grown from zero to more than 3 million. We’ve been able to drive that growth by providing a fast, global wallet service, which would not be possible without AWS,” says Witoff. Additionally, the company has better visibility into its business with its insight pipeline. “Using Kinesis for our insight pipeline, we can provide analytical insights to our engineering team without forcing them to jump through complex hoops to traverse our information,” says Witoff. “They can use the pipeline to easily view all the metadata about how the Coinbase Exchange is performing.” And because Kinesis provides a one-to-many analytics delivery method, Coinbase can collect metrics in its primary database as well as through new, experimental data stores. “As a result, we can keep up to speed with the latest, greatest, most exciting tools in the data science and data analytics space without having to take undue risk on unproven technologies,” says Witoff. As a startup company that built its bitcoin exchange in the cloud from day one, Coinbase has more agility than it would have had if it created the exchange internally. “By starting with the cloud at our core, we’ve been able to move fast where others dread,” says Witoff. “Evolving our network topology, scaling across the globe, and deploying new services are never more than a few actions away. This empowers us to spend more time thinking about what we want to do instead of what we’re able to do.” That agility is helping Coinbase meet the demands of fast business growth. “Our exchange is in hyper-growth mode, and we’re in the process of scaling it all across the world,” says Witoff. “For each new country we bring on board, we are able to scale geographically and at the touch of a button launch more machines to support more users.” By using AWS, Coinbase can concentrate even more on innovation. “We trust AWS to manage the lowest layers of our stack, which helps me sleep at night,” says Witoff. “And as we go higher up into that stack—for example, with our insight pipeline—we are able to reach new heights as a business, so we can focus on innovating for the future of finance.”
The Challenge
With the acquisition of hardware and platform partner AlertMe in 2015, Centrica Connected Home was faced with the prospect of a significant shift in focus. Previously the relationship had been one of vendor and customer, with AlertMe also pursuing its own goals for expansion and licensing of its software. After the acquisition, Centrica Connected Home moved quickly to integrate the technical talent from the two companies and then to realign the development efforts of the teams. The new common goals of product evolution, feature enhancement, and international launch presented a number of challenges in the form of a rapid scaling requirement for the live platform, whilst maintaining stability and availability. Added to these demands on the company were an expansion into new markets and brand-new product launches, including a smart boiler service and a growing ecosystem of new Hive smart home devices. The team even found the time to develop deeply functional Alexa skills for its products, making Centrica Connected Home a Smart Home Launch Partner for the Amazon Echo in the UK in 2016.
Why Amazon Web Services
The entire end-to-end infrastructure on which the Hive Platform is based—including marketing and support websites, data collection services, and the real-time store for user and analytics data—runs on AWS technologies. The core technologies used to power Hive are Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), and Amazon Simple Storage Service (Amazon S3). The new challenges meant the team had to seek solutions in additional specialised, managed AWS services. Working with the AWS IoT Service Team under Claudiu Pasa, the EMEA IoT Lead for Amazon Web Services, they began a proof-of-concept project to migrate from their existing device management platform to a specialised AWS IoT-based service for new and existing devices. This deeper AWS integration enabled the replacement of other platform components with a leaner, faster Lambda-based microservices infrastructure, with Amazon EC2 and Amazon RDS still playing a large part in the infrastructure for longer-lived components such as data stores and platform UIs. Additional use of integrated AWS services such as Amazon S3 data storage and web hosting, Amazon API Gateway, Amazon Cognito, and Amazon CloudFront offers further benefits when used in concert with more traditional infrastructure: lower latency to the customer, fewer scalability limitations, and greater resilience. This allows the engineering team to focus on systems that add value to the business, such as advanced monitoring using AWS partner Wavefront, aggregated logging and application analysis using Amazon Elasticsearch Service, and cost analysis and attribution using resource tags and consolidated billing in AWS Organizations.
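As a rough illustration of the kind of device management AWS IoT enables (not Centrica’s actual implementation; the thing name and the desired-state field are invented for the example), the sketch below uses the boto3 iot-data client to request a new target temperature through a device shadow, which a connected device applies the next time it syncs.

```python
# Hypothetical sketch of device management via an AWS IoT device shadow.
# The thing name and desired-state fields are invented for illustration and
# are not Centrica Connected Home's actual device model.
import json

import boto3

iot_data = boto3.client("iot-data")

def set_target_temperature(thing_name: str, celsius: float) -> dict:
    """Record a desired state in the device's shadow; the device applies it
    and reports back the next time it connects."""
    payload = {"state": {"desired": {"targetTemperature": celsius}}}
    response = iot_data.update_thing_shadow(
        thingName=thing_name,
        payload=json.dumps(payload).encode("utf-8"),
    )
    return json.loads(response["payload"].read())

if __name__ == "__main__":
    print(set_target_temperature("example-living-room-thermostat", 21.0))
```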
The Benefits
Centrica Connected Home is a great example of lean enterprise in action. Although it’s part of one of the UK’s biggest corporations, it operates in an agile way, learning quickly while delivering a cutting-edge product to hundreds of thousands of satisfied customers. “Our teams are empowered to make their own decisions and mistakes, and can pick up the tools and run with them, trying new things and innovating. AWS helps us to achieve this lean, agile infrastructure because we can work flexibly and without constraint, but within a consistent environment,” says Adrian Heesom, COO of Centrica Connected Home. Heesom continues, “Our ability to develop new features is much easier in our AWS environment. Plus, the AWS cloud delivers a consistently available hosting platform for our services. The ease of deploying resources in multiple physical AWS locations gives us confidence in the reliability of our environment.” Christopher Livermore, Head of Site Reliability Engineering at Centrica Connected Home, says, “Leveraging managed, optimised services such as Amazon EC2, Amazon S3, AWS IoT, Amazon API Gateway, AWS Lambda, Amazon CloudFront, Amazon RDS, and Amazon Cognito allows our developers and engineers to focus on product delivery and its value to our customers. It abstracts away some of the common problems of operating system configuration and architecture design. It also makes it easier to maintain a good, common framework for product development across all our teams, internationally.” Cost is a two-fold benefit for Centrica Connected Home. It can access a range of environments to experiment cost-effectively, while paying only for IT resources as they’re consumed. It’s a model that the team has adopted for its own products and related services. “More and more of our customers want to ‘pay as they go’ for our Centrica Connected Home products and services,” Heesom says. “This not only aligns with the way we pay for AWS and makes our finance model easier, but it enables us to focus even more resources on innovating our services further.”
Established in 1958, the National Aeronautics and Space Administration (NASA) has been working around the world—and off of it—for almost 60 years, trying to answer some basic questions: What’s out there in space? How do we get there? What will we find? What can we learn there, or learn just by trying to get there, that will make life better here on Earth?
Exploring Space: No Rocket Science Degree Needed
Have you ever looked up at night and wondered about the mysteries of space? Or marveled at the expansiveness of our galaxy? You can easily explore all this and more at the NASA Image and Video Library, which provides easy access to more than 140,000 still images, audio recordings, and videos—documenting NASA’s more than half a century of achievements in exploring the vast unknown. For NASA, providing the public with such easy access to the wonders of space has been a journey all its own. NASA began providing online access to photos, video, and audio in the early 2000s, when media capture began to shift from analog and film to digital. Before long, each of NASA’s 10 field centers was making its imagery available online, including digitized versions of some older assets. Therein lay the challenge: “With media in so many different places, you needed institutional knowledge of NASA to know where to look,” says Rodney Grubbs, Imagery Experts Program Manager at NASA. “If you wanted a video of the space shuttle launch, you had to go to the Kennedy Space Center website. If you wanted pictures from the Hubble Space Telescope, you went to the Goddard Space Flight Center website. With 10 different centers and dozens of distributed image collections, it took a lot of digging around to find what you wanted.” Early efforts to provide a one-stop shop consisted of essentially “scraping” content from the different sites, bringing it together in one place, and layering a search engine on top. “In large part, those initial efforts were unsuccessful because each center categorized its imagery in different ways,” says Grubbs. “As a result, we often had five to six copies of the same image, each described in different ways, which made searches difficult and delivered a poor user experience.” In 2011, NASA decided that the best approach to address this issue was to start over. By late 2014, all the necessary pieces for a second attempt were in place:
- The Imagery Experts Program had developed and published a common metadata standard, which all NASA’s centers had adopted.
- The Web Enterprise Service Technologies (WESTPrime) service contract, one of five agency-wide service contracts under NASA’s Enterprise Services program, provided a delivery vehicle for building and managing the new site.
- The Federal Risk and Authorization Management Program (FedRAMP) provided a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.
Decades of NASA Achievements – All in One Place
Development of the new NASA Image and Video Library was handled by the Web Services Office within NASA’s Enterprise Service and Integration Division. Technology selection, solution design, and implementation were managed by InfoZen, the WESTPrime contract service provider. As an Advanced Consulting Partner of the AWS Partner Network (APN), InfoZen chose to build the solution on Amazon Web Services (AWS). “Amazon was the largest cloud services provider, had a strong government cloud presence, and offered the most suitable cloud in terms of elasticity,” recalls Sandeep Shilawat, Cloud Program Manager at InfoZen. NASA formally launched its Image and Video Library in March 2017. Key features include:
- A user interface that automatically scales for PCs, tablets, and mobile phones across virtually every browser and operating system.
- A search interface that lets people easily find what they’re looking for, including the ability to choose from gallery view or list view and to narrow down search results by media type and/or by year.
- The ability to easily download any media found on the site—or share it on Pinterest, Facebook, Twitter, or Google+.
- Access to the metadata associated with each asset, such as file size, file format, which center created the asset, and when it was created. When available, users can also view EXIF/camera data for still images such as exposure, shutter speed, and lens used.
- An application programming interface (API) for automated uploads of new content—including integration with NASA’s existing authentication mechanism.
The solution is built on the following AWS services:
- Amazon Elastic Compute Cloud (Amazon EC2), which provides secure, resizable compute capacity in the cloud. This enables NASA to scale up under load and scale down during periods of inactivity to save money, paying only for what it uses.
- Elastic Load Balancing (ELB), which is used to distribute incoming traffic across multiple Amazon EC2 instances, as required to achieve redundancy and fault-tolerance.
- Amazon Simple Storage Service (Amazon S3), which supports object storage for incoming (uploaded) media, metadata, and published assets.
- Amazon Simple Queue Service (SQS), which is used to decouple incoming jobs from pipeline processes.
- Amazon Relational Database Service (Amazon RDS), which is used for automatic synchronization and failover.
- Amazon DynamoDB, a fast and flexible NoSQL database service, which is used to track incoming jobs, published assets, and users.
- Amazon Elastic Transcoder, which is used to transcode audio and video to various resolutions.
- Amazon CloudSearch, which is used to support searching by free text or fields.
- Amazon Simple Notification Service (SNS), which is used to trigger the processing pipeline when new content is uploaded.
- AWS CloudFormation, which enables automated creation, updating, and destruction of AWS resources. InfoZen also used the Troposphere library, which enables the creation of objects via AWS CloudFormation using Python instead of hand-coded JSON—each object representing one AWS resource such as an instance, an Elastic IP (EIP) address, or a security group. A brief illustrative sketch of this approach follows the lists below.
- Amazon CloudWatch, which provides a monitoring service for AWS cloud resources and the applications running on AWS.
- Easy Access to the Wonders of Space. The Image and Video Library automatically optimizes the user experience for each user’s particular device. It is also fully compliant with Section 508 of the Rehabilitation Act, which requires federal agencies to make their technology solutions accessible to people with disabilities. Captions can be turned on or off for videos played on the site, and text-based caption files can be downloaded for any video.
- Built-in Scalability. All components of the NASA Image and Video Library are built to scale on demand, as needed to handle usage spikes. “On-demand scalability will be invaluable for events such as the solar eclipse that’s happening later this summer—both as we upload new media and as the public comes to view that content,” says Bryan Walls, Imagery Experts Deputy Program Manager at NASA.
- Good Use of Taxpayer Dollars. By building its Image and Video Library in the cloud, NASA avoided the costs associated with deploying and maintaining server and storage hardware in-house. Instead, the agency can simply pay for the AWS resources it uses at any given time.
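To give a flavor of the Troposphere approach mentioned in the AWS CloudFormation item above, here is a generic sketch with placeholder resource names and AMI ID, not anything from NASA’s or InfoZen’s actual templates: it defines a security group, an EC2 instance, and an Elastic IP as Python objects and prints the equivalent CloudFormation JSON.

```python
# Illustrative Troposphere sketch: define AWS resources as Python objects and
# emit CloudFormation JSON. Resource names and the AMI ID are placeholders,
# not taken from NASA's or InfoZen's actual templates.
from troposphere import Output, Ref, Template
from troposphere.ec2 import EIP, Instance, SecurityGroup, SecurityGroupRule

template = Template()

# Security group allowing inbound HTTPS.
web_sg = template.add_resource(SecurityGroup(
    "WebSecurityGroup",
    GroupDescription="Allow inbound HTTPS",
    SecurityGroupIngress=[
        SecurityGroupRule(IpProtocol="tcp", FromPort=443, ToPort=443, CidrIp="0.0.0.0/0")
    ],
))

# EC2 instance using the security group defined above.
web_server = template.add_resource(Instance(
    "WebServer",
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.medium",
    SecurityGroups=[Ref(web_sg)],
))

# Elastic IP bound to the instance, exposed as a stack output.
web_eip = template.add_resource(EIP("WebServerEip", InstanceId=Ref(web_server)))
template.add_output(Output("PublicIp", Value=Ref(web_eip)))

print(template.to_json())
```

Feeding the printed JSON to CloudFormation creates the whole group of resources as one version-controlled stack, which is the pattern the case study describes.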
While NASA’s new Image and Video Library delivers a wealth of new convenience and capabilities, for people like Grubbs and Walls, it’s just the beginning. “We now have an agile, scalable foundation on which to do all kinds of amazing things,” says Walls. “Much like with the exploration of space, we’re just starting to imagine all that we can do with it.”
The Challenge
After maintaining on-premises hardware and custom publishing software for nearly two decades, The Seattle Times sought to migrate its website publishing to a contemporary content management platform. To avoid the costs of acquiring and configuring new hardware infrastructure and the required staff to maintain it, the company initially chose a fully managed hosting vendor. But after several months, The Times' software engineering team found it had sacrificed flexibility and agility in exchange for less maintenance responsibility. As the hosted platform struggled with managing traffic under a vastly fluctuating load, The Seattle Times team was hamstrung in its ability to scale up to meet customer demand. Tom Bain, the software engineering manager overseeing the migration effort, says, "We had a fairly standard architecture in mind when we set out to do the migration, and we encouraged our vendor to adapt to our needs, but they struggled with the idea of altering their own business model to satisfy our very unique hosting needs."
Why Amazon Web Services
To address these core scalability concerns, The Seattle Times engineering team considered several alternative hosting options, including self-hosting on premises, more flexible managed hosting options, and various cloud providers. The team concluded that the available cloud options provided the needed flexibility, appropriate architecture, and desired cost savings. The company ultimately chose Amazon Web Services (AWS), in part because of the maturity of the product offering and, most significantly, the auto-scaling capabilities built into the service. The Seattle Times' new software is built on the LAMP stack, so native, Linux-based cloud hosting made the most sense when choosing a new vendor. The Seattle Times developed a proof of concept and an implementation plan, which were reviewed by a team from AWS Support. “They looked over our architecture and said, ‘Here are some things that we recommend you do, some best practices, and some lessons learned,’” says Rob Grutko, director of technology for The Seattle Times. “They were very helpful in making sure we were production ready.” After implementing the desired system architecture and vetting the chosen components and configuration with AWS, The Times deployed its new system in just six hours. The website moved to the AWS platform between 11 p.m. and 3 a.m., and final testing was completed by 5 a.m.—in time for the next news day.
How Seattle Times Uses AWS
Seattletimes.com is now hosted in an Amazon Virtual Private Cloud (Amazon VPC), a logically isolated section of the AWS cloud. It uses Amazon Elastic Compute Cloud (Amazon EC2) for resizable compute capacity and Amazon Elastic Block Store (Amazon EBS) for persistent block-level storage volumes. Amazon Relational Database Service (Amazon RDS) serves as a scalable cloud-based database, Amazon Simple Storage Service (Amazon S3) provides a fully redundant infrastructure for storing and retrieving data, and Amazon Route 53 offers a highly available and scalable Domain Name System (DNS) web service. The Times is using Amazon CloudFront in front of several Amazon S3 buckets to distribute a huge collection of photo imagery. The combination of Amazon CloudFront and Amazon S3 is used to embed photos into news stories distributed to The Times readers with low latency and high transfer speeds. Additionally, Amazon ElastiCache serves as an in-memory “cache in the cloud” in The Times’ new configuration. The Times is also using AWS Lambda to resize images for viewing on different devices such as desktop computers, tablets, and smartphones.
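The image-resizing flow can be sketched roughly as follows. This is a hypothetical Lambda handler, not The Seattle Times’ code: it is triggered by an S3 upload and writes several resized renditions back under a derivatives prefix. The bucket layout and target widths are assumptions, the Pillow library is presumed to be bundled with the function, and the per-size fan-out that produces the parallelism described below is collapsed into a single loop here for brevity.

```python
# Hypothetical sketch of on-upload image resizing with AWS Lambda and Amazon S3.
# Bucket layout, output prefix, and target widths are illustrative assumptions;
# the Pillow library must be packaged with the function (e.g. as a layer).
import io
import os
import urllib.parse

import boto3
from PIL import Image

s3 = boto3.client("s3")
TARGET_WIDTHS = [320, 640, 1024, 2048]  # assumed rendition sizes

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        if key.startswith("derivatives/"):
            continue  # don't reprocess our own output

        original = Image.open(io.BytesIO(
            s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        )).convert("RGB")
        name, _ = os.path.splitext(os.path.basename(key))

        for width in TARGET_WIDTHS:
            # Preserve aspect ratio; skip widths larger than the original.
            if width >= original.width:
                continue
            height = round(original.height * width / original.width)
            resized = original.resize((width, height))

            buffer = io.BytesIO()
            resized.save(buffer, format="JPEG", quality=85)
            buffer.seek(0)
            s3.put_object(
                Bucket=bucket,
                Key=f"derivatives/{name}-{width}w.jpg",
                Body=buffer,
                ContentType="image/jpeg",
            )
```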
The Benefits
With AWS, The Seattle Times can now automatically scale up very rapidly to accommodate spikes in website traffic when big stories break, and scale down during slower traffic periods to reduce costs. “Auto-scaling is really the clincher to this,” Grutko says. “With AWS, we can now serve our online readers with speed and efficiency, scaling to meet demand and delivering a better reader experience.” Moreover, news images can now be rapidly resized for different viewing environments, allowing breaking-news stories to reach readers faster. “AWS Lambda provides us with extremely fast image resizing,” Grutko says. “Before, if we needed an image resized in 10 different sizes, it would happen serially. With AWS Lambda, all 10 images get created at the same time, so it’s quite a bit faster and it involves no server maintenance.” Rather than relying on a hosting service to fix inevitable systems issues, The Times now has complete control over its back-end environment, enabling it to troubleshoot problems as soon as they occur. “When an issue happens, we can go under the hood and troubleshoot to get around nearly any problem,” says Grutko. “It’s our environment, and we control it.” When the company encounters a problem that it can’t solve, it relies on AWS Support. “Our on-boarding experience was quite good with the AWS support team,” says Miles Van Pelt, senior development engineer at The Seattle Times. “It really felt like they went out of their way to answer our questions and research topics that we couldn’t readily find in their extensive documentation.” By choosing AWS, The Seattle Times is now better positioned to deliver on its goal of being a leading-edge digital news media company. “By moving to AWS, we’ve regained the agility and flexibility we need to support the company’s journalistic mission without incurring the expense and demands required of a pile of physical hardware,” says Grutko.
After maintaining on-premises hardware and custom publishing software for nearly two decades, The Seattle Times sought to migrate its website publishing to a contemporary content management platform. To avoid the costs of acquiring and configuring new hardware infrastructure and the required staff to maintain it, the company initially chose a fully managed hosting vendor. But after several months, The Times' software engineering team found it had sacrificed flexibility and agility in exchange for less maintenance responsibility. As the hosted platform struggled with managing traffic under a vastly fluctuating load, The Seattle Times team was hamstrung in its ability to scale up to meet customer demand. Tom Bain, the software engineering manager overseeing the migration effort, says, "We had a fairly standard architecture in mind when we set out to do the migration, and we encouraged our vendor to adapt to our needs, but they struggled with the idea of altering their own business model to satisfy our very unique hosting needs."
Why Amazon Web Services To address these core scalability concerns, The Seattle Times engineering team considered several alternative hosting options, including self-hosting on premises, more flexible managed hosting options, and various cloud providers. The team concluded that the available cloud options provided the needed flexibility, appropriate architecture, and desired cost savings. The company ultimately chose Amazon Web Services (AWS), in part because of the maturity of the product offering and, most significantly, the auto-scaling capabilities built into the service. The Seattle Times' new software is built on the LAMP stack, and the added benefits of native, Linux-based cloud hosting made the most sense when choosing a new vendor. The Seattle Times developed a proof-of-concept and implementation plan, which was reviewed by a team from AWS Support. “They looked over our architecture and said, ‘Here are some things that we recommend you do, some best practices, and some lessons learned,’ ” says Rob Grutko, director of technology for The Seattle Times. “They were very helpful in making sure we were production ready.” After implementing the desired system architecture and vetting the chosen components and configuration with AWS, The Times deployed its new system in just six hours. The website moved to the AWS platform between 11 p.m. and 3 a.m. and final testing was completed by 5 a.m. — in time for the next news day.
How Seattle Times Uses AWS Seattletimes.com is now hosted in an Amazon Virtual Private Cloud (Amazon VPC), a logically isolated section of the AWS cloud. It uses Amazon Elastic Compute Cloud (Amazon EC2) for resizable compute capacity and Amazon Elastic Block Store (Amazon EBS) for persistent block-level storage volumes. Amazon Relational Database Service (Amazon RDS) serves as a scalable cloud-based database, Amazon Simple Storage Service (Amazon S3) provides a fully redundant infrastructure for storing and retrieving data, and Amazon Route 53 offers a highly available and scalable Domain Name System (DNS) web service. The Times is using Amazon CloudFront in front of several Amazon S3 buckets to distribute a huge collection of photo imagery. The combination of Amazon CloudFront and Amazon S3 is used to embed photos into news stories distributed to The Times readers with low latency and high transfer speeds. Additionally, Amazon ElastiCache serves as an in-memory “cache in the cloud” in The Times’ new configuration. The Times is also using AWS Lambda to resize images for viewing on different devices such as desktop computers, tablets, and smartphones.
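To make the image pipeline concrete, the following is a minimal sketch of what an AWS Lambda resize function of the kind described above could look like. It is an illustration only: the bucket, keys, and event fields are hypothetical, not The Seattle Times' actual code, and it assumes the Pillow library is packaged with the function.

```python
# Hypothetical sketch of an image-resizing Lambda handler (names and bucket are
# assumptions, not The Seattle Times' actual code). Given a source S3 key and a
# target width, it writes the resized rendition back to S3.
import io

import boto3
from PIL import Image  # Pillow must be packaged with the function

s3 = boto3.client("s3")

def handler(event, context):
    bucket = event["bucket"]          # e.g. "news-photos-example" (hypothetical)
    key = event["key"]                # e.g. "2015/06/photo.jpg"
    width = int(event["width"])       # target width for one rendition

    # Fetch the original image from S3
    original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    img = Image.open(io.BytesIO(original)).convert("RGB")

    # Preserve the aspect ratio while resizing to the requested width
    height = int(img.height * width / img.width)
    resized = img.resize((width, height))

    # Write the rendition next to the original, suffixed with its width
    out = io.BytesIO()
    resized.save(out, format="JPEG")
    out.seek(0)
    s3.put_object(
        Bucket=bucket,
        Key=f"{key.rsplit('.', 1)[0]}_{width}.jpg",
        Body=out,
        ContentType="image/jpeg",
    )
    return {"bucket": bucket, "key": key, "width": width}
```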
The Benefits With AWS, The Seattle Times can now automatically scale up very rapidly to accommodate spikes in website traffic when big stories break, and scale down during slower traffic periods to reduce costs. “Auto-scaling is really the clincher to this,” Grutko says. “With AWS, we can now serve our online readers with speed and efficiency, scaling to meet demand and delivering a better reader experience.” Moreover, news images can now be rapidly resized for different viewing environments, allowing breaking-news stories to reach readers faster. “AWS Lambda provides us with extremely fast image resizing,” Grutko says. “Before, if we needed an image resized in 10 different sizes, it would happen serially. With AWS Lambda, all 10 images get created at the same time, so it’s quite a bit faster and it involves no server maintenance.” Rather than relying on a hosting service to fix inevitable systems issues, The Times now has complete control over its back-end environment, enabling it to troubleshoot problems as soon as they occur. “When an issue happens, we can go under the hood and troubleshoot to get around nearly any problem,” says Grutko. “It’s our environment, and we control it.” When the company encounters a problem that it can’t solve, it relies on AWS Support. “Our onboarding experience was quite good with the AWS support team,” says Miles Van Pelt, senior development engineer at The Seattle Times. “It really felt like they went out of their way to answer our questions and research topics that we couldn’t readily find in their extensive documentation.” By choosing AWS, The Seattle Times is now better positioned in its pursuit of being a leading-edge digital news media company. “By moving to AWS, we’ve regained the agility and flexibility we need to support the company’s journalistic mission without incurring the expense and demands required of a pile of physical hardware,” says Grutko.
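Grutko's point about all 10 renditions being created at the same time comes down to fan-out: rather than looping over sizes in one process, each size can be handed to Lambda as an independent asynchronous invocation. The sketch below illustrates that pattern with boto3; the function name, bucket, and list of target widths are assumptions for illustration, not details from the case study.

```python
# Fan-out sketch: invoke the (hypothetical) resize function asynchronously once
# per target size so all renditions are produced concurrently instead of serially.
import json

import boto3

lambda_client = boto3.client("lambda")

# Illustrative target widths; the real set of renditions is not documented here
TARGET_WIDTHS = [320, 480, 640, 768, 1024, 1280, 1440, 1600, 1920, 2560]

def fan_out_resize(bucket: str, key: str) -> None:
    for width in TARGET_WIDTHS:
        lambda_client.invoke(
            FunctionName="resize-image",   # hypothetical function name
            InvocationType="Event",        # asynchronous: returns immediately
            Payload=json.dumps({"bucket": bucket, "key": key, "width": width}),
        )

# Example: queue all ten renditions of a breaking-news photo at once
# fan_out_resize("news-photos-example", "2015/06/photo.jpg")
```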
Club Automation drives new business growth, safely migrates its health club management application to AWS, protects customer data, and provisions firewalls in 15 minutes instead of several hours by using Barracuda NextGen Firewalls on the AWS Cloud. The organization provides cloud-based enterprise resource planning (ERP) software for health and athletic clubs throughout the United States. Club Automation migrated its applications to AWS and uses Barracuda firewalls provisioned through the AWS Marketplace.
About Club Automation
Club Automation is a leading cloud-based software provider with a mission of contributing to a healthier and more active world by empowering more efficient health and fitness club management. Based in Chicago, the company offers a software-as-a-service (SaaS) solution that enables health and fitness clubs to run their facilities effortlessly.
The Challenge
Not long ago, Club Automation was a small upstart company in the health club software industry with a big goal: to revolutionize the entire industry with a SaaS enterprise resource planning (ERP) solution that manages all parts of a health club’s business. The company is now experiencing explosive business growth. “We came into the club ERP space as an underdog, but we’ve grown extremely fast,” says Max Longin, a founding partner at the company. “About 70 percent of our total revenue as a company has come in the past year.” Even so, Longin considers this a period of “controlled growth.” “We have not really been marketing ourselves—our new customers have been coming to us through word of mouth. Our concern has been that if our systems are not ready to scale to support more growth, we could compromise performance and our customers’ experience.”
To address that concern, Club Automation sought to move its SaaS application to a new cloud technology provider. “We needed more agility and scalability than we had with our previous hybrid-cloud solution, which included a secure but legacy private-cloud environment,” Longin confirms. “We had to scale ahead of required capacity, which was costly and required a lot of planning. We wanted to be more agile, so we could quickly roll out new apps and features for our customers.”
As Club Automation considered new cloud technologies, it also needed to ensure strong security for its application workloads. “We operate in a cardholder environment, and our solution needs to be PCI compliant and highly secure,” Longin says. “We can’t allow access to our backend systems by anyone other than our developers. We had to eliminate attack surface areas within a cloud environment, and we needed the security to enable our business to move our workloads to the cloud safely.”
Why Amazon Web Services
Club Automation decided to move its SaaS application to the Amazon Web Services (AWS) Cloud, in part because AWS addressed the company’s security and performance challenges. “Previously, we were not set up to support geographic growth, because we only had a few dispersed data centers and we had challenges deploying security quickly and getting solid performance in all areas of the United States,” Longin says. “We looked at Microsoft Azure, but it wasn’t the right solution for our needs. AWS fit like a glove, and it offers the best services for our business.” Club Automation runs its web servers on Amazon Elastic Compute Cloud (Amazon EC2) instances and runs background jobs on AWS Elastic Beanstalk, a service for deploying and scaling web applications. The company also uses Amazon Aurora, a hosted relational database service, to store and manage customer membership and financial data.
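As a rough illustration of the Aurora piece of this stack, the sketch below shows how an application might query a MySQL-compatible Aurora cluster endpoint from Python. The endpoint, credentials, driver, and schema are placeholders, not Club Automation's actual configuration.

```python
# Hedged sketch: reading membership records from an Amazon Aurora
# (MySQL-compatible) cluster endpoint. All names here are illustrative.
import pymysql  # assumed driver; install with `pip install pymysql`

conn = pymysql.connect(
    host="example-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="app_user",
    password="REPLACE_ME",   # in practice, fetched from a secrets store
    database="club_erp",     # hypothetical schema name
)

with conn.cursor() as cur:
    # Look up active members for one club; table and columns are illustrative
    cur.execute(
        "SELECT member_id, name FROM members WHERE club_id = %s AND active = 1",
        (42,),
    )
    for member_id, name in cur.fetchall():
        print(member_id, name)

conn.close()
```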
To safely migrate its SaaS application workloads to AWS, Club Automation chose to work with Barracuda Networks, an AWS Partner Network (APN) Advanced Technology Partner with the AWS Security Competency. Barracuda provides firewalls engineered for AWS to help customers deploy a comprehensive security architecture and increase protection against cyberattacks and advanced threats. “I had a previous business relationship with Barracuda and was impressed with the stability of the solutions,” Longin says. Club Automation deployed Barracuda NextGen Firewalls to help secure the company’s AWS environment. Each firewall runs on an Amazon EC2 instance in the Club Automation Amazon Virtual Private Cloud (Amazon VPC) and sits in a public subnet, protecting against unauthorized access to the private subnets where the cardholder data environment is located.
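The network layout described here, a public subnet for the firewall appliance and private subnets for the cardholder data environment, can be sketched in a few boto3 calls. The following is a simplified illustration under assumed CIDR ranges and region; it is not Club Automation's actual configuration and omits the Barracuda appliance launch itself.

```python
# Minimal sketch of a VPC with a public subnet (for the firewall) and a private
# subnet (for the cardholder data environment). CIDRs and region are assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC that will contain both subnets
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]

# Public subnet: the firewall appliance faces the internet from here
public_subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24"
)["Subnet"]

# Private subnet: cardholder data environment, reachable only via the firewall
private_subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"], CidrBlock="10.0.2.0/24"
)["Subnet"]

# Internet gateway attached to the VPC, routed only from the public subnet
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"]
)

route_table = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(
    RouteTableId=route_table["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw["InternetGatewayId"],
)
ec2.associate_route_table(
    RouteTableId=route_table["RouteTableId"], SubnetId=public_subnet["SubnetId"]
)
```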
Club Automation was able to easily purchase and deploy the Barracuda firewalls through the AWS Marketplace, an online store where customers can find software and services from AWS partners so they can build solutions and run their businesses.
The Benefits
By moving its SaaS application to the AWS Cloud, Club Automation has been able to keep up with its rapid rate of growth. “AWS makes it very easy for us to scale and innovate,” says Longin. “We needed the right platform to enable growth, and we have that. Instead of having to carefully control growth because of platform limitations, we can scale on demand to support an increasing number of clubs with our application. We no longer have any restrictions on how large or fast we grow.” The company now has the agility to respond quickly to customer needs and can deploy its solutions 30–40 percent faster. Longin says, “We have to innovate by giving clubs the features they’re looking for. For example, we’re currently rolling out a new mobile app, branded by each club, and we could not have done that without using AWS and Barracuda.”
Club Automation is taking advantage of Barracuda firewalls to help secure its growing number of AWS services. “We are using the Barracuda NextGen Firewalls, provisioned through the AWS Marketplace, to effectively guard our application against web-based attacks and application layer attacks,” says Longin. “The Barracuda solution plugs in seamlessly to our AWS environment, and it is doing its job of minimizing the attack surface area and helping our customers keep club member cardholder data protected.”
Club Automation has also cut the time its firewall configuration process takes compared with its previous solution. Barracuda offerings in the AWS Marketplace support AWS CloudFormation templates, which allow developers and administrators to deploy applications and their related AWS resources as a single stack. “The Barracuda firewall is a self-service, cloud-based solution that takes less than 15 minutes to get up and running, as opposed to the hours and sometimes days the previous solution took,” Longin says. “Provisioning new users is much simpler and faster. Instead of opening a support ticket and waiting for it to be addressed, we can just go into AWS and provision new users ourselves. This is a key benefit for us as we keep growing.”
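For a sense of what template-driven provisioning looks like in practice, the sketch below launches a firewall stack from a CloudFormation template using boto3. The template URL, stack name, and parameter names are placeholders; a real AWS Marketplace listing documents its own template and parameters.

```python
# Hedged sketch: creating a CloudFormation stack from a (placeholder) firewall
# template and waiting for it to finish. Names and values are illustrative only.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="nextgen-firewall-example",
    TemplateURL="https://s3.amazonaws.com/example-bucket/firewall-template.yaml",
    Parameters=[
        {"ParameterKey": "VpcId", "ParameterValue": "vpc-0123456789abcdef0"},
        {"ParameterKey": "SubnetId", "ParameterValue": "subnet-0123456789abcdef0"},
        {"ParameterKey": "InstanceType", "ParameterValue": "m4.large"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM roles
)

# Block until the stack (and the firewall instance it launches) is ready
cfn.get_waiter("stack_create_complete").wait(StackName="nextgen-firewall-example")
```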
By relying on Barracuda, Club Automation’s IT team was able to move its SaaS workloads to AWS securely. “We had considered using a cloud solution a few years ago, but cloud offerings were not what they are today, and security solutions like Barracuda’s were not available,” says Longin. “Our move to AWS would not have been possible without Barracuda firewalls. Using Barracuda helped us safely transition more of our workloads to AWS, and we expect our full production environment to be all-in on AWS by the end of the year.”
In addition, Club Automation benefited from the ease of deployment from the AWS Marketplace. “It couldn’t have been more simple,” says Longin. “All we had to do was find the solution and then quickly configure and deploy it through the AWS Marketplace. In the software industry, it’s rare when something works as expected, but the AWS Marketplace did just that.” In the near future, Club Automation expects to use the marketplace for the upcoming Barracuda metered billing service. “With metered billing, we will be able to consume Barracuda services in the same way we consume AWS services, which will be very cost-effective for us,” Longin says.
Previously, Club Automation had been holding back on expansion and had only grown through word of mouth, because it was concerned that its IT staff could not support rapid expansion. Now, using AWS, the company is poised for major growth. “We are ready and able to grow,” says Longin. “We have started hiring inside sales representatives and creating marketing plans, because we have a platform that enables scalability and expansion while also allowing us to maintain our high standards of customer service. To keep growing fast, we need agility and innovation. That’s what fueled our transition to AWS and Barracuda, and it will continue fueling our growth in this industry.”
The ROI4CIO Deployment Catalog is a database of software, hardware, and IT service implementations. Find implementations by vendor, supplier, user, business task, problem, or status, and filter by the presence of ROI figures and references.