Case Study
Enabled Digital Global Scheduling Using Amazon Aurora PostgreSQL for an American Global Music Corporation

About the Client

Our client is one of the world's leading music companies. It owns and operates several businesses, including recorded music, music publishing, merchandising, and audiovisual content.


Business Challenges

  • Legacy architecture and data loss during replication delayed timely business decisions on album/track releases
  • Support for the existing HP P-series servers was approaching end of life, and the license cost of staying with Oracle was exorbitant
  • Downstream release operations were impacted by the delayed availability of tracks/albums for scheduling in DIGS
  • The Oracle system had reached peak utilization, impacting the performance of business systems
  • Poor user experience due to deteriorating hardware performance

Solution Highlights

Data Migration
  • The on-premises Oracle database was migrated to Amazon Aurora PostgreSQL
  • A 1.5 TB database was migrated, with roughly 200 GB/year of growth
  • The platform supports 2,000 concurrent user requests
  • AWS Database Migration Service (DMS) was used to migrate data from source to target
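The migration step above can be sketched in code. This is a minimal illustration, not the client's actual configuration: all identifiers are placeholders, and the table-mapping rule simply includes every schema and table. A full-load-plus-CDC task copies the existing data and then applies ongoing changes, so the Oracle source and Aurora PostgreSQL target stay in sync until cutover.

```python
import json

def build_dms_task_kwargs(source_arn, target_arn, instance_arn):
    """Assemble arguments for dms.create_replication_task: a full load of
    existing data followed by ongoing change data capture (CDC)."""
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }
    return {
        "ReplicationTaskIdentifier": "oracle-to-aurora-postgres",  # placeholder
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": "full-load-and-cdc",
        "TableMappings": json.dumps(table_mappings),
    }

# With AWS credentials configured, the task would be created roughly like:
#   import boto3
#   dms = boto3.client("dms")
#   dms.create_replication_task(**build_dms_task_kwargs(src, tgt, inst))
```

In practice the selection rules would be narrowed to the schemas in scope, and LOB and validation settings tuned for the 1.5 TB workload.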
Monitoring & Logging
  • Amazon CloudWatch monitors the status of the ongoing replication tasks
  • Amazon Simple Notification Service (Amazon SNS) is configured to send notifications when errors appear in the task's CloudWatch logs
  • Key metrics include network throughput, client connections, and I/O for read, write, and metadata operations
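One way to wire the error notification described above: a CloudWatch log metric filter increments a custom metric when an error line appears in the task's log group, and an alarm on that metric notifies an SNS topic. The sketch below only builds the alarm arguments; the namespace, names, and thresholds are assumptions for illustration.

```python
def build_error_alarm_kwargs(sns_topic_arn, task_id):
    """Arguments for cloudwatch.put_metric_alarm on a custom error metric
    (assumed to be published by a log metric filter on the DMS task's
    CloudWatch log group)."""
    return {
        "AlarmName": f"dms-{task_id}-errors",   # placeholder name
        "Namespace": "Custom/DMS",              # assumed custom namespace
        "MetricName": "ReplicationTaskErrors",
        "Statistic": "Sum",
        "Period": 300,                          # evaluate in 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],        # SNS delivers the alert
        "TreatMissingData": "notBreaching",     # no errors logged = healthy
    }

# With boto3:
#   cloudwatch = boto3.client("cloudwatch")
#   cloudwatch.put_metric_alarm(
#       **build_error_alarm_kwargs(topic_arn, "oracle-to-aurora"))
```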

To fulfill SocialHi5's need for a client self-service portal that was also easy to maintain, Agilisium's five-member expert team built a custom web application with a heavy focus on the visualization of campaign outcomes. In parallel, they developed a DevOps process to maintain, scale, and operate this portal.

Web Application Architecture

A variety of AWS services and some open-source technologies were used to build and run the web application. The web layer was built on a PHP framework, included a login and authentication system, and used Amazon QuickSight to render its outcome dashboards.

The app layer was built on Python, and the backend services ran as Docker containers on Elastic Container Service (ECS), with Auto Scaling and an Application Load Balancer (ALB) to ensure high availability of the portal. The database ran in a private subnet on Amazon RDS for MySQL.
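The app-tier wiring above can be sketched as an ECS service registered behind an ALB target group, so traffic reaches only healthy containers and failed tasks are replaced automatically. Service, task-definition, and container names below are assumptions, not the actual deployment.

```python
def build_ecs_service_kwargs(cluster, target_group_arn, subnet_ids, sg_id):
    """Arguments for ecs.create_service: app containers in private subnets,
    fronted by an ALB target group, with a baseline desired count that an
    Auto Scaling policy can adjust."""
    return {
        "cluster": cluster,
        "serviceName": "socialhi5-app",        # placeholder name
        "taskDefinition": "socialhi5-app:1",   # placeholder task definition
        "desiredCount": 2,                     # baseline; scaled by policy
        "launchType": "EC2",
        "loadBalancers": [{
            "targetGroupArn": target_group_arn,
            "containerName": "app",            # assumed container name
            "containerPort": 8080,             # assumed container port
        }],
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnet_ids,         # private subnets only
                "securityGroups": [sg_id],
                "assignPublicIp": "DISABLED",  # ALB is the only entry point
            }
        },
    }
```

Keeping `assignPublicIp` disabled matches the design above: the containers and the RDS MySQL database sit in private subnets, with the ALB as the sole public entry point.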

DevOps Process:

As mentioned earlier, SocialHi5 required that the solution be easy to maintain, scale, and operate. To that end, Agilisium's DevOps engineers developed a two-part DevOps process focused on:

  • CI/CD for web application development
  • Infrastructure provisioning for maintenance

Continuous Integration/Continuous Deployment (CI/CD Process)

All application (web and app tier) maintenance was orchestrated via AWS CodePipeline. AWS CodeCommit, CodeBuild, and CodeDeploy were invoked to automate the enhancement and maintenance of the self-service portal.
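The pipeline just described (CodeCommit source, CodeBuild build, CodeDeploy deploy) can be expressed as the `pipeline` structure that `codepipeline.create_pipeline` accepts. This is illustrative only; repository, project, and application names are placeholders.

```python
def build_pipeline(role_arn, artifact_bucket):
    """A three-stage CodePipeline definition mirroring the CI/CD flow."""
    return {
        "name": "socialhi5-web-pipeline",                  # placeholder
        "roleArn": role_arn,
        "artifactStore": {"type": "S3", "location": artifact_bucket},
        "stages": [
            {"name": "Source", "actions": [{
                "name": "Checkout",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "CodeCommit", "version": "1"},
                "configuration": {"RepositoryName": "socialhi5-web",  # placeholder
                                  "BranchName": "main"},
                "outputArtifacts": [{"name": "SourceOut"}],
            }]},
            {"name": "Build", "actions": [{
                "name": "Build",
                "actionTypeId": {"category": "Build", "owner": "AWS",
                                 "provider": "CodeBuild", "version": "1"},
                "configuration": {"ProjectName": "socialhi5-build"},  # placeholder
                "inputArtifacts": [{"name": "SourceOut"}],
                "outputArtifacts": [{"name": "BuildOut"}],
            }]},
            {"name": "Deploy", "actions": [{
                "name": "Deploy",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "CodeDeploy", "version": "1"},
                "configuration": {"ApplicationName": "socialhi5-app",  # placeholder
                                  "DeploymentGroupName": "web-tier"},
                "inputArtifacts": [{"name": "BuildOut"}],
            }]},
        ],
    }
```

Each stage passes its output artifact to the next, so a commit to the repository flows through build and deployment without manual steps.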

CI/CD Process Flow: Web Tier

Infrastructure provisioning

All infrastructure was hosted in a dedicated SocialHi5 Virtual Private Cloud (VPC) to add an extra layer of isolation. AWS CloudFormation templates were used to spin up and maintain the host of AWS services behind the self-service portal.

Web application hosting: EC2, ECS, RDS, S3, Systems Manager (SSM), VPC, NAT Gateway, an ALB with an Auto Scaling group, Lambda, Certificate Manager, and Route 53 were some of the services used to get the portal live.
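Driving that provisioning from code might look like the sketch below: a CloudFormation template applied via boto3. The template body here is a deliberately tiny fragment (a single S3 bucket); the real templates covered the full service list above, and names are placeholders.

```python
import json

# Hypothetical template fragment; the actual stacks provisioned the
# VPC, ECS, RDS, ALB, and the rest of the services listed above.
PORTAL_TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "PortalAssetsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "socialhi5-portal-assets"},  # placeholder
        }
    },
}

def build_stack_kwargs(stack_name):
    """Arguments for cloudformation.create_stack; re-applying the same
    template via update_stack keeps infrastructure in sync with source."""
    return {
        "StackName": stack_name,
        "TemplateBody": json.dumps(PORTAL_TEMPLATE),
        "Capabilities": ["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM resources
    }

# With boto3:
#   cfn = boto3.client("cloudformation")
#   cfn.create_stack(**build_stack_kwargs("socialhi5-portal"))
```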

Security: AWS Web Application Firewall (WAF) was used with cross-site scripting, geo-match, and SQL injection rules to protect against common web threats, in conjunction with the Amazon Inspector service.
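The three rule types named above can be sketched in the shape that `wafv2.create_web_acl` expects. Rule names, priorities, matched fields, and the blocked-country list are assumptions for illustration, not the client's actual policy.

```python
def _vis(name):
    """Per-rule visibility config required by WAFv2."""
    return {"SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name}

def build_waf_rules(blocked_countries):
    return [
        {"Name": "block-sqli", "Priority": 1,           # SQL injection in request body
         "Statement": {"SqliMatchStatement": {
             "FieldToMatch": {"Body": {}},
             "TextTransformations": [{"Priority": 0, "Type": "URL_DECODE"}]}},
         "Action": {"Block": {}}, "VisibilityConfig": _vis("sqli")},
        {"Name": "block-xss", "Priority": 2,            # XSS payloads in query string
         "Statement": {"XssMatchStatement": {
             "FieldToMatch": {"QueryString": {}},
             "TextTransformations": [{"Priority": 0, "Type": "HTML_ENTITY_DECODE"}]}},
         "Action": {"Block": {}}, "VisibilityConfig": _vis("xss")},
        {"Name": "geo-block", "Priority": 3,            # geo-match rule
         "Statement": {"GeoMatchStatement": {"CountryCodes": blocked_countries}},
         "Action": {"Block": {}}, "VisibilityConfig": _vis("geo")},
    ]
```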

Monitoring and logging: CloudWatch, OpsWorks, Config, and Inspector services were also invoked to cover configuration management, logging, and monitoring of the application and infrastructure.

  • A primary database instance and one or more read replicas, with the failover read replica located in a different Availability Zone from the primary instance
  • Multi-AZ deployment employing an SSD-backed virtualized storage layer purpose-built for database workloads
  • IAM best practices and principles are followed
  • Least-privilege access is enforced
  • Unique non-root credentials are provisioned
  • Programmatic access is used for API calls
  • Security groups restrict traffic
  • All data stores reside in private subnets
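The least-privilege bullet above can be made concrete with a small example: an IAM policy scoped to read-only actions on exactly one bucket, instead of wildcard actions or resources. The bucket name is a placeholder.

```python
def build_readonly_policy(bucket):
    """A least-privilege IAM policy document: read-only S3 access,
    scoped to a single named bucket and its objects."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],  # no wildcards
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # the bucket itself (for ListBucket)
                f"arn:aws:s3:::{bucket}/*",    # objects in the bucket
            ],
        }],
    }

# Attached via, e.g.:
#   iam.put_role_policy(RoleName=..., PolicyName="s3-readonly",
#                       PolicyDocument=json.dumps(build_readonly_policy("my-bucket")))
```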
AWS services used:
  • Amazon Simple Storage Service
  • Amazon EC2
  • Amazon EC2 Auto Scaling
  • AWS Identity & Access Management
  • AWS Config
  • AWS CloudTrail
  • Amazon CloudWatch
  • AWS Systems Manager
  • Amazon Aurora
Results and Benefits
  • 4x faster availability of releases in DIGS for scheduling
  • An agile, analytics-ready data platform
  • Timely business decisions on track/album releases in DIGS, thanks to significant performance improvements in bulk updates
  • Erroneously opted-out releases in DIGS can be fixed and scheduled back for their intended release dates, thanks to the uplift in reporting performance
  • Pre-release hype built by UMG's and artists' marketing teams is no longer lost to missed schedules