About the Client
Our client is an American global music corporation and the world's leading music company. It owns and operates a broad array of businesses engaged in recorded music, music publishing, merchandising, and audiovisual content in more than 60 countries.
- Needed to maintain multiple log files from different sources
- Dealt with a huge volume of data (more than 5 TB per day) as more applications came on board
- Auditing of the cumulative log files collected from the different sources was required
- Needed a cost-effective solution that could meet enterprise-level security requirements
- A solution was developed using Kinesis Data Firehose to collect and process log files from different sources and store them in the internal Splunk for audit purposes
- The same solution was leveraged across all projects within UMG, with the total volume exceeding 5 TB per day
- A separate pipeline was built for each source and reused across all projects. The following slides contain the flow of each of those pipelines and the detailed solution
- The client has a website hosted on Amazon S3 and Amazon CloudFront, with Lambda@Edge and AWS WAF in front
- All infrastructure logs for the external-facing site are made available to the internal security team in Splunk for audit purposes
- CloudFront logs are written to S3, Lambda@Edge logs to CloudWatch, and Amazon S3 access logs are written to and maintained in the CloudTrail bucket
- CloudFront logs: CloudFront -> S3 bucket -> Lambda (event notification) -> Firehose (PutRecord) -> Splunk (internal)
- Lambda@Edge logs: Lambda@Edge -> CloudWatch Logs -> Firehose (CloudWatch subscription) -> Lambda (data transformation) -> Splunk (internal)
- WAF logs: AWS WAF -> Firehose -> Splunk (internal)
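To make the first pipeline concrete, here is a minimal sketch of the transformation step the CloudFront-logs Lambda would perform: turning one tab-separated access-log line into a record for Firehose `PutRecord`. The field subset, stream name, and helper are assumptions for illustration, not the client's actual code.

```python
import json

# Hypothetical subset of CloudFront access-log fields; real log files are
# tab-separated W3C lines whose '#Fields:' header defines the full order.
FIELDS = ["date", "time", "x-edge-location", "sc-bytes", "c-ip", "cs-method"]

def log_line_to_firehose_record(line: str) -> dict:
    """Turn one tab-separated CloudFront log line into a Firehose record.

    Firehose's PutRecord/PutRecordBatch APIs expect {'Data': bytes};
    the Splunk destination ingests the JSON payload downstream.
    """
    values = line.rstrip("\n").split("\t")
    event = dict(zip(FIELDS, values))
    return {"Data": (json.dumps(event) + "\n").encode("utf-8")}

# In the real Lambda, the S3 event notification supplies the bucket/key,
# the object is downloaded and gunzipped, and the lines are batched via
# boto3's client('firehose').put_record_batch, roughly:
#
#   firehose.put_record_batch(
#       DeliveryStreamName="cf-logs-to-splunk",   # assumed stream name
#       Records=[log_line_to_firehose_record(l) for l in lines],
#   )
```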
To fulfill SocialHi5's need for a client self-service portal that was also easy to maintain, Agilisium's 5-member expert team built a custom web application with a heavy focus on the visualization of campaign outcomes. In parallel, they developed a DevOps process to maintain, scale, and operate this portal.
Web Application Architecture
A variety of AWS services and some open-source technologies were used to build and run the web application. The web layer was built on a PHP framework, included a login and authentication system, and used Amazon QuickSight to render its outcome dashboards.
The app layer was built on Python, and the backend services ran in Docker containers on Elastic Container Service (ECS) with Auto Scaling and an Application Load Balancer (ALB) to ensure high availability of the portal. The database ran in a private subnet and used RDS MySQL as the database service.
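The app tier described above can be pictured as an ECS task definition fronted by the ALB. The sketch below (assumed family name, image, port, and environment values, not the actual deployment) mirrors the shape boto3's `ecs.register_task_definition(...)` accepts.

```python
# Illustrative ECS task definition for the Python app tier; every name
# and value here is an assumption for the sketch, not the real config.
task_definition = {
    "family": "socialhi5-app",                 # assumed family name
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["EC2"],
    "containerDefinitions": [
        {
            "name": "app",
            "image": "socialhi5/app:latest",   # hypothetical image tag
            "essential": True,
            # The ALB target group would forward to this container port.
            "portMappings": [{"containerPort": 8000, "protocol": "tcp"}],
            "environment": [
                # RDS MySQL endpoint sits in a private subnet.
                {"name": "DB_HOST", "value": "mysql.internal.example"},
            ],
        }
    ],
}

def container_port(td: dict, name: str) -> int:
    """Return the first exposed port of a named container definition."""
    for c in td["containerDefinitions"]:
        if c["name"] == name:
            return c["portMappings"][0]["containerPort"]
    raise KeyError(name)
```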
As mentioned earlier, SocialHi5 required a solution that was easy to maintain, scale, and operate. To that end, Agilisium's DevOps engineers developed a 2-part DevOps process focusing on
- CI/CD for web application development
- Infrastructure Provisioning for maintenance.
Continuous Integration/Continuous Deployment (CI/CD Process)
All application (web and app tier) maintenance was orchestrated via AWS CodePipeline. AWS CodeCommit, CodeDeploy, and CodeBuild were invoked to automate the enhancement and maintenance of the self-service portal.
CI/CD Process Flow: Web Tier
All infrastructure was hosted on an exclusive SocialHi5 Virtual Private Cloud (VPC), to add an extra layer of confidentiality. AWS CloudFormation templates were used to spin up and maintain a host of AWS services utilized for the self-service portal.
Web application hosting: EC2, ECS, RDS, S3, SSM, VPC, NAT Gateway, ALB with an Auto Scaling group, Lambda, Certificate Manager, and Route 53 were some of the services used to get the portal live.
Security: Web Application Firewall (WAF) was used with cross-site scripting, geo-match, and SQL injection rules to protect against common cyber threats, in conjunction with the Amazon Inspector service.
Monitoring and Logging: CloudWatch, OpsWorks, Config & Inspector services were also invoked to cover configuration management, logging, and monitoring of the application and infrastructure.
- Amazon CloudWatch metrics are enabled to determine the health of each component of the workload; all metrics are monitored in Grafana
- Jenkins is used to automate code build and deployment
- Prometheus collects metrics from pods and nodes (backend)
- Grafana displays the dashboards (frontend)
- Kiali is used for live network monitoring
- Splunk / Loki handle log monitoring
- IAM best practices and principles are followed
- Alert mechanisms are set up so that anything out of compliance is immediately notified
- An SSL certificate is installed on the Classic Load Balancer
- All data stores are in private subnets
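The compliance-alerting idea in the bullets above can be sketched as a rule check over a resource inventory. In production this role is played by AWS Config rules feeding SNS/Grafana alerts; the resource shapes, rule logic, and inventory below are assumptions for illustration only.

```python
# Hedged sketch: evaluate two of the stated rules (data stores in
# private subnets, SSL on load balancers) and collect alert strings.
def check_compliance(resources):
    """Return a list of alert strings for any non-compliant resource."""
    alerts = []
    for r in resources:
        if r["type"] == "datastore" and r.get("subnet") != "private":
            alerts.append(f"{r['name']}: data store must be in a private subnet")
        if r["type"] == "load_balancer" and not r.get("ssl_certificate"):
            alerts.append(f"{r['name']}: load balancer is missing an SSL certificate")
    return alerts

# Hypothetical inventory: the database is compliant, the load balancer
# is missing its certificate and should trigger exactly one alert.
inventory = [
    {"name": "rds-mysql", "type": "datastore", "subnet": "private"},
    {"name": "clb-web", "type": "load_balancer", "ssl_certificate": None},
]
```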
AWS services used:
- Amazon Kinesis Data Firehose
- AWS WAF
- AWS Identity & Access Management
- Amazon Simple Storage Service (S3)
- Amazon CloudFront
- AWS CloudFormation
- AWS CloudTrail
- Amazon CloudWatch
- AWS Key Management Service
- Amazon Simple Notification Service
- AWS Config
- AWS Lambda
- The solution developed has a long-term roadmap and has been leveraged across several projects
- More than 5 TB of data per day is processed and stored in the internal Splunk for audit purposes
- Kinesis Data Firehose was leveraged to create the pipelines
- Cost-effective, with improved overall performance of the system