- Traditional deployments of the Customer Insights product built by the client’s data science team were hobbled by multiple handoffs, manual tests, and siloed development and IT operations teams.
- When data scientists needed information, they had to contact the data engineering team each time; the engineers then wrote query-specific code and manually moved the data into a table for analysis. With so many requests coming in, the data team was often backlogged, sometimes taking days to fulfill a single request.
- The team was unable to capture the full environment (software and hardware configuration, containers, Python environments, R libraries) in a source-control system so that it could be re-created for testing, deployment, and downstream phases. The system also took too long to launch and scale clustered compute resources.
- The lack of a single source of truth, poor data quality, and ad hoc manual reporting processes undermined top management’s visibility into integrated insights on sales, sales rep interactions, marketing reach, brand performance, market share, and territory management. Understandably, the client wanted to align information that had hitherto sat in silos, to gain a 360-degree view of product movement, optimize sales planning, and gain a competitive edge.
- With continued investments in Big Data Analytics technologies to support its business decisions, the client lacked the bandwidth for day-to-day maintenance of the systems and had limited time to focus on strategic initiatives.
- The Big Data Analytics environment was under constant redesign and job migration; the support team had to absorb these changes while stabilizing existing processes and improving performance.
Agilisium built a robust, highly scalable DevOps solution for the client to increase cost efficiency and apply DevOps automation principles more broadly. The migration to AWS modern infrastructure revolutionized the real-time analytics and personalization aspects of the solution. Below are further details of the solution:
- The core of the client’s infrastructure is Amazon EC2 instances in an Amazon VPC, using advanced networking features, with Amazon S3 for object storage.
- Jenkins CI was used to develop a robust deployment framework that eliminated many manual activities such as builds, testing, and packaging.
- The Jenkins dashboard served as a unified project dashboard to monitor project status, build failures, code coverage, and more.
- For core DevOps capabilities, Agilisium defined the DevOps framework with Jenkins, Ansible, Datadog, Terraform, and Bitbucket.
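To illustrate what replacing manual build, test, and packaging steps looks like in practice, the script below is a minimal shell sketch of the kind of fail-fast stage sequence a Jenkins job would run on each commit. The stage names and bodies are illustrative assumptions, not the client’s actual pipeline; a real pipeline would invoke the team’s build, test, and packaging tools at each step.

```shell
#!/bin/sh
# Minimal sketch of a fail-fast CI sequence such as a Jenkins job might run.
# Stage names and bodies are illustrative placeholders, not the client's scripts.
set -e  # abort the pipeline as soon as any stage fails

PIPELINE_LOG=""

run_stage() {
    # Record and announce each stage; a real pipeline would call
    # the actual build/test/package tooling here instead of echoing.
    PIPELINE_LOG="${PIPELINE_LOG}${1};"
    echo "stage: $1"
}

run_stage build     # e.g. compile sources and assemble artifacts
run_stage test      # e.g. run the automated test suite
run_stage package   # e.g. bundle a versioned release artifact
run_stage deploy    # e.g. push the artifact to the target environment

echo "pipeline finished: ${PIPELINE_LOG}"
```

The `set -e` line is what makes the sequence fail-fast: if any stage exits nonzero, later stages never run, which is how CI prevents a broken build from being packaged or deployed.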
How We Worked Together
Following a maturity assessment, Agilisium devised and enacted strong governance frameworks at the executive, program, and operational levels, with scheduled checkpoint review meetings to close the gaps identified in scope management.
Twice-daily scrum calls ensured that application owners were apprised of progress at the operational level. Monthly milestone review meetings aligned priorities at the program level, while a quarterly executive steering committee meeting set the engagement priorities at the executive level.
To ensure a smooth transition to the new solutions, a one-week workshop and demo on S3, Redshift, and Tableau was delivered to the client’s business and technical teams.
- The client reduced the application’s release cycle time by 50%, thereby improving time-to-market
- Automation and CI reduced the average production release time by 30 minutes
- With continuous analytics and self-service, data scientists own the data project from the original idea all the way to production
- They became autonomous and now dedicate more time to producing actual insights in the solution
- The organization became more adaptive and responsive to decision makers, releasing new features and changes based on their needs
- Reduced IT overhead and maintenance costs
- The client’s code deployment process is integrated with Jenkins, improving deployment velocity and reducing manual errors
- The monitoring tools and configuration management scripts catch and resolve anomalies or misconfigured resources
- Reduced the time that a team spends configuring and deploying infrastructure, providing a substantial gain in efficiency
- Highly available, secure services with reduced latency
- Data scientists could run the complete iteration of data exploration, preparation, model development, deployment and production push multiple times a day without relying on the big data engineer
- Big data engineers, in turn, can focus on scalability and storage optimization, enabling streaming architectures, and so on.