10+ years of overall IT experience, including over 5 years as a Data Engineer, with a background in Big Data technologies such as Python, MySQL, PySpark, AWS EC2, GitHub, CI/CD, Agile development, Adobe, Jenkins, Databricks, and Azure pipelines.
Overview
11 years of professional experience
1 Certification
Work History
Staff Engineer - Data Engineering
Altimetrik
Bangalore
06.2025 - 11.2025
Contributed to a finance-focused project by streamlining engineering workflows, developing robust ETL pipelines to centralize raw transactional data in S3, and implementing data validation and schema evolution checks to ensure high-quality, reliable data for sales compensation analytics and maintain compliance with data governance standards.
Technologies used: SQL, AWS EC2, S3, ADAPT, and data pipelines.
Domain: Financial technology (Data Engineer).
Sr. Associate – Projects
Cognizant
Bangalore
10.2023 - 05.2025
The existing preference center (Epsilon) had certain limitations. To overcome these, a new preference center, OneTrust, was proposed. OneTrust provides access to real-time user preferences, which is crucial for enhanced data handling and user experience.
API Integration: Utilized various API calls to retrieve and process preferences and related data from OneTrust, ensuring seamless integration with AbbVie's database.
Extracted, transformed, and loaded data from source systems such as Adobe and Amplitude via API calls, using Python, PySpark, and Spark SQL to process the data.
Created automated pipelines in Jenkins to extract, load, and transform data from Adobe 2.0.
Worked on Bank of America's comprehensive risk analytics platform, used for enterprise, wholesale, retail, and consumer portfolios.
The platform generates reports for allowance measurement, forecasting, and capital adequacy across BofA portfolios, which are in turn used by the Federal Reserve reporting team.
In general, it deals with loss forecasting for consumer portfolios and wholesale analytics for cash flow, charge-off, and asset quality determination.
Technologies used: Python, SQL, Postman, Git, Toad for Oracle, and Hive.
Key Responsibilities:
Responsible for identifying and analyzing user stories for feasibility.
Worked closely with team members and collaborated with other teams.
Wrote workflows using Python (pandas and PySpark modules) to enhance platform capabilities.
Member of Technical Staff
Netskope Ind Pvt Ltd
Bangalore
12.2014 - 09.2020
Responsible for writing optimized code to fetch the hosting provider name and the list of possible provider locations where a cloud application can be hosted.
Senior Manager / Staff Data Engineer - Data Engineering and Analytics at CVS Health