Data professional with over 6 years of experience designing, developing, and maintaining data warehouse solutions.
Designed and enhanced robust ETL processes using Informatica PowerCenter and SSIS to extract financial data from Oracle, SQL Server, and mainframe systems into a centralized data warehouse.
Optimized ETL jobs with partitioning and pushdown optimization, reducing daily load time by up to 30% for key reports.
Developed Python scripts to monitor job logs, automatically identifying and capturing errors in real time and triggering error notifications to stakeholders (see the log-monitoring sketch below).
Created shell scripts to handle job failures automatically by checking the exit status of ETL tasks and taking corrective actions such as retrying jobs or triggering fallback processes (see the retry sketch below).
Extensive background across the full Software Development Life Cycle (SDLC) and data analysis, delivering effective business solutions through technical and analytical expertise.
Worked with DBAs to implement indexing strategies that reduced page reads and improved overall database performance by 35%.
Strong SQL development and troubleshooting skills, including DDL, DML, and DCL, with hands-on experience in complex T-SQL code, triggers, functions, clustered and non-clustered indexes, views, and joins.
Strong data visualization skills, building reports at varying levels of complexity, including parameterized, drill-through, sub-, and nested reports.
Optimized DAX queries and data models, improving dashboard performance by 50%.
Delivered end-to-end data solutions using Azure cloud services, including Azure SQL, Data Factory, and Logic Apps, to support scalable data integration and automation workflows.
Engineered data pipelines in Azure Databricks using PySpark, transforming and cleansing high-volume streaming and batch data from multiple sources and increasing processing speed by 3x (see the PySpark sketch below).
Leveraged Azure Blob Storage and Azure Data Lake for storing and processing large volumes of structured and unstructured data, optimizing data access and retrieval processes.
Streamlined the deployment and scaling of data solutions using Azure DevOps, ensuring continuous integration and delivery (CI/CD) for cloud-based applications and services.
Implemented CI/CD pipelines using Jenkins to automate data workflows, integrating Bitbucket for version control and checkpointing, resulting in a 75% reduction in manual deployment errors and faster rollback during data pipeline failures.
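A minimal sketch of the kind of log-monitoring script described above. The log path, error pattern, sender/recipient addresses, and SMTP relay are hypothetical placeholders, not the actual production values; a real version would plug in the team's alerting channel.

```python
import re
import smtplib
import time
from email.message import EmailMessage
from pathlib import Path

# Hypothetical locations and recipients -- placeholders, not production values.
LOG_FILE = Path("/var/log/etl/daily_load.log")
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL|ORA-\d+)\b")
STAKEHOLDERS = ["data-team@example.com"]


def notify(lines: list[str]) -> None:
    """Send the captured error lines to stakeholders via a local SMTP relay."""
    msg = EmailMessage()
    msg["Subject"] = f"ETL error alert: {len(lines)} new error(s)"
    msg["From"] = "etl-monitor@example.com"
    msg["To"] = ", ".join(STAKEHOLDERS)
    msg.set_content("\n".join(lines))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)


def follow(path: Path):
    """Yield new lines appended to the log file (a simple 'tail -f' in Python)."""
    with path.open() as fh:
        fh.seek(0, 2)  # start at the end of the file
        while True:
            line = fh.readline()
            if not line:
                time.sleep(1)
                continue
            yield line.rstrip()


if __name__ == "__main__":
    for log_line in follow(LOG_FILE):
        if ERROR_PATTERN.search(log_line):
            notify([log_line])
```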
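The failure-handling scripts themselves were written in shell; the sketch below renders the same exit-status/retry/fallback pattern in Python for consistency with the other examples. The ETL and fallback commands, retry budget, and back-off interval are illustrative assumptions.

```python
import subprocess
import sys
import time

# Hypothetical commands and retry budget -- placeholders for the real ETL job
# invocation and fallback process.
ETL_COMMAND = ["pmcmd", "startworkflow", "wf_daily_financial_load"]
FALLBACK_COMMAND = ["python", "run_fallback_load.py"]
MAX_RETRIES = 3


def run_with_retries() -> int:
    """Run the ETL task, retry on a non-zero exit status, then fall back."""
    for attempt in range(1, MAX_RETRIES + 1):
        result = subprocess.run(ETL_COMMAND)
        if result.returncode == 0:
            print(f"ETL task succeeded on attempt {attempt}")
            return 0
        print(f"ETL task failed (exit {result.returncode}), attempt {attempt} of {MAX_RETRIES}")
        time.sleep(60 * attempt)  # simple back-off between retries
    # All retries exhausted: trigger the fallback process instead.
    return subprocess.run(FALLBACK_COMMAND).returncode


if __name__ == "__main__":
    sys.exit(run_with_retries())
```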
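A condensed sketch of the kind of PySpark cleansing and transformation step described above, as it might run in an Azure Databricks job. The storage account, container paths, and column names are illustrative assumptions rather than the actual datasets.

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative lake paths and columns -- not the actual production datasets.
RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/transactions/"
CURATED_PATH = "abfss://curated@examplelake.dfs.core.windows.net/transactions/"

spark = SparkSession.builder.appName("transaction_cleansing").getOrCreate()

# Read high-volume batch data landed in the data lake.
raw = spark.read.parquet(RAW_PATH)

# Basic cleansing: deduplicate, standardize types, filter out bad records.
cleansed = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("txn_date", F.to_date("txn_date", "yyyy-MM-dd"))
       .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
)

# Write curated output partitioned by date for faster downstream reads.
cleansed.write.mode("overwrite").partitionBy("txn_date").parquet(CURATED_PATH)
```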