Sai Krishna

Toronto, Canada

Summary

  • Results-driven IT professional with approximately 6 years of experience across the Insurance, Aviation, Banking, and Pharmaceutical sectors, including over 3 years of recent expertise in Azure Data Factory, Azure Databricks, Azure SQL, Azure Data Lake Gen1/Gen2, and Delta Lake.
  • Proficient in designing and developing robust data ingestion pipelines in Azure Data Factory, and Spark applications in PySpark within Databricks, for data extraction, transformation, and aggregation across multiple file formats to meet business requirements.
  • Experienced in building Enterprise Data Warehouses and Data Marts, implementing dimensional modeling (Star/Snowflake schemas), and developing logical/physical data models in Azure Synapse (Dedicated SQL Pool DW).
  • Skilled in end-to-end business and system analysis, ETL solution design, release/change management, requirements gathering, process flow design, production support, testing, and detailed deliverable documentation.
  • Proficient throughout the complete software development lifecycle (SDLC), including database design/modeling, development, and deployment for business intelligence and data warehouse/data mart (ODS) initiatives.
  • Adept at business requirements gathering, development, implementation, and comprehensive technical documentation, including source-to-target mapping.
  • Experienced with reusable and non-reusable transformations such as XML, Normalizer, Expression, Aggregator, Lookup, Union, Joiner, Filter, and Stored Procedure.
  • Familiar with bulk, normal, and incremental loads; implements data cleansing, profiling, and test-plan validation to ensure successful data loading.
  • Competent with scheduling tools such as Control-M and Autosys, and with diverse data sources including relational databases, flat files (fixed/delimited), Parquet, and Delta formats.
  • Skilled at performance tuning of Databricks notebooks and ADF pipelines, code migration across environments, sprint planning through work items, and thorough documentation for UAT, test cases, and data validation.
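The incremental-load pattern mentioned above is typically driven by a high-watermark column; a minimal illustrative sketch in plain Python (not the actual pipeline code — the `modified_at` column and row shapes are hypothetical):

```python
from datetime import datetime

def incremental_extract(rows, last_watermark):
    """Return only rows changed since the stored high watermark,
    plus the new watermark to persist for the next run."""
    new_rows = [r for r in rows if r["modified_at"] > last_watermark]
    new_watermark = max((r["modified_at"] for r in new_rows),
                        default=last_watermark)
    return new_rows, new_watermark

# Hypothetical source rows carrying a modified_at audit column.
source = [
    {"id": 1, "modified_at": datetime(2023, 1, 1)},
    {"id": 2, "modified_at": datetime(2023, 3, 5)},
]
delta, wm = incremental_extract(source, datetime(2023, 2, 1))
```

In an ADF pipeline the same idea appears as a Lookup on the watermark table feeding a parameterized Copy activity source query.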

Overview

7 years of professional experience

Work History

Azure Data Engineer

CGI/TD
Toronto, Canada
04.2022 - Current
  • Developed and managed pipelines in Azure Data Factory integrating data from sources such as Azure SQL, Blob Storage, and write-back tools.
  • Executed ETL operations on Teradata sources and integrated outputs into Azure storage solutions using Azure Data Factory and Databricks (PySpark).
  • Managed data ingestion and processing across multiple Azure services (Data Lake, Storage, SQL, DW) with advanced experience leading multiple Azure big data implementations.
  • Engineered scalable architecture leveraging Azure components for optimal business problem resolution.
  • Developed Logic Apps workflows for API integration and data collection, built generic Databricks notebooks to enable code reuse, and designed parallel notebook execution to enhance performance.
  • Created Power BI reports and visualizations, scheduled refreshes, and performed data transformation within queries to optimize reporting.
  • Conducted data analysis, closely collaborating with analytics teams to tailor datasets to project needs.
  • Environment: SDLC, Oracle, Azure SQL, Azure Data Lake, Azure Data Factory v2, Azure Databricks, PySpark, SQL Server, Logic Apps, Function Apps, Power BI, JIRA
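The parallel notebook execution noted above is commonly achieved in Databricks by calling `dbutils.notebook.run` from a thread pool; a stand-alone sketch of that pattern, with a stub in place of the real Databricks call and hypothetical notebook paths:

```python
from concurrent.futures import ThreadPoolExecutor

def run_notebook(path, timeout=3600, params=None):
    # Stub standing in for dbutils.notebook.run(path, timeout, params);
    # here it simply reports which notebook "ran".
    return f"done:{path}"

# Hypothetical ingestion notebooks that are independent of one another.
notebooks = ["/etl/ingest_claims", "/etl/ingest_policies", "/etl/ingest_quotes"]

# Fan the notebooks out across worker threads; pool.map preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_notebook, notebooks))
```

Running independent notebooks concurrently this way shortens wall-clock time when each child job is I/O-bound on its own source.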

Azure Data Engineer

CHUBB Canada
Toronto, Canada
02.2021 - 01.2022
  • Analyzed and optimized existing programs with thorough coding and testing methodologies.
  • Designed and implemented ELT pipelines using Azure Data Factory and developed U-SQL & SQL scripts based on complex business logic.
  • Delivered incremental loads and implemented SCD Type 1 and Type 2 mechanisms, alongside data quality validation routines.
  • Utilized Azure DevOps for code management and followed Agile methodologies for collaborative development.
  • Environment: Azure SQL, Azure Data Lake, Azure Data Factory, U-SQL, Azure DevOps
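The SCD Type 2 mechanism mentioned above — expiring the current version of a changed dimension row and appending a new version — can be illustrated in plain Python (a simplified sketch with hypothetical row shapes; in the actual pipeline this would typically be a MERGE in SQL or Delta Lake):

```python
from datetime import date

def apply_scd2(dimension, incoming, today):
    """Close changed current rows and append new versions (SCD Type 2)."""
    current = {r["key"]: r for r in dimension if r["is_current"]}
    for rec in incoming:
        old = current.get(rec["key"])
        if old is None or old["attrs"] != rec["attrs"]:
            if old is not None:
                old["is_current"] = False   # expire the superseded version
                old["end_date"] = today
            dimension.append({"key": rec["key"], "attrs": rec["attrs"],
                              "start_date": today, "end_date": None,
                              "is_current": True})
    return dimension

dim = [{"key": 1, "attrs": {"city": "Toronto"},
        "start_date": date(2021, 1, 1), "end_date": None, "is_current": True}]
dim = apply_scd2(dim, [{"key": 1, "attrs": {"city": "Montreal"}}],
                 date(2021, 6, 1))
```

Type 1, by contrast, would simply overwrite `attrs` in place, keeping no history.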

L2-ETL Developer

BNP Paribas
Montreal, Canada
10.2019 - 12.2020
  • Interpreted business requirements and delivered tailored ETL solutions, including parameterized and multi-parameter cascading reports.
  • Developed stored procedures, views, functions, and handled complex data transformations for data marts population.
  • Generated, deployed, and maintained reports via SSRS, ensuring accessibility through web interfaces.
  • Environment: SQL Server, SSRS, SSIS

SQL Developer

APOTEX
ON, Canada
10.2018 - 09.2019
  • Collaborated with business stakeholders to identify needs and implement solutions via stored procedures, views, and functions.
  • Created DDLs and constraints, conducted performance tuning, and executed both unit and integration testing.
  • Environment: SQL Server, HTML, CSS, JavaScript, ASP.NET, MVC

Education

PGDM

St. Lawrence College
Kingston

Bachelor of Engineering - Information Technology

JNTU
Hyderabad

Skills

  • Data transformation techniques
  • Software development
  • Team collaboration
  • Data curation
  • PySpark and SQL
  • Azure ADF and Databricks
  • ETL operations
  • Power BI and SSRS
  • JIRA management
  • Microsoft Office suite
  • Azure SQL and Oracle databases

Title: Azure Data Engineer

Skills And Technologies

PySpark, SQL, Azure ADF, Azure Databricks, Power BI, SSRS, JIRA, Microsoft Word, Excel, PowerPoint, Azure SQL, Oracle 11g/10g, SQL Server 2010, Windows, UNIX, Oracle Linux
