Bhuvaneshwari Maturi

Summary

Over five years of experience as a Snowflake DBA and Developer, supporting development teams across geographies.
Experienced DWH, ETL, and QA analyst with strong knowledge of the design, development, and support of databases and information products, as well as data analysis, reporting, and testing.
Responsible for leading, designing, and implementing the overall ODS and EDW effort, working with business and project stakeholders to create the enterprise data warehouse.
As a Snowflake DBA, responsible for onboarding users onto Snowflake and for creating and managing storage integrations, database objects, stages (internal and external), and file formats (JSON, CSV, TSV).
As a Snowflake DBA (security admin), responsible for managing roles, network policies, data masking policies, and global user (service) and role privileges.
Created and monitored resource monitors to alert on credit consumption at the account level and for individual warehouses.
Created and managed users on Matillion, managed instance backups, and handled cloud credentials (AWS) and the transport of objects between environments.
Created Snowpipes to load streaming data from S3 buckets; created SQS queues and implemented IAM policies (a sketch follows this list).
Created a canonical model in Snowflake as a single source of truth by building data pipelines that connect and migrate structured and semi-structured data (JSON, CSV, XML) from different sources.
Worked on cost optimization of the Snowflake database by analyzing queries and re-engineering the data pipelines used by various services (warehouses, Snowpipe, clustering); reduced costs by 15%.
As a security admin, worked on importing data from the Snowflake Marketplace, data migration and extraction, and replication of data across regions.
Expert in tuning SQL queries to improve system performance.
As an administrator, worked on creating S3 bucket lifecycle policies (a boto3 sketch also follows this list).
Expertise in developing complex business rules by creating graphs and various transformations in Ab Initio.
Extensive experience in planning and implementing Tableau architecture for projects with Snowflake, BigQuery, SAP HANA, SAP BW, SQL Server, and Salesforce Sales Cloud as data sources.
Developed source-to-target mapping documents for the dimension and fact tables.
Integrated custom visuals based on business requirements using Power BI Desktop.
Extensively used Alteryx workflows to extract, transform, and load data into the staging area and warehouse.
Hands-on experience using Alteryx for data cleansing and standardization.
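
A minimal sketch of the kind of Snowpipe and resource monitor setup described above, using the snowflake-connector-python client; the account, bucket, integration, warehouse, and object names are hypothetical placeholders, not the actual production objects.

    import snowflake.connector

    # Connect with a privileged role; all credentials here are placeholders.
    conn = snowflake.connector.connect(
        account="my_account", user="dba_user", password="***", role="SYSADMIN"
    )
    cur = conn.cursor()

    # JSON file format and an external stage over a hypothetical S3 bucket.
    cur.execute("CREATE FILE FORMAT IF NOT EXISTS ff_json TYPE = JSON")
    cur.execute("""
        CREATE STAGE IF NOT EXISTS ext_stage
          URL = 's3://example-bucket/landing/'
          STORAGE_INTEGRATION = s3_int
          FILE_FORMAT = (FORMAT_NAME = 'ff_json')
    """)

    # Snowpipe with auto-ingest, so S3 event notifications (via SQS) trigger loads.
    cur.execute("""
        CREATE PIPE IF NOT EXISTS load_events AUTO_INGEST = TRUE AS
          COPY INTO raw.events FROM @ext_stage
    """)

    # Resource monitor that notifies at 80% of the credit quota and suspends at 100%.
    cur.execute("""
        CREATE OR REPLACE RESOURCE MONITOR rm_etl WITH CREDIT_QUOTA = 100
          TRIGGERS ON 80 PERCENT DO NOTIFY
                   ON 100 PERCENT DO SUSPEND
    """)
    cur.execute("ALTER WAREHOUSE etl_wh SET RESOURCE_MONITOR = rm_etl")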
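
Likewise, a sketch of the S3 bucket lifecycle policies mentioned above, using boto3; the bucket name, prefix, and retention rules are illustrative assumptions.

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical rule: move objects under landing/ to Glacier after 30 days
    # and expire them after a year.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-landing-zone",
                    "Filter": {"Prefix": "landing/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )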

Expertise in logging and tracking defects and managing the defect life cycle using JIRA, Rally, QC/ALM, ServiceNow, and PractiTest.
Hands-on experience with version control systems and CI/CD tools such as GitHub, Jenkins, and Puppet.
Experienced in Agile (Scrum) methodology and DataOps processes for continuous development.
Excellent problem solving, interpersonal, and communication skills.
Collaborated and engaged with product managers, product owners, and architects in problem solving, planning, and decision making.
Performed requirements analysis, functional and architectural documentation, development, testing, change management, industrialization, and user training.
Defined processes and artifacts, coached teams, prioritized product backlogs, and implemented JIRA for Agile project management.

Overview

6 years of professional experience

Work History

Sr. Snowflake Developer

JD Power Systems
01.2021 - Current

Developed and refined the Spark process for the ODS (Operational Data Store), enhancing the performance of data ingestion from the raw and refined zones through to publishing Postgres data to the core script, using Python and PySpark.
Developed complex SQL queries against different databases for the data verification process (a verification sketch follows this list).
Prepared the test plan and testing strategies for the data warehousing application.
Developed ETL test scripts based on technical specifications, data design documents, and source-to-target mappings.
Extensively interacted with developers and with business and management teams to understand the OPM project's business requirements and ETL design document specifications.
Participated in regular project status meetings and QA status meetings.
Extensively used and developed SQL scripts and queries in backend testing of databases.
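
A minimal sketch of the kind of cross-database verification query referred to above, comparing a source Postgres table against its Snowflake target; connection details and table names are hypothetical placeholders.

    import psycopg2
    import snowflake.connector

    # Hypothetical connections to the source (Postgres) and target (Snowflake).
    src = psycopg2.connect(host="pg-host", dbname="ods", user="qa", password="***")
    tgt = snowflake.connector.connect(account="my_account", user="qa", password="***")

    def row_count(conn, table):
        # Row-count check; a fuller verification would also compare checksums
        # or column-level aggregates per the mapping document.
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

    assert row_count(src, "public.orders") == row_count(tgt, "raw.orders"), \
        "source and target row counts differ"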

Validated data fields from the refined zone to ensure the integrity of the published tables.
Converted ingested data (CSV, XML, JSON) to compressed Parquet file format (a PySpark sketch follows this list).
Created business models from business cases and enterprise architecture requirements for process monitoring, improvement, and reporting, and led the team in developing business intelligence solutions.
Experience performing transformations and actions on RDDs, DataFrames, and Datasets using Apache Spark.
Good knowledge of Spark and Hadoop architecture and experience using PySpark for data processing.
Applied advanced DW techniques and Informatica best practices to load a financial, HR, and supply management data warehouse, data marts, and downstream systems.
Developed several complex Informatica mappings, mapplets, stored procedures, and reusable objects to implement the business logic and load the data incrementally.
Designed, developed, and tested Informatica mappings, workflows, worklets, reusable objects, SQL queries, and shell scripts to implement complex business rules.
Developed gsutil scripts for gzip compression, backup, and transfer to the edge node, with all necessary file operations for BigQuery load jobs.
Worked on a combination of unstructured and structured data from multiple sources and automated the cleaning using Python scripts.
Aggregated data from multiple sources and performed resampling to handle the issue of imbalanced data.
Coded in PostgreSQL to publish 10 million records from more than 90 tables, ensuring the integrity of the data flow in real time.
Provided a single environment for data integration and data federation, with role-based tools that share common metadata, using Informatica data virtualization.
Analyzed ETL requirement specifications to develop HLDs and LLDs for SCD Type 1, Type 2, and Type 3 mappings, and was involved in testing various data and reports.
Experienced as a senior ETL developer (Hadoop ETL, Teradata, Vertica, Informatica, DataStage, mainframe), subject matter expert (SME), production support analyst, and QA tester.
Extensively worked with Microsoft TFS as a tool to deploy production-level code alongside Git.
Constructed robust, high-volume data pipelines and architecture to prepare data for analysis by the client.
Architected complete, scalable data warehouse and ETL pipelines to ingest and process millions of rows daily from 30+ data sources, allowing powerful insights and driving daily business decisions.
Implemented optimization techniques for data retrieval, storage, and data transfer.
Created test cases for ETL mappings and design documents for production support; set up, monitored, and used a job control system in development, QA, and production.
Extensively worked with flat files and Excel sheets as data sources; wrote scripts to test the flat-file data in the databases, and scheduled and automated jobs to run as batch processes.
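
A minimal sketch of the CSV-to-compressed-Parquet conversion described above, using PySpark; the S3 paths and cleansing steps are illustrative assumptions, not the production pipeline.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ods-ingest").getOrCreate()

    # Read raw CSV from a hypothetical landing path, inferring the header row.
    raw = spark.read.option("header", "true").csv("s3://raw-bucket/ods/input/")

    # Basic cleansing: drop fully null rows and trim string columns.
    cleaned = raw.dropna(how="all").select(
        [F.trim(F.col(c)).alias(c) for c in raw.columns]
    )

    # Write to the refined zone as Snappy-compressed Parquet.
    cleaned.write.mode("overwrite").option("compression", "snappy") \
        .parquet("s3://refined-bucket/ods/output/")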

Environment: Informatica PowerCenter 10.4, Snowflake, Spark 2.4, HBase 1.2, Tableau 10, Power BI, Python 2.7 and 3.4, Scala, PySpark, HDFS, Flume 1.6, Hive, Zeppelin, PostgreSQL, MySQL, TFS, Linux, Spark SQL, Kafka, NiFi, Sqoop 1.4.6, AWS (S3).

Snowflake Developer

Quoter
04.2018 - 01.2021

Reviewed business requirements with the client, worked with the business analysts to iron out gaps in the requirements, and provided advice on application improvements and caveats. Wrote SnowSQL and stored procedures, and designed Snowflake tables (temporary, transient, permanent).
Worked with Snowflake objects: warehouses, roles, databases, schemas, tables, views, constraints, and table clustering keys.
Worked with Snowpipe to enable automatic loading of data from files, including semi-structured data types such as JSON, Avro, and Parquet.
Worked with Snowflake streams, Time Travel, COPY statements, and the VALIDATE and VALIDATE_PIPE_LOAD functions (a sketch follows this entry).
Requirement analysis: analyzed the requirements to validate assumptions about the application architecture against the actual architecture based on code and data analysis, and updated the business on any discrepancies; identified reusable existing application modules that might satisfy the requirements and suggested alternative approaches to reduce work effort.
Design (technical design creation): created comprehensive TDDs (technical design documents) with clearly documented assumptions, technical flow, and physical data modeling; designed table structures, indexes, sequences, constraints, triggers, and other database objects; modularized the design with package/procedure designs with parameters; designed an application-level exception-handling strategy that is followed as a standard; and clearly defined the functionality of the Unix scripting, ETL functionality, and Autosys scheduler.
Development: developed and led end-to-end solutions integrating various technologies into the application for the respective project, using ETL DataStage jobs, sequencers, mappings, transformations, UDFs, reusable components, command tasks, Oracle PL/SQL programming, and stored procedures in Microsoft SQL Server.
Migrated DataStage 8.5 jobs to DataStage 11.5 as part of the ITA App project.
Migrated DataStage 11.5 jobs to DataStage 11.7 as part of the ETL migration project.
Migrated on-premises databases (MySQL, SQL Server, Oracle, Netezza) to staging schemas in Snowflake.
Developed DataStage jobs, Unix scripts (including sed and awk), and PL/SQL, and performed SQL performance tuning.
Created code review standards for Unix scripting, SQL, and PL/SQL.
Performed code reviews to identify missed requirements and to check coding standards, exception handling, scenario coverage, scheduling, performance issues, and other application impacts.
QA and production support: performed root cause analysis of reported issues, code changes and fixes, and impact analysis, and provided workarounds until code changes were in place.
Environment: ETL - IBM InfoSphere DataStage v8.5/11.5/11.7, Oracle 12c (SQL, PL/SQL), Unix shell scripting, Snowflake Cloud DB 5.8.2, SnowSQL, MS SQL Server 2012, IICS Cloud ETL, AWS (SQS, EC2, S3, Redshift, RDS), Autosys, Microsoft Azure.
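
A minimal sketch of the Snowflake table clustering, streams, Time Travel, and load-validation work listed in this entry, issued through snowflake-connector-python; the schema and table names are hypothetical placeholders.

    import snowflake.connector

    conn = snowflake.connector.connect(account="my_account", user="dev", password="***")
    cur = conn.cursor()

    # Transient table with a clustering key on the date column.
    cur.execute("""
        CREATE TRANSIENT TABLE IF NOT EXISTS stg.orders (
            order_id NUMBER, order_date DATE, payload VARIANT
        ) CLUSTER BY (order_date)
    """)

    # Stream capturing change records for incremental downstream loads.
    cur.execute("CREATE STREAM IF NOT EXISTS stg.orders_stream ON TABLE stg.orders")

    # Time Travel: query the table as it looked one hour ago.
    cur.execute("SELECT COUNT(*) FROM stg.orders AT(OFFSET => -3600)")

    # Surface any errors from the most recent COPY into the table.
    cur.execute("SELECT * FROM TABLE(VALIDATE(stg.orders, JOB_ID => '_last'))")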

Skills

    TECHNICAL SKILLS
    Programming Languages: Python, JSON, XML, COBOL
    ETL Tools: Snowflake, Informatica
    OLTP: CICS
    Control Language: JCL
    Databases: DB2, Oracle, SQL, SnowSQL
    Access Methods: VSAM
    Debugging Tools: Xpediter, InterTest, and IBM Debugger
    Utilities/Tools: QMF, SPUFI, DB2 Platinum, File-AID, TSO/ISPF, IDCAMS, IEBGENER, MQ Series
    Configuration Control: ISPW, Endevor, ChangeMan
    Schedulers: Stonebranch UAC, CA7, Control-M, and Jobtrack
    Bug Tracking Tools: HP Quality Center, JIRA, and PractiTest
    Operating Systems: Windows 95/98/NT/2000, MVS/ESA, z/OS, OS/390, UNIX, Linux
    Microsoft Technologies: MS Word, MS Excel, Visio, PowerPoint
    Development Methodologies: Agile, Waterfall, and Iterative Development
    Interfacing Tools: PuTTY, WinSCP

Timeline

Sr. Snowflake Developer

JD Power Systems
01.2021 - Current

Snowflake Developer

Quoter
04.2018 - 01.2021