
Bharath Aennam

Vancouver, BC

Summary

A top-performing technologist with 8 years of experience in DevOps implementation, cloud platform engineering, containerization, infrastructure development, automation, and Python development.
Experience in implementing DevOps/SRE practices within private and public cloud architectures.

Experience in enterprise data platform framework development on AWS.

Proven track record in the successful implementation of agile methodologies, software containerization and orchestration, continuous integration/delivery, build and release management, and configuration management.

Expertise in API and application development using Python.

Expertise in automating CI/CD deployments.

Expertise in deploying applications onto containers.
Experience working with top-tier financial and healthcare firms.

Overview

10 years of professional experience

Work History

Senior DevOps/SRE Engineer

JD Power Systems
04.2021 - Current

Project Description:

The objective of JPMC's Public Cloud Platform Engineering teams is to design, build, and deploy applications onto the public cloud platform. JPMC is building a DataMesh in AWS, and the cloud team helps provision AWS resources, build CI/CD pipelines, and deploy applications onto EKS. Serverless framework development is done using Python and Terraform.

Job Duties:

  • Developed JPMC's Enterprise Data Platform on AWS using Python, Terraform, and CI/CD pipelines
  • Developed a serverless framework to onboard AWS Producer Lake accounts into AWS central DataMesh accounts using Lambda, Step Functions, API Gateway, and DynamoDB
  • Developed Terraform modules to automate the creation and onboarding of AWS Data Lake accounts for consumer banking customers
  • Developed Continuous Integration/Delivery pipelines for Python, Java, and Terraform applications
  • Containerized software using Docker and deployed applications using Kubernetes and EKS
  • Deployed Python, Java, Node.js, and Go applications into Kubernetes
  • Built container images using Skaffold and Kaniko
  • Created and imported SSL certificates into ACM and configured them with ALBs
  • Developed Terraform modules to provision AWS services such as KMS, S3, ASG, IAM roles, role policy updater, and SQS
  • Created and managed Terraform Enterprise workspaces
  • Implemented Sentinel for Chase's Policy as Code framework
  • Wrote Sentinel policies for Chase-approved AWS services and updated existing ones to meet new standards
  • Implemented Lake Formation and Glue for database-, table-, and column-level access entitlements
  • Developed unit and integration test cases using Python Pytest and Robot Framework
  • Designed and released applications using the blue-green deployment model
  • Experience in setting up EKS clusters and deploying customer applications
  • Experience in setting up Application Load Balancers (ALBs) and Auto Scaling groups on production EC2 instances to build fault-tolerant, highly available applications
  • Developed cost calculator APIs using the Python Flask framework to compare costs between on-prem and AWS
  • Developed CI/CD pipelines for AWS IaC implementations
  • Created AWS cloud resources using Service Catalog
  • Experience troubleshooting Amazon Web Services (AWS) cloud services using tools such as CloudWatch, CloudTrail, Datadog, and Splunk

DevOps/SRE Engineer

MOGO
06.2017 - 03.2021

Project Description:

The DomaniRx product team is focused on shaping the future of pharmacy benefit management by delivering a cloud-native, API-driven claims adjudication platform. The AWS cloud team is responsible for helping the product team with public cloud adoption.

Job Duties:

  • Designed, developed, implemented, maintained, and continuously refined data warehouse architectures on AWS
  • Automated S3 setup and cross-account sharing for healthcare record sharing
  • Developed Continuous Integration/Delivery pipelines
  • Containerized software using Docker and orchestrated containers using Kubernetes and EKS
  • Developed Terraform modules to provision AWS services such as KMS, S3, EC2, ASG, and IAM roles
  • Provisioned AWS services such as EC2, KMS, S3, Lambda, and EMR using Terraform modules
  • Created IAM roles and policies for various customer use cases
  • Created and managed Terraform Enterprise workspaces
  • Built an enterprise-level hybrid cost calculator to create cost transparency between private and public cloud
  • Led the transition from manual resource provisioning to Infrastructure as Code using Terraform, including module development and implementation
  • Developed APIs for private cloud infrastructure provisioning and build automation
  • Developed Ansible playbooks for telemetry collection
  • Developed an Automation-as-a-Service framework for infrastructure configuration management using Ansible
  • Migrated data from on-prem to the AWS Cloud
  • Deployed applications on a modern PaaS platform built on Pivotal Cloud Foundry

Linux System Admin and Storage Engineer

Aurobindo Pharma
06.2014 - 07.2015

Project Description:

Aurobindo Pharma Limited is an Indian pharmaceutical manufacturing giant. The Storage Division is responsible for managing distributed storage systems that store petabytes of customer data.

Job Duties:

  • Automated storage operations using Python
  • Performed performance analysis of storage frames/fabrics and provided optimized solutions to build stable infrastructure
  • Studied storage requirements of different LOB applications/databases and provided optimized solutions
  • Investigated the current SAN infrastructure in depth and provided solutions
  • Wrote technical standards for deployment, capacity, and support teams
  • Designed SAN for new application deployments and participated in application deployment testing
  • Performed trend analysis of the current infrastructure and provided capacity-related solutions
  • Designed, built, and implemented new frames for EMC VMAX, IBM XIV, and HDS VSP
  • Migration projects: EMC DMX to VMAX, VSP to VMAX, and CLARiiON to VMAX
  • Performed target modeling for migrations across EMC VMAX, IBM XIV, and VSP

Education

Master of Science in Information, Network and Computer Security - INCS

NYIT
Vancouver, BC
05.2017

Bachelor of Technology - Computer Science Engineering

Jawarhar Lal Technology University
Hyderabad
05.2014

Skills

  • Python programming
  • API development
  • Serverless stack - AWS Lambda, Step Functions, API Gateway, and DynamoDB
  • Data platform - DataMesh, Data Lakes, Lake Formation, Glue, EMR, and Airflow
  • Infrastructure as Code development - Terraform
  • Containers - Docker, Kubernetes, AWS EKS, Skaffold, Kaniko, and JFrog
  • Compute - AWS EC2, AWS ASG, and VMware
  • Storage - AWS S3, EBS, EFS, and software-defined storage
  • Load balancers - AWS ALB and ACM
  • Configuration management - Ansible
  • Dashboard creation - Grafana, Edge, and Tableau
  • Databases - MySQL, MSSQL, AWS Athena, and Redshift
  • Tool adoption - Jira, Bitbucket, AIM, and Jenkins
  • Telemetry development - Prometheus and Splunk
  • CI/CD - Jenkins Groovy and pattern builds
  • Middleware - Redis and SQS
  • Deployment patterns - blue-green deployment
  • Unit testing - Pytest
  • Integration/acceptance testing - Robot Framework

Languages

English
Professional working proficiency

Timeline

Senior DevOps/SRE Engineer

JD Power Systems
04.2021 - Current

DevOps/SRE Engineer

MOGO
06.2017 - 03.2021

Linux System Admin and Storage Engineer

Aurobindo Pharma
06.2014 - 07.2015

Master of Science in Information, Network and Computer Security - INCS

NYIT

Bachelor of Technology - Computer Science Engineering

Jawarhar Lal Technology University