Proven career in conceiving and implementing effective ideas and strategies that add value to the organization through inspiring leadership, rich experience, and innovation excellence. Disciplined professional with more than 14 years of expertise in application support; a team player experienced in the Banking & Finance domain, ready to take on the challenges of an exciting new position.
Entitlements Management – Identity & Access Management | Since Feb’17
Environment: Shell scripting, DB2, Sybase, Autosys, Informatica PowerCenter, Aveksa, Azure, Spark

IAM (Identity and Access Management) governs and manages the firm’s entitlements and the adoption of applications into the Entitlement Management Platform (EMP). An IAM system helps organizations improve their information security and compliance. EMP serves data to the Aveksa review platform, where entitlements are reviewed by the appropriate reviewers and monitors, after which rejected entitlements are revoked.

Responsibilities:
- Acting as SME and SPOC for 6 firm-critical applications; understanding and analyzing the applications’ requirement specifications
- Working on user requests and monitoring job failures and other issues (DB blocking, NAS space compression)
- Supervising incident & change management and coordinating with the development team on code-change issues
- Preparing flow charts of application workflows; creating the knowledge base and trend metrics
- Analyzing and fixing issues/errors within the application or arising during its various processes; examining the scope for new enhancements and automation
- Deploying new production code and decommissioning old/aged applications
- Arranging calls between the development, business, and support teams for new enhancements and weekly updates

Test Data Management & Sanitization | Apr’12 – Jan’17
Environment: Informatica PowerCenter 9.5.1, Workflow Manager, Workflow Monitor, Informatica PowerExchange, Shell scripting, Perl scripting, SQL, Teradata, DB2, SQL Server, BTEQ, MultiLoad, FastExport, FastLoad, TDMS

The objective of Test Data Management and Sanitization (TDMS) is to limit production (real-time) data access for IT application development and testing teams by masking or obfuscating confidential information (such as Name, Address, Date of Birth, SSN, Tax ID, FA Number, Passport Number, Credit Card, Driving License, and Phone Number), thereby protecting client-sensitive data (a masking sketch follows this role). In addition, the project manages and controls client-sensitive data transmitted to and from vendors, as well as sensitive data stored on developers’ workstations. The system collects data from the mainframe or by querying the Teradata, DB2, and SQL Server databases; the collected data is extracted, transformed, and loaded through Informatica into the target databases and servers, or sent to external systems such as Trillium, Business Objects, and Actuate. This data is then used by downstream applications for development and testing.

Responsibilities:
- Coordinated cross-functionally with application team members and the onsite client; attended client meetings to cascade day-to-day progress updates on the project
- Maintained the PII object inventory by identifying and analyzing delta objects, reporting them to the application team owning each object, and assisting that team in certifying whether the object is PII
- Worked on various automation and optimization activities, including mainframe reusable-component changes, design, coding, testing, implementation, and support
- Performed analysis on the scope of system improvement and suggested changes to make the system more stable and efficient
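Illustrative only: the sketch below is a minimal shell/awk rendering of the column-level masking idea described above, not the production TDMS implementation (which ran through Informatica mappings). The file name, pipe delimiter, and SSN column position are assumptions.

    #!/bin/sh
    # Mask an SSN column in a pipe-delimited extract, keeping only the last
    # four digits. customer_extract.dat and field position 4 are hypothetical.
    IN=customer_extract.dat
    OUT=customer_masked.dat
    awk -F'|' 'BEGIN { OFS = "|" }
        NR == 1 { print; next }                          # keep the header row intact
        { $4 = "XXX-XX-" substr($4, length($4) - 3, 4); print }' "$IN" > "$OUT"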
Components Engineering Group – Open Source Technology | May’10 – Mar’12
Environment: Shell scripting, Pentaho, Nagios, Eucalyptus open-source cloud, Amazon EC2, Hadoop, Hive, Pig, Sqoop, HBase

The project’s primary focus is to build TCS capability in open-source technology and drive its adoption across customer projects, to reuse generic solution components, to build tools and solutions that leverage them, and to enhance the usability and integration of component-engineering principles. The Open Source Technology (OST) group explores and analyzes open-source technologies, based on market trends and customer requirements, to deliver cost-effective solutions ahead of competitors. Beyond analysis, it helps customers choose among the products available in the market and customizes them to requirements by integrating services across products, providing low-cost solutions to clients. The group builds proofs of concept (POCs) to earn customer confidence and meet expectations, provides constant support throughout the Software Development Life Cycle (SDLC) along with value-added services at no extra cost (such as training and project maintenance), and conducts trainings across the organization to raise awareness of new technologies over traditional approaches.

Responsibilities:
- Analyzed customer requirements and assisted in all phases of the SDLC
- Responded to Requests for Proposal (RFPs) and Requests for Information (RFIs); created proofs of concept
- Provided effort estimation; conducted trainings across ISUs on various open-source technologies

Hadoop Eco-system | Apr’10 – Mar’14
Environment: Shell scripting, Perl scripting, Hadoop, Hive, HBase, Pentaho, Nagios, Eucalyptus Cloud

Big Data gained traction for cutting the processing time of terabytes or petabytes of data. For the same reason, TCS adopted the new technology to understand it and provide customized solutions to its partners, forming a central team to learn and investigate Hadoop and its tools.

Responsibilities:
- Successfully installed and configured Apache Hadoop, Hive, HBase, and Pig environments on the prototype servers; imparted trainings to over 1,000 users
- Established a 30-node cluster over WAN across two locations 250 km apart; migrated data between the two clusters
- Led the setup of a 10-node cluster on the Eucalyptus open-source cloud; managed the configuration to test the cluster’s high-availability mechanism
- Installed and configured Nagios on the cloud to monitor Hadoop cluster server health; steered the implementation of hot, cold, and incremental backups
- Front-led process optimization by measuring performance under various loads on the multi-node cluster
- Loaded unstructured data into the Hadoop Distributed File System (HDFS); engaged in a version upgrade
- Implemented commissioning and decommissioning of nodes in the existing cluster across the WAN (see the decommissioning sketch below)
- Set up secure passwordless connections between nodes in the cluster (see the SSH sketch below); created a Hadoop cluster on the Eucalyptus cloud, later configured in TCS internal labs
- Automated the addition and decommissioning of nodes in the cloud cluster, making effective use of commodity hardware
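As referenced above, a minimal sketch of the passwordless-SSH setup between cluster nodes. The node hostnames and the hadoop user are assumptions, not values from the original cluster.

    #!/bin/sh
    # Generate a passphrase-less key pair on the master node, then push the
    # public key to each worker so the Hadoop start/stop scripts can connect
    # without password prompts. node01..node03 and user 'hadoop' are hypothetical.
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    for node in node01 node02 node03; do
        ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@"$node"
    done
    ssh hadoop@node01 hostname    # verify: prints the hostname with no password prompt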
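And a sketch of graceful datanode decommissioning as it worked on Hadoop 1.x-era clusters. The file paths and hostname are assumptions, and it presumes dfs.hosts.exclude is already configured in hdfs-site.xml.

    #!/bin/sh
    # Add the node to the exclude file referenced by dfs.hosts.exclude, then ask
    # the namenode to re-read its host lists; HDFS re-replicates the node's blocks
    # before marking it Decommissioned. Path and hostname are hypothetical.
    echo "node03.example.com" >> /etc/hadoop/conf/dfs.exclude
    hadoop dfsadmin -refreshNodes
    # Track progress; the per-node report shows a "Decommission Status" field.
    hadoop dfsadmin -report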