Job Duties:
Perform system / applications requirement gathering and analysis and create and document Functional and Technical design specifications;
Perform architecture design, interface and workflow development, and implementation of Oracle database systems/applications;
Develop and enhance Oracle Reports, Forms, and Interfaces;
Convert Oracle database systems from legacy systems to ERP systems;
Perform system migrations and integrations;
Interact with various development and business teams on system integration, testing and quality assurance;
Develop and implement system/application testing – functional and performance testing;
Analyze, develop, and document root cause analysis (RCA) and implement solutions for production bugs;
Develop and implement testing documents (TE.020) and setup documents (BR.100);
Design, develop, and implement PL/SQL functions, procedures, and triggers as part of the integration process;
Coordinate and report issues and project status to Project Manager;
Participate in Project Review meetings with other Development Team members and the Business Team on a monthly and yearly basis;
Design, develop, and implement Hadoop (Big Data) system/database architectural redesigns, flow charts, workflows, database models, and user interfaces using Visual Studio, HTML, CSS, JavaScript, ASP, ASP.NET, SQL Server, PL/SQL, shell scripting, and XML;
Design, program, and implement software code and scripts for data loading and validation using Java and JavaScript;
Design, code, and load database models, tables, views, stored procedures, and queries for the application/system to capture and analyze data using SQL scripts, SQL Developer, and SQL Server;
Review and analyze data for importing, uploading, and validation;
Perform data mapping and data migration between databases and develop software scripts that correctly capture the data being migrated;
Deploy, migrate, customize and integrate the redesigned/modified database system/application;
Design, program and implement software code and software scripts for Data conversion and migration;
Perform Big Data analytics and data validation and develop stored procedures using SQL queries and MS SQL Server;
Install and configure multiple Hadoop clusters on the platform per requirements;
Configure Hadoop components on the database clusters, such as MapReduce, YARN, ZooKeeper, HDFS, Hive, HBase, Spark, Sqoop, and Oozie, and ensure that they function as expected;
Develop and automate ETL workflows to import and analyze required data from RDBMS sources such as SQL Server, Oracle, and DB2 into Hadoop (Big Data) using components such as Oozie, Sqoop, Hive, and UNIX shell scripts, and schedule the workflows to run daily;
Install and configure data analytics tools such as R and SAS on the Hadoop clusters and provide technical support to data scientists to ensure that the tools work properly;
Monitor multiple clusters on the Hadoop platform and resolve any issues that occur, ensuring that the platform is always available for data scientists to run their jobs;
Develop and implement an automated monitoring application in Python to transmit high-priority alerts if any component of the Hadoop platform is down;
Design, develop, and implement HiveQL queries to materialize existing Hive tables and generate data that can be used directly for data analytics;
Configure and administer Spark and provide technical training and support to the data science/analytics team on the use of Spark for their use cases.
Minimum Requirement: Bachelor of Science degree or equivalent in Computer Science or a related field such as Information Science or Software/Computer Engineering, and several years of hands-on experience in software or database systems design and development.
Location: Herndon, VA, and Laurel, MD