Job Opportunities:

Job Title: Senior Software Engineer
Location: Reston, VA
Duration: Year-End+

Job Description:
Experienced in Tivoli Netcool; performs high-complexity (i.e., system-level application) analysis, design, development, and unit testing of software applications from user requirements and design documents. The candidate also resolves defects encountered during the various testing cycles.

Requirements:


Job Title: Big Data Administrator
Location: Reston, VA
Duration: 6 Months+

Job Description:

Requirements:


Job Title: Systems Analyst Specialist
Location: Richmond, VA
Duration: 3 Years

Qualification:
The candidate must have an expert understanding of Linux, the Hadoop ecosystem, and the associated infrastructure. Specifically, the candidate:
- Must have knowledge of setting up and configuring Kerberos, Spark, RStudio, Kafka, Flume, Shiny, Ranger, Oozie, NiFi, etc.
- Should have a solid understanding of system capacity and bottlenecks, and of the basics of memory, CPU, OS, storage, and networks
- Should be able to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure name-node high availability, schedule and configure it, and take backups (a minimal monitoring sketch follows this list)
- Should have a solid understanding of on-premise and cloud network architectures
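To illustrate the cluster-monitoring skill above, here is a minimal Python sketch that polls the YARN ResourceManager's standard REST endpoint (/ws/v1/cluster/metrics) for node and capacity health. The ResourceManager host name and the 90% memory threshold are assumptions made for the example, not details from this posting.

import json
import urllib.request

# Hypothetical ResourceManager address; substitute your cluster's host.
RM_URL = "http://resourcemanager.example.com:8088"

def cluster_metrics(rm_url=RM_URL):
    """Fetch cluster-wide metrics from the YARN ResourceManager REST API."""
    with urllib.request.urlopen(f"{rm_url}/ws/v1/cluster/metrics") as resp:
        return json.loads(resp.read())["clusterMetrics"]

if __name__ == "__main__":
    m = cluster_metrics()
    # Surface the numbers an administrator checks first: dead nodes,
    # queued applications, and overall memory pressure.
    print(f"active nodes: {m['activeNodes']}, lost nodes: {m['lostNodes']}")
    print(f"apps running: {m['appsRunning']}, apps pending: {m['appsPending']}")
    if m["totalMB"] and m["allocatedMB"] / m["totalMB"] > 0.9:
        print("warning: cluster memory allocation above 90%")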

Responsibilities:
Will work fairly independently and perform complex development and support services with the IT enterprise infrastructure teams, ensuring operability, capacity, and reliability for the Big Data system. Specifically, the candidate:
- Will assist in planning, design, support, implementation, and troubleshooting activities
- Will work with developers and architects to support an optimal and reliable Big Data infrastructure
- Will be on call and may need to work evenings/weekends as required
- Will be responsible for implementation and ongoing administration of the Hadoop infrastructure, and will align with the architect to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
- Will set up new users in Linux. This includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, HBase, and YARN access for the new users (a sketch of this workflow follows this list), as well as cluster maintenance and the creation and removal of nodes using the appropriate administrative tools
- Performance tuning of Hadoop clusters and Spark processes
- Screen the Hadoop cluster for job performance and capacity planning
- Monitor Hadoop cluster connectivity and security; set up and monitor users of the system
- Manage and review Hadoop log files
- File system management and monitoring
- Diligently team with developers to guarantee high data quality and availability
- Collaborate with application teams and users to perform Hadoop updates, patches, and version upgrades when required
- Work with vendor support teams on support tasks and troubleshoot system issues
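As a sketch of the user-onboarding step described above, the Python script below chains the standard command-line tools: useradd, MIT Kerberos's kadmin.local, kinit, and the hdfs client. The user name, realm, and keytab path are placeholders, and the script assumes it runs as root on a cluster edge node whose account also has HDFS superuser rights; it is an illustration of the workflow, not a tool referenced by the posting.

import subprocess

def run(cmd):
    """Echo a command, run it, and fail loudly on a non-zero exit."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def onboard_user(username, realm="EXAMPLE.COM",
                 keytab_dir="/etc/security/keytabs"):
    """Create a Linux account and matching Kerberos principal, export a
    keytab, provision an HDFS home directory, and smoke-test access."""
    principal = f"{username}@{realm}"
    keytab = f"{keytab_dir}/{username}.keytab"
    run(["useradd", "-m", username])                               # local OS account
    run(["kadmin.local", "-q", f"addprinc -randkey {principal}"])  # Kerberos principal
    run(["kadmin.local", "-q", f"ktadd -k {keytab} {principal}"])  # export keytab
    run(["hdfs", "dfs", "-mkdir", "-p", f"/user/{username}"])      # HDFS home dir
    run(["hdfs", "dfs", "-chown", username, f"/user/{username}"])
    run(["kinit", "-kt", keytab, principal])                       # authenticate as user
    run(["hdfs", "dfs", "-ls", f"/user/{username}"])               # verify HDFS access

if __name__ == "__main__":
    onboard_user("jdoe")  # "jdoe" is a hypothetical new user

A similar Hive or HBase check (for example, a simple query run under the new ticket) would complete the access test described above.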


Disclaimer:

The information provided on this website is for informational purposes only. While every care has been taken to ensure the information is useful and accurate, we do not accept any legal liability or responsibility for the information so provided, or for any loss caused directly or indirectly in connection with the use of the information.