Engineer Sr
Location: This position will work a hybrid model (remote and office). Ideal candidates will live within 50 miles of the following PulsePoint location: 7406 Fullerton Street, Suite 340, Jacksonville, FL 32256.
Hours: Monday to Friday, 8:00 am to 5:00 pm
The Engineer Sr will gather business requirements, analyze business processes, and recommend best-practice options for improving those processes.
How you will make an impact:
• Design and develop a custom Claims Framework.
• Extract source data from the ingestion framework for transformation, load, and reconciliation.
• Develop applications and implement Spark data processing projects to handle data from various RDBMS sources and vendor files.
• Write scripts to automate application deployments and configurations.
• Develop interactive shell scripts for scheduling various data cleansing and data loading processes.
• Develop a series of Spark jobs and automate them in multiple steps using Control-M.
• Import data from the Oracle database into the Hive warehouse (see the sketch after this list).
• Import metadata into Hive and migrate existing tables and applications to run on Hive and AWS S3.
• Review and implement technical design specifications based on mapping documents.
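To make the Oracle-to-Hive duty above concrete, here is a minimal sketch, assuming Spark with Scala (as listed in the requirements) and a Hive warehouse backed by AWS S3. The connection details, credentials, table names, partition column, and S3 paths are hypothetical placeholders, not details specified in this posting.

```scala
// Minimal sketch: load an Oracle table into a Hive warehouse table with Spark.
// All connection details, table names, and paths below are illustrative placeholders.
import org.apache.spark.sql.{SaveMode, SparkSession}

object OracleToHiveLoad {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("oracle-to-hive-load") // example job name
      .enableHiveSupport()            // write directly to the Hive warehouse
      .getOrCreate()

    // Read the source claims table from Oracle over JDBC (placeholder connection info).
    val claims = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1")
      .option("dbtable", "CLAIMS.CLAIM_HEADER")
      .option("user", sys.env("ORACLE_USER"))
      .option("password", sys.env("ORACLE_PASSWORD"))
      .option("driver", "oracle.jdbc.OracleDriver")
      .load()

    // Land the data in a partitioned Hive table stored on S3
    // (assumes the source exposes a claim_year column; placeholder location).
    claims.write
      .mode(SaveMode.Overwrite)
      .partitionBy("claim_year")
      .option("path", "s3://example-datalake/warehouse/claim_header")
      .saveAsTable("claims_dw.claim_header")

    spark.stop()
  }
}
```

In practice, a job like this would typically run as one step of a Control-M flow, as described in the duties above.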
REF# 164507
Minimum Requirements:
Bachelor's degree in Computer Science, Management Information Systems, or a related field. Five (5) years of experience working in related occupation(s).
Additional Requirements:
Five (5) years of experience must include:
• Five (5) years of experience in gathering business requirements, analyzing business processes, and recommending best-practice options for improving those processes.
• Five (5) years of experience in designing and developing a custom Claims Framework for Spark jobs.
• Five (5) years of experience in extracting source data from the ingestion framework for transformation, load, and reconciliation.
• Five (5) years of experience in developing Spark applications using Scala and implementing Spark data processing projects to handle data from various RDBMS sources and vendor files.
• Five (5) years of experience in reviewing and implementing technical design specifications based on mapping documents.
• Five (5) years of experience in Big Data technologies and in building data pipelines for efficient execution using AWS services, including Lambda, AWS Step Functions, EMR, Hive (data lake), and Redshift.
• Three (3) years of experience in building streaming applications using Kafka and Kinesis (see the sketch after this list).
• Three (3) years of experience in data architecture, peer reviews, code reviews, and mentoring the scrum team.
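To illustrate the streaming requirement above, the following is a minimal Spark Structured Streaming sketch in Scala that reads events from Kafka and appends them to an S3 data lake. The broker addresses, topic, and paths are hypothetical placeholders; a Kinesis source would follow a similar pattern with a different connector.

```scala
// Minimal sketch: stream claim events from Kafka into the data lake with
// Spark Structured Streaming. Brokers, topic, and paths are placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object ClaimEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("claim-event-stream")
      .getOrCreate()

    // Subscribe to the (hypothetical) claim-events topic.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
      .option("subscribe", "claim-events")
      .option("startingOffsets", "latest")
      .load()
      .select(col("key").cast("string"), col("value").cast("string"))

    // Append raw events to S3 as Parquet, with a checkpoint for fault tolerance.
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3://example-datalake/raw/claim_events")
      .option("checkpointLocation", "s3://example-datalake/checkpoints/claim_events")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```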