Data Systems Engineer
Please find the job position details below.

Position: Data Systems Engineer
Location: Alpharetta, GA or Menlo Park, CA (3 days onsite - hybrid)
Duration: 12+ months (with possible extension)
Years of Experience: 7-15 years

Must Have:
- Strong working knowledge of the Linux platform.
- Application development experience: Python (#1); Ruby or Shell (#2).
- Able to run applications on Linux, debug them, and understand their resource footprint (memory, CPU, etc.) while they run (see the second sketch at the end of this posting).
- ELK (Elasticsearch) experience.
- Kafka experience: building pipelines with ELK and Kafka.
- Strong communication, a team player, and someone who is open to learning and getting their hands dirty.
- Ability to learn; curiosity.

Plus:
- Flink
- Snowflake database experience
- Spark data processing
- Good data analysis background
- ELK certification
- Observability and data analysis

Day to Day:
- This person will not only do pipeline development but will also work in a large-scale cluster setting, which requires understanding scalability in the Kafka environment.
- They will ensure jobs are up and running, debug them, and support the hundreds of customers who use the system.
- Pipeline development using Kafka, ELK, and Hycon (a minimal illustration is sketched at the end of this posting).
- Strong Linux knowledge is needed, because that is where jobs are deployed, in the hundreds.
- The pipeline runs within Kafka, and almost all of the data is written to ELK (the candidate needs to understand how the data is stored).
- This calls for a diverse background!

Job Description: RTOI Data Engineer
The Real-Time Operational Intelligence (RTOI) team in the client's Enterprise Computing organization is responsible for streaming terabytes of data daily. The client has built job frameworks that run large-scale ETL pipelines with Kafka, Elasticsearch (ELK), Snowflake, and Hadoop. Client applications run both on prem and in the cloud. Hundreds of dashboards, built for business and operations teams, provide insight and actionable items in real time.

The client is looking for a streaming data engineer who can:
- Understand distributed systems architecture, design, and trade-offs.
- Design and develop ETL pipelines with a wide range of technologies.
- Work the full development cycle: defining requirements, design, implementation, testing, and deployment.
- Communicate well and collaborate with a variety of teams.
- Learn new technologies and work independently.

Requirements:
- 5 years of application development experience, with at least 2 years of data engineering with Kafka.
- Working experience writing and running applications on Linux.
- 5 years of coding experience with at least one of these languages: Python, Ruby, Java, C/C++, Go.
- SQL and database experience.

Optional:
- AWS or other cloud technologies.
- Elasticsearch (ELK).
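For illustration, here is a minimal sketch of the kind of Kafka-to-ELK pipeline work the day-to-day section describes, written in Python (the posting's #1 language). This is not the client's actual code: the kafka-python and Elasticsearch 8.x client libraries, the topic and index name app-metrics, the consumer group rtoi-etl, and the localhost addresses are all assumptions made for the example.

```python
# Minimal sketch of a Kafka -> Elasticsearch (ELK) pipeline job.
# Assumes: pip install kafka-python elasticsearch; a broker on
# localhost:9092 and Elasticsearch on localhost:9200.
import json

from kafka import KafkaConsumer
from elasticsearch import Elasticsearch

consumer = KafkaConsumer(
    "app-metrics",                      # hypothetical topic name
    bootstrap_servers=["localhost:9092"],
    group_id="rtoi-etl",                # consumers in one group split the
                                        # partitions, which is how jobs scale out
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
es = Elasticsearch("http://localhost:9200")

for msg in consumer:
    # One document per Kafka record; a production job would batch
    # writes with elasticsearch.helpers.bulk() instead.
    es.index(index="app-metrics", document=msg.value)
```

Scaling such a job out, the "scalability in the Kafka environment" the posting mentions, typically means running more consumer processes under the same group_id (up to one per topic partition) and batching the Elasticsearch writes.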
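And a small sketch of the Linux-side debugging the Must Have list asks for: checking a running job's memory and CPU consumption. It reads /proc directly to show what is under the hood; in practice top, ps, or the psutil library are the usual tools, and the proc_usage helper below is hypothetical, not part of any library.

```python
# Minimal sketch: inspect a Linux process's memory and CPU use via /proc.
import os

def proc_usage(pid: int) -> dict:
    """Return resident memory (kB) and cumulative CPU ticks for a Linux PID."""
    rss_kb = 0
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):   # resident set size, e.g. "VmRSS:  12345 kB"
                rss_kb = int(line.split()[1])
                break
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().split()           # naive split; fine while the comm field has no spaces
    utime, stime = int(fields[13]), int(fields[14])  # user/kernel CPU time in clock ticks
    return {"rss_kb": rss_kb, "cpu_ticks": utime + stime}

print(proc_usage(os.getpid()))  # sanity check on the current process
```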