Software Engineer-1715

Remote, USA | Full-time
About the Position

FreeWheel, a Comcast company, provides comprehensive ad platforms for publishers, advertisers, and media buyers. Powered by premium video content, robust data, and advanced technology, we're making it easier for buyers and sellers to transact across all screens, data types, and sales channels. As a global company, we have offices in nine countries and deliver advertising around the world.

Job Description

Duties:
• Contribute to a team responsible for designing, developing, testing, and launching critical systems within the data foundation team
• Perform data transformations and aggregations using Scala within the Spark framework, including Spark APIs, Spark SQL, and Spark Streaming
• Use Java within the Hadoop ecosystem, including HDFS, HBase, and YARN, to store and access data and automate tasks
• Process data using Python and shell scripts
• Optimize performance on the Java Virtual Machine (JVM)
• Architect and integrate data using Delta Lake and Apache Iceberg
• Automate the deployment, scaling, and management of containerized applications using Kubernetes
• Develop software infrastructure using AWS services, including EC2, Lambda, S3, and Route 53
• Monitor applications and platforms using Datadog and Grafana
• Store and query relational data using MySQL and Presto
• Support applications under development and customize current applications
• Assist with the software update process for existing applications and with roll-outs of software releases
• Analyze, test, and assist with the integration of new applications
• Document all development activity
• Research, write, and edit documentation and technical requirements, including software designs, evaluation plans, test results, technical manuals, and formal recommendations and reports
• Monitor and evaluate competitive applications and products
• Review literature, patents, and current practices relevant to the solution of assigned projects
• Collaborate with project stakeholders to identify product and technical requirements
• Conduct analysis to determine integration needs
• Perform unit tests, functional tests, integration tests, and performance tests to ensure the functionality meets requirements
• Build CI/CD pipelines to automate the quality assurance process and minimize manual errors

Position is eligible to work remotely one or more days per week, per company policy.

Requirements:
• Bachelor's degree, or foreign equivalent, in Computer Science, Engineering, or a related technical field, and two (2) years of experience:
• Performing data transformations and aggregations using Scala within the Spark framework, including Spark APIs, Spark SQL, and Spark Streaming
• Using Java within the Hadoop ecosystem, including HDFS, HBase, and YARN, to store and access data and automate tasks
• Processing data using Python and shell scripts
• Developing software infrastructure using AWS services, including EC2, Lambda, S3, and Route 53
• Monitoring applications and platforms using Datadog and Grafana
• Storing and querying relational data using MySQL and Presto
• Of which one (1) year includes: optimizing performance on the Java Virtual Machine (JVM); architecting and integrating data using Delta Lake and Apache Iceberg; and automating the deployment, scaling, and management of containerized applications using Kubernetes

Disclaimer: This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities, and qualifications.
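The core data-transformation duty, aggregating event records by a key and summing a measure, can be sketched in plain Python. This is only a stand-in for the Scala/Spark version the role describes (in Spark it would be a `groupBy`/`agg` over a DataFrame), and all record fields and names here are hypothetical:

```python
from collections import defaultdict

# Hypothetical ad-impression records of the kind a data-foundation
# pipeline might aggregate; revenue is in cents to avoid float drift.
impressions = [
    {"publisher": "pub_a", "screen": "ctv", "revenue_cents": 12},
    {"publisher": "pub_a", "screen": "web", "revenue_cents": 5},
    {"publisher": "pub_b", "screen": "ctv", "revenue_cents": 20},
]

def revenue_by_publisher(records):
    """Sum revenue per publisher -- the shape of a Spark
    groupBy("publisher").agg(sum("revenue")) aggregation."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["publisher"]] += rec["revenue_cents"]
    return dict(totals)

print(revenue_by_publisher(impressions))  # → {'pub_a': 17, 'pub_b': 20}
```

At Spark scale the same grouping runs as a distributed shuffle rather than an in-memory loop, but the key-to-aggregate mapping is the same.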
Apply Now
