
Staff Software Engineer

Sorry, it looks like this job is no longer open 😔

Company: H-E-B
Job location: Dallas, United States
Salary: Undisclosed
Hosted by: Appcast

Job details

Responsibilities:

Company Name: H-E-B, LP
Job Location: 3890 W. Northwest Hwy., Suite 400, Dallas, TX 75220
Job title: Staff Software Engineer
Minimum Salary: $149,781
Education: Bachelor's degree in Electronics and Communication Engineering, Computer Science, or a related field.
SOC Code: 15-1252
SOC Occupation Title: Software Developers
Duration: Regular Hire
Work week: Full-time
Supervision Experience Required: No
Travel Required: No - Employer will allow remote/telecommuting throughout the US.

Experience: 7 years of experience with Data Engineering or a related field. Requires the following skills:
- 7 years of experience developing big data pipelines using Spark, Scala, MapReduce, Pig, Sqoop, and Hive
- Transferring and analyzing data into BigQuery from various sources
- Working with the cloud technologies GCP Dataproc and Cloud Storage
- Hands-on experience with code migration and data migration for extracting, transforming, and loading data using ETL tools (Syncsort, DMExpress) on UNIX and Windows
- Creating BI reports for business users using Looker
- Developing Teradata SQL scripts with procedures, functions, and packages to implement business logic
- Scheduling ETL workflows using Oozie, AutoSys, Automic, crontab, and Apache Airflow
- Data modeling and performance tuning, using the version-control tools GitHub and TortoiseSVN

Job duties: Research, design, and develop computer and network software or specialized utility programs for a statewide supermarket chain:
- Analyze user needs and develop software solutions, applying principles and techniques of computer science, engineering, and mathematical analysis
- Update software or enhance existing software capabilities
- Design, develop, and modify software systems, using scientific analysis and mathematical models to predict and measure the outcomes and consequences of the design
- Develop automated shell scripts for data transfer from other sources into Hadoop
- Develop Spark RDDs and DataFrames that use built-in in-memory processing to improve application performance
- Work with the Hadoop admin to set up edge nodes and install the required software on the cluster
- Design and implement a data pipeline enabling near-real-time systems, such as a microservices architecture, for the business decision-making system
- Analyze data to design scalable algorithms using Spark
- Develop generic applications such as data pulls, and set up notebooks in the cluster

© 2000 - 2024 SitePoint Pty. Ltd.