
AWS Data Engineering & Transformation Specialist

Company: OnX
Job location: Toronto, CA
Salary: Undisclosed
Posted:
Hosted by: Adzuna

Job details

Overview

In today's rapidly evolving environment, organizations need to make data-driven decisions that deliver enterprise value. Our OnX Cloud and Artificial Intelligence practitioners design, develop, and implement large-scale data ecosystems, leveraging cloud-based platforms to integrate structured and unstructured data. We use automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions. By continuously optimizing our cloud infrastructure and providing As-a-Service offerings, we ensure ongoing insights and improvements to enhance operational efficiency. We assist clients in transforming their businesses by developing organizational intelligence programs and strategies, enabling them to stay ahead in their markets.

Our team works with clients to:

- Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data to generate insights using cloud-based platforms.
- Use automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions.
- Enhance operational efficiency by maintaining data ecosystems, sourcing analytics expertise, and providing As-a-Service offerings for continuous insights and improvements.

Responsibilities

As a Data Engineering & Transformation Specialist, you will:

- Design, develop, and implement data pipelines and workflows: collaborate with cross-functional teams to build efficient and scalable data solutions using AWS Glue and AWS DataBrew.
- Develop and execute data transformation processes: write Python code to perform data cleaning, manipulation, and transformation to meet business requirements.
- Design and manage relational database schemas: create and maintain database structures to ensure data integrity and accessibility.
- Contribute to data integration and ETL processes: work with various data sources and systems to extract, transform, and load data into target destinations.
- Optimize data storage and retrieval: implement best practices for data storage and querying to enhance performance.

Requirements

- Proficiency in AWS Glue and AWS DataBrew.
- Strong programming skills in Python for data transformation.
- Solid understanding of relational database schemas and design.
- Experience designing and implementing data pipelines.
- Excellent problem-solving skills and the ability to troubleshoot data-related issues.
- Effective communication skills, written and oral.

Nice to have:

- Experience with Git, Red Hat OpenShift, VMware, Databricks, Snowflake, OpenSearch, Elasticsearch, Oracle, or MS SQL databases.
- Familiarity with IaC tools such as CloudFormation or Ansible.
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).
- AWS, Microsoft Azure, or Google Cloud certifications.

Education

- Bachelor's degree (four years of college) or equivalent in Business, Computer Science, Engineering, or a related field.
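To give a flavor of the Python data-transformation work the role describes, here is a minimal sketch using only the standard library. The record fields and cleaning rules (`customer_id`, `name`, `amount`) are hypothetical, invented for illustration; real pipelines at this level would typically run inside AWS Glue or DataBrew rather than plain scripts.

```python
# Hypothetical data-cleaning step: drop rows missing a key,
# normalize names, and cast amounts to floats.

def clean_records(rows):
    """Return cleaned copies of the input rows, skipping unusable ones."""
    cleaned = []
    for row in rows:
        if not row.get("customer_id"):  # drop rows without a key field
            continue
        cleaned.append({
            "customer_id": row["customer_id"].strip(),
            "name": row.get("name", "").strip().title(),
            "amount": float(row.get("amount") or 0.0),  # missing amounts become 0.0
        })
    return cleaned

raw = [
    {"customer_id": " c001 ", "name": "alice smith", "amount": "19.99"},
    {"customer_id": "", "name": "no id", "amount": "5"},  # dropped: no key
    {"customer_id": "c002", "name": "BOB JONES", "amount": None},
]
print(clean_records(raw))
```

The same shape of logic (filter, normalize, cast) carries over directly to a Glue DynamicFrame or a DataBrew recipe step.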
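The schema-design and ETL-load duties can be sketched the same way. The following example uses Python's built-in `sqlite3` as a stand-in for a production database; the `customers` table and its columns are invented for the example, not taken from the posting.

```python
import sqlite3

# Hypothetical target schema: a single customers table with a primary key,
# standing in for the relational structures the role would design and maintain.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        customer_id TEXT PRIMARY KEY,
        name        TEXT NOT NULL,
        amount      REAL NOT NULL
    )
""")

# Load step of a toy ETL: bulk-insert rows that have already been transformed.
rows = [("c001", "Alice Smith", 19.99), ("c002", "Bob Jones", 0.0)]
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)
conn.commit()

# Query step: verify the load with an aggregate.
total = conn.execute("SELECT SUM(amount) FROM customers").fetchone()[0]
print(total)
```

Declaring the primary key and `NOT NULL` constraints up front is one way the "data integrity" requirement shows up in practice: bad rows fail at load time instead of corrupting downstream reports.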
© 2000 - 2024 SitePoint Pty. Ltd.