
Senior Data Pipeline Engineer (AWS)

Vancouver, Canada

About Us

Celestial Systems has been synonymous with enterprise software development since 2001. By designing reliable software, delivering quality services, and developing valuable partnerships, we are on a journey of continuous innovation. Our enterprise-level engineering experience enables us to offer critical development services that help organizations succeed in the digital age. At Celestial, we adhere to the complete software development lifecycle. Our experts are curious about the latest technologies and extremely passionate about software development. Whether it's frontend development, backend, DevOps, QA and testing, or cloud hosting, our tailor-made technology solutions provide answers to your problems. We are an ISO-certified, enterprise-grade software company.

Roles and Responsibilities

Celestial is rapidly expanding our Enterprise Data Services team to offer our customers a single point of contact for all requirements, including Data Engineering, Data Analytics, Data Protection, and Data Ethics/Governance.

We are looking for Senior Data Pipeline Engineers (multiple positions) for ongoing projects within the Data Services and ML/AI Group. If you are motivated and willing to work in startup mode on exciting next-generation projects, read on.


Requirements

• 4+ years of experience in big data engineering, with a focus on Spark and SQL
• 4+ years of experience developing large-scale ETL pipelines using AWS services including, but not limited to, IAM, Glue, Lambda, S3, Athena, SNS, SQS, DynamoDB, RDS, EMR, ECS, Route 53, and Redshift
• Highly skilled in Python, PostgreSQL/RDS/Redshift, writing complex SQL queries, and analyzing data correlations
• Ability to design, construct, and manage data lake environments, including data ingestion, staging, data quality monitoring, business modeling, and data schema normalization
• Ability to develop, construct, secure, test, and maintain architectures (databases and large-scale processing systems)
• Ability to design, develop, and implement technical solutions in a SQL environment (databases, SQL, data marts, integrations)
• Experience with tools such as Jenkins, Git, and/or AWS CodePipeline/CodeBuild/CodeDeploy
• Experience building data-driven unit test suites for data platforms and modeling highly dimensional datasets
• Strong functional understanding of SageMaker, Jupyter, and other data science tools on AWS

Preferred Skills

• Experience designing and building big data architectures using tools such as Hadoop, Hive, or Kafka

Additional Notes

If you are applying for a part-time/contract role, please email your resume with your contact details and your "Hourly Consulting Rate Expectations".

