Company

Singtel

Post Date

2023-Mar-20

Close Date

2023-Jul-31

Job Type

Full-time

Salary

SGD 8,000-16,000 per month

Location

Singapore

Company Description

Singtel is one of the largest listed Singapore companies on the Singapore Exchange by market capitalisation. The Group has a vast network of offices throughout Asia Pacific, Europe and the USA, and employs more than 23,000 staff worldwide.

Job Description

    • Manage multiple data engineering delivery / project teams comprising internal data engineers and IT service providers to ensure that projects or enhancements are delivered within the agreed scope, budget, and schedule
    • Work with business stakeholders to identify and analyze big data needs
    • Design, develop and automate large-scale, high-performance distributed data processing systems (batch and/or real-time streaming) that meet both functional and non-functional requirements (see the sketch after this list)
    • Design data models for optimal storage across data layers, workloads and presentation/retrieval, to meet critical business requirements and platform operational efficiency
    • Deliver high-level & detailed designs to ensure that the solution meets business requirements and aligns with the data architecture principles and technology stacks
    • Practice high-quality data engineering/software engineering in building data platform infrastructure and data pipelines at scale to deliver Big Data Analytics and Data Science initiatives
    • Partner with business domain experts, data scientists, and solution designers to identify relevant data assets, domain data models and data solutions. Collaborate with product data engineers to coordinate backlog feature development of data pipeline patterns and capabilities
    • Own and lead data engineering projects and data pipeline delivery with reliable, efficient, testable, & maintainable artifacts, ingesting & processing data from a large number and variety of data sources
    • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing data products for greater scalability
    • Drive cloud data engineering practices and the Cloud Lakehouse re-platforming effort to build & scale the Modern Data Platform & Infrastructure
    • Build, optimize and contribute to shared Data Engineering Frameworks and tooling, Data Products & standards to improve the productivity and quality of output for Data Engineers
    • Design and build scalable Data APIs to host operational data and Data Lake assets in a Data Mesh / Data Fabric architecture
    • Drive Modern Data Platform operations using DataOps: ensure data quality and monitor the data systems; also support the Data Science MLOps platform
    • Drive and deliver industry-standard DevOps (CI/CD) best practices, automating development and release management
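
To make the pipeline work above concrete, here is a minimal sketch of the kind of batch/streaming ingestion this role describes: a Spark Structured Streaming job in Scala reading JSON events from Kafka and landing them in a Delta Lake table. This is an illustration only, not Singtel's actual codebase; it assumes Spark with the Kafka connector and Delta Lake on the classpath, and the broker address, topic name, schema, and paths are all hypothetical.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, from_json}
    import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

    object EventIngestJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-to-delta-ingest")
          .getOrCreate()

        // Hypothetical event schema; a real pipeline would pull this from a schema registry.
        val eventSchema = new StructType()
          .add("event_id", StringType)
          .add("event_ts", TimestampType)
          .add("payload", StringType)

        // Read a stream of JSON events from a Kafka topic (broker and topic are assumptions).
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "customer-events")
          .load()
          .select(from_json(col("value").cast("string"), eventSchema).as("e"))
          .select("e.*")

        // Land the stream in a bronze Delta table; the checkpoint makes the job
        // restartable and the Delta sink keeps writes exactly-once.
        events.writeStream
          .format("delta")
          .option("checkpointLocation", "/lake/checkpoints/customer_events")
          .outputMode("append")
          .start("/lake/bronze/customer_events")
          .awaitTermination()
      }
    }

The checkpoint location plus the transactional Delta sink are what make a pipeline like this restartable and deterministic, which is the DataOps property the requirements below call out.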

Requirements

    • Bachelor’s degree in IT, Computer Science, Software Engineering, Business Analytics or equivalent
    • Minimum of 10 years of experience in Data Engineering, Data Lake Infrastructure, Data Warehousing, Data Analytics tools or related fields, designing and developing end-to-end scalable data pipelines and data products
    • Experience in building and operating large and robust distributed data lakes (multiple PBs) and deploying high-performance, reliable systems with monitoring and logging practices
    • Experience in designing and building data products and pipelines using some of the most scalable and resilient open-source big data technologies: Spark, Delta Lake, Kafka, Flink, Airflow, Presto and related distributed data processing frameworks
    • Experience with data modelling for data warehousing
    • Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations (see the sketch after this list)
    • Ability to build and deploy high-performance modern data engineering & automation frameworks using programming languages such as Scala/Python, and to automate big data workflows such as ingestion, aggregation and ETL processing
    • Good understanding of data modeling and high-level design, and data engineering / software engineering best practices - including error handling and logging, system monitoring, fault-tolerant pipelines, data quality, and ensuring a deterministic pipeline with DataOps
    • Experience working in Telco Data Warehouse and/or Data Lake environments
    • Excellent experience using ANSI SQL with relational databases like Postgres, MySQL, Oracle, and knowledge of advanced SQL on distributed analytics engines – Databricks SQL, Snowflake, etc.
    • Proficiency in programming languages like Scala, Python, Java, Go, Rust or scripting languages like Bash
    • Experience with cloud systems like AWS, Azure, or Google Cloud Platform
        ◦ Cloud data engineering experience in at least one cloud (Azure, AWS, GCP)
        ◦ Experience with Databricks (Cloud Data Lakehouse)
    • Experience with the Hadoop stack: HDFS, YARN, Hive, HBase, Cloudera, Hortonworks
    • Experience with NoSQL & graph databases (key-value/document/graph) and similar – Cassandra, HBase, TigerGraph, cloud-native NoSQL DBs
    • Experience with event streaming platforms and message queues like Kafka, Pulsar, RabbitMQ, Redis
        ◦ Event processing systems – Kafka Streams, KSQL, Spark Streaming, Apache Flink, Apache Beam, etc.
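
As an illustration of the data quality tooling named above, here is a minimal sketch of a Deequ verification gate in Scala, run against the bronze table from the earlier ingestion sketch. It assumes Spark, Delta Lake, and the com.amazon.deequ library on the classpath; the table path and column names are hypothetical.

    import com.amazon.deequ.VerificationSuite
    import com.amazon.deequ.checks.{Check, CheckLevel, CheckStatus}
    import org.apache.spark.sql.SparkSession

    object CustomerEventQualityGate {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("deequ-quality-gate")
          .getOrCreate()

        // Hypothetical table produced by the ingestion sketch above.
        val df = spark.read.format("delta").load("/lake/bronze/customer_events")

        // Declare constraints up front; Deequ evaluates them in a single Spark pass.
        val result = VerificationSuite()
          .onData(df)
          .addCheck(
            Check(CheckLevel.Error, "basic integrity")
              .isComplete("event_id")   // no missing keys
              .isUnique("event_id")     // no duplicate events
              .isComplete("event_ts"))  // every event carries a timestamp
          .run()

        // Fail the pipeline run if any Error-level constraint is violated.
        if (result.status != CheckStatus.Success)
          sys.error("Data quality checks failed for customer_events")
      }
    }

Great Expectations and Apache Griffin cover similar ground; the common pattern is declaring constraints up front and failing the run when they are violated, rather than discovering bad data downstream.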
