Build a new Digibank with Grab & Singtel as the Platform Engineer in our Data Services team!

Senior Data Engineer for Grab-Singtel Digibank
Full-time

on 2021-03-09




What we do

We are living in dynamic times. Technology is reshaping how we live, and we want to use it to redefine how financial services are offered. Grab is one of the leading technology companies in Southeast Asia, offering everyday services to the masses. Singtel is Asia’s leading communications group, connecting millions of consumers and enterprises to essential digital services. This is why we are coming together to unlock big dreams, and financial inclusion for people in our region is just one of them. We want to build a digital bank on the right foundation – using data, technology and trust to solve problems and serve customers. If you have what it takes, help us build this new Digibank.

Why we do

Mission

1. Trust that you will have a safe ride. Travel with confidence knowing that Grab’s top priority is your safety. From driver safety training and vehicle safety checks, to personal accident insurance coverage for all our drivers and passengers, and government partnerships to promote safety, you know we have your back.
2. Take the transport option that fits your need. We put freedom in your hands. The most transport options, at every price point, with comfort, speed and affordability – you can have it all at the touch of a button.
3. Let us take care of you. We believe that a sustainable business is one that improves the lives of the people it touches – passengers, drivers, employees, governments and society at large.

How we do

Life at Grab is all about positive disruption – and yes, crazy days are part of that package too. Still, that’s never stopped a Grabber from having fun. In fact, it’s what keeps us motivated to shake things up further. Life as a Grabber means succeeding in a culture of passion and innovation. We are hungry to make a difference, and recognise that good decisions often come from the heart. We are humbled by our communities, and are proud to serve them with honour. We come from all over the world, united by a common goal to make life better every day for our users. If you share our mission of Driving Southeast Asia Forward, apply to be part of the team today!

As a new team member

As the Platform Engineer in the Data Services team, you will work on all aspects of data, from platform and infrastructure build-out to pipeline engineering and writing tooling/services that augment and front the core platform. You will be responsible for building and maintaining a state-of-the-art data lifecycle management platform, covering acquisition, storage, processing and consumption channels. The team works closely with data scientists, product managers, legal, compliance and business stakeholders across Southeast Asia to understand their needs and tailor our offerings to them. As a member of Data Services, you will be an early adopter of and contributor to various open-source big data technologies, and you are encouraged to think out of the box and have fun exploring the latest patterns and designs in the fields of software and data engineering.

The day-to-day activities:
- Build and manage the data asset using some of the most scalable and resilient open-source big data technologies, such as Airflow, Spark, Apache Atlas, Kafka, YARN, HDFS, Elasticsearch, Presto/Dremio, HDP, the visualization layer and more
- Design and deliver the next-gen data lifecycle management suite of tools/frameworks, including ingestion and consumption on top of the data lake, to support real-time, API-based and serverless use cases, along with batch (mini/micro) as relevant
- Build and expose a metadata catalog for the data lake for easy exploration, profiling and lineage requirements
- Enable data science teams to test and productionize various ML models, including propensity, risk and fraud models, to better understand, serve and protect our customers
- Lead technical discussions across the organization through collaboration, including running RFC and architecture review sessions, tech talks on new technologies, and retrospectives
- Apply core software engineering and design concepts to create operational as well as strategic technical roadmaps for business problems that are vague or not fully understood
- Obsess over security, ensuring all components – from the platform and frameworks to the applications – are fully secure and compliant with the group’s infosec policies

The must-haves:
- At least 2 years of relevant experience developing scalable, secure, fault-tolerant, resilient and mission-critical big data platforms
- Able to maintain and monitor the ecosystem with 99.9999% availability
- Candidates will be aligned appropriately within the organization depending on experience and depth of knowledge
- Sound understanding of all big data components and administration fundamentals; hands-on experience building a complete data platform using various open-source technologies
- Good fundamental hands-on knowledge of Linux and of building a big data stack on top of AWS/Azure using Kubernetes
- Strong understanding of big data and related technologies such as HDFS, Spark, Presto, Airflow, Apache Atlas, etc.
- Good knowledge of Complex Event Processing (CEP) systems such as Spark Streaming, Kafka, Apache Flink, Beam, etc.
- Experience with NoSQL databases – key-value, document, graph and similar
- Proven ability to contribute to the open-source community and stay up to date with the latest trends in the big data space
- Able to drive DevOps best practices such as CI/CD, containerization, blue-green deployments, 12-factor apps, secrets management, etc. in the data ecosystem
- Able to develop an agile platform that can autoscale up and down, both vertically and horizontally
- Able to create a monitoring ecosystem for all the components in use in the data ecosystem
- Proficiency in at least one of Java, Scala, Python or Go, along with a fair understanding of runtime complexities
- Knowledge to build data metadata, lineage and discoverability from scratch, staying educated on the latest developments in these areas; a good understanding of machine learning models and how to support them efficiently is a plus

If you share our vision of driving Southeast Asia forward, click "I'm Interested" to submit your completed profile for this opportunity!


    What happens after you apply?

    1. Apply: click "Want to Visit"
    2. Wait for a reply
    3. Set a date
    4. Meet up

    Company info

    Founded on 06/2012

    6,000 members

    1 Wallich Street, #35-01 Guoco Tower, Singapore 078881