WHO WE ARE:
As engineers on Perx’s Data team, our mission is to build blazingly fast, robustly reliable data-processing infrastructure at googol scale – able to outpace our ever-increasing volumes of data in real time – and then leverage it to help answer Perx’s most critical and fundamental business, engineering & data problems.
WHAT YOU’LL DO:
As a Data Engineer on the Analytics team, you will be the “source of truth” for Perx’s most fundamental data – such as end-customer engagement and client usage data – along with core metrics such as daily active users (DAU) and monthly active users (MAU).
Alongside designing & implementing the plumbing & infrastructure that will power the Analytics frameworks, you will also help drive the company’s adoption of bleeding-edge data technologies and features, working directly with our infrastructure team to integrate them, at scale, into the services you design.
In doing so, you will help empower the Engineering department, tens of co-workers, thousands of marketing analysts and millions of end customers to dream of new insights and new possibilities.
WHO YOU ARE:
You are a go-getter and a dreamer who wants to join a community of extremely talented, forward-thinking & diverse engineers from across the industry & region. You find happiness in building & scaling resilient, robust, well-performing, end-to-end-tested distributed systems that can power the most business-critical applications. You want to learn, work with, and leverage cutting-edge open-source technologies. The ideal candidate has experience with and/or a history of contributions to Python, Hadoop, Spark, Redshift, Cassandra, PostgreSQL, Ruby (on Rails) or similar technologies. You have experience in distributed systems, database internals, or performance analysis.
SKILLS AND EXPERIENCE:
MS in computer science or a related field, OR BS in computer science and 2.5 years of experience in software engineering.
Backend development experience with a solid foundation in data pipelines, distributed systems, and large-scale data processing.
Experience with databases such as AWS Redshift, PostgreSQL, and MySQL.
Experience with ETL and query languages such as SQL.
Proficiency with Python, Scala, or Java. Experience with Ruby is a plus.
Experience with Linux/Unix systems and AWS or other cloud environments.
Working knowledge of MapReduce, Hadoop, and HDFS. Experience with Spark is a big plus! 😀