Senior Data Engineer

US Remote

SalesIntel is the top revenue intelligence platform on the market. Our unique approach to data collection, enhancement, verification, and growth sets us apart from the competition, solidifying our position as the best B2B data partner.

The Senior Data Engineer will be responsible for building and extending our data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys working with big data and building systems from the ground up.

You will collaborate with our software engineers, database architects, data analysts and data scientists to ensure our data delivery architecture is consistent throughout the platform. You must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

What You’ll Be Doing:

  • Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Work with machine learning, data, and analytics experts to drive innovation, accuracy and greater functionality in our data system.

Qualifications:

  • Bachelor’s degree in Engineering, Computer Science, or relevant field.
  • 5+ years of experience in a Data Engineer role.
  • 3+ years of experience with Apache Spark and a solid understanding of its fundamentals.
  • Deep understanding of Big Data concepts and distributed systems.
  • Strong coding skills in Scala, Python, Java, and/or other languages, with the ability to switch between them with ease.
  • Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.
  • Cloud experience with Databricks.
  • Experience working with data stored in a variety of formats, including Delta tables, Parquet, CSV, and JSON.
  • Comfortable working in a Linux shell environment and writing scripts as needed.
  • Comfortable working in an Agile environment.
  • Machine Learning knowledge is a plus.
  • Must be capable of working independently and delivering stable, efficient and reliable software.
  • Excellent written and verbal communication skills in English.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed above are representative of knowledge, skill and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Location: US Remote