Data Engineer
Summary
Title: Data Engineer
ID: ENG-22-16ab
Team: Engineering
Description

The mission of the Processing Team at HawkEye 360 (HE360) is to build cross-domain systems for RF-based data collection and geolocation. The Processing team includes experts across FPGA development, embedded software, software-defined radio, and cloud development, along with deep knowledge of signal-of-interest (SOI) digital signal processing, RF communications systems, RF measurement systems, and geolocation. HE360 is currently seeking a Data Engineer who can help the Processing team build and deploy data pipelines for RF processing and geolocation.

As a data engineer on the Processing team, you will be responsible for implementing distributed, reliable backend software systems to consume and leverage RF data at scale. You should have prior experience running data pipelines in production and a passion for robustness, observability, and monitoring. A successful data engineer will work closely with RF & Geolocation domain specialists, data scientists, and analysts to deploy pipelines while optimizing for both performance and low latency. We support a broad range of software to accomplish our mission, favoring Python and C++ for backend software; Kubernetes clusters on AWS; data pipelines orchestrated with Airflow; data storage with Amazon S3 and PostgreSQL as appropriate; and Elasticsearch and Kibana for log analytics and monitoring dashboards.
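For a purely illustrative picture of the kind of work this stack implies, the sketch below shows a minimal Airflow DAG of the sort described above: a batch pipeline that discovers new RF captures in S3, fans out processing with dynamic task mapping, and publishes results downstream. The DAG, task, and bucket names are hypothetical and are not HE360 code.

```python
# Illustrative sketch only (assumes Airflow 2.4+); all names and paths are hypothetical.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@hourly", start_date=datetime(2024, 1, 1), catchup=False)
def rf_processing_pipeline():
    @task
    def fetch_collection_manifest() -> list[str]:
        # A real pipeline would list newly landed RF capture objects in S3.
        return ["s3://example-bucket/captures/0001.bin"]

    @task
    def process_capture(uri: str) -> str:
        # Placeholder for signal processing / geolocation; a real task would
        # write its results back to S3 and return the output location.
        return uri.replace("captures", "geolocations").replace(".bin", ".json")

    @task
    def publish_results(result_uris: list[str]) -> None:
        # Placeholder for indexing results into PostgreSQL or notifying consumers.
        print(f"published {len(result_uris)} results")

    uris = fetch_collection_manifest()
    # Dynamic task mapping: one processing task instance per capture.
    results = process_capture.expand(uri=uris)
    publish_results(results)


rf_processing_pipeline()
```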

Location: This position is hybrid with work-from-home flexibility.

As the Data Engineer, your main responsibilities will be:
  • Contribute to the design, implementation, and testing of HE360’s data platform and data pipelines, optimizing for scalable, low-latency deployment within a batch-processing cloud environment
  • Build, document, and support software systems & tools (data pipelines, utility libraries, core features, etc.) enabling high-quality research and production deployments throughout the team
  • Participate in collaborative & fast-paced software development practices, including performing merge request reviews and providing design feedback
  • Guide and mentor other individual contributors; work closely with RF & Geolocation domain specialists to achieve the team mission
Your skills and qualifications:
Education and experience:
  • B.S. degree in Computer Science or a comparable field, or equivalent experience
  • 3+ years of professional experience
  • 1+ year of experience building data pipelines and other cloud-based infrastructure: workflow management (e.g. Airflow, Argo Workflows, AWS Step Functions), object storage, relational databases (specifically PostgreSQL, PostGIS, and experience writing/testing SQL), REST/GraphQL APIs, message passing (Kafka, SNS), etc.
  • Experience with data science and/or software development using Python, especially with industry-standard Python libraries: pandas, SciPy, scikit, Dask/Ray, Flask, FastAPI, etc.
  • Experience building software and tools that facilitate effective research & development, with a passion for writing clean code, scalable architectures, test-driven development, and robust logging
Essential:
  • Familiarity with CI/CD best practices: automated testing, using a dev/prod workflow, publishing to Artifactory or another package repository, deploying containerized software, etc.
  • Track record of building and supporting mission-critical backend applications in production
Desirable:
  • Experience administering modern cloud applications and infrastructure running in Kubernetes on AWS or another cloud provider
  • Working knowledge of frontend development (React/Angular, JavaScript, WebAssembly, etc.), especially prior examples of building proof-of-concept applications to consume & interact with data products
  • Familiarity with the ELK stack (Elasticsearch, Logstash, Kibana) for aggregating logs, creating queries/dashboards, and monitoring production deployments in real time
  • Familiarity with software acceleration, including multi-core parallelism, cluster-based scaling (e.g. Dask or Spark), and/or GPUs, for bespoke applications
  • Familiarity with RF signal processing or geolocation algorithms and applications, particularly in a batch-processed cloud environment
Salary Range: $90,000 - $140,000 annually

HawkEye 360 offers a compensation package that includes a competitive base salary plus an annual performance bonus and benefits. We consider many factors when determining salary offers, such as a candidate's work experience, education, training & skills, as well as market and business considerations. We are also open to considering candidates whose experience and qualifications are at a different level than required in a job posting, which may affect the compensation package offered.

Company Overview:
HawkEye 360 is delivering a revolutionary source of global knowledge based on radio frequency (RF) geospatial analytics to those working to make the world a safer place. The company operates a commercial satellite constellation that detects, geolocates, and identifies a broad range of signals & behaviors. We employ cutting-edge AI techniques to equip our global customers with the high-impact insights needed to make decisions with confidence. HawkEye 360 is headquartered in Herndon, Virginia.

HawkEye 360 is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity Employer, making decisions without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, marital status, national origin, age, veteran status, disability, or any other protected class.

To all recruitment agencies: HawkEye 360 does not accept unsolicited agency resumes. Please do not forward resumes to our jobs alias, to HawkEye 360 employees, or to any other company location. HawkEye 360 is not responsible for any fees related to unsolicited resumes.
This opening is closed and is no longer accepting applications