Senior Software Engineer
Summary
Title: Senior Software Engineer
ID: DAT-25-B3
Team: Data & Analytics
Description

HawkEye 360 is currently seeking a Senior Software Engineer to design, build, and deploy world-class algorithms for scalable cloud processing.

This role is part of the Data Engineering team in the Data & Analytics group. Data Engineering manages the transition to production for advanced machine learning and geolocation algorithms developed by both the Processing Algorithms and Data Science teams. The team also develops and manages scalable data processing platforms for exploratory data analysis and real-time analytics, supporting our analysts in their geospatial data exploration needs. As a Senior Software Engineer, you will work closely with HawkEye 360’s scientists to optimize algorithms for low-latency, highly scalable production environments that directly support our customers.

We work in small teams to rapidly prototype and productize new ideas through hands-on, in-the-weeds engineering. You'll be responsible for designing and implementing distributed backend software systems. We support a broad range of software applications to accomplish our mission, favoring Python and C++ for batch processing within cloud deployments (Kubernetes + Docker).

Location: This position is hybrid with work from home flexibility. 

As the Senior Software Engineer, your main responsibilities will be:
  • Write efficient, clean, and testable Python code for data engineering workflows
  • Design, build, and maintain scalable ETL pipelines to support data ingestion and transformation at scale
  • Develop and optimize parallel processing frameworks to improve data throughput and performance
  • Implement and maintain pipeline orchestration using tools such as Airflow or similar
  • Design and manage cloud-native data solutions using AWS, including S3, RDS, and other AWS services
  • Perform database maintenance and optimization to ensure reliability, integrity, and performance across systems
  • Containerize applications using Docker, and deploy and manage them using Kubernetes
  • Work closely with the Processing Algorithms and Data Science teams to integrate, optimize, and deploy state-of-the-art algorithms into production-ready applications
  • Apply debugging and problem-solving skills to support and troubleshoot data-intensive applications in production, with on-call responsibility as part of the role
  • Participate in collaborative software development practices, such as reviewing merge requests and providing design feedback
  • Work in a fast-paced agile environment, communicating effectively and tracking development activities using agile tools such as JIRA and Confluence
  • Work independently within a geographically distributed team
Your skills and qualifications:
Essential education and experience:
  • B.S. degree in Computer Science, Electrical/Computer Engineering, or comparable experience
  • 3+ years of professional software development experience using Python, including experience with standard Python tools and frameworks (e.g., NumPy, pandas, SciPy, scikit-learn)
  • Proven experience building and maintaining Extract, Transform, Load (ETL) pipelines and data workflows
  • Hands-on experience working within an AWS environment, including knowledge of AWS services and solutions (e.g., Amazon S3, Amazon EC2)
  • Experience with modern data orchestration tools (e.g., Apache Airflow, Argo Workflows)
  • Deep understanding of parallel processing and performance optimization techniques
  • Experience developing and supporting DevOps best practices (e.g., GitLab-based CI/CD)
  • Experience with Docker containerization and Kubernetes for deployment and scaling
  • Experience with monitoring and logging solutions, particularly the Grafana and OpenTelemetry (OTel) stack, for monitoring and configuring production alerts
Desirable:
  • Familiarity with Infrastructure as Code (IaC) tools (e.g., Terraform)
  • Knowledge of streaming data tools (e.g., Apache Kafka, Spark)
  • Experience using Ray, Spark, Dask, or other frameworks for parallelizing and distributing compute-heavy tasks
Base Salary Range: $130,000 - $180,000 annually

HawkEye 360 offers a compensation package that includes a competitive base salary plus an annual performance bonus and benefits. We consider many factors when determining salary offers, such as a candidate's work experience, education, training, and skills, as well as market and business considerations. We are also open to considering candidates with experience and qualifications at a different level than required in a job posting, which may affect the compensation package offered.

Company Overview:
HawkEye 360 is delivering a revolutionary source of global knowledge based on radio frequency (RF) geospatial analytics to those working to make the world a safer place. The company operates a commercial satellite constellation that detects, geolocates, and identifies a broad range of signals and behaviors. We employ cutting-edge AI techniques to equip our global customers with the high-impact insights needed to make decisions with confidence. HawkEye 360 is headquartered in Herndon, Virginia.

HawkEye 360 is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity Employer, making decisions without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, marital status, national origin, age, veteran status, disability, or any other protected class.

To all recruitment agencies: HawkEye 360 does not accept unsolicited agency resumes. Please do not forward resumes to our jobs alias, HawkEye 360 employees or any other organization location. HawkEye 360 is not responsible for any fees related to unsolicited resumes.