Custom Software Engineering Lead
Job Description:
- Technical Leadership & Architecture: Lead the design and development of robust, scalable, and secure backend systems and event-driven APIs using FastAPI. Define the technical direction and system architecture, balancing short-term deliverables with long-term scalability and maintainability.
- Hands-on Development & Code Quality: Serve as a primary contributor to the codebase, leading by example through hands-on development. Conduct thorough code and technical design reviews to ensure high standards for quality, performance, and security are met.
- Deploy AI Models: Collaborate with data scientists and ML engineers to integrate, containerize, and deploy AI/ML models (e.g., NLP, recommendation engines, generative AI) into production environments.
- Containerization & Orchestration: Engineer containerized applications for deployment on cloud platforms using Kubernetes.
- Asynchronous Processing & Distributed Systems: Design for scale using asynchronous processing and task queues (such as Celery, RabbitMQ, Kafka) to handle long-running or unreliable tasks independently from the main API.
- Big Data Search & Storage: Manage big data processing workflows and store the resulting data in object storage, indexing tools, and distributed query engines (PySpark, OpenSearch, Hive Metastore, Trino).
- Team Management & Mentorship: Mentor and guide a team of engineers, fostering a culture of continuous learning, collaboration, and technical excellence.
- Operational Excellence: Implement best practices for testing, automation, monitoring, and deployment, ensuring systems are observable and resilient in production environments.
Here's what you'll need:
- 7+ years of professional experience in one or more of the following areas:
- Proficiency in Python, with knowledge of the libraries and frameworks that make up the Python ecosystem.
- Familiarity with UI/UX frameworks used with Python applications.
- Experience designing and implementing RESTful APIs using tools like FastAPI.
- Experience with asynchronous task processing tools like Celery and RabbitMQ.
- Strong working knowledge of Docker and Kubernetes for building and deploying scalable, cloud-native applications.
- Familiarity with distributed data orchestration and processing pipelines, using tools like Spark and Airflow.
- Experience configuring, deploying, and tuning distributed search technologies such as Trino, OpenSearch, and Elasticsearch.
- Experience with relational databases like PostgreSQL.
- Experience with CI/CD pipelines, cloud platforms (AWS, GCP, Azure), and deploying applications within the Linux ecosystem.
Security clearance:
- Active TS/SCI with Poly security clearance is required.
As required by local law, Accenture Federal Services provides reasonable ranges of compensation for hired roles based on labor costs in the states of California, Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, New Jersey, New York, Washington, Vermont, the District of Columbia, and the city of Cleveland. The base pay range for this position in these locations is shown below. Compensation for roles at Accenture Federal Services varies depending on a wide array of factors, including but not limited to office location, role, skill set, and level of experience. Accenture Federal Services offers a wide variety of benefits. We accept applications on an ongoing basis, and there is no fixed deadline to apply.
The pay range for these locations is:
$130,200 - $265,300 USD