Altimate AI
Altimate AI, founded in 2022 in San Francisco, is revolutionizing enterprise data operations through the power of AI. Our mission is to alleviate the burden on overworked and understaffed enterprise data teams by providing innovative AI-driven solutions that automate and accelerate a wide range of data tasks. Our flagship product, DataPilot, offers advanced data automation capabilities, while our new DataMates technology brings the concept of agentic AI to data operations, acting as virtual teammates for data professionals. Our solutions seamlessly fit into existing tools like VSCode, Git, and Slack, performing tasks ranging from data documentation to performance optimization.
By leveraging a proprietary framework that combines multiple language models and a custom-built knowledge graph, we enable contextually aware AI agents that integrate seamlessly into existing workflows. Our solutions, including ambient AI for continuous monitoring and optimization, are designed to meet the growing demands of data operations, business intelligence, and analytics in an era of ever-increasing data volumes. Trusted by thousands of users worldwide, and backed by prominent investors, we're positioned at the forefront of the AI-powered data engineering revolution. You can read more about us in a recently published VentureBeat article.
As a team, we are Silicon Valley veterans who previously created category-defining data and AI products loved by thousands of companies worldwide. We have closely experienced the journey from small startup to IPO. We have started on a similar journey again and are backed by prominent advisors and VC firms with multi-billion dollar portfolios.
We're in search of a Senior Generative AI Engineer who brings deep expertise in building and deploying large language models and AI systems at scale. This role is for those with extensive experience in developing production-grade AI solutions, particularly in the realm of generative AI and language models. The ideal candidate will play a crucial role in advancing our AI capabilities, designing intelligent agent architectures, and implementing sophisticated RAG pipelines that power our products used by thousands of data teams worldwide.
5+ years of hands-on ML/AI experience, including at least 1 year on GenAI/LLM projects
Track record of successfully deploying ML/AI systems in production environments
Experience working with enterprise customers and understanding their unique requirements
Strong programming skills in Python, including expertise in the ML stack (Hugging Face, NumPy, Pandas, scikit-learn)
Production experience with deep learning frameworks (PyTorch/TensorFlow) and transformer architectures
API development expertise (FastAPI) and production deployment experience
Proven experience with prompt engineering best practices and LLM evaluation metrics
Infrastructure & Deployment
Experience with vector databases (e.g., Pinecone, Weaviate) and RAG pipelines
Experience with observability and monitoring of LLM applications
Proficiency in model compression, distillation, and deployment at scale
Model Development & LLMs
Expertise in semantic search and embedding models
Experience in LLM frameworks (LangChain, LlamaIndex) and agentic AI systems
Advanced model fine-tuning techniques, including LoRA, Prompt Tuning, and Adapter Training
Advanced AI Agent Architecture: Design and implement sophisticated multi-agent systems that can understand context, make decisions, and enable seamless collaboration between AI and human data professionals.
Large-Scale RAG Systems: Build and optimize retrieval-augmented generation pipelines that can efficiently process and utilize enterprise-scale knowledge bases.
Model Optimization & Deployment: Implement advanced techniques (LoRA, Prompt Tuning) to adapt off-the-shelf models for specific enterprise data tasks, ensuring faster, more accurate performance.
Scalable AI Infrastructure: Develop and optimize cloud-native architectures (AWS, Kubernetes) for large-scale training, inference, and multi-agent orchestration.
Enterprise Security & Compliance: Implement robust security measures for LLM applications, including data anonymization, prompt injection prevention, and compliance with enterprise security standards.
Innovation Leadership: Shape the future of enterprise AI by developing novel approaches to autonomous data operations and AI-driven automation.
Product Development: Directly influence our flagship products, DataPilot and DataMates, working on cutting-edge features that define the next generation of AI-powered data tools.
Team Growth: Help build and mentor a growing team of AI engineers, shaping our technical culture and practices.
Open Source Contribution: Lead and contribute to our open-source initiatives, establishing yourself as a thought leader in the AI community.
Technical Growth: Opportunity to work with the latest advancements in AI technology and shape the architecture of next-generation AI systems.
Competitive salary and equity
Access to cutting-edge AI infrastructure and resources
Team offsites (US) and opportunities to attend and present at leading industry conferences around the world
Learning budget for AI courses, books, and computing resources
Dynamic and intellectually stimulating work environment with a team of talented engineers
Opportunities to shape the direction of the company and leave a lasting impact