Job Description
As a full-spectrum Azure integrator, we help hundreds of companies realize the value, efficiency, and productivity of the cloud. We take customers on their journey to enable, operate, and innovate using cloud technologies, from migration strategy to operational excellence and immersive transformation.
If you like a challenge, you’ll love it here, because we’re solving complex business problems every day, building and promoting great technology solutions that impact our customers’ success. The best part is, we’re committed to you and your growth, both professionally and personally.
Job Overview:
As a Machine Learning Engineer, you will deliver ML models and pipelines that solve real-world business problems, applying ML Ops best practices to ensure successful deployment of ML models and application code. You will leverage cloud-based architectures and technologies to deliver optimized ML models at scale. You will use programming languages such as Python and work with popular ML frameworks such as scikit-learn and TensorFlow.
If you get a thrill working with cutting-edge technology and love to help solve customers’ problems, we’d love to hear from you. It’s time to rethink the possible. Are you ready?
Qualifications:
- Proven track record in delivering Generative AI/LLM solutions in production environments.
- Minimum 4 years of programming experience with Python, Scala, or Java.
- At least 2 years of hands-on experience with Generative AI or conversational AI projects.
- Experience deploying traditional ML and deep learning models at scale.
- Strong experience with PyTorch, TensorFlow, scikit-learn, and relevant AI libraries.
- Familiarity with vector search, embeddings, and semantic search architectures.
- Understanding of ML Ops best practices for production deployments.
Responsibilities:
- Deliver production-ready AI/ML solutions from concept to deployment.
- Lead the design and implementation of Generative AI applications.
- Apply deep learning and traditional ML methods as appropriate.
- Build scalable, cloud-native AI services and APIs.
- Implement ML Ops pipelines for automation, testing, and monitoring.
- Collaborate with product, engineering, and infrastructure teams.
- Mentor teams on best practices for both traditional ML and Generative AI.
About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world’s leading technologies — across applications, data and security — to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future. Named a best place to work, year after year, by Fortune, Forbes and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers and deliver the future.
More on Rackspace Technology
Though we’re all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.
