Job Title: GenAI Engineer
Location: Hyderabad, Telangana, India
Department: Emerging Technologies / AI & Data Engineering
Reporting To: Head of AI / Director of Innovation


Role Purpose

The GenAI Engineer will design, build, deploy, and maintain generative AI systems and large language model (LLM)-based solutions that support Greenko’s digital initiatives. This includes embedding AI/ML capabilities into energy management platforms, optimizing workflows, enabling advanced analytics, and ensuring the reliable, scalable, and secure operation of GenAI models.


Key Responsibilities

  • Develop and deploy GenAI applications using LLMs (open-source or proprietary) for use cases such as predictive analytics, natural language interfaces, summarization, and Retrieval-Augmented Generation (RAG).
  • Lead prompt engineering, fine-tuning, model evaluation, and optimization (latency, cost, accuracy, bias/hallucination mitigation).
  • Integrate GenAI systems with existing backend systems, relational and NoSQL databases, microservices, APIs, cloud platforms (e.g. Azure, GCP), vector databases (e.g. Pinecone, FAISS), and embedding-generation pipelines.
  • Set up and manage MLOps pipelines / workflows: versioning, monitoring, deployment, continuous evaluation, and maintenance of models in production.
  • Design for scalability, performance, and reliability (handling inference workloads, batch vs. streaming processing, and caching or quantization where needed).
  • Collaborate with cross-functional teams (data engineering, software engineering, DevOps, domain experts in energy/operations) to deploy AI solutions that deliver business value.
  • Implement safety, governance, and ethical AI practices: model interpretability, fairness, compliance, and data privacy.
  • Document technical designs, model decisions, API contracts, etc. Mentor junior engineers or interns as needed.


Required Skills & Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or related field.
  • 3–7+ years of experience (depending on seniority) in AI/ML, with hands-on work on LLMs, transformer architectures, and related techniques.
  • Strong proficiency in Python and experience with relevant ML frameworks/libraries (e.g. PyTorch, TensorFlow, Hugging Face), along with data-manipulation skills in Pandas, NumPy, etc.
  • Experience with prompt engineering, RAG pipelines, embedding models, vector databases (FAISS, Pinecone, etc.).
  • Good knowledge of database systems, both relational (e.g. PostgreSQL, Microsoft SQL Server) and NoSQL (e.g. MongoDB).
  • Comfortable with cloud services (especially Azure) for model hosting, deployment, monitoring; familiarity with Docker, Kubernetes, container orchestration.
  • Proficiency with version control (Git), CI/CD pipelines, and MLOps tools (e.g. MLflow).
  • Strong problem-solving, analytical skills; ability to debug model behavior (e.g., hallucinations, bias) and optimize performance (latency, cost).
  • Excellent communication skills; ability to translate technical concepts to non-technical stakeholders; collaboration across disciplines.


Preferred Skills / Plus

  • Experience with parameter-efficient fine-tuning and model-compression techniques such as LoRA/QLoRA, quantization, and model distillation.
  • Experience with prompt flow or other prompt engineering frameworks and tools.
  • Exposure to real-world production deployments in the energy/clean-technology domain or other industrial-scale systems.
  • Experience using monitoring/logging/observability tools (e.g. Grafana, Kibana, telemetry pipelines) for model infrastructure.
  • Experience with Windows and Linux (Ubuntu/Red Hat) operating systems and the tooling for each.
  • Familiarity with workflows involving Anaconda, Jupyter Notebooks, distributed computing, etc.