Graco manufactures and markets premium equipment to move, measure, control, dispense and spray a wide variety of fluid and powder materials. What does that mean? Well, we pump peanut butter into your jar, and the oil in your car. We glue the soles of your shoes, the glass in your windows and the screen on your phone. We spray the finish on your vehicle, coatings on your pills, the paint on your house and texture on your walls. Graco is part of your daily life.
The Graco Intern Program offers more than just work experience: it's a chance to make an impact. As an intern, you'll take on projects that matter to the business, contribute to initiatives that drive progress, and develop skills that prepare you for what's ahead. Throughout the program, you'll expand your industry knowledge, collaborate with professionals who are passionate about doing things the right way, and experience a culture that thrives on new ideas and continual growth. You'll also take part in events designed to support both your learning and personal development. The program concludes with a final presentation where you'll showcase your achievements and the difference you've made.
The AI Engineer Intern position is responsible for assisting in the development, testing, and deployment of generative AI (GenAI) solutions in collaboration with product managers, software engineers, and cloud architects. The individual in this position will have foundational knowledge of Large Language Models (LLMs) and generative AI technologies, with hands-on experience in implementing AI-powered business applications. This role supports the technical implementation of GenAI solutions on cloud platforms and contributes to solving real-world business problems through innovative LLM-based applications.
What You Will Do:
Generative AI Development
Participate in the full GenAI development lifecycle including prompt engineering, model fine-tuning, testing, and deployment.
Assist in developing and maintaining scalable LLM-based applications and generative AI solutions.
Implement and optimize prompts for various business use cases including content generation, document processing, and intelligent automation.
Support integration of foundation models (GPT, Claude, LLaMA) and custom LLMs into cloud-based production systems.
Deploy and manage AI models on cloud platforms (AWS Bedrock, OCI AI Services) to deliver business solutions.
Document prompt strategies, model configurations, and deployment workflows to ensure reproducibility and adhere to quality standards.
Cross-Functional Partnership
Work with business stakeholders to understand requirements and design LLM-powered solutions for specific use cases.
Collaborate closely with cloud engineers, architects, and cross-functional teams to implement GenAI applications.
Research and experiment with the latest generative AI models, frameworks, and cloud AI services to improve business solutions.
Stay up to date with emerging LLM technologies, prompt engineering techniques, and cloud AI platform capabilities.
What You Will Bring:
Currently pursuing or recently completed a degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
Understanding of Large Language Models (LLMs), generative AI concepts, and foundation model architectures.
Proficiency in Python and familiarity with GenAI frameworks (LangChain, LlamaIndex, Transformers).
Experience with cloud platforms (AWS, OCI) and AI services (AWS Bedrock, OCI AI Services, Azure OpenAI).
Knowledge of prompt engineering techniques, model fine-tuning, and LLM optimization strategies.
Understanding of API integration, microservices architecture, and cloud-native development.
Strong problem-solving skills with a focus on business use case implementation.
Ability to work independently and collaboratively in a team environment.
Familiarity with version control systems (Git) and cloud development practices.
Accelerators:
Experience with advanced prompt engineering, RAG (Retrieval-Augmented Generation) implementations, and vector databases.
Knowledge of cloud infrastructure (AWS EC2, Lambda, OCI Compute) and serverless architectures for AI applications.
Familiarity with LLMOps tools (MLflow, Weights & Biases) and model monitoring in cloud environments.
Experience with containerization (Docker, Kubernetes) for deploying GenAI applications at scale.
Understanding of AI governance, model safety, and responsible AI practices.
Experience working in an Agile environment with cloud-first development approaches.
Previous projects involving business process automation using generative AI.
At Graco, you truly make a difference. Your unique talents contribute to our organizational growth and future. Not only do you make a difference, but Graco's culture empowers employees to create their own career path. Whether you choose to advance within your current department or explore new opportunities in different divisions, you have the ability to build your future. Our managers are here to provide support and guidance as you continue to grow within your career.
Graco has excellent opportunities available to individuals who want to be part of a fast-moving, growing company that is committed to quality, innovation and solving fluid handling problems for our customers. Graco is proud to be named a Best Place to Work by Fortune Magazine in 2016, 2018, 2019, 2021 & 2022. Graco offers attractive compensation, benefits and career development opportunities. Graco's comprehensive benefits include medical, dental, stock purchase plan, 401(k), tuition reimbursement and more.
Our company uses E-Verify to confirm the employment and eligibility of all newly hired employees. To learn more about E-Verify, including your rights and responsibilities, please visit www.dhs.gov/E-Verify.
The base pay range for this position is listed below, exclusive of fringe benefits or other compensation. If you are hired, your final base hourly rate will be determined based on factors such as geographic location, skills, competencies, education, and/or experience. In addition to those factors, we will also consider internal equity of our current employees. Please keep in mind that the range provided is the full base salary range for the role. Hiring at or near the maximum of the range is not typical, in order to allow room for future and continued salary growth.
$21.00 - $28.00 per hour