- Participate in building, testing, and maintaining efficient data pipelines to support machine learning workflows. This includes data ingestion, preprocessing, and transformation, as well as ensuring data quality and consistency across environments
- Contribute to the creation of intelligent agents capable of autonomous decision-making. Assist in integrating these agents into larger systems or applications, ensuring robust performance and scalability
- Work on exploratory or research-oriented tasks involving the training, fine-tuning, and evaluation of machine learning models using standard frameworks. Analyze model performance and iterate based on insights and metrics
- Enhance team collaboration by participating in Agile development processes and building an understanding of professional teamwork dynamics
- Proficiency in Python syntax and collections, pip, and the basics of Object-Oriented Programming (OOP)
- Good understanding of basic linear algebra (vector and matrix operations), derivatives, and function plots
- Knowledge of ML fundamentals: overfitting, gradient descent, classification, regression, evaluation metrics, and common types of ML models (see the first sketch after this list for the rough level expected)
- Basic familiarity with Pandas for data manipulation and analysis (a short Pandas sketch also follows the list)
- Knowledge of SQL basics and database schema design
- Experience training models with PyTorch or other frameworks, as well as an understanding of NLP and prompt-engineering fundamentals, would be a plus (a minimal PyTorch training-loop sketch appears after this list)
- Experience developing web services and working with Bash and Linux is desirable
- English proficiency at B2 level or higher
- Capacity to study for 6 hours daily
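
As a rough illustration of the level implied by the math and ML-fundamentals bullets above, here is a self-contained sketch of gradient descent for linear regression on synthetic data; the data, learning rate, and iteration count are made up for illustration and are not part of the program material.

```python
import numpy as np

# Toy linear regression trained with batch gradient descent on synthetic data.
# Illustrates vector/matrix operations, derivatives (the MSE gradient), and a metric.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # 200 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)     # linear target with noise

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    preds = X @ w
    grad = 2 * X.T @ (preds - y) / len(y)       # derivative of MSE with respect to w
    w -= lr * grad

mse = np.mean((X @ w - y) ** 2)                 # evaluation metric
print("learned weights:", w.round(2), "MSE:", round(mse, 4))
```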
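In the same spirit, the Pandas bullet implies comfort with everyday data manipulation along these lines; the table below is a minimal made-up example.

```python
import pandas as pd

# Basic data manipulation: build a frame, clean it, and aggregate.
df = pd.DataFrame({
    "city": ["Warsaw", "Vilnius", "Warsaw", "Riga", None],
    "amount": [120.0, 80.5, 95.0, None, 60.0],
})
df = df.dropna(subset=["city"])                           # drop rows with a missing key
df["amount"] = df["amount"].fillna(df["amount"].mean())   # impute missing values
summary = df.groupby("city")["amount"].agg(["count", "mean"])
print(summary)
```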
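For the optional PyTorch experience, a minimal training loop on synthetic data gives a sense of what "training a model" means at this level; this is an illustrative sketch, not project code.

```python
import torch
from torch import nn

# Minimal PyTorch training loop: a single linear layer trained as a binary classifier.
torch.manual_seed(0)
X = torch.randn(256, 4)
y = (X[:, 0] + X[:, 1] > 0).float().unsqueeze(1)   # synthetic binary labels

model = nn.Sequential(nn.Linear(4, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = ((model(X) > 0).float() == y).float().mean().item()
print(f"final loss {loss.item():.3f}, train accuracy {acc:.2f}")
```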
At Vention, we assemble senior-level, dedicated teams of developers to help fast-growing startups and innovative enterprises drive impact and achieve their goals. We’ve delivered solutions across multiple domains, including FinTech, PropTech, AdTech, HealthTech, e-commerce, and more.
Our Data team works with clients to create data platforms from scratch or to modify and update existing ones. The tech stack depends on the project, but we mainly use Spark (with Scala, Python, or Java), along with Apache Kafka, Apache Cassandra, Apache Hadoop, Apache Parquet, and AWS.
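For a flavor of that stack, a minimal PySpark job might look like the sketch below; the S3 paths and column names are hypothetical, and real project pipelines are considerably more involved.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal PySpark job: read a Parquet dataset, aggregate it, and write the result back.
spark = SparkSession.builder.appName("events-rollup").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")   # hypothetical input path
daily = (
    events
    .groupBy("event_date", "event_type")                     # hypothetical columns
    .agg(F.count("*").alias("events"))
)
daily.write.mode("overwrite").parquet("s3://example-bucket/rollups/daily/")
spark.stop()
```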
Internal knowledge transfer takes place within the Data Engineering Family (which brings together our data practice and data competency), a space for all of our specialists to share experience, learn new skills, host meetups, mentor others, and more.
- Enjoy personalized learning in small groups of 3-15, or opt for a one-on-one experience
- Our dynamic curriculum offers a mix of hands-on practice and essential theory, tailored for groups or adjusted to fit individual needs
- Give yourself at least three months to dive deep into the material in a group, or choose an individual internship length that aligns with international standards
- Discover the industry inside out. This internship provides insights into the IT world, giving you a leg up in your future career
- Receive guidance and support from an experienced mentor throughout your internship journey
- Beyond learning, there's a chance for employment. Successful interns might land a full-time job with us after the program
- Dive into real-world projects! Get hands-on experience with genuine IT challenges and see firsthand the solutions in action
Engineer your success!